Linear Algebra and Optimization for Machine Learning
A Textbook
Charu C. Aggarwal
This Springer imprint is published by the registered company Springer Nature Switzerland AG.
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
To my wife Lata, my daughter Sayani,
and all my mathematics teachers
Preface
“Mathematics is the language with which God wrote the universe.”– Galileo
A frequent challenge faced by beginners in machine learning is the extensive background
required in linear algebra and optimization. One problem is that the existing linear algebra
and optimization courses are not specific to machine learning; therefore, one would typically
have to complete more course material than is necessary to pick up machine learning.
Furthermore, certain types of ideas and tricks from optimization and linear algebra recur
more frequently in machine learning than other application-centric settings. Therefore, there
is significant value in developing a view of linear algebra and optimization that is better
suited to the specific perspective of machine learning.
It is common for machine learning practitioners to pick up missing bits and pieces of lin-
ear algebra and optimization via “osmosis” while studying the solutions to machine learning
applications. However, this type of unsystematic approach is unsatisfying, because the pri-
mary focus on machine learning gets in the way of learning linear algebra and optimization
in a generalizable way across new situations and applications. Therefore, we have inverted
the focus in this book, with linear algebra and optimization as the primary topics of interest
and solutions to machine learning problems as the applications of this machinery. In other
words, the book goes out of its way to teach linear algebra and optimization with machine
learning examples. By using this approach, the book focuses on those aspects of linear al-
gebra and optimization that are more relevant to machine learning and also teaches the
reader how to apply them in the machine learning context. As a side benefit, the reader
will pick up knowledge of several fundamental problems in machine learning. At the end
of the process, the reader will become familiar with many of the basic linear-algebra- and
optimization-centric algorithms in machine learning. Although the book is not intended to
provide exhaustive coverage of machine learning, it serves as a “technical starter” for the key
models and optimization methods in machine learning. Even for seasoned practitioners of
machine learning, a systematic introduction to fundamental linear algebra and optimization
methodologies can be useful in terms of providing a fresh perspective.
The chapters of the book are organized as follows:
1. Linear algebra and its applications: The chapters focus on the basics of linear al-
gebra together with their common applications to singular value decomposition, ma-
trix factorization, similarity matrices (kernel methods), and graph analysis. Numerous
machine learning applications have been used as examples, such as spectral clustering,
kernel-based classification, and outlier detection. The tight integration of linear alge-
bra methods with examples from machine learning differentiates this book from generic
volumes on linear algebra. The focus is clearly on the most relevant aspects of linear
algebra for machine learning and to teach readers how to apply these concepts.
This book contains exercises both within the text of the chapter and at the end of the
chapter. The exercises within the text of the chapter should be solved as one reads the
chapter in order to solidify the concepts. This will lead to slower progress, but a better
understanding. For in-chapter exercises, hints for the solution are given in order to help the
reader along. The exercises at the end of the chapter are intended to be solved as refreshers
after completing the chapter.
Throughout this book, a vector or a multidimensional data point is annotated with a bar,
such as X or y. A vector or multidimensional point may be denoted by either small letters
or capital letters, as long as it has a bar. Vector dot products are denoted by centered dots,
such as X · Y . A matrix is denoted in capital letters without a bar, such as R. Throughout
the book, the n × d matrix corresponding to the entire training data set is denoted by
D, with n data points and d dimensions. The individual data points in D are therefore
d-dimensional row vectors and are often denoted by X 1 . . . X n . Conversely, vectors with
one component for each data point are usually n-dimensional column vectors. An example
is the n-dimensional column vector y of class variables of n data points. An observed value
yi is distinguished from a predicted value ŷi by a circumflex at the top of the variable.
I would like to thank my family for their love and support during the busy time spent in
writing this book. Knowledge of the very basics of optimization (e.g., calculus) and linear
algebra (e.g., vectors and matrices) starts in high school and increases over the course of
many years of undergraduate/graduate education as well as during the postgraduate years
of research. As such, I feel indebted to a large number of teachers and collaborators over
the years. This section is, therefore, a rather incomplete attempt to express my gratitude.
My initial exposure to vectors, matrices, and optimization (calculus) occurred during my
high school years, where I was ably taught these subjects by S. Adhikari and P. C. Pathrose.
Indeed, my love of mathematics started during those years, and I feel indebted to both these
individuals for instilling the love of these subjects in me. During my undergraduate study
in computer science at IIT Kanpur, I was taught several aspects of linear algebra and
optimization by Dr. R. Ahuja, Dr. B. Bhatia, and Dr. S. Gupta. Even though linear algebra
and mathematical optimization are distinct (but interrelated) subjects, Dr. Gupta’s teaching
style often provided an integrated view of these topics. I was able to fully appreciate the value
of such an integrated view when working in machine learning. For example, one can approach
many problems such as solving systems of equations or singular value decomposition either
from a linear algebra viewpoint or from an optimization viewpoint, and both perspectives
provide complementary views in different machine learning applications. Dr. Gupta’s courses
on linear algebra and mathematical optimization had a profound influence on me in choosing
mathematical optimization as my field of study during my PhD years; this choice was
relatively unusual for undergraduate computer science majors at that time. Finally, I had
the good fortune to learn about linear and nonlinear optimization methods from several
luminaries on these subjects during my graduate years at MIT. In particular, I feel indebted
to my PhD thesis advisor James B. Orlin for his guidance during my early years. In addition,
Nagui Halim has provided a lot of support for all my book-writing projects over the course
of a decade and deserves a lot of credit for my work in this respect. My manager, Horst
Samulowitz, has supported my work over the past year, and I would like to thank him for
his help.
I also learned a lot from my collaborators in machine learning over the years. One
often appreciates the true usefulness of linear algebra and optimization only in an applied
setting, and I had the good fortune of working with many researchers from different areas
on a wide range of machine learning problems. Much of the emphasis in this book on specific
aspects of linear algebra and optimization is derived from these invaluable experiences and
collaborations. In particular, I would like to thank Tarek F. Abdelzaher, Jinghui Chen, Jing
Gao, Quanquan Gu, Manish Gupta, Jiawei Han, Alexander Hinneburg, Thomas Huang,
Nan Li, Huan Liu, Ruoming Jin, Daniel Keim, Arijit Khan, Latifur Khan, Mohammad
M. Masud, Jian Pei, Magda Procopiuc, Guojun Qi, Chandan Reddy, Saket Sathe, Jaideep
Srivastava, Karthik Subbian, Yizhou Sun, Jiliang Tang, Min-Hsuan Tsai, Haixun Wang,
Jianyong Wang, Min Wang, Suhang Wang, Wei Wang, Joel Wolf, Xifeng Yan, Wenchao Yu,
Mohammed Zaki, ChengXiang Zhai, and Peixiang Zhao.
Several individuals have also reviewed the book. Quanquan Gu provided suggestions
on Chapter 6. Jiliang Tang and Xiaorui Liu examined several portions of Chapter 6 and
pointed out corrections and improvements. Shuiwang Ji contributed Problem 7.2.3. Jie Wang
reviewed several chapters of the book and pointed out corrections. Hao Liu also provided
several suggestions.
Last but not least, I would like to thank my daughter Sayani for encouraging me to
write this book at a time when I had decided to hang up my boots on the issue of book
writing. She encouraged me to write this one. I would also like to thank my wife for fixing
some of the figures in this book.
Author Biography
Chapter 1
Linear Algebra and Optimization: An Introduction
“No matter what engineering field you’re in, you learn the same basic science
and mathematics. And then maybe you learn a little bit about how to apply
it.”–Noam Chomsky
1.1 Introduction
Machine learning builds mathematical models from data containing multiple attributes (i.e.,
variables) in order to predict some variables from others. For example, in a cancer predic-
tion application, each data point might contain the variables obtained from running clinical
tests, whereas the predicted variable might be a binary diagnosis of cancer. Such models are
sometimes expressed as linear and nonlinear relationships between variables. These relation-
ships are discovered in a data-driven manner by optimizing (maximizing) the “agreement”
between the models and the observed data. This is an optimization problem.
Linear algebra is the study of linear operations in vector spaces. An example of a vector
space is the infinite set of all possible Cartesian coordinates in two dimensions in relation to
a fixed point referred to as the origin, and each vector (i.e., a 2-dimensional coordinate) can
be viewed as a member of this set. This abstraction fits in nicely with the way data is rep-
resented in machine learning as points with multiple dimensions, albeit with dimensionality
that is usually greater than 2. These dimensions are also referred to as attributes in machine
learning parlance. For example, each patient in a medical application might be represented
by a vector containing many attributes, such as age, blood sugar level, inflammatory mark-
ers, and so on. It is common to apply linear functions to these high-dimensional vectors in
many application domains in order to extract their analytical properties. The study of such
linear transformations lies at the heart of linear algebra.
While it is easy to visualize the spatial geometry of points/operations in 2 or 3 dimen-
sions, it becomes harder to do so in higher dimensions. For example, it is simple to visualize
1. Scalars: Scalars are individual numerical values that are typically drawn from the real
domain in most machine learning applications. For example, the value of an attribute
such as Age in a machine learning application is a scalar.
2. Vectors: Vectors are arrays of numerical values (i.e., arrays of scalars). Each such
numerical value is also referred to as a coordinate. The individual numerical values of
the arrays are referred to as entries, components, or dimensions of the vector, and the
number of components is referred to as the vector dimensionality. In machine learning,
a vector might contain components (associated with a data point) corresponding to
numerical values like Age, Salary, and so on. A 3-dimensional vector representation
of a 25-year-old person making 30 dollars an hour, and having 5 years of experience
might be written as the array of numbers [25, 30, 5].
3. Matrices: A matrix is a rectangular array of numerical values. For an n × d matrix A, the convention is that the (i, j)th element of A is denoted by aij . Furthermore, defining A = [aij ]n×d
refers to the fact that the size of A is n × d. When a matrix has the same number of
rows as columns, it is referred to as a square matrix. Otherwise, it is referred to as a
rectangular matrix. A rectangular matrix with more rows than columns is referred to
as tall, whereas a matrix with more columns than rows is referred to as wide or fat.
It is possible for scalars, vectors, and matrices to contain complex numbers. This book will
occasionally discuss complex-valued vectors when they are relevant to machine learning.
Vectors are special cases of matrices, and scalars are special cases of both vectors and
matrices. For example, a scalar is sometimes viewed as a 1 × 1 “matrix.” Similarly, a d-
dimensional vector can be viewed as a 1 × d matrix when it is treated as a row vector. It
can also be treated as a d × 1 matrix when it is a column vector. The addition of the word
“row” or “column” to the vector definition is indicative of whether that vector is naturally
a row of a larger matrix or whether it is a column of a larger matrix. By default, vectors
are assumed to be column vectors in linear algebra, unless otherwise specified. We always
use an overbar on a variable to indicate that it is a vector, although we do not do so for
matrices or scalars. For example, the row vector [y1 , . . . , yd ] of d values can be denoted by y
or Y . In this book, scalars are always represented by lower-case variables like a or δ, whereas
matrices are always represented by upper-case variables like A or Δ.
In the sciences, a vector is often geometrically visualized as a quantity, such as the ve-
locity, that has a magnitude as well as a direction. Such vectors are referred to as geometric
vectors. For example, imagine a situation where the positive direction of the X-axis cor-
responds to the eastern direction, and the positive direction of the Y -axis corresponds to
the northern direction. Then, a person that is simultaneously moving at 4 meters/second
in the eastern direction and at 3 meters/second in the northern direction is really moving
in the north-eastern direction in a straight line at √(4² + 3²) = 5 meters/second (based on
the Pythagorean theorem). This is also the length of the vector. The vector of the velocity
of this person can be written as a directed line from the origin to [4, 3]. This vector is
shown in Figure 1.1(a). In this case, the tail of the vector is at the origin, and the head of
the vector is at [4, 3]. Geometric vectors in the sciences are allowed to have arbitrary tails.
For example, we have shown another example of the same vector [4, 3] in Figure 1.1(a) in
which the tail is placed at [1, 4] and the head is placed at [5, 7]. In contrast to geometric
vectors, only vectors that have tails at the origin are considered in linear algebra (although
the mathematical results, principles, and intuition remain the same). This does not lead to
any loss of expressivity. All vectors, operations, and spaces in linear algebra use the origin
as an important reference point.
[Figure 1.1: (a) The vector [4, 3] with tail at the origin, and the same vector with tail at [1, 4] and head at [5, 7]; (b) the vector addition [4, 3] + [1, 4] = [5, 7] as the diagonal of a parallelogram; (c) the unit vector [4/5, 3/5] obtained by normalizing [4, 3]]
Vector addition is commutative (like scalar addition) because x + y = y + x. When two vec-
tors, x and y, are added, the origin, x, y, and x + y represent the vertices of a parallelogram.
For example, consider the vectors A = [4, 3] and B = [1, 4]. The sum of these two vectors
is A + B = [5, 7]. The addition of these two vectors is shown in Figure 1.1(b). It is easy to
show that the four points [0, 0], [4, 3], [1, 4], and [5, 7] form a parallelogram in 2-dimensional
space, and the addition of the vectors is one of the diagonals of the parallelogram. The
other diagonal can be shown to be parallel to either A − B or B − A, depending on the
direction of the vector. Note that vector addition and subtraction follow the same rules
in linear algebra as for geometric vectors, except that the tails of the vectors are always
origin rooted. For example, the vector (A − B) should no longer be drawn as a diagonal of
the parallelogram, but as an origin-rooted vector with the same direction as the diagonal.
Nevertheless, the diagonal abstraction still helps in the computation of (A − B). One way of
visualizing vector addition (in terms of the velocity abstraction) is that if a platform moves
on the ground with velocity [1, 4], and if the person walks on the platform (relative to it)
with velocity [4, 3], then the overall velocity of the person relative to the ground is [5, 7].
It is possible to multiply a vector with a scalar by multiplying each component of the
vector with the scalar. Consider a vector x = [x1 , . . . , xd ], which is scaled by a factor of a to create the vector x′:

$$x' = ax = [ax_1, \ldots, ax_d]$$
For example, if the vector x contains the number of units sold of each product, then one
can use a = 10−6 to convert units sold into number of millions of units sold. The scalar
multiplication operation simply scales the length of the vector, but does not change its
direction (i.e., relative values of different components). The notion of “length” is defined
more formally in terms of the norm of the vector, which is discussed below.
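As a concrete sketch in plain Python (the helper names `add` and `scale` are ours, not from the text; vectors are represented as plain lists):

```python
# Component-wise vector addition and scalar multiplication on plain Python lists.

def add(x, y):
    """Component-wise sum of two vectors of the same dimensionality."""
    return [xi + yi for xi, yi in zip(x, y)]

def scale(a, x):
    """Multiply every component of x by the scalar a; the direction is unchanged."""
    return [a * xi for xi in x]

A = [4, 3]
B = [1, 4]
print(add(A, B))    # the vector addition from Figure 1.1(b): [5, 7]
print(scale(2, A))  # doubling the length of [4, 3]: [8, 6]
```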
Vectors can be multiplied with the notion of the dot product. The dot product between
two vectors, x = [x1 , . . . , xd ] and y = [y1 , . . . , yd ], is the sum of the element-wise multiplication
of their individual components. The dot product of x and y is denoted by x · y (with a dot
in the middle) and is formally defined as follows:
$$x \cdot y = \sum_{i=1}^{d} x_i y_i \quad (1.1)$$
Consider a case where we have x = [1, 2, 3] and y = [6, 5, 4]. In such a case, the dot product
of these two vectors can be computed as follows:

$$x \cdot y = (1)(6) + (2)(5) + (3)(4) = 28$$
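This computation is easy to check in a few lines of plain Python (the `dot` helper is a sketch of ours, not from the text):

```python
def dot(x, y):
    """Dot product of two equal-length vectors (Equation 1.1)."""
    assert len(x) == len(y), "dot products need vectors of the same dimensionality"
    return sum(xi * yi for xi, yi in zip(x, y))

print(dot([1, 2, 3], [6, 5, 4]))  # 1*6 + 2*5 + 3*4 = 28
```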
The dot product is a special case of a more general operation, referred to as the inner
product, and it preserves many fundamental rules of Euclidean geometry. The space of
vectors that includes a dot product operation is referred to as a Euclidean space. The dot
product is a commutative operation:
$$x \cdot y = \sum_{i=1}^{d} x_i y_i = \sum_{i=1}^{d} y_i x_i = y \cdot x$$
The dot product also inherits the distributive property of scalar multiplication:
x · (y + z) = x · y + x · z
The dot product of a vector, x = [x1 , . . . , xd ], with itself is referred to as its squared norm
or squared Euclidean norm. The norm defines the vector length and is denoted by ‖ · ‖:

$$\|x\|^2 = x \cdot x = \sum_{i=1}^{d} x_i^2$$
The norm of the vector is the Euclidean distance of its coordinates from the origin. In
the case of Figure 1.1(a), the norm of the vector [4, 3] is √(4² + 3²) = 5. Often, vectors are
normalized to unit length by dividing them with their norm:

$$x' = \frac{x}{\|x\|} = \frac{x}{\sqrt{x \cdot x}}$$
Scaling a vector by its norm does not change the relative values of its components, which
define the direction of the vector. For example, the Euclidean distance of [4, 3] from the
origin is 5. Dividing each component of the vector by 5 results in the vector [4/5, 3/5],
which changes the length of the vector to 1, but not its direction. This shortened vector is
shown in Figure 1.1(c), and it overlaps with the vector [4, 3]. The resulting vector is referred
to as a unit vector.
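A small sketch of the norm and the normalization step, using the running example (the helper names are ours):

```python
import math

def norm(x):
    """Euclidean norm: the square root of the dot product of x with itself."""
    return math.sqrt(sum(xi * xi for xi in x))

def normalize(x):
    """Divide x by its norm to obtain a unit vector with the same direction."""
    n = norm(x)
    return [xi / n for xi in x]

print(norm([4, 3]))       # 5.0
print(normalize([4, 3]))  # the unit vector [0.8, 0.6] from Figure 1.1(c)
```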
A generalization of the Euclidean norm is the Lp -norm, which is denoted by ‖ · ‖p :

$$\|x\|_p = \left( \sum_{i=1}^{d} |x_i|^p \right)^{1/p} \quad (1.3)$$
Here, | · | indicates the absolute value of a scalar, and p is a positive integer. For example,
when p is set to 1, the resulting norm is referred to as the Manhattan norm or the L1 -norm.
The squared Euclidean distance between x and y can be expressed using the dot product of (x − y) with itself:

$$\|x - y\|^2 = (x - y) \cdot (x - y) = \sum_{i=1}^{d} (x_i - y_i)^2 = \text{Euclidean}(x, y)^2$$
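The Lp -norm of Equation 1.3 and the squared Euclidean distance can be sketched as follows (the helper names are assumptions of ours):

```python
def lp_norm(x, p):
    """Lp-norm (Equation 1.3): p = 1 is the Manhattan norm, p = 2 the Euclidean norm."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

def squared_distance(x, y):
    """Squared Euclidean distance as the dot product of (x - y) with itself."""
    diff = [xi - yi for xi, yi in zip(x, y)]
    return sum(d * d for d in diff)

print(lp_norm([4, -3], 1))  # Manhattan norm: 7.0
print(lp_norm([4, -3], 2))  # Euclidean norm: 5.0
print(squared_distance([4, 3], [1, 4]))  # (4-1)^2 + (3-4)^2 = 10
```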
[Figure 1.2: Two vectors of lengths 2 and 1, making counter-clockwise angles of 60◦ and −15◦ with the X-axis; their Cartesian coordinates are [1, √3] ≈ [1.0, 1.732] and [0.966, −0.259]]
Dot products satisfy the Cauchy-Schwarz inequality, according to which the dot product
between a pair of vectors is bounded above by the product of their lengths:
$$\left| \sum_{i=1}^{d} x_i y_i \right| = |x \cdot y| \leq \|x\| \, \|y\| \quad (1.4)$$
The Cauchy-Schwarz inequality can be proven by first showing that |x · y| ≤ 1 when x and y
are unit vectors (i.e., the result holds when the arguments are unit vectors). This is because
both ‖x − y‖² = 2 − 2 x · y and ‖x + y‖² = 2 + 2 x · y are nonnegative. This is possible only when
|x · y| ≤ 1. One can then generalize this result to arbitrary length vectors by observing that
the dot product scales up linearly with the norms of the underlying arguments. Therefore,
one can scale up both sides of the inequality with the norms of the vectors.
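A numerical spot check of the Cauchy-Schwarz inequality (Equation 1.4) on random vectors; this is a sanity check of ours, not a proof:

```python
import math
import random

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

random.seed(0)
violations = 0
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = [random.uniform(-1, 1) for _ in range(5)]
    # |x . y| should never exceed ||x|| * ||y|| (up to floating-point slack).
    if abs(dot(x, y)) > norm(x) * norm(y) + 1e-12:
        violations += 1
print(violations)  # 0
```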
Problem 1.2.1 (Triangle Inequality) Consider the triangle formed by the origin, x, and
y. Use the Cauchy-Schwarz inequality to show that the side length ‖x − y‖ is no greater than
the sum ‖x‖ + ‖y‖ of the other two sides.
A hint for solving the above problem is that both sides of the triangle inequality are non-
negative. Therefore, the inequality is true if and only if it holds after squaring both sides.
The Cauchy-Schwarz inequality shows that the dot product between a pair of vectors is
no greater than the product of vector lengths. In fact, the ratio between these two quantities
is the cosine of the angle between the two vectors (which is always less than 1). For example,
one often represents the coordinates of a 2-dimensional vector in polar form as [a, θ], where
a is the length of the vector, and θ is the counter-clockwise angle the vector makes with
the X-axis. The Cartesian coordinates are [a cos(θ), a sin(θ)], and the dot product of this
Cartesian coordinate vector with [1, 0] (the X-axis) is a cos(θ). As another example, consider
two vectors with lengths 2 and 1, respectively, which make (counter-clockwise) angles of
60◦ and −15◦ with respect to the X-axis in a 2-dimensional setting. These vectors are
shown in Figure 1.2. The coordinates of these vectors are [2 cos(60◦ ), 2 sin(60◦ )] = [1, √3]
and [cos(−15◦ ), sin(−15◦ )] = [0.966, −0.259].
The cosine function between two vectors x = [x1 , . . . , xd ] and y = [y1 , . . . , yd ] is algebraically
defined by the dot product between the two vectors after scaling them to unit norm:
$$\cos(x, y) = \frac{x \cdot y}{\sqrt{x \cdot x} \, \sqrt{y \cdot y}} = \frac{x \cdot y}{\|x\| \, \|y\|} \quad (1.5)$$
The algebraically computed cosine function over x and y has the normal trigonometric
interpretation of being equal to cos(θ), where θ is the angle between the vectors x and y.
For example, the two vectors A and B in Figure 1.2 are at an angle of 75◦ to each other,
and have norms of 1 and 2, respectively. Then, the algebraically computed cosine function
over the pair [A, B] is equal to the expected trigonometric value of cos(75):
$$\cos(A, B) = \frac{0.966 \times 1 - 0.259 \times \sqrt{3}}{1 \times 2} \approx 0.259 \approx \cos(75^\circ)$$
In order to understand why the algebraic dot product between two vectors yields the trigono-
metric cosine value, one can use the cosine law from Euclidean geometry. Consider the tri-
angle created by the origin, x = [x1 , . . . , xd ] and y = [y1 , . . . , yd ]. We want to find the angle
θ between x and y. The Euclidean side lengths of this triangle are a = ‖x‖, b = ‖y‖, and
c = ‖x − y‖. The cosine law provides a formula for the angle θ in terms of side lengths as
follows:

$$\cos(\theta) = \frac{a^2 + b^2 - c^2}{2ab} = \frac{\|x\|^2 + \|y\|^2 - \|x - y\|^2}{2 \, \|x\| \, \|y\|} = \frac{x \cdot y}{\sqrt{x \cdot x} \, \sqrt{y \cdot y}}$$
The second relationship is obtained by expanding x − y2 as (x − y) · (x − y) and then using
the distributive property of dot products. Almost all the wonderful geometric properties of
Euclidean spaces can be algebraically traced back to this simple relationship between the
dot product and the trigonometric cosine. The simple algebra of the dot product operation
hides a lot of complex Euclidean geometry. The exercises at the end of this chapter show that
many basic geometric and trigonometric identities can be proven very easily with algebraic
manipulation of dot products.
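The agreement between the algebraic cosine of Equation 1.5 and the trigonometric one can be checked on the two vectors of Figure 1.2 (the helper names are ours):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def cosine(x, y):
    """Equation 1.5: the dot product after scaling both vectors to unit norm."""
    return dot(x, y) / (math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))

# Vectors of lengths 1 and 2 at -15 and 60 degrees to the X-axis (Figure 1.2).
a = [math.cos(math.radians(-15)), math.sin(math.radians(-15))]
b = [2 * math.cos(math.radians(60)), 2 * math.sin(math.radians(60))]

print(cosine(a, b))                # the algebraic value
print(math.cos(math.radians(75)))  # the trigonometric value: the angle between them is 75 degrees
```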
A pair of vectors is orthogonal if their dot product is 0, and the angle between them is
90◦ (for non-zero vectors). The vector 0 is considered orthogonal to every vector. A set of
vectors is orthonormal if each pair in the set is mutually orthogonal and the norm of each
vector is 1. Orthonormal directions are useful because they are employed for transforma-
tions of points across different orthogonal coordinate systems with the use of 1-dimensional
projections. In other words, a new set of coordinates of a data point can be computed
with respect to the changed set of directions. This approach is referred to as coordinate
transformation in analytical geometry, and is also used frequently in linear algebra. The
1-dimensional projection operation of a vector x on a unit vector is defined as the dot product
between the two vectors. It has a natural geometric interpretation as the (positive or
negative) distance of x from the origin in the direction of the unit vector, and therefore it
is considered a coordinate in that direction. Consider the point [10, 15] in a 2-dimensional
coordinate system. Now imagine that you were given the orthonormal directions [3/5, 4/5]
and [−4/5, 3/5]. One can represent the point [10, 15] in a new coordinate system defined by
the directions [3/5, 4/5] and [−4/5, 3/5] by computing the dot product of [10, 15] with each
of these vectors. Therefore, the new coordinates [x , y ] are defined as follows:
One can express the original vector using the new axes and coordinates as follows:
These types of transformations of vectors to new representations lie at the heart of linear
algebra. In many cases, transformed representations of data sets (e.g., replacing each [x, y]
in a 2-dimensional data set with [x′ , y′ ]) have useful properties, which are exploited by
machine learning applications.
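The coordinate transformation described above can be sketched as follows; the new coordinates are the dot products with the orthonormal directions, and the point is recovered as a weighted sum of those directions:

```python
def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

point = [10, 15]
u = [3 / 5, 4 / 5]    # first orthonormal direction
v = [-4 / 5, 3 / 5]   # second orthonormal direction (perpendicular to u)

# New coordinates: 1-dimensional projections of the point on each direction.
xp, yp = dot(point, u), dot(point, v)
print([xp, yp])  # close to [18, 1]

# Reconstruction: the original point as a weighted sum of the two directions.
recovered = [xp * ui + yp * vi for ui, vi in zip(u, v)]
print(recovered)  # recovers [10, 15] up to rounding
```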
The transpose A^T of an n × d matrix A is the d × n matrix whose (j, i)th entry is the (i, j)th entry of A. It is easy to see that the transpose of the transpose, (A^T )^T , of a matrix A is the original
matrix A. Like matrices, row vectors can be transposed to column vectors, and vice versa.
Like vectors, matrices can be added only if they have exactly the same sizes. For example,
one can add the matrices A and B only if A and B have exactly the same number of rows and
columns. The (i, j)th entry of A+B is the sum of the (i, j)th entries of A and B, respectively.
The matrix addition operator is commutative, because it inherits the commutative property
of scalar addition of its individual entries. Therefore, we have:
A + B = B + A
A zero matrix or null matrix is the matrix analog of the scalar value of 0, and it contains
only 0s. It is often simply written as “0” even though it is a matrix. It can be added to a
matrix of the same size without affecting its values:
A+0=A
Note that matrices, vectors, and scalars all have their own definition of a zero element,
which is required to obey the above additive identity. For vectors, the zero element is the
vector of 0s, and it is written as “0” with an overbar on top.
It is easy to show that the transpose of the sum of two matrices A = [aij ] and B = [bij ]
is given by the sum of their transposes. In other words, we have the following relationship:
$$(A + B)^T = A^T + B^T \quad (1.6)$$
The result can be proven by demonstrating that the (i, j)th element of both sides of the
above equation is (aji + bji ).
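Equation 1.6 can be verified numerically with a small sketch (matrices as lists of row lists; the helper names are ours):

```python
def transpose(A):
    """Swap the roles of rows and columns of a matrix."""
    return [list(col) for col in zip(*A)]

def mat_add(A, B):
    """Entry-wise sum of two matrices of identical size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1, 2], [3, 4], [5, 6]]
B = [[6, 5], [4, 3], [2, 1]]

# The transpose of the sum equals the sum of the transposes (Equation 1.6).
print(transpose(mat_add(A, B)) == mat_add(transpose(A), transpose(B)))  # True
print(transpose(transpose(A)) == A)  # transposing twice recovers A
```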
An n × d matrix A can either be multiplied with a d-dimensional column vector x as
Ax, or it can be multiplied with an n-dimensional row vector y as yA. When an n × d
matrix A is multiplied with d-dimensional column vector x to create Ax, an element-wise
multiplication is performed between the d elements of each row of the matrix A and the d
elements of the column vector x, and then these element-wise products are added to create
a scalar. Note that this operation is the same as the dot product, except that one needs to
transpose the rows of A to column vectors to rigorously express it as a dot product. This
is because dot products are defined between two vectors of the same type (i.e., row vectors
or column vectors). At the end of the process, n scalars are computed and arranged into an
n-dimensional column vector in which the ith element is the product between the ith row
of A and x. An example of a multiplication of a 3 × 2 matrix A = [aij ] with a 2-dimensional
column vector x = [x1 , x2 ]^T is shown below:

$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a_{11} x_1 + a_{12} x_2 \\ a_{21} x_1 + a_{22} x_2 \\ a_{31} x_1 + a_{32} x_2 \end{bmatrix} \quad (1.7)$$
One can also post-multiply an n-dimensional row vector with an n × d matrix A = [aij ] to
create a d-dimensional row vector. An example of the multiplication of a 3-dimensional row
vector v = [v1 , v2 , v3 ] with the 3 × 2 matrix A is shown below:
$$[v_1 , v_2 , v_3 ] \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} = [v_1 a_{11} + v_2 a_{21} + v_3 a_{31} , \; v_1 a_{12} + v_2 a_{22} + v_3 a_{32} ] \quad (1.8)$$
It is clear that the multiplication operation between matrices and vectors is not commuta-
tive.
The multiplication of an n × d matrix A with a d-dimensional column vector x to create
an n-dimensional column vector Ax is often interpreted as a linear transformation from
d-dimensional space to n-dimensional space. The precise mathematical definition of a linear
transformation is given in Chapter 2. For now, we ask the reader to observe that the result
of the multiplication is a weighted sum of the columns of the matrix A, where the weights
are provided by the scalar components of vector x. For example, one can rewrite the matrix-
vector multiplication of Equation 1.7 as follows:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ a_{31} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ a_{32} \end{bmatrix} \quad (1.9)$$
Here, a 2-dimensional vector is mapped into a 3-dimensional vector as a weighted combina-
tion of the columns of the matrix. Therefore, the n × d matrix A is occasionally represented
in terms of its ordered set of n-dimensional columns a1 . . . ad as A = [a1 . . . ad ]. This results
in the following form of matrix-vector multiplication using the columns of A and a column
vector x = [x1 . . . xd ]T of coefficients:
$$Ax = \sum_{i=1}^{d} x_i a_i = b$$
Each xi corresponds to the “weight” of the ith direction ai , which is also referred to as the
ith coordinate of b using the (possibly non-orthogonal) directions contained in the columns
of A. This notion is a generalization of the (orthogonal) Cartesian coordinates defined by
d-dimensional vectors e1 . . . ed , where each ei is an axis direction with a single 1 in the ith
position and remaining 0s. For the case of the Cartesian system defined by e1 . . . ed , the
coordinates of b = [b1 . . . bd ]^T are simply b1 . . . bd , since we have b = Σ_{i=1}^{d} b_i e_i .
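Both views of matrix-vector multiplication — row-wise dot products and a weighted sum of columns — can be sketched and compared (the helper names are ours):

```python
def matvec_rows(A, x):
    """Ax via rows: the ith entry pairs the ith row of A with x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_columns(A, x):
    """Ax via columns: a weighted sum of the columns of A, with weights from x."""
    n, d = len(A), len(A[0])
    b = [0] * n
    for j in range(d):              # for each column direction a_j ...
        for i in range(n):
            b[i] += x[j] * A[i][j]  # ... add x_j times that column
    return b

A = [[1, 2], [3, 4], [5, 6]]  # a 3 x 2 matrix, as in Equation 1.7
x = [10, 100]
print(matvec_rows(A, x))     # [210, 430, 650]
print(matvec_columns(A, x))  # the same vector, computed column by column
```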
The dot product between two vectors can be viewed as a special case of matrix-vector
multiplication. In such a case, a 1 × d matrix (row vector) is multiplied with a d × 1 matrix
(column vector), and the result is the same as one would obtain by performing a dot product
between the two vectors. However, a subtle difference is that the dot product is defined
between two vectors of the same type (typically column vectors) rather than between the
matrix representation of a row vector and the matrix representation of a column vector. In
order to implement a dot product as a matrix-matrix multiplication, we would first need
to convert one of the column vectors into the matrix representation of a row vector, and
then perform the matrix multiplication by ordering the “wide” matrix (row vector) before
the “tall” matrix (column vector). The resulting 1 × 1 matrix contains the dot product.
For example, consider the dot product in matrix form, which is obtained by matrix-centric
multiplication of a row vector with a column vector:
$$v \cdot x = [v_1 , v_2 , v_3 ] \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = [v_1 x_1 + v_2 x_2 + v_3 x_3 ]$$
The result of the matrix multiplication is a 1 × 1 matrix containing the dot product, which
is a scalar. It is clear that we always obtain the same 1 × 1 matrix, irrespective of the order
of the arguments in the dot product, as long as we transpose the first vector in order to
place the “wide” matrix before the “tall” matrix:
x · v = v · x,    x^T v = v^T x
Unlike dot products, outer products can be performed between two vectors of different
lengths. Conventionally, outer products are defined between two column vectors, and the
second vector is transposed into a matrix containing a single row before matrix multiplica-
tion. In other words, the jth component of the second vector (in d dimensions) becomes the
(1, j)th element of the second matrix (of size 1 × d) in the multiplication. The first matrix
is simply a d × 1 matrix derived from the column vector. Unlike dot products, the outer
product is not commutative; the order of the operands matters not only to the values in
the final matrix, but also to the size of the final matrix:
\[
x \otimes v \neq v \otimes x, \qquad x\, v^T \neq v\, x^T
\]
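A short NumPy sketch (arbitrary values) makes the asymmetry of the outer product concrete:

```python
import numpy as np

x = np.array([1.0, 2.0])         # length 2
v = np.array([3.0, 4.0, 5.0])    # length 3: different lengths are allowed

xv = np.outer(x, v)  # 2 x 3 matrix, same as x.reshape(2, 1) @ v.reshape(1, 3)
vx = np.outer(v, x)  # 3 x 2 matrix: reversing the order changes both shape and entries
```

Note that the two results are transposes of one another, since x v^T = (v x^T)^T.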
The multiplication between vectors, or the multiplication of a matrix with a vector, are
both special cases of multiplying two matrices. However, in order to multiply two matrices,
certain constraints on their sizes need to be respected. For example, an n × k matrix U can
be multiplied with a k × d matrix V only because the number of columns k in U is the same
as the number of rows k in V . The resulting matrix is of size n × d, in which the (i, j)th
entry is the dot product between the vectors corresponding to the ith row of U and the
jth column of V . Note that the dot product operations within the multiplication require
the underlying vectors to be of the same sizes. The outer product between two vectors is
a special case of matrix multiplication that uses k = 1 with arbitrary values of n and d;
similarly, the inner product is a special case of matrix multiplication that uses n = d = 1,
but some arbitrary value of k. Consider the case in which the (i, j)th entries of U and V
are u_{ij} and v_{ij}, respectively. Then, the (i, j)th entry of U V is given by the following:
\[
(UV)_{ij} = \sum_{r=1}^{k} u_{ir} v_{rj} \qquad (1.10)
\]
1.2. SCALARS, VECTORS, AND MATRICES 11
\[
UV = \sum_{r=1}^{k} \underbrace{U_r V_r}_{n \times d}
\]
Here, U_r denotes the rth column of U (an n × 1 matrix) and V_r denotes the rth row of V
(a 1 × d matrix), so that each outer product U_r V_r is an n × d matrix.
Proof: Let u_{ij} and v_{ij} be the (i, j)th entries of U and V, respectively. It can be shown that
the rth term in the summation on the right-hand side of the equation in the statement of
the lemma contributes u_{ir} v_{rj} to the (i, j)th entry of the summation matrix. Therefore, the
overall sum of the terms on the right-hand side is \sum_{r=1}^{k} u_{ir} v_{rj}. This sum is exactly the same
as the definition of the (i, j)th term of the matrix multiplication U V (cf. Equation 1.10).
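The column-times-row decomposition of a matrix product is easy to verify numerically; the following NumPy sketch (illustrative shapes) checks that the sum of the k outer products equals UV:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.standard_normal((4, 3))   # n x k
V = rng.standard_normal((3, 5))   # k x d

# Sum of k outer products: r-th column of U times r-th row of V.
outer_sum = sum(np.outer(U[:, r], V[r, :]) for r in range(3))
product = U @ V
```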
In general, matrix multiplication is not commutative (except for special cases). In other
words, we have AB ≠ BA in the general case. This is different from scalar multiplication,
which is commutative. A concrete example of non-commutativity is as follows:
\[
\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}
=
\begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix}
\neq
\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 1 & 0 \end{bmatrix}
\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}
\]
In fact, if the matrices A and B are not square, it is possible that one of the
products, say AB, can be computed based on the sizes of A and B, whereas BA might
not be computable. For example, it is possible to compute AB for the 4 × 2 matrix A and
the 2 × 5 matrix B. However, it is not possible to compute BA because of mismatching
dimensions.
These properties also hold for matrix-vector multiplication, because all vectors are special
cases of matrices. The associativity property is very useful in ensuring efficient matrix
multiplication by carefully selecting from the different choices allowed by associativity.
Problem 1.2.3 Express the matrix ABC as the weighted sum of outer products of vectors
extracted from A and C. The weights are extracted from matrix B.
Problem 1.2.4 Let A be a 1,000,000 × 2 matrix. Suppose you have to compute the
2 × 1,000,000 matrix A^T A A^T on a computer with limited memory. Would you prefer to
compute (A^T A) A^T or would you prefer to compute A^T (A A^T)?
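A scaled-down sketch of this problem (using n = 1000 in place of 1,000,000) shows why the parenthesization matters: the first ordering only ever materializes a 2 × 2 intermediate, while the second requires an n × n intermediate.

```python
import numpy as np

n = 1000  # stand-in for 1,000,000 in the problem statement
A = np.random.default_rng(1).standard_normal((n, 2))

# Option 1: (A^T A) A^T -- the intermediate A^T A is only 2 x 2.
opt1 = (A.T @ A) @ A.T
# Option 2: A^T (A A^T) -- the intermediate A A^T is n x n
# (10^12 entries when n = 1,000,000, which would not fit in memory).
opt2 = A.T @ (A @ A.T)
```

By associativity, both orderings produce the same 2 × n result; only the cost differs.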
Problem 1.2.5 Let D be an n × d matrix for which each column sums to 0. Let A be an
arbitrary d × d matrix. Show that the sum of each column of DA is also zero.
The key point in showing the above result is to use the fact that the sum of the rows of D
can be expressed as e^T D, where e is an n-dimensional column vector of 1s.
The transpose of the product of two matrices is given by the product of their transposes,
but the order of multiplication is reversed:
\[
(AB)^T = B^T A^T \qquad (1.12)
\]
This result can be easily shown by working out the algebraic expression for the (i, j)th entry
in terms of the entries of A = [aij ] and B = [bij ]. The result for transposes can be easily
extended to any number of matrices, as shown below:
Problem 1.2.6 Show the following result for matrices A1 . . . An :
\[
(A_1 A_2 A_3 \ldots A_n)^T = A_n^T A_{n-1}^T \ldots A_2^T A_1^T
\]
The multiplication between a matrix and a vector also satisfies the same type of transposi-
tion rule as shown above.
Problem 1.2.7 If A and B are symmetric matrices, then show that AB is symmetric if
and only if AB = BA.
The diagonal of a matrix is defined as the set of entries for which the row and column indices
are the same. Although the notion of diagonal is generally used for square matrices, the
definition is sometimes also used for rectangular matrices; in such a case, the diagonal starts
at the upper-left corner so that the row and column indices are the same. A square matrix
that has values of 1 in all entries along the diagonal and 0s for all non-diagonal entries is
referred to as an identity matrix, and is denoted by I. In the event that the non-diagonal
entries are 0, but the diagonal entries are different from 1, the resulting matrix is referred to
as a diagonal matrix. Therefore, the identity matrix is a special case of a diagonal matrix.
Multiplying an n × d matrix A with the identity matrix of the appropriate size in any order
results in the same matrix A. One can view the identity matrix as the analog of the value
of 1 in scalar multiplication:
AI = IA = A (1.13)
Since A is an n × d matrix, the size of the identity matrix I in the product AI is d × d,
whereas the size of the identity matrix in the product IA is n × n. This is somewhat
confusing, because the same notation I in Equation 1.13 refers to identity matrices of two
different sizes. In such cases, ambiguity is avoided by subscripting the identity matrix to
indicate its size. For example, an identity matrix of size d × d is denoted by Id . Therefore,
a more unambiguous form of Equation 1.13 is as follows:
AId = In A = A (1.14)
Although diagonal matrices are assumed to be square by default, it is also possible to create
a relaxed definition1 of a diagonal matrix, which is not square. In this case, the diagonal is
aligned with the upper-left corner of the matrix. Such matrices are referred to as rectangular
diagonal matrices.
A block diagonal matrix contains square blocks B1 . . . Br of (possibly) non-zero entries along
the diagonal. All other entries are zero. Although each block is square, they need not be
of the same size. Examples of different types of diagonal and block diagonal matrices are
shown in the top row of Figure 1.3.
A generalization of the notion of a diagonal matrix is that of a triangular matrix:
Definition 1.2.2 (Upper and Lower Triangular Matrix) A square matrix is an up-
per triangular matrix if all entries (i, j) below its main diagonal (i.e., satisfying i > j)
are zeros. A matrix is lower triangular if all entries (i, j) above its main diagonal (i.e.,
satisfying i < j) are zeros.
1 Instead of referring to such matrices as rectangular diagonal matrices, some authors place quotation
marks around the word “diagonal” when referring to such matrices. This is because the word “diagonal”
was originally reserved for square matrices.
Proof Sketch: This result is easy to show by proving that the scalar expressions for the
(i, j)th entry in the sum and the product are both 0, when i > j.
The above lemma naturally applies to lower-triangular matrices as well.
Although the notion of a triangular matrix is generally meant for square matrices, it is
sometimes used for rectangular matrices. Examples of different types of triangular matrices
are shown in the bottom row of Figure 1.3. The portion of the matrix occupied by non-
zero entries is shaded. Note that the number of non-zero entries in rectangular triangular
matrices heavily depends on the shape of the matrix. Finally, a matrix A is said to be
sparse, when most of the entries in it have 0 values. It is often computationally efficient to
work with such matrices.
\[
A^n = \underbrace{A\, A \ldots A}_{n \text{ times}} \qquad (1.15)
\]
The zeroth power of a matrix is defined to be the identity matrix of the same size. When
a matrix satisfies A^k = 0 for some integer k, it is referred to as nilpotent. For example,
all strictly triangular matrices of size d × d satisfy A^d = 0. Like scalars, one can raise a
square matrix to a fractional power, although it is not guaranteed to exist. For example,
if A = V^2, then we have V = A^{1/2}. Unlike scalars, it is not guaranteed that A^{1/2} exists
for an arbitrary matrix A, even after allowing for complex-valued entries in the result (see
Exercise 14). In general, one can compute a polynomial function f(A) of a square matrix in
much the same way as one computes polynomials of scalars. Instead of the constant term
used in a scalar polynomial, multiples of the identity matrix are used; the identity matrix
is the matrix analog of the scalar value of 1. For example, the matrix analog of the scalar
polynomial f(x) = 3x^2 + 5x + 2, when applied to the d × d matrix A, is as follows:
\[
f(A) = 3A^2 + 5A + 2I
\]
All polynomials of the same matrix A always commute with respect to the multiplication
operator.
Observation 1.2.1 (Commutativity of Matrix Polynomials) Two polynomials f (A)
and g(A) of the same matrix A will always commute:
f (A)g(A) = g(A)f (A)
The above result can be shown by expanding the polynomial on both sides, and showing
that the same polynomial is reached with the distributive property of matrix multiplication.
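The commutativity of matrix polynomials is easy to check numerically; the two polynomials below are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
I = np.eye(2)

f_A = 3 * (A @ A) + 5 * A + 2 * I   # f(x) = 3x^2 + 5x + 2 applied to A
g_A = A @ A - 4 * I                 # a second polynomial, g(x) = x^2 - 4
```

Since both f(A) and g(A) are polynomials of the same matrix A, their products in either order agree, exactly as Observation 1.2.1 states.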
Can we raise a matrix to a negative power? The inverse of a square matrix A is another
square matrix, denoted by A^{-1}, such that the multiplication of the two matrices (in any
order) results in the identity matrix:
\[
A A^{-1} = A^{-1} A = I \qquad (1.16)
\]
A simple formula exists for inverting 2 × 2 matrices:
\[
\begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}
= \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \qquad (1.17)
\]
An example of two matrices that are inverses of each other is shown below:
\[
\begin{bmatrix} 8 & 3 \\ 5 & 2 \end{bmatrix}
\begin{bmatrix} 2 & -3 \\ -5 & 8 \end{bmatrix}
=
\begin{bmatrix} 2 & -3 \\ -5 & 8 \end{bmatrix}
\begin{bmatrix} 8 & 3 \\ 5 & 2 \end{bmatrix}
=
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
\]
The inverse of a 1 × 1 matrix containing the element a is simply the 1 × 1 matrix containing
the element 1/a. Therefore, a matrix inverse naturally generalizes a scalar inverse. Not all
matrices have inverses, just as an inverse does not exist for the scalar a = 0. A matrix
for which an inverse exists is referred to as invertible or nonsingular. Otherwise, it is said
to be singular. For example, if the rows in Equation 1.17 are proportional, we would have
ad − bc = 0, and therefore, the matrix would not be invertible. An example of a matrix that
is not invertible is as follows:
\[
A = \begin{bmatrix} 1 & 1 \\ 2 & 2 \end{bmatrix}
\]
Note that multiplying A with any 2 × 2 matrix B will always result in a 2 × 2 matrix AB
in which the second row is twice the first. This is not the case for the identity matrix,
and, therefore, an inverse of A does not exist. The fact that the rows in the non-invertible
matrix A are related by a proportionality factor is not a coincidence. As you will learn
in Chapter 2, invertible matrices always have the property that no non-zero linear
combination of their rows sums to the zero vector. In other words, each vector direction in
the rows of an invertible matrix must contribute new, non-redundant “information” that
cannot be conveyed using sums, multiples, or linear combinations of other directions. The second
row of A is twice its first row, and therefore the matrix A is not invertible.
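A small NumPy sketch (the helper name `inverse_2x2` is ours, not the book's) verifies the 2 × 2 inversion formula of Equation 1.17 on the example above, and shows that the proportional-row matrix has ad − bc = 0, so the formula breaks down:

```python
import numpy as np

def inverse_2x2(M):
    # Direct use of the 2 x 2 inversion formula; valid only when ad - bc != 0.
    (a, b), (c, d) = M
    det = a * d - b * c
    return np.array([[d, -b], [-c, a]]) / det

M = np.array([[8.0, 3.0], [5.0, 2.0]])
M_inv = inverse_2x2(M)  # ad - bc = 16 - 15 = 1

# For the matrix [[1, 1], [2, 2]] with proportional rows, ad - bc = 1*2 - 1*2 = 0,
# so the inverse does not exist.
singular_det = 1.0 * 2.0 - 1.0 * 2.0
```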
When the inverse of a matrix A does exist, it is unique. Furthermore, the product of a
matrix with its inverse is always commutative and leads to the identity matrix. A natural
consequence of these facts is that the inverse of the inverse (A−1 )−1 is the original matrix
A. We summarize these properties of inverses in the following two lemmas.
Proof: We present a restricted proof by making the assumption that a matrix C always
exists so that CA = I. Then, we have:
C = CI = C(AB) = (CA)B = IB = B
The commutativity of the product of a matrix and its inverse can be viewed as an extension
of the statement in Observation 1.2.1 that the product of a matrix A with any polynomial
of A is always commutative. A fractional or negative power of a matrix A (like A^{-1}) also
commutes with A.
Lemma 1.2.4 When the inverse of a matrix exists, it is always unique. In other words, if
B1 and B2 satisfy AB1 = AB2 = I, we must have B1 = B2 .
Proof: Since AB_1 = AB_2, it follows that AB_1 − AB_2 = 0. Therefore, we have A(B_1 − B_2) = 0.
One can pre-multiply this relationship with B_1 to obtain the following:
\[
\underbrace{B_1 A}_{I} (B_1 - B_2) = 0
\]
Since B_1 A = I (a left inverse is also a right inverse), the left-hand side simplifies to
B_1 − B_2, which implies B_1 = B_2.
One can extend the above results to show that (A_1 A_2 \ldots A_k)^{-1} = A_k^{-1} A_{k-1}^{-1} \ldots A_1^{-1}. Note
that the individual matrices A_i must be invertible for their product to be invertible. If even
one of the matrices A_i is not invertible, the product will not be invertible (see Exercise 52).
Problem 1.2.10 Suppose that the matrix B is the inverse of matrix A. Show that for any
positive integer n, the matrix B^n is the inverse of matrix A^n.
The inversion and the transposition operations can be applied in any order without affecting
the result:
\[
(A^T)^{-1} = (A^{-1})^T \qquad (1.19)
\]
This result holds because A^T (A^{-1})^T = (A^{-1} A)^T = I^T = I. One can similarly show that
(A^{-1})^T A^T = I. In other words, (A^{-1})^T is the inverse of A^T.
An orthogonal matrix is a square matrix whose inverse is its transpose:
\[
A A^T = A^T A = I \qquad (1.20)
\]
Although such matrices are formally defined in terms of having orthonormal columns, the
commutativity in the above relationship implies the remarkable property that they contain
both orthonormal columns and orthonormal rows.
A useful property of invertible matrices is that they define uniquely solvable systems of
equations. For example, the solution to Ax = b exists and is uniquely defined as x = A^{-1} b
when A is invertible (cf. Chapter 2). One can also view the solution x as a new set of
coordinates of b in a different (and possibly non-orthogonal) coordinate system defined by
the vectors contained in the columns of A. Note that when A is orthogonal, the solution
simplifies to x = A^T b, which is equivalent to evaluating the dot product between b and each
column of A to compute the corresponding coordinate. In other words, we are projecting b
onto each orthonormal column of A to compute the corresponding coordinate.
The result can be used for inverting triangular matrices (although more straightforward
alternatives exist):
Problem 1.2.11 (Inverting Triangular Matrices) A d × d triangular matrix L with
non-zero diagonal entries can be expressed in the form (Δ + A), where Δ is an invertible
diagonal matrix and A is a strictly triangular matrix. Show how to compute the inverse
of L using only diagonal matrix inversions and matrix multiplications/additions. Note that
strictly triangular matrices of size d × d are always nilpotent and satisfy A^d = 0.
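One possible sketch of the approach suggested in this problem (the function name and loop structure are our own) writes L = Δ(I + N) with N = Δ^{-1}A strictly triangular, and uses the terminating series (I + N)^{-1} = I − N + N^2 − · · ·, which stops because N is nilpotent:

```python
import numpy as np

def invert_lower_triangular(L):
    # Write L = D + A with D diagonal (invertible) and A strictly lower triangular.
    # Then L = D (I + N) where N = D^{-1} A is strictly triangular, hence nilpotent
    # (N^d = 0), so (I + N)^{-1} = I - N + N^2 - ... terminates after d terms.
    d = L.shape[0]
    D_inv = np.diag(1.0 / np.diag(L))
    N = D_inv @ (L - np.diag(np.diag(L)))
    inv_I_plus_N = np.eye(d)
    term = np.eye(d)
    for _ in range(1, d):
        term = term @ (-N)          # successive powers (-N)^k
        inv_I_plus_N += term
    return inv_I_plus_N @ D_inv     # L^{-1} = (I + N)^{-1} D^{-1}

L = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [4.0, 5.0, 6.0]])
L_inv = invert_lower_triangular(L)
```

Only diagonal inversions and matrix multiplications/additions are used, as the problem requires.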
It is also possible to derive an expression for inverting the sum of two matrices in terms
of the original matrices under the condition that one of the two matrices is “compact.” By
compactness, we mean that one of the two matrices has so much structure to it that it can
be expressed as the product of two much smaller matrices. The matrix-inversion lemma is
a useful property for computing the inverse of a matrix after incrementally updating it with
a matrix created from the outer-product of two vectors. These types of inverses arise often
in iterative optimization algorithms such as the quasi-Newton method and for incremental
linear regression. In these cases, the inverse of the original matrix is already available, and
one can cheaply update the inverse with the matrix inversion lemma.
Lemma 1.2.5 (Matrix Inversion Lemma) Let A be an invertible d × d matrix, and u
and v be non-zero d-dimensional column vectors. Then, A + u v^T is invertible if and only if
v^T A^{-1} u ≠ −1. In such a case, the inverse is computed as follows:
\[
(A + u\, v^T)^{-1} = A^{-1} - \frac{A^{-1} u\, v^T A^{-1}}{1 + v^T A^{-1} u}
\]
To see why invertibility fails when 1 + v^T A^{-1} u = 0, observe that the non-zero vector
A^{-1} u is then mapped to the zero vector:
\[
(A + u\, v^T)\, A^{-1} u = u + u\, (v^T A^{-1} u) = u\, (1 + v^T A^{-1} u) = 0
\]
Although matrix multiplication is not commutative in general, the above proof uses the fact
that v^T A^{-1} u can be moved around freely in the order of matrix multiplication because
it is a scalar.
Variants of the matrix inversion lemma are used in various types of iterative updates in
machine learning. A specific example is incremental linear regression, where one often wants
to invert matrices of the form C = D^T D, where D is an n × d data matrix. When a new
d-dimensional data point v is received, the size of the data matrix becomes (n + 1) × d with
the addition of the row vector v^T to D. The matrix C is now updated to D^T D + v v^T, and the
matrix inversion lemma comes in handy for updating the inverted matrix in O(d^2) time.
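A numerical sketch of the rank-one update (random test data; the lemma's formula is applied directly) can be compared against a fresh inversion of the updated matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # a well-conditioned test matrix
u = rng.standard_normal(4)
v = rng.standard_normal(4)

A_inv = np.linalg.inv(A)
denom = 1.0 + v @ A_inv @ u   # the lemma requires this scalar to be non-zero

# O(d^2) rank-one update of the inverse (no fresh matrix inversion needed):
updated_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom
```

The update costs only a few matrix-vector products and one outer product, rather than a full O(d^3) inversion.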
One can even generalize the above result to cases where the vectors u and v are replaced
with “thin” matrices U and V containing a small number k of columns.
Theorem 1.2.1 (Sherman–Morrison–Woodbury Identity) Let A be an invertible d ×
d matrix and let U, V be d × k non-zero matrices for some small value of k. Then, the matrix
A + U V^T is invertible if and only if the k × k matrix (I + V^T A^{-1} U) is invertible. Furthermore,
the inverse is given by the following:
\[
(A + U V^T)^{-1} = A^{-1} - A^{-1} U\, (I + V^T A^{-1} U)^{-1}\, V^T A^{-1}
\]
This type of update is referred to as a low-rank update; the notion of rank will be explained
in Chapter 2. We provide some exercises relevant to the matrix inversion lemma.
Problem 1.2.12 Suppose that I and P are two k × k matrices. Show the following result:
\[
(I + P)^{-1} = I - (I + P)^{-1} P
\]
A hint for solving this problem is to check what you get when you left multiply both sides
of the above identity with (I + P ). A closely related result is the push-through identity:
Problem 1.2.13 (Push-Through Identity) If U and V are two n × d matrices, show
the following result:
\[
U^T (I_n + V U^T)^{-1} = (I_d + U^T V)^{-1} U^T
\]
Use the above result to show the following for any n × d matrix D and scalar λ > 0:
\[
D^T (\lambda I_n + D D^T)^{-1} = (\lambda I_d + D^T D)^{-1} D^T
\]
A hint for solving the above problem is to see what happens when one left-multiplies and
right-multiplies the above identities with the appropriate matrices. The push-through iden-
tity derives its name from the fact that we push in a matrix on the left and it comes out
on the right. This identity is very important and is used repeatedly in this book.
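The push-through identity can be sanity-checked numerically; the shapes and the random seed below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5, 3
U = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))

lhs = U.T @ np.linalg.inv(np.eye(n) + V @ U.T)   # U^T (I_n + V U^T)^{-1}
rhs = np.linalg.inv(np.eye(d) + U.T @ V) @ U.T   # (I_d + U^T V)^{-1} U^T
```

The left side inverts an n × n matrix and the right side a d × d matrix; when d is much smaller than n, the right side is far cheaper, which is one reason the identity matters in practice.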
Note the use of ‖ · ‖_F to denote the Frobenius norm. The squared Frobenius norm is the
sum of squares of the norms of the row-vectors (or, alternatively, the column vectors) in the matrix.
More generally, the trace of the product of two matrices C = [c_{ij}] and D = [d_{ij}], each of
size n × d, is the sum of their entrywise products:
\[
\mathrm{tr}(C D^T) = \mathrm{tr}(D C^T) = \sum_{i=1}^{n} \sum_{j=1}^{d} c_{ij} d_{ij} \qquad (1.24)
\]
The trace of the product of two matrices A = [a_{ij}]_{n \times d} and B = [b_{ij}]_{d \times n} is invariant to the
order of matrix multiplication:
\[
\mathrm{tr}(A B) = \mathrm{tr}(B A) = \sum_{i=1}^{n} \sum_{j=1}^{d} a_{ij} b_{ji} \qquad (1.25)
\]
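Both trace identities can be checked with a small NumPy example (arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 5))   # n x d
B = rng.standard_normal((5, 3))   # d x n

tr_AB = np.trace(A @ B)       # trace of a 3 x 3 matrix
tr_BA = np.trace(B @ A)       # trace of a 5 x 5 matrix -- same value
entrywise = np.sum(A * B.T)   # sum over i, j of a_ij * b_ji
```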
Problem 1.2.14 Show that the Frobenius norm of the outer product of two vectors is equal
to the product of their Euclidean norms.
The Frobenius norm shares many properties with vector norms, such as sub-additivity
and sub-multiplicativity. These properties are analogous to the triangle inequality and the
Cauchy-Schwarz inequality, respectively, in the case of vector norms.
Lemma 1.2.6 (Sub-additive Frobenius Norm) For any pair of matrices A and B of
the same size, the triangle inequality ‖A + B‖_F ≤ ‖A‖_F + ‖B‖_F is satisfied.
The above result is easy to show by simply treating a matrix as a vector and creating two
long vectors from A and B, each with dimensionality equal to the number of matrix entries.
Lemma 1.2.7 (Sub-multiplicative Frobenius Norm) For any pair of matrices A and
B of sizes n × k and k × d, respectively, the sub-multiplicative property ‖AB‖_F ≤ ‖A‖_F ‖B‖_F
is satisfied.
Proof Sketch: Let a_1 . . . a_n correspond to the rows of A, and b_1 . . . b_d contain the trans-
posed columns of B. Then, the (i, j)th entry of AB is a_i · b_j, and the squared Frobenius norm
of the matrix AB is \sum_{i=1}^{n} \sum_{j=1}^{d} (a_i \cdot b_j)^2. Each (a_i \cdot b_j)^2 is at most ‖a_i‖^2 ‖b_j‖^2 according
to the Cauchy-Schwarz inequality. Therefore, we have the following:
\[
\|AB\|_F^2 = \sum_{i=1}^{n} \sum_{j=1}^{d} (a_i \cdot b_j)^2
\le \sum_{i=1}^{n} \sum_{j=1}^{d} \|a_i\|^2 \|b_j\|^2
= \Big( \sum_{i=1}^{n} \|a_i\|^2 \Big) \Big( \sum_{j=1}^{d} \|b_j\|^2 \Big)
= \|A\|_F^2\, \|B\|_F^2
\]
Problem 1.2.15 (Small Matrices Have Large Inverses) Show that the Frobenius
norm of the inverse of an n × n matrix with Frobenius norm ε is at least \sqrt{n}/ε.
1.3. MATRIX MULTIPLICATION AS A DECOMPOSABLE OPERATOR 21
f (x) = Ax
One can view this function as a vector-centric generalization of the univariate linear function
g(x) = a x for scalar a. This is one of the reasons that matrices are viewed as linear operators
on vectors. Much of linear algebra is devoted to understanding this transformation and
leveraging it for efficient numerical computations.
One issue is that if we have a large d × d matrix, it is often hard to interpret what
the matrix is really doing to the vector in terms of its individual components. This is the
reason that it is often useful to interpret a matrix as a product of simpler matrices. Because
of the beautiful property of the associativity of matrix multiplication, one can interpret a
product of simple matrices (and a vector) as the composition of simple operations on the
vector. In order to understand this point, consider the case when the above matrix A can
be decomposed into the product of simpler d × d matrices B1 , B2 , . . . Bk , as follows:
A = B1 B2 . . . Bk−1 Bk
Assume that each Bi is simple enough that one can intuitively interpret the effect of mul-
tiplying a vector x with Bi easily (such as rotating the vector or scaling it). Then, the
aforementioned function f(x) can be written as follows:
\[
f(x) = B_1 (B_2 (\ldots (B_{k-1} (B_k\, x)) \ldots))
\]
The nested brackets on the right provide an order to the operations. In other words, we first
apply the operator Bk to x, then apply Bk−1 , and so on all the way down to B1 . Therefore,
as long as we can decompose a matrix into the product of simpler matrices, we can interpret
matrix multiplication with a vector as a sequence of simple, easy-to-understand operations
on the vector. In this section, we will provide two important examples of decomposition,
which will be studied in greater detail throughout the book.
• Addition operation: A scalar multiple of the jth row is added to the ith row. The
operation is defined by two indices i, j in a specific order, and a scalar multiple c.
• Scaling operation: The ith row is multiplied with scalar c. The operation is fully defined
by the row index i and the scalar c.
The above operations are referred to as elementary row operations. One can define exactly
analogous operations on the columns with elementary column operations.
An elementary matrix is a matrix that differs from the identity matrix by applying a
single row or column operation. Pre-multiplying a matrix X with an elementary matrix cor-
responding to an interchange results in an interchange of the rows of X. In other words, if E
is the elementary matrix corresponding to an interchange, then a pair of rows of X' = EX
will be interchanged with respect to X. A similar result holds true for other operations
like row addition and row scaling. Some examples of 3 × 3 elementary matrices with the
corresponding operations are illustrated in the table below:
These matrices are also referred to as elementary matrix operators because they are used
to apply specific row operations on arbitrary matrices. The scalar c is always non-zero in
the above matrices, because all elementary matrices are invertible and are different from
the identity matrix (albeit in a minor way). Pre-multiplication of X with the appropriate
elementary matrix can result in a row exchange, addition, or row-wise scaling being applied
to X. For example, the first and second rows of the matrix X can be exchanged to create
X' as follows:
\[
\underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{Operator}}
\underbrace{\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}}_{X}
=
\underbrace{\begin{bmatrix} 4 & 5 & 6 \\ 1 & 2 & 3 \\ 7 & 8 & 9 \end{bmatrix}}_{X'}
\]
The first row of the matrix can be scaled up by a factor of 2 with the use of the appropriate
scaling operator:
\[
\underbrace{\begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{Operator}}
\underbrace{\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}}_{X}
=
\underbrace{\begin{bmatrix} 2 & 4 & 6 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}}_{X'}
\]
Post-multiplication of matrix X with the following elementary matrices will result in ex-
actly analogous operations on the columns of X to create X':
Only the elementary matrix for the addition operation is slightly different between row
and column operations (although the other two matrices are the same). In the following, we
show an example of how post-multiplication with the appropriate elementary matrix can
result in a column exchange operation:
\[
\underbrace{\begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}}_{X}
\underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}}_{\text{Operator}}
=
\underbrace{\begin{bmatrix} 2 & 1 & 3 \\ 5 & 4 & 6 \\ 8 & 7 & 9 \end{bmatrix}}_{X'}
\]
Note that this example is very similar to the one provided for row interchange, except that
the corresponding elementary matrix is post-multiplied in this case.
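The row and column operations above can be reproduced in NumPy; the operator matrices below mirror the examples in the text:

```python
import numpy as np

X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

E_swap = np.array([[0.0, 1.0, 0.0],   # interchange of rows (or columns) 1 and 2
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0]])
E_scale = np.diag([2.0, 1.0, 1.0])    # scale the first row by 2

swapped = E_swap @ X        # pre-multiplication acts on the rows
col_swapped = X @ E_swap    # post-multiplication acts on the columns
```

The same interchange matrix produces a row exchange when pre-multiplied and a column exchange when post-multiplied, exactly as in the two worked examples.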
Problem 1.3.1 Define a 4 × 4 operator matrix so that pre-multiplying any matrix X with
this matrix will result in addition of ci times the ith row of X to the 2nd row of X for each
i ∈ {1, 2, 3, 4} in one shot. Show that this matrix can be expressed as the product of three
elementary addition matrices and a single elementary multiplication matrix.
These types of elementary matrices are always invertible. The inverse of the interchange
matrix is itself. The inverse of the scaling matrix is obtained by replacing the entry c with
1/c. The inverse of the row or column addition matrix is obtained by replacing c with −c.
We make the following observation:
Keeping the inverses of elementary matrices in mind can sometimes be useful. Therefore,
the reader is encouraged to work out the details of these matrices using the exercise below:
Problem 1.3.2 Write down one example of each of the three types [i.e., interchange, mul-
tiplication, and addition] of elementary matrices for performing row operations on a matrix
of size 4 × 4. Work out the inverse of these matrices. Repeat this result for each of the three
types of matrices for performing column operations.
The following exercises are examples of the utility of the inverses of elementary matrices:
Problem 1.3.3 Let A and B be two matrices. Let Aij be the matrix obtained by exchanging
the ith and jth columns of A, and Bij be the matrix obtained by exchanging the ith and jth
rows of B. Write each of Aij and Bij in terms of A or B, and an elementary matrix. Now
explain why Aij Bij = AB.
Problem 1.3.4 Let A and B be two matrices. Let matrix A' be created by adding c times
the jth column of A to its ith column, and matrix B' be created by subtracting c times the
ith row of B from its jth row. Explain using the concept of elementary matrices why the
matrices AB and A' B' are the same.
It is also possible to apply elementary operations to matrices that are not square. For an
n × d matrix, the pre-multiplication operator matrix will be of size n × n, whereas the
post-multiplication operator matrix will be of size d × d.
Permutation Matrices
An elementary row (or column) interchange operator matrix is a special case of a permu-
tation matrix. A permutation matrix contains a single 1 in each row, and a single 1 in each
column. An example of a permutation matrix P is shown below:
\[
P = \begin{bmatrix}
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0
\end{bmatrix}
\]
Pre-multiplying any matrix with a permutation matrix shuffles the rows, and post-
multiplying any matrix with a permutation matrix shuffles the columns. For example, pre-
multiplying any four-row matrix with the above matrix P makes the new first, second, third,
and fourth rows equal to the old third, first, fourth, and second rows, respectively.
Post-multiplying any four-column matrix with P reorders the columns, albeit in the reverse
order:
Column 2 ⇒ Column 4 ⇒ Column 1 ⇒ Column 3
It is noteworthy that a permutation matrix and its transpose are inverses of one another
because they have orthonormal columns. Such matrices are useful in reordering the items
of a data matrix, and applications will be shown for graph matrices in Chapter 10. Since
one can shuffle the rows of a matrix by using a sequence of row interchange operations, it
follows that any permutation matrix is a product of row interchange operator matrices.
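A short NumPy sketch illustrates these properties of the permutation matrix P shown above:

```python
import numpy as np

P = np.array([[0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0]], dtype=float)

X = np.arange(16, dtype=float).reshape(4, 4)

row_shuffled = P @ X        # each new row i is the old row j for which P[i, j] = 1
col_shuffled = X @ P        # the columns are shuffled instead
round_trip = P.T @ (P @ X)  # P^T undoes P, since P^T is the inverse of P
```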
AX = I
Row operations are applied on A to convert the matrix to the identity matrix. A systematic
approach to perform such row operations to convert A to the identity matrix is the Gaussian
elimination method discussed in Chapter 2. These operations are mirrored on the right-hand
side so that the identity matrix is converted to the inverse. As the final result of the row
operations, we obtain the following:
IX = A−1
Elementary matrices are fundamental because one can decompose any square and invertible
matrix into a product of elementary matrices. In fact, if one is willing to augment the set
of elementary multiplication operators to allow the scalar c on the diagonal to be zero
(which is traditionally not the case), then one can express any square matrix as a product
of augmented elementary matrices.
Finally, we discuss the important application of finding a solution to the system of
equations Ax = b. Here, A is an n × d matrix, x is a d-dimensional column vector, and b is
an n-dimensional column vector. Note that a feasible solution might not exist to this system
of equations, especially when some groups of equations are mutually inconsistent. For example,
the equations \sum_{i=1}^{100} x_i = +1 and \sum_{i=1}^{100} x_i = −1 are mutually inconsistent.
The matrix-centric methodology for solving such a system of linear equations derives
its inspiration from the well-known methodology of eliminating variables from systems of
equations in multiple variables. For example, if we have a pair of linear equations in x1 and
x2 , we can create an equation without one of the variables by subtracting an appropriate
multiple of one equation from the other. This operation is identical to the elementary
row addition operation discussed in this chapter. This general principle can be applied to
systems containing any number of variables, so that the rth equation is defined only in
terms of x_r, x_{r+1}, . . . x_d. This is equivalent to converting the original system Ax = b into a
new system A'x = b' where A' is triangular. Therefore, if we apply a sequence E_1 . . . E_k of
elementary row operations to the system of equations, we obtain the following relationship:
\[
\underbrace{E_k E_{k-1} \ldots E_1 A}_{A'}\, x = \underbrace{E_k E_{k-1} \ldots E_1\, b}_{b'}
\]
A triangular system of equations is solved by first processing equations with fewer variables
and iteratively backsubstituting these values to reduce the system to fewer variables. These
methods will be discussed in detail in Chapter 2. It is noteworthy that the problem of solving
linear equations is a special case of the fundamental machine learning problem of linear
regression, in which the best-fit solution is found to an inconsistent system of equations.
Linear regression serves as the “parent problem” to many machine learning problems like
least-squares classification, support-vector machines, and logistic regression.
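The back-substitution step for a triangular system can be sketched as follows (a minimal implementation, assuming an upper-triangular A with non-zero diagonal entries; the function name is our own):

```python
import numpy as np

def back_substitute(A, b):
    # Solve A x = b for an upper-triangular A with non-zero diagonal,
    # processing equations from the last (fewest variables) upward.
    d = A.shape[0]
    x = np.zeros(d)
    for i in range(d - 1, -1, -1):
        # Subtract the contribution of the already-solved variables.
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
b = np.array([3.0, 7.0, 8.0])
x = back_substitute(A, b)
```

Each iteration solves one equation in one unknown, which is why a triangular system is so much easier than a general one.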
The final result is obtained by using a standard trigonometric identity for the cosines and
sines of the sums of angles, and the Cartesian coordinates shown on the right-hand side
are equivalent to the polar coordinates [a, α + θ]. In other words, the original coordinates
[a, α] have been rotated counter-clockwise by angle θ. The basic geometric operations like
rotation, reflection, and scaling can be performed by multiplication with appropriately
chosen matrices. We list these matrices below, which are defined for pre-multiplying column
vectors:
The above matrices are also referred to as elementary matrices for geometric operations
(like the elementary matrices for row and column operations). It is possible for the diagonal
entries of the scaling matrix to be negative or 0. Strictly speaking, the elementary reflection
matrix can be considered a special case of the scaling matrix by setting the different values
of ci to values drawn from {−1, 1}.
Problem 1.3.5 The above list of matrices for rotation, reflection, and scaling is designed
to transform a column vector x using the matrix-to-vector product Ax. Write down the
corresponding matrices for the case when you want to transform a row vector u as uB.
The matrix for a sequence of transformations can be computed by multiplying the corre-
sponding matrices. This is easy to show by observing that if we have A = A1 . . . Ak , then
successively pre-multiplying a column-vector x with Ak . . . A1 is the same as the expression
A1 (A2 (. . . (Ak x))). Because of the associativity of matrix multiplication, one can express
this matrix as (A1 . . . Ak )x = Ax. Conversely, if a matrix can be expressed as a product
of simpler matrices (like the geometric ones shown above), then multiplication of a vector
with that matrix is equivalent to a sequence of the above geometric transformations.
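This composition property is easy to verify numerically. The snippet below (a minimal sketch; the specific matrices and the test vector are illustrative) builds a rotation, a reflection, and a scaling matrix, and checks that applying them in sequence matches a single multiplication with their product:

```python
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # counter-clockwise rotation
F = np.array([[1.0, 0.0],
              [0.0, -1.0]])                       # reflection across the X-axis
S = np.diag([2.0, 0.5])                           # axis-parallel scaling

x = np.array([1.0, 2.0])

# Applying the operations one at a time (scale, then reflect, then rotate) ...
step_by_step = R @ (F @ (S @ x))
# ... is the same as one multiplication with the product matrix A = R F S.
A = R @ F @ S
```

The associativity of matrix multiplication is what guarantees that `A @ x` and `step_by_step` agree.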
A fundamental result of linear algebra is that any square matrix can be shown to be a
product of rotation/reflection/scaling matrices by using a technique called singular value
decomposition. In other words, all linear transformations of vectors defined by matrix multiplication correspond to the application of a sequence of rotations, reflections, and scalings
on the vector. Chapter 2 generalizes the 2 × 2 matrices in the above table to any number of
dimensions by using d × d matrices. These concepts are sometimes more complex in higher
dimensions — for example, it is possible to use an arbitrarily oriented axis of rotation in
higher dimensions unlike in the case of two dimensions. The decomposition of a matrix into
geometrically interpretable matrices can also be used for computing inverses.
Problem 1.3.6 Suppose that you are told that any invertible square matrix A can be ex-
pressed as a product of elementary rotation/reflection/scaling matrices as A = R1 R2 . . . Rk .
Express the inverse of A in terms of the easily computable inverses of R1 , R2 , . . . , Rk .
It is also helpful to understand the row addition operator, discussed in the previous section.
Consider the 2 × 2 row-addition operator:

A = [[1, c], [0, 1]]
This operator shears the space along the direction of the first coordinate. For example, if
vector z is [x, y]T , then Az yields the new vector [x+cy, y]T . Here, the y-coordinate remains
unchanged, whereas the x-coordinate gets sheared in proportion to its height. The shearing
of a rectangle into a parallelogram is shown in Figure 1.4. An elementary row operator
matrix is a very special case of a triangular matrix; correspondingly, a triangular matrix
with unit diagonal entries corresponds to a sequence of shears. This is because one can
convert an identity matrix into any such triangular matrix with a sequence of elementary
row addition operations.
1.4. BASIC PROBLEMS IN MACHINE LEARNING 27
Figure 1.4: An elementary row addition operator can be interpreted as a shear transform. For example, the matrix [[1, 0.2], [0, 1]] transforms the vector [2, 1]^T to the vector [2.2, 1]^T.
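The shear example of Figure 1.4 can be reproduced directly (a small illustrative check, not from the text):

```python
import numpy as np

c = 0.2
A = np.array([[1.0, c],
              [0.0, 1.0]])  # elementary row-addition operator acting as a shear

z = np.array([2.0, 1.0])
Az = A @ z  # the y-coordinate is unchanged; x becomes x + c*y = 2.2
```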
1.4.2 Clustering
The problem of clustering is that of partitioning the rows of the n × d data matrix D
into groups of similar rows. For example, imagine a setting where one has data records
in which the rows of D correspond to different individuals, and the different dimensions
(columns) of D correspond to the number of units of each product bought in a supermarket.
Then, a clustering application might try to segment the data set into groups of similar
individuals with particular types of buying behavior. The number of clusters might either
be specified by the analyst up front, or the algorithm might use a heuristic to set the number
of “natural” clusters in the data. One can often use the segmentation created by clustering
as a preprocessing step for other analytical goals. For example, on closer examination of
the clusters, one might learn that particular individuals are interested in household articles
in a grocery store, whereas others are interested in fruits. This information can be used by
the supermarket to make recommendations. Various clustering algorithms like k-means and
spectral clustering are introduced in Chapters 8, 9, and 10.
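Although clustering algorithms are only covered in later chapters, the flavor of the problem can be conveyed with a bare-bones sketch of the k-means (Lloyd's) iteration; this is our own illustrative implementation with a naive initialization, not the book's algorithm:

```python
import numpy as np

def kmeans(D, k, iters=50):
    """Bare-bones k-means on the rows of the n x d matrix D,
    naively initialized with the first k rows as centers."""
    centers = D[:k].astype(float).copy()
    labels = np.zeros(len(D), dtype=int)
    for _ in range(iters):
        # Assignment step: attach each row to its nearest center.
        dists = np.linalg.norm(D[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each center to the mean of its assigned rows.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = D[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated groups of rows should end up in different clusters.
D = np.vstack([np.zeros((5, 2)), 10 * np.ones((5, 2))])
labels, centers = kmeans(D, k=2)
```

Each row of D plays the role of one individual's buying record, and the returned labels give the segmentation.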
Classification
yi ≈ f (X i )
The function f (X i ) is often parameterized with a weight vector W . Consider the following
example of binary classification into the labels {−1, +1}:
yi ≈ fW (X i ) = sign{W · X i }
Note that we have added a subscript to the function to indicate its parametrization. How
does one compute W? The key idea is to penalize any mismatch between the
observed value yi and the predicted value fW(Xi) with the use of a carefully constructed loss
function. Therefore, many machine learning models reduce to the following optimization
problem:
Minimize_W Σ_i [Mismatch between y_i and f_W(X_i)]
Once the weight vector W has been computed by solving the optimization model, it is
used to predict the value of the class variable yi for instances in which the class variable
is not known. Classification is also referred to as supervised learning, because it uses the
training data to build a model that performs the classification of the test data. In a sense, the
training data serves as the “teacher” providing supervision. The ability to use the knowledge
in the training data in order to classify the examples in unseen test data is referred to as
generalization. There is no utility in classifying the examples of the training data again,
because their labels have already been observed.
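As a toy illustration of the prediction rule sign(W · X) (the data and the weight vector below are hypothetical):

```python
import numpy as np

def predict(W, X):
    """Binary prediction sign(W . x) in {-1, +1} for each row x of X."""
    return np.where(X @ W >= 0, 1, -1)

# Hypothetical 2-dimensional training data with labels in {-1, +1}.
X_train = np.array([[2.0, 1.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -2.0]])
y_train = np.array([1, 1, -1, -1])

W = np.array([1.0, 1.0])  # a weight vector that separates this toy data
train_acc = np.mean(predict(W, X_train) == y_train)
```

In practice W is not guessed but learned by minimizing a loss function over the training data, as discussed above; the same `predict` function is then applied to unseen test rows.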
Regression
The label in classification is also referred to as dependent variable, which is categorical in
nature. In the regression modeling problem, the n × d training data matrix D is associated
with an n × 1 vector y of dependent variables, which are numerical. Therefore, the only
difference from classification is that the array y contains numerical values (rather than
categorical ones), and can therefore be treated as a vector. The dependent variable is also
referred to as a response variable, target variable, or regressand in the case of regression. The
independent variables are also referred to as regressors. Binary response variables are closely
related to regression, and some models solve binary classification directly with the use of
a regression model (by pretending that the binary labels are numerical). This is because
binary values have the flexibility of being treated as either categorical or as numerical
values. However, more than two classes like {Red, Green, Blue} cannot be ordered, and are
therefore different from regression.
The regression modeling problem is closely related to linear algebra, especially when a
linear optimization model is used. In the linear optimization model, we use a d-dimensional
column vector W = [w1 . . . wd ]T to represent the weights of the different dimensions. The
ith entry yi of y is obtained as the dot product of the ith row X i of D and W . In other
words, the function f (·) to be learned by the optimization problem is as follows:
yi = f (X i ) = X i W
One can also state this condition across all training instances using the full n × d data
matrix D:
y ≈ DW (1.27)
Note that this is a matrix representation of n linear equations. In most cases, the value
of n is much greater than d, and therefore, this is an over-determined system of linear
equations. In over-determined cases, there is usually no solution for W that exactly satisfies
this system. However, we can minimize the sum of squares of the errors to get as close to
this goal as possible:
J = (1/2) ‖DW − y‖^2    (1.28)
On solving the aforementioned optimization problem, it will be shown in Chapter 4 that
the solution W can be obtained as follows:

W = (D^T D)^{−1} D^T y
The basic idea is that the rate of change of the function in any direction is 0, or else one can
move in a direction with negative rate of change to further improve the objective function.
This condition is necessary, but not sufficient, for optimization. More details of relevant
optimality conditions are provided in Chapter 4.
The d-dimensional vector of partial derivatives is referred to as the gradient:

∇f(x_1, . . . , x_d) = [∂f(·)/∂x_1 . . . ∂f(·)/∂x_d]^T

The gradient is denoted by the symbol ∇, and putting it in front of a function refers to the
vector of partial derivatives with respect to the argument.
Here, f′(a) is the first derivative of f(w) at w = a, f′′(a) is the second derivative at w = a, and so
on. Note that f(w) could be an arbitrary function, such as sin(w) or exp(w), and the
expansion expresses it as a polynomial with an infinite number of terms. The case of exp(w)
is particularly simple, because the nth order derivative of exp(w) is itself. For example,
exp(w) can be expanded about w = 0 as follows:
exp(w) = exp(0) + exp(0)w + exp(0)w^2/2! + exp(0)w^3/3! + . . . + exp(0)w^n/n! + . . .    (1.30)
       = 1 + w + w^2/2! + w^3/3! + . . . + w^n/n! + . . .    (1.31)
In other words, the exponentiation function can be expressed as an infinite polynomial,
in which the trailing terms rapidly shrink in size because limn→∞ wn /n! = 0. For some
functions like sin(w) and exp(w), the Taylor expansion converges to the true function by
including an increasing number of terms (irrespective of the choice of w and a). For other
functions like 1/w or log(w), a converging expansion exists in restricted ranges of w at any
particular value of a. More importantly, the Taylor expansion almost always provides a very
good approximation of any smooth function near w = a, and the approximation is exact
at w = a. Furthermore, higher-order terms tend to vanish when |w − a| is small, because
(w − a)r /r! rapidly converges to 0 for increasing r. Therefore, one can often obtain good
quadratic approximations of a function near w = a by simply including the first three terms.
In practical settings like optimization, one is often looking to change the value w from
the current point w = a to a “nearby” point in order to improve the objective function
value. In such cases, using only up to the quadratic term of the Taylor expansion about
w = a provides an excellent simplification in the neighborhood of w = a. In gradient-descent
algorithms, one is often looking to move from the current point by a relatively small amount,
and therefore lower-order Taylor approximations can be used to guide the steps in order to
improve the polynomial approximation rather than the original function. It is often much
easier to optimize polynomials than arbitrarily complex functions.
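The quality of such a quadratic approximation is easy to observe for exp(w) (an illustrative sketch; the expansion point and the evaluation points are arbitrary):

```python
import math

def exp_quadratic(w, a):
    """First three Taylor terms of exp(w) about w = a; every derivative
    of exp is exp, so this equals exp(a) * (1 + (w-a) + (w-a)**2 / 2)."""
    return math.exp(a) * (1.0 + (w - a) + (w - a) ** 2 / 2.0)

a = 1.0
near = abs(exp_quadratic(1.1, a) - math.exp(1.1))  # accurate near w = a
far = abs(exp_quadratic(3.0, a) - math.exp(3.0))   # degrades far from a
```

The approximation is exact at w = a and deteriorates as |w − a| grows, which is why gradient-based methods restrict themselves to small steps around the current point.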
One can also generalize the Taylor expansion to multivariable functions F (w) with d-
dimensional arguments of the form w = [w1 . . . wd ]T . The Taylor expansion of the function
F (w) about w = a = [a1 . . . ad ]T can be written as follows:
F(w) = F(a) + Σ_{i=1}^d (w_i − a_i) [∂F(w)/∂w_i]_{w=a} + Σ_{i=1}^d Σ_{j=1}^d [(w_i − a_i)(w_j − a_j)/2!] [∂^2 F(w)/∂w_i ∂w_j]_{w=a}
       + Σ_{i=1}^d Σ_{j=1}^d Σ_{k=1}^d [(w_i − a_i)(w_j − a_j)(w_k − a_k)/3!] [∂^3 F(w)/∂w_i ∂w_j ∂w_k]_{w=a} + . . .
In the multivariable case, we have O(d2 ) second-order interaction terms, O(d3 ) third-order
interaction terms, and so on. One can see that the number of terms becomes unwieldy
very quickly. Luckily, we rarely need to go beyond second-order approximations in prac-
tice. Furthermore, the above expression can be rewritten using the gradients and matrices
compactly. For example, the second-order approximation can be written in vector form as
follows:
F(w) ≈ F(a) + [w − a]^T ∇F(a) + (1/2)[w − a]^T H(a)[w − a]

Here, ∇F(a) is the gradient at w = a, and H(a) = [h_ij] is the d × d matrix of all second-order
derivatives of the following form:
h_ij = [∂^2 F(w)/∂w_i ∂w_j]_{w=a}
A third-order expansion would require the use of a tensor, which is a generalization of the
notion of a matrix. The first- and second-order expansions will be used frequently in this
book for developing various types of optimization algorithms, such as the Newton method.
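As an illustrative check of the second-order expansion (with the conventional 1/2 factor in front of the Hessian term), consider the hypothetical function F(w) = exp(w_1 + 2w_2) about a = 0, whose gradient [1, 2]^T and Hessian [[1, 2], [2, 4]] at a are easy to compute by hand:

```python
import numpy as np

def F(w):
    return np.exp(w[0] + 2.0 * w[1])

a = np.zeros(2)
grad_a = np.array([1.0, 2.0])              # gradient of F at w = a (by hand)
H_a = np.array([[1.0, 2.0],
                [2.0, 4.0]])               # Hessian of F at w = a (by hand)

def taylor2(w):
    """Second-order Taylor approximation of F about w = a."""
    delta = w - a
    return F(a) + delta @ grad_a + 0.5 * delta @ H_a @ delta

w = np.array([0.05, 0.02])
err = abs(taylor2(w) - F(w))  # small, since w is close to a
```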
1.5. OPTIMIZATION FOR MACHINE LEARNING 33
Problem 1.5.1 (Euler Identity) The Taylor series is valid for complex functions as well.
Use the Taylor series to show the Euler identity eiθ = cos(θ) + i sin(θ).
∂J/∂w_i = 0,  ∀i ∈ {1 . . . d}    (1.33)
The partial derivatives can be shown to be the following (cf. Section 4.7 of Chapter 4):
[∂J/∂w_1 . . . ∂J/∂w_d]^T = D^T D W − D^T y    (1.34)
For certain types of convex objective functions like linear regression, setting the vector
of partial derivatives to the zero vector is both necessary and sufficient for minimization
(cf. Chapters 3 and 4). Therefore, we have D^T D W = D^T y, which yields the following:

W = (D^T D)^{−1} D^T y
Linear regression is a particularly simple problem because the optimal solution exists in
closed form. However, in most cases, one cannot solve the resulting optimality conditions in
such a form. Rather, the approach of gradient-descent is used. In gradient descent, we use
a computational algorithm of initializing the parameter set W randomly (or a heuristically
chosen point), and then change the parameter set in the direction of the negative derivative
of the objective function. In other words, we use the following updates repeatedly with
step-size α, which is also referred to as the learning rate:
[w_1 . . . w_d]^T ⇐ [w_1 . . . w_d]^T − α [∂J/∂w_1 . . . ∂J/∂w_d]^T = W − α[D^T D W − D^T y]    (1.36)
The d-dimensional vector of partial derivatives is referred to as the gradient vector, and it
defines an instantaneous direction of best rate of improvement of the objective function at
the current value of the parameter vector W . The gradient vector is denoted by ∇J(W ):
∇J(W) = [∂J/∂w_1 . . . ∂J/∂w_d]^T
Therefore, one can succinctly write gradient descent in the following form:
W ⇐ W − α∇J(W )
The size of the step is defined by the learning rate α. Note that the best rate of improvement
is only over a step of infinitesimal size, and does not hold true for larger steps of finite
size. Since the gradients change on making a step, one must be careful not to make steps
that are too large or else the effects might be unpredictable. These updates are repeatedly
executed to convergence, when further improvements become too small to be useful. Such
a situation will occur when the gradient vector contains near-zero entries. Therefore, this
computational approach will also (eventually) reach a solution approximately satisfying the
optimality conditions of Equation 1.33. As we will show in Chapter 4, the gradient descent
method (and many other optimization algorithms) can be explained with the use of the
Taylor expansion.
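The update of Equation 1.36 can be run directly and compared against the solution of the normal equations D^T DW = D^T y (a minimal sketch with synthetic data; the step-size and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((100, 3))      # n x d data matrix
W_true = np.array([1.0, -2.0, 0.5])
y = D @ W_true                          # consistent system, for illustration

# Closed-form solution of the normal equations D^T D W = D^T y.
W_closed = np.linalg.solve(D.T @ D, D.T @ y)

# Gradient descent: W <= W - alpha * (D^T D W - D^T y).
W = np.zeros(3)
alpha = 0.001
for _ in range(2000):
    W = W - alpha * (D.T @ (D @ W) - D.T @ y)
```

After enough iterations with a suitably small step-size, the gradient-descent iterate agrees with the closed-form solution, illustrating that both approaches satisfy the same optimality conditions.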
Using gradient descent for optimization is a tricky exercise, because one does not always
converge to an optimal solution for a variety of reasons. For example, a poorly chosen step-size α might result in unexpected numerical overflows. In other cases, one might terminate
at suboptimal solutions, when the objective function contains multiple minima relative to
specific local regions. Therefore, there is a significant body of work on designing optimization
algorithms (cf. Chapters 4, 5, and 6).
Figure 1.5: (a) Linear regression as a computational graph: the input (d variables) is combined via a dot product with the parameter vector W to produce a prediction, which is penalized with a squared loss; (b) a multi-layer computational graph with hidden nodes
More general computational graphs contain multiple layers of nodes (cf. Figure 1.5(b)). Each node of this graph can compute a function of its incoming nodes
and the edge parameters. The overall function is potentially extremely complex, and often
cannot be expressed compactly in closed form (like the simple relationship y = Σ_{i=1}^d w_i x_i
in a linear regression model). A model with many layers of nodes is referred to as a deep
learning model. Such models can learn complex, nonlinear relationships in the data.
How does one compute gradients with respect to edge parameters in computational
graphs? This is achieved with the use of a technique referred to as backpropagation, which
will be introduced in Chapter 11. The backpropagation algorithm yields exactly the same
gradient as is computed in traditional machine learning. For example, since Figure 1.5(a)
models linear regression, the backpropagation algorithm will yield exactly the same gradi-
ent as computed in the previous section. The main difference is that the backpropagation
algorithm will also be able to compute gradients in more complex cases like Figure 1.5(b).
Almost all the well-known machine learning models (based on gradient descent) can be
represented as relatively simple computational graphs. Therefore, computational graphs
are extremely powerful abstractions, as they include traditional machine learning as special
cases. We will discuss the power of such models and the associated algorithms in Chapter 11.
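For the linear regression graph of Figure 1.5(a), the chain rule can be applied node by node to see that backpropagation recovers the analytic gradient D^T(DW − y) (an illustrative sketch; the data and variable names are our own):

```python
import numpy as np

# Linear regression as a two-node computational graph:
#   p = D @ W  (prediction node),  J = 0.5 * ||p - y||^2  (loss node).
rng = np.random.default_rng(1)
D = rng.standard_normal((20, 4))
y = rng.standard_normal(20)
W = rng.standard_normal(4)

# Forward pass through the graph.
p = D @ W
J = 0.5 * np.sum((p - y) ** 2)

# Backward pass: propagate dJ/dp back to dJ/dW via the chain rule.
dJ_dp = p - y
dJ_dW = D.T @ dJ_dp

# Same gradient as the analytic expression D^T D W - D^T y.
analytic = D.T @ D @ W - D.T @ y
```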
1.6 Summary
Linear algebra and optimization are intimately related because many of the basic problems
in linear algebra, such as finding the “best” solution to an over-determined system of linear
equations, are solved using optimization techniques. Many optimization models in machine
learning can also be expressed as objective functions and constraints using matrices/vectors.
A useful technique that is used in many of these optimization problems is to decompose these
matrices into simpler matrices with specific algebraic/geometric properties. In particular,
the following two types of decomposition are commonly used in machine learning:
• Any square and invertible matrix A can be decomposed into a product of elementary
matrix operators. If the matrix A is not invertible, it can still be decomposed with a
relaxed definition of matrix operators, which are allowed to be non-invertible.
• Any square matrix A can be decomposed into a product of two rotation matrices and
one scaling (diagonal) matrix in the particular order of rotation, scaling, and rotation.
This idea is referred to as singular value decomposition (cf. Chapter 7).
1.8 Exercises
1. For any two vectors x and y, which are each of length a, show that (i) x − y is
orthogonal to x + y, and (ii) the dot product of x − 3y and x + 3y is negative.
(a) Suppose you had to compute the matrix product ABC. From an efficiency per-
spective, would it computationally make more sense to compute (AB)C or would
it make more sense to compute A(BC)?
(b) If you had to compute the matrix product CAB, would it make more sense to
compute (CA)B or C(AB)?
3. Show that if a matrix A satisfies A = −AT , then all the diagonal elements of the
matrix are 0.
4. Show that if we have a matrix satisfying A = −AT , then for any column vector x, we
have xT Ax = 0.
6. Show that the matrix product AB remains unchanged if we scale the ith column of
A and the ith row of B by respective factors that are inverses of each other.
7. Show that any matrix product AB can be expressed in the form A′ΔB′, where A′ is
a matrix in which the sum of the squares of the entries in each column is 1, B′ is a
matrix in which the sum of the squares of the entries in each row is 1, and Δ is an
appropriately chosen diagonal matrix with nonnegative entries on the diagonal.
8. Discuss how a permutation matrix can be converted to the identity matrix using at
most d elementary row operations of a single type. Use this fact to express A as the
product of at most d elementary matrix operators.
9. Suppose that you reorder all the columns of an invertible matrix A using some random
permutation, and you know A−1 for the original matrix. Show how you can (simply)
compute the inverse of the reordered matrix from A−1 without having to invert the
new matrix from scratch. Provide an argument in terms of elementary matrices.
(a) The order in which you apply two elementary row operations to a matrix does
not affect the final result.
(b) The order in which you apply an elementary row operation and an elementary
column operation does not affect the final result.
12. Discuss why some power of a permutation matrix is always the identity matrix. [Hint:
Think in terms of the finiteness of the number of permutations.]
13. Consider the matrix polynomial Σ_{i=0}^t a_i A^i. A straightforward evaluation of this polynomial will require O(t^2) matrix multiplications. Discuss how you can reduce the
number of multiplications to O(t) by rearranging the polynomial.
14. Let A = [a_ij] be a 2 × 2 matrix with a_{12} = 1, and 0s in all other entries. Show that
the square root A^{1/2} does not exist even after allowing complex-valued entries.
15. Parallelogram law: The parallelogram law states that the sum of the squares of the
sides of a parallelogram is equal to the sum of the squares of its diagonals. Write this
law as a vector identity in terms of vectors A and B of Figure 1.1. Now use vector
algebra to show why this vector identity must hold.
16. Write the first four terms of the Taylor expansion of the following univariate functions
about x = a: (i) loge (x); (ii) sin(x); (iii) 1/x; (iv) exp(x).
17. Use the multivariate Taylor expansion to provide a quadratic approximation of sin(x+
y) in the vicinity of [x, y] = [0, 0]. Confirm that this approximation loses its accuracy
with increasing distance from the origin.
18. Consider a case where a d × k matrix P is initialized by setting all values randomly to
either −1 or +1 with equal probability, and then dividing all entries by √d. Discuss
why the columns of P will be (roughly) mutually orthogonal for large values of d of the
order of 10^6. This trick is used frequently in machine learning for rapidly generating
the random projection of an n × d data matrix D as D′ = DP.
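The near-orthogonality claimed in this exercise is easy to observe empirically (an illustrative check with d = 100,000, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100_000, 5
# Entries are +1 or -1 with equal probability, scaled by 1/sqrt(d).
P = rng.choice([-1.0, 1.0], size=(d, k)) / np.sqrt(d)

G = P.T @ P  # k x k matrix of pairwise dot products between columns
# The diagonal entries are exactly 1, and the off-diagonal entries are
# tiny, so the columns are roughly mutually orthonormal for large d.
max_off_diag = np.abs(G - np.eye(k)).max()
```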
19. Consider the perturbed d × d matrix A′ = A + εB, where the value of ε is small. Show
the following useful approximation for approximating A′^{−1} from A^{−1}:

A′^{−1} ≈ A^{−1} − ε A^{−1} B A^{−1}
20. Suppose that you have a 5 × 5 matrix A, in which the rows/columns correspond to
people in a social network in the order John, Mary, Jack, Tim, and Robin. The entry
(i, j) corresponds to the number of times person i sent a message to person j. Define a
matrix P , so that P AP T contains the same information, but with the rows/columns
in the order Mary, Tim, John, Robin, and Jack.
21. Suppose that the vectors x, y, and x − y have lengths 2, 3, and 4, respectively. Find
the length of x + y using only vector algebra (and no Euclidean geometry).
23. Let A_1, A_2, . . . A_d be d × d matrices that are strictly upper triangular. Show that the
product of A_1, A_2, . . . A_d is the zero matrix.
24. Apollonius’s identity: Let ABC be a triangle, and AD be the median from A to
BC. Show the following using only vector algebra and no Euclidean geometry:
AB 2 + AC 2 = 2(AD2 + BD2 )
25. Sine law: Express the sine of the interior angle between a and b (i.e., the angle not
greater than 180 degrees) purely in terms of a · a, b · b, and a · b. You are allowed to use
sin2 (x) + cos2 (x) = 1. Consider a triangle, two sides of which are the vectors a and
b. The opposite angles to these vectors are A and B, respectively. Show the following
using only vector algebra and no Euclidean geometry:
‖a‖/sin(A) = ‖b‖/sin(B)
26. Trigonometry with vector algebra: Consider a unit vector x = [1, 0]T . The vector
v 1 is obtained by rotating x counter-clockwise by angle θ1 , and v 2 is obtained by
rotating x clockwise by θ_2. Use the rotation matrix to obtain the coordinates of unit
vectors v_1 and v_2, and then show the following well-known trigonometric identity by evaluating the dot product v_1 · v_2:

cos(θ_1 + θ_2) = cos(θ_1) cos(θ_2) − sin(θ_1) sin(θ_2)
27. Coordinate geometry with matrix algebra: Consider the two lines y = 3x + 4
and y = 5x + 2 in the 2-dimensional plane. Write the equations in matrix form for
appropriately chosen A and b:
A [x, y]^T = b
Find the intersection coordinates (x, y) of the two lines by inverting matrix A.
28. Use the matrix inversion lemma to invert a 10 × 10 matrix with 1s in each entry other
than the diagonal entries, which contain the value 2.
29. Solid geometry with vector algebra: Consider the origin-centered hyperplane in
3-dimensional space that is defined by the equation z = 2 x + 3 y. This equation has
infinitely many solutions, all of which lie on the plane. Find two solutions that are
not multiples of one another and denote them by the 3-dimensional column vectors
v 1 and v 2 , respectively. Let V = [v 1 , v 2 ] be a 3 × 2 matrix with columns v 1 and v 2 .
Geometrically describe the set of all vectors that are linear combinations of v 1 and v 2
with real coefficients c1 and c2 :
V = { V [c_1, c_2]^T : c_1, c_2 ∈ R }
Now consider the point [x, y, z]T = [2, 3, 1]T , which does not lie on the above hyper-
plane. We want to find a point b on the hyperplane for which b is as close to [2, 3, 1]T
as possible. How is the vector b − [2, 3, 1]T geometrically related to the hyperplane?
Use this fact to show the following condition on b:
V^T ( b − [2, 3, 1]^T ) = [0, 0]^T
Find a way to eliminate the 3-variable vector b from the above equation and replace
with the 2-variable vector c = [c1 , c2 ]T instead. Substitute numerical values for entries
in V and find c and b with a 2 × 2 matrix inversion.
30. Let A and B be two n×d matrices. One can partition them columnwise as A = [A1 , A2 ]
and B = [B1 , B2 ], where A1 and B1 are n × k matrices containing the first k columns
of A and B, respectively, in the same order. Let A2 and B2 contain the remaining
columns. Show that the matrix product AB T can be expressed as follows:
AB T = A1 B1T + A2 B2T
33. Tight sub-multiplicative case: Suppose that u and v are column vectors (of not
necessarily the same dimensionality). Show that the matrix u v^T created from the
outer product of u and v has Frobenius norm ‖u‖ · ‖v‖.
35. Let x and y be two orthogonal column vectors of dimensionality n. Let a and b be two
arbitrary d-dimensional column vectors. Show that the outer products x a^T and y b^T
are Frobenius orthogonal (see Exercise 34 for the definition of Frobenius orthogonality).
36. Suppose that a sequence of row and column operations is performed on a matrix. Show
that as long as the ordering among row operations and the ordering among column
operations is maintained, the way in which the row sequence and column sequence
are merged does not change the final result matrix. [Hint: Use operator matrices.]
38. Consider a set of vectors x1 . . . xn , which are known to be unit normalized. You do not
have access to the vectors, but you are given all pairwise squared Euclidean distances
in the n × n matrix Δ. Discuss why you can derive the n × n pairwise dot product
matrix by adding 1 to each entry of the matrix −(1/2)Δ.
39. We know that every matrix commutes with its inverse. We want to show a general-
ization of this result. Consider the polynomial functions f (A) and g(A) of the square
matrix A, so that f(A) is invertible. Show the following commutative property:

f(A)^{−1} g(A) = g(A) f(A)^{−1}
41. Let A be a rectangular matrix and f(·) be a polynomial function. Show that
A^T f(AA^T) = f(A^T A) A^T. Assuming invertibility of f(AA^T) and f(A^T A), show:

f(A^T A)^{−1} A^T = A^T f(AA^T)^{−1}
44. Compute the inverse of the following triangular matrix by expressing it as the sum of
two carefully chosen matrices (cf. Section 1.2.5):
A = [[1, 0, 0], [2, 1, 0], [1, 3, 1]]
46. Show that if A and B commute, the matrix polynomials f (A) and g(B) commute.
47. Show that if invertible matrices A and B commute, then A^k and B^s commute for all
integers k and s (positive, negative, or zero). Show the result of Exercise 46 for an extended definition of
“polynomials” with both positive and negative integer exponents included.
48. Let U = [uij ] be an upper-triangular d × d matrix. What are the diagonal entries of
the matrix polynomial f (U ) as scalar functions of the matrix entries uij ?
49. Inverses behave like matrix polynomials: The Cayley-Hamilton theorem states
that a finite-degree polynomial f (·) always exists for any matrix A satisfying f (A) = 0.
Use this fact to prove that the inverse of A is also a finite-degree polynomial.
50. Derive the inverse of a 3 × 3 row addition operator by inverting the sum of matrices.
51. For any non-invertible matrix A, show that the infinite summation Σ_{k=0}^∞ (I − A)^k
cannot possibly converge to a finite matrix. Give two examples to show that if A is
invertible, the summation might or might not converge.
52. The chapter shows that the product, A1 A2 . . . Ak , of invertible matrices is invertible.
Show the converse that if the product A1 A2 . . . Ak of square matrices is invertible,
each matrix Ai is invertible. [Hint: You need only the most basic results discussed in
this chapter for the proof.]
53. Show that if a d×d diagonal matrix Δ with distinct diagonal entries λ1 . . . λd commutes
with A, then A is diagonal.
54. What fraction of 2 × 2 binary matrices with 0-1 entries are invertible?
Chapter 2
2.1 Introduction
Machine learning algorithms work with data matrices, which can be viewed as collections
of row vectors or as collections of column vectors. For example, one can view the rows of an
n × d data matrix D as a set of n points in a space of dimensionality d, and one can view
the columns as features. These collections of row vectors and column vectors define vector
spaces. In this chapter, we will introduce the basic properties of vector spaces and their
connections to solving linear systems of equations. This problem is also a special case of the
problem of linear regression, which is one of the fundamental building blocks of machine
learning.
We will also study matrix multiplication as a linear operator with geometric interpreta-
tion. As discussed in Section 1.3.2 of Chapter 1, multiplying a matrix with a vector can be
used to implement rotation, scaling, and reflection operations on the vector. In fact, a multi-
plication of a vector with a matrix can be shown to be some combination of rotation, scaling,
and reflection being applied to the vector. Much of linear algebra draws inspirations from
Cartesian geometry. However, Cartesian geometry is often studied in only 2 or 3 dimensions.
On the other hand, linear algebra is naturally defined in spaces of any dimensionality.
This chapter is organized as follows. The remainder of this section introduces the con-
cept of linear transformations. The next section provides a basic understanding
of the geometric properties of linear transformations. The basics of linear algebra are intro-
duced in Section 2.3. The linear algebra of row spaces and column spaces is introduced in
Section 2.4. The problem of solving systems of linear equations is discussed in Section 2.5.
The notion of matrix rank is introduced in Section 2.6. Different methods for generating
orthogonal basis sets are introduced in Section 2.7. In Section 2.8, we show that solving
systems of linear equations is a special case of least-squares regression, which is one of the
fundamental building blocks of machine learning. The issue of ill-conditioned matrices and
ill-conditioned systems of equations is discussed in Section 2.9. Inner products are intro-
duced in Section 2.10. Complex vector spaces are introduced in Section 2.11. A summary
is given in Section 2.12.
All linear transforms are special cases of affine transforms, but not vice versa. There is
considerable confusion and ambiguity in the use of the terms “linear” and “affine” in math-
ematics. Many subfields of mathematics use the terms “linear” and “affine” interchangeably.
For example, the simplest univariate function f (x) = m · x + b, which is widely referred to as
“linear,” allows a non-zero translation b; this would make it an affine transform. However,
the notion of linear transform from the linear algebra perspective is much more restrictive,
and it does not even include the univariate function f (x) = m · x + b, unless the bias term
b is zero. The class of linear transforms (from the linear algebra perspective) can always
be geometrically expressed as a sequence of one or more rotations, reflections, and dila-
tions/contractions about the origin. The origin always maps to itself after these operations,
and therefore translation is not included. Unfortunately, the use of the word “linear” in
machine learning almost always allows translation (with copious use of bias terms), which
makes the terminology somewhat confusing. In this book, the words “linear transform” or
“linear operator” will be used in the context of linear algebra (where translation is not
allowed). Terms such as “linear function” will be used in the context of machine learning
(where translation is allowed).
Proof: The result f (ei ) = Aei holds, because Aei returns the ith column of A, which is
f (ei ). Furthermore, one can express f (x) for any vector x = [x1 . . . xd ]T as follows:
$$f(x) = f\left(\sum_{i=1}^{d} x_i e_i\right) = \sum_{i=1}^{d} x_i f(e_i) = \sum_{i=1}^{d} x_i [A e_i] = A\left[\sum_{i=1}^{d} x_i e_i\right] = A x$$
row of the n × d matrix D′ = DV is the transformed representation of the ith row of the
original matrix D. Data matrices in machine learning often contain multidimensional points
in their rows.
Matrix transformations can be broken up into geometrically interpretable sequences of
transformations by expressing a matrix A as a product of simpler matrices V1 . . . Vr (cf.
Section 1.3 of Chapter 1):

$$x A = x(V_1 V_2 \cdots V_r) = (((x V_1) V_2) \cdots) V_r$$

Note the grouping of the expressions using parentheses, so that the simple geometric operations
corresponding to the matrices V1 . . . Vr are sequentially applied to the corresponding vectors.
In the following, we discuss some important geometric operators. We start with orthogonal
operators.
Orthogonal Transformations
The orthogonal 2 × 2 matrices Vr and Vc that respectively rotate 2-dimensional row and
column vectors by θ degrees in the counter-clockwise direction are as follows:
$$V_r = \begin{bmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{bmatrix}, \qquad V_c = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix} \qquad (2.1)$$
If we have an n × 2 data matrix D, then the product DVr will rotate each row of D using Vr ,
whereas the product Vc DT will equivalently rotate each column of DT . One can also view
a data rotation DVr in terms of projection of the original data on a rotated axis system.
Counter-clockwise rotation of the data with a fixed axis system is the same as clockwise
rotation of the axis system with fixed data. In essence, the two columns of the transformation
matrix Vr represent the mutually orthogonal unit vectors of a new axis system that is rotated
clockwise by θ. These two new columns are shown on the left of Figure 2.2 for a counter-
clockwise rotation of 30◦ . The transformation returns the coordinates DVr of the data points
on these column vectors, because we are computing the dot product of each row of D with
the (unit length) columns of Vr . In this case, the columns of Vr (orthonormal directions
in new axis system) make counter-clockwise angles of −30◦ and 60◦ with the vector [1, 0].
Therefore, the corresponding matrix Vr is obtained by populating the columns with vectors
of the form [cos(θ), sin(θ)]T , where θ is the angle each new orthonormal axis direction makes
with the vector [1, 0]. This results in the following matrix Vr :
$$V_r = \begin{bmatrix} \cos(-30^\circ) & \cos(60^\circ) \\ \sin(-30^\circ) & \sin(60^\circ) \end{bmatrix} = \begin{bmatrix} \cos(30^\circ) & \sin(30^\circ) \\ -\sin(30^\circ) & \cos(30^\circ) \end{bmatrix} \qquad (2.2)$$
After performing the projection of each data point on the new axes, we can reorient the
figure so that the new axes are aligned with the original X- and Y -axes (as shown in the left-
to-right transition of Figure 2.2). It is easy to see that the final result is a counter-clockwise
rotation of the data points by 30◦ about the origin.
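The rotation above is easy to check numerically. The following sketch (using NumPy; the matrix and variable names are ours, and the example points are arbitrary) applies the matrix Vr of Equation 2.1 to the rows of a small data matrix and confirms that lengths are preserved:

```python
import numpy as np

# Counter-clockwise rotation of row vectors by theta (matrix V_r of Equation 2.1).
theta = np.radians(30)
V_r = np.array([[np.cos(theta), np.sin(theta)],
                [-np.sin(theta), np.cos(theta)]])

# An n x 2 data matrix D: each row is a 2-dimensional point.
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# DV_r rotates every row counter-clockwise by 30 degrees.
D_rot = D @ V_r

# The first row [1, 0] lands at [cos(30), sin(30)], as expected.
print(np.allclose(D_rot[0], [np.cos(theta), np.sin(theta)]))  # True
# Rotation preserves the length of every row.
print(np.allclose(np.linalg.norm(D, axis=1), np.linalg.norm(D_rot, axis=1)))  # True
```

Note that each coordinate in D_rot is the dot product of the corresponding row of D with a (unit-length) column of Vr, which is exactly the projection-on-a-rotated-axis-system interpretation described above.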
Figure 2.2: An example of counter-clockwise rotation by 30◦ via matrix multiplication.
The two columns of the transformation matrix are shown in the figure on the left.
One obtains the final result by repeatedly grouping pairs of adjacent orthogonal matrices
like $A_n A_n^T$, and replacing each such pair with the identity matrix. Since the transpose of the product
matrix A1 A2 . . . An is also its inverse, it follows that the product matrix is orthogonal.
What about the commutativity of the product of orthogonal matrices? At first glance, one
might mistakenly assume that the product of rotation matrices is commutative. After all,
it should not matter whether you first rotate an object 50◦ and then 30◦ or vice versa.
However, this type of 2-dimensional visualization of commutativity breaks down in higher
dimensions (or when reflection is combined with rotation even in two dimensions). In other
words, the product of orthogonal matrices is not necessarily commutative. The main issue
is that rotations in higher dimensions are associated with a vector referred to as the axis
of rotation. Orthogonal matrices that do not correspond to the same axis of rotation may
not be commutative; for example, if we successively rotate a sphere by 90◦ about two
mutually perpendicular axes, the point on the sphere closest to us will land at different
places depending on which rotation occurs first. In order to understand this point, consider
the following two 3 × 3 matrices R[1,0,0] and R[0,1,0] , which can perform counter-clockwise
rotations of angles α, β about [1, 0, 0] and [0, 1, 0], respectively:
$$R_{[1,0,0]} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\alpha) & \sin(\alpha) \\ 0 & -\sin(\alpha) & \cos(\alpha) \end{bmatrix}, \qquad R_{[0,1,0]} = \begin{bmatrix} \cos(\beta) & 0 & \sin(\beta) \\ 0 & 1 & 0 \\ -\sin(\beta) & 0 & \cos(\beta) \end{bmatrix} \qquad (2.4)$$
In order to understand the nature of orthogonal matrices in more than two dimensions, we
ask the reader to convince themselves of the following facts:
1. Post-multiplication of row vector [x, y, z] with matrix R[1,0,0] only rotates the vector
about [1, 0, 0] (without changing the first coordinate), whereas the matrix R[0,1,0]
rotates this vector about [0, 1, 0] (without changing the second coordinate).
2. The matrix R[1,0,0] R[0,1,0] is a matrix with orthonormal rows and columns (which can
be verified algebraically).
3. The product of R[1,0,0] and R[0,1,0] is sensitive to the order of multiplication. Therefore,
the order of rotations matters.
All 3-dimensional rotation matrices can be geometrically expressed as a single rotation,
albeit with an arbitrary axis of rotation.
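The three facts above can be verified numerically. The sketch below (using NumPy; the 90◦ angles are our choice for a vivid example) builds the two rotation matrices of Equation 2.4, checks that their product is orthogonal, and shows that the product depends on the order of multiplication:

```python
import numpy as np

def R_x(alpha):
    # Counter-clockwise rotation of row vectors about [1, 0, 0] (Equation 2.4).
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[1.0, 0, 0], [0, c, s], [0, -s, c]])

def R_y(beta):
    # Counter-clockwise rotation of row vectors about [0, 1, 0] (Equation 2.4).
    c, s = np.cos(beta), np.sin(beta)
    return np.array([[c, 0, s], [0, 1.0, 0], [-s, 0, c]])

A, B = R_x(np.pi / 2), R_y(np.pi / 2)

# The product of orthogonal matrices is orthogonal...
print(np.allclose((A @ B) @ (A @ B).T, np.eye(3)))  # True
# ...but 3-dimensional rotations about different axes do not commute.
print(np.allclose(A @ B, B @ A))  # False
```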
Problem 2.2.2 (Reflection of a Reflection) Verify algebraically that the square of the
Householder reflection matrix is the identity matrix.
Problem 2.2.3 Show that the elementary reflection matrix, which varies from the identity
matrix only in terms of flipping the sign of the ith diagonal element, is a special case of the
Householder reflection matrix.
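As a numerical sanity check for Problem 2.2.2 (not a substitute for the algebraic verification the problem asks for), the following sketch builds a Householder reflection matrix H = I − 2vv^T for an arbitrary unit vector v of our choosing and confirms that reflecting twice restores the identity:

```python
import numpy as np

# Householder reflection across the hyperplane orthogonal to a unit vector v.
v = np.array([3.0, 4.0, 12.0])
v = v / np.linalg.norm(v)            # normalize v to unit length
H = np.eye(3) - 2 * np.outer(v, v)   # H = I - 2 v v^T

# Reflecting twice returns every point to its original position: H^2 = I.
print(np.allclose(H @ H, np.eye(3)))  # True
# H is also symmetric and orthogonal.
print(np.allclose(H, H.T), np.allclose(H.T @ H, np.eye(3)))  # True True
```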
This equivalence for dot products naturally carries over to Euclidean distances and angles,
which are functions of dot products. This also means that orthogonal transformations pre-
serve the sum of squares of Euclidean distances of the data points (i.e., rows of a data
matrix D) about the origin, which is also the (squared) Frobenius norm or energy of the
n × d matrix D. When the n × d matrix D is multiplied with the d × d orthogonal matrix
V , the Frobenius norm of DV can be expressed in terms of the trace operator as follows:

$$\|DV\|_F^2 = \text{tr}\left[(DV)(DV)^T\right] = \text{tr}\left[D(VV^T)D^T\right] = \text{tr}\left[DD^T\right] = \|D\|_F^2$$
Transformations that preserve distances between pairs of points are said to be rigid. Rota-
tions and reflections not only preserve distances between points but also absolute distances
of points from the origin. Translations (which are not linear transforms) are also rigid be-
cause they preserve distances between pairs of transformed points. However, translations
usually do not preserve distances from the origin.
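The rigidity of orthogonal transformations can be illustrated with a short sketch (using NumPy; the random data matrix and the 30◦ rotation are our own choices). It checks both the preservation of the Frobenius norm and the preservation of all pairwise distances between rows:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((5, 2))      # an arbitrary 5 x 2 data matrix

theta = np.radians(30)
V = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # orthogonal rotation matrix

DV = D @ V

# Orthogonal transformations preserve the Frobenius norm (energy) of D...
print(np.isclose(np.linalg.norm(DV, 'fro'), np.linalg.norm(D, 'fro')))  # True

# ...and all pairwise Euclidean distances between the rows (rigidity).
def pairwise(M):
    # Matrix of Euclidean distances between all pairs of rows of M.
    return np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)

print(np.allclose(pairwise(D), pairwise(DV)))  # True
```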
with dilation/contraction. When the scaling factors across different dimensions are differ-
ent, the scaling is said to be anisotropic. An example of a 2 × 2 matrix Δ corresponding to
anisotropic scaling is as follows:
$$\Delta = \begin{bmatrix} 2 & 0 \\ 0 & 0.5 \end{bmatrix}$$
Multiplying a 2-dimensional vector with this matrix scales the first coordinate by 2 and the
second coordinate by 0.5. This transformation is not rigid because of non-unit scaling factors
in various directions. Furthermore, if we flip the sign of the first diagonal entry by changing
it from 2 to −2, then this transformation will combine positive dilation/contraction with
reflection via the following decomposition:
$$\begin{bmatrix} -2 & 0 \\ 0 & 0.5 \end{bmatrix} = \underbrace{\begin{bmatrix} 2 & 0 \\ 0 & 0.5 \end{bmatrix}}_{\text{Stretching}} \underbrace{\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix}}_{\text{Reflection}}$$
Thus, a reflection matrix is a special case of a scaling (diagonal) matrix.
Figure 2.3: An example of anisotropic scaling along two mutually orthogonal directions.
The panels show the sequence: rotate by 30◦ (multiply with V ), scale with the diagonal
matrix with entries 2 and 0.5, and rotate by −30◦
Figure 2.4: The transformation of Figure 2.3 as shown in terms of scaling along two directions
the other three points by stacking them up into a 3 × 2 matrix denoted by matrix D. The
resulting transformed matrix D′ = DA is as follows:

$$D' = DA = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 1.625 & -0.650 \\ -0.650 & 0.875 \end{bmatrix} = \begin{bmatrix} 1.625 & -0.650 \\ -0.650 & 0.875 \\ 0.975 & 0.225 \end{bmatrix}$$
It is also helpful to understand the nature of the distortion pictorially. An example of the
sequence of transformations in terms of V , Δ, and V T (for a rectangular scatterplot) is shown
in Figure 2.3. The corresponding data set D′ = D(V ΔV T ) and the scaling are shown in a
concise way in Figure 2.4. One can also generalize this intuition to higher dimensions.
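The decomposition in Figure 2.3 can be reconstructed numerically. The following sketch (using NumPy) builds A = V ΔV T from a 30◦ rotation V and the diagonal matrix Δ = diag(2, 0.5), and recovers the transformed matrix D′ of the worked example above (the variable names are ours):

```python
import numpy as np

theta = np.radians(30)
V = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # rotation by 30 degrees
Delta = np.diag([2.0, 0.5])                       # anisotropic scaling factors

# Rotate, scale along the axes, and rotate back.
A = V @ Delta @ V.T

D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Matches the transformed matrix D' in the worked example (to 3 decimal places).
print(np.allclose(D @ A, [[1.625, -0.650],
                          [-0.650, 0.875],
                          [0.975, 0.225]], atol=1e-3))  # True
```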
Not all transformations can be expressed in the form V ΔV T , as shown above. However,
all is not lost. A beautiful result, referred to as singular value decomposition (cf. Chapter 7),
states that any square matrix A can be expressed in the form A = U ΔV T , where U
and V are both orthogonal matrices (which might be different) and Δ is a nonnegative
scaling matrix. Therefore, all linear transformations defined by matrix multiplication can be
expressed as a sequence of rotations/reflections, together with a single anisotropic scaling.
This result can even be extended to rectangular matrices.
The zero vector, denoted by 0, is included in all vector spaces, and always satisfies the
additive identity x+0 = x. A singleton set containing the zero vector can also be considered
a vector space (albeit a rather simple one), because it satisfies both the above properties.
Consider the subset of vectors from R3 , such that the head of each vector lies on a
2-dimensional hyperplane passing through the origin (and the tail is the origin). This set of
vectors is a vector space because adding or scaling vectors on an origin-centered hyperplane
leads to other vectors on the same hyperplane. Furthermore, all multiples of an arbitrary
vector like [2, 1, 3]T (i.e., all points on an infinite line in R3 ) also form a vector space,
which is also a special case of a hyperplane. In general, vector spaces that are subsets of Rn
correspond to vectors sitting on an origin-centered hyperplane of dimensionality at most
n. Therefore, vector spaces in Rn can be nicely mapped to our geometric understanding
of lower-dimensional hyperplanes. The origin-centered nature of these hyperplanes is im-
portant; the set of vectors with tails at the origin and heads on a hyperplane that is not
origin-centered does not define a vector space, because this set of vectors is not closed under
scaling and addition. Another example of a set of vectors that is not a vector space is the
set of all vectors with only non-negative components in R3 , because it is not closed under
multiplication with negative scalars. Other than the zero vector space, all vector spaces
contain an infinite set of vectors.
Finally, we observe that a fixed linear transformation of each element of a vector space
results in another vector space, because of the way in which linear transformations preserve
the properties of addition and scalar multiplication (cf. Definition 2.1.1). For example,
multiplying all vectors on an origin-centered hyperplane with the same matrix results in
a set of vectors sitting on another origin-centered hyperplane after undergoing a set of
geometrically interpretable linear transformations (like origin-centered rotation and scaling).
Definition 2.3.2 seems somewhat restrictive at first glance, because we have required all
vector spaces to be subsets of Rn . The modern notion of a vector space is more general than
vectors from Rn , because it allows all kinds of abstract objects to be considered “vectors”
and infinite sets of such objects to be considered vector spaces (along with appropriately
defined vector addition and scalar multiplication operations on these objects). For example,
the space of all upper-triangular matrices of a specific size is a vector space, although the
addition operation now corresponds to element-wise addition of the matrices. Similarly, the
space of all polynomial functions of a specific maximum degree is a vector space, and the
addition operation corresponds to addition of constituent monomial coefficients. In each
case, the nature of the addition and multiplication operations, and the definition of the
zero vector (such as the zero matrix or zero polynomial) depends on the type of object
being considered. It is also possible for the components of vectors and the scalar c in
Definition 2.3.2 to be drawn from the complex domain (or other sets of values1 satisfying a
set of properties known as the field axioms). Most of this book works with real-valued vector
spaces, although we will occasionally consider vectors drawn from Cn , where C corresponds
to the field of complex numbers (cf. Section 2.11).
The assumption that vector spaces are subsets of Rn is not as restrictive as one might
think, because we can indirectly represent most vector spaces over a real field by mapping
them to Rn . For example, the vector space of m × m upper-triangular matrices can be
represented indirectly by populating a vector from Rm(m+1)/2 with matrix entries. Simi-
larly, polynomials with a pre-defined maximum degree can be represented as finite-length
vectors containing the coefficients of various monomials that constitute the polynomial. It
can be formally shown that large classes of vector spaces over the real field can be indi-
rectly represented using Rn , via the process of coordinate representation (cf. Section 2.3.1).
Furthermore, staying in Rn has the distinct advantage of being able to work with easily
understandable operations over matrices and vectors.
A subset of a vector space, which is itself a vector space, is referred to as a subspace.
The set notation “⊆” is used to denote a subspace, as in S ⊆ V. The notation “⊂” denotes
a proper subspace of the parent space. The requirement that subspaces are vector spaces
ensures that subspaces of Rn contain vectors residing on hyperplanes in n-dimensional space
1 The field axioms are the properties of associativity, commutativity, distributivity, identity, and inverses.
For example, real numbers, complex numbers, and rational numbers form a field. However, integers do not
form a field. Refer to http://mathworld.wolfram.com/Field.html. Therefore, one can define vectors over
the set of real numbers, complex numbers, or rational numbers. Although one can define vectors more
restrictively over the set of integers, such vectors will not satisfy some fundamental rules of linear algebra
required for them to be considered a vector space.
passing through the origin. When the hyperplane defining the subspace has dimensionality
strictly less than n, the corresponding subspace is a proper subspace of Rn because non-
hyperplane vectors in Rn are not members of the subspace. For example, the set of all scalar
multiples of the vector [2, 1, 5]T defines a proper subspace of R3 , and it contains all vectors
lying on a 1-dimensional hyperplane passing through the origin. However, vectors that do
not lie on this 1-dimensional hyperplane are not members of the subspace. Similarly, the
vectors [1, 0, 0]T and [1, 2, 1]T can be used to define a 2-dimensional hyperplane V1 , each
point on which is a linear combination of this pair of vectors. The set of vectors sitting on
this hyperplane also define a proper subspace of R3 . Both the vectors [5, 4, 2]T and [0, 2, 1]T
lie in this subspace because of the following:
$$\begin{bmatrix} 5 \\ 4 \\ 2 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 2\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
All scalar multiples of [5, 4, 2]T also define a vector space V2 that is a proper subspace of
V1 , because the line defining V2 sits on the hyperplane corresponding to V1 . In other words,
we have V2 ⊂ V1 ⊂ R3 . For the vector space R3 , examples of proper subspaces could be
the set of vectors sitting on (i) any 2-dimensional plane passing through the origin, (ii)
any 1-dimensional line passing through the origin, and (iii) the zero vector. Furthermore,
subspace relationships might exist among the lower-dimensional hyperplanes when one of
them contains the other (e.g., a 1-dimensional line sitting on a plane in R3 ).
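Membership in a subspace like V1 can be tested numerically: a vector lies in the span of a set of vectors exactly when the least-squares residual of expressing it as a linear combination of them is zero. The sketch below (using NumPy; the helper name in_span is ours) checks the two vectors from the example above, plus one vector off the hyperplane:

```python
import numpy as np

# Columns of A are the vectors [1,0,0] and [1,2,1] spanning the hyperplane V1.
A = np.array([[1.0, 1.0],
              [0.0, 2.0],
              [0.0, 1.0]])

def in_span(A, v, tol=1e-10):
    # v lies in the column space of A iff the best-fit residual A x - v is zero.
    x, _, _, _ = np.linalg.lstsq(A, v, rcond=None)
    return bool(np.linalg.norm(A @ x - v) < tol)

print(in_span(A, np.array([5.0, 4.0, 2.0])))  # True:  3*[1,0,0] + 2*[1,2,1]
print(in_span(A, np.array([0.0, 2.0, 1.0])))  # True:  [1,2,1] - [1,0,0]
print(in_span(A, np.array([0.0, 0.0, 1.0])))  # False: off the hyperplane
```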
A set of vectors {a1 . . . ad } is linearly dependent if a non-zero linear combination of these
vectors sums to zero:
Definition 2.3.4 (Linear Dependence) A set of non-zero vectors a1 . . . ad is linearly
dependent, if a set of d scalars x1 . . . xd can be found so that at least some of the scalars are
non-zero, and the following condition is satisfied:
$$\sum_{i=1}^{d} x_i a_i = 0$$
We emphasize that the scalars x1 . . . xd cannot all be zero. Such a coefficient set is said
to be non-trivial. When no such set of non-zero scalars can be found, the resulting set of
vectors is said to be linearly independent. It is relatively easy to show that a set of vectors
a1 . . . ad that are mutually orthogonal must be linearly independent. If these d vectors are
linearly dependent, we must have non-trivial coefficients x1 . . . xd , such that $\sum_{i=1}^{d} x_i a_i = 0$.
However, taking the dot product of the linear dependence condition with each ai , and using
the fact that ai · aj = 0 for i ≠ j, yields each xi = 0, which is a trivial coefficient set and
therefore a contradiction.
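A quick numerical illustration of this fact (using NumPy; the three mutually orthogonal vectors are arbitrary choices of ours): a matrix whose columns are non-zero mutually orthogonal vectors always has full rank, which is equivalent to linear independence of its columns.

```python
import numpy as np

# Three mutually orthogonal, non-zero vectors in R^3.
a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, -1.0, 0.0])
a3 = np.array([0.0, 0.0, 2.0])
A = np.column_stack([a1, a2, a3])

# Pairwise orthogonality...
print(np.isclose(a1 @ a2, 0), np.isclose(a1 @ a3, 0), np.isclose(a2 @ a3, 0))
# ...implies linear independence: the only solution of Ax = 0 is x = 0,
# which is equivalent to A having full rank.
print(np.linalg.matrix_rank(A) == 3)  # True
```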
Consider the earlier example of three linearly dependent vectors [0, 2, 1]T , [1, 2, 1]T , and
[1, 0, 0]T , which lie on a 2-dimensional hyperplane passing through the origin. These vectors
satisfy the following linear dependence condition:
$$\begin{bmatrix} 0 \\ 2 \\ 1 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} = 0$$
Therefore, the coefficients x1 , x2 , and x3 of the linear dependence condition are +1, −1,
and +1 in this case. The key point is that one only needs two of these three vectors to define
the hyperplane on which all the vectors lie. This minimal set of vectors is also referred to
as a basis, and is defined as follows:
Definition 2.3.5 (Basis) A basis (or basis set) of a vector space V ⊆ Rn is a minimal
set of vectors B = {a1 . . . ad } ⊆ V, so that all vectors in V can be expressed as linear
combinations of a1 . . . ad . In other words, for any vector v ∈ V, we can find scalars x1 . . . xd
so that $v = \sum_{i=1}^{d} x_i a_i$, and one cannot do this for any proper subset of B.
It is helpful to think of a basis geometrically as a coordinate system of directions or axes,
and the scalars x1 . . . xd as coordinates in order to express vectors. For example, the two
commonly used axis directions in the classical 2-dimensional plane of Cartesian geometry are
[1, 0]T and [0, 1]T , although we could always rotate this axis system by θ to get a new set of
axes {[cos(θ), sin(θ)]T , [−sin(θ), cos(θ)]T } and corresponding coordinates. Furthermore, the
representative directions need not even be mutually orthogonal. For example, every point
in R2 can be expressed as a linear combination of [1, 1]T and [1, 2]T . Clearly, the basis set
is not unique, just as coordinate systems are not unique in classical Cartesian geometry.
Note that the vectors in a basis must be linearly independent. This is because if the
vectors in the basis B are linearly dependent, we can drop any vector occurring in the linear
dependence condition from B without losing the ability to express all vectors in V in terms
of the remaining vectors. Furthermore, if the linear combination of a set of vectors B cannot
express a particular vector in v ∈ V, one can add v to the set B without disturbing its linear
independence. This process can be continued until all vectors in V are expressed by a linear
combination of the set B. Therefore, an alternative definition of the basis is as follows:
Definition 2.3.6 (Basis: Alternative Definition) A basis (or basis set) of a vector
space V is a maximal set of linearly independent vectors in it.
Both definitions of the basis are equivalent and can be derived from one another. An inter-
esting artifact is that the vector space containing only the zero vector has an empty basis.
A vector space containing non-zero vectors always has an infinite number of possible basis
sets. For example, if we select any three linearly independent vectors in R3 (or even scale
the vectors in a basis set), the resulting set of vectors is a valid basis of R3 . An important
result, referred to as the dimension theorem of vector spaces, states that the size of every
basis set of a vector space must be the same:
Theorem 2.3.1 (Dimension Theorem for Vector Spaces) The number of members
in every possible basis set of a vector space V is always the same. This value is referred
to as the dimensionality of the vector space.
Proof: Suppose that we have two basis sets a1 . . . ad and b1 . . . bm so that d < m. In such
a case, we will prove that a subset of the vectors in b1 . . . bm must be linearly dependent,
which contradicts the assumption that b1 . . . bm is a basis (and hence linearly independent).
Each vector bi is a linear combination of the basis vectors a1 . . . ad :
$$b_i = \sum_{j=1}^{d} \beta_{ij} a_j \quad \forall i \in \{1 \ldots m\} \qquad (2.5)$$
A key point is that we have m > d linear dependence conditions (see Equation 2.5), and
we can eliminate each of the d vectors a1 . . . ad at the cost of reducing one equation. For
example, we can select a linear dependence condition in which a1 occurs with a non-zero
coefficient, and express a1 as a linear combination of a2 . . . ad and at least one of b1 . . . bm .
This linear expression for a1 is substituted in all the other linear dependence conditions. The
linear dependence condition that was originally selected in order to create the expression
for a1 is dropped. This process reduces the number of linear dependence conditions and
2.3. VECTOR SPACES AND THEIR GEOMETRY 55
the number of vectors from the basis set {a1 . . . ad } by 1. One can repeat this process with
each of a2 . . . ad , and in each case, the corresponding vector is eliminated while reducing
the number of linear dependence conditions by 1. Therefore, after all the vectors a1 . . . ad
have been eliminated, we will be left with (m − d) > 0 linear conditions between b1 . . . bm .
This implies that b1 . . . bm are linearly dependent.
The notion of subspace dimensionality is identical to that of geometric dimensionality of
hyperplanes in Rn . For example, any set of n linearly independent directions in Rn can be
used to create a basis (or coordinate system) in Rn . For subspaces corresponding to lower-
dimensional hyperplanes, we only need as many linearly independent vectors sitting on the
hyperplane as are needed to uniquely define it. This value is the same as the geometric
dimensionality of the hyperplane. This leads to the following result:
Lemma 2.3.1 (Matrix Invertibility and Linear Independence) An n × n square
matrix A has linearly independent columns/rows if and only if it is invertible.
Proof: An n × n square matrix with linearly independent columns defines a basis for all
vectors in Rn in its columns. Therefore, we can find n coefficient vectors x1 , . . . , xn ∈ Rn so
that Axi = ei for each i, where ei is the ith column of the identity matrix. These conditions
can be written in matrix form as A[x1 . . . xn ] = [e1 . . . en ] = In . Since A and [x1 . . . xn ]
multiply to yield the identity matrix, we have A−1 = [x1 . . . xn ]. Conversely, if the matrix
A is invertible, multiplication of Ax = 0 with A−1 shows that x = 0 is the only solution
(which implies linear independence). One can show similar results with the rows.
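Lemma 2.3.1 is easy to illustrate numerically. The sketch below (using NumPy) stacks the linearly dependent triple [1, 0, 0], [1, 2, 1], [0, 2, 1] from the earlier example into the columns of a matrix, which is therefore singular; replacing the third column with a vector off the hyperplane (our own choice) makes the matrix full-rank and invertible:

```python
import numpy as np

# Columns are the dependent triple [1,0,0], [1,2,1], [0,2,1]: col3 = col2 - col1.
A_dep = np.column_stack([[1.0, 0.0, 0.0], [1.0, 2.0, 1.0], [0.0, 2.0, 1.0]])
# Replacing the third column breaks the dependence.
A_ind = np.column_stack([[1.0, 0.0, 0.0], [1.0, 2.0, 1.0], [0.0, 2.0, 0.0]])

print(np.linalg.matrix_rank(A_dep))  # 2: dependent columns, not invertible
print(np.linalg.matrix_rank(A_ind))  # 3: independent columns, invertible
print(np.allclose(A_ind @ np.linalg.inv(A_ind), np.eye(3)))  # True
```

Attempting np.linalg.inv(A_dep) would raise a LinAlgError, consistent with the lemma.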
When vector spaces contain abstract objects like degree-p polynomials of the form $\sum_{i=0}^{p} c_i t^i$,
the basis contains simple instantiations of these objects like $\{t^0, t^1, \ldots, t^p\}$. Choosing a basis
like this allows us to use the coefficients [c0 . . . cp ]T of each polynomial as vectors in the new
vector space Rp+1 . Carefully chosen basis sets allow us to automatically map all d-dimensional
vector spaces over real fields to Rd for finite values of d. For example, V might be a d-
dimensional subspace of Rn (for d < n). However, once we select d basis vectors, the set of
d-dimensional combination coefficients for these vectors themselves create the “nicer” vector
space Rd . Therefore, we have a one-to-one isomorphic mapping between any d-dimensional
vector space V and Rd .
we have $\sum_{i=1}^{d} (x_i - y_i) a_i = v - v = 0$. This implies that the vectors a1 . . . ad are linearly
dependent. This results in the contradiction from the statement of the lemma that B is a
basis (unless the coordinate sets x1 . . . xd and y1 . . . yd are identical).
How can one find these unique coordinates? When a1 . . . ad correspond to an orthonormal
basis of V, the coordinates are simply the dot products of v with these vectors. By taking
the dot product of both sides of $v = \sum_{i=1}^{d} x_i a_i$ with each aj and using orthonormality, it
is easy to show that xj = v · aj . For example, if $a_1 = [1, 1, 1]^T/\sqrt{3}$ and $a_2 = [1, -1, 0]^T/\sqrt{2}$
constitute the orthonormal basis set of the vector space V containing all points in the plane of
these vectors, the vector [2, 0, 1]T ∈ V can be shown to have coordinates $[\sqrt{3}, \sqrt{2}]^T$ (using
the dot product method). Even though the basis vectors are drawn from R3 , the vector
space V is a 2-dimensional plane, and it will have only two coordinates.
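The dot product method for this example can be sketched numerically as follows (using NumPy; the variable names are ours):

```python
import numpy as np

# Orthonormal basis of a 2-dimensional subspace V of R^3 (from the text).
a1 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
a2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)

v = np.array([2.0, 0.0, 1.0])   # a vector lying in the plane of a1 and a2

# For an orthonormal basis, the coordinates are just the dot products.
x = np.array([v @ a1, v @ a2])
print(np.allclose(x, [np.sqrt(3), np.sqrt(2)]))   # True
# Sanity check: the two coordinates reconstruct the 3-component vector v.
print(np.allclose(x[0] * a1 + x[1] * a2, v))      # True
```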
It is much trickier to find the coordinates of a vector v in a non-orthogonal basis system.
The general problem is that of solving the system of equations Ax = v for x = [x1 . . . xd ]T ,
where the n-dimensional columns of the n × d matrix A contain the (linearly independent)
basis vectors. The problem boils down to finding a solution to the system of equations
Ax = v, where A = [a1 . . . ad ] contains the basis vectors of the d-dimensional vector space
V ⊆ Rn . Note that the basis vectors are themselves represented using n components like the
vectors of Rn , even though the vector space V is a d-dimensional subspace of Rn and the
coordinate vector x lies in Rd . If d = n, and the matrix A is square, the solution is simply
x = A−1 v. However, when A is not square, one may not be able to find valid coordinates, if
v does not lie in V ⊂ Rn . This occurs when v does not geometrically lie on the hyperplane
HA defined by all possible linear combinations of the columns of A. However, one can find
the best fit coordinates x by observing that the line joining the closest linear combination
Ax of the columns of A to v must be orthogonal to the hyperplane HA , and it is therefore
also orthogonal to every column of A. The condition that (Ax − v) is orthogonal to every
column of A can be expressed as the normal equation AT (Ax − v) = 0. This results in the
following:

$$x = (A^T A)^{-1} A^T v \qquad (2.6)$$
The best-fit solution includes the exact solution when it is possible. The matrix (AT A)−1 AT
is referred to as the left-inverse of the matrix A with linearly independent columns and we
will encounter it repeatedly in this book via different derivations (see Section 2.8).
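A minimal sketch of coordinate computation with the left-inverse (using NumPy; the basis vectors and test vector are our own choices, and v is picked to lie in the subspace so the best fit is exact):

```python
import numpy as np

# Basis vectors of a 2-dimensional subspace of R^3, stacked as columns of A.
A = np.array([[1.0, 1.0],
              [1.0, -1.0],
              [1.0, 0.0]])

v = np.array([2.0, 0.0, 1.0])   # here v = a1 + a2, so it lies in the subspace

# Best-fit coordinates via the left-inverse (A^T A)^{-1} A^T of Equation 2.6.
left_inverse = np.linalg.inv(A.T @ A) @ A.T
x = left_inverse @ v
print(np.allclose(x, [1.0, 1.0]))          # True: the exact coordinates

# The residual Ax - v is orthogonal to every column of A (normal equation).
print(np.allclose(A.T @ (A @ x - v), 0))   # True
```

In practice one would solve the normal equation with a least-squares routine rather than by explicitly inverting AT A, but the explicit form mirrors Equation 2.6.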
In order to illustrate the nature of coordinate transformations, we will show the coordi-
nates of the same vector [10, 15]T in three different basis sets including the
standard basis set.
The three basis sets correspond to the standard basis set, a basis set $\{[3/5, 4/5]^T, [-4/5, 3/5]^T\}$
obtained by rotating each vector in the standard basis counter-clockwise by sin−1 (4/5),
and a non-orthogonal basis {[1, 1]T , [1, 2]T } in which the vectors are not even unit nor-
malized. Each of these basis sets defines a coordinate system for representing R2 , and the
non-orthogonal coordinate system seems very different from the conventional system of
Cartesian coordinates. The corresponding basis directions are shown in Figure 2.5(a), (b),
and (c), respectively. For the case of the standard basis in Figure 2.5(a), the coordinates of
the vector [10, 15]T are the same as its vector components (i.e., 10 and 15). However, this is
not the case in any other basis. The coordinates of the vector [10, 15]T in the orthonormal
(rotated) basis of Figure 2.5(b) are [18, 1]T , and the coordinates in the non-orthogonal basis
of Figure 2.5(c) are [5, 5]T . The explanation for these values of the coordinates arises from
the decomposition of [10, 15]T in terms of various basis sets:
Figure 2.5: Examples of different bases in R2 with corresponding coordinates of the same
vector [10, 15]T . A basis set may be non-orthogonal and unnormalized, as in (c)
$$\begin{bmatrix} 10 \\ 15 \end{bmatrix} = \underbrace{10\begin{bmatrix} 1 \\ 0 \end{bmatrix} + 15\begin{bmatrix} 0 \\ 1 \end{bmatrix}}_{\text{Standard basis}} = \underbrace{18\begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix} + 1\begin{bmatrix} -4/5 \\ 3/5 \end{bmatrix}}_{\text{Basis of Figure 2.5(b)}} = \underbrace{5\begin{bmatrix} 1 \\ 1 \end{bmatrix} + 5\begin{bmatrix} 1 \\ 2 \end{bmatrix}}_{\text{Basis of Figure 2.5(c)}}$$
Although the notion of a non-orthogonal coordinate system does exist in analytical geome-
try, it is rarely used in practice because of loss of visual interpretability of the coordinates.
However, such non-orthogonal basis systems are very natural to linear algebra, where some
loss of geometric intuition is often compensated by algebraic simplicity.
xb = Pa→b xa
For example, how might one transform the coordinates in the orthogonal basis set of Fig-
ure 2.5(b) into the non-orthogonal system of Figure 2.5(c)? Here, the key point is to observe
that the coordinates xa and xb are representations of the same vector, and they would there-
fore have the same coordinates in the standard basis. First, we use the basis sets to construct
two n × n matrices A = [a1 . . . an ] and B = [b1 . . . bn ]. Since the coordinates x of xa and xb
must be identical in the standard basis, we have the following:
Axa = Bxb = x
We have already established (cf. Lemma 2.3.1) that square matrices defined by linearly
independent vectors are invertible. Therefore, multiplying both sides with B −1 , we obtain
the following:
58 CHAPTER 2. LINEAR TRANSFORMATIONS AND LINEAR SYSTEMS
$$x_b = \underbrace{[B^{-1} A]}_{P_{a\to b}} x_a$$
In order to verify that this matrix does indeed perform the intended transformation, let
us compute the coordinate transformation matrix from the system in Figure 2.5(b) to
the system in Figure 2.5(c). Therefore, our matrices A and B in these two cases can be
constructed using the basis vectors in Figure 2.5 as follows:
$$A = \begin{bmatrix} 3/5 & -4/5 \\ 4/5 & 3/5 \end{bmatrix}, \qquad B = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}, \qquad B^{-1} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}$$
In order to check whether this coordinate transformation works correctly, we want to check
whether the coordinate [18, 1]T in Figure 2.5(b) gets transformed to [5, 5]T in Figure 2.5(c):
$$P_{a\to b}\begin{bmatrix} 18 \\ 1 \end{bmatrix} = \begin{bmatrix} 2/5 & -11/5 \\ 1/5 & 7/5 \end{bmatrix}\begin{bmatrix} 18 \\ 1 \end{bmatrix} = \begin{bmatrix} 5 \\ 5 \end{bmatrix}$$
Therefore, the transformation matrix correctly converts coordinates from one system to
another. The main computational work involved in the transformation is in inverting the
matrix B. One observation is that when B is an orthogonal matrix, the transformation
matrix simplifies to B T A. Furthermore, when the matrix A (i.e., source representation)
corresponds to the standard basis, the transformation matrix is B T . Therefore, working with
orthonormal bases simplifies computations, which is why the identification of orthonormal
basis sets is an important problem in its own right (cf. Section 2.7.1).
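The worked transformation above can be reproduced with a few lines (using NumPy; the matrix names follow the text):

```python
import numpy as np

# Basis matrices for the systems of Figures 2.5(b) and 2.5(c), as columns.
A = np.array([[3/5, -4/5],
              [4/5, 3/5]])     # orthonormal (rotated) basis
B = np.array([[1.0, 1.0],
              [1.0, 2.0]])     # non-orthogonal basis

# Coordinate transformation matrix P_{a->b} = B^{-1} A.
P = np.linalg.inv(B) @ A

x_a = np.array([18.0, 1.0])    # coordinates in the basis of Figure 2.5(b)
x_b = P @ x_a
print(np.allclose(x_b, [5.0, 5.0]))   # True: coordinates in Figure 2.5(c)

# Both coordinate vectors represent the same point in the standard basis.
print(np.allclose(A @ x_a, B @ x_b))  # True
```

Note that since A here happens to be orthogonal, the reverse transformation from the system of Figure 2.5(c) back to that of Figure 2.5(b) would simplify to A^T B, with no matrix inversion required.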
It is also possible to perform coordinate transformations between basis sets that define a
particular d-dimensional subspace V of Rn , rather than all of Rn . Let a1 . . . ad and b1 . . . bd
be two basis sets for this d-dimensional subspace V, such that each of these basis vectors
is expressed in terms of the standard basis of Rn . Furthermore, let xa and xb be two d-
dimensional coordinates of the same vector v ∈ V in terms of the two basis sets. We want
to transform the known coordinates xa to the unknown coordinates xb in the second basis
set (and find a best fit if the two basis sets represent different vector spaces). As in the
previous case, let A = [a1 . . . ad ] and B = [b1 . . . bd ] be two n × d matrices whose columns
contain each of these two sets of basis vectors. Since xa and xb are coordinates of the same
vector, and have the same coordinates in the standard basis of Rn , we have Axa = Bxb .
However, since the matrix B is not square, it cannot be inverted in order to solve for xb
in terms of xa , and we sometimes might have to be content with a best fit. We observe
that this best-fit problem is similar to what was derived in Equation 2.6 with the use of the
normal equation, and Axa − Bxb needs to be orthogonal to every column of B in order to
be a best-fit solution. This implies that B T (Axa − Bxb ) = 0, and we have the following:
x_b = \underbrace{(B^T B)^{-1} B^T A}_{P_{a \to b}} \; x_a
When B is square and invertible, it is easy to show that this solution simplifies to B −1 Axa .
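A small NumPy sketch of this best-fit transformation. The two basis sets below are illustrative choices (not from the text) that span the same plane in R3, so the normal-equation fit is exact:

```python
import numpy as np

# Two basis sets (as columns) for the same 2-dimensional subspace of R^3.
# These particular vectors are illustrative: b1 = a1 + a2, b2 = a1 - a2.
A = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0]])
B = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  0.0]])

x_a = np.array([3.0, 5.0])

# Normal-equation form of the transformation: x_b = (B^T B)^{-1} B^T A x_a.
x_b = np.linalg.solve(B.T @ B, B.T @ (A @ x_a))
```

Since both basis sets span the same subspace, B x_b reproduces A x_a exactly rather than merely approximating it.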
2.3. VECTOR SPACES AND THEIR GEOMETRY 59
[Figure 2.6: (a) three vectors A, B, and C lying on a hyperplane through the origin; (b) three linearly independent vectors A, B, and C]
Figure 2.6: The span of a set of linearly dependent vectors has lower dimension than the
number of vectors in the set
Definition 2.3.7 (Span) The span of a finite set of vectors A = {a1 , . . . , ad } is the vector
space defined by all possible linear combinations of the vectors in A:
\text{Span}(\mathcal{A}) = \Big\{ \, v \; : \; v = \sum_{i=1}^{d} x_i a_i , \;\; x_1 \ldots x_d \in \mathbb{R} \Big\}
For example, consider vectors drawn from R3 . In this case, the span of the two vectors [0, 2, 1]T and [1, 2, 1]T is the set of all vectors lying on the 2-dimensional hyperplane defined by these two vectors. Points that do not lie on this hyperplane do not lie in the span of the two vectors. The span of an augmented set of three vectors, which additionally includes the vector [1, 0, 0]T , is no different from the span of the first two vectors; this is because the vector [1, 0, 0]T is linearly dependent on [0, 2, 1]T and [1, 2, 1]T .
Therefore, adding a vector to a set A increases its span only when the added vector does not
lie in the subspace defined by the span of A. When the set A contains linearly independent
vectors, it is also a basis set of its span.
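Span and dependence statements like the ones above are easy to check via the matrix rank. A small NumPy sketch using the vectors from this example:

```python
import numpy as np

v1, v2, v3 = [0, 2, 1], [1, 2, 1], [1, 0, 0]

# The span of a set of vectors has dimension equal to the rank of the
# matrix whose columns are those vectors.
rank_two = np.linalg.matrix_rank(np.column_stack([v1, v2]))

# Adding [1, 0, 0]^T = [1, 2, 1]^T - [0, 2, 1]^T does not enlarge the span.
rank_three = np.linalg.matrix_rank(np.column_stack([v1, v2, v3]))
```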
A pictorial example of what a span captures in R3 is illustrated in Figure 2.6. In Fig-
ure 2.6(a), the three vectors A, B, and C lie on a hyperplane passing through the origin,
although they are pairwise linearly independent. Therefore, any pair of them can span the
2-dimensional subspace containing all vectors lying on this hyperplane; however, the span
of all three vectors is still this same subspace because of the linear dependence of the three
vectors. Adding any number of vectors lying on the hyperplane to the set will not change
the span of the set. On the other hand, the three vectors in Figure 2.6(b) are linearly
independent, and therefore their span is R3 .
60 CHAPTER 2. LINEAR TRANSFORMATIONS AND LINEAR SYSTEMS
Since the three vectors in Figure 2.6(b) are linearly independent and span R3 , they
can be used to create a valid coordinate system to represent any vector in R3 (albeit a
non-orthogonal one). A natural question arises as to what would happen if one tried to use
the three linearly dependent vectors A, B, and C in Figure 2.6(a) to create a “coordinate
system” of R3 . First, note that any 3-dimensional vector that does not lie on the hyperplane
of Figure 2.6(a) cannot be represented as a linear combination of the three vectors A, B,
and C. Therefore, no valid coordinates would exist to represent such a vector. Furthermore,
even in cases where b does lie on the hyperplane of Figure 2.6(a), the solution to Ax = b
may not be unique because of linear dependence of the columns of A, and therefore unique
“coordinates” may not exist.
Note that all basis vectors are orthogonal, although they are not normalized to unit norm.
We would like to transform the time-series from the standard basis into this new set of
orthogonal vectors (after normalization). The problem is simplified by the fact that we have
to transform from a standard basis. As discussed at the end of the previous section, we can
create an orthogonal matrix B using these vectors, and then simply multiply the time series
s = [8, 6, 2, 3, 4, 6, 6, 5]T with B T to create the transformed representation. Note that the
transposed matrix B T will contain the basis vectors in its rows rather than columns. For
numerical and computational efficiency, we will not normalize the columns of B to unit norm
up front, and simply normalize the coordinates of s after multiplying with the unnormalized matrix B^T .
The rightmost vector sn contains the normalized wavelet coefficients. In many cases, the
dimensionality of the time-series is reduced by dropping those coefficients that are very
small in absolute magnitude. Therefore, a compressed representation of the time series can
be created. Note that the matrix B is very sparse, and it contains O(n log(n)) non-zero
entries for a transformation in Rn . Furthermore, since the matrix only contains values from
{−1, 0, +1}, the matrix multiplication reduces to only addition or subtraction of vector
components. In other words, such a matrix multiplication is very efficient.
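A sketch of this transform in NumPy. The basis vectors themselves appear on an earlier page and are not reproduced here, so the matrix below is the standard unnormalized Haar wavelet basis for R8, which matches the description (entries in {−1, 0, +1}, mutually orthogonal rows):

```python
import numpy as np

# Assumed reconstruction: standard unnormalized Haar basis for R^8,
# one basis vector per row of Bt. All entries lie in {-1, 0, +1}.
Bt = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1],
    [1,  1,  1,  1, -1, -1, -1, -1],
    [1,  1, -1, -1,  0,  0,  0,  0],
    [0,  0,  0,  0,  1,  1, -1, -1],
    [1, -1,  0,  0,  0,  0,  0,  0],
    [0,  0,  1, -1,  0,  0,  0,  0],
    [0,  0,  0,  0,  1, -1,  0,  0],
    [0,  0,  0,  0,  0,  0,  1, -1],
], dtype=float)

s = np.array([8, 6, 2, 3, 4, 6, 6, 5], dtype=float)

# Multiply with the unnormalized basis, then scale each coefficient by
# the norm of the corresponding basis vector to obtain the coordinates
# in the orthonormal basis.
raw = Bt @ s
norms = np.linalg.norm(Bt, axis=1)
s_n = raw / norms
```

Because the normalized rows form an orthonormal basis, the transform preserves the norm of s, so dropping only the small coefficients gives a controlled approximation.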
If U and W are disjoint with basis sets Bu and Bw , the union B = Bu ∪ Bw of these basis
sets is a linearly independent set. Otherwise, we can apply the linear dependence condition
to B and place elements from each of the vector spaces on the two sides of the dependence
condition to create a vector that lies in both U and W. This contradicts the precondition of disjointness.
An origin-centered plane in R3 and an origin-centered line in R3 represent disjoint
vector spaces as long as the line is not subsumed by the plane. However, vector spaces
created by any pair of origin-centered planes in R3 are not disjoint because they intersect
along a 1-dimensional line. The hyperplanes corresponding to two disjoint vector spaces
must intersect only at the origin, which is a 0-dimensional vector space. A special case of disjointness of vector spaces is that of orthogonality of the two spaces:
[Figure: a 2-dimensional subspace of R3 together with two possible 1-dimensional complementary subspaces, one orthogonal to it and one non-orthogonal, each intersecting it only at the origin]
Disjoint pairs of vector spaces need not be orthogonal, but orthogonal pairs of vector spaces
are always disjoint. One can show this result by contradiction. If the orthogonal vector
spaces U and W are not disjoint, one can select u ∈ U and w ∈ W to be the same non-zero vector (i.e., u = w ≠ 0) from the non-disjoint portion of the space, which cannot satisfy the condition of Equation 2.7 (and this results in a contradiction).
Two orthogonal subspaces, such that the union of their basis sets span all of Rn are
referred to as orthogonal complementary subspaces.
Definition 2.3.10 (Orthogonal Complementary Subspace) Let U be a subspace of
Rn . Then, W is an orthogonal complementary subspace of U if and only if it satisfies the
following properties:
• The spaces U and W are orthogonal (and therefore disjoint).
• The union of the basis sets of U and W forms a basis for Rn .
Problem 2.3.3 Let U ⊂ R3 be defined by the basis set {[1, 0, 0]T , [0, 1, 0]T }. State the basis
sets of two possible complementary subspaces of U . In each case, provide a decomposition
of the vector [1, 1, 1]T as a sum of vectors from these complementary subspaces.
2.4. THE LINEAR ALGEBRA OF MATRIX ROWS AND COLUMNS 63
Problem 2.3.4 Let U ⊂ R3 be defined by the basis set B = {[1, 1, 1]T , [1, −1, 1]T }. Formu-
late a system of equations to find the orthogonal complementary subspace W of U . Use the
orthogonality of U and W to propose a fast method to express the vector [2, 2, 1]T as a sum
of vectors from these complementary subspaces.
Definition 2.4.2 (Null Space) The null space of a matrix A is the subspace of Rd con-
taining all column vectors x ∈ Rd , such that Ax = 0.
The null space of a matrix A is essentially the orthogonal complementary subspace of the
row space of A. The reason is that the condition Ax = 0 ensures that the dot product of x
with each transposed row of A (or a linear combination of them) is 0. Note that if d > n, the d-dimensional rows of A (after transposition to column vectors) will always span a proper subspace of Rd , whose orthogonal complement contains non-zero vectors; in other words, the null space of A will contain non-zero vectors in this case. For square and non-singular matrices, the null space
only contains the zero vector.
The notion of a null space refers to a right null space by default. This is because the
vector x occurs on the right side of matrix A in the product Ax, which must evaluate to
the zero vector. Similar to the definition of a right null space, one can define the left null
space of a matrix, which is the orthogonal complement of the vector space spanned by the
columns of the matrix.
Definition 2.4.3 (Left Null Space) The left null space of an n × d matrix A is the sub-
space of Rn containing all column vectors x ∈ Rn , such that AT x = 0. The left null space
of A is the orthogonal complementary subspace of the column space of A.
Alternatively, the left null space of a matrix A contains all vectors x satisfying x^T A = 0^T .
The row space, column space, the right null space, and the left null space are referred to as
the four fundamental subspaces of linear algebra.
[Figure 2.8 schematic: for an n × d matrix A of rank k, any x ∈ Rd decomposes as x = xr + xn , where xr lies in the row space (dimension k) and xn in the right null space (dimension d − k); any y ∈ Rn decomposes as y = yc + yn , where yc lies in the column space (dimension k) and yn in the left null space (dimension n − k)]
Figure 2.8: The four fundamental subspaces of linear algebra for an n × d matrix A
In Figure 2.8, we have shown the relationships among the four fundamental subspaces
of linear algebra for an n × d matrix A. In this particular case, the value of n is chosen to be
greater than d. Multiplying A with any d-dimensional vector x ∈ Rd maps to the column
space of A (including the zero vector) because the vector Ax is a linear combination of the
columns of A. Similarly, multiplying any n-dimensional vector y ∈ Rn with AT to create
the vector AT y yields a member of the row space of A, which is a linear combination of
the (transposed) rows of A. Another noteworthy point in Figure 2.8 is that the ranks of the
row space and the column space are the same. The equality is a fundamental result in linear
algebra, which will be shown in a later section. The fixed value of the row rank and column
rank is also referred to as the rank of the matrix. For example, consider the following 3 × 4
matrix:

A = \begin{bmatrix} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 1 & 1 & 1 \end{bmatrix} \quad (2.8)
Note that neither the rows nor the columns of this matrix are linearly independent. The
row space has the basis vectors [1, 0, 1, 0]T , and [0, 1, 0, 1]T , whereas the column space has
the basis vectors [1, 0, 1]T , and [0, 1, 1]T . Therefore, the row rank is the same as the column
rank, which is the same as the matrix rank of 2.
Problem 2.4.1 Find a basis for each of the right and left null spaces of matrix A in
Equation 2.8.
Problem 2.4.2 For any n × d matrix A, show why the matrices P = A^T A + λI_d and Q = AA^T + λI_n always have a trivial null space {0} for any λ > 0.
A hint for solving the above problem is to show that x^T P x can never be zero for any non-zero vector x.
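A numerical spot-check of the hint in NumPy (the specific matrix below is an arbitrary random example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 3 x 5 matrix: A^T A is 5 x 5 but has rank at most 3, so it is
# singular on its own. Adding lam * I makes the null space trivial.
A = rng.standard_normal((3, 5))
lam = 0.1
P = A.T @ A + lam * np.eye(5)

# x^T P x = ||Ax||^2 + lam ||x||^2 > 0 for any non-zero x, so P has
# full rank despite A^T A being rank-deficient.
rank_P = np.linalg.matrix_rank(P)
```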
Definition 2.5.1 (Row and Column Equivalence) Two matrices are said to be row
equivalent, if one matrix is obtained from the other by a sequence of elementary row oper-
ations such as row interchange, row addition, or multiplication of a row with a non-zero
scalar. Similarly, two matrices are said to be column equivalent, if one matrix is obtained
from the other with a sequence of elementary column operations.
Note that applying elementary row operations does not change the vector space spanned by
the rows of a matrix. This is because row interchange and non-zero scaling operations do
not fundamentally change the (normalized) row set of the matrix. Furthermore, the span
of any pair of row vectors {ri , rj } is the same as that of {ri , ri + crj } for non-zero scalar c
because rj can be expressed in terms of the new set of rows as [(ri + crj ) − ri ]/c. Therefore,
any vector in the span of the original set of rows is also in the span of the new set of rows.
The converse can also be seen to be true because the new row vectors are directly expressed
in terms of the original rows. Similarly, column operations do not change the column space.
However, row operations do change the column space, and column operations do change
the row space. These results are summarized as follows:
Lemma 2.5.1 Elementary row operations do not change the vector space spanned by the
rows, whereas elementary column operations do not change the vector space spanned by the
columns.
A particularly convenient row-equivalent conversion of the matrix A is the row echelon form,
which is useful for solving linear systems of the type Ax = b. By applying the same row
operations to both the matrix A and the vector b in the system of equations Ax = b, one
can simplify the matrix A to a form that makes the system easily solvable. This is exactly
the row echelon form, and the procedure is equivalent to the Gaussian elimination method
for solving systems of equations.
All row echelon matrices are (rectangular) upper-triangular matrices, but the converse is
not true. For example, consider the following pair of upper-triangular matrices:
A = \begin{bmatrix} 1 & 7 & 4 & 3 & 5 \\ 0 & 0 & 1 & 7 & 6 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad B = \begin{bmatrix} 1 & 7 & 4 & 3 & 5 \\ 0 & 0 & 1 & 7 & 6 \\ 0 & 0 & 1 & 5 & 3 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}
Here, the matrix A is in row echelon form, whereas the matrix B is not. This is because the leftmost non-zero entries of the second and third rows of matrix B have the same column index. The increasing column index of the leading non-zero entry ensures that the non-zero rows of a matrix in echelon form are always linearly independent; adding the rows in order from the bottom to the top of the matrix to a set S always increases the dimension of the span of S by 1.
The bulk of the work in Gaussian elimination is to create a matrix in which the column
index of the leftmost non-zero entry is different for each row; further row interchange oper-
ations can create a matrix in which the leftmost non-zero entry has an increasing column
index, and row scaling operations can change the leftmost entry to 1. The entire process
uses three phases:
• Row addition operations: We repeatedly identify pairs of rows, so that the column
index of the leftmost non-zero entry is the same. For example, the second and third
rows of matrix B in the above example have a tied column index of the leftmost
non-zero entry. The elementary row addition operation is applied to the pair so that
one of these leftmost entries is set to 0. For example, consider two rows r1 and r2 with
the same leftmost column index. If the leftmost non-zero entries of rows r1 and r2
have values 3 and 7, respectively, then we can change row r1 to r1 − (3/7)r2 , so that
the leftmost entry of r1 becomes 0. We could also change r2 to r2 − (7/3)r1 to achieve
a similar effect. We always choose to perform the operation on the lower of the two
rows in order to ensure that the corresponding operator matrix is a lower triangular
matrix and the number of leading zeros in the lower row increases by 1. Since the
matrix contains n × d entries, and each operation increases the number of leading
zeros in the matrix, the procedure is guaranteed to succeed in removing column-index
ties after O(nd) row addition operations [each of which requires O(d) time]. However,
depending on the configuration of the original matrix, one may not be able to reach
a matrix in which the column index of the leftmost non-zero entry always increases.
For example, a 2 × 2 matrix with a value of 0 in the top-left corner and a value of 1 in
every other entry can never be converted to upper-triangular form with row addition
operations.
• Row interchange operations: In this phase, we permute the rows of the matrix, so that the column index of the leftmost non-zero entry increases with increasing row index. The permutation of the rows is achieved by repeatedly interchanging “violating” pairs of rows that do not satisfy the aforementioned condition. Random selection of violating pairs will require O(d^2 ) interchanges, although more judicious selection can ensure that this is done in O(d) interchanges.
• Row scaling operations: Each row is divided by its leading non-zero entry to convert
the matrix to row echelon form.
All of the above operations can be implemented with the elementary row operations dis-
cussed in Section 1.3.1 of Chapter 1.
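The phases above can be sketched in code. The toy routine below interleaves them (choose a pivot row by interchange, perform the row additions beneath it, then scale its leading entry), which is how the elimination is usually organized in practice; it is a sketch, not a numerically robust implementation:

```python
import numpy as np

def row_echelon(A):
    """Reduce a copy of A to row echelon form via elementary row
    operations: row interchange, row addition, and row scaling."""
    A = A.astype(float)
    n, d = A.shape
    r = 0  # index of the next pivot row
    for c in range(d):
        # Find a row at or below r with a non-zero entry in column c.
        pivot = next((i for i in range(r, n) if abs(A[i, c]) > 1e-12), None)
        if pivot is None:
            continue                            # free column
        A[[r, pivot]] = A[[pivot, r]]           # row interchange
        for i in range(r + 1, n):               # row additions below pivot
            A[i] -= (A[i, c] / A[r, c]) * A[r]
        A[r] /= A[r, c]                         # scale leading entry to 1
        r += 1
        if r == n:
            break
    return A

# The matrix B from the example above, which is upper-triangular but
# not in row echelon form.
R = row_echelon(np.array([[1, 7, 4, 3, 5],
                          [0, 0, 1, 7, 6],
                          [0, 0, 1, 5, 3],
                          [0, 0, 0, 0, 1]]))
```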
2.5.1 LU Decomposition
The goal of LU decomposition is to express a matrix as the product of a (square) lower
triangular matrix L and a (rectangular) upper triangular matrix U . However, it is not always
possible to create an LU decomposition of a matrix without permuting its rows first. We
provide an example in which row permutation is essential:
Observation 2.5.1 A non-singular matrix A = [aij ] with a11 = 0 can never be expressed
in the form A = LU , where L = [lij ] is lower-triangular and U = [uij ] is upper-triangular.
The above observation can be shown by contradiction by assuming that A = LU is possible.
Since A = LU , it can be shown that a11 = l11 u11 . In order for a11 to be zero, either l11 or
u11 must be 0. In other words, either the first row of L is zero or the first column of U is
zero. This means that either the first row or the first column of A = LU is zero. In other
words, A cannot be non-singular, which is a contradiction.
Let us examine the effect of the first two steps (row addition and interchange steps) of
the Gaussian elimination algorithm, which already creates a rectangular upper triangular
matrix U . Note that the row addition operations are always lower triangular matrices,
2.5. THE ROW ECHELON FORM OF A MATRIX 67
because lower rows are always subtracted from upper rows. Furthermore, the sequence of
row interchange operations is a permutation of rows, and can therefore be expressed as
the permutation matrix P . Therefore, we can express the first two steps of the Gaussian
elimination process in terms of a permutation matrix P and the m row-addition operations
defined by lower-triangular matrices L1 . . . Lm :
P L_m L_{m-1} \cdots L_1 A = U
Multiplying both sides with P^T and the inverses of the lower-triangular matrices L_i in the proper sequence, we obtain the following:

A = L_1^{-1} L_2^{-1} \cdots L_m^{-1} P^T U
The inverses and products of lower-triangular matrices are lower triangular (cf. Chapter 1).
Therefore, we can consolidate these matrices to obtain a single lower-triangular matrix L
of size n × n. In other words, we have the following:
A = LP T U
This is, however, not the standard form of the LU decomposition. With some bookkeeping,
it is possible to obtain a decomposition in which the permutation matrix P T occurs before
the lower-triangular matrix L (although these matrices would be different when re-ordered):
A = P T LU
One can also write this decomposition as P A = LU . This is the standard form of LU
decomposition.
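A minimal sketch of this decomposition in NumPy, computing P, L, and U such that PA = LU; it assumes a square non-singular input. The example matrix has a11 = 0, so the permutation is essential (cf. Observation 2.5.1):

```python
import numpy as np

def plu(A):
    """P A = L U via Gaussian elimination with row interchanges
    (partial pivoting). A simple dense sketch for square non-singular A."""
    A = A.astype(float)
    n = A.shape[0]
    P, L, U = np.eye(n), np.eye(n), A.copy()
    for k in range(n - 1):
        p = k + np.argmax(np.abs(U[k:, k]))     # choose pivot row
        if p != k:                              # interchange rows
            U[[k, p]] = U[[p, k]]
            P[[k, p]] = P[[p, k]]
            L[[k, p], :k] = L[[p, k], :k]       # swap filled part of L
        for i in range(k + 1, n):               # eliminate below pivot
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return P, L, U

# a11 = 0: this matrix has no LU decomposition without row permutation.
A = np.array([[0.0, 1.0],
              [1.0, 1.0]])
P, L, U = plu(A)
```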
d(d − 1)/2 row operations will be required. This approach works only when the matrix is
nonsingular, or else some of the diagonal entries will be 0s. One can obtain the inverse of A
by performing the same row operations starting with the identity matrix, as one performs
these row operations on A to reach the identity matrix. A sequence of row operations that
transforms A to the identity matrix will transform the identity matrix to B = A−1 . The idea
is that we perform the same row operations on both sides of the equation AA−1 = I. The
row operations on the left-hand side AA−1 can be performed on A until it is transformed
to the identity matrix.
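This inversion procedure can be sketched by applying the same row operations to A and to the identity matrix, conveniently via the augmented matrix [A | I]. The example inverts the matrix B = [[1, 1], [1, 2]] used earlier in this chapter:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a non-singular A by row-reducing the augmented matrix
    [A | I] until the left half becomes the identity; the right half
    is then A^{-1}."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for c in range(n):
        p = c + np.argmax(np.abs(M[c:, c]))     # partial pivoting
        M[[c, p]] = M[[p, c]]                   # row interchange
        M[c] /= M[c, c]                         # scale pivot row
        for i in range(n):
            if i != c:
                M[i] -= M[i, c] * M[c]          # clear rest of column c
    return M[:, n:]

B = gauss_jordan_inverse(np.array([[1.0, 1.0],
                                   [1.0, 2.0]]))
```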
1. If the vector b does not occur in the column space of A, then no solution exists to
this system of linear equations although best fits are possible. This case is studied in
detail in Section 2.8.
2. If the vector b occurs in the column space of A, and A has linearly independent
columns (which implies that the columns form the basis of a d-dimensional subspace
of Rn ), the solution is unique. This result is based on the uniqueness of coordinates
(cf. Lemma 2.3.2). In the special case that A is square, the solution is simply x = A−1 b.
3. If the vector b occurs in the column space of A and the columns of A are linearly
dependent, then an infinite number of solutions exists to Ax = b. Note that if x1 and
x2 are solutions, then λx1 + (1 − λ)x2 is also a solution for any real λ.
The first situation arises very commonly in over-determined systems of linear equations
where the number of rows of the matrix is much greater than the number of columns. It is
possible for inconsistent systems of equations to occur even in matrices where the number
of rows is less than the number of columns. In order to understand this point, consider the
case where b = [1, 1]T , and a 2×100 matrix A contains two non-zero row vectors, so that the
second row vector is twice the first. However, it is impossible to find any solution to Ax = b unless the second component of b is twice the first. Similarly, the third case
occurs more commonly in cases where the number of columns d is greater than the number
of rows n, but it is possible to find linearly dependent column vectors even when d < n. We
present some exercises in order to gain some intuition about these difficult cases:
Problem 2.5.1 Suppose that no solution exists to the system of equations Ax = b, where
A is an n × d matrix and b is an n-dimensional column vector. Show that an n-dimensional
column vector z must exist that satisfies z^T A = 0 and z^T b ≠ 0.
The above practice exercise simply states that if a system of equations is inconsistent, then
a weighted combination of the equations can always be found so that the left-hand side adds
up to zero, whereas the right-hand side adds up to a non-zero quantity. As a hint to solve
the exercise, note that b does not fully lie in the column space of A, but can be expressed
as a sum of vectors from the column space and left null space of A. The vector z can be
derived from this decomposition.
Problem 2.5.2 Express the system of equations \sum_{i=1}^{5} x_i = 1, \sum_{i=1}^{2} x_i = -1, and \sum_{i=3}^{5} x_i = -1 as Ax = b for appropriately chosen A and b. Informally discuss by inspection why this system of equations is inconsistent. Now define a vector z satisfying the conditions of the previous exercise to show that the system is inconsistent.
The process of row echelon conversion is useful to identify whether a system of equations is
inconsistent, and also to characterize the set of solutions to a system of consistent equations.
One can use a sequence of row operations to convert the linear system Ax = b to a new system A′x = b′ in which the matrix A′ is in row echelon form. Whenever a row operation is performed on A, exactly the same operation is performed on b. The resulting system A′x = b′ contains a wealth of information about the solutions to the original system. Inconsistent systems will contain zero rows at the bottom of A′ after row echelon conversion, but a corresponding non-zero entry in the same row of b′ (try to explain this using Problem 2.5.1 while recognizing that the non-zero rows of A′ are linearly independent). Such a system can never have a solution because a zero value on the left is being equated with a non-zero value on the right. All zero rows in A′ need to be matched with zero entries in b′ for the system to have a solution.
Assuming that the system is not inconsistent, how does one detect systems with unique
solutions? In such cases, each column will contain a leftmost non-zero entry of some row. It
is possible for some of the rows to be zeros. We present two examples of matrices, the first of
which satisfies the aforementioned property, and the second does not satisfy the property:
M = \begin{bmatrix} 1 & 7 & 4 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \qquad N = \begin{bmatrix} 1 & 7 & 4 & 3 & 5 \\ 0 & 1 & 9 & 7 & 6 \\ 0 & 0 & 0 & 1 & 3 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}
Note that the matrix N does not satisfy the uniqueness condition because the third column
(whose entries are in bold) does not contain the leftmost non-zero entry of any row. Such
a column is referred to as a free column because one can view the variable corresponding
to it as a free parameter. If there is no free column, one will obtain a square, triangular, invertible matrix on dropping the zero rows of A′ and the corresponding zero entries of b′. For example, one obtains a square, triangular, and invertible matrix on dropping the zero row of M . This matrix will be an upper-triangular matrix with values of 1 along the diagonal. It is easy to find a unique solution by using back-substitution: one can first set the last component of x to the last component of b′, and substitute it into the system of equations to obtain a smaller upper-triangular system. This process is applied iteratively to find all components of x.
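The back-substitution step can be sketched as follows; the example solves the system obtained by dropping the zero row of the matrix M above, with an arbitrary right-hand side:

```python
import numpy as np

def back_substitute(U, b):
    """Solve U x = b for an upper-triangular U with non-zero diagonal
    entries, working from the last component of x upward."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Substitute the already-computed components x[i+1:] and solve
        # the remaining single-variable equation for x[i].
        x[i] = (b[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The square invertible part of the matrix M from the example above,
# with an arbitrary right-hand side.
U = np.array([[1.0, 7.0, 4.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])
x = back_substitute(U, np.array([1.0, 1.0, 1.0]))
```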
The final case is one in which some free columns exist, which do not contain the leading non-zero entry of any row. The variables corresponding to the free columns can be set to any value, and a unique solution for the other variables can always be found. In this case,
the solution space contains infinitely many solutions. Consider the following system in row
echelon form:
\underbrace{\begin{bmatrix} 1 & 2 & 1 & -3 \\ 0 & 0 & 1 & 2 \\ 0 & 0 & 0 & 0 \end{bmatrix}}_{A'} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \\ 0 \end{bmatrix}
In this system of equations, the second and fourth columns do not contain any entry that is the leading non-zero entry of any row. Therefore, we can set x2 and x4 to arbitrary
numerical values (say, α and β) and also drop all the zero rows. Furthermore, setting x2
and x4 to numerical values will result in a system of equations with only two variables x1
and x3 (because α and β are now constants rather than variables). The vector on the right-hand side is adjusted to reflect the effect of these numerical constants. After making
these adjustments, the aforementioned system becomes the following:
\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_3 \end{bmatrix} = \begin{bmatrix} 3 - 2\alpha + 3\beta \\ 2 - 2\beta \end{bmatrix}
This system is a square 2×2 system of equations with a unique solution in terms of α and β.
The value of x3 is set to 2−2β, and then back-substitution is used to derive x1 = 1−2α+5β.
Therefore, the set of solutions [x1 , x2 , x3 , x4 ]T is defined as follows:

[x_1 , x_2 , x_3 , x_4 ]^T = [1 - 2\alpha + 5\beta, \; \alpha, \; 2 - 2\beta, \; \beta]^T
Here, α and β can be set to arbitrary numerical values; therefore, the system has infinitely
many solutions.
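The parametric solution family can be verified numerically for several choices of α and β:

```python
import numpy as np

# The row echelon system from the example above.
A = np.array([[1.0, 2.0, 1.0, -3.0],
              [0.0, 0.0, 1.0,  2.0],
              [0.0, 0.0, 0.0,  0.0]])
b = np.array([3.0, 2.0, 0.0])

def solution(alpha, beta):
    # x2 and x4 are the free variables; x3 and x1 follow by
    # back-substitution as derived in the text.
    return np.array([1 - 2 * alpha + 5 * beta, alpha, 2 - 2 * beta, beta])

# Every member of the family solves the original system.
ok = all(np.allclose(A @ solution(alpha, beta), b)
         for alpha in (-1.0, 0.0, 2.5)
         for beta in (-3.0, 0.0, 1.0))
```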
Problem 2.5.3 (Coordinate Transformations with Row Echelon) Consider the vec-
tor space V ⊂ Rn with basis B = {a1 . . . ad }, so that d < n. Show how to use the row echelon
method to find the d coordinates of v ∈ V in the basis B.
RAC = Δ
Here, R is an n × n matrix that is the product of the elementary row operator matrices, C
is a d × d matrix that is the product of the elementary column operator matrices, and Δ is
an n × d rectangular diagonal matrix.
This result has the remarkable implication that the ranks of the row space and the
column space of a matrix are the same.
2.6. THE NOTION OF MATRIX RANK 71
Lemma 2.6.1 The rank of the row space of a matrix is the same as that of its column
space.
Proof Sketch: The condition RA = ΔC^{-1} implies that the row rank of A is the same as the number of non-zero diagonal entries in Δ (since row operations do not change the rank of A according to Lemma 2.5.1, and ΔC^{-1} contains as many non-zero, linearly independent rows as the number of non-zero diagonal entries in Δ). Similarly, the condition AC = R^{-1}Δ implies that the column rank of A is the same as the number of non-zero diagonal entries in Δ. Therefore, the row rank of A is the same as its column rank.
The common value of the rank of the row space and the column space is referred to as the
rank of a matrix.
Definition 2.6.1 (Matrix Rank) The rank of a matrix is equal to the rank of its row
space, which is the same as the rank of its column space.
The matrix A contains d columns and therefore the rank of the column space is at most d.
Similarly, the rank of the row space is at most n. Since both ranks are the same, it follows
that this value must be at most min{n, d}.
Corollary 2.6.2 Consider an n × d matrix A with rank k ≤ min{n, d}. Then the rank of
the null space of A is d − k and the rank of the left null space of A is n − k.
This follows from the fact that rows of A are d-dimensional vectors, and the null space of
A is the orthogonal complement of the vector space defined by the (transposed) rows of A.
Therefore, the rank of the null space of A must be d − k. A similar argument can be made
for the left null space of A.
Lemma 2.6.2 (Matrix Addition Upper Bound) Let A and B be two matrices with
ranks a and b, respectively. Then, the rank of A + B is at most a + b.
Proof: Each row of A + B can be expressed as a linear combination of the rows of A and
the rows of B. Therefore, the rank of the row space of (A + B) is at most a + b.
One can show a similar result for the lower bound on matrix addition:
Lemma 2.6.3 (Matrix Addition Lower Bound) Let A and B be two matrices with
ranks a and b, respectively. Then, the rank of A + B is at least |a − b|.
Proof: The result follows directly from Lemma 2.6.2, because one can express the relation-
ship A + B = C as A + (−C) = (−B) or as B + (−C) = (−A). Therefore, if A and B have
ranks a and b, then the rank of −C must be at least |a − b| from the previous lemma.
One can also derive upper and lower bounds for multiplication operations.
Lemma 2.6.4 (Matrix Multiplication Upper Bound) Let A and B be two matrices
with ranks a and b, respectively. Then, the rank of AB is at most min{a, b}.
Proof: Each column of AB is a linear combination of the columns of A, where the linear combination coefficients defining the ith column of AB are provided in the ith column of B. Therefore, the rank of the column space of AB is no greater than that of the column space of A. However, the rank of the column space of a matrix is the same as its matrix rank. Therefore, the matrix rank of AB is no greater than the matrix rank of A.
Similarly, each row of AB is a linear combination of the rows of B, where the linear combination coefficients defining the ith row of AB are included in the ith row of A. Therefore, the rank of the row space of AB is no greater than that of the row space of B. However, the rank of the row space of a matrix is the same as its matrix rank. Therefore, the matrix rank of AB is no greater than the matrix rank of B. Combining the above two results, we obtain the fact that the rank of AB is no greater than min{a, b}.
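Both the multiplication upper bound and the lower bound discussed next (Sylvester's inequality) can be spot-checked numerically; the random matrices below are arbitrary examples with shared dimension d = 6:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 6))   # generically rank 4
B = rng.standard_normal((6, 5))   # generically rank 5

rank = np.linalg.matrix_rank
ra, rb, rab = rank(A), rank(B), rank(A @ B)

# Upper bound: rank(AB) <= min(a, b).
# Sylvester lower bound: rank(AB) >= a + b - d, with d = 6 here.
```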
Establishing a lower bound on the rank of the product of two matrices is much harder than establishing an upper bound; a useful bound exists only in some special cases.

Lemma 2.6.5 (Sylvester’s Inequality) Let A be an n × d matrix and B be a d × p matrix with ranks a and b, respectively. Then, the rank of AB is at least a + b − d.

We omit a formal proof of this result, which is also referred to as Sylvester’s inequality.
It is noteworthy that d is the shared dimension of the two matrices (thereby allowing
multiplication), and the result is not particularly useful when a + b ≤ d. In such a case, the
lower bound on the rank becomes negative, which is trivially satisfied by every matrix and
therefore not informative. A useful lower bound can be established when the two matrices
have rank close to the shared dimension d (i.e., the maximum possible value). What about
the case when one or both matrices are square and are exactly of full rank? Some natural
corollaries of the above result are the following:
Corollary 2.6.3 Multiplying a matrix A with a square matrix B of full rank does not
change the rank of matrix A.
Corollary 2.6.4 Let A and B be two square matrices. Then AB is non-singular if and
only if A and B are both non-singular.
In other words, the product is of full rank if and only if both matrices are of full rank. This
result is important from the perspective of the invertibility of the Gram matrix AT A of
the column space of A. Note that the Gram matrix often needs to be inverted in machine
learning applications like linear regression. In such cases, the inversion of the Gram matrix is
part of the closed-form solution (see, for example, Equation 1.29 of Chapter 1). It is helpful
to know that the invertibility of the Gram matrix is determined by the linear independence
of the columns of the underlying data matrix of feature variables:
Lemma 2.6.6 (Linear Independence and Gram Matrix) The matrix AT A is said to
be the Gram matrix of the column space of an n × d matrix A. The columns of the matrix
A are linearly independent if and only if AT A is invertible.
Proof: Consider the case where AT A is invertible. This means that the rank of AT A is d,
and therefore the rank of each of the factors of AT A must also be at least d. This means that
A must have rank at least d, which is possible only when the d columns of A are linearly
independent.
Now suppose that A has linearly independent columns. Then, for any non-zero vector x, we have x^T A^T Ax = ‖Ax‖² ≥ 0. This value can be zero only when Ax = 0. However, we know that Ax ≠ 0 for any non-zero vector x, because of the linear independence of the columns of A. In other words, x^T A^T Ax is strictly positive, which is possible only when A^T Ax is a non-zero vector. In other words, for any non-zero vector x we have A^T Ax ≠ 0, which implies that the square matrix A^T A has linearly independent columns. This is possible only when A^T A is invertible (cf. Lemma 2.3.1).
One can use a very similar approach to show the stronger result that the ranks of the
matrices A, AT A, and AAT are the same (see Exercise 2). The matrix AAT is the Gram
matrix of the row space of A, and is also referred to as the left Gram matrix.
2. Increment i by 1.
74 CHAPTER 2. LINEAR TRANSFORMATIONS AND LINEAR SYSTEMS
This process is repeated for each i = 2 . . . d. This algorithm is referred to as the unnormalized
Gram-Schmidt method. In practice, the vectors are scaled to unit norm after the process.
We can show that the resulting vectors are mutually orthogonal by induction. For example, consider the case when we make the inductive assumption that q_1 . . . q_{i−1} are orthogonal. Then, we can show that q_i is also orthogonal to each q_j for j ∈ {1 . . . i − 1}:

$$ q_j \cdot q_i = q_j \cdot a_i - \sum_{r=1}^{i-1} \frac{(a_i \cdot q_r)}{\|q_r\|^2} (q_j \cdot q_r) = q_j \cdot a_i - (q_j \cdot a_i) \frac{(q_j \cdot q_j)}{\|q_j\|^2} = 0 $$

[All terms with r ≠ j drop out of the summation because of the inductive assumption]
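The iteration above can be sketched in Python with NumPy. The function name and the test matrix are illustrative (not from the text); the projection coefficients (a_i · q_r)/(q_r · q_r) follow the formula in the proof:

```python
import numpy as np

def gram_schmidt_unnormalized(A):
    """Subtract from each column a_i its projections on the previously
    generated vectors q_1 ... q_{i-1} (unnormalized Gram-Schmidt)."""
    n, d = A.shape
    Q = np.zeros((n, d))
    for i in range(d):
        q = A[:, i].copy()
        for r in range(i):
            q_r = Q[:, r]
            q -= ((A[:, i] @ q_r) / (q_r @ q_r)) * q_r   # projection on q_r
        Q[:, i] = q
    return Q

A = np.array([[1.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 3.0]])
Q = gram_schmidt_unnormalized(A)
print(np.round(Q.T @ Q, 8))   # off-diagonal entries are 0: mutual orthogonality
```

Scaling each column of Q to unit norm afterwards yields an orthonormal basis, as described above.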
2.7.2 QR Decomposition
We first discuss the QR decomposition of an n×d matrix with linearly independent columns.
Since the columns are linearly independent, we must have n ≥ d. Gram-Schmidt orthogonalization can be used to decompose an n × d matrix A with linearly independent columns
into the product of an n × d matrix Q with orthonormal columns and an upper-triangular
d × d matrix R. In other words, we want to compute the following QR decomposition:
A = QR (2.11)
from Gram-Schmidt orthogonalization. The columns appear in the same order as obtained
by processing a1 . . . ad by the Gram-Schmidt algorithm. Since the vectors a1 . . . ad are lin-
early independent, one would derive a full set of d orthonormal basis vectors. Note that the
projection of ar on each q j is q j · ar , which provides its jth coordinate in the new orthonor-
mal basis. Therefore, we define a d × d matrix R, in which the (j, r)th entry is q j · ar . For
j > r, q j is orthogonal to the space spanned by a1 . . . ar , and therefore the value of q j · ar
is 0. Therefore, the matrix R is upper triangular. It is easy to see that the rth column of
the product QR is the appropriate linear combination of the orthonormal basis defined by
Gram-Schmidt orthogonalization (to yield ar ), and therefore A = QR.
What happens when the columns of the n × d matrix A are not linearly independent?
In such a case, the Gram-Schmidt process will yield the vectors q 1 . . . q d , which are either
unit-normalized vectors or zero vectors. Assume that k of the vectors q 1 . . . q d are non-
zero. We can assume that the zero vectors also have zero coordinates in the Gram-Schmidt
representation, since the coordinates of zero vectors are irrelevant from a representational
point of view. As in the previous case, we create the decomposition QR (including the zero columns in Q and matching zero rows in R), where Q is an n × d matrix and R is a d × d upper-triangular matrix. Subsequently, we drop all the zero columns from Q, and also drop the zero rows with matching indices from R. As a result, the matrix Q is now of size n × k and the matrix R is of size k × d (rectangular, with zeros below the "diagonal"). This provides the most concise, generalized QR decomposition of the original n × d matrix A.
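A sketch of this generalized QR decomposition, assuming NumPy and an illustrative tolerance for detecting zero vectors; the Gram-Schmidt output is normalized and zero columns are dropped:

```python
import numpy as np

def qr_gram_schmidt(A, tol=1e-10):
    """Generalized QR via Gram-Schmidt: zero vectors arising from dependent
    columns are dropped, so Q is n x k and R is k x d for a rank-k matrix."""
    n, d = A.shape
    q_cols = []
    for i in range(d):
        q = A[:, i].astype(float)
        for u in q_cols:
            q = q - (q @ u) * u          # subtract projection on earlier basis vector
        norm = np.linalg.norm(q)
        if norm > tol:
            q_cols.append(q / norm)
    Q = np.column_stack(q_cols)          # n x k with orthonormal columns
    R = Q.T @ A                          # k x d with zeros below the "diagonal"
    return Q, R

A = np.array([[1.0, 2.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 0.0, 1.0]])          # rank 2: second column = 2 x first column
Q, R = qr_gram_schmidt(A)
print(Q.shape, R.shape)                  # (3, 2) (2, 3)
print(np.allclose(Q @ R, A))             # True
```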
Problem 2.7.3 (Solving Linear Equations) Show how you can use QR decomposition
to solve the system of equations Ax = b with back-substitution. Assume that A is a d × d
matrix with linearly independent columns and b is a d-dimensional column vector.
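One way to approach Problem 2.7.3: since A = QR with invertible R, the system Ax = b becomes Rx = Q^T b, which is triangular and can be solved by back-substitution. A sketch (the test system is illustrative):

```python
import numpy as np

def solve_via_qr(A, b):
    Q, R = np.linalg.qr(A)              # A = QR with orthonormal columns in Q
    y = Q.T @ b                         # the system becomes R x = Q^T b
    d = R.shape[0]
    x = np.zeros(d)
    for i in range(d - 1, -1, -1):      # back-substitution, last row first
        x[i] = (y[i] - R[i, i + 1:] @ x[i + 1:]) / R[i, i]
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = solve_via_qr(A, b)
print(np.allclose(A @ x, b))            # True
```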
Therefore, the approach requires at most O(nd) Givens rotations, although far fewer rota-
tions will be required for sparse matrices. Entries (below the diagonal) with the smallest
column index j are zeroed first, and those with the same column index are selected in order
of decreasing row index i. Based on the notations on page 47, the Givens matrix used for
pre-multiplication of the current transformation R of A is Gc (i − 1, i, α), where α is chosen
to zero out the (i, j)th entry of the current matrix corresponding to running variable R.
Multiplication of Gc (i − 1, i, α) with R affects only the (i − 1)th and ith entries of each
column of R. If the lower-triangular portions of columns before index j have already been
set to 0, then multiplication with the Givens matrix will not affect them (since a rotation
of a zero vector is a zero vector). Therefore, work already done on setting earlier column
entries to 0 will remain undisturbed. Consider the current column index j, whose entries
are being set to 0. If the current matrix R contains entries rij , then one can pull out the
portion of the product of the Givens matrix Gc (i − 1, i, α) with R corresponding to the
rotation of the 2-dimensional vector [r_{i−1,j}, r_{ij}]^T:

$$ \begin{bmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{bmatrix} \begin{bmatrix} r_{i-1,j} \\ r_{ij} \end{bmatrix} = \begin{bmatrix} \sqrt{r_{i-1,j}^2 + r_{ij}^2} \\ 0 \end{bmatrix} $$

One can verify that the solution to the above system yields the following value of α:

$$ \sin(\alpha) = \frac{-r_{ij}}{\sqrt{r_{ij}^2 + r_{i-1,j}^2}}, \qquad \cos(\alpha) = \frac{r_{i-1,j}}{\sqrt{r_{ij}^2 + r_{i-1,j}^2}} \qquad (2.12) $$
Note that α takes on an (absolute) value of 90°, when r_{i−1,j} is 0 but r_{ij} is not 0. Furthermore, α is 0° or 180° when r_{ij} is already zero, and no rotation needs to be done (since a 180° rotation only flips the sign of r_{i−1,j}). The ordering of the processing of the O(nd) entries is
necessary to ensure that already zeroed entries are not disturbed by further rotations. The
pseudocode for the process is as follows:
Q ⇐ I; R ⇐ A;
for j = 1 to d − 1 do
for i = n down to (j + 1) do
Choose α based on Equation 2.12;
Q ⇐ Q Gc (i, i − 1, α)T ; R ⇐ Gc (i, i − 1, α) R;
endfor
endfor
return Q, R;
For n ≥ d and a matrix A with linearly independent columns, the above approach will
create an n × n matrix Q and an n × d matrix R. These matrices are larger than the ones
obtained with the Gram-Schmidt method. However, the bottom (n − d) rows of R will be
zeros, and therefore one can drop the last (n − d) columns of Q and the bottom (n − d)
rows of R without affecting the result. This yields a smaller QR decomposition with n × d
matrix Q and d × d matrix R.
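A minimal sketch of the pseudocode above, assuming NumPy (the column loop is extended to all d columns so that tall matrices are fully triangularized):

```python
import numpy as np

def givens_qr(A):
    """Triangularize A with Givens rotations, zeroing subdiagonal entries
    column by column (left to right) and bottom-up within each column."""
    n, d = A.shape
    Q, R = np.eye(n), A.astype(float).copy()
    for j in range(d):
        for i in range(n - 1, j, -1):
            denom = np.hypot(R[i, j], R[i - 1, j])
            if denom == 0.0:
                continue                      # entry already zero; no rotation needed
            c = R[i - 1, j] / denom           # cos(alpha) as in Equation 2.12
            s = -R[i, j] / denom              # sin(alpha) as in Equation 2.12
            G = np.eye(n)
            G[i - 1, i - 1] = c; G[i - 1, i] = -s
            G[i, i - 1] = s;     G[i, i] = c
            R = G @ R                         # zeros out the (i, j) entry
            Q = Q @ G.T                       # accumulate so that A = QR throughout
    return Q, R

A = np.random.default_rng(0).normal(size=(4, 3))
Q, R = givens_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))   # True True
```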
It is also possible to use this approach of iteratively modifying Q and R with Householder
reflection matrices instead of Givens rotation matrices. In this case, at most (d−1) reflections
will be needed to triangulize the matrix, because each iteration is able to zero out all the
entries below the diagonal for a particular column (and the final one can be ignored). The
columns are processed in order of increasing column index. The basic geometric principle
is that for any n-dimensional coordinate vector (first column of A), it is possible to orient
a (n − 1)-dimensional “mirror” passing through the origin, so that the image of the vector
is mapped to a point in which only the first coordinate is non-zero. Such a transformation
is defined by multiplication with a Householder reflection matrix. We encourage the reader
to visualize a 1-dimensional reflection plane in 2-dimensional space, so that a specific point [x, y]^T is mapped to [√(x² + y²), 0]^T. This principle also applies more generally to vectors in n-dimensional space, such as the first column c_1 of A. One can choose v_1 (the normal vector to the "mirror" hyperplane) in the first iteration to be the unit vector joining c_1 to a column vector ‖c_1‖ [1, 0, . . . , 0]^T of equal length in which only the first component is non-zero. Therefore, we have v_1 ∝ (c_1 − ‖c_1‖ [1, 0, . . . , 0]^T), and it is scaled to unit norm. One
can then compute the Householder matrix Q1 = (I − 2v 1 v T1 ). Pre-multiplying A with Q1
will zero the bottom (n − 1) entries of the first column c1 of A. In subsequent iterations,
the entries of the first row of the resulting matrix R = Q1 A remain frozen to their current
values, and all modifications are performed only on the bottom (n − 1) rows. Therefore,
the n × n Householder reflection matrix Q2 = (I − 2v 2 v T2 ) will be chosen in the second
iteration so that any changes occur only in the bottom (n − 1) dimensions. The second
iteration zeros out the bottom (n − 2) entries of the second column c2 of the running
matrix R. This is achieved by first copying c_2 to c_{2,n−1}, resetting the first entry of c_{2,n−1} to zero, evaluating the unit vector v_2 ∝ c_{2,n−1} − ‖c_{2,n−1}‖ [0, 1, 0, . . . 0]^T, and then updating R ⇐ (I − 2v_2 v_2^T) R. In the next iteration, the Householder matrix is computed by defining c_{3,n−2} as a partial copy of the vector c_3 with the first two entries set to zero. One can set the unit vector v_3 ∝ c_{3,n−2} − ‖c_{3,n−2}‖ [0, 0, 1, 0, . . . 0]^T, and then update R ⇐ (I − 2v_3 v_3^T) R. This process is iteratively applied to zero the appropriate number of entries of each column of R.
The final orthogonal matrix of the QR decomposition is obtained as QT1 . . . QTd−1 . Careful
implementation choices are required to reduce numerical errors. For example, in the first iteration, one can reflect c_1 to either ‖c_1‖ [1, 0, . . . 0]^T or to −‖c_1‖ [1, 0, . . . 0]^T. Selecting the farther of the two choices reduces numerical errors.
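The Householder-based triangularization can be sketched as follows, assuming NumPy; the sign of the reflection target is chosen as the farther of the two choices, as recommended above:

```python
import numpy as np

def householder_qr(A):
    """Triangularize A with Householder reflections; the k-th reflection zeros
    all entries below the diagonal in column k."""
    n, d = A.shape
    Q, R = np.eye(n), A.astype(float).copy()
    for k in range(min(d, n - 1)):
        c = R[:, k].copy()
        c[:k] = 0.0                                  # act only on the bottom n-k entries
        # reflect to the farther of +/- ||c|| e_k to reduce numerical error
        alpha = -np.copysign(np.linalg.norm(c), c[k])
        e = np.zeros(n); e[k] = 1.0
        v = c - alpha * e
        if np.linalg.norm(v) == 0.0:
            continue                                 # column already in desired form
        v /= np.linalg.norm(v)
        H = np.eye(n) - 2.0 * np.outer(v, v)         # Householder reflection matrix
        R = H @ R
        Q = Q @ H.T                                  # accumulate Q = Q_1^T Q_2^T ...
    return Q, R

A = np.random.default_rng(1).normal(size=(4, 3))
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))   # True True
```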
2.7.3 The Discrete Cosine Transform
The Gram-Schmidt basis does not expose any specific properties of a vector with the help
of its coordinates. On the other hand, the wavelet basis discussed in Section 2.3.4 is an
orthogonal basis that exposes local variations in a time series. The discrete cosine transform
uses a basis with trigonometric properties in order to expose periodicity in a time series.
Consider a time-series drawn from Rn , which has n values (e.g., temperatures) drawn
at n equally spaced clock ticks. Choosing a basis in which each basis vector contains equally
spaced samples of a cosine time-series of a particular periodicity allows a transformation
in which the coordinates of the basis vectors can be interpreted as the amplitudes of the
different periodic components of the series. For example, a time-series of temperatures
over 10 years will have day-night variations as well as summer-winter variations, which
will be captured by the coordinates of different basis vectors (periodic components). These
coordinates are helpful in many machine learning applications.
Consider a high-dimensional time series of length n, which is represented as a column
vector in Rn . The n-dimensional basis vector of this time series with the largest possible
periodicity uses n equally spaced samples of the cosine function ranging between 0 and π
radians. The samples of the cosine function are spaced at a distance of π/n radians from one
another, and a natural question arises as to where one might select the first sample. Although
different variations of the discrete cosine transform select the first sample at different points
of the cosine function, the most common choice is to ensure that the samples are symmetric
about π/2, and therefore the first sample is chosen at π/2n. This yields the following basis
vector b:
b = [cos(π/2n), cos(3π/2n), . . . , cos([2n − 1]π/2n)]T
For a time-series of length n, this is the largest possible level of periodicity, where the entire
basis vector is an n-dimensional sample of only half a cosine wave (covering π radians). To
address smaller periodicities in the data, we would need more basis vectors in which the
n-dimensional sample is drawn from a larger number of cosine waves (i.e., a larger angle
than π). In other words, the n samples of the cosine function are obtained by sampling the
cosine function at n points between 0 and (j − 1)π for each value of j ∈ {1, . . . , n}:
bj = [cos([j − 1]π/2n), cos(3[j − 1]π/2n), . . . , cos([2n − 1][j − 1]π/2n)]T
Setting j = 1 yields b1 as a column vector of 1s, which is not periodic, but is a useful basis
vector for capturing constant offsets. The case of j = 2 corresponds to half a cosine wave
as discussed above.
One can create an unnormalized basis matrix B = [b_1 . . . b_n] whose columns contain the basis vectors discussed above. Let us assume that the ith component of the jth basis vector b_j is denoted by b_{ij}. In other words, the (i, j)th entry of B is b_{ij}, where b_{ij} is defined as follows:

$$ b_{ij} = \cos\left(\frac{\pi(2i-1)(j-1)}{2n}\right), \quad \forall i, j \in \{1 \ldots n\} $$
The above basis matrix includes the non-periodic (special) basis vector, and it is unnormalized because the norm of each column is not 1. A key point is that the columns of the basis matrix B are orthogonal:
Lemma 2.7.1 (Orthogonality of Basis Vectors) The dot product of any pair of basis
vectors bp and bq of the discrete cosine transform for p = q is 0.
Proof Sketch: We use the identity cos(x)cos(y) = [cos(x + y) + cos(x − y)]/2. Using this identity, it can be shown that the dot product between b_p and b_q is as follows:

$$ b_p \cdot b_q = \frac{1}{2} \sum_{i=1}^{n} \cos\left(\frac{[p+q-2][2i-1]\pi}{2n}\right) + \frac{1}{2} \sum_{i=1}^{n} \cos\left(\frac{[p-q][2i-1]\pi}{2n}\right) $$

The right-hand side is the sum of two cosine series whose arguments are in arithmetic progression. Using the standard trigonometric formula [73] for the sum of a cosine series with arguments in arithmetic progression, each sum can be shown to be proportional to sin(nδ/2)cos(nδ/2)/sin(δ/2) ∝ sin(nδ)/sin(δ/2), where δ = (p + q − 2)π/n in the first cosine series, and δ = (p − q)π/n in the second cosine series. The value of sin(nδ) is 0 for both values of δ, and therefore both series sum to 0.
Lemma 2.7.2 (Norms of Basis Vectors) The norm of the special basis vector b_1 of the discrete cosine transform is √n, whereas the norm of each b_p for p ∈ {2, . . . , n} is √(n/2).
Proof Sketch: The proof for b_1 is trivial. For p > 1, the squared norm of b_p is the sum of squares of cosines with arguments in arithmetic progression. Here, we can use the trigonometric identity cos²(x) = (1 + cos(2x))/2. Therefore, we obtain the following:

$$ \|b_p\|^2 = \frac{n}{2} + \frac{1}{2} \sum_{i=1}^{n} \cos\left(\frac{[p-1][2i-1]\pi}{n}\right) $$
As in the proof of the previous lemma, the cosine series with angles in arithmetic progression
sums to 0. The result follows.
The basis matrix B is orthogonal after normalization. One can normalize the matrix B by dividing all matrix entries by √n, and then multiplying columns 2 through n by √2. For example, an 8 × 8 normalized basis matrix for the cosine transform is as follows:

$$ B = \frac{1}{2} \begin{bmatrix}
\frac{1}{\sqrt{2}} & \cos(\frac{\pi}{16}) & \cos(\frac{2\pi}{16}) & \cos(\frac{3\pi}{16}) & \cos(\frac{4\pi}{16}) & \cos(\frac{5\pi}{16}) & \cos(\frac{6\pi}{16}) & \cos(\frac{7\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{3\pi}{16}) & \cos(\frac{6\pi}{16}) & \cos(\frac{9\pi}{16}) & \cos(\frac{12\pi}{16}) & \cos(\frac{15\pi}{16}) & \cos(\frac{18\pi}{16}) & \cos(\frac{21\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{5\pi}{16}) & \cos(\frac{10\pi}{16}) & \cos(\frac{15\pi}{16}) & \cos(\frac{20\pi}{16}) & \cos(\frac{25\pi}{16}) & \cos(\frac{30\pi}{16}) & \cos(\frac{35\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{7\pi}{16}) & \cos(\frac{14\pi}{16}) & \cos(\frac{21\pi}{16}) & \cos(\frac{28\pi}{16}) & \cos(\frac{35\pi}{16}) & \cos(\frac{42\pi}{16}) & \cos(\frac{49\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{9\pi}{16}) & \cos(\frac{18\pi}{16}) & \cos(\frac{27\pi}{16}) & \cos(\frac{36\pi}{16}) & \cos(\frac{45\pi}{16}) & \cos(\frac{54\pi}{16}) & \cos(\frac{63\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{11\pi}{16}) & \cos(\frac{22\pi}{16}) & \cos(\frac{33\pi}{16}) & \cos(\frac{44\pi}{16}) & \cos(\frac{55\pi}{16}) & \cos(\frac{66\pi}{16}) & \cos(\frac{77\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{13\pi}{16}) & \cos(\frac{26\pi}{16}) & \cos(\frac{39\pi}{16}) & \cos(\frac{52\pi}{16}) & \cos(\frac{65\pi}{16}) & \cos(\frac{78\pi}{16}) & \cos(\frac{91\pi}{16}) \\
\frac{1}{\sqrt{2}} & \cos(\frac{15\pi}{16}) & \cos(\frac{30\pi}{16}) & \cos(\frac{45\pi}{16}) & \cos(\frac{60\pi}{16}) & \cos(\frac{75\pi}{16}) & \cos(\frac{90\pi}{16}) & \cos(\frac{105\pi}{16})
\end{bmatrix} $$
Consider the time-series s = [8, 6, 2, 3, 4, 6, 6, 5]T , which is the same example used in Sec-
tion 2.3.4 on wavelet transformations. This time-series can be transformed to the basis of
the discrete cosine transform by solving the system of equations Bx = s in order to compute
the coordinates x. Since B is an orthogonal matrix, the solution x is given by x = B T s. The
smaller coefficients can be set to 0 in order to enable space-efficient sparse representations.
The focus on capturing periodicity makes the discrete cosine transform quite different from the wavelet transform. It is closely related to the discrete Fourier transform (cf. Section 2.11.1), and the discrete cosine transform is the preferred choice in some applications like JPEG compression.
The discrete cosine transform has many variants depending on how one samples the cosine
function to generate the basis vectors. The version presented in this section is referred to
as DCT-II, and it is the most popular version of the transform [121].
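As an illustrative check (assuming NumPy), one can construct the normalized basis matrix from the definition of b_ij, verify its orthogonality, and transform the example series:

```python
import numpy as np

n = 8
i, j = np.meshgrid(np.arange(1, n + 1), np.arange(1, n + 1), indexing='ij')
B = np.cos(np.pi * (2 * i - 1) * (j - 1) / (2 * n))   # unnormalized b_ij
B = B / np.sqrt(n)                                    # divide all entries by sqrt(n)
B[:, 1:] *= np.sqrt(2)                                # multiply columns 2..n by sqrt(2)

print(np.allclose(B.T @ B, np.eye(n)))                # True: B is orthogonal

s = np.array([8.0, 6.0, 2.0, 3.0, 4.0, 6.0, 6.0, 5.0])
x = B.T @ s                                           # coordinates in the DCT basis
print(np.allclose(B @ x, s))                          # True: exact reconstruction
```

Setting the smaller entries of x to zero yields the sparse approximations mentioned above.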
2.8 An Optimization-Centric View of Linear Systems

For an inconsistent system of equations Ax = b, one can find a best-fit solution by minimizing the following objective function:

$$ J = \|Ax - b\|^2 \qquad \text{[Best Fit]} $$
Although one can use calculus to solve this problem (see Section 4.7 of Chapter 4), we use a
geometric argument. The closest approach from a point to a hyperplane is always orthogonal
to the hyperplane. The vector (b − Ax) ∈ R^n, which joins b to its closest approximation b′ = Ax on the hyperplane defined by the column space of A, must be orthogonal to the hyperplane and therefore to every column of A (see Figure 2.9). Hence, we obtain the normal equation A^T(b − Ax) = 0, which yields the following:

$$ x = (A^T A)^{-1} A^T b $$
The assumption here is that AT A is invertible, which can occur only when the columns
of A are linearly independent (according to Lemma 2.6.6). This can happen only when A
is a “tall” matrix (i.e., n ≥ d). The matrix L = (AT A)−1 AT is referred to as the left-
inverse of the matrix A, which is a generalization of the concept of a conventional inverse to
rectangular matrices. In such a case, it is evident that we have LA = (AT A)−1 (AT A) = Id .
Note that the identity matrix Id is of size d × d. However, AL will be a (possibly larger)
n× n matrix, and it can never be the identity matrix when n > d. Therefore, the left-inverse
is a one-sided inverse.
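A sketch of the left-inverse and the best-fit property, assuming NumPy (the explicit inversion of A^T A is for illustration only; numerically, a least-squares solver would be preferred):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3))                  # tall matrix with independent columns
b = rng.normal(size=5)

L = np.linalg.inv(A.T @ A) @ A.T             # left-inverse (A^T A)^{-1} A^T
print(np.allclose(L @ A, np.eye(3)))         # True: LA = I_d
print(np.allclose(A @ L, np.eye(5)))         # False: AL is not I_n when n > d

x = L @ b                                    # best-fit solution of Ax = b
print(np.allclose(A.T @ (b - A @ x), 0))     # True: residual orthogonal to columns of A
```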
An important point is that there are many matrices L for which L A = Id , when
the matrix A satisfies d < n and has linearly independent columns, although the choice
(AT A)−1 AT is the preferred one. In order to understand this point, let z 1 . . . z d be any set
of n-dimensional row vectors such that z i A = 0. As long as the tall matrix A is of rank
strictly less than n (i.e., non-empty left null space), such a set of non-zero vectors can be
found. Note that even if the rank of the left null space of A is 1, we can find d such vectors that are scalar multiples of one another. We can stack up these d vectors into a d × n matrix Z, such that the ith row contains the vector z_i. Then, it can be shown that the following d × n matrix L_z (in which Z is chosen according to the aforementioned procedure) is a left-inverse of A:
Lz = (AT A)−1 AT + Z
Using Lz to solve the system of equations as x = Lz b will provide the same solution as
x = (AT A)−1 AT b, when a consistent solution to the system of equations exists. However,
it will not provide an equally good best-fit to an inconsistent system of equations because
it was not derived from the optimization-centric view of linear systems. This is the reason
that even though alternative left-inverses exist, only one of them is the preferred one.
What happens when n < d or when (A^T A) is not invertible? In such a case, we have an infinite number of possible best-fit solutions, all of which have the same optimal value (which is typically, but not necessarily², 0). Although there are an infinite number of best-fit solutions, one can discriminate further using a conciseness criterion, according to which we want ‖x‖² to be as small as possible (as a secondary criterion) among the alternative minima for ‖Ax − b‖² (which is the primary criterion). The conciseness criterion is a well-known principle in machine learning, wherein simple solutions are preferable over complex ones
(see Chapter 4). When the rows of A are linearly independent, the most concise solution x is the following (see Exercise 31):

$$ x = A^T (A A^T)^{-1} b $$
²When n < d, we could have an inconsistent system Ax = b with linearly dependent rows and columns in A; an example is the equation pair $\sum_{i=1}^{10} x_i = 1$ and $\sum_{i=1}^{10} x_i = -1$. However, linearly independent rows and n < d guarantee an infinite number of consistent solutions.
Problem 2.8.3 Consider an n × d matrix A with linearly independent rows and n < d.
How many matrices R are there that satisfy AR = In ?
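A sketch of the right-inverse R = A^T(AA^T)^{−1} and the conciseness of the resulting solution, assuming NumPy (the matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 5))                 # wide matrix with independent rows
b = rng.normal(size=3)

R = A.T @ np.linalg.inv(A @ A.T)            # right-inverse: AR = I_n
print(np.allclose(A @ R, np.eye(3)))        # True

x = R @ b                                   # a consistent solution of Ax = b
print(np.allclose(A @ x, b))                # True

# Adding any null-space vector z gives another solution of Ax = b, but a longer
# one, because x lies in the row space of A and is orthogonal to the null space.
z = rng.normal(size=5)
z -= R @ (A @ z)                            # component of z in the null space of A
print(np.linalg.norm(x) <= np.linalg.norm(x + z))   # True: x is the concise solution
```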
$$ J = \underbrace{\|Ax - b\|^2}_{\text{Best Fit}} + \underbrace{\lambda \sum_{i=1}^{d} x_i^2}_{\text{Concise}} $$
The additional term in the objective function is a regularization term, which tends to fa-
vor small absolute components of the vector x. This is precisely the conciseness criterion
discussed in the previous section. The value λ > 0 is the regularization parameter, which
regulates the relative importance of the best-fit term and the conciseness term.
We have not yet introduced the methods required to compute the solution to the above
optimization problem (which are discussed in Section 4.7 of Chapter 4). For now, we ask the
reader to make the leap of faith that this optimization problem has the following alternative forms of the solution:

$$ x = (A^T A + \lambda I_d)^{-1} A^T b = A^T (A A^T + \lambda I_n)^{-1} b $$
It is striking how similar both the above forms are to left- and right-inverses introduced
in the previous section, and they are referred to as the regularized left inverses and right
inverses, respectively. Both solutions turn out to be the same because of the push-through
82 CHAPTER 2. LINEAR TRANSFORMATIONS AND LINEAR SYSTEMS
identity (cf. Problem 1.2.13 of Chapter 1). An important difference of the regularized form of
the solution from the previous section is that both the matrices (AT A+λId ) and (AAT +λIn )
are always invertible for λ > 0 (see Problem 2.4.2), irrespective of the linear independence
of the rows and columns of A. How should the parameter λ > 0 be selected? If our primary goal is to find the best-fit solution, and the (limited) purpose of the regularization term is only to play a tie-breaking role among equally good fits (with the secondary conciseness criterion), it makes sense to allow λ to be infinitesimally small.
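The equivalence of the two regularized forms (a consequence of the push-through identity) can be checked numerically; this sketch assumes NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(4, 6))      # no assumption on shape or rank is needed
b = rng.normal(size=4)
lam = 1e-3

# regularized left-inverse form: (A^T A + lambda I_d)^{-1} A^T b
x_left = np.linalg.solve(A.T @ A + lam * np.eye(6), A.T @ b)
# regularized right-inverse form: A^T (A A^T + lambda I_n)^{-1} b
x_right = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(4), b)
print(np.allclose(x_left, x_right))          # True: the push-through identity
```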
In the limit that λ → 0+ , these (equivalent) matrices are the same as the Moore-Penrose
pseudoinverse. This provides the following limit-based definition:
$$ \lim_{\lambda \to 0^+} (A^T A + \lambda I_d)^{-1} A^T = \lim_{\lambda \to 0^+} A^T (A A^T + \lambda I_n)^{-1} \qquad \text{[Moore-Penrose Pseudoinverse]} $$
Note that λ approaches 0 from the right, and the function can be discontinuous at λ = 0 in
the most general case. The conventional inverse, the left-inverse, and the right-inverse are
special cases of the Moore-Penrose pseudoinverse. When the matrix A is invertible, all four
inverses are the same. When only the columns of A are linearly independent, the Moore-
Penrose pseudoinverse is the left-inverse. When only the rows of A are linearly independent,
the Moore-Penrose pseudoinverse is the right-inverse. When neither the rows nor columns of
A are linearly independent, the Moore-Penrose pseudoinverse provides a generalized inverse
that none of these special cases can provide. Therefore, the Moore-Penrose pseudoinverse
respects both the best-fit and the conciseness criteria like the left- and right inverses.
The Moore-Penrose pseudoinverse is computed as follows. An n × d matrix A of rank r
has a generalized QR decomposition of the form A = QR, where Q is an n × r matrix with
orthonormal columns, and R is a rectangular r × d upper-triangular matrix of full row rank.
The matrix RRT is therefore invertible. Then, the pseudoinverse of A is as follows:
$$ A^{+} = \lim_{\lambda \to 0^+} (R^T R + \lambda I_d)^{-1} R^T Q^T = \lim_{\lambda \to 0^+} R^T (R R^T + \lambda I_r)^{-1} Q^T = R^T (R R^T)^{-1} Q^T $$

We used Q^T Q = I in the first step and the push-through identity in the second step. Another
approach using singular value decomposition is discussed in Section 7.4.4.
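A sketch of this pseudoinverse computation, assuming NumPy and assuming that the near-zero diagonal entries of the library's QR output correspond to the dependent columns (as in this rank-1 example):

```python
import numpy as np

def pinv_via_qr(A, tol=1e-10):
    """Moore-Penrose pseudoinverse via a generalized QR decomposition A = QR,
    using the formula A^+ = R^T (R R^T)^{-1} Q^T."""
    Q, R = np.linalg.qr(A)
    keep = np.abs(np.diag(R)) > tol          # drop columns/rows for dependent directions
    Q, R = Q[:, keep], R[keep, :]
    return R.T @ np.linalg.inv(R @ R.T) @ Q.T

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 0.0]])                   # rank-1 matrix
print(np.allclose(pinv_via_qr(A), np.linalg.pinv(A)))   # True
```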
Figure 2.9: The projection of the 3-dimensional vector b on to its closest 3-dimensional point b′ lying on a 2-dimensional plane defined by the columns of the 3 × 2 matrix A is shown for the inconsistent system Ax = b. Multiplying b with the 3 × 3 projection matrix yields b′.

$$ R x = Q^T b' = Q^T Q Q^T b = Q^T b \qquad (2.19) $$
Note that the matrix A is singular, whereas the matrix A′ is invertible. The matrix A′ could easily have been created by computer finite-precision errors during the computation of what was intended to be A. The inverse of the matrix can be approximated as follows:

$$ A'^{-1} \approx \frac{10^8}{2} \begin{bmatrix} 1 + 10^{-8}/2 & -1 + 10^{-8}/2 \\ -1 + 10^{-8}/2 & 1 + 10^{-8}/2 \end{bmatrix} = \frac{10^8}{2} \begin{bmatrix} 1.000000005 & -0.999999995 \\ -0.999999995 & 1.000000005 \end{bmatrix} $$
It is evident that the inverse contains very large entries, and many entries need to be
represented to a very high degree of precision in order to perform accurate multiplication
with the original matrix. The combination of the two is a deadly cocktail because of the
disproportionate effect of round-off errors and the possibility of numerical overflows in some
cases. In order to understand the problematic aspects of this type of inversion, consider the case where one tries to solve the system of equations A′x = b. One of the properties of A′ is that A′x is always non-zero for any non-zero x (because the matrix A′ is non-singular), but the value of the norm ‖A′x‖ will vary a lot. For example, choosing x = [1, 1]^T will result in ‖A′x‖ ≈ 2√2. On the other hand, choosing x = [1, −1]^T will result in ‖A′x‖ = 10⁻⁸√2.
This type of variation can cause numerical problems in near-singular systems. Since the entries of A′⁻¹ are very large, small changes in b can lead to large and unstable changes in the solution x. The resulting solutions might sometimes not be semantically meaningful, if the non-singularity of A′ was caused by computational errors. For example, one would always be able to find a solution to A′x = b, but in some cases the solution might be so large as to cause a numerical overflow (caused by the magnification of a tiny computational error). In the above case, using b = [1, −1]^T leads to a solution in which all entries are of the order of 10⁸. The problem of ill-conditioning is ubiquitous in matrix operations and linear algebra. One can quantify the ill-conditioning of a square and invertible matrix A with the notion of condition numbers:
Definition 2.9.1 (Condition Number) Let A be a d × d invertible matrix. Let ‖Ax‖/‖x‖ be the scaling ratio of the non-zero vector x. Then, the condition number of A is defined as the ratio of the largest scaling ratio of A (over all d-dimensional vectors) to the smallest scaling ratio over all d-dimensional vectors.
The smallest possible condition number of 1 occurs for the identity matrix (or any orthog-
onal matrix). After all, orthogonal matrices only rotate or reflect a vector without scaling
it. Singular matrices have undefined condition numbers, and near-singular matrices have
extremely large condition numbers. One can compute the condition number of a matrix
using a method called singular value decomposition (cf. Section 7.4.4.1 of Chapter 7). The
intuitive idea is that singular value decomposition tells us about the various scale factors in a
linear transformation (also referred to as singular values). Therefore, the ratio of the largest
to smallest scale factor gives us the condition number. See Section 7.4.4.1 of Chapter 7 on
methods for solving ill-conditioned systems.
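As an illustration, assuming NumPy and assuming a near-singular matrix consistent with the approximate inverse displayed above, the condition number equals the ratio of the extreme scaling ratios:

```python
import numpy as np

eps = 1e-8
A = np.array([[1 + eps / 2, 1 - eps / 2],
              [1 - eps / 2, 1 + eps / 2]])   # near-singular (assumed form of A')

# scaling ratios ||Ax|| / ||x|| in the two extreme directions
x1 = np.array([1.0, 1.0]) / np.sqrt(2)
x2 = np.array([1.0, -1.0]) / np.sqrt(2)
print(np.linalg.norm(A @ x1))    # ~2: the largest scaling ratio
print(np.linalg.norm(A @ x2))    # ~1e-8: the smallest scaling ratio
print(np.linalg.cond(A))         # ~2e8: the ratio of the two
```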
When the linear transformation A is a rotreflection matrix, the matrix S is the identity
matrix, and the inner product specializes to the normal dot product. The inner product
also induces cosines and distances with respect to transformation A:
It is easy to see that the induced distances and angles correspond to our normal geometric understanding of lengths and angles after using the matrix A to perform a linear transformation on the vectors. The value √⟨x − y, x − y⟩ is referred to as a metric, which satisfies all laws of Euclidean geometry, such as the triangle inequality. This is not particularly surprising, given that it is a Euclidean distance in the transformed space.
A more general definition of inner products that works beyond Rn (e.g., for abstract
vector spaces) is based on particular axiomatic rules that need to be followed:
Every finite-dimensional inner product ⟨x, y⟩ in R^n satisfying the above axioms can be shown to be equivalent to x^T Sy for some carefully chosen Gram matrix S = A^T A. Therefore, at least for finite-dimensional vector spaces in R^n, the linear transformation definition and the axiomatic definition of ⟨x, y⟩ are equivalent. The following exercise shows how such a matrix S can be constructed from the axiomatic definition of an inner product:
Problem 2.10.2 Suppose that you are given all n × n real-valued inner products between pairs drawn from n linearly independent vectors in R^n. Show how you can compute ⟨x, y⟩ for any x, y ∈ R^n using the basic axioms of inner products.
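A sketch of the equivalence between the two definitions, assuming NumPy (the transformation A is randomly generated for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(3, 3))          # the transformation defining the inner product
S = A.T @ A                          # Gram matrix S = A^T A

def inner(x, y):
    return x @ S @ y                 # <x, y> = x^T S y

x, y = rng.normal(size=3), rng.normal(size=3)
print(np.allclose(inner(x, y), (A @ x) @ (A @ y)))    # True: dot product after transforming
print(np.allclose(np.sqrt(inner(x - y, x - y)),
                  np.linalg.norm(A @ (x - y))))       # True: the induced metric
```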
The angle θ must be expressed in radians for this formula to hold. Therefore, a complex
number may be represented as r · exp(iθ). The polar representation is very convenient in the
context of many linear algebra operations. This is because the multiplication of two complex
numbers is a simple matter of adding angular exponents and multiplying their magnitudes.
This property is used in various types of matrix products.
One can define a vector space over the complex domain using the same additive and
multiplicative properties over C n as in Rn :
2. If x, y ∈ V, then x + y ∈ V.
Here, it is important to note that the multiplicative scalar is drawn from the complex domain.
For example, the value of c could be a number such as 1 + i. This is an important difference
from Definition 2.3.2 on real-valued vector spaces. The consequence of this fact is that one
can still use the standard basis e1 . . . en to represent any vector in C n . Here, each ei is
an n-dimensional vector with a 1 in the ith entry, and a 0 in all other entries. Although
e_i has real components, all real vectors are special cases of complex-valued vectors. Any vector x = [x_1 . . . x_n]^T ∈ C^n can be expressed in terms of the standard basis, where the ith coordinate is the complex number x_i. The key point is that the coordinates can also be
complex values, since the vector space is defined over the complex field. We need to be able
to perform operations such as projections in order to create coordinate representations. This
is achieved with the notion of complex inner products.
As in the case of real inner products, one wants to retain geometric properties of Eu-
clidean spaces (like notions of lengths and angles). Generalizing inner products from the
real domain to the complex domain can be tricky. In real-valued Euclidean spaces, the
dot product of the vector with itself provides the squared norm. This definition does not
work for complex vectors. For example, a blind computation of the real-valued definition of the squared norm of v = [1, 2i]^T results in the following:

$$ v^T v = [1, 2i] \begin{bmatrix} 1 \\ 2i \end{bmatrix} = 1^2 + 4i^2 = 1 - 4 = -3 \qquad (2.20) $$
We obtain a negative value for the squared norm, which is intended to be a proxy for the squared length. Therefore, we need modified axioms for the complex-valued inner product ⟨u, v⟩:
The superscript '∗' indicates the conjugate of a complex number, which is obtained by negating the imaginary part of the number. The inner product computation of Equation 2.20 is invalid because it violates the positive definite property.
For a scalar complex number, its squared norm is defined by its product with its conju-
gate. For example, the squared norm of a + ib is (a − ib)(a + ib) = a2 + b2 . In the case of
vectors, we can combine transposition with conjugation in order to define inner products.
The conjugate transpose of a complex vector or matrix is defined as follows:
Definition 2.11.2 (Conjugate Transpose of Vector and Matrix) The conjugate trans-
pose v ∗ of a complex vector v is obtained by transposing the vector and replacing each entry
with its complex conjugate. The conjugate transpose V ∗ of a complex matrix V is obtained
by transposing the matrix and replacing each entry with its complex conjugate.
Therefore, the conjugate transpose of [1, 2i]T is [1, −2i], and the conjugate transpose of
[1 + i, 2 + 3i]T is [1 − i, 2 − 3i].
A popular way of defining4 the inner product between vectors u, v ∈ C n , which is the
direct analog of the dot product, is the following:
⟨u, v⟩ = u∗ v    (2.21)
4 Some authors define ⟨u, v⟩ = v ∗ u (which is a conjugate of the definition here). The choice does not matter, as long as it is used consistently.
The inner product can be a complex number. Unlike vectors in Rn , the inner product is not commutative over the complex domain, because ⟨u, v⟩ is the complex conjugate of ⟨v, u⟩ (i.e., the conjugate symmetry property). The squared norm of a vector v ∈ C n is defined as v ∗ v rather than v^T v; this is the inner product of the vector with itself. Based on this definition, the squared norm of [1, 2i]T is [1, −2i][1, 2i]T , which is 1² + 2² = 5. Similarly, the squared
norm of [1 + i, 2 + 3i]T is (1 + i)(1 − i) + (2 + 3i)(2 − 3i) = 1 + 1 + 4 + 9 = 15. Note that
both are positive, which is consistent with the positive definite property.
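The contrast between the naive dot product and the conjugate-based inner product is easy to check numerically. The following sketch (assuming NumPy, which the text itself does not use) reproduces the computations above; note that np.vdot conjugates its first argument, matching the definition ⟨u, v⟩ = u∗v:

```python
import numpy as np

# Naive squared "norm" of v = [1, 2i]^T via v^T v (no conjugation) is negative.
v = np.array([1, 2j])
naive = v @ v                # v^T v = 1 + 4 i^2 = -3
correct = np.vdot(v, v)      # v* v; np.vdot conjugates its first argument

assert np.isclose(naive.real, -3.0)
assert np.isclose(correct.real, 5.0)

# Squared norm of [1+i, 2+3i]^T is 15, consistent with positive definiteness.
w = np.array([1 + 1j, 2 + 3j])
assert np.isclose(np.vdot(w, w).real, 15.0)

# Conjugate symmetry: <u, v> is the complex conjugate of <v, u>.
u = np.array([1j, 2.0])
assert np.isclose(np.vdot(u, v), np.conj(np.vdot(v, u)))
```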
As in the real domain, two complex vectors are orthogonal when their inner product is 0. In such a case, both of the complex conjugates ⟨u, v⟩ and ⟨v, u⟩ are zero.
Definition 2.11.3 (Orthogonality in C n ) Two vectors u and v from C n are orthogonal
if and only if u∗ v = v ∗ u = 0.
An orthonormal set of vectors in C n corresponds to any set of vectors v 1 . . . v n , such that v ∗i v j is 1 when i = j, and 0 otherwise. Note that the standard basis is also orthonormal in C n . As in the real domain, an n × n matrix containing orthonormal columns from C n is referred to as orthogonal or unitary.
Definition 2.11.4 (Orthogonal Matrix with Complex Entries) A matrix V with
complex-valued entries is orthogonal or unitary if and only if V V ∗ = V ∗ V = I.
It is relatively easy to compute the inverse of orthogonal matrices by simply computing
their conjugate transposes. This idea has applications to the discrete Fourier transform.
One of the simplifications above uses the fact that exp(iθ) is 1 when θ is a multiple of 2π.
90 CHAPTER 2. LINEAR TRANSFORMATIONS AND LINEAR SYSTEMS
One can, therefore, create a basis matrix B whose columns contain the basis vectors b1 . . . bn .
For example, the 8 × 8 basis matrix for the transformation of vectors in C 8 has entries

B(j, k) = (1/√8) · exp(2πi · jk/8),   for j, k ∈ {0, 1, . . . , 7},

where j indexes the rows and k indexes the columns (both starting at 0). The first row and first column contain only the value 1/√8, and every other entry is a power of the primitive eighth root of unity exp(2πi/8), scaled by 1/√8.
The matrix B is orthogonal, and therefore the basis transformation is length-preserving.
Given a complex-valued time-series s from C 8 , one can transform it to the Fourier basis by
solving the system of equations Bx = s. The solution to this system is simply x = B ∗ s,
which provides the complex coefficients of the series. As a practical matter, the approach
is used for real-valued time series. For example, consider our running example of the time-
series s = [8, 6, 2, 3, 4, 6, 6, 5]T , which is used in Section 2.3.4 on the wavelet transform.
One can simply pretend that this series is a special case of a complex-valued series, and
compute the Fourier coefficients as x = B ∗ s. The main problem with this approach is
that it transforms a series from R8 to C 8 , since the coordinates in x will have imaginary
components. A naı̈ve solution to this problem is to create a representation in R16 that
contains both real and imaginary parts of each component of x. Therefore, the Fourier
transformation contains twice the number of real-valued coefficients as the original series.
This increase is a consequence of treating a real-valued time-series as a special case of
a complex-valued series. Because of the real-valued nature of the original series, wasteful redundancy exists in the coordinate vector x: with components indexed starting at 0, the kth component of x is always the complex conjugate of the (8 − k)th component for k ∈ {1, . . . , 7}. Therefore, one can keep only the first four
components of the vector x ∈ C 8 and unroll the real and imaginary components of these
four complex numbers into R8 . Furthermore, one sets the small Fourier coefficients to zero
in practice, which leads to space-efficient sparse vector representations.
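The full pipeline above can be sketched numerically (assuming NumPy and components indexed from 0; the matrix B is the one defined in this section):

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
B = np.exp(2j * np.pi * j * k / n) / np.sqrt(n)   # B(j,k) = exp(2*pi*i*jk/8)/sqrt(8)

# B is unitary, so the system Bx = s is solved by x = B* s.
assert np.allclose(B.conj().T @ B, np.eye(n))

s = np.array([8, 6, 2, 3, 4, 6, 6, 5], dtype=complex)
x = B.conj().T @ s
assert np.allclose(B @ x, s)        # the coefficients reconstruct the series

# Redundancy for a real series: x[k] is the conjugate of x[8 - k] for k = 1..7,
# so the first few components carry all of the information.
for kk in range(1, n):
    assert np.isclose(x[kk], np.conj(x[n - kk]))
```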
Problem 2.11.1 Use the 8 × 8 Fourier matrix proposed in this section in order to create
the Fourier representation of s = [8, 6, 2, 3, 4, 6, 6, 5]T .
2.12 Summary
Machine learning applications often use additive and multiplicative transformations with
matrices, which correspond to the fundamental building blocks of linear algebra. These
building blocks are utilized for different types of decompositions such as the QR decom-
position and the LU decomposition. These decompositions are the workhorses of solution methodologies for many matrix-centric problems in machine learning. Specific examples include solving systems of linear equations and linear regression.
2.14 Exercises
1. If we have a square matrix A that satisfies A² = I, it is always the case that A = ±I.
Either prove the statement or provide a counterexample.
2. Show that the matrices A, AAT , and AT A must always have the same rank for any
n × d matrix A. Start by showing that Ax = 0 if and only if AT Ax = 0.
5. Use each of row reduction and Gram-Schmidt to find basis sets for the span of
{[1, 2, 1]T , [2, 1, 1]T , [3, 3, 2]T }. What are the best-fit coordinates of [1, 1, 1]T in each
of these basis sets? Verify that the best-fit vector is the same in the two cases.
7. A d × d skew symmetric matrix satisfies AT = −A. Show that all diagonal elements
of such a matrix are 0. Show that each x ∈ Rd is orthogonal to Ax if and only if A is
skew symmetric. What is the difference from a pure rotation by 90◦ ?
8. Consider the 4 × 4 Givens matrix Gc (2, 4, 90) based on the notations on page 47. This
matrix performs a 90◦ counter-clockwise rotation of a 4-dimensional column vector
in the plane of the second and fourth dimensions. Show how to obtain this matrix
as the product of two Householder reflection matrices. Think geometrically based on
Section 2.2 in order to solve this problem. Is the answer to this question unique?
9. Repeat Exercise 8 for a Givens matrix that rotates a column vector counter-clockwise
for 10◦ instead of 90◦ .
10. Consider the 5 × 5 matrices A, B, and C, with ranks 5, 2, and 4, respectively. What are the minimum and maximum possible ranks of (A + B)C?
11. Solve the following system of equations using the Gaussian elimination procedure:
[ 0 1 1 ] [ x1 ]   [ 2 ]
[ 1 1 1 ] [ x2 ] = [ 3 ]
[ 1 2 1 ] [ x3 ]   [ 4 ]
12. Solve the system of equations in the previous exercise using QR decomposition. Use the
Gram-Schmidt method for orthogonalization. Use the QR decomposition to compute
the inverse of the matrix if it exists.
13. Why must the column space of the matrix AB be a subspace of the column space of A? Show that all four fundamental subspaces of A^{k+1} must be the same as those of A^k for some integer k.
14. Consider a vector space V ⊂ R3 and two of its possible basis sets B1 =
{[1, 0, 1]T , [1, 1, 0]T } and B2 = {[0, 1, −1]T , [2, 1, 1]T }. Show that B1 and B2 are ba-
sis sets for the same vector space. What is the dimensionality of this vector space?
Now consider a vector v ∈ V with coordinates [1, 2]T in basis B1 , where the order
of coordinates matches the order of listed basis vectors. What is the standard basis
representation of v? What are the coordinates of v in B2 ?
15. Find the projection matrix of the following matrix using the QR method:
    [ 3 6 ]
A = [ 0 1 ]
    [ 4 8 ]
How can you use the projection matrix to determine whether the vector b = [1, 1, 0]T
belongs to the column space of A? Find a solution (or best-fit solution) to Ax = b.
16. For the problem in Exercise 15, does a solution exist to AT x = c, where c = [2, 2]T ?
If no solution exists, find the best-fit solution. If one or more solutions exist, find the one for which the norm ‖x‖ is as small as possible.
17. Gram-Schmidt with Projection Matrix: Given a set of m < n linearly indepen-
dent vectors a1 . . . am in Rn , let Ar be the n×r matrix defined as Ar = [a1 , a2 , . . . , ar ]
for each r ∈ {1 . . . m}. Show the result that after initializing q 1 = a1 , the unnormalized Gram-Schmidt vectors q 2 . . . q m of a2 . . . am can be computed non-recursively using the projection matrix Ps = As (As^T As )^{−1} As^T as follows:

q s+1 = (I − Ps ) as+1 ,   for each s ∈ {1 . . . m − 1}
18. Consider a d × d matrix A such that its right null space is identical to its column
space. Show that d is even, and provide an example of such a matrix.
19. Show that the columns of the n × d matrix A are linearly independent if and only if
f (x) = Ax is a one-to-one function.
20. Consider an n × n matrix A. Show that if the length of the vector Ax is strictly less
than that of the vector x for all non-zero x ∈ Rn , then (A − I) is invertible.
21. It is intuitively obvious that an n × n projection matrix P will always satisfy ‖P b‖ ≤ ‖b‖ for any b ∈ Rn , since it projects b on a lower-dimensional hyperplane. Show algebraically that ‖P b‖ ≤ ‖b‖ for any b ∈ Rn . [Hint: Express the rank-d projection matrix as P = QQT for an n × d matrix Q, and start by showing ‖QQT b‖ = ‖QT b‖. What is the geometric interpretation of QT b and QQT b?]
22. Let A be a 10 × 10 matrix. If A2 has rank 6, find the minimum and maximum possible
ranks of A. Give examples of both matrices.
24. Show that every n × n Householder reflection matrix can be expressed as Q1 QT1 −
Q2 QT2 , where concatenating the columns of Q1 and Q2 creates an n×n orthogonal ma-
trix, and Q2 contains a single column. What is the nature of the linear transformation,
when Q2 contains more than one column?
25. Show that if B^k has the same rank as that of B^{k+1} for a particular value of k ≥ 1, then B^k has the same rank as B^{k+r} for all r ≥ 1.

26. Show that if an n × n matrix B has rank (n − 1), and the matrix B^k has rank (n − k), then each matrix B^r for r from 1 to k has rank (n − r). Show how to construct a chain of vectors v 1 . . . v k so that Bv i = v i−1 for i > 1, and Bv 1 = 0. [Note: You will encounter a similar but more complex Jordan chain in Chapter 3.]

27. Suppose that B^k v = 0 for a particular vector v for some k ≥ 2, and B^r v ≠ 0 for all r < k. Show that the vectors v, Bv, B² v, . . . , B^{k−1} v must be linearly independent.
31. Right-inverse yields concise solution: Let x = v be any solution to the consistent
system Ax = b with n × d matrix A containing linearly independent rows. Let v r =
AT (AAT )−1 b be the solution given by the right inverse. Then, show the following:

‖v‖² = ‖v − v_r‖² + ‖v_r‖² + 2 v_r^T (v − v_r) ≥ ‖v_r‖² + 2 v_r^T (v − v_r)
32. Show that any 2×2 Givens rotation matrix is a product of at most two Householder re-
flection matrices. Think geometrically before wading into the algebra. Now generalize
the proof to d × d matrices.
33. Show algebraically that if two tall matrices of full rank have the same column space,
then they have the same projection matrix.
34. Construct 4 × 3 matrices A and B of rank 2 that are not multiples of one another,
but with the same four fundamental subspaces of linear algebra. [Hint: A = U V .]
35. Show that any 2 × 2 Householder reflection matrix (I − 2v v^T ), where v is a unit vector, can be expressed as follows:

(I − 2v v^T ) = [ cos(θ)   sin(θ) ]
                [ sin(θ)  −cos(θ) ]

Relate v to θ geometrically.
36. Show how any vector v ∈ Rn can be transformed to w ∈ Rn as w = c Hv, where c
is a scalar and H is an n × n Householder reflection matrix. Think geometrically to
solve this exercise.
37. A block upper-triangular matrix is a generalization of a block diagonal matrix (cf. Sec-
tion 1.2.3) that allows non-zero entries above the square, diagonal blocks. Consider a
block upper-triangular matrix with invertible diagonal blocks. Make an argument why
such a matrix is row equivalent to an invertible block diagonal matrix. Generalize the
backsubstitution method to solving linear equations of the form Ax = b when A is
block upper-triangular. You may assume that the diagonal blocks are easily invertible.
38. If P is a projection matrix, show that (P + λI) is invertible for any λ > 0. [Hint: Show that xT (P + λI)x > 0 for all x ≠ 0, and therefore (P + λI)x ≠ 0.]
39. If R is a Householder reflection matrix, show that (R + I) is always singular, and that (R + λI) is invertible for any λ ∉ {1, −1}.
40. Length-preserving transforms are orthogonal: We already know that if A is an n × n orthogonal matrix, then ‖Ax‖ = ‖x‖ for all x ∈ Rn . Prove the converse of this result: if ‖Ax‖ = ‖x‖ for all x ∈ Rn , then A is orthogonal.
41. Let A be a square n × n matrix so that (A + I) has rank (n − 2). Let f (x) be
the polynomial f (x) = x3 + x2 + x + 1. Show that f (A) has rank at most (n − 2).
Furthermore, show that f (A) has rank exactly (n − 2) if A is symmetric.
42. Suppose that a d × d matrix A exists along with d vectors x1 . . . xd so that xiT Axj is zero if and only if i ≠ j. Show that the vectors x1 . . . xd are linearly independent. Note that A need not be symmetric.
43. Suppose that a d × d symmetric matrix S exists along with d vectors x1 . . . xd so that xiT Sxj is zero when i ≠ j and positive when i = j. Show that ⟨x, y⟩ = xT Sy is a valid inner product over all x, y ∈ Rd . [Hint: The positive definite axiom is the hard part.]
44. Cauchy-Schwarz and triangle inequality for general inner products: Let u and v be two vectors for which ⟨u, u⟩ = ⟨v, v⟩ = 1. Show using only the inner-product axioms that |⟨u, v⟩| ≤ 1. Now show the more general Cauchy-Schwarz inequality by defining u and v appropriately in terms of x and y:

|⟨x, y⟩| ≤ √⟨x, x⟩ · √⟨y, y⟩
Now use this result (and the inner-product axioms) to prove the triangle inequality for the triangle formed by x, y, and the origin:

√⟨x − y, x − y⟩ ≤ √⟨x, x⟩ + √⟨y, y⟩
Chapter 3: Eigenvectors and Diagonalizable Matrices

“Mathematics is the art of giving the same name to different things.” – Henri Poincaré
3.1 Introduction
Any square matrix A of size d × d can be considered a linear operator, which maps the
d-dimensional column vector x to the d-dimensional vector Ax. A linear transformation Ax
is a combination of operations such as rotations, reflections, and scalings of a vector x.
A diagonalizable matrix is a special type of linear operator that only corresponds to a
simultaneous scaling along d different directions. These d different directions are referred to
as eigenvectors and the d scale factors are referred to as eigenvalues. All such matrices can
be decomposed using an invertible d × d matrix V and a diagonal d × d matrix Δ:
A = V ΔV −1
The columns of V contain d eigenvectors and the diagonal entries of Δ contain the eigen-
values. For any x ∈ Rd , one can geometrically interpret Ax using the decomposition in
terms of a sequence of three transformations: (i) Multiplication of x with V −1 computes the
coordinates of x in a (possibly non-orthogonal) basis system corresponding to the columns
(eigenvectors) of V , (ii) multiplication of V −1 x with Δ to create ΔV −1 x dilates these co-
ordinates with scale factors in Δ in the eigenvector directions, and (iii) final multiplication
with V to create V ΔV −1 x transforms the coordinates back to the original basis system (i.e.,
the standard basis). The overall result is an anisotropic scaling in d eigenvector directions.
Linear transformations that can be represented in this way correspond to diagonalizable
matrices. A d × d diagonalizable matrix represents a linear transformation corresponding to
anisotropic scaling in d linearly independent directions.
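The three-step interpretation can be verified numerically. A minimal sketch (assuming NumPy; the matrix A below is an arbitrary diagonalizable example, not one from the text):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals, V = np.linalg.eig(A)     # columns of V are eigenvectors
Delta = np.diag(evals)

# The decomposition A = V Delta V^{-1}
assert np.allclose(V @ Delta @ np.linalg.inv(V), A)

# Interpreting Ax as three successive transformations of x:
x = np.array([3.0, -1.0])
coords = np.linalg.inv(V) @ x   # (i) coordinates of x in the eigenvector basis
scaled = Delta @ coords         # (ii) dilate each coordinate by its eigenvalue
result = V @ scaled             # (iii) transform back to the standard basis
assert np.allclose(result, A @ x)
```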
When the columns of matrix V are orthonormal vectors, we have V −1 = V T . In such a case, the scaling is done along mutually orthogonal directions, and the matrix A is always symmetric.
3.2 Determinants
Imagine a scatter plot of n coordinate vectors x1 . . . xn ∈ Rd , which corresponds to the
outline of a d-dimensional object. Multiplying these vectors with a d × d matrix A to create
the vectors Ax1 . . . Axn will result in a distortion of the object. When the matrix A is
diagonalizable, this distortion is fully described by anisotropic scaling, which affects the
“volume” of the object. How can one determine the scale factors of the transformation
implied by multiplication with a matrix? To do so, one must first obtain some notion of
the effect of a linear transformation on the volume of an object. This is achieved by the
notion of the determinant of a square matrix, which can be viewed as a quantification of its
“volume.” A rather loose but intuitive definition of the determinant is as follows:
Definition 3.2.1 (Determinant: Geometric View) The determinant of a d × d matrix
is the (signed) volume of the d-dimensional parallelepiped defined by its row (or column)
vectors.
The determinant of a matrix A is denoted by det(A). The above definition is self-consistent
because the volume defined by the row vectors and the volume defined by the column vectors
of a square matrix can be mathematically shown to be the same. This definition is, however,
Figure 3.2: Parallelepipeds before and after a row operation on the 3 × 3 identity matrix
incomplete because it does not define the sign of det(A). The sign of the determinant
tells us about the effect of multiplication by A on the orientation of the basis system.
For example, a Householder reflection matrix always has a determinant of −1 because it
changes the orientation of the vectors it transforms. It is noteworthy that multiplying an
n × 2 data matrix containing the 2-dimensional scatter plot of a right hand (in its rows)
with a 2 × 2 reflection matrix will change the scatter plot to that of a left hand. The sign
of the determinant keeps track of this orientation effect of the linear transformation. The geometric view is useful because it provides us with an intuitive idea of what the determinant actually computes in terms of absolute values. Consider the following two matrices:
    [ 1 0 0 ]        [ 1 0 0 ]
A = [ 0 1 0 ] ,  B = [ 1 1 0 ]        (3.1)
    [ 0 0 1 ]        [ 0 0 1 ]
The parallelepipeds formed by the rows of each matrix are shown in Figure 3.2(a) and
(b), respectively. The determinant of both matrices can be shown to be 1, and both paral-
lelepipeds have a base area of 1 and a height of 1. The first of these matrices is simply the
identity matrix, which is an orthogonal matrix. An orthogonal matrix always forms a unit
hypercube, and so the absolute value of its determinant is always 1.
A matrix needs to be non-singular (i.e., invertible) in order for the determinant to be
non-zero. For example, if we have a 3 × 3 matrix that has a rank of 2, then all three row
vectors must lie on a 2-dimensional plane. Therefore, the parallelepiped formed by these
three row vectors cannot have a non-zero 3-dimensional volume. The determinant of the
d × d matrix A can also be defined in terms of its (d − 1) × (d − 1) submatrices, where Aij denotes the submatrix obtained by dropping the ith row and jth column of A:

det(A) = ∑_{i=1}^{d} (−1)^{i+j} aij det(Aij )   [Fixed column j]    (3.2)

The above computation fixes a column j, and then expands using all the elements of that column. Any choice of j will yield the same determinant. It is also possible to fix a row i and expand along that row:

det(A) = ∑_{j=1}^{d} (−1)^{i+j} aij det(Aij )   [Fixed row i]    (3.3)
The recursive definition implies that some matrices have easily computable determinants:
• Diagonal matrix: The determinant of a diagonal matrix is the product of its diagonal
entries.
In this case, we can expand along the first column to obtain the following:

det(A) = a · det [ e f ] − d · det [ b c ] + g · det [ b c ]
                 [ h i ]           [ h i ]           [ e f ]
       = a(ei − hf ) − d(bi − hc) + g(bf − ec)
       = aei − ahf − dbi + dhc + gbf − gec
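The recursive definition translates directly into code. The helper det_cofactor below is a hypothetical illustration (assuming NumPy) of expansion along the first column; it takes exponential time and is shown only to mirror the definition:

```python
import numpy as np

def det_cofactor(A):
    """Determinant via cofactor expansion along the first column (j = 0)."""
    d = A.shape[0]
    if d == 1:
        return A[0, 0]
    total = 0.0
    for i in range(d):
        # Minor A_{i0}: drop row i and column 0; the sign is (-1)^(i+0).
        minor = np.delete(np.delete(A, i, axis=0), 0, axis=1)
        total += (-1) ** i * A[i, 0] * det_cofactor(minor)
    return total

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
assert np.isclose(det_cofactor(M), np.linalg.det(M))
```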
The determinant can also be defined directly in terms of permutations. Let Σ denote the set of all d! permutations σ = (σ1 . . . σd ) of {1 . . . d}. The sign sgn(σ) of a permutation is +1 when σ can be obtained from the identity permutation with an even number of element interchanges, and it is −1 otherwise. Then, the determinant of A is defined as follows:

det(A) = ∑_{σ∈Σ} sgn(σ) ∏_{i=1}^{d} a_{iσi}    (3.6)
3. A matrix with two identical rows has a determinant of 0. This also means that adding
or subtracting the multiple of row j of the matrix from row i and using the result
to replace row i does not change the determinant. Note that we are “shearing” the
parallelepiped in the 2-dimensional plane defined by rows i and j (as in Figure 3.2)
without changing its volume.
4. Multiplying a single row of the matrix A with c to create the new matrix A results
in multiplication of the determinant of A by a factor of c (because we are scaling the
volume of the matrix parallelepiped by c).
A natural corollary of the above result is that multiplying the entire d × d matrix by
c scales its determinant by cd .
5. The determinant of a matrix A is non-zero if and only if the matrix is non-singular (i.e., invertible). Geometrically, a parallelepiped of linearly dependent vectors lies in a lower-dimensional plane with zero volume.
These results can also be used to derive an important product-wise property of determinants.
Lemma 3.2.1 The determinant of the product of two matrices A and B is the product of
their determinants:
det(AB) = det(A) · det(B) (3.9)
Proof: Consider two matrices A and B. One can apply the same elementary row addition and interchange operations on A and AB to create matrices A′ and [AB]′ while maintaining A′B = [AB]′. Furthermore, one can apply the same elementary column operations on B and AB to create matrices B′ and [AB]′ while maintaining AB′ = [AB]′. Performing a row addition operation on A or a column addition operation on B has no effect on det(A) · det(B), and there is also no effect on det(AB) when the same row/column operation is performed on AB. Performing a row interchange on A or a column interchange on B has the same negation effect on det(A) · det(B) as on det(AB) when the same operation is performed on AB. By using row addition/interchange operations on A and column addition/interchange operations on B, one can obtain upper-triangular matrices A′ and B′ (see Chapter 2). Note that A′B′ is also upper-triangular, since the product of two upper-triangular matrices is upper-triangular. Furthermore, each diagonal entry of A′B′ is the product of the corresponding diagonal entries of A′ and B′. Since the determinant of an upper-triangular matrix is equal to the product of its diagonal entries, it is easy to show that the product of the determinants of A′ and B′ is equal to the determinant of A′B′. The same result, therefore, holds for A, B, and AB, since the sequence of row and column operations to obtain A′B′ from AB is the same as the concatenation of the sequence of row operations on A and column operations on B to obtain A′ and B′, respectively. As we have already discussed, each of these operations has the same effect on det(A) · det(B) as on det(AB). The result follows.
A corollary of this result is that the determinant of the inverse of a matrix is the inverse of
its determinant:
det(A^{−1}) = det(I)/det(A) = 1/det(A)    (3.10)
The product-wise property of determinants can be geometrically interpreted in terms of
parallelepiped volumes:
1. Multiplying matrix A with matrix B (in any order) always scales up the (parallelepiped) volume of B with the volume of A. Therefore, even though AB ≠ BA (in general), their volumes are always the same.
2. Multiplying matrix A with a diagonal matrix with values λ1 . . . λd along the diagonal
scales up the volume of A with λ1 λ2 . . . λd . This is not particularly surprising because
we are stretching the axes with these factors, which explains the nature of the scaling
of the volume of the underlying parallelepiped.
3. Multiplying A with a rotation matrix simply rotates the parallelepiped, and it does
not change the determinant of the matrix.
4. Reflecting a parallelepiped to its mirror image changes its sign without changing its
volume. The sign of the determinant tells us a key fact about the orientation of the
data created using multiplicative transformation with A. For example, consider an
n × 2 data set D containing the 2-dimensional scatter plot of a right hand in its rows.
A negative determinant of a 2 × 2 matrix A means that multiplicative transformation
of the n × 2 data set D with A will result in a scatter plot of a right hand in D
changing into that of a (possibly stretched and rotated) left hand in DA.
5. Since all linear transformations are combinations of rotations, reflections, and scaling
(see Chapter 7), one can compute the absolute effect of a linear transformation on the
determinant by focusing only on the scaling portions of the transformation.
The product-wise property of determinants is particularly useful for matrices with special
structure. For example, an orthogonal matrix satisfies AT A = I, and therefore we have
det(A)det(AT ) = det(I) = 1. Since the determinants of A and AT are equal, it follows that
the square of the determinant of A is 1.
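These product-wise properties are easy to check numerically (a sketch assuming NumPy; the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B), even though AB != BA in general.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# det(A^{-1}) = 1 / det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1.0 / np.linalg.det(A))

# An orthogonal matrix (e.g., Q from a QR decomposition) has determinant +1 or -1.
Q, _ = np.linalg.qr(A)
assert np.isclose(abs(np.linalg.det(Q)), 1.0)

# Scaling the whole d x d matrix by c scales the determinant by c^d.
c = 2.0
assert np.isclose(np.linalg.det(c * A), c ** 4 * np.linalg.det(A))
```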
A hint for solving the above problem is to use the recursive definition of determinants.
Problem 3.2.4 Work out the determinants of all the elementary row operator matrices
introduced in Chapter 1.
Problem 3.2.5 How can one compute the determinant from the QR decomposition or the LU decomposition of a square matrix?
Problem 3.2.6 Consider a d × d square matrix A such that A = −AT . Use the properties
of determinants to show that if d is odd, then the matrix is singular.
Problem 3.2.7 Suppose that you have a d × d matrix in which the absolute value of every
entry is no greater than 1. Show that the absolute value of the determinant is no greater than d^{d/2} . Provide an example of a 2 × 2 matrix in which the determinant is equal to this upper bound. [Hint: Think about the geometric view of determinants.]
The ith member of the standard basis is an eigenvector of any diagonal matrix, with eigenvalue equal to the ith diagonal entry. All vectors are eigenvectors of the identity matrix.
The number of eigenvectors of a d × d matrix A may vary, but only diagonalizable matrices represent anisotropic scaling in d linearly independent directions; therefore, we need d linearly independent eigenvectors v 1 . . . v d satisfying:

Av i = λi v i ,   ∀i ∈ {1 . . . d}    (3.12)

Stacking the eigenvectors in the columns of the d × d matrix V and the eigenvalues λ1 . . . λd along the diagonal of the d × d matrix Δ, this condition can be written in matrix form:

AV = V Δ    (3.14)

Post-multiplying both sides with V −1 (which exists because the columns of V are linearly independent) yields the diagonalization:

A = V ΔV −1    (3.15)
Since the determinant of a diagonal matrix is equal to the product of its diagonal entries,
the result follows.
The presence of a zero eigenvalue implies that the matrix A is singular because its determinant is zero. One can also infer this fact from the observation that the corresponding eigenvector v satisfies Av = 0. In other words, the matrix A is not of full rank because its null space is non-trivial. A nonsingular, diagonalizable matrix can be inverted easily
according to the following relationship:

A^{−1} = V Δ^{−1} V ^{−1}

Note that Δ^{−1} can be obtained by replacing each eigenvalue in the diagonal of Δ with its reciprocal. Matrices with zero eigenvalues cannot be inverted; the reciprocal of zero is not defined.
It is noteworthy that the ith eigenvector v i belongs to the null space of A − λi I because (A − λi I)v i = 0. In other words, the determinant det(A − λi I) must be zero for each eigenvalue λi . The polynomial expression det(A − λI) in the variable λ, whose roots yield the eigenvalues, is referred to as the characteristic polynomial of A.
Note that this is a degree-d polynomial, which always has d roots (including repeated
or complex roots) according to the fundamental theorem of algebra. The d roots of the
characteristic polynomial of any d × d matrix are its eigenvalues.
2. For each root λi of this polynomial, we solve the system of equations (A − λi I)v = 0
in order to obtain one or more eigenvectors. The linearly independent eigenvectors
with eigenvalue λi , therefore, define a basis of the right null space of (A − λi I).
The characteristic polynomial of the d × d identity matrix is (1 − λ)d . This is consistent with
the fact that an identity matrix has d repeated eigenvalues of 1, and every d-dimensional
vector is an eigenvector belonging to the null space of A − λI. As another example, consider
the following matrix:

B = [ 1 2 ]        (3.19)
    [ 2 1 ]

Then, the matrix B − λI can be written as follows:

B − λI = [ 1 − λ    2   ]        (3.20)
         [   2    1 − λ ]
Setting the determinant (1 − λ)² − 4 of this matrix to zero yields the eigenvalues λ = 3 and λ = −1, respectively. The corresponding eigenvectors are [1, 1]T and [1, −1]T , respectively, which can be obtained from the null spaces of each (B − λi I).
We need to diagonalize B as V ΔV −1 . The matrix V can be constructed by stacking the
eigenvectors in columns. The normalization of columns is not unique, although choosing V
to have unit columns (which results in V −1 having unit rows) is a common practice. One
can then construct the diagonalization B = V ΔV −1 as follows:
B = [ 1/√2   1/√2 ] [ 3   0 ] [ 1/√2   1/√2 ]
    [ 1/√2  −1/√2 ] [ 0  −1 ] [ 1/√2  −1/√2 ]
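A quick numerical check of this diagonalization (assuming NumPy):

```python
import numpy as np

B = np.array([[1.0, 2.0],
              [2.0, 1.0]])
V = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # normalized eigenvectors in columns
Delta = np.diag([3.0, -1.0])               # eigenvalues 3 and -1

assert np.allclose(V @ Delta @ np.linalg.inv(V), B)
# Here V has orthonormal columns, so its inverse is simply its transpose.
assert np.allclose(np.linalg.inv(V), V.T)
```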
Problem 3.3.2 Find the eigenvectors, eigenvalues, and a diagonalization of each of the
following matrices:
A = [  1 0 ] ,  B = [  1 1 ]
    [ −1 2 ]        [ −2 4 ]
Problem 3.3.3 Consider a d × d matrix A such that A = −AT . Show that all non-zero
eigenvalues would need to occur in pairs, such that one member of the pair is the negative
of the other.
One can compute a polynomial of a square matrix A in the same way as one computes
the polynomial of a scalar — the main differences are that non-zero powers of the scalar
are replaced with powers of A and that the scalar term c in the polynomial is replaced by
c I. When one computes the characteristic polynomial in terms of its matrix, one always
obtains the zero matrix! For example, if the matrix B is substituted in the aforementioned characteristic polynomial λ² − 2λ − 3, we obtain the matrix B² − 2B − 3I:

B² − 2B − 3I = [ 5 4 ] − 2 [ 1 2 ] − 3 [ 1 0 ] = 0
               [ 4 5 ]     [ 2 1 ]     [ 0 1 ]
This result is referred to as the Cayley-Hamilton theorem, and it is true for all matrices
whether they are diagonalizable or not.
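The theorem is easy to verify numerically, both for the matrix B above and for a non-symmetric example (a sketch assuming NumPy; np.poly returns the coefficients of the characteristic polynomial of a square matrix):

```python
import numpy as np

# The symmetric example from the text: characteristic polynomial x^2 - 2x - 3.
B = np.array([[1.0, 2.0],
              [2.0, 1.0]])
residual = B @ B - 2 * B - 3 * np.eye(2)
assert np.allclose(residual, 0.0)

# A non-symmetric 3 x 3 example, chosen arbitrarily for illustration.
C = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [2.0, -5.0, 4.0]])
coeffs = np.poly(C)   # highest-degree coefficient first
f_C = sum(c * np.linalg.matrix_power(C, len(coeffs) - 1 - i)
          for i, c in enumerate(coeffs))
assert np.allclose(f_C, 0.0)   # Cayley-Hamilton: f(C) is the zero matrix
```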
The Cayley-Hamilton theorem is true in general for any square matrix A, but it can be proved more easily in some special cases. For example, when A is diagonalizable, it is easy to show the following for any polynomial function f (·):

f (A) = V f (Δ)V −1
Proof: The constant term in the characteristic polynomial is (up to sign) the product of the eigenvalues, which is non-zero in the case of nonsingular matrices. Therefore, only in the case of nonsingular matrices can we write the Cayley-Hamilton matrix polynomial f (A) in the form f (A) = A[g(A)] + cI for some scalar constant c ≠ 0 and matrix polynomial g(A) of degree (d − 1). Since the Cayley-Hamilton polynomial f (A) evaluates to zero, we can rearrange the expression above to obtain A [−g(A)/c] = I. In other words, the inverse of A is given by A^{−1} = −g(A)/c.
Problem 3.3.4 Show that any matrix polynomial of a d × d matrix can always be reduced
to a matrix polynomial of degree at most (d − 1).
The above lemma explains why the inverse shows many special properties (e.g., commutativ-
ity of multiplication with inverse) shown by matrix polynomials. Similarly, both polynomials
and inverses of triangular matrices are triangular. Triangular matrices contain eigenvalues
on the main diagonal.
Lemma 3.3.4 Let A be a d × d triangular matrix. Then, the entries λ1 . . . λd on its main
diagonal are its eigenvalues.
Proof: Since A − λi I is singular for any eigenvalue λi , it follows that at least one of the
diagonal values of the triangular matrix A − λi I must be zero. This can only occur if λi is
a diagonal entry of A. The converse can be shown similarly.
Consider the 90◦ rotation matrix A whose first row is [0, −1] and whose second row is [1, 0]. The characteristic polynomial of A is (λ² + 1), which does not have any real-valued roots.
The two complex roots of the polynomial are −i and i. The corresponding eigenvectors are
[−i, 1]T and [i, 1]T , respectively, and these eigenvectors can be found by solving the linear
systems (A − iI)x = 0 and (A + iI)x = 0. Solving a system of linear equations on a complex
field of coefficients is fundamentally not different from how it is done in the real domain.
We verify that the corresponding eigenvectors satisfy the eigenvalue scaling condition:
[[0, −1], [1, 0]] [−i, 1]ᵀ = −i [−i, 1]ᵀ,    [[0, −1], [1, 0]] [i, 1]ᵀ = i [i, 1]ᵀ
108 CHAPTER 3. EIGENVECTORS AND DIAGONALIZABLE MATRICES
Each eigenvector is rotated by 90◦ because of multiplication with i or −i. One can then put
these eigenvectors (after normalization) in the columns of V , and compute the matrix V −1 ,
which is also a complex matrix. The resulting diagonalization of A is as follows:
A = V ΔV⁻¹ = [[−i/√2, i/√2], [1/√2, 1/√2]] · [[−i, 0], [0, i]] · [[i/√2, 1/√2], [−i/√2, 1/√2]]
It is evident that the use of complex numbers greatly extends the family of matrices that
can be diagonalized. In fact, one can write the family of 2 × 2 rotation matrices at an angle
θ (in radians) as follows:
[[cos(θ), −sin(θ)], [sin(θ), cos(θ)]] = [[−i/√2, i/√2], [1/√2, 1/√2]] · [[e^(−iθ), 0], [0, e^(iθ)]] · [[i/√2, 1/√2], [−i/√2, 1/√2]]    (3.21)
From Euler’s formula, it is known that eiθ = cos(θ)+i sin(θ). It seems geometrically intuitive
that multiplying a vector with the mth power of a θ-rotation matrix should rotate the
vector m times to create an overall rotation of mθ. The above diagonalization also makes it
algebraically obvious that the mth power of the θ-rotation matrix yields a rotation of mθ,
because the diagonal entries in the mth power become e±i mθ .
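This power-of-a-rotation argument is easy to check numerically. The sketch below (with an arbitrarily chosen angle of 30°) builds the complex diagonalization of Equation 3.21 and confirms that exponentiating only the diagonal entries produces a rotation by mθ:

```python
import numpy as np

theta = np.pi / 6  # illustrative 30-degree rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Complex diagonalization of the rotation matrix (Equation 3.21).
V = np.array([[-1j, 1j],
              [1.0, 1.0]]) / np.sqrt(2)
Delta = np.diag([np.exp(-1j * theta), np.exp(1j * theta)])
assert np.allclose(V @ Delta @ np.linalg.inv(V), R)

# The m-th power exponentiates the diagonal entries to e^{+-i m theta},
# which is exactly a rotation by m*theta.
m = 5
R_m = (V @ Delta**m @ np.linalg.inv(V)).real
expected = np.array([[np.cos(m * theta), -np.sin(m * theta)],
                     [np.sin(m * theta),  np.cos(m * theta)]])
assert np.allclose(R_m, expected)
```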
Problem 3.3.5 Show that all complex eigenvalues of a real matrix must occur in conjugate
pairs of the form a + bi and a − bi. Also show that the corresponding eigenvectors occur in similar pairs p + iq and p − iq.
Lemma 3.3.5 If a matrix A is symmetric then each of its left eigenvectors is a right
eigenvector after transposing the row vector into a column vector. Similarly, transposing
each right eigenvector results in a row vector that is a left eigenvector.
Proof: Let y be a left eigenvector. Then, we have (yA)T = λy T . The left-hand side can
be simplified to AT y T = Ay T . Re-writing with the simplified left-hand side, we have the
following:
Ay T = λy T (3.22)
Therefore, y T is a right eigenvector of A. A similar approach can be used to show that each
right eigenvector is a left eigenvector after transposition.
This relationship between left and right eigenvectors holds only for symmetric matrices.
How about the eigenvalues? It turns out that the left eigenvalues and right eigenvalues
are the same irrespective of whether or not the matrix is symmetric. This is because the
characteristic polynomial in both cases is det(A − λI) = det(AT − λI).
3.3. DIAGONALIZABLE TRANSFORMATIONS AND EIGENVECTORS 109
For a diagonalizable matrix A = V ΔV⁻¹, transposing the diagonalization yields Aᵀ = (V ΔV⁻¹)ᵀ = (V⁻¹)ᵀ Δ Vᵀ.
In other words, the right eigenvectors of AT are the columns of (V −1 )T , which are the
transposed rows of V −1 .
Problem 3.3.6 The right eigenvectors of a diagonalizable matrix A = V ΔV −1 are columns
of V , whereas the left eigenvectors are rows of V −1 . Use this fact to infer the relationships
between left and right eigenvectors of a diagonalizable matrix.
Since the eigenvalues are distinct, it follows that α1 = 0. One can similarly show that each of
α2 . . . αk is zero. Therefore, we obtain a contradiction to our linear dependence assumption.
In the special case that the matrix A has d distinct eigenvalues, one can construct an
invertible matrix V from the eigenvectors. This makes the matrix A diagonalizable.
Lemma 3.3.7 When the roots of the characteristic polynomial are distinct, one can find
d linearly independent eigenvectors. Therefore, a (possibly complex-valued) diagonalization
A = V ΔV −1 of a real-valued matrix A with d distinct roots always exists.
In the case that the characteristic polynomial has distinct roots, one can not only show exis-
tence of a diagonalization, but we can also show that the diagonalization can be performed
in an almost unique way (with possibly complex eigenvectors and eigenvalues). We use the
word “almost” because one can multiply any eigenvector with any scalar, and it still remains
an eigenvector with the same eigenvalue. If we scale the ith column of V by c, we can scale
the ith row of V −1 by 1/c without affecting the result. Finally, one can shuffle the order of
left/right eigenvectors in V −1 , V and eigenvalues in Δ in the same way without affecting
the product. By imposing a non-increasing eigenvalue order, and a normalization and sign convention on the diagonalization (such as allowing only unit normalized eigenvectors in which the first non-zero component is positive), one can obtain a unique diagonalization.
On the other hand, if the characteristic polynomial is of the form ∏ᵢ (λᵢ − λ)^rᵢ, where at least one rᵢ is strictly greater than 1, the roots are not distinct. In such a case, the solution
to (A − λi I)x = 0 might be a vector space with dimensionality less than ri . As a result, we
may or may not be able to find the full set of d eigenvectors required to create the matrix
V for diagonalization.
The algebraic multiplicity of an eigenvalue λᵢ is the number of times (λᵢ − λ) occurs as a factor in the characteristic polynomial. For example, if A is a d × d matrix, its characteristic
polynomial always contains d factors (including repetitions and complex-valued factors).
We have already shown that an algebraic multiplicity of 1 for each eigenvalue is the simple
case where a diagonalization exists. In the case where the algebraic multiplicities of some
eigenvalues are strictly greater than 1, one of the following will occur:
• Exactly ri linearly independent eigenvectors exist for each eigenvalue with algebraic
multiplicity ri . Any linear combination of these eigenvectors is also an eigenvector.
In other words, a vector space of eigenvectors with dimension ri exists, and any basis of
this vector space is a valid set of eigenvectors. Such a vector space corresponding to
a specific eigenvalue is referred to as an eigenspace. In this case, one can perform the
diagonalization A = V ΔV −1 by choosing the columns of V in an infinite number of
possible ways as the basis vectors of all the underlying eigenspaces.
• If fewer than ri linearly independent eigenvectors exist for an eigenvalue with algebraic multiplicity ri , a
diagonalization does not exist. The closest we can get to a diagonalization is the
Jordan normal form (see Section 3.3.4). Such a matrix is said to be defective.
In the first case above, it is no longer possible to have a unique diagonalization even after
imposing a normalization and sign convention on the eigenvectors.
For an eigenvalue λi with algebraic multiplicity ri , the system of equations (A−λi I)x = 0 might have as many as ri linearly independent solutions. When we have two or more distinct eigenvectors (e.g., v 1
and v 2 ) for the same eigenvalue, any linear combination αv 1 +βv 2 will also be an eigenvector
for all scalars α and β. Therefore, for creating a diagonalization A = V ΔV −1 , one can
construct the columns of V in an infinite number of possible ways. The best example of this
situation is the identity matrix in which any unit vector is an eigenvector with eigenvalue
1. One can “diagonalize” the (already diagonal) identity matrix I in an infinite number of
possible ways I = V ΔV −1 , where Δ is identical to I and V is any invertible matrix.
Repeated eigenvalues also create the possibility that a diagonalization might not exist.
This occurs when the number of linearly independent eigenvectors for an eigenvalue is
less than its algebraic multiplicity. Even though the characteristic polynomial has d roots
(including repetitions), one might have fewer than d eigenvectors. In such a case, the matrix
is not diagonalizable. Consider the following matrix A:
A = [[1, 1], [0, 1]]    (3.24)
The eigenvalue 1 has algebraic multiplicity 2, but the system (A − I)x = 0 yields only a 1-dimensional space of eigenvectors (the multiples of [1, 0]ᵀ), so the matrix is not diagonalizable.
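A quick numerical check (a NumPy sketch) confirms that the matrix A of Equation 3.24 has a repeated eigenvalue but a deficient eigenspace:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Both eigenvalues equal 1, so the algebraic multiplicity is 2 ...
eigenvalues = np.linalg.eigvals(A)
assert np.allclose(eigenvalues, [1.0, 1.0])

# ... but the eigenspace (A - I)x = 0 is only 1-dimensional:
# rank(A - I) = 1, so its null space has dimension 2 - 1 = 1.
rank = np.linalg.matrix_rank(A - np.eye(2))
assert rank == 1  # fewer eigenvectors than the multiplicity => defective
```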
A = V U V −1 (3.25)
The upper-triangular matrix U is “almost” diagonal, and it contains diagonal entries con-
taining eigenvalues in the same order as the corresponding generalized eigenvectors in V .
In addition, at most (d − 1) entries, which are just above the diagonal, can be 0 or 1. An entry just above the diagonal is 0 if and only if the corresponding generalized eigenvector is an ordinary eigenvector, and it is 1 if it is not. It is not difficult to verify that
complex eigenvalues. Therefore, every real matrix can be expressed as the product of at
most two diagonalizable matrices (although one might have complex eigenvalues).
A = V1 U V1−1 , B = V2 U V2−1
Therefore, the traces of similar matrices are equal. This also implies that the trace of
a matrix is equal to the trace of the upper-triangular matrix in its Jordan normal form
(which is equal to the sum of the eigenvalues of the family).
Similar matrices perform similar operations, but in different basis systems. For example,
a similar family of diagonalizable matrices performs anisotropic scaling with the same
factors, albeit in completely different eigenvector directions.
Problem 3.3.7 (Householder Family) Show that all Householder reflection matrices
are similar, and the family includes the elementary reflection matrix that differs from the
identity matrix in one element.
A hint for solving the above problem is that this matrix is diagonalizable.
Problem 3.3.8 (Projection Family) Section 2.8.2 introduces the n×n projection matrix
P = A(AT A)−1 AT for n × d matrix A with full column rank d and n > d. Show that all
projection matrices P obtained by varying A (but for particular values of n and d) are
similar. What is the trace of P ? Provide a geometric interpretation of (I − P ) and (I − 2P ).
A hint for solving this problem is to first express the projection matrix in the form QQᵀ by using the QR decomposition of A, where the n × d matrix Q has orthonormal columns. Now extract the eigenvectors and eigenvalues of the projection matrix by using the properties of Q, and verify that the eigenvalues are always the same for fixed values of n and d.
Problem 3.3.9 (Givens Family) Show that all Givens matrices with the same rotation
angle α are similar, because for any such pair of Givens matrices G1 and G2 , one can find a
permutation matrix P such that G2 = P G1 P T . Now consider an orthogonal matrix Q that
is not a permutation matrix. Provide a geometric interpretation of QG1 QT .
For the reader who is familiar with graph adjacency matrices, we recommend the following
exercise (or to return to it after reading Chapter 10):
Problem 3.3.10 (Similarity in Graph Theory) Consider a graph GA whose adjacency
matrix is A. Show that the adjacency matrix B of the isomorphic graph GB obtained by
reordering the vertices of GA is similar to matrix A. What type of matrix is used for the
basis transformation between A and B?
A = V Δ1 V T
B = V Δ2 V T
Lemma 3.3.10 Diagonalizable matrices are also simultaneously diagonalizable if and only
if they are commutative.
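The "only if" direction of Lemma 3.3.10 is easy to check numerically. The sketch below constructs two matrices sharing an (arbitrarily chosen) orthonormal eigenvector basis and verifies that they commute:

```python
import numpy as np

# Shared orthonormal eigenvector basis V (an arbitrary rotation), so
# that V^{-1} = V^T.
theta = np.pi / 4
V = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# Two matrices with the same eigenvectors but different eigenvalues.
A = V @ np.diag([2.0, 5.0]) @ V.T
B = V @ np.diag([-1.0, 3.0]) @ V.T

# Simultaneously diagonalizable matrices commute: both products scale
# along the same eigenvector directions, so the order does not matter.
assert np.allclose(A @ B, B @ A)
```

Geometrically, each matrix performs anisotropic scaling along the same directions, and scalings along a common set of axes can be applied in either order.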
Problem 3.3.12 Let A and B be two diagonalizable matrices that share the same set of
eigenvectors. Provide a geometric interpretation of why AB = BA.
Theorem 3.3.1 (Spectral Theorem) Let A be a d × d symmetric matrix with real en-
tries. Then, A is always diagonalizable with real eigenvalues and has orthonormal, real-
valued eigenvectors. In other words, A can be diagonalized in the form A = V ΔV T with
orthogonal matrix V .
Proof: First, we need to show that the eigenvalues of A are real. Let (v, λ) represent an eigenvector-eigenvalue pair of a real matrix. We start with the most general assumption
that this pair could be complex. Pre-multiplying the equation Av = λv with the conjugate transpose row vector v∗ (scaled so that v∗v = 1) yields λ = v∗Av. Taking the conjugate of this scalar, we obtain:
λ∗ = [v∗Av]∗ = v∗A∗[v∗]∗ = v∗A∗v = v∗Av = λ
We used the real and symmetric nature of A in the above derivation. Therefore, the eigen-
value λ is equal to its conjugate, and it is real. The eigenvector v is also real because it
belongs to the null space of the real matrix (A − λI).
We claim that eigenvalues with multiplicity greater than 1 do not have missing eigen-
vectors. If there are missing eigenvectors, two non-zero vectors v 1 and v 2 must exist in a
Jordan chain such that Av 1 = λv 1 and Av 2 = λv 2 + v 1 (see Section 3.3.3). Then, we can
show that (A − λI)²v2 = 0, by successively applying the eigenvector condition. Therefore, v2ᵀ(A − λI)²v2 is zero as well. At the same time, one can show the contradictory result that this quantity is non-zero by using the symmetric nature of the matrix A:
v2ᵀ(A − λI)²v2 = [(A − λI)v2]ᵀ[(A − λI)v2] = ‖v1‖² > 0
This contradiction shows that a symmetric matrix has no missing eigenvectors. Finally, consider two eigenvectors v1 and v2 with distinct eigenvalues λ1 and λ2. The symmetry of A implies v1ᵀ[Av2] = v2ᵀ[Av1]. Substituting Av2 = λ2v2 on the left-hand side and Av1 = λ1v1 on the right-hand side, we obtain:
λ2(v1 · v2) = λ1(v1 · v2)
(λ1 − λ2)(v1 · v2) = 0
This is possible only when the dot product of the two eigenvectors is zero.
Since the inverse of an orthogonal matrix is its transpose, it is common to write the di-
agonalization of symmetric matrices in the form A = V ΔV T instead of A = V ΔV −1 .
Multiplying a data matrix D with a symmetric matrix represents anisotropic scaling of its
rows along orthogonal axis directions. An example of such a scaling is illustrated on the
left-hand side of Figure 3.1.
The eigenvectors of a symmetric matrix A are not only orthogonal but also A-orthogonal.
Figure 3.3: Positive semidefinite transforms do not change angular orientations of points by more than 90◦ (the figure shows original points x and their transformed versions x′ = Ax)
One can use a natural generalization of Gram-Schmidt orthogonalization (cf. Problem 2.7.1)
to find A-orthogonal basis sets (which is a more efficient choice than eigenvector compu-
tation). In many applications like conjugate gradient descent, one is often looking for A-
orthogonal directions, where A is the Hessian of the optimization function.
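As a sketch of this generalization, the hypothetical routine below runs Gram-Schmidt under the modified inner product ⟨u, v⟩ = uᵀAv for a symmetric positive definite A (the routine name and example matrix are illustrative, not from the text):

```python
import numpy as np

def a_orthogonalize(vectors, A):
    """Gram-Schmidt under the inner product <u, v> = u^T A v, for a
    symmetric positive definite matrix A (a sketch, not the book's code)."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        # Subtract the A-projection of w on each previous direction.
        for b in basis:
            w = w - (b @ A @ w) / (b @ A @ b) * b
        basis.append(w)
    return basis

# Illustrative symmetric positive definite matrix (e.g., a Hessian).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
q1, q2 = a_orthogonalize([np.array([1.0, 0.0]),
                          np.array([0.0, 1.0])], A)
assert abs(q1 @ A @ q2) < 1e-10  # the directions are A-orthogonal
```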
Problem 3.3.14 (Frobenius Norm vs Eigenvalues) Consider a matrix with real eigen-
values. Show that its squared Frobenius norm is at least equal to the sum of the squares of
its eigenvalues, and that strict equality is observed for symmetric matrices. You will find
the Schur decomposition helpful.
A symmetric d × d matrix A is positive semidefinite (Definition 3.3.6) when the following holds for every d-dimensional column vector x:
xᵀAx ≥ 0    (3.27)
Figure 3.3 provides the pictorial intuition as to why Definition 3.3.6 is equivalent to stating
that the eigenvalues are nonnegative. In the following, we show this result formally:
Lemma 3.3.12 Definition 3.3.6 on positive semidefiniteness of a d × d symmetric matrix
A is equivalent to stating that A has nonnegative eigenvalues.
Proof: According to the spectral theorem, we can always diagonalize a symmetric matrix
A as V ΔV T . Suppose that the eigenvalues λ1 . . . λd in Δ are all nonnegative. Then, for
any column vector x, let us denote y = V T x. Furthermore, let the ith component of y be
denoted by yi . Therefore, we have:
xᵀAx = xᵀV ΔVᵀx = (Vᵀx)ᵀΔ(Vᵀx) = yᵀΔy = Σi=1..d λi yi²
It is clear that the final expression on the right is nonnegative because each λi is nonnegative.
Therefore, the matrix A is positive semidefinite according to Definition 3.3.6.
To prove the converse, let us assume that A is positive semidefinite according to Defini-
tion 3.3.6. Therefore, it is the case that xT Ax ≥ 0 for any x. Then, let us select x to be the
ith column of V (which is also the ith eigenvector). Then, because of the orthonormality of
the columns of V , we have Vᵀx = ei , where ei contains a single 1 in the ith position, and 0s in all other positions. As a result, we have the following:
0 ≤ xᵀAx = eiᵀΔei = λi
In other words, each eigenvalue λi is nonnegative.
A minor variation on the notion of a positive semidefinite matrix is that of a positive definite matrix, which cannot be singular. A symmetric matrix A is positive definite when the following holds for every nonzero column vector x:
xᵀAx > 0    (3.28)
Unlike positive semidefinite matrices, positive definite matrices are guaranteed to be invert-
ible. The inverse matrix is simply V Δ−1 V T ; here, Δ−1 can always be computed because
none of the eigenvalues are zero.
One can also define negative semidefinite matrices as those matrices in which every
eigenvalue is non-positive, and xT Ax ≤ 0 for each column vector x. A negative semidefinite
matrix can be converted into a positive semidefinite matrix by reversing the sign of each
entry in the matrix. A negative definite matrix is one in which every eigenvalue is strictly
negative. Symmetric matrices with both positive and negative eigenvalues are said to be
indefinite.
Any matrix of the form BB T or B T B (i.e., Gram matrix form) is always positive semidef-
inite. The Gram matrix is fundamental to machine learning, and it appears repeatedly in
different forms. Note that B need not be a square matrix. This provides yet another defini-
tion of positive semidefiniteness.
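This Gram-matrix characterization is easy to verify numerically. The sketch below builds BBᵀ from a random, non-square B and checks that all eigenvalues are nonnegative:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))  # B need not be square

# Any Gram matrix B B^T is positive semidefinite: for every x,
# x^T (B B^T) x = ||B^T x||^2 >= 0, so all eigenvalues are nonnegative.
G = B @ B.T
eigenvalues = np.linalg.eigvalsh(G)  # eigvalsh: G is symmetric
assert np.all(eigenvalues >= -1e-10)  # small tolerance for round-off
```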
Problem 3.3.15 If C is a positive semidefinite matrix, show that there exists a square-root matrix √C that satisfies the following:
√C √C = C
A hint for solving the above problem is to examine the eigendecomposition trick used in the proof of Lemma 3.3.14.
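Following the hint, a minimal sketch of the square-root construction via the symmetric eigendecomposition C = V diag(λ) Vᵀ (with a small illustrative matrix) is:

```python
import numpy as np

def psd_square_root(C):
    """Square root of a positive semidefinite matrix via its symmetric
    eigendecomposition C = V diag(lam) V^T (the hinted trick)."""
    lam, V = np.linalg.eigh(C)       # eigh: symmetric eigendecomposition
    lam = np.clip(lam, 0.0, None)    # guard against tiny negative round-off
    return V @ np.diag(np.sqrt(lam)) @ V.T

# Illustrative positive semidefinite matrix (eigenvalues 3 and 1).
C = np.array([[2.0, 1.0],
              [1.0, 2.0]])
S = psd_square_root(C)
assert np.allclose(S @ S, C)
```

Since S = V diag(√λ) Vᵀ, the product SS collapses to V diag(λ) Vᵀ = C by the orthonormality of V.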
aij = (LLᵀ)ij = Σk=1..d lik ljk = Σk=1..j lik ljk    [Lower-triangular L, i ≥ j]
Note that the subscript for k only runs up to j instead of d for lower-triangular matrices and
i ≥ j. This condition easily sets up a simple system of equations for computing the entries
in each column of L one-by-one while back substituting the entries already computed, as
long as we do the computations in the correct order. For example, we can compute the first
column of L by setting j = 1, and iterating over all i ≥ j:
l11 = √a11
li1 = ai1 /l11  ∀i > 1
We can repeat the same process to compute the second column of L as follows:
l22 = √(a22 − l21²)
li2 = (ai2 − li1 l21 )/l22  ∀i > 2
A generalized iteration for the jth column yields the pseudocode for Cholesky factorization:
Initialize L = [0]d×d;
for j = 1 to d do
    ljj = √(ajj − Σk=1..j−1 ljk²);
    for i = j + 1 to d do
        lij = (aij − Σk=1..j−1 lik ljk )/ljj;
    endfor
endfor
return L = [lij];
Each computation of lij requires O(d) time, and therefore the Cholesky method requires
O(d3 ) time. The above algorithm works for positive-definite matrices. If the matrix is sin-
gular and positive semi-definite, then at least one ljj will be 0. This will cause a division by
0 during the computation of lij , which results in an undefined value. The decomposition is
no longer unique, and a Cholesky factorization does not exist in such a case. One possibility
is to add a small positive value to each diagonal entry of A to make it positive definite and
then restart the factorization. If the matrix A is indefinite or negative semidefinite, it will
show up during the computation of at least one ljj , where one will be forced to compute
the square-root of a negative quantity. The Cholesky factorization is the preferred approach
for testing the positive definiteness of a matrix.
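The pseudocode above translates directly into a short implementation. This sketch also raises an error when the matrix is not positive definite, which is exactly the square-root/division failure discussed above:

```python
import numpy as np

def cholesky(A):
    """Column-by-column Cholesky factorization A = L L^T following the
    pseudocode above; fails if A is not positive definite."""
    d = A.shape[0]
    L = np.zeros((d, d))
    for j in range(d):
        # Diagonal entry: subtract the squared entries already computed.
        s = A[j, j] - np.dot(L[j, :j], L[j, :j])
        if s <= 0:
            raise ValueError("matrix is not positive definite")
        L[j, j] = np.sqrt(s)
        # Remaining entries of column j via back substitution.
        for i in range(j + 1, d):
            L[i, j] = (A[i, j] - np.dot(L[i, :j], L[j, :j])) / L[j, j]
    return L

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])  # symmetric positive definite example
L = cholesky(A)
assert np.allclose(L @ L.T, A)
```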
Problem 3.3.17 (Solving a System of Equations) Show how you can solve the system
of equations (LLT )x = b by successively solving two triangular systems of equations, the first
of which is Ly = b. Use this fact to discuss the utility of Cholesky factorization in certain
types of systems of equations. Where does the approach not apply?
Ak = V Δk V −1 (3.29)
Note that it is often easy to compute Δk , because we only need to exponentiate the indi-
vidual entries along the diagonal. By using this approach, one can compute Ak in relatively
few operations. As k → ∞, it is often the case that Aᵏ will either vanish to 0 or explode to very large entries, depending on whether the largest absolute eigenvalue is less than 1 or greater than 1. One can easily compute a polynomial function in A by computing a
polynomial function in Δ. These types of applications often arise when working with the
adjacency matrices of graphs (cf. Chapter 10).
Since the dot product is in the form of a Gram matrix, it is positive semidefinite
(cf. Lemma 3.3.14):
Observation 3.4.1 The dot product similarity matrix of a data set is positive semidefinite.
A dot product similarity matrix is an alternative way of specifying the data set, because
one can recover the data set D from the similarity matrix to within rotations and reflections
of the original data set. This is because each computational procedure for performing symmetric factorization S = D′D′ᵀ of the similarity matrix might yield a different D′, which
can be viewed as a rotated and reflected version of D. Examples of such computational
procedures include eigendecomposition or Cholesky factorization. All the alternatives yield
the same dot product. After all, dot products are invariant to axis rotation of the coordinate
system. Since machine learning applications are only concerned with the relative positions of
points, this type of ambiguous recovery is adequate in most cases. One of the most common
methods to “recover” a data matrix from a similarity matrix is to use eigendecomposition:
S = QΔQT (3.31)
The matrix Δ contains only nonnegative eigenvalues of the positive semidefinite similarity
matrix, and therefore we can create a new diagonal matrix Σ containing the square-roots
of the eigenvalues. Therefore, the similarity matrix S can be written as follows:
S = QΣ²Qᵀ = (QΣ)(QΣ)ᵀ = D′ D′ᵀ    (3.32)
Here, D′ = QΣ is an n × n data set containing n-dimensional representations of the n points. It seems somewhat odd that the new matrix D′ = QΣ is an n × n matrix. After all, if the similarity matrix represents dot products between d-dimensional data points for d ≪ n, we should expect the recovered matrix D′ to be a rotated representation of D in d dimensions. What are the extra (n − d) dimensions? Here, the key point is that if the similarity matrix
S was indeed created using dot products on d-dimensional points, then DDT will also have
rank at most d. Therefore, at least (n − d) eigenvalues in Δ will be zeros, which correspond
to dummy coordinates.
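A small sketch of this recovery (with random illustrative data) shows that the eigendecomposition reproduces all pairwise dot products exactly, and that only d of the n eigenvalues are nonzero:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((6, 2))      # n=6 points in d=2 dimensions
S = D @ D.T                          # dot-product similarity matrix

# Recover an embedding from S alone via S = Q diag(lam) Q^T.
lam, Q = np.linalg.eigh(S)
lam = np.clip(lam, 0.0, None)        # S is PSD up to round-off
D_rec = Q @ np.diag(np.sqrt(lam))    # D' = Q Sigma, an n x n matrix

# D' reproduces all pairwise dot products, so it matches D up to
# rotation/reflection; only d=2 eigenvalues are (numerically) nonzero,
# and the remaining n-d = 4 correspond to dummy coordinates.
assert np.allclose(D_rec @ D_rec.T, S)
assert np.sum(lam > 1e-8) == 2
```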
But what if we did not use dot product similarity to calculate S from D? What if we
used some other similarity function? It turns out that this idea is the essence of kernel
methods in machine learning (cf. Chapter 9). Instead of using the dot product x · y between
two points, one often uses similarity functions such as the following:
Similarity(x, y) = exp(−‖x − y‖²/σ²)    (3.33)
Here, σ is a parameter that controls the sensitivity of the similarity function to distances
between points. Such a similarity function is referred to as a Gaussian kernel. If we use a
similarity function like this instead of the dot product, we might recover a data set that
is different from the original data set from which the similarity was constructed. In fact
this recovered data set may not have dummy coordinates, and all n > d dimensions might
be relevant. Furthermore, the recovered representations QΣ from such similarity functions
might yield better results for machine learning applications than the original data set. This
type of fundamental transformation of the data to a new representation is referred to as
nonlinear feature engineering, and it goes beyond the natural (linear) transformations like
rotation that are common in linear algebra. In fact, it is even possible to extract multidi-
mensional representations from data sets of arbitrary objects between which only similarity
is specified. For example, if we have a set of n graph or time-series objects, and we only have
the n × n similarity matrix of these objects (and no multidimensional representation), we
can use the aforementioned approach to create a multidimensional representation of each
object for off-the-shelf learning algorithms.
Problem 3.4.1 Suppose you were given a similarity matrix S that was constructed using
some arbitrary heuristic (rather than dot products) on a set of n arbitrary objects (e.g.,
graphs). As a result, the matrix is symmetric but not positive semidefinite. Discuss how you
can repair the matrix S by modifying only its self-similarity (i.e., diagonal) entries, so that
the matrix becomes positive semidefinite.
A hint for solving this problem is to examine the effect of adding a constant value to the
diagonal on the eigenvalues. This trick is used frequently for applying kernel methods in
machine learning, when a similarity matrix is constructed using an arbitrary heuristic.
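Following the hint, a minimal sketch of this diagonal-shift repair on a small (illustrative) indefinite matrix is shown below; adding a constant c to the diagonal shifts every eigenvalue of S up by exactly c, while leaving the off-diagonal similarities intact:

```python
import numpy as np

# A symmetric "similarity" matrix from some arbitrary heuristic; this
# particular matrix is just an illustration and happens to be indefinite.
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])
lam_min = np.linalg.eigvalsh(S).min()
assert lam_min < 0  # not positive semidefinite

# S + c I has eigenvalues lambda_i + c (same eigenvectors), so adding
# |lambda_min| to the diagonal makes every eigenvalue nonnegative.
S_repaired = S + abs(lam_min) * np.eye(S.shape[0])
assert np.linalg.eigvalsh(S_repaired).min() >= -1e-10
```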
Covariance Matrix
Another common matrix in machine learning is the covariance matrix. Just as the similarity
matrix computes dot products between rows of matrix D, the covariance matrix computes
3.4. MACHINE LEARNING AND OPTIMIZATION APPLICATIONS
(scaled) dot products between columns of D after mean-centering the matrix. Consider a
set of scalar values x1 . . . xn . The mean μ and the variance σ 2 of these values are defined as
follows:
μ = (Σi=1..n xi )/n
σ² = (Σi=1..n (xi − μ)²)/n = (Σi=1..n xi²)/n − μ²
Consider a data matrix in which two columns have values x1 . . . xn and y1 . . . yn , respectively.
Also assume that the means of the two columns are μx and μy . In this case, the covariance
σxy is defined as follows:
σxy = (Σi=1..n (xi − μx )(yi − μy ))/n = (Σi=1..n xi yi )/n − μx μy
The notion of covariance is an extension of variance, because σx2 = σxx is simply the variance
of x1 . . . xn . If the data is mean-centered with μx = μy = 0, the covariance simplifies to the
following:
σxy = (Σi=1..n xi yi )/n    [Mean-centered data only]
It is noteworthy that the expression on the right-hand side is simply a scaled version of the
dot product between the columns, if we represent the x values and y values as an n × 2
matrix. Note the close relationship to the similarity matrix, which contains dot products
between all pairs of rows. Therefore, if we have an n × d data matrix D, which is mean-
centered, we can compute the covariance between the column i and column j using this
approach. Such a matrix is referred to as the covariance matrix.
C = DᵀD / n
The unscaled version of the matrix, in which the factor of n is not used in the denominator,
is referred to as the scatter matrix. In other words, the scatter matrix is simply DT D. The
scatter matrix is the Gram matrix of the column space of D, whereas the similarity matrix
is the Gram matrix of the row space of D. Like the similarity matrix, the scatter matrix
and covariance matrix are both positive semidefinite, based on Lemma 3.3.14.
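The covariance computation above can be sketched as follows, cross-checked against NumPy's built-in estimator with 1/n normalization:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.standard_normal((100, 3))    # n=100 points, d=3 features
n = D.shape[0]

# Mean-center the columns; the covariance matrix is then D^T D / n.
D_centered = D - D.mean(axis=0)
C = D_centered.T @ D_centered / n

# Cross-check against np.cov (columns as variables, 1/n normalization),
# and confirm positive semidefiniteness of the result.
assert np.allclose(C, np.cov(D, rowvar=False, bias=True))
assert np.all(np.linalg.eigvalsh(C) >= -1e-10)
```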
The covariance matrix is often used for principal component analysis (cf. Section 7.3.4).
Since the d×d covariance matrix C is symmetric and positive semidefinite, one can diagonalize it as follows:
C = P ΔP T (3.34)
Figure 3.4: Quadratic functions f(x, y) created with 2 × 2 matrices: (a) an upright circular bowl obtained from the identity matrix, and (b) an inverted bowl obtained from its negation
vertical axis in addition to the two horizontal axes representing x1 and x2 , we obtain an
upright bowl, as shown in Figure 3.4(a). One can express f (x, y) in matrix form as follows:
f(x1 , x2 ) = [x1 , x2 ] [[1, 0], [0, 1]] [x1 , x2 ]ᵀ = x1² + x2²
In this case, the function represents a perfectly circular bowl, and the corresponding matrix
A for representing the ellipse xᵀAx = r² is the 2 × 2 identity matrix, which is a trivial form of a positive semidefinite matrix. We can also use various vertical cross sections of the circular bowl shown in Figure 3.4(a) to create a contour plot, so that the value of f(x1 , x2 ) at each point on a contour line is constant. The contour plot of the circular bowl is shown in Figure 3.5(a). Note that using the negative of the identity matrix (which is a negative
semidefinite matrix) results in an inverted bowl, as shown in Figure 3.4(b). The negative
of a convex function is always a concave function, and vice versa. Therefore, maximizing a concave function is essentially equivalent to minimizing the corresponding convex function.
The function f (x) = xT Ax corresponds to a perfectly circular bowl, when A is set to the
identity matrix (cf. Figures 3.4(a) and 3.5(a)). Changing A from the identity matrix leads
to several interesting generalizations. First, if the diagonal entries of A are set to different
(nonnegative) values, the circular bowl would become elliptical. For example, if the bowl is
stretched twice in one direction as compared to the other, the diagonal entries would be in
the ratio of 2² : 1 = 4 : 1. An example of such a function is the following:
f(x1 , x2 ) = 4x1² + x2²
The contour plot for this case is shown in Figure 3.5(b). Note that the vertical direction x2
is stretched even though the x1 direction has diagonal entry of 4. The diagonal entries are
inverse squares of stretching factors.
(c) Rotated elliptical bowl (d) Rotated and translated elliptical bowl
Figure 3.5: Contour plots of quadratic functions created with 2 × 2 positive semidefinite
matrices
So far, we have only considered quadratic functions in which the stretching occurs along
axis-parallel directions. Now, consider the case where we start with the diagonal matrix Δ
and rotate using basis matrix P , where P contains the two vectors that are oriented at 45◦
to the axes. Therefore, consider the following rotation matrix:
P = [[cos(45◦ ), sin(45◦ )], [−sin(45◦ ), cos(45◦ )]]    (3.35)
In this case, we use A = P ΔP T in order to define xT Ax. The approach computes the
coordinates of x as y = P T x, and then computes f (x) = xT Ax = y T Δy. Note that we are
stretching the coordinates of the new basis. The result is a stretched ellipse in the direction
of the basis defined by the columns of P (which is a 45◦ clockwise rotation matrix for column
vectors). One can compute the matrix A in this case as follows:
A = [[cos(45◦ ), sin(45◦ )], [−sin(45◦ ), cos(45◦ )]] · [[4, 0], [0, 1]] · [[cos(45◦ ), sin(45◦ )], [−sin(45◦ ), cos(45◦ )]]ᵀ = [[5/2, −3/2], [−3/2, 5/2]]
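The sketch below reproduces this computation numerically, and also confirms that evaluating xᵀAx directly agrees with stretching in the rotated coordinates y = Pᵀx:

```python
import numpy as np

# Rotation matrix P oriented at 45 degrees (Equation 3.35) and the
# diagonal stretching matrix Delta = diag(4, 1).
c = s = np.cos(np.pi / 4)
P = np.array([[c,  s],
              [-s, c]])
Delta = np.diag([4.0, 1.0])

# A = P Delta P^T defines the rotated elliptical bowl x^T A x.
A = P @ Delta @ P.T
assert np.allclose(A, [[2.5, -1.5],
                       [-1.5, 2.5]])

# x^T A x equals y^T Delta y in the rotated coordinates y = P^T x.
x = np.array([0.3, -0.7])  # arbitrary test point
y = P.T @ x
assert np.isclose(x @ A @ x, y @ Delta @ y)
```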
The term involving x1 x2 captures the interactions between the attributes x1 and x2 . This
is the direct result of a change of basis that is no longer aligned with the axis system. The
contour plot of an ellipse that is aligned at 45◦ with the axes is shown in Figure 3.5(c).
All these cases represent situations where the optimal solution to f (x1 , x2 ) is at (0, 0),
and the resulting function value is 0. How can we generalize to a function with optimum
occurring at b and an optimum value of c (which is a scalar)? The corresponding function
is of the following form:
f (x) = (x − b)T A(x − b) + c (3.36)
The matrix A is equivalent to half the Hessian matrix of the quadratic function. The d × d
Hessian matrix H = [hij ] of a function of d variables is a symmetric matrix containing the
second-order derivatives with respect to each pair of variables.
hij = ∂²f(x) / (∂xi ∂xj )    (3.37)
Note that xᵀHx represents the directional second derivative of the function f(x) along direction x (cf. Chapter 4); in other words, it is the second derivative of the rate of change of f(x) when moving along x. This value is always nonnegative for convex functions irrespective of x, which ensures that f(x) attains its minimum at a point where the first derivative along every direction x is 0. In other words, the Hessian needs to be positive semidefinite. This is a generalization of the condition g″(x) ≥ 0 for 1-dimensional convex functions. We make the following assertion, which is shown formally in Chapter 4:
Observation 3.4.2 Consider a quadratic function, whose quadratic term is of the form
xT Ax. Then, the quadratic function is convex, if and only if the matrix A is positive semidef-
inite.
Many quadratic functions in machine learning are of this form. A specific example is the
dual objective function of a support vector machine (cf. Chapter 6).
One can construct an example of the general form of the quadratic function by translat-
ing the 45◦ -oriented, origin-centered ellipse of Figure 3.5(c). For example, if we center the
elliptical objective function at [1, 1]ᵀ and add 2 to the optimal value, we obtain the function (x − [1, 1]ᵀ)ᵀ A (x − [1, 1]ᵀ) + 2. The resulting objective function, which takes an optimal value of 2 at [1, 1]ᵀ, is shown below:
f(x1 , x2 ) = (5/2)(x1² + x2²) − 2(x1 + x2 ) − 3x1 x2 + 4    (3.38)
This type of quadratic objective function is common in many machine learning algorithms.
An example of the contour plot of a translated ellipse is shown in Figure 3.5(d), although it does not show the vertical translation by 2.
It is noteworthy that the most general form of a quadratic function in multiple variables
is as follows:
f(x) = xᵀA′x + b′ᵀx + c′    (3.39)
Here, A′ is a d × d symmetric matrix, b′ is a d-dimensional column vector, and c′ is a scalar. In the 1-dimensional case, A′ and b′ are replaced by scalars, and one obtains the familiar form ax² + bx + c of univariate quadratic functions. Furthermore, as long as b′ belongs to the column space of A′, one can convert the general form of Equation 3.39 to the vertex form of Equation 3.36. It is important for b′ to belong to the column space of A′ for an optimum
to exist. For example, the 2-dimensional function G(x1, x2) = x1² + x2 does not have a
minimum because the function is partially linear in x2. The vertex form of Equation 3.39
considers only strictly quadratic functions in which all cross-sections of the function are
quadratic. Only strictly quadratic functions are interesting for optimization, because linear
functions usually do not have a maximum or minimum. One can relate the coefficients of
Equations 3.36 and 3.39 as follows:
A′ = A,   b′ = −2Ab,   c′ = b^T A b + c

Given A′, b′ and c′, the main condition for being able to arrive at the vertex form of
Equation 3.36 is the second condition b′ = −2Ab = −2A′b, for which a solution b will exist
only when b′ occurs in the column space of A′.
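Assuming the vertex form (x − b)^T A (x − b) + c of Equation 3.36, the conversion reduces to solving Ab = −b′/2 for b and then setting c = c′ − b^T A b. A sketch using coefficients matching Equation 3.38 (the matrix A′ is an assumption consistent with its expansion):

```python
import numpy as np

# General form f(x) = x^T A' x + b'^T x + c', with coefficients read off Equation 3.38.
A_prime = np.array([[2.5, -1.5],
                    [-1.5, 2.5]])
b_prime = np.array([-2.0, -2.0])
c_prime = 4.0

# Vertex form requires b' = -2 A b, i.e., A b = -b'/2. A solution exists only
# when -b'/2 lies in the column space of A; here A' is invertible, so we can solve.
b = np.linalg.solve(A_prime, -b_prime / 2.0)
c = c_prime - b @ A_prime @ b

print(np.round(b, 6))        # [1. 1.]  (the vertex, i.e., the optimum location)
print(round(float(c), 6))    # 2.0      (the optimal value)
```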
Finally, we discuss the case where the matrix A used to create the function xT Ax is
indefinite, and has both positive and negative eigenvalues. An example of such a function
is the following:
g(x1, x2) = [x1, x2] [[1, 0], [0, −1]] [x1, x2]^T = x1² − x2²
The gradient at (0, 0) is 0, which seems to be an optimum point. However, this point behaves
like both a maximum and a minimum, when examining second derivatives. If we approach
the point from the x1 direction, it seems like a minimum. If we approach it from the x2
direction, it seems like a maximum. This is because the directional second derivatives in
the x1 and x2 directions are simply twice the diagonal entries (which are of opposite sign).
The shape of the objective function resembles that of a riding saddle, and the point (0, 0)
is referred to as a saddle point. An example of this type of objective function is shown
in Figure 3.6. Objective functions containing such points are often notoriously hard for
optimization.
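The saddle point can be confirmed numerically from the Hessian of g(x1, x2) = x1² − x2², which is the constant matrix with diagonal entries 2 and −2; its mixed-sign eigenvalues indicate a saddle:

```python
import numpy as np

# Hessian of g(x1, x2) = x1^2 - x2^2 (constant everywhere).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])

eigenvalues = np.linalg.eigvalsh(H)   # ascending order
# Mixed signs: neither positive nor negative semidefinite, so (0, 0) is a saddle point.
print(bool(np.all(eigenvalues > 0)), bool(np.all(eigenvalues < 0)))  # False False
```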
F(x1, x2, . . . , xd) = Σ_{i=1}^d fi(xi)
3.4. MACHINE LEARNING AND OPTIMIZATION APPLICATIONS 129
[Figure 3.6: Surface plot of g(x, y) = x² − y². The critical point at the origin is marked as the saddle point.]
and vj, we have vi^T A vj = 0. There are an infinite number of possible ways of creating
conjugate directions, and the eigenvectors represent a special case. In fact, a generaliza-
tion of the Gram-Schmidt method can be used to find such directions (cf. Problem 2.7.1).
This basic idea forms the principle of the conjugate gradient descent method discussed
in Section 5.7.1 of Chapter 5, which can be used even for non-quadratic functions. Here,
we provide a conceptual overview of the iterative conjugate gradient method for arbitrary
(possibly non-quadratic) function h(x) from current point x = xt:
1. Approximate h(x) at x = xt with a quadratic function f(x), and transform the opti-
mization variables so that f(x) becomes separable in terms of the new variables (e.g.,
by using mutually conjugate directions of its quadratic term).
2. Compute the optimal solution x∗ of the quadratic function f(x) using the separable
variable optimization approach discussed above as a set of d univariate optimization
problems.
The approach is iterated to convergence. The aforementioned algorithm provides the con-
ceptual basis for the conjugate gradient method. The detailed method is provided in Sec-
tion 5.7.1 of Chapter 5.
Optimize x^T A x
subject to:
‖x‖² = 1
x = Σ_{i=1}^d αi vi   (3.40)
Optimize Σ_{i=1}^d λi αi²
subject to:
Σ_{i=1}^d αi² = 1
The expression ‖x‖² in the constraint is simplified to (Σ_{i=1}^d αi vi) · (Σ_{i=1}^d αi vi); we can
expand it using the distributive property, and then we use the orthogonality of the eigenvectors
to set vi · vj = 0 for i ≠ j. The objective function value is Σi λi αi², where the different αi² sum to 1.
Clearly, the minimum and maximum possible values of this objective function are achieved
by setting the weight αi² of a single value of λi to 1, which corresponds to the minimum or
maximum possible eigenvalue (depending on whether the optimization problem is posed in
minimization or maximization form):
The maximum value of the norm-constrained quadratic optimization problem
is obtained by setting x to the largest eigenvector of A. The minimum value is
obtained by setting x to the smallest eigenvector of A.
This problem can be generalized to finding a k-dimensional subspace. In other words, we
want to find orthonormal vectors x1 . . . xk, so that Σ_{i=1}^k xi^T A xi is optimized:
Optimize Σ_{i=1}^k xi^T A xi
subject to:
‖xi‖² = 1   ∀i ∈ {1 . . . k}
x1 . . . xk are mutually orthogonal
The optimal solution to this problem can be derived using a similar procedure. We provide
an alternative solution with the use of Lagrangian relaxation in Section 6.6 of Chapter 6.
Here, we simply state the optimal solution:
The maximum value of the norm-constrained quadratic optimization problem
is obtained by using the largest k eigenvectors of A. The minimum value is
obtained by using the smallest k eigenvectors of A.
Intuitively, these results make geometric sense from the perspective of the anisotropic scal-
ing caused by symmetric matrices like A. The matrix A distorts the space with scale fac-
tors corresponding to the eigenvalues along orthonormal directions corresponding to the
eigenvectors. The objective function tries to either maximize or minimize the aggregate
projections of the distorted vectors Axi on the original vectors xi, which is the sum of the dot
products between xi and Axi . By picking the largest k eigenvectors (scaling directions),
this sum is maximized. On the other hand, by picking the smallest k directions, this sum is
minimized.
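The boxed result can be verified numerically for a small illustrative symmetric matrix (an assumption, not from the text): the extreme values of x^T A x over unit vectors equal the extreme eigenvalues, and random unit vectors never fall outside that range:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)   # ascending eigenvalues
v_min, v_max = eigenvectors[:, 0], eigenvectors[:, -1]

# The optimum of x^T A x over unit vectors equals the extreme eigenvalue.
assert abs(v_max @ A @ v_max - eigenvalues[-1]) < 1e-10
assert abs(v_min @ A @ v_min - eigenvalues[0]) < 1e-10

# No random unit vector beats the extreme eigenvalues (Rayleigh quotient bounds).
for _ in range(1000):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)
    value = x @ A @ x
    assert eigenvalues[0] - 1e-10 <= value <= eigenvalues[-1] + 1e-10
print("verified")
```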
Gaussian elimination method (cf. Section 2.5.4 of Chapter 2). However, polynomial equation
solvers are sometimes numerically unstable and have a tendency to show ill-conditioning
in real-world settings. Finding the roots of a polynomial equation is numerically harder
than finding eigenvalues of a matrix! In fact, one of the many ways in which high-degree
polynomial equations are solved in engineering disciplines is to first construct a companion
matrix of the polynomial, such that the matrix has the same characteristic polynomial, and
then find its eigenvalues:
Problem 3.5.1 (Companion Matrix) Consider the following matrix:
A2 =
⎡  0   1 ⎤
⎣ −c  −b ⎦
Discuss why the roots of the polynomial equation x2 + bx + c = 0 can be computed using
the eigenvalues of this matrix. Also show that finding the eigenvalues of the following 3 × 3
matrix yields the roots of x3 + bx2 + cx + d = 0.
A3 =
⎡  0   1   0 ⎤
⎢  0   0   1 ⎥
⎣ −d  −c  −b ⎦
Note that the matrix has a non-zero last row and a superdiagonal of 1s. Provide the general
form of the t × t matrix At required for solving the polynomial equation x^t + Σ_{i=0}^{t−1} ai x^i = 0.
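Although Problem 3.5.1 asks for a proof, the companion-matrix construction is easy to check numerically; the polynomials below are hypothetical examples with known roots:

```python
import numpy as np

# Roots of x^2 + bx + c via the 2x2 companion matrix of Problem 3.5.1.
b, c = -5.0, 6.0                 # x^2 - 5x + 6 = (x - 2)(x - 3)
A2 = np.array([[0.0, 1.0],
               [-c, -b]])
print(np.round(np.sort(np.linalg.eigvals(A2).real), 6))  # [2. 3.]

# Roots of x^3 + bx^2 + cx + d via the 3x3 companion matrix.
b3, c3, d3 = -6.0, 11.0, -6.0    # (x - 1)(x - 2)(x - 3)
A3 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [-d3, -c3, -b3]])
print(np.round(np.sort(np.linalg.eigvals(A3).real), 6))  # [1. 2. 3.]
```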
In some cases, algorithms for finding eigenvalues also yield the eigenvectors as a byproduct,
which is particularly convenient. In the following, we present alternatives both for finding
eigenvalues and for finding eigenvectors.
Note that normalization of the vector in each iteration is essential to prevent overflow or
underflow to arbitrarily large or small values. After convergence to the principal eigenvector
v, one can compute the corresponding eigenvalue as the ratio of v^T A v to ‖v‖², which is
referred to as the Rayleigh quotient.
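The power method described above can be sketched in a few lines; the 2 × 2 test matrix below is an illustrative choice (its dominant eigenvalue is 4, with eigenvector proportional to [1, 1]):

```python
import numpy as np

def power_iteration(A, num_iters=200, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated
    multiplication and normalization (the power method)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v
        v /= np.linalg.norm(v)          # prevents overflow/underflow
    # Rayleigh quotient v^T A v / ||v||^2 recovers the eigenvalue.
    eigenvalue = v @ A @ v / (v @ v)
    return eigenvalue, v

A = np.array([[3.0, 1.0],
              [1.0, 3.0]])              # eigenvalues 4 and 2
eigenvalue, v = power_iteration(A)
print(round(float(eigenvalue), 6))      # 4.0
```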
We now provide a formal justification. Consider a situation in which we represent the
starting vector x as a linear combination of the basis of d eigenvectors v 1 . . . v d with coeffi-
cients α1 . . . αd :
x = Σ_{i=1}^d αi vi   (3.41)
A^t x = Σ_{i=1}^d αi A^t vi = Σ_{i=1}^d αi λi^t vi ∝ Σ_{i=1}^d αi sign(λi)^t (|λi|^t / Σ_{j=1}^d |λj|^t) vi
When t becomes large, the quantity on the right-hand side will be dominated by the effect
of the largest eigenvector. This is because the factor |λ1|^t increases the proportional weight
of the first eigenvector, when λ1 is the (strictly) largest eigenvalue in absolute magnitude.
The fractional value |λ1|^t / Σ_{j=1}^d |λj|^t will converge to 1 for the largest (absolute) eigenvalue
and to 0 for all others. As a result, the normalized version of A^t x will point in the direction
of the largest (absolute) eigenvector v1. Note that this proof does depend on the fact that |λ1|
is strictly greater than the next eigenvalue in absolute magnitude, or else the convergence
will not occur. Furthermore, if the top-2 eigenvalues are too similar, the convergence will
be slow. However, large machine
learning matrices (e.g., covariance matrices) are often such that the top few eigenvalues are
quite different in magnitude, and most of the similar eigenvalues are at the bottom with
values of 0. Furthermore, even when there are ties in the eigenvalues, the power method
tends to find a vector that lies within the span of the tied eigenvectors.
Problem 3.5.2 (Inverse Power Iteration) Let A be an invertible matrix. Discuss how
you can use A−1 to discover the smallest eigenvector and eigenvalue of A in absolute mag-
nitude.
A hint for the above problem is that the left eigenvectors and right eigenvectors may not be
the same in asymmetric matrices (as in symmetric matrices) and both are needed in order
to subtract the effect of dominant eigenvectors.
Problem 3.5.4 (Finding Largest Eigenvectors) The power method finds the top-k
eigenvectors of largest absolute magnitude. In most applications, we also care about the
sign of the eigenvector. In other words, an eigenvalue of +1 is greater than −2, when sign
is considered. Show how you can modify the power method to find the top-k eigenvectors of
a symmetric matrix when sign is considered.
The key point in the above exercise is to translate the eigenvalues to nonnegative values by
modifying the matrix using the ideas already discussed in this section.
3.6 Summary
Diagonalizable matrices represent a form of linear transformation, so that multiplication of
a vector with such a matrix corresponds to anisotropic scaling of the vector in (possibly
non-orthogonal) directions. Not all matrices are diagonalizable. Symmetric matrices are
always diagonalizable, and they can be represented as scaling transformations in mutually
orthogonal directions. When the scaling factors of symmetric matrices are nonnegative, they
are referred to as positive semidefinite matrices. Such matrices frequently arise in different
types of machine learning applications. Therefore, this chapter has placed a special emphasis
on these types of matrices and their eigendecomposition properties. We also introduce a
number of key optimization applications of such matrices, which sets the stage for more
detailed discussions in later chapters.
3.8 Exercises
1. In Chapter 2, you learned that any d × d orthogonal matrix A can be decomposed
into O(d²) Givens rotations and at most one elementary reflection. Discuss how the
sign of the determinant of A determines whether or not a reflection is needed.
2. In Chapter 2, you learned that any d × d matrix A can be decomposed into at most
O(d) Householder reflections. Discuss the effect of the sign of the determinant on the
number of Householder reflections.
3. Show that if a matrix A satisfies A² = 4I, then all eigenvalues of A are either 2 or −2.
4. You are told that a 4×4 symmetric matrix has eigenvalues 4, 3, 2, and 2. You are given
the values of eigenvectors belonging to the eigenvalues 4 and 3. Provide a procedure
to reconstruct the entire matrix. [Hint: One eigenvalue is repeated and the matrix is
symmetric.]
6. For a 4 × 4 matrix A with the following list of eigenvalues obtained from the
characteristic polynomial, state in each case whether the matrix is guaranteed to
be diagonalizable, invertible, both, or neither: (a) {λ1 , λ2 , λ3 , λ4 } = {1, 3, 4, 9} (b)
{λ1 , λ2 , λ3 , λ4 } = {1, 3, 3, 9} (c) {λ1 , λ2 , λ3 , λ4 } = {0, 3, 4, 9} (d) {λ1 , λ2 , λ3 , λ4 } =
{0, 3, 3, 9} (e) {λ1 , λ2 , λ3 , λ4 } = {0, 0, 4, 9}.
7. Show that any real-valued matrix of odd dimension must have at least one real eigen-
value. Show the related fact that the determinant of a real-valued matrix without
any real eigenvalues is always positive. Furthermore, show that a real-valued matrix
of even dimension with a negative determinant must have at least two distinct real-
valued eigenvalues. [Hint: Properties of polynomial roots.]
10. Let A and B be d × d matrices. Show that the matrix AB − BA can never be positive
semidefinite unless it is the zero matrix. [Hint: Use properties of the trace.]
11. Can the square of a matrix that does not have real eigenvalues be diagonalizable with
real eigenvalues? If no, provide a proof. If yes, provide an example.
12. If the matrices A, B, and AB are all symmetric, show that the matrices A and B
must be simultaneously diagonalizable. [Hint: See Problem 1.2.7 in Chapter 1.]
13. Suppose that the d × d matrix S is a symmetric, positive semidefinite matrix, and the
matrix D is of size n × d. Show that DSD^T must also be a symmetric, positive
semidefinite matrix. Note that DSD^T is a matrix of inner products between rows of
D, which is a generalization of the dot product matrix DD^T.
14. Let S be a positive semidefinite matrix, which can therefore be expressed in Gram
matrix form as S = B^T B (Lemma 3.3.14). Use this fact to show that a diagonal entry
can never be negative. What does this imply for the convexity of quadratic functions?
15. Show that if a matrix P satisfies P² = P, then all its eigenvalues must be 1 or 0.
16. Show that a matrix A is always similar to its transpose A^T. [Hint: Show that if A is
similar to U, then A^T is similar to U^T. Then show that a matrix U in Jordan normal
form is similar to its transpose with the use of a permutation matrix.]
17. Let x be a right eigenvector (column vector) of square matrix A with eigenvalue λr.
Let y be a left eigenvector (row vector) of A with eigenvalue λl ≠ λr. Show that x and
y^T are orthogonal. [Hint: The spectral theorem contains a special case of this result.
Problem 3.3.6 is also a special case for diagonalizable matrices.]
18. True or False? (a) A matrix with all zero eigenvalues must be the zero matrix. (b) A
symmetric matrix with all zero eigenvalues must be the zero matrix.
19. Show that if λ is a non-zero eigenvalue of AB, then it must also be a non-zero eigen-
value of BA. Why does this argument not work for zero eigenvalues? Furthermore,
show that if either A or B is invertible, then AB and BA are similar.
20. Is the quadratic function f(x1, x2, x3) = 2x1² + 3x2² + 2x3² − 3x1x2 − x2x3 − 2x1x3 convex?
How about the function g(x1, x2, x3) = 2x1² − 3x2² + 2x3² − 3x1x2 − x2x3 − 2x1x3? In
each case, find the minimum of the objective function, subject to the constraint that
the norm of [x1, x2, x3]^T is 1.
21. Consider the function f(x1, x2) = x1² + 3x1x2 + 6x2². Propose a linear transformation
of the variables so that the function is separable in terms of the new variables. Use
the separable form of the objective function to find an optimal solution.
22. Show that the difference between two similar, symmetric matrices must be indefinite,
unless both matrices are the same. [Hint: Use properties of the trace.]
23. Show that an nth root of a d × d diagonalizable matrix can always be found, as long
as we allow for complex roots. Provide a geometric interpretation of the resulting
matrix in terms of its relationship to the original matrix in the case where the root is
a real-valued matrix.
24. Generate the equation of an ellipsoid centered at [1, −1, 1]^T, and whose axes direc-
tions are the orthogonal vectors [1, 1, 1]^T, [1, −2, 1]^T, and [1, 0, −1]^T. The ellipsoid is
stretched in these directions in the ratio 1 : 2 : 3. The answer to this question is not
unique, and it depends on the size of your ellipsoid. Use the matrix form of ellipsoids
discussed in the chapter. [Be careful about the mapping of the stretching ratios to the
eigenvalues of this matrix both in terms of magnitude and relative ordering.]
25. If A and B are symmetric matrices whose eigenvalues lie in [λ1, λ2] and [γ1, γ2], respec-
tively, show that the eigenvalues of A − B lie in [λ1 − γ2, λ2 − γ1]. [Think geometrically
about the effect of the multiplication of a vector with (A − B). Also think of the
norm-constrained optimization problem of x^T C x for C chosen appropriately.]
26. Nilpotent Matrix: Consider a non-zero, square matrix A satisfying A^k = 0 for some
integer k. Such a matrix is referred to as nilpotent. Show that all eigenvalues of A are
0 and that such a matrix is defective.
27. Show that A is diagonalizable in each case if (i) it satisfies A² = A, and (ii) it satisfies
A² = I.
28. Elementary Row Addition Matrix Is Defective: Show that the d × d elementary
row addition matrix with 1s on the diagonal and a single non-zero off-diagonal entry
is not diagonalizable.
29. Symmetric and idempotent matrices: Show that any n × n matrix P satisfying
P² = P and P = P^T can be expressed in the form QQ^T for some n × d matrix Q with
orthonormal columns (and is hence an alternative definition of a projection matrix).
30. Diagonalizability and Nilpotency: Show that every square matrix can be ex-
pressed as the sum of a diagonalizable matrix and a nilpotent matrix (including zero
matrices for either part).
31. Suppose you are given the Cholesky factorization LL^T of a positive-definite matrix A.
Show how to compute the inverse of A using multiple applications of back substitution.
32. Rotation in 3-d with arbitrary axis: Suppose that the vector [1, 2, −1]^T is the
axis of a counter-clockwise rotation of θ degrees, just as [1, 0, 0]^T is the axis of the
counter-clockwise θ-rotation of a column vector with the Givens matrix:
R[1,0,0] =
⎡ 1     0        0     ⎤
⎢ 0   cos(θ)  −sin(θ) ⎥
⎣ 0   sin(θ)   cos(θ) ⎦
Create a new orthogonal basis system of R³ that includes [1, 2, −1]^T. Now use the
concept of similarity R[1,2,−1] = P R[1,0,0] P^T to create a 60° rotation matrix M about
the axis [1, 2, −1]^T. The main point is in knowing how to infer P from the aforementioned
orthogonal basis system. Be careful of avoiding inadvertent reflections during
the basis transformation by checking det(P ). Now show how to recover the axis and
angle of rotation from M using complex-valued diagonalization. [Hint: The eigenvalues
are the same for similar matrices and the axis of rotation is an invariant direction.]
33. Show how you can use the Jordan normal form of a matrix to quickly identify its rank
and its four fundamental subspaces.
34. Consider the following quadratic form:
38. Eigenvalues are scaling factors along specific directions. Construct a 2 × 2 diagonaliz-
able matrix A and 2-dimensional vector x, so that each eigenvalue of A is less than 1
in absolute magnitude and the length of Ax is larger than that of x. Prove that any
such matrix A cannot be symmetric. Explain both phenomena geometrically.
41. You have a 100000 × 100 sparse matrix D, and you want to compute the dominant
eigenvector of its left Gram matrix DD^T. Unfortunately, DD^T is a non-sparse matrix
of size 100000 × 100000, which causes computational problems. Show how you can
implement the power method using only sparse matrix-vector multiplications.
42. Multiple choice: Suppose xi^T A xi > 0 for d vectors x1 . . . xd and d × d symmetric
matrix A. Then, A is always positive definite if the different xi’s are (i) linearly
independent, (ii) orthogonal, (iii) A-orthogonal, (iv) any of the above, or (v) none of
the above? Justify your answer.
43. Convert the diagonalization in the statement of Exercise 40 into Gram matrix form
A = B^T B and then compute the Cholesky factorization A = LL^T = R^T R using the
QR decomposition B = QR.
Chapter 4

Optimization Basics: A Machine Learning View
4.1 Introduction
Machine learning models are often cast as continuous optimization problems in mul-
tiple variables. The simplest example of such a problem is least-squares regression, which is
also viewed as a fundamental problem in linear algebra. This is because solving a (consistent)
system of equations is a special case of least-squares regression. In least-squares regression,
one finds the best-fit solution to a system of equations that may or may not be consistent,
and the loss corresponds to the aggregate squared error of the best fit. The special case
of a consistent system of equations yields a loss value of 0. Least-squares regression has a
special place in linear algebra, optimization, and machine learning, because it serves as a
foundational problem in all three disciplines. Least-squares regression historically preceded
the classification problem in machine learning, and the optimization models for classifica-
tion were often motivated as modifications of the least-squares regression model. The main
difference between least-squares regression and classification is that the predicted target
variable is numerical in the former, whereas it is discrete (typically binary) in the latter.
Therefore, the optimization model for linear regression needs to be “repaired” in order to
make it usable for discrete target variables. This chapter will make a special effort to show
why least-squares regression is so foundational to machine learning.
Most continuous optimization methods use differential calculus in one form or another.
Differential calculus is an old discipline, and it was independently invented by Isaac Newton
and Gottfried Leibniz in the 17th century. The main idea of differential calculus is to provide
a quantification of the instantaneous rate of change of an objective function with respect to
each of the variables in its argument. Optimization methods based on differential calculus
use the fact that the rate of change of an objective function at a particular set of values
of the optimization variables provides hints on how to iteratively change the optimization
variable(s) and bring them closer to an optimum solution. Such iterative algorithms are easy
to implement on modern computers. Although computers had not been invented in the 17th
century, Newton proposed several iterative methods to provide humans a systematic way to
manually solve optimization problems (albeit with some rather tedious work). It was natural
to adapt these methods later as computational algorithms, when computers were invented.
This chapter will introduce the basics of optimization and the associated computational
algorithms. Later chapters will expand on these ideas.
This chapter is organized as follows. The next section will discuss the basics of opti-
mization. The notion of convexity is introduced in Section 4.3 because of its importance in
machine learning. Important details of gradient descent are discussed in Section 4.4. Optimization
problems are often manifested differently in machine learning than in traditional
applications; this issue will be discussed in Section 4.5.
Useful matrix calculus notations and identities are introduced in Section 4.6 for computing
the derivatives of objective functions with respect to vectors. The least-squares regression
problem is introduced in Section 4.7. The design of machine learning algorithms with dis-
crete targets is presented in Section 4.8. Optimization models for multiway classification
are discussed in Section 4.9. Coordinate descent methods are discussed in Section 4.10. A
summary is given in Section 4.11.
f(x) = x² − 2x + 3   (4.1)
This objective function is an upright parabola, which can also be expressed in the form
f(x) = (x − 1)² + 2. The objective function is shown in Figure 4.2(a); it clearly takes on its
minimum value at x = 1, where the nonnegative term (x − 1)² drops to 0. Note that at the
minimum value, the rate of change of f (x) with respect to x is zero, as the tangent to the
[Figure 4.1: Plot of the function x³ for x ∈ [−1, 1], with an inflection point at x = 0.]
plot at that point is horizontal. One can also find this optimal value by computing the first
derivative f′(x) of the function f(x) with respect to x and setting it to 0:

f′(x) = df(x)/dx = 2x − 2 = 0   (4.2)
Therefore, we obtain x = 1 as an optimum value. Intuitively, the function f (x) changes
at zero rate on slightly perturbing the value of x from x = 1, which suggests that it is an
optimal point. However, this analysis alone is not sufficient to conclude that the point is a
minimum. In order to understand this point, consider the inverted parabola, obtained by
setting g(x) = −f (x):
g(x) = −f(x) = −x² + 2x − 3   (4.3)
Setting the derivative of g(x) to 0 yields exactly the same solution of x = 1:
g′(x) = 2 − 2x = 0   (4.4)
However, in this case the solution x = 1 is a maximum rather than a minimum. Furthermore,
the point x = 0 is an inflection point or saddle point of the function F(x) = x³ (cf.
Figure 4.1), even though the derivative is 0 at x = 0. Such a point is neither a maximum
nor a minimum.
All points for which the first derivative is zero are referred to as critical points of the
optimization problem. A critical point might be a maximum, minimum, or saddle point.
How does one distinguish between the different cases for critical points? One observation is
that a function looks like an upright bowl at a minimum point, which implies that its first
derivative increases at minima. In other words, the second derivative (i.e., derivative of the
derivative) will be positive for minima (although there are a few exceptions to this rule).
For example, the second derivatives for the two quadratic functions f(x) and g(x) discussed
above are as follows:

f″(x) = 2 > 0,    g″(x) = −2 < 0
The case where the second derivative is zero is somewhat ambiguous, because such a point
could be a minimum, maximum, or an inflection point. Such a critical point is referred to as
degenerate. Therefore, for a single-variable optimization function f(x) in minimization form,
satisfying both f′(x) = 0 and f″(x) > 0 is sufficient to ensure that the point is a minimum
with respect to its immediate locality. Such a point is referred to as a local minimum. This
does not, however, mean that the point x is a global minimum across the entire range of
values of x.
Lemma 4.2.1 (Optimality Conditions in Unconstrained Optimization) A univari-
ate function f(x) attains a minimum value at x = x0 with respect to its immediate locality
if it satisfies both f′(x0) = 0 and f″(x0) > 0.
These conditions are referred to as first-order and second-order conditions for minimization.
The above conditions are sufficient for a point to be minimum with respect to its infinites-
imal locality, and they are “almost” necessary for the point to be a minimum with respect
to its locality. We use the word “almost” in order to address the degenerate case where a
point x0 might satisfy f′(x0) = 0 and f″(x0) = 0. This type of setting is an ambiguous
situation where the point x0 might or might not be a minimum. As an example of this
ambiguity, the functions F(x) = x³ and G(x) = x⁴ have zero first and second derivatives
at x = 0, but only the latter is a minimum. One can understand the optimality condition
of Lemma 4.2.1 by using a Taylor expansion of the function f(x) within a small locality
x0 + Δ (cf. Section 1.5.1 of Chapter 1):

f(x0 + Δ) ≈ f(x0) + Δ f′(x0) + (Δ²/2) f″(x0)

Here, the first-order term Δ f′(x0) evaluates to 0 at a critical point.
Note that Δ might be either positive or negative, although Δ² will always be positive.
The value of |Δ| is assumed to be extremely small, and successive terms rapidly drop off
in magnitude. Therefore, it makes sense to keep only the first non-zero term in the above
expansion in order to meaningfully compare f(x0) with f(x0 + Δ). Since f′(x0) is zero, the
first non-zero term is the second-order term containing f″(x0). Furthermore, since Δ² and
f″(x0) are positive, it follows that f(x0 + Δ) = f(x0) + ε, where ε is some positive quantity.
This means that f(x0) is less than f(x0 + Δ) for any small value of Δ, whether it is positive
or negative. In other words, x0 is a minimum with respect to its immediate locality.
The Taylor expansion also provides insights as to why the degenerate case f′(x0) =
f″(x0) = 0 is problematic. In the event that f″(x0) is zero, one would need to keep expanding
the Taylor series until one reaches the first non-zero term. If the first non-zero term is
positive, then one can show that f(x0 + Δ) > f(x0). An example of such a function is
f(x) = x⁴ at x0 = 0. In such a case, x0 is indeed a minimum with respect to its immediate
locality. However, if the first non-zero term is negative or it depends on the sign of Δ, it
could be a maximum or saddle point; an example is the inflection point of x³ at the origin,
which is shown in Figure 4.1.
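One can probe a degenerate critical point numerically by comparing function values in a small neighborhood; the probe width Δ = 10⁻³ below is an arbitrary choice, so this is a heuristic sketch rather than a rigorous test:

```python
# Probe the degenerate critical point x0 = 0, where both the first and second
# derivatives of F(x) = x^3 and G(x) = x^4 vanish.
def classify(f, x0=0.0, delta=1e-3):
    left, mid, right = f(x0 - delta), f(x0), f(x0 + delta)
    if mid < left and mid < right:
        return "local minimum"
    if mid > left and mid > right:
        return "local maximum"
    return "neither (saddle/inflection)"

print(classify(lambda x: x**3))   # neither (saddle/inflection)
print(classify(lambda x: x**4))   # local minimum
```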
Problem 4.2.1 Consider the quadratic function f (x) = ax2 +bx+c. Show that a point can
be found at which f (x) satisfies the optimality condition (for minimization) when a > 0.
Show that the optimality condition (for maximization) is satisfied when a < 0.
[Figure 4.2: (a) An objective function with a single global minimum; (b) an objective function with both a local minimum and a global minimum. In each panel, the horizontal axis is the optimization variable and the vertical axis is the objective function value.]
A quadratic function is a rather simple case in which a single minimum or maximum ex-
ists, depending on the sign of the quadratic term. However, other functions have multiple
turning points. For example, the function sin(x) is periodic, and has an infinite number of
minima/maxima over x ∈ (−∞, +∞). It is noteworthy that the optimality conditions of
Lemma 4.2.1 only focus on defining a minimum in a local sense. In other words, the point is
minimum with respect to its infinitesimal locality. A point that is a minimum only with re-
spect to its immediate locality is referred to as a local minimum. Intuitively, the word “local”
refers to the fact that the point is a minimum only within its neighborhood of (potentially)
infinitesimal size. The minimum across the entire domain of values of the optimization vari-
able is the global minimum. It is noteworthy that the conditions of Lemma 4.2.1 do not tell
us with certainty whether or not a point is a global minimum. However, these conditions
are sufficient for a point to be at least a local minimum and “almost” necessary to be a
local minimum (i.e., necessary with the exception of the degenerate case discussed earlier
with a zero second derivative).
Next, we will consider an objective function that has both local and global minima:
This function is shown in Figure 4.2(b), and it has two possible minima. The minimum at
x = −1 is a local minimum, and the minimum at x = 2 is a global minimum. Both the local
and global minima are shown in Figure 4.2(b). On differentiating F (x) with respect to x
and setting it to zero, we obtain the following condition:
x³ − x² − 2x = x(x + 1)(x − 2) = 0
The roots are x ∈ {−1, 0, 2}. The second derivative is 3x² − 2x − 2, which is positive at −1
and 2 (minima), and negative at x = 0 (maximum). The values of the function at the two
minima are as follows:
Problem 4.2.3 Find the local and global optima of F (x) = (x − 1)2 [(x − 3)2 − 1]. Which
of these are maxima and which are minima?
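The critical-point analysis above can be reproduced numerically: the roots of F′(x) = x³ − x² − 2x are found with a polynomial solver (itself eigenvalue-based in NumPy) and classified using the second derivative. A minimal sketch:

```python
import numpy as np

# First-order condition F'(x) = x^3 - x^2 - 2x = 0, coefficients in descending order.
critical_points = np.sort(np.roots([1.0, -1.0, -2.0, 0.0]).real)

def second_derivative(x):
    # F''(x) = 3x^2 - 2x - 2
    return 3 * x**2 - 2 * x - 2

for x in critical_points:
    kind = "minimum" if second_derivative(x) > 0 else "maximum"
    print(round(float(x), 6) + 0.0, kind)
```

Running this recovers the three critical points −1, 0, and 2, with the middle one classified as a maximum and the outer two as minima, matching the analysis in the text.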
This equation is somewhat hard to solve, although iterative methods exist for solving it. By
trial and error, one might get lucky and find out that x = 1 is indeed a solution to the first-
order optimality condition because it satisfies f′(1) = 2 loge(1) + 1 − 1 = 0. Furthermore,
the second derivative f″(x) can be shown to be positive at x = 1, and therefore this point
is at least a local minimum. However, solving an equation like this numerically causes all
types of numerical and computational challenges; these types of challenges increase when
we move from univariate optimization to multivariate optimization.
A very popular approach for optimizing objective functions (irrespective of their func-
tional form) is to use the method of gradient descent. In gradient descent, one starts at an
initial point x = x0 and successively updates x using the steepest descent direction:
x ⇐ x − α f′(x)
Here, α > 0 regulates the step size, and is also referred to as the learning rate. In the uni-
variate case, the notion of “steepest” is hard to appreciate, as there are only two directions
of movement (i.e., increase x or decrease x). One of these directions causes ascent, whereas
the other causes descent. However, in multivariate problems, there can be an infinite number
of possible directions of descent, and the generalization of the notion of univariate deriva-
tive leads to the steepest descent direction. The value of x changes in each iteration by
δx = −αf'(x). Note that at infinitesimally small values of the learning rate α > 0, the
4.2. THE BASICS OF OPTIMIZATION 147
above updates will always reduce f (x). This is because for very small α, we can use the
first-order Taylor expansion to obtain the following:
f(x + δx) ≈ f(x) + δx · f'(x) = f(x) − α[f'(x)]² < f(x) (4.6)
Using very small values of α > 0 is not advisable because it will take a long time for the
algorithm to converge. On the other hand, using large values of α could make the effect of
the update unpredictable with respect to the computed gradient (as the first-order Taylor
expansion is no longer a good approximation). After all, the gradient is only an instantaneous
rate of change, and it does not apply over larger ranges. Therefore, large step-sizes could
cause the solution to overshoot an optimal value, if the sign of the gradient changes over
the length of the step. At extremely large values of the learning rate, it is even possible for
the solution to diverge, where it moves at an increasing speed towards large absolute values,
and typically terminates with a numerical overflow.
In the following, we will show two iterations of the gradient descent procedure for the
function of Equation 4.5. Consider the case where we start at x0 = 2, which is larger
than the optimal value of x = 1. At this point, the value of the derivative can be shown to be f'(2) = 2 logₑ(2) + 1 ≈ 2.4. If we use α = 0.2, then the value of x gets updated from x0 as follows:
x1 ⇐ x0 − 0.2 ∗ 2.4 = 2 − 0.48 = 1.52
This new value of x is closer to the optimal solution. One can then recompute the derivative
at x1 = 1.52 and perform the update x ⇐ 1.52 − 0.2 ∗ f'(1.52). Performing this update again
and again to construct the sequence x0 , x1 , x2 . . . xt will eventually converge to the optimal
value of xt = 1 for large values of t. Note that the choice of α does matter. For example, if
we choose α = 0.8, then it results in the following update:
x1 ⇐ x0 − αf'(x0) = 2 − 2.4 ∗ 0.8 = 0.08
In this case, the solution has overshot the optimal value of x = 1, although it is still closer
to the optimal solution than the initial point of x0 = 2. The solution can still be shown to
converge to an optimal value, but after a longer time. As we will see later, even this is not
guaranteed in all cases.
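The iterations above can be reproduced with a short sketch. Since Equation 4.5 itself is not reproduced in this excerpt, the code below uses only the derivative f'(x) = 2 logₑ(x) + x − 1 inferred from the worked values in the text (f'(1) = 0 and f'(2) ≈ 2.4); the iteration count is an illustrative choice.

```python
import math

def f_prime(x):
    # Derivative inferred from the text's worked values:
    # f'(1) = 2*ln(1) + 1 - 1 = 0 and f'(2) = 2*ln(2) + 1 ≈ 2.4
    return 2.0 * math.log(x) + x - 1.0

def gradient_descent(x0, alpha, num_iters):
    x = x0
    for _ in range(num_iters):
        x = x - alpha * f_prime(x)   # x <= x - alpha * f'(x)
    return x

x1 = 2.0 - 0.2 * f_prime(2.0)                       # first update from x0 = 2
x_final = gradient_descent(2.0, alpha=0.2, num_iters=100)
```

With α = 0.2, the first update lands near 1.52 as in the text, and further updates converge toward the optimum x = 1.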
Figure 4.3: Typical behaviors of the objective function during convergence and divergence [(a) convergence, (b) divergence; each panel plots the optimization objective against the number of steps]
To illustrate the possibility of divergence, consider the following quadratic objective function, whose minimum is at x = 1:
f(x) = x² − 2x + 3
Now imagine a situation where the starting point is x0 = 2, and one chooses a large learning rate α = 10. The derivative f'(x) = 2x − 2 evaluates to f'(x0) = f'(2) = 2. Then, the
update from the first step yields the following:
x1 ⇐ x0 − 10 ∗ 2 = 2 − 20 = −18
Note that the new point x1 is much further away from the optimal value of x = 1, which is
caused by the overshooting problem. Even worse, the absolute gradient is very large at this
point, and it evaluates to f'(−18) = −38. If we keep the learning rate fixed, it will cause the solution to move at an even faster rate in the opposite direction:
x2 ⇐ x1 − 10 ∗ f'(x1) = −18 − 10 ∗ (−38) = 362
In this case, the solution has overshot back in the original direction but is even further away from the optimal solution. Further updates cause back-and-forth movements at increasingly large amplitudes:
x3 = −6858, x4 = 130322, . . .
Note that each iteration flips the sign of the current solution and increases its magnitude
by a factor of about 20. In other words, the solution moves away faster and faster from an
optimal solution until it leads to a numerical overflow. An example of the behavior of the
objective function during divergence is shown in Figure 4.3(b).
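The divergent iterations can be reproduced mechanically; since each update maps x to x − 10(2x − 2) = −19x + 20, the sign flips and the magnitude grows by roughly a factor of 19 per step.

```python
def f_prime(x):
    # f(x) = x^2 - 2x + 3, so f'(x) = 2x - 2
    return 2.0 * x - 2.0

alpha = 10.0          # far too large: each step maps x to -19x + 20
x = 2.0
trajectory = [x]
for _ in range(5):
    x = x - alpha * f_prime(x)
    trajectory.append(x)
# trajectory: [2.0, -18.0, 362.0, -6858.0, 130322.0, -2476098.0]
```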
It is common to reduce the learning rate over the course of the algorithm, and one of
the many purposes served by such an approach is to arrest divergence; however, in some
cases, such an approach might not prevent divergence, especially if the initial learning rate is
large. Therefore, when an analyst encounters a situation in gradient descent where the size of the parameter vector increases rapidly (and the optimization objective worsens), it is advisable to reduce the learning rate, or to restart the procedure with a smaller initial learning rate.
Note that these functions are simplified and have very special structure; they are additively
separable. Additively separable functions are those in which univariate terms are added,
and they do not interact with one another. In other words, an additively separable function
might contain terms like sin(x2 ) and sin(y 2 ), but not sin(xy). Nevertheless, these simpli-
fied polynomial functions are adequate for demonstrating the complexities associated with
multivariable optimization. In fact, as discussed in Section 3.4.4 of Chapter 3, all quadratic
functions can be represented in additively separable form (although this is not true for
non-quadratic functions). The two bivariate functions g(x, y) and G(x, y) are shown in Fig-
ure 4.4(a) and (b), respectively. It is evident that the single-variable cross-sections of the
objective functions in Figure 4.4(a) and (b) are similar to the 1-dimensional functions in
Figure 4.2(a) and (b). The objective function of Figure 4.4(a) has a single global optimum
(like the quadratic function of Figure 4.2(a) in one dimension). However, the objective func-
tion of Figure 4.4(b) has four minima, only one of which is global minimum at [x, y] = [2, 2].
Examples of local and global minima are annotated in Figure 4.4(b).
In this case, one can compute the partial derivative of the objective functions g(x, y)
and G(x, y) (of Figure 4.2) in order to perform gradient descent. A partial derivative com-
putes the derivative with respect to a particular variable, while treating other variables as
constants. In fact, a “gradient” is naturally defined as a vector of partial derivatives. One
can compute the gradient of the function g(x, y) in Figure 4.4(a) as follows:
∇g(x, y) = [∂g(x, y)/∂x, ∂g(x, y)/∂y]ᵀ = [2x − 2, 2y − 2]ᵀ
The notation “∇” is added in front of a function to denote its gradient. This notation will
be consistently used in the book, and we will occasionally add subscripts like ∇x,y g(x, y) to
clarify the choice of variables with respect to which the gradient is computed. In this case,
the gradient is a column vector with two components, because we have two optimization
variables x and y. Each component of the 2-dimensional vector is a partial derivative of
the objective function with respect to one of the two variables. The simplest approach for
150 CHAPTER 4. OPTIMIZATION BASICS: A MACHINE LEARNING VIEW
[Figure 4.4: surface plots of the objective functions g(x, y) (a) and G(x, y) (b) over the optimization variables x and y; the global minimum of G(x, y) at [2, 2] and a local minimum are annotated in (b)]
solving the optimization problem is to set the gradient ∇g(x, y) to zero, which leads to the
solution [x, y] = [1, 1]. We will discuss the second-order optimality conditions (to distinguish
between maxima, minima, and inflection points) in Section 4.2.3.
The simple approach of setting the gradient of the objective function to zero might not
always lead to a system of equations with a closed-form solution. The common solution is
to use gradient-descent updates with respect to the optimization variables [x, y] as follows:
[xt+1, yt+1]ᵀ ⇐ [xt, yt]ᵀ − α∇g(xt, yt) = [xt, yt]ᵀ − α[2xt − 2, 2yt − 2]ᵀ
So far, we have only examined additively separable functions with simple structure. Now
let us consider a somewhat more complicated function:
H(x, y) = x² − sin(xy) + y² − 2x
In such a case, the term sin(xy) ensures that the function is not additively separable. In
such a case, the gradient of the function can be shown to be the following:
∇H(x, y) = [∂H(x, y)/∂x, ∂H(x, y)/∂y]ᵀ = [2x − y cos(xy) − 2, 2y − x cos(xy)]ᵀ
Although the partial derivative components are no longer expressed in terms of individual
variables, gradient descent updates can be performed in a similar manner to the previous
case.
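These updates can be sketched directly; the snippet below runs gradient descent on H(x, y) using the gradient derived above (the starting point, learning rate, and iteration count are illustrative choices, not prescribed by the text).

```python
import math

def grad_H(x, y):
    # Gradient of H(x, y) = x^2 - sin(xy) + y^2 - 2x, as derived in the text
    return (2.0 * x - y * math.cos(x * y) - 2.0,
            2.0 * y - x * math.cos(x * y))

x, y, alpha = 0.0, 0.0, 0.1   # illustrative starting point and learning rate
for _ in range(1000):
    gx, gy = grad_H(x, y)
    x, y = x - alpha * gx, y - alpha * gy

gx, gy = grad_H(x, y)   # gradient at the final point: near zero at a critical point
```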
As in the case of univariate optimization, the presence of local optima remains a consis-
tent problem. For example, in the case of the function G(x, y) shown in Figure 4.4(b), local
optima are clearly visible. All critical points can be found by setting the gradient ∇G(x, y)
to 0:
∇G(x, y) = [x³ − x² − 2x, y³ − y² − 2y]ᵀ = 0
This optimization problem has an interesting structure, because each of the nine pairs (x, y) ∈ {−1, 0, 2} × {−1, 0, 2} satisfies the first-order optimality conditions and is therefore a critical point. Among these, there is a single global minimum, three local minima, and a single local maximum at (0, 0). The other four can be shown to be saddle points.
The classification of points as minima, maxima, or saddle points can only be accomplished
with the use of multivariate second-order conditions, which are direct generalizations of the
univariate optimality conditions of Lemma 4.2.1. The discussion of second-order optimality
conditions for the multivariate case is deferred to Section 4.2.3. Note the rapid proliferation
of the number of possible critical points satisfying the optimality conditions when the op-
timization problem contains two variables instead of one. In general, when a multivariate
problem is posed as sum of univariate functions, the number of local optima can proliferate
exponentially fast with the number of optimization variables.
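The enumeration and classification described above can be verified mechanically. The sketch below uses the per-coordinate derivatives f'(x) = x³ − x² − 2x and f''(x) = 3x² − 2x − 2, together with the fact that the Hessian of an additively separable function such as G(x, y) is diagonal, so its definiteness is read off from the signs of the diagonal entries.

```python
def f_prime(x):
    # Per-coordinate derivative of G: x^3 - x^2 - 2x = x(x+1)(x-2)
    return x**3 - x**2 - 2.0 * x

def f_double_prime(x):
    # Per-coordinate second derivative: 3x^2 - 2x - 2
    return 3.0 * x**2 - 2.0 * x - 2.0

classification = {}
for x in (-1.0, 0.0, 2.0):
    for y in (-1.0, 0.0, 2.0):
        # Every pairing of the roots is a critical point of G(x, y)
        assert abs(f_prime(x)) < 1e-12 and abs(f_prime(y)) < 1e-12
        hx, hy = f_double_prime(x), f_double_prime(y)   # diagonal Hessian entries
        if hx > 0 and hy > 0:
            classification[(x, y)] = "minimum"
        elif hx < 0 and hy < 0:
            classification[(x, y)] = "maximum"
        else:
            classification[(x, y)] = "saddle"
```

The result matches the text: four minima (one of them the global minimum at (2, 2)), one maximum at (0, 0), and four saddle points.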
Problem 4.2.4 Consider a univariate function f(x), which has k values of x satisfying the optimality condition f'(x) = 0. Let G(x, y) = f(x) + f(y) be a bivariate objective function. Show that there are k² pairs (x, y) satisfying ∇G(x, y) = 0. How many tuples [x1, . . . , xd]ᵀ would satisfy the first-order optimality condition for the d-dimensional function H(x1 . . . xd) = Σᵢ₌₁ᵈ f(xᵢ)?
In the case of the objective function of Figure 4.4(b), a single (local or global) optimum
exists in each of the four quadrants. Furthermore, it can be shown that starting the gradient
descent in a particular quadrant (at low learning rates) will converge to the single optimum
in that quadrant because each quadrant contains its own local bowl. At higher learning
rates, it is possible for the gradient descent to overshoot a local/global optimum and move
to a different bowl (or even behave in an unpredictable way with numerical overflows).
Therefore, the final resting point of gradient descent depends on (what would seem to be)
small details of the computational procedure, such as the starting point or the learning rate.
We will discuss many of these details in Section 4.4.
The function g(x, y) of Figure 4.4(a) has a single global optimum and no local optima.
In such cases, one is more likely to reach the global optimum, irrespective of where one
starts the gradient-descent procedure. The better outcome in this case is a result of the
structure of the optimization problem. Many optimization problems that are encountered
in machine learning have the nice structure of Figure 4.4(a) (or something very close to it),
as a result of which local optima cause fewer problems than would seem at first glance.
Starting from this section, we assume that only the notations w1 . . . wd represent optimiza-
tion variables, whereas the other “variables” like xi and y are really observed values from the
data set at hand (which are constants from the optimization perspective). This notation is
typical for machine learning problems. The objective functions often penalize differences in
observed and predicted values of specific attributes, such as the variable y shown above. For
example, if we have many observed tuples of the form [x1, x2 . . . xd, y], one can sum up the values of (y − Σᵢ₌₁ᵈ wᵢxᵢ)² over all the observed tuples. Such objective functions are often
referred to as loss functions in machine learning parlance. Therefore, we will often substitute
the term “objective function” with “loss function” in the remainder of this chapter. In this
section, we will assume that the loss function J(w) is a function of a vector of multiple
optimization variables w = [w1 . . . wd ]T . Unlike the discussion in the preceding sections, we
will use the notations w1 . . . wd for optimization variables, because the notations X, xi , y,
and yi , will be reserved for the attributes in the data (whose values are observed). Although
attributes are also sometimes referred to as “variables” (e.g., dependent and independent
variables) in machine learning parlance, they are not variables from the perspective of the
optimization problem. The values of the attributes are always fixed based on the observed
data during training, and therefore appear among the (constant) coefficients of the opti-
mization problem. Confusingly, these attributes (with constant observed values) are also
referred to as “variables” in machine learning, because they are arguments of the predic-
tion function that the machine learning algorithm is trying to model. The use of notations
such as X, xi , y, and yi to denote attributes is a common practice in the machine learning
community. Therefore, the subsequent discussion in this chapter will be consistent with
this convention. The value of d corresponds to the number of optimization variables in the
problem at hand, and the parameter vector w = [w1 . . . wd ]T is assumed to be a column
vector.
The computation of the gradient of an objective function of d variables is similar to the
bivariate case discussed in the previous section. The main difference is that a d-dimensional
vector of partial derivatives is computed instead of a 2-dimensional vector. The ith com-
ponent of the d-dimensional gradient vector is the partial derivative of J with respect to
the ith parameter wi . The simplest approach to solve the optimization problem directly
(without gradient descent) is to set the gradient vector to zero, which leads to the following
set of d conditions:
∂J(w)
= 0, ∀i ∈ {1 . . . d}
∂wi
These conditions lead to a system of d equations, which can be solved to determine the
parameters w1 . . . wd . As in the case of univariate optimization, one would like to have
a way to characterize whether a critical point (i.e., zero-gradient point) is a maximum,
minimum, or inflection point. This brings us to the second-order condition. Recall that
in single-variable optimization, the condition for a critical point of f(w) to be a minimum is f''(w) > 0. In
multivariate optimization, this principle is generalized with the use of the Hessian matrix.
Instead of a scalar second derivative, we have a d × d matrix of second-derivatives, which
includes pairwise derivatives of J with respect to different pairs of variables. The Hessian
of the loss function J(w) with respect to the optimization variables w1 . . . wd is given by a
d × d symmetric matrix H, in which the (i, j)th entry Hij is defined as follows:
Hij = ∂²J(w)/(∂wi ∂wj) (4.7)
Note that the (i, j)th entry of the Hessian is equal to the (j, i)th entry because partial
derivatives are commutative according to Schwarz’s theorem. The fact that the Hessian is
a symmetric matrix is helpful in many computational algorithms that require eigendecom-
position of the matrix.
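Schwarz's theorem can be illustrated numerically. The sketch below estimates the mixed partial of the function H(x, y) = x² − sin(xy) + y² − 2x (used earlier in this chapter) by central finite differences, and compares it with the analytical off-diagonal Hessian entry −cos(xy) + xy·sin(xy), which is the same in either order of differentiation; the evaluation point and step size are arbitrary choices.

```python
import math

def H(x, y):
    return x * x - math.sin(x * y) + y * y - 2.0 * x

def mixed_partial(f, x, y, h=1e-4):
    # Central finite-difference estimate of d^2 f / (dx dy)
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h * h)

x0, y0 = 0.7, 0.3
numeric = mixed_partial(H, x0, y0)
# Analytical mixed partial of H: -cos(xy) + xy*sin(xy)
analytic = -math.cos(x0 * y0) + x0 * y0 * math.sin(x0 * y0)
```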
The Hessian matrix is a direct generalization of the univariate second derivative f''(w). For a univariate function, the Hessian is a 1 × 1 matrix containing f''(w) as its only entry.
Strictly speaking, the Hessian is a function of w, and should be denoted by H(w), although
we denote it by H for brevity. In the event that the function J(w) is quadratic, the entries
in the Hessian matrix do not depend on the parameter vector w = [w1 . . . wd]ᵀ. This is similar to the univariate case, where the second derivative f''(w) is a constant when the function f(w) is quadratic. In general, however, the Hessian matrix depends on the value
of the parameter vector w at which it is computed. For a parameter vector w at which
the gradient is zero (i.e., a critical point), one needs to test the Hessian matrix H in the same way we test f''(w) in univariate functions. Just as f''(w) needs to be positive for a point w to be a minimum, the Hessian matrix H needs to be positive-definite for a point
to be guaranteed to be a minimum. In order to understand this point, we consider the
second-order, multivariate Taylor expansion of J(w) in the immediate locality of w0 along the direction v with small radius ε > 0:
J(w0 + εv) ≈ J(w0) + ε vᵀ[∇J(w0)] + (ε²/2) vᵀHv (4.8)
At a critical point, the gradient ∇J(w0) is the zero vector, and therefore J(w0 + εv) − J(w0) ≈ (ε²/2) vᵀHv. This difference is positive in every direction v precisely when H is positive definite, which is why a critical point with a positive-definite Hessian is guaranteed to be a (local) minimum.
Problem 4.2.5 The gradient of the objective function J(w) is 0 and the determinant of
the Hessian is negative at w = w0 . Is w0 a minimum, maximum, or a saddle-point?
Figure 4.5: Re-visiting Figure 3.6: Illustration of saddle point created by indefinite Hessian
Setting the gradient of the objective function to 0 and then solving the resulting system of
equations is usually computationally difficult. Therefore, gradient-descent is used. In other
words, we use the following updates repeatedly with learning rate α:
[w1 . . . wd]ᵀ ⇐ [w1 . . . wd]ᵀ − α [∂J(w)/∂w1 . . . ∂J(w)/∂wd]ᵀ (4.9)
One can also write the above expression in terms of the gradient of the objective function
with respect to w:
w ⇐ w − α∇J(w)
Here, ∇J(w) is a column vector containing the partial derivatives of J(w) with respect to
the different parameters in the column vector w. Although the learning rate α is shown as
a constant here, it usually varies over the course of the algorithm (cf. Section 4.4.2).
[Figure 4.6: examples of convex and non-convex sets; the line segment joining any pair of points X and Y in a convex set lies entirely within the set]
In other words, it is impossible to find a pair of points in the set such that any of the points
on the straight line joining them do not lie in the set. A closed convex set is one in which
the boundary points (i.e., limit points) of the set are included within the set, whereas an
open convex set is one in which all points within the boundary are included but not the
boundary itself. For example, in 1-dimensional space, the set [−2, +2] is a closed convex set, whereas the set (−2, +2) is an open convex set.
Examples of convex and non-convex sets are illustrated in Figure 4.6. A circle, an ellipse,
a square, or a half-moon are all convex sets. However, a three-quarter circle is not a convex
set because one can draw a line between the two points inside the set, so that a portion of
the line lies outside the set (cf. Figure 4.6).
A convex function F (w) is defined as a function with a convex domain that satisfies the
following condition for any λ ∈ (0, 1):
F (λw1 + (1 − λ)w2 ) ≤ λF (w1 ) + (1 − λ)F (w2 ) (4.10)
One can generalize the convexity condition to k points, as discussed in the practice problem
below.
Problem 4.3.1 For a convex function F(·), and k parameter vectors w1 . . . wk, show that the following is true for any λ1 . . . λk ≥ 0 satisfying Σᵢ λᵢ = 1:
F(Σᵢ₌₁ᵏ λᵢwᵢ) ≤ Σᵢ₌₁ᵏ λᵢF(wᵢ)
The simplest example of a convex objective function is the class of quadratic functions in
which the leading (quadratic) term has a nonnegative coefficient:
f(w) = a · w² + b · w + c
Here, the coefficient a needs to be nonnegative for the function to be convex. The result can be shown by using the convexity condition above. All linear functions are always convex,
because the convexity property holds with equality.
Lemma 4.3.1 A linear function of the vector w is always convex.
Convex functions have a number of useful properties that are leveraged in practical appli-
cations.
Lemma 4.3.2 Convex functions obey the following properties:
1. The sum of convex functions is always convex.
2. The maximum of convex functions is convex.
3. The square of a nonnegative convex function is convex.
4. If F (·) is a convex function with a single argument and G(w) is a linear function with
a scalar output, then F (G(w)) is convex.
5. If F (·) is a convex non-increasing function and G(w) is a concave function with a
scalar output, then F (G(w)) is convex.
6. If F (·) is a convex non-decreasing function and G(w) is a convex function with a
scalar output, then F (G(w)) is convex.
We leave the detailed proofs of these results (which can be derived from Equation 4.10) as
an exercise:
Problem 4.3.2 Prove all the results of Lemma 4.3.2 using the definition of convexity.
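Some of these closure properties can be spot-checked numerically. This is no substitute for the proofs requested in Problem 4.3.2, and the sampled functions, ranges, and trial counts below are arbitrary illustrative choices; the check simply samples the convexity inequality of Equation 4.10 at random points.

```python
import random

def is_convex_on_samples(F, trials=2000, lo=-3.0, hi=3.0, tol=1e-12):
    # Spot-check F(lam*a + (1-lam)*b) <= lam*F(a) + (1-lam)*F(b) on random samples
    rng = random.Random(1)
    for _ in range(trials):
        a, b = rng.uniform(lo, hi), rng.uniform(lo, hi)
        lam = rng.random()
        if F(lam * a + (1 - lam) * b) > lam * F(a) + (1 - lam) * F(b) + tol:
            return False
    return True

f1 = lambda x: (x - 1.0)**2
f2 = lambda x: (x + 2.0)**2
sum_f = lambda x: f1(x) + f2(x)        # property 1: sum of convex functions
max_f = lambda x: max(f1(x), f2(x))    # property 2: maximum of convex functions
cube = lambda x: x**3                  # product x * x^2: not convex
```

The sampled checks pass for the sum and the maximum, while x³ (the product of the convex functions x and x²) visibly violates the convexity inequality on the negative axis.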
There are several natural combinations of convex functions that one might expect to be
convex at first glance, but turn out to be non-convex on closer examination. The product of
two convex functions is not necessarily convex. The functions f(x) = x and g(x) = x² are convex functions, but their product h(x) = f(x) · g(x) = x³ is not convex (see Figure 4.1).
Furthermore, the composition of two convex functions is not necessarily convex, and it
might be indefinite or concave. As a specific example, consider the linear convex function
f(x) = −x and also the quadratic convex function g(x) = x². Then, we have f(g(x)) = −x², which is a concave function. The result on the composition of functions is important
from the perspective of deep neural networks (cf. Chapter 11). Even though the individual
nodes of neural networks usually compute convex functions, the composition of the functions
computed by successive nodes is often not convex.
A nice property of convex functions is that a local minimum will also be a global mini-
mum. If there are two “local” minima, then the above convexity condition ensures that the
entire line joining them also has the same objective function value.
Problem 4.3.3 Use the convexity condition to show that every local minimum in a convex
function must also be a global minimum.
The fact that every local minimum is a global minimum can also be characterized by using a
geometric definition of convexity. This geometric definition, which is also referred to as the
first-derivative condition, is that the entire convex function will always lie above a tangent
to a convex function, as shown in Figure 4.7. This figure illustrates a 2-dimensional convex
function, where the horizontal directions are arguments to the function (i.e., optimization
variables), and the vertical direction is the objective function value. An important conse-
quence of convexity is that one is often guaranteed to reach a global optimum if successful
convergence occurs during the gradient-descent procedure.
The condition of Figure 4.7 can also be written algebraically using the gradient of the
convex function at a given point w0 . In fact, this condition provides an alternative definition
of convexity. We summarize this condition below:
Lemma 4.3.3 (First-Derivative Characterization of Convexity) A differentiable
function F (w) is a convex function if and only if the following is true for any pair w0
and w:
F (w) ≥ F (w0 ) + [∇F (w0 )] · (w − w0 )
We omit a detailed proof of the lemma. Note that if the gradient of F (w) is zero at w = w0 ,
it would imply that F (w) ≥ F (w0 ) for any w. In other words, w0 is a global minimum.
Therefore, any critical point that satisfies the first-derivative condition is a global minimum.
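The first-derivative characterization can be spot-checked numerically. The sketch below samples random pairs of points for the convex function F(x) = logₑ(1 + exp(−x)) (the same function that appears in Problem 4.3.4 later in this section); the sampling range is an arbitrary choice.

```python
import math
import random

def F(x):
    # Logistic loss: a convex univariate function
    return math.log(1.0 + math.exp(-x))

def F_prime(x):
    # Derivative of F: -exp(-x) / (1 + exp(-x)) = -1 / (1 + exp(x))
    return -1.0 / (1.0 + math.exp(x))

random.seed(0)
violations = 0
for _ in range(1000):
    w0 = random.uniform(-5.0, 5.0)
    w = random.uniform(-5.0, 5.0)
    # First-derivative characterization: F(w) >= F(w0) + F'(w0) * (w - w0)
    if F(w) < F(w0) + F_prime(w0) * (w - w0) - 1e-12:
        violations += 1
```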
4.3. CONVEX OBJECTIVE FUNCTIONS 157
OPTIMIZATION
VARIABLES
TANGENT
HYPERPLANE
Figure 4.7: A convex function always lies entirely above any tangent to the surface. The
example illustrates a 2-dimensional function, where the two horizontal axes are the opti-
mization variables and the vertical axis is the objective function value
The main disadvantage of the first-derivative condition (with respect to the direct definition
of convexity) is that it applies only to differentiable functions. Interestingly, there is a third
characterization of convexity in terms of the second derivative: a twice-differentiable function F(w) is convex if and only if its Hessian matrix is positive semidefinite at every value of w. The second-derivative condition has the disadvantage of requiring the function F(w) to be twice differentiable. Therefore, the following three convexity definitions are equivalent for twice-differentiable functions defined over Rᵈ: (i) the direct definition of Equation 4.10, (ii) the first-derivative characterization of Lemma 4.3.3, and (iii) positive semidefiniteness of the Hessian everywhere.
One can choose to use any of the above conditions as the definition of convexity, and then
derive the other two as lemmas. However, the direct definition is slightly more general
because it does not depend on differentiability, whereas the other definitions have the additional requirement of differentiability. For example, the L1-norm function F(w) = ‖w‖₁ is convex, but only the first definition can be used because of its non-differentiability at any point where a component of w is 0. We refer the reader to [10, 15, 22] for detailed proofs of the
equivalence of the various definitions in the differentiable case. It is often the case that a
particular definition is easier to use than another when one is trying to prove the convexity
of a specific function. Many machine learning objective functions are of the form F(G(w)), where G(w) is the linear function w · Xᵀ for a row vector X containing a d-dimensional data point, and F(·) is a univariate function. In such a case, one only needs to prove that the univariate function F(·) is convex, based on the final portion of Lemma 4.3.2. It is
particularly easy to use the second-order condition F''(·) ≥ 0 for univariate functions. As a
specific example, we provide a practice exercise for showing the convexity of the logarithmic
logistic loss function. This function is useful for showing the convexity of logistic regression.
Problem 4.3.4 Use the second derivative condition to show that the univariate function
F (x) = loge (1 + exp(−x)) is convex.
Problem 4.3.5 Use the second-derivative condition to show that if the univariate function
F (x) is convex, then the function G(x) = F (−x) must be convex as well.
A slightly stronger condition than convexity is strict convexity in which the convexity condi-
tion is modified to strict inequality. A strictly convex function F (w) is defined as a function
that satisfies the following condition for any λ ∈ (0, 1) and any pair of distinct points w1 ≠ w2:
F(λw1 + (1 − λ)w2) < λF(w1) + (1 − λ)F(w2)
For example, a bowl with a flat bottom is convex, but it is not strictly convex. A strictly
convex function will have a unique global minimum. One can also adapt the first-order
conditions to strictly convex functions. A function F (·) can be shown to be strictly convex
if and only if the following condition holds for all w and w0 with w ≠ w0:
F(w) > F(w0) + [∇F(w0)] · (w − w0)
Convexity is a particularly convenient property in the context of the gradient-descent algorithm; the approach will never get stuck in a local minimum. Although
the objective functions in complex machine learning models (like neural networks) are not
convex, they are often close to convex. As a result, gradient-descent methods work quite
well in spite of the presence of local optima.
For any convex function F (w), the region of the space bounded by F (w) ≤ b for any
constant b can be shown to be a convex set. This type of constraint is encountered often
in optimization problems. Such problems are easier to solve because of the convexity of the
space in which one wants to search for the parameter vector.
achieve the desired learning-rate adjustment to avoid these challenges. Therefore, a decaying
learning rate αt is subscripted with the time-stamp t, and the update is as follows:
w ⇐ w − αt ∇J
The time t is typically measured in terms of the number of cycles over all training points. The
two most common decay functions are exponential decay and inverse decay. The learning rate αt can be expressed in terms of the initial learning rate α0 and time t as follows:
αt = α0 · exp(−k · t) [Exponential decay]
αt = α0/(1 + k · t) [Inverse decay]
Here, k > 0 is a parameter that controls the rate of decay.
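The two decay schedules named in the text are commonly written as αt = α0·exp(−k·t) (exponential decay) and αt = α0/(1 + k·t) (inverse decay); the initial rate α0 and decay constant k below are illustrative values.

```python
import math

def alpha_exponential(alpha0, k, t):
    # Exponential decay: alpha_t = alpha_0 * exp(-k * t)
    return alpha0 * math.exp(-k * t)

def alpha_inverse(alpha0, k, t):
    # Inverse decay: alpha_t = alpha_0 / (1 + k * t)
    return alpha0 / (1.0 + k * t)

rates_exp = [alpha_exponential(0.1, 0.05, t) for t in range(10)]
rates_inv = [alpha_inverse(0.1, 0.05, t) for t in range(10)]
```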
wt+1 ⇐ wt + αt gt
Here, gt denotes the chosen descent direction at the tth step (e.g., the negative gradient).
In line search, the learning rate αt is chosen in each step, so as to minimize the value of the
objective function at wt+1. The step-size αt is computed as follows:
αt = argmin_α J(wt + α gt)
After performing the step, the gradient is computed at wt+1 for the next step. The gradient
at wt+1 will be perpendicular to the search direction g t or else αt will not be optimal. This
4.4. THE MINUTIAE OF GRADIENT DESCENT 161
result can be shown by observing that if the gradient of the objective function at wt + αt g t
has a non-zero dot product with the current movement direction g t , then one can improve
the objective function by moving an amount of either +δ or −δ along g t from wt+1 :
gtᵀ[∇J(wt + αt gt)] = 0
Lemma 4.4.1 The gradient at the optimal point of a line search is always orthogonal to
the current search direction.
Given four evaluation points a < m1 < m2 < b of the unimodal function H(α), at least one of the intervals [a, m1] and [m2, b] can be dropped. In some cases, an even larger interval like [a, m2] or [m1, b] can be dropped. This is because
the minimum value for a unimodal function must always lie in an adjacent interval to the
choice of α ∈ {a, m1 , m2 , b} that yields the minimum value of H(α). When α = a yields
the minimum value for H(α), we can exclude the interval (m1 , b], and when α = b yields
the minimum value for H(α), we can exclude the interval [a, m2 ). When α = m1 yields the
minimum value, we can exclude the interval (m2 , b], and when α = m2 yields the minimum
value, we can exclude the interval [a, m1 ). The new bounds [a, b] for the interval are reset
based on these exclusions. At the end of the process, we are left with an interval containing
either 0 or 1 evaluated point. If we have an interval containing no evaluated point, we first
select a random point α = p in the (reset) interval [a, b], and then another random point
α = q in the larger of the two intervals [a, p] and [p, b]. On the other hand, if we are left
with an interval [a, b] containing a single evaluated point α = p, then we select α = q in the
larger of the two intervals [a, p] and [p, b]. This yields another set of four points over which
we can apply golden-section search. This process is repeated until an interval is reached
with the required level of accuracy.
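A simplified two-probe variant of this interval-shrinking idea can be sketched as follows. This collapses the four-point bookkeeping of golden-section search into a plain ternary-style search, and the test function H(α) = (α − 0.7)² is an arbitrary unimodal example; the exclusion logic at each step mirrors the argument above.

```python
def line_search_minimize(H, a, b, tol=1e-6):
    # Shrink [a, b] by comparing H at two interior probes; relies on unimodality.
    while b - a > tol:
        m1 = a + (b - a) / 3.0
        m2 = b - (b - a) / 3.0
        if H(m1) < H(m2):
            b = m2   # the minimum cannot lie in (m2, b]
        else:
            a = m1   # the minimum cannot lie in [a, m1)
    return 0.5 * (a + b)

alpha_star = line_search_minimize(lambda alpha: (alpha - 0.7) ** 2, 0.0, 2.0)
```

Each iteration retains 2/3 of the interval, so the bracket shrinks geometrically; golden-section search improves on this by reusing one probe per iteration.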
Note that for small enough values of α, the condition above will always be satisfied. In fact,
one can show using the finite-difference approximation, that for infinitesimally small values
of α, the condition above is satisfied at μ = 1. However, we want a larger step size to ensure
faster progress. What is the largest step-size one can use? We test successively decreasing
values of α for the condition above, and stop the first time the condition above is satisfied.
In backtracking line search, we start by testing H(αmax), H(β·αmax), . . . , H(βʳ·αmax), until the condition above is satisfied. At that point, we use α = βʳ·αmax. Here, β is a parameter
drawn from (0, 1), and a typical value is 0.5.
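A sketch of backtracking line search follows. The exact acceptance condition referenced above is not reproduced in this excerpt, so the code assumes the standard Armijo-style sufficient-decrease test f(x + αd) ≤ f(x) + μ·α·d·f'(x) with μ ∈ (0, 1), applied along the descent direction d = −f'(x); the quadratic f(x) = x² − 2x + 3 and the parameter values are illustrative choices.

```python
def backtracking_line_search(f, f_prime, x, alpha_max=10.0, beta=0.5, mu=0.3):
    # Armijo-style sufficient decrease along the descent direction d = -f'(x):
    #   accept alpha when f(x + alpha*d) <= f(x) + mu * alpha * d * f'(x)
    g = f_prime(x)
    d = -g
    alpha = alpha_max
    while f(x + alpha * d) > f(x) + mu * alpha * d * g:
        alpha *= beta   # shrink the step until sufficient decrease holds
    return alpha

f = lambda x: x**2 - 2.0 * x + 3.0
f_prime = lambda x: 2.0 * x - 2.0
alpha = backtracking_line_search(f, f_prime, x=2.0)
```

For this quadratic, the condition accepts any α ≤ 1 − μ = 0.7, so the tested sequence 10, 5, 2.5, 1.25, 0.625 stops at α = 0.625.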
than compensate for the expensive nature of the individual steps. An important point with
the use of line-search is that convergence is guaranteed, even if the resulting solution is a
local optimum.
4.4.4 Initialization
The gradient-descent procedure always starts at an initial point, and successively improves
the parameter vector at a particular learning rate. A critical question arises as to how the
initialization point can be chosen. For some of the relatively simple problems in machine
learning (like the ones discussed in this chapter), the vector components of the initialization
point can be chosen as small random values from [−1, +1]. In case the parameters are
constrained to be nonnegative, the vector components can be chosen from [0, 1].
However, this simple way of initialization can sometimes cause problems for more com-
plex algorithms. For example, in the case of neural networks, the parameters have complex
dependencies on one another, and choosing good initialization points can be critical. In
other cases, choosing improper magnitudes of the initial parameters can cause numerical
overflows or underflows during the updates. It is sometimes effective to use some form of
heuristic optimization for initialization. Such an approach already pretrains the algorithm
to an initialization near an optimum point. The choice of the heuristic generally depends
on the algorithm at hand. Some learning algorithms like neural networks have systematic
ways of performing pretraining and choosing good initializations. In this chapter, we will
give some examples of heuristic initializations.
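For the simple random initialization described above, the procedure might look as follows (a minimal sketch; the dimensionality and the random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
d = 10                                    # number of parameters (arbitrary)

# Unconstrained parameters: small random values from [-1, +1]
w = rng.uniform(-1.0, 1.0, size=d)

# Nonnegative parameters: random values from [0, 1]
w_nonneg = rng.uniform(0.0, 1.0, size=d)
```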
(observed) class. For the ith training point, this probability is denoted by P (X i , yi , w), which
depends on the parameter vector w and training pair (X i , yi ). The probability of correct
prediction over all training points is given by the products of probabilities P (X i , yi , w) over
all (X i , yi ). The negative logarithm is applied to this product to convert the maximization
problem into a minimization problem (while addressing numerical underflow issues caused
by repeated multiplication):
J(w) = −log_e [ ∏_{i=1}^{n} P(X_i, y_i, w) ] = −∑_{i=1}^{n} log_e P(X_i, y_i, w)    (4.13)
Using the logarithm also makes the objective function appear as an additively separable sum
over the training points.
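The underflow issue mentioned above is easy to demonstrate numerically (a sketch with made-up per-point probabilities):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical probabilities of correct prediction for n = 5000 training points
p = rng.uniform(0.4, 0.9, size=5000)

product = np.prod(p)                      # direct product underflows to 0.0
neg_log_likelihood = -np.sum(np.log(p))   # additively separable form is stable
```

The product of thousands of probabilities falls far below the smallest positive float64 value and becomes exactly zero, while the sum of logarithms remains a moderate finite number.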
As evident from the aforementioned examples, many machine learning problems use
additively separable data-centric objective functions, whether squared loss or log-likelihood
loss is used. This means that each individual data point creates a small (additive) component
of the objective function. In each case, the objective function contains n additively separable
terms, and each point-specific error [such as J_i = (y_i − w · X_i^T)^2 in least-squares regression]
can be viewed as a point-specific loss. Therefore, the overall objective function can be
expressed as the sum of these point-specific losses:

J(w) = ∑_{i=1}^{n} J_i(w)    (4.14)
This type of additive separability is useful, because it enables the use of fast optimization
methods like stochastic gradient descent and mini-batch stochastic gradient descent, where
one can replace the objective function with a sampled approximation.
The key idea in mini-batch stochastic gradient descent is that the gradient of J(S), the
objective function restricted to a sampled subset S of the training points, is an excellent
approximation of the gradient of the full objective function J. Therefore, the gradient-descent
update of Equation 4.9 is modified to mini-batch stochastic gradient descent as follows:
[w_1 . . . w_d]^T ⇐ [w_1 . . . w_d]^T − α [∂J(S)/∂w_1 . . . ∂J(S)/∂w_d]^T    (4.16)
This approach is referred to as mini-batch stochastic gradient descent. Note that computing
the gradient of J(S) is far less computationally intensive compared to computing the gra-
dient of the full objective function. A special case of mini-batch stochastic gradient descent
is one in which the set S contains a single randomly chosen data point. This approach is
referred to as stochastic gradient descent. The use of stochastic gradient descent is rare, and
4.5. PROPERTIES OF OPTIMIZATION IN MACHINE LEARNING 165
one tends to use the mini-batch method more often. Typical mini-batch sizes are powers
of 2, such as 64, 128, 256, and so on. The reason for this is purely practical rather than
mathematical; using powers of 2 for mini-batch sizes often results in the most efficient use
of resources such as Graphics Processor Units (GPUs).
Stochastic gradient-descent methods typically cycle through the full data set, rather than
simply sampling the data points at random. In other words, the data points are permuted
in some random order and blocks of points are drawn from this ordering. Therefore, all
other points are processed before arriving at a data point again. Each cycle of the mini-
batch stochastic gradient descent procedure is referred to as an epoch. In the case where
the mini-batch size is 1, an epoch will contain n updates, where n is the training data size.
In the case where the mini-batch size is k, an epoch will contain n/k updates. An epoch
essentially means that every point in the training data set has been seen exactly once.
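A mini-batch SGD epoch loop of this kind can be sketched for least-squares regression (the synthetic data, learning rate, batch size, and epoch count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
D = rng.normal(size=(n, d))            # data matrix: rows are training points
w_true = np.array([1.0, -2.0, 0.5])
y = D @ w_true                         # noiseless targets for the demonstration

w = np.zeros(d)
alpha, batch_size = 0.1, 32
for epoch in range(50):
    order = rng.permutation(n)         # new random ordering in each epoch
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]          # one mini-batch
        errors = D[idx] @ w - y[idx]
        grad = D[idx].T @ errors / len(idx)            # sampled gradient
        w -= alpha * grad
```

Each outer iteration is one epoch: the permutation guarantees that every training point is seen exactly once before any point is revisited.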
Stochastic gradient-descent methods have much lower memory requirements than pure
gradient-descent, because one is processing only a small sample of the data in each step.
Although each update is more noisy, the sampled gradient can be computed much faster.
Therefore, even though more updates are required, the overall process is much faster. Why
does stochastic gradient descent work so well in machine learning? At their core, mini-batch
methods are random sampling methods. One is trying to estimate the gradient of a loss
function using a random subset of the data. At the very beginning of the gradient-descent,
the parameter vector w is grossly incorrect. Therefore, using only a small subset of the
data is often sufficient to estimate the direction of descent very well, and the updates of
mini-batch stochastic gradient descent are almost as good as those obtained using the full
data (but with a tiny fraction of the computational effort). This is what contributes to the
significant improvement in running time. When the parameter vector w nears the optimal
value during descent, the effect of sampling error is more significant. Interestingly, it turns
out that this type of error is actually beneficial in machine learning applications because
of an effect referred to as regularization! The reason has to do with the subtle differences
between how optimization is used traditionally as opposed to how it is used in machine
learning applications. This will be the subject of the discussion in the next section.
In practice, the labels of the test instances often become available only in retrospect, when the true accuracy of the
machine learning algorithm can be computed. Therefore, the labels of the test examples
cannot be made available during training. In machine learning, one only cares about accu-
racy on the unseen test examples rather than training examples. It is possible for excellently
designed optimization methods to perform very well on the training data, but have dis-
astrously poor results on the test data. This separation between training and test data is
also respected during benchmarking of machine learning algorithms by creating simulated
training and test data sets from a single labeled data set. In order to achieve this goal,
one simply hides a part of the labeled data, and refers to the available part as the training
data and the remainder as the test data. After building the model on the training data, one
evaluates the performance of the model on the test data, which was never seen during the
training phase. This is a key difference from traditional optimization, because the model
is constructed using a particular data set; yet, a different (but similar) data set is used
to evaluate performance of the optimization algorithm. This difference is crucial because
models that perform very well on the training data might not perform very well on the test
data. In other words, the model needs to generalize well to unseen test data. When a model
performs very well on the training data, but does not perform very well on the unseen test
data, the phenomenon is referred to as overfitting.
In order to understand this point, consider a case where one has a 4-dimensional data
set of individuals, in which the four attributes x1 , x2 , x3 , and x4 correspond to arm span,
number of freckles, length of hair, and the length of nails. The arm span is defined as the
maximum distance between fingertips when an individual holds their arms out wide. The
target attribute is the height of the individual. The arm span is known to be almost equal
to the height of an individual (with minor variations across races, genders, and individuals),
although the goal of the machine learning application is to infer this fact in a data-driven
manner. The predicted height of the individual is modeled by the linear function ŷ =
w1 x1 + w2 x2 + w3 x3 + w4 x4 + w5 for the purposes of prediction. The best-fit coefficients
w1 . . . w5 can be learned in a data-driven manner by minimizing the squared loss between
predicted ŷ and observed y. One would expect that the height of an individual is highly
correlated with their arm span, but the number of freckles and lengths of hair/nails are
not similarly correlated. As a result, one would typically expect w1 x1 to make most of the
contribution to the prediction, and the other three attributes would contribute very little
(or noise). If the number of training examples is large, one would typically learn values
of wi that show this type of behavior. However, a different situation arises if the number
of training examples is small. For a problem with five parameters w1 . . . w5 , one needs at
least 5 training examples to avoid a situation where an infinite number of solutions to the
parameter vector exist (typically with zero error on the training data). This is because a
system of equations of the form y = w1 x1 + w2 x2 + w3 x3 + w4 x4 + w5 has an infinite number
of equally good best-fit solutions if there are fewer equations than the number of variables.
In fact, one can often find at least one solution in which w1 is 0, and the squared error
(y − ∑_{i=1}^{4} wi xi − w5)^2 takes on its lowest possible value of zero on the training data. In
spite of this fact, the error in the test data will typically be very high. Consider an example
of a training set containing the following three data points:
In this case, setting w1 to 1 and all other coefficients to 0 is the “correct” solution, based
on what is likely to happen over an infinite number of training examples. Note that this
solution does not provide zero training error on this specific training data set, because there
are always empirical variations across individuals. If we had a large number of examples
(unlike the case of this table), it would also be possible for a model to learn this behavior
well with a loss function that penalizes only the squared errors of predictions. However,
with only three training examples, many other solutions exist that have zero training error.
For example, setting w1 = 0, w2 = 7, w3 = 5, w4 = 0, and w5 = 20 provides zero
error on the training data. Here, the arm span and the nail length are not used at all.
At the same time, setting w1 = 0, w2 = 21.5, w3 = 0, w4 = 60, and w5 = 10 also
yields zero error on the training data. This solution does not use the arm span or the hair
length. Furthermore, any convex combination of these coefficients also provides zero error
on the training data. Therefore, an infinite number of solutions that use irrelevant attributes
provide better training error than the natural and intuitive solution that uses arm span.
This is primarily because of overfitting to the specific training data at hand; this solution
will generalize poorly to unseen test data.
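This failure mode can be reproduced with a few lines of synthetic data (a sketch; only the first of 20 features is truly predictive, and the training set has just 5 points):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20

def make_data(n):
    X = rng.normal(size=(n, d))
    y = X[:, 0] + 0.01 * rng.normal(size=n)    # target is essentially feature 0
    return X, y

X_train, y_train = make_data(5)                # fewer points than parameters
X_test, y_test = make_data(1000)

# One of the infinitely many zero-training-error fits (the minimum-norm one)
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

train_error = np.mean((X_train @ w - y_train) ** 2)
test_error = np.mean((X_test @ w - y_test) ** 2)
```

The fit interpolates the 5 training points essentially exactly, yet spreads weight over the irrelevant features and performs far worse on the 1000 unseen points.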
All machine learning applications are used on unseen test data in real settings; therefore,
it is unacceptable to have models that perform well on training data but perform poorly
on test data. Poor generalization is a result of models adapting to the quirks and random
nuances of a specific training data set; it is likely to occur when the training data is small.
When the number of training instances is fewer than the number of features, an infinite
number of equally “good” solutions exist. In such cases, poor generalization is almost in-
evitable unless steps are taken to avoid this problem. Therefore, there are a number of
special properties of optimization in machine learning:
These differences between traditional optimization and machine learning are important
because they affect the design of virtually every optimization procedure in machine
learning.
168 CHAPTER 4. OPTIMIZATION BASICS: A MACHINE LEARNING VIEW
In such a case, the partial derivatives ∂y/∂w1 = x1^2 and ∂y/∂w2 = x2^2 will show up as multiplicative
terms in the components of the error gradient with respect to w1 and w2, respectively.
Since x1^2 is usually much larger than x2^2 (and often by a factor of 100), the components
of the error gradient with respect to w1 will typically be much greater in magnitude than
those with respect to w2 . Often, small steps along w2 will lead to large steps along w1 (and
therefore an overshooting of the optimal value along w1 ). Note that the sign of the gradient
component along the w1 direction will often keep flipping in successive steps to compensate
for the overshooting along the w1 direction after large steps. In practice, this leads to a back-
and-forth “bouncing” behavior along the w1 direction and tiny (but consistent) progress
along the w2 direction. As a result, convergence will be very slow. This type of behavior is
discussed in greater detail in the next chapter. Therefore, it is often helpful to have features
with similar variance. There are two forms of feature preprocessing used in machine learning
algorithms:
or matrix. Being able to differentiate blocks of variables with respect to other blocks is useful
from the perspective of brevity and quick computation. Although the field of matrix calculus
is very broad, we will focus on a few important identities, which are useful for addressing
the vast majority of machine learning problems one is likely to encounter in practice.
w ⇐ w − α∇J
An equivalent notation for the gradient ∇J is the matrix-calculus notation ∂J(w)/∂w. This
notation is a scalar-to-vector derivative, which always returns a vector. Therefore, we have
the following:

∇J = ∂J(w)/∂w = [∂J(w)/∂w_1 . . . ∂J(w)/∂w_d]^T
Here, it is important to note that there is some convention-centric ambiguity in the treat-
ments of matrix calculus by various communities as to whether the derivative of a scalar
with respect to a column vector is a row vector or whether it is a column vector. Throughout
this book, we use the convention that the derivative of a scalar with respect to a column
vector is also a column vector. This convention is referred to as the denominator layout
(although the numerator layout is more common in which the derivative is a row vector).
We use the denominator layout because it frees us from the notational clutter of always
having to transpose a row vector into a column vector in order to perform gradient descent
updates on w (which are extremely common in machine learning). Indeed, the choice of
using the numerator layout and denominator layout in different communities is often regu-
lated by these types of notational conveniences. Therefore, we can directly write the update
in matrix calculus notation as follows:
w ⇐ w − α ∂J(w)/∂w
The matrix calculus notation also allows derivatives of vectors with respect to vectors. Such
a derivative results in a matrix, referred to as the Jacobian. Jacobians arise frequently when
computing the gradients of recursively nested multivariate functions; a specific example is
the case of multilayer neural networks (cf. Chapter 11). For example, the derivative of an
m-dimensional column vector h = [h1 , . . . , hm ]T with respect to a d-dimensional column
vector w = [w1 , . . . , wd ]T is a d × m matrix in the denominator layout. The (i, j)th entry of
this matrix is the derivative of hj with respect to wi :
[∂h/∂w]_{ij} = ∂h_j/∂w_i    (4.19)

The (i, j)th element of the Jacobian is always ∂h_i/∂w_j, and therefore the Jacobian is the transpose of the
matrix ∂h/∂w shown in Equation 4.19.
Another useful derivative that arises frequently in different types of matrix factorization
is the derivative of a scalar objective function J with respect to an m × n matrix W . In the
4.6. COMPUTING DERIVATIVES WITH RESPECT TO VECTORS 171
denominator layout, the result inherits the shape of the matrix in the denominator. The
(i, j)th entry of the derivative is simply the derivative of J with respect to the (i, j)th entry
in W .
[∂J/∂W]_{ij} = ∂J/∂W_{ij}    (4.20)
A review of matrix calculus notations and conventions is provided in Table 4.1.
∇F(w) = ∂F(w)/∂w = 2Aw    (4.22)
The algebraic similarity of the derivative to the scalar case is quite noticeable. The reader
is encouraged to work out each element-wise partial derivative and verify that the above
expression is indeed correct. Note that ∇F (w) is a column vector.
Another common objective function G(w) in machine learning is the following:
G(w) = b^T B w = w^T B^T b    (4.23)

∇G(w) = ∂G(w)/∂w = B^T b    (4.24)
In this case, every component of the gradient is a constant. We leave the proofs of these
results as a practice exercise:
Problem 4.6.1 Let A = [a_ij] be a symmetric d × d matrix of constant values, B = [b_ij]
be an n × d matrix of constant values, w be a d-dimensional column vector of optimization
variables, and b be an n-dimensional column vector of constants. Let F(w) = w^T A w and
let G(w) = b^T B w. Show using component-wise partial derivatives that ∇F(w) = 2Aw and
∇G(w) = B^T b.
The above practice exercise would require one to expand each expression in terms of the
scalar values in the matrices and vectors. One can then appreciate the compactness of the
matrix calculus approach for quick computation. We provide a list of the commonly used
identities in Table 4.2. Many of these identities are useful in machine learning models.
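Both identities in Problem 4.6.1 can be checked numerically against finite differences (a quick sketch; the random matrices and the finite-difference step are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 3
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                      # symmetric constant d x d matrix
B = rng.normal(size=(n, d))            # constant n x d matrix
b = rng.normal(size=n)                 # constant n-dimensional vector
w = rng.normal(size=d)

F = lambda v: v @ A @ v                # F(w) = w^T A w
G = lambda v: b @ B @ v                # G(w) = b^T B w

def numeric_grad(f, v, eps=1e-6):
    # Central finite differences, one coordinate at a time
    g = np.zeros_like(v)
    for i in range(len(v)):
        e = np.zeros_like(v)
        e[i] = eps
        g[i] = (f(v + e) - f(v - e)) / (2 * eps)
    return g

grad_F = 2 * A @ w                     # claimed identity for symmetric A
grad_G = B.T @ b                       # claimed identity (constant gradient)
```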
Numerator layout:

Derivative of:     with respect to:     Output size          ith or (i, j)th element
Scalar J           Scalar x             Scalar               ∂J/∂x
Column vector h    Scalar x             Column vector        [∂h/∂x]_i = ∂h_i/∂x
(m dimensions)                          (m dimensions)
Scalar J           Column vector w      Row vector           [∂J/∂w]_i = ∂J/∂w_i
                   (d dimensions)       (d dimensions)
Column vector h    Column vector w      m × d matrix         [∂h/∂w]_{ij} = ∂h_i/∂w_j
(m dimensions)     (d dimensions)
Scalar J           m × n matrix W       n × m matrix         [∂J/∂W]_{ij} = ∂J/∂W_{ji}

Denominator layout:

Derivative of:     with respect to:     Output size          ith or (i, j)th element
Scalar J           Scalar x             Scalar               ∂J/∂x
Column vector h    Scalar x             Row vector           [∂h/∂x]_i = ∂h_i/∂x
(m dimensions)                          (m dimensions)
Scalar J           Column vector w      Column vector        [∂J/∂w]_i = ∂J/∂w_i
(d dimensions)                          (d dimensions)
Column vector h    Column vector w      d × m matrix         [∂h/∂w]_{ij} = ∂h_j/∂w_i
(m dimensions)     (d dimensions)
Scalar J           m × n matrix W       m × n matrix         [∂J/∂W]_{ij} = ∂J/∂W_{ij}

Table 4.2: List of common matrix calculus identities in denominator layout. A is a constant
d × d matrix, B is a constant n × d matrix, and b is a constant n-dimensional vector
independent of the parameter vector w. C is a k × d matrix.
Since it is common to compute the gradient with respect to a column vector of parameters,
all these identities represent the derivatives with respect to a column vector. Note that
Table 4.2(b) represents some simple vector-to-vector derivatives, which always lead to the
transpose of the Jacobian. Beyond these commonly used identities, a full treatment of matrix
calculus is beyond the scope of the book, although interested readers are referred to [20].
In quadratic programming, the objective function contains a quadratic term of the form
w^T A w, a linear term b^T w, and a constant. An unconstrained quadratic program has the
following form:

Minimize_w  (1/2) w^T A w + b^T w + c
Here, we assume that A is a positive definite d × d matrix, b is a d-dimensional col-
umn vector, c is a scalar constant, and the optimization variables are contained in the
d-dimensional column vector w. An unconstrained quadratic program is a direct generalization
of 1-dimensional quadratic functions like (1/2)ax^2 + bx + c. Note that a minimum exists
at x = −b/a for 1-dimensional quadratic functions when a > 0, and a minimum exists for
multidimensional quadratic functions when A is positive definite.
The two terms in the objective function can be differentiated with respect to w by
using the identities (i) and (ii) in Table 4.2(a). Since the matrix A is positive definite, it
follows that the Hessian A is positive definite irrespective of the value of w. Therefore, the
objective function is strictly convex, and setting the gradient to zero is a necessary and
sufficient condition for minimization of the objective function. Using the identities (i) and
(ii) of Table 4.2(a), we obtain the following optimality condition:
Aw + b = 0
Therefore, we obtain the solution w = −A^{-1} b. Note that this is a direct generalization of
the solution for the 1-dimensional quadratic function. In the event that A is singular, a
solution is not guaranteed even when A is positive semidefinite. For example, when A is the
zero matrix, the objective function becomes linear with no minimum. When A is positive
semidefinite, it can be shown that a minimum exists if and only if b lies in the column space
of A (see Exercise 8).
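The closed-form solution w = −A^{-1} b can be verified on a small instance (a sketch; A is made positive definite by construction, and the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d))
A = M @ M.T + d * np.eye(d)            # positive definite by construction
b = rng.normal(size=d)
c = 1.0

f = lambda w: 0.5 * w @ A @ w + b @ w + c

w_star = -np.linalg.solve(A, b)        # solves the optimality condition Aw + b = 0
```

Since A is positive definite, any perturbation of w_star strictly increases the objective.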
This form of the gradient is used often in least-squares regression. Setting this gradient to
zero yields the closed-form solution to least-squares regression (cf. Section 4.7).
J = f(g(h(w)))    (4.27)

All of f(·), g(·), and h(·) are assumed to be scalar functions. In such a case, the derivative
of J with respect to the scalar w is simply f′(g(h(w))) g′(h(w)) h′(w). This rule is referred
to as the univariate chain rule of differential calculus. Note that the order of multiplication
does not matter because scalar multiplication is commutative.
Similarly, consider the case where you have the following functions, where one of the
functions is a vector-to-scalar function:
In such a case, the multivariate chain rule states that one can compute the derivative of J
with respect to w as the sum of the products of the partial derivatives using all arguments
of the function:
∂J/∂w = ∑_{i=1}^{k} [∂J/∂g_i(w)] · [∂g_i(w)/∂w]
One can generalize both of the above results into a single form by considering the case
where the functions are vector-to-vector functions. Note that vector-to-vector derivatives
are matrices, and therefore we will be multiplying matrices together instead of scalars.
Surprisingly, very large classes of machine learning algorithms perform the repeated com-
position of only two types of functions, which are shown in Table 4.2(b). Unlike the case of
the scalar chain rule, the order of multiplication is important when dealing with matrices
and vectors. In a composition function, the derivative of the argument (inner level variable)
is always pre-multiplied with the derivative of the function (outer level variable). In many
cases, the order of multiplication is self-evident because of the size constraints associated
with matrix multiplication. We formally define the vectored chain rule as follows:
Theorem 4.6.1 (Vectored Chain Rule) Consider a composition function of the follow-
ing form:
o = Fk (Fk−1 (. . . F1 (x)))
Assume that each Fi (·) takes as input an ni -dimensional column vector and outputs an
ni+1 -dimensional column vector. Therefore, the input x is an n1 -dimensional vector and
the final output o is an nk+1 -dimensional vector. For brevity, denote the vector output of
Fi (·) by hi . Then, the vectored chain rule asserts the following:
∂o/∂x = [∂h_1/∂x] [∂h_2/∂h_1] . . . [∂h_{k−1}/∂h_{k−2}] [∂o/∂h_{k−1}]

Here, the factor sizes are n_1 × n_2, n_2 × n_3, . . . , n_{k−1} × n_k, and n_k × n_{k+1}, respectively, so that the product has size n_1 × n_{k+1}.
It is easy to see that the size constraints of matrix multiplication are respected in this case.
As a special case, consider J = f(g(w)), where f(·) is a scalar-to-scalar function and g(·)
is a vector-to-scalar function. In this case, the order of multiplication does not matter, because one of the factors in the
product is a scalar. Note that this result is used frequently in machine learning, because
many loss-functions in machine learning are computed by applying a scalar function f (·) to
the dot product of w with a training point a. In other words, we have g(w) = w · a. Note
that w · a can be written as w^T I a, where I represents the identity matrix. This is in the
form of one of the matrix identities of Table 4.2(a) [see identity (ii)]. In such a case, one
can use the chain rule to obtain the following:

∂J/∂w = [f′(g(w))] a    (4.29)

Here, the bracketed factor f′(g(w)) is a scalar.
This result is extremely useful, and it can be used for computing the derivatives of many
loss functions like least-squares regression, SVMs, and logistic regression. The vector a is
simply replaced with the vector of the training point at hand. The function f (·) defines the
specific form of the loss function for the model at hand. We have listed these identities as
results (iv) and (v) of Table 4.2(a).
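As a concrete instance of this result, take squared loss f(z) = z^2 applied to g(w) = w · a, so that the gradient is f′(g(w)) a = 2(w · a) a; a finite-difference check (the vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
a = rng.normal(size=d)                 # stands in for a training point
w = rng.normal(size=d)

J = lambda v: (v @ a) ** 2             # J = f(g(w)) with f(z) = z^2, g(w) = w . a

grad = 2 * (w @ a) * a                 # chain rule: f'(g(w)) * a

# Central finite differences, coordinate by coordinate
eps = 1e-6
numeric = np.zeros(d)
for i in range(d):
    e = np.zeros(d)
    e[i] = eps
    numeric[i] = (J(w + e) - J(w - e)) / (2 * eps)
```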
Table 4.2(b) contains a number of useful derivatives of vector-to-vector functions. The
first is the linear transformation h = Cw, where C is a matrix that does not depend on the
parameter vector w. The corresponding vector-to-vector derivative of h with respect to w is
C T [see identity (i) of Table 4.2(b)]. This type of transformation is used commonly in linear
layers of feed-forward neural networks. Another common vector-to-vector function is the
element-wise function F (w), which is also used in neural networks (in the form of activation
functions). In this case, the corresponding derivative is a diagonal matrix containing the
element-wise derivatives as shown in the second identity of Table 4.2(b).
Finally, we consider a generalization of the product identity in differential calculus. In-
stead of differentiating the product of two scalar variables, we consider the product of a
scalar and a vector variable. Consider the relationship h = fs (w)x, which is the product
of a vector and a scalar. Here, fs (·) is a vector-to-scalar function and x is a column vector
that depends on w. In such a case, the derivative of h with respect to w is the matrix
[∂f_s(w)/∂w] x^T + f_s(w) [∂x/∂w] [see identity (iii) of Table 4.2(b)]. Note that the first term is the outer
product of the two vectors ∂f_s(w)/∂w and x, whereas the second term is a scalar multiple of a
vector-to-vector derivative.
matrices in machine learning. Therefore, the row vector X i needs to be transposed before
performing a dot product with the column vector W. The vector W needs to be learned in
a data-driven manner, so that ŷ_i = W · X_i^T is as close to each y_i as possible. Therefore, we
compute the loss (y_i − W · X_i^T)^2 for each training data point, and then add up these losses
over all points in order to create the objective function:
J = (1/2) ∑_{i=1}^{n} (y_i − W · X_i^T)^2    (4.30)
Once the vector W has been learned from the training data by optimizing the aforementioned
objective function, the numerical value of the target variable of an unseen test instance
Z (which is a d-dimensional row vector) can be predicted as W · Z^T.
It is particularly convenient to write this objective function in terms of an n × d data
matrix. The n × d data matrix D is created by stacking up the n rows X 1 . . . X n . Similarly,
y is an n-dimensional column vector of response variables for which the ith entry is yi . Note
that DW is an n-dimensional column vector of predictions which should ideally equal the
observed vector y. Therefore, the vector of errors is given by (DW − y), and the squared
norm of the error vector is the loss function. Therefore, the minimization loss function of
least-squares regression may be written as follows:
J = (1/2) ‖DW − y‖^2 = (1/2) [DW − y]^T [DW − y]    (4.31)
One can expand the above expression as follows:
J = (1/2) W^T D^T D W − (1/2) W^T D^T y − (1/2) y^T D W + (1/2) y^T y    (4.32)
It is easy to see that the above expression is convex, because D^T D is the positive semidefinite
Hessian in the quadratic term. This means that if we find a value of the vector W at which
the gradient is zero (i.e., a critical point), it will be a global minimum of the objective
function.
In order to compute the gradient of J with respect to W , one can directly use the
squared-norm result of Section 4.6.2.2 to yield the following:
∇J = D^T D W − D^T y    (4.33)

Setting the gradient to zero yields the following optimality condition:

D^T D W = D^T y    (4.34)

Pre-multiplying both sides with (D^T D)^{-1}, one obtains the following:

W = (D^T D)^{-1} D^T y    (4.35)
Note that this formula is identical to the use of the left-inverse of D for solving a system of
equations (cf. Section 2.8 of Chapter 2), and the derivation of Section 2.8 uses the normal
equation rather than calculus. The problem of solving a system of equations is a special case
of least-squares regression. When the system of equations has a feasible solution, the optimal
solution has zero loss on the training data. In the case that the system is inconsistent, we
obtain the best-fit solution.
How can one compute W efficiently, when D^T D is invertible? This can be achieved via
QR decomposition of the matrix D as D = QR (see end of Section 2.8.2), where Q is an n × d
matrix with orthonormal columns and R is a d × d upper-triangular matrix. One can simply
substitute D = QR in Equation 4.34, and use Q^T Q = I_d to obtain the following:
R^T R W = R^T Q^T y    (4.36)

Multiplying both sides with (R^T)^{-1}, one obtains RW = Q^T y. This triangular system of
equations can be solved efficiently using back-substitution.
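The QR route can be compared against the normal-equation solution numerically (a sketch; numpy's general solver is used in place of a dedicated triangular back-substitution routine, and the data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 4
D = rng.normal(size=(n, d))
y = rng.normal(size=n)

Q, R = np.linalg.qr(D)                 # reduced QR: Q is n x d, R is d x d
W_qr = np.linalg.solve(R, Q.T @ y)     # solve the triangular system RW = Q^T y

W_normal = np.linalg.solve(D.T @ D, D.T @ y)   # (D^T D) W = D^T y
```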
The above solution assumes that the matrix D^T D is invertible. However, in cases where
the number of data points is small, the matrix D^T D might not be invertible. In such
cases, infinitely many solutions exist to this system of equations, which will overfit the
training data; such methods will not generalize easily to unseen test data. In such cases,
regularization is important.
Pre-multiplying both sides with (D^T D + λI)^{-1}, one obtains the following:

W = (D^T D + λI)^{-1} D^T y

Here, it is important to note that (D^T D + λI) is always invertible for λ > 0, since the matrix
is positive definite (see Problem 2.4.2 of Chapter 2). The resulting solution is regularized,
and it generalizes much better to out-of-sample data. Because of the push-through identity
(see Problem 1.2.13), the solution can also be written in the following alternative form:

W = D^T (D D^T + λI)^{-1} y
W ⇐ W − α∇J (4.42)
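Returning to the regularized solution: the equivalence of the (D^T D + λI)^{-1} D^T y form and the push-through form D^T (D D^T + λI)^{-1} y can be checked numerically (a sketch; λ and the data are arbitrary, with fewer points than features so that D^T D itself is singular):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 20                           # fewer points than features
D = rng.normal(size=(n, d))
y = rng.normal(size=n)
lam = 0.1

# Primal form: invert the d x d matrix (D^T D + lambda I)
W1 = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y)

# Push-through form: invert the smaller n x n matrix (D D^T + lambda I)
W2 = D.T @ np.linalg.solve(D @ D.T + lam * np.eye(n), y)
```

When n is much smaller than d, the second form is cheaper because it inverts an n × n matrix rather than a d × d one.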
be much higher than 0 on New Year's Eve. However, the model ŷ_i = W · X_i^T will always
yield 0 as a predicted value. This problem can be avoided with the use of a bias variable
b, so that the new model is ŷ_i = W · X_i^T + b. The bias variable absorbs the additional
constant effects (i.e., bias specific to the city at hand) and it needs to be learned like the
other parameters in W. In such a case, it can be shown that the gradient-descent updates
of Equation 4.44 are modified as follows:
W ⇐ W(1 − αλ) − α ∑_{(X_i, y_i) ∈ S} X_i^T (W · X_i^T + b − y_i)

b ⇐ b(1 − αλ) − α ∑_{(X_i, y_i) ∈ S} (W · X_i^T + b − y_i)

In both updates, the quantity (W · X_i^T + b − y_i) is the error value on the training point (X_i, y_i).
It turns out that it is possible to achieve exactly the same effect as the above updates without
changing the original (i.e., bias-free) model. The trick is to add an additional dimension
to the training and test data with a constant value of 1. Therefore, one would have an
additional (d + 1)th parameter wd+1 in vector W , and the target variable for X = [x1 . . . xd ]
is predicted as follows:
ŷ = [∑_{i=1}^{d} w_i x_i] + w_{d+1} (1)
It is not difficult to see that this is exactly the same prediction function as the one with
bias. The coefficient wd+1 of this additional dimension is the bias variable b. Since the bias
variable can be incorporated with a feature engineering trick, it will largely be omitted in
most of the machine learning applications in this book. However, as a practical matter, it
is very important to use the bias (in some form) in order to avoid undesirable constant
effects.
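A quick sketch of this trick (synthetic data with a constant offset of 5; the appended column of 1s recovers the offset as the last coefficient):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 3
D = rng.normal(size=(n, d))
y = D @ np.array([1.0, 2.0, -1.0]) + 5.0      # targets with a constant offset

D1 = np.hstack([D, np.ones((n, 1))])          # extra feature fixed at 1
W = np.linalg.lstsq(D1, y, rcond=None)[0]
bias = W[-1]                                  # coefficient w_{d+1} acts as bias b
```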
the color such as {Blue, Green, Red}. Note that there is no natural ordering between these
targets, which is different from the case of numerical targets unless the target variable is
binary.
A special case of discrete targets is the case in which the target variable y is binary
and drawn from {−1, +1}. The instances with label +1 are referred to as positive class
instances, and those with label −1 are referred to as negative class instances. For example,
the feature variables in a cancer detection application might correspond to patient clinical
measurements, and the class variable can be an indicator of whether or not the patient
has cancer. In the binary-class case, we can impose an ordering between the two possible
target values. In other words, we can pretend that the targets are numeric, and simply
perform linear regression. This method is referred to as least-squares classification, which
is discussed in the next section. Treating discrete targets as numerical values does have its
disadvantages. Therefore, many alternative loss functions have been proposed for discrete
(binary) data that avoid these disadvantages. Examples include the support vector machine
and logistic regression. In the following, we will provide an overview of these models and
their relationships with one another. While discussing these relationships, it will become
evident that the ancient problem of least-squares regression serves as the parent model and
the motivating force for all these (relatively recent) models for discrete-valued targets.
[Figure: a linear separator W · X^T = 0 splits the space into a half-space with W · X^T > 0 (label +1) and a half-space with W · X^T < 0 (label −1)]
Since W is a column vector, the test instance needs to be transposed before computing the dot
product between W and Z^T. This dot product yields a real-valued prediction, which is
converted to a binary prediction with the use of the sign function:

ŷ = sign{W · Z^T}    (4.46)
In effect, the model learns a linear hyperplane W · X^T = 0 separating the positive and
negative classes. All test instances for which W · Z^T > 0 are predicted to belong to the
positive class, and all instances for which W · Z^T < 0 are predicted to belong to the
negative class.
As in the case of real-valued targets, one can also use mini-batch stochastic gradient
descent for regression on binary targets. Let S be a mini-batch of pairs (X_i, y_i) of feature
variables and targets. Each X_i is a row of the data matrix D and y_i is a target value drawn
from {−1, +1}. Then, the mini-batch update for least-squares classification is identical to
that of least-squares regression:

W ⇐ W(1 − αλ) − α Σ_{(X_i, y_i) ∈ S} X_i^T (W · X_i^T − y_i)    (4.47)
Here, α > 0 is the learning rate, and λ > 0 is the regularization parameter. Note that this
update is identical to that in Equation 4.44. However, since each target y_i is drawn from
{−1, +1}, an alternative form of the update exists, which uses the fact that
y_i^2 = 1. This alternative form of the update is as follows:
W ⇐ W(1 − αλ) − α Σ_{(X_i, y_i) ∈ S} y_i^2 X_i^T (W · X_i^T − y_i)
  = W(1 − αλ) − α Σ_{(X_i, y_i) ∈ S} y_i X_i^T (y_i [W · X_i^T] − y_i^2)    [y_i^2 = 1]
This form of the update is more convenient because it is more closely related to updates
of other classification models discussed later in this chapter. Examples of these models are
the support vector machine and logistic regression. The loss function can also be converted
to a more convenient representation for binary targets drawn from {−1, +1}.
Using the fact that y_i^2 = 1 for binary targets, we can modify the objective function as
follows:

J = (1/2) Σ_{i=1}^n y_i^2 (y_i − W · X_i^T)^2 + (λ/2) ||W||^2
  = (1/2) Σ_{i=1}^n (y_i^2 − y_i [W · X_i^T])^2 + (λ/2) ||W||^2

J = (1/2) Σ_{i=1}^n (1 − y_i [W · X_i^T])^2 + (λ/2) ||W||^2    (4.50)
Differentiating this loss function directly leads to Equation 4.48. However, it is important
to note that the loss function/updates of least-squares classification are identical to the loss
function/updates of least-squares regression, even though one might use the binary nature
of the targets in the former case in order to make them look superficially different.
The updates of least-squares classification are also referred to as Widrow-Hoff up-
dates [132]. The rule was proposed in the context of neural network learning, and it was the
second major neural learning algorithm proposed after the perceptron [109]. Interestingly,
the neural models were proposed independently of the classical literature on least-squares
regression; yet, the updates turn out to be identical.
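The algebraic equivalence of the two forms of the mini-batch update can be checked numerically. The following sketch uses invented data and hyperparameter values:

```python
import numpy as np

# Numerical check (synthetic data) that the two forms of the mini-batch
# update produce the same result when y_i ∈ {−1, +1}.
rng = np.random.default_rng(1)
Xb = rng.standard_normal((8, 3))              # mini-batch S of 8 points, d = 3
yb = rng.choice([-1.0, 1.0], size=8)
W = rng.standard_normal(3)
alpha, lam = 0.1, 0.01

# Form of Equation 4.47: identical to least-squares regression
W1 = W * (1 - alpha * lam) - alpha * Xb.T @ (Xb @ W - yb)

# Rewritten form using y_i^2 = 1: sum of y_i X_i^T (y_i [W·X_i^T] − 1)
W2 = W * (1 - alpha * lam) - alpha * (yb * Xb.T) @ (yb * (Xb @ W) - 1.0)
```

Both expressions yield the same updated vector, confirming that the two updates look superficially different but are identical.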
Heuristic Initialization
A good way to perform heuristic initialization is to determine the means μ_0 and μ_1 of the
points belonging to the negative and positive classes, respectively. The difference w_0 = μ_1^T − μ_0^T
between the two means is a d-dimensional column vector, which satisfies w_0 · μ_1^T ≥
w_0 · μ_0^T. The choice W = w_0 is a good starting point, because positive-class instances will
have larger dot products with w_0 than will negative-class instances (on average). In
many real applications, the classes are roughly separable with a linear hyperplane, and the
normal hyperplane to the line joining the class centroids provides a good initial separator.
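This centroid-difference initialization can be sketched in a few lines; the two-class data below is synthetic and invented:

```python
import numpy as np

# Sketch of the centroid-difference initialization described above.
rng = np.random.default_rng(2)
pos = rng.standard_normal((100, 2)) + np.array([2.0, 2.0])    # positive class
neg = rng.standard_normal((100, 2)) + np.array([-2.0, -2.0])  # negative class

mu1, mu0 = pos.mean(axis=0), neg.mean(axis=0)
w0 = mu1 - mu0                 # initial separator W = w0

# Positive instances have the larger dot products with w0 (on average)
gap = (pos @ w0).mean() - (neg @ w0).mean()
```

The positive value of `gap` confirms that, on average, positive-class instances score higher against w_0 than negative-class instances.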
J = (1/2) Σ_{i=1}^n (1 − y_i [W · X_i^T])^2 + (λ/2) ||W||^2
Now consider a positive class instance for which W · X_i^T = 100 is highly positive. This is
obviously a desirable situation, at least from a predictive point of view, because the training
instance is being classified on the correct side of the linear separator between the two classes
in a positive way. However, the loss function in the training model treats this prediction as
a large loss contribution of (1 − y_i [W · X_i^T])^2 = (1 − (1)(100))^2 = 99^2 = 9801. Therefore,
a large gradient descent update will be performed for a training instance that is located at
a large distance from the hyperplane W · X^T = 0 on the correct side. Such a situation is
undesirable because it tends to confuse least-squares classification; the updates from these
points on the correct side of the hyperplane W · X^T = 0 tend to push the hyperplane in the
same direction as some of the incorrectly classified points. In order to address this issue,
many machine learning algorithms treat such points in a more nuanced way. These nuances
will be discussed in the following sections.
J = (1/2) Σ_{i=1}^n (max{0, 1 − y_i [W · X_i^T]})^2 + (λ/2) ||W||^2    [L2-loss SVM]
Note that the only difference from the least-squares classification model is the use of the
maximization term in order to set the loss of well-separated points to 0. Once the vector W
has been learned, the classification process for an unseen test instance is the same in the
SVM as it is in the case of least-squares classification. For an unseen test instance Z, the
sign of W · Z^T yields the class label.
A more common form of the SVM loss is the hinge-loss. The hinge-loss is the L1-version
of the (squared) loss above:

J = Σ_{i=1}^n max{0, 1 − y_i [W · X_i^T]} + (λ/2) ||W||^2    [Hinge-loss SVM] (4.51)
function as well. The L2 -loss SVM squares the nonnegative hinge loss. Since the square of
a nonnegative convex function is convex (according to Lemma 4.3.2), it follows that the
point-specific L2 -loss is convex. The sum of the point-specific losses (convex functions) is
convex according to Lemma 4.3.2. Therefore, the unregularized loss is convex.
Regularized Loss: We have already shown earlier in Section 4.7.1 that the L2 -regularization
term is strictly convex. Since the sum of a convex and a strictly convex function is strictly
convex according to Lemma 4.3.6, both objective functions (including the regularization
term) are strictly convex.
Therefore, one can find the global optimum of an SVM by using gradient descent.
The objective functions for the L1-loss (hinge loss) and L2-loss SVMs are both of the form
J = Σ_i J_i + Ω(W), where J_i is a point-specific loss and Ω(W) = λ ||W||^2 / 2 is the regularization
term. The gradient of the latter term is λW. The main challenge is in computing
the gradient of the point-specific loss Ji . Here, the key point is that the point-specific loss
of both the L1 -loss (hinge loss) and L2 -loss can be expressed in the form of identity (v) of
Table 4.2(a) for an appropriately chosen function f (·):
J_i = f_i(W · X_i^T)

Here, the function f_i(·) is defined for the hinge-loss and L2-loss SVMs as follows:

f_i(z) = max{0, 1 − y_i z}              [Hinge Loss]
f_i(z) = (1/2) (max{0, 1 − y_i z})^2    [L2-Loss]
Therefore, according to Table 4.2(a) (also see Equation 4.29), the gradient of Ji with respect
to W is the following:
∂J_i/∂W = X_i^T f_i'(W · X_i^T)    (4.52)
The derivatives for the L1-loss and the L2-loss SVMs depend on the corresponding derivatives
of f_i(z), as they are defined in the two cases:

f_i'(z) = −y_i I([1 − y_i z] > 0)    [Hinge Loss]
f_i'(z) = −y_i max{0, 1 − y_i z}     [L2-Loss]
Here, I(·) is an indicator function, which takes on the value of 1 when the condition inside
it is true, and 0 otherwise. Therefore, by plugging in the value of f_i'(z) in Equation 4.52,
one obtains the following loss derivatives in the two cases:
∂J_i/∂W = −y_i X_i^T I([1 − y_i (W · X_i^T)] > 0)    [Hinge Loss]
∂J_i/∂W = −y_i X_i^T max{0, 1 − y_i (W · X_i^T)}     [L2-Loss]
These point-wise loss derivatives can be used to derive the stochastic gradient-descent up-
dates.
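These point-wise derivatives translate directly into code. The following sketch (synthetic data, invented hyperparameters) runs stochastic subgradient descent with the hinge-loss derivative; the L2-loss derivative is included so that it can be swapped in:

```python
import numpy as np

# Sketch: stochastic (sub)gradient descent with the point-wise derivatives
# derived above. Data and hyperparameters are invented.
def hinge_grad(W, x, y_):
    # −y_i X_i^T I([1 − y_i (W·X_i^T)] > 0), else the zero vector
    return -y_ * x if 1 - y_ * (W @ x) > 0 else np.zeros_like(x)

def l2_grad(W, x, y_):
    # −y_i X_i^T max{0, 1 − y_i (W·X_i^T)}
    return -y_ * x * max(0.0, 1 - y_ * (W @ x))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] - X[:, 1] > 0, 1.0, -1.0)   # linearly separable labels

alpha, lam = 0.01, 0.01
W = np.zeros(2)
for epoch in range(30):
    for i in range(200):
        W = W * (1 - alpha * lam) - alpha * hinge_grad(W, X[i], y[i])

accuracy = np.mean(np.sign(X @ W) == y)
```

Replacing `hinge_grad` with `l2_grad` in the loop yields the L2-loss SVM updates discussed next.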
The subset of instances in S^+ corresponds to those for which the indicator function I(·)
of the previous section takes on the value of 1. These instances are of two types: those
corresponding to y_i [W · X_i^T] < 0 are misclassified instances on the wrong side of the decision
boundary, whereas the remaining instances corresponding to y_i [W · X_i^T] ∈ (0, 1) lie on the
correct side of the decision boundary, but they are uncomfortably close to the decision
boundary. Both these types of instances trigger updates in the SVM. In other words, the
well-separated points do not play a role in the update. By using the gradient of the loss
function, the updates in the L1 -loss SVM can be shown to be the following:
W ⇐ W(1 − αλ) + α Σ_{(X_i, y_i) ∈ S^+} y_i X_i^T    (4.54)
This algorithm is referred to as the primal support vector machine algorithm. The hinge-
loss update seems somewhat different from the update for least-squares classification. The
primary reason for this is that the least-squares classification model uses a squared loss
function, whereas the hinge-loss is a piece-wise linear function. The similarity with the
updates of least-squares classification becomes more obvious when one compares the updates
of least-squares classification with those of the SVM with L2 -loss. The updates of the SVM
with L2 -loss are as follows:
W ⇐ W(1 − αλ) + α Σ_{(X_i, y_i) ∈ S} y_i X_i^T max{1 − y_i [W · X_i^T], 0}    (4.55)
In this case, it is evident that the updates of the L2 -SVM are different from those of least-
squares classification (cf. Equation 4.48) only in terms of the treatment of well-separated
points; identical updates are made for misclassified points and those near the decision bound-
ary, whereas no updates are made for well-separated points on the correct side of the decision
boundary. This difference in the nature of the updates fully explains the difference between
the L2 -SVM and least-squares classification. It is noteworthy that the loss function of the
L2 -SVM was proposed [60] by Hinton much earlier than the Cortes and Vapnik [30] work
on the hinge-loss SVM. Interestingly, Hinton proposed the L2 -loss as a way to repair the
Widrow-Hoff loss (i.e., least-squares classification loss), which makes a lot of sense from an
intuitive point of view. Hinton’s work remained unnoticed by the community of researchers
working on SVMs during the early years. However, the approach was eventually rediscovered
in the recent focus on deep learning, where many of the early works were revisited.
Logistic regression uses a loss function whose shape is very similar to that of the hinge-loss
SVM. However, the hinge-loss is piecewise linear, whereas the logistic loss is a smooth
function. Logistic regression has a probabilistic interpretation in terms of the log-likelihood
loss of a data point. The loss function of logistic regression is formulated as follows:

J = Σ_{i=1}^n log(1 + exp(−y_i [W · X_i^T])) + (λ/2) ||W||^2    [Logistic Regression] (4.56)

Here, the ith term of the summation is the point-specific loss J_i.
All logarithms in this section are natural logarithms. When W · X_i^T is large in absolute magnitude
and has the same sign as y_i, the point-specific loss J_i is close to log(1 + exp(−∞)) = 0.
On the other hand, the loss is larger than log(1 + exp(0)) = log(2) when the signs of y_i and
W · X_i^T disagree. For cases in which the signs disagree, the loss increases almost linearly
with W · X_i^T, as the magnitude of W · X_i^T becomes increasingly large. This is because of
the following relationship for z = y_i (W · X_i^T):

lim_{z→−∞} log(1 + exp(−z)) / (−z) = 1

The above limit is computed using L'Hopital's rule, which differentiates the numerator and
denominator of a limit to evaluate it. Note that the hinge loss of an SVM is always (1 − z)
for z = y_i (W · X_i^T) < 1. One can show that the logistic loss differs from the hinge loss by a
constant offset of 1 for grossly misclassified instances:

lim_{z→−∞} [(1 − z) − log(1 + exp(−z))] = 1
Since constant offsets do not affect gradient descent, logistic loss and hinge loss treat grossly
misclassified training instances in a similar way. However, unlike the hinge loss, all instances
have non-zero logistic losses. Like SVMs, the loss function of logistic regression is convex:
Lemma 4.8.2 The loss function of logistic regression is a convex function. Adding the
regularization term makes the loss function strictly convex.
Proof: This result can be shown by using the fact that the point-wise loss is of the form
log[1 + exp(G(X_i))], where G(X_i) is the linear function G(X_i) = −y_i (W · X_i^T). Furthermore,
the function log[1 + exp(−z)] is convex (see Problem 4.3.4). Then, by using Lemma 4.3.2
on the composition of convex and linear functions, it is evident that each point-specific
loss is convex. Adding all the point-specific losses also results in a convex function because
of the first part of the same lemma. Furthermore, adding the regularization term makes
the function strictly convex according to Lemma 4.3.6, because the regularization term is
strictly convex.
It is, in fact, possible to show that logistic regression is strictly convex even without regu-
larization. We leave the proof of this result as an exercise.
Problem 4.8.2 Show that the loss function in logistic regression is strictly convex even
without regularization.
[Figure: (a) loss functions of optimization models, comparing the least-squares loss, the SVM hinge loss, and the logistic loss, with the decision boundary and the over-performance-penalized region of the least-squares loss marked; (b) relationships among linear models, showing how linear regression with loss (y − W · X^T)^2 for numeric y specializes to least-squares classification (LLSF) with loss (y − W · X^T)^2 = (1 − y W · X^T)^2 when y ∈ {−1, +1}]
Figure 4.9: (a) The loss for a training instance X belonging to the positive class at varying
values of W · X^T. Logistic regression can be viewed as a smooth variant of the SVM hinge loss.
Least-squares classification is the only case in which the loss increases with increasingly
correct classification in some regions. (b) All linear models in classification derive their
motivation from the parent problem of linear regression
in the real-world experiences of machine learning practitioners, who often find that the two
models seem to provide similar results. The least-squares classification model provides the
only loss function in which increasing the magnitude of W · X^T increases the loss for correctly
classified instances. The semantic relationships among different loss functions are illustrated
in Figure 4.9(b). It is evident that all the binary classification models inherit the basic
structure of their loss functions from least-squares regression (while making adjustments
for the binary nature of the target variable).
These relationships among their loss functions are also reflected as relationships among
their updates in gradient descent. The updates for all three models can be expressed in
a unified way in terms of a model-specific mistake function δ(X i , yi ) for the training pair
(X i , yi ) at hand. In particular, it can be shown that the stochastic gradient-descent updates
of all the above algorithms are of the following form:
W ⇐ W(1 − αλ) + α y_i [δ(X_i, y_i)] X_i^T    (4.60)

The mistake function δ(X_i, y_i) is (y_i − W · X_i^T) for least-squares regression and classification,
an indicator variable for SVMs, and a probability value for logistic regression.
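This unified view can be sketched in code. In the sketch below (data and hyperparameters invented), each mistake function is written in terms of the margin y_i (W · X_i^T); for binary targets, the least-squares mistake y_i (y_i − W · X_i^T) simplifies to 1 minus the margin:

```python
import numpy as np

# Sketch of the unified update W ⇐ W(1 − αλ) + α y_i δ(X_i, y_i) X_i^T.
# Only the model-specific mistake function δ differs among the models.
def mistake(model, margin):           # margin = y_i (W · X_i^T)
    if model == "least_squares":
        return 1.0 - margin           # equals y_i (y_i − W·X_i^T) for y_i = ±1
    if model == "svm_hinge":
        return float(margin < 1.0)    # indicator variable
    if model == "logistic":
        return 1.0 / (1.0 + np.exp(margin))   # probability of a mistake

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)
alpha, lam = 0.02, 0.01

accuracy = {}
for model in ("least_squares", "svm_hinge", "logistic"):
    W = np.zeros(2)
    for epoch in range(30):
        for i in range(200):
            d_ = mistake(model, y[i] * (W @ X[i]))
            W = W * (1 - alpha * lam) + alpha * y[i] * d_ * X[i]
    accuracy[model] = np.mean(np.sign(X @ W) == y)
```

All three models differ only in the single line computing δ, which mirrors the relationships among their loss functions discussed above.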
Therefore, one can set up a loss value J_i for the ith training instance as follows:

J_i = Σ_{j: j≠c(i)} max(W_j · X_i^T − W_{c(i)} · X_i^T + 1, 0)    (4.62)
It is not difficult to see the similarity between this loss function and that of the binary
SVM. The overall objective function can be computed by adding the losses over the different
training instances, and also adding a regularization term Ω(W_1 . . . W_k) = λ Σ_r ||W_r||^2 / 2:

J = Σ_{i=1}^n Σ_{j: j≠c(i)} max(W_j · X_i^T − W_{c(i)} · X_i^T + 1, 0) + (λ/2) Σ_{r=1}^k ||W_r||^2
The fact that the Weston-Watkins loss function is convex has a proof that is very similar
to the binary case. One needs to show that each additive term of Ji is convex in terms
of the parameter vector; after all, this additive term is the composition of a linear and a
maximization function. This can be used to show that Ji is convex as well. We leave this
proof as an exercise for the reader:
Problem 4.9.1 The Weston-Watkins loss function is convex in terms of its parameters.
As in the case of the previous models, one can learn the weight vectors with the use of
gradient descent.
Furthermore, the derivative of J_i can be written with respect to W_r by using the multivariate
chain rule as follows:

∂J_i/∂W_r = Σ_{j=1}^k (∂J_i/∂v_ji) (∂v_ji/∂W_r)    (4.63)
The partial derivative of J_i = Σ_{r: r≠c(i)} max{v_ri, 0} with respect to v_ji is equal to the partial
derivative of max{v_ji, 0} with respect to v_ji. The partial derivative of the function
max{v_ji, 0} with respect to v_ji is 1 for positive v_ji, and 0 otherwise. We denote this value
by δ(j, X_i). In other words, the binary value δ(j, X_i) is 1 when W_{c(i)} · X_i^T < W_j · X_i^T + 1,
and therefore the correct class is not preferred with respect to class j with sufficient margin.
The right-hand side of Equation 4.63 requires us to compute the derivative of v_ji =
W_j · X_i^T − W_{c(i)} · X_i^T + 1 with respect to W_r. This is an easy derivative to compute because
of its linearity, as long as we are careful to track which weight vectors W_r appear with
positive signs in v_ji. In the case when r ≠ c(i) (separator for the wrong class), the derivative of
v_ji with respect to W_r is X_i^T when j = r, and 0 otherwise. In the case when r = c(i), the
derivative is −X_i^T when j ≠ r, and 0 otherwise. On substituting these values, one obtains
the gradient of J_i with respect to W_r as follows:

∂J_i/∂W_r = δ(r, X_i) X_i^T                    [if r ≠ c(i)]
∂J_i/∂W_r = −Σ_{j: j≠r} δ(j, X_i) X_i^T        [if r = c(i)]
One can obtain the gradient of J with respect to W_r by summing up the contributions
of the different J_i and the regularization component of λW_r. Therefore, the updates for
stochastic gradient descent are as follows:

W_r ⇐ W_r(1 − αλ) − α (∂J_i/∂W_r)    ∀r ∈ {1 . . . k}
    = W_r(1 − αλ) − α δ(r, X_i) X_i^T                [if r ≠ c(i)]
    = W_r(1 − αλ) + α Σ_{j: j≠r} δ(j, X_i) X_i^T     [if r = c(i)]
An important special case is one in which there are only two classes. In such a case, it
can be shown that the resulting updates of the separator belonging to the positive class
will be identical to those in the hinge-loss SVM. Furthermore, the relationship W 1 = −W 2
will always be maintained, assuming that the parameters are initialized in this way. This
is because the update to each separator will be the negative of the update to the other
separator. We leave the proof of this result as a practice exercise.
Problem 4.9.2 Show that the Weston-Watkins SVM defaults to the binary hinge-loss SVM
in the special case of two classes.
One observation from the relationship W 1 = −W 2 in the binary case is that there is a
slight redundancy in the number of parameters of the multiclass SVM. This is because we
really need (k − 1) separators in order to model k classes, and one separator is redundant.
However, since the update of the kth separator is always exactly defined by the updates of
the other (k − 1) separators, this redundancy does not make a difference.
Problem 4.9.3 Propose a natural L2-loss function for the multiclass SVM. Derive the gradient
and the details of stochastic gradient descent in this case.
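The Weston-Watkins stochastic updates derived above can be sketched on a toy three-class problem (all data, cluster centers, and hyperparameters are invented):

```python
import numpy as np

# Sketch of the Weston-Watkins stochastic gradient updates derived above.
rng = np.random.default_rng(6)
k, d = 3, 2
centers = np.array([[0.0, 3.0], [3.0, -2.0], [-3.0, -2.0]])
X = np.vstack([rng.standard_normal((100, d)) + c for c in centers])
c_idx = np.repeat(np.arange(k), 100)              # correct class c(i)

W = np.zeros((k, d))                              # one separator W_r per class
alpha, lam = 0.01, 0.01
for epoch in range(30):
    for i in range(len(X)):
        scores, c = W @ X[i], c_idx[i]
        # delta(j, X_i) = 1 when W_c·X_i^T < W_j·X_i^T + 1 for a wrong class j
        delta = (scores - scores[c] + 1.0 > 0).astype(float)
        delta[c] = 0.0
        grad = np.outer(delta, X[i])              # rows r ≠ c(i): delta(r, X_i) X_i^T
        grad[c] = -delta.sum() * X[i]             # row r = c(i): −Σ_j delta(j, X_i) X_i^T
        W = W * (1 - alpha * lam) - alpha * grad

accuracy = np.mean(np.argmax(X @ W.T, axis=1) == c_idx)
```

Prediction selects the class whose separator yields the largest score, as in the text.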
It is easy to verify that the probability of X_i belonging to the rth class increases exponentially
with increasing dot product between W_r and X_i^T.
The goal in learning W 1 . . . W k is to ensure that the aforementioned probability is high
for the class c(i) for (each) instance X i . This is achieved by using the cross-entropy loss,
4.9. OPTIMIZATION MODELS FOR THE MULTICLASS SETTING 193
which is the negative logarithm of the probability of the instance X i belonging to the correct
class c(i):
J = −Σ_{i=1}^n log[P(c(i)|X_i)] + (λ/2) Σ_{r=1}^k ||W_r||^2

Here, the ith term of the summation is the point-specific loss J_i.
It is relatively easy to show that each Ji = −log[P (c(i)|X i )] is convex using an approach
similar to the case of binary logistic regression.
∂J_i/∂W_r = Σ_j (∂J_i/∂v_ji) (∂v_ji/∂W_r) = (∂J_i/∂v_ri) (∂v_ri/∂W_r) = X_i^T (∂J_i/∂v_ri)    (4.65)
In the above simplification, we used the fact that vji has a zero gradient with respect to
W r for j = r, and therefore all terms in the summation except for the case of j = r drop
out to 0. We still need to compute the partial derivative of Ji with respect to vri . First, we
express Ji directly as a function of v1i , v2i , . . . , vki as follows:
J_i = −log[P(c(i)|X_i)] = −W_{c(i)} · X_i^T + log[Σ_{j=1}^k exp(W_j · X_i^T)]    [Using Equation 4.64]
    = −v_{c(i),i} + log[Σ_{j=1}^k exp(v_ji)]
Therefore, we can compute the partial derivative of J_i with respect to v_ri as follows:

∂J_i/∂v_ri = −(1 − exp(v_ri) / Σ_{j=1}^k exp(v_ji))    if r = c(i)
∂J_i/∂v_ri = exp(v_ri) / Σ_{j=1}^k exp(v_ji)           if r ≠ c(i)

Equivalently:

∂J_i/∂v_ri = −(1 − P(r|X_i))    if r = c(i)
∂J_i/∂v_ri = P(r|X_i)           if r ≠ c(i)
By substituting the value of the partial derivative ∂J_i/∂v_ri in Equation 4.65, we obtain the
following:

∂J_i/∂W_r = −X_i^T (1 − P(r|X_i))    if r = c(i)    (4.66)
∂J_i/∂W_r = X_i^T P(r|X_i)           if r ≠ c(i)
The probabilities in the above update can be substituted using Equation 4.64. It is note-
worthy that the updates use the probabilities of mistakes in order to change each separator.
In comparison, methods like least-squares regression use the magnitudes of mistakes in the
updates. This difference is natural, because the softmax method is a probabilistic model.
The above stochastic gradient descent is proposed for a mini-batch size of 1. We leave the
derivation for a mini-batch S as an exercise for the reader.
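The gradient of Equation 4.66 can be sanity-checked numerically. The following sketch (invented dimensions and data) compares it against a finite-difference approximation of the cross-entropy loss J_i:

```python
import numpy as np

# Sketch: the softmax gradient of Equation 4.66 versus a finite difference.
rng = np.random.default_rng(7)
k, d = 3, 4
W = rng.standard_normal((k, d))                  # separators W_1 ... W_k
x = rng.standard_normal(d)                       # a single instance X_i
c = 1                                            # its correct class c(i)

def loss(W):                                     # J_i = −v_c + log Σ_j exp(v_j)
    v = W @ x
    return -v[c] + np.log(np.sum(np.exp(v)))

p = np.exp(W @ x) / np.sum(np.exp(W @ x))        # P(r | X_i), Equation 4.64
grad = np.outer(p, x)                            # rows r ≠ c(i): X_i^T P(r|X_i)
grad[c] = -(1.0 - p[c]) * x                      # row r = c(i): −X_i^T (1 − P(c|X_i))

# Finite-difference check of entry (0, 0) of the gradient
eps = 1e-6
Wp = W.copy(); Wp[0, 0] += eps
numeric = (loss(Wp) - loss(W)) / eps
```

The analytical entry and the finite-difference estimate agree to several decimal places, which is a standard way of verifying a hand-derived gradient.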
Problem 4.9.4 The text provides the derivation of stochastic gradient descent in multino-
mial logistic regression for a mini-batch size of 1. Provide a derivation of the update of each
separator W r for a mini-batch S containing pairs of the form (X, c) as follows:
W_r ⇐ W_r(1 − αλ) + α Σ_{(X,c)∈S: r=c} X^T (1 − P(r|X)) − α Σ_{(X,c)∈S: r≠c} X^T P(r|X)    (4.68)
Just as the Weston-Watkins SVM defaults to the hinge-loss SVM for the two-class case,
multinomial logistic regression defaults to logistic regression in the special case of two classes.
We leave the proof of this result as an exercise.
Problem 4.9.5 Show that multinomial logistic regression defaults to binary logistic regres-
sion in the special case of two classes.
Note that this is a single-variable optimization problem, which is usually much simpler to
solve. In some cases, one might need to use line-search to determine wi , when a closed form
of the solution is not available. If one cycles through all the variables, and no improvement
occurs, convergence has occurred. In the event that the optimized function is convex and
differentiable in minimization form, the solution at convergence will be the optimal one.
For non-convex functions, optimality is certainly not guaranteed, as the system can get
stuck at a local minimum. Even for functions that are convex but non-differentiable, it is
possible for coordinate descent to reach a suboptimal solution. An important point about
coordinate descent is that it implicitly uses more than first-order gradient information; after
all, it finds an optimal solution with respect to the variable it is optimizing. As a result,
convergence can sometimes be faster with coordinate descent, as compared to stochastic
Figure 4.10: The contour plot of a non-differentiable function is shown. The center of the
parallelogram-like contour plot is the optimum. Note that the axis-parallel moves can only
worsen the objective function from acute-angled positions
gradient descent. Another important point about coordinate descent is that convergence is
usually guaranteed, even if the resulting solution is a local optimum.
There are two main problems with coordinate descent. First, it is inherently sequential
in nature. The approach optimizes one variable at a time, and therefore it would need to
have optimized with respect to one variable in order to perform the next optimization step.
Therefore, the parallelization of coordinate descent is always a challenge. Second, it can
get stuck at suboptimal points (local minima). Even though the convergence to a local
minimum is guaranteed, the use of a single variable can sometimes be myopic. This type
of problem could occur even for convex functions, if the function is not differentiable. For
example, consider the following function:

f(x, y) = |x + y| + 2|x − y|    (4.69)

This objective function is convex but not differentiable. The optimal point of this function
is (0, 0). However, if coordinate descent reaches the point (1, 1), it will cycle through both
variables without improving the solution. The problem is that no path exists to the optimal
solution using axis-parallel directions. Such a situation can occur with non-differentiable
functions having pointed contour plots; if one ends up at one of the corners of the contour
plot, there might not be a suitable axis-parallel direction of movement in order to improve
the objective function. An example of such a scenario is illustrated in Figure 4.10. Such a
situation can never arise in a differentiable function, where at least one axis-parallel direction
will always improve the objective function.
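This failure mode is easy to reproduce. The following sketch emulates exact single-variable minimization with a fine grid search (an invented stand-in for closed-form line minimization), shows coordinate descent stalling at (1, 1) on f(x, y) = |x + y| + 2|x − y|, and then shows the escape enabled by the variable transformation u = x + y, v = x − y:

```python
import numpy as np

# Sketch: coordinate descent stalls on f(x, y) = |x + y| + 2|x − y| at (1, 1),
# although the global optimum is (0, 0).
def f(x, y):
    return np.abs(x + y) + 2 * np.abs(x - y)

grid = np.linspace(-3.0, 3.0, 6001)              # step 0.001

x, y = 1.0, 1.0
for _ in range(10):                              # ten full cycles
    x = grid[np.argmin(f(grid, y))]              # optimize x with y fixed
    y = grid[np.argmin(f(x, grid))]              # optimize y with x fixed
# coordinate descent never leaves the neighborhood of (1, 1)

# After the transformation u = x + y, v = x − y, the non-differentiable part
# becomes separable: F(u, v) = |u| + 2|v|, and coordinate descent succeeds.
u, v = x + y, x - y
u = grid[np.argmin(np.abs(grid) + 2 * np.abs(v))]
v = grid[np.argmin(np.abs(u) + 2 * np.abs(grid))]
# (x, y) = ((u + v)/2, (u − v)/2) is now near the global optimum (0, 0)
```

The transformed coordinates give axis-parallel directions along which a path to the optimum exists, which is exactly the point of the variable-transformation trick.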
A natural question that arises is to characterize the conditions under which coordinate
descent is well behaved in non-differentiable function optimization. One observation is that
even though the function f (x, y) of Equation 4.69 is convex, its additive components are
not separable in terms of the individual variables. In general, a sufficient condition for
coordinate descent to reach a global optimum solution is that the additive components of
the non-differentiable portion of the multivariate function need to be expressed in terms of
individual variables, and each of them must be convex. We summarize a general version of
the above result:
Lemma 4.10.1 Consider a multivariate function F (w) that can be expressed in the follow-
ing form:
F(w) = G(w) + Σ_{i=1}^d H_i(w_i)
The function G(w) is a convex and differentiable function, whereas each Hi (wi ) is a convex,
univariate function of wi , which might be non-differentiable. Then, coordinate descent will
converge to a global optimum of the function F (w).
An example of a non-differentiable function Hi (wi ), which is also convex, is Hi (wi ) = |wi |.
This function is used for L1 -regularization. In fact, we will discuss the use of coordinate
descent for L1 -regularized regression in Section 5.8.1.2 of Chapter 5.
The issue of additive separability is important, and it is sometimes helpful to perform
a variable transformation, so that the non-differentiable part is additively separable. For
example, consider a generalization of the objective function of Equation 4.69:
f (x, y) = g(x, y) + |x + y| + 2|x − y| (4.70)
Assume that g(x, y) is differentiable. Now, we make the following variable transformations
u = x + y and v = x − y. Then, one can rewrite the objective function after the variable
transformation as f ([u + v]/2, [u − v]/2). In other words, we always substitute [u + v]/2
everywhere for x and [u − v]/2 everywhere for y to obtain the following:
F (u, v) = g([u + v]/2, [u − v]/2) + |u| + 2|v| (4.71)
Each of the non-differentiable components is a convex function. Now, one can perform
coordinate descent with respect to u and v without any problem. The main point of this
trick is that the variable transformation changes the directions of movement, so that a path
to the optimum solution exists.
Interestingly, even though non-differentiable functions cause problems for coordinate de-
scent, such functions (and even discrete optimization problems) are often better solved by
coordinate descent than gradient descent. This is because coordinate descent often enables
the decomposition of a complex problem into smaller subproblems. As a specific example
of this decomposition, we will show how the well-known k-means algorithm is an exam-
ple of coordinate descent, when applied to a potentially difficult mixed integer program
(cf. Section 4.10.3).
Note that the left-hand side is free of w_i because the two terms involving w_i cancel each
other out. This is because the term d_i^T r contributes −w_i d_i^T d_i, which cancels with w_i d_i^T d_i.
Because one of the sides does not depend on w_i, we obtain an update that
yields the optimal value of w_i in a single iteration:

w_i ⇐ w_i + (d_i^T r) / ||d_i||^2    (4.74)
In the above update, we have used the fact that d_i^T d_i is the same as the squared norm of
d_i. It is common to standardize each column of the data matrix to zero mean and unit
variance. In such a case, the value of ||d_i||^2 will be 1, and the update further simplifies to
the following:

w_i ⇐ w_i + d_i^T r    (4.75)
This update is extremely efficient. One full cycle of coordinate descent through all the variables
requires asymptotically similar time as one full cycle of stochastic gradient descent
through all the points. However, the number of cycles required by coordinate descent tends
to be smaller than the number of cycles required by stochastic gradient descent in least-squares
regression. Therefore, the coordinate-descent approach is more efficient. One can also derive
a form of coordinate descent for regularized least-squares regression. We leave this problem
as a practice exercise.
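The update of Equation 4.74 can be sketched in a few lines (synthetic noise-free data, invented dimensions); maintaining the residual vector r = y − Dw keeps each coordinate step O(n):

```python
import numpy as np

# Sketch: coordinate descent for least-squares regression with the update
# w_i ⇐ w_i + (d_i^T r) / ||d_i||^2, where d_i is the ith column of D.
rng = np.random.default_rng(8)
n, d = 100, 5
D = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = D @ w_true                                   # noise-free targets

w = np.zeros(d)
r = y - D @ w                                    # residual vector
for cycle in range(100):
    for i in range(d):
        step = (D[:, i] @ r) / (D[:, i] @ D[:, i])
        w[i] += step
        r -= step * D[:, i]                      # keep the residual current
```

After a modest number of cycles, the iterate coincides with the exact least-squares solution of this (synthetic) problem.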
Problem 4.10.1 Show that if Tikhonov regularization is used with parameter λ on least-
squares regression, then the update of Equation 4.74 needs to be modified to the following:
w_i ⇐ (w_i ||d_i||^2 + d_i^T r) / (||d_i||^2 + λ)
The simplification of optimization subproblems that are inherent in solving for one variable
at a time (while keeping others fixed) is very significant in coordinate descent.
It is assumed that there are a total of n data points denoted by the d-dimensional
row vectors X_1 . . . X_n. The k-means algorithm creates k prototypes, which are denoted
by z_1 . . . z_k, so that the sum of squared distances of the data points from their nearest
prototypes is as small as possible. Let y_ij be a 0-1 indicator of whether point i gets assigned
to cluster j. Each point gets assigned to only a single cluster, and therefore we have Σ_j y_ij = 1.
One can therefore formulate the k-means problem as a mixed integer program over the
real-valued d-dimensional prototype row vectors z_1 . . . z_k and the matrix Y = [y_ij]_{n×k} of
discrete assignment variables:
Minimize Σ_{j=1}^k [Σ_{i=1}^n y_ij ||X_i − z_j||^2]    (the inner summation over i is denoted O_j)

subject to:

Σ_{j=1}^k y_ij = 1    ∀i
y_ij ∈ {0, 1}
This is a mixed integer program, and such optimization problems are known to be very hard
to solve in general. However, in this case, carefully choosing the blocks of variables is essen-
tial. Choosing the blocks of variables carefully also trivializes the underlying constraints.
In this particular case, the variables are divided into two blocks corresponding to the k × d
prototype variables in the vectors z 1 . . . z k and the n × k assignment variables Y = [yij ].
We alternately minimize over these two blocks of variables, because it provides the best
possible decomposition of the problem into smaller subproblems. Note that if the prototype
variables are fixed, the resulting assignment problem becomes trivial and one assigns each
point to the nearest prototype. On the other hand, if the cluster assignments are fixed, then
the objective function can be decomposed into separate objective functions over different
clusters. The portion of the objective function contributed by the jth cluster is denoted O_j
in the optimization formulation above. For each cluster, the relevant optimal
solution z_j is the mean of the points assigned to that cluster. This result can be shown by
setting the gradient of the objective function $O_j$ with respect to each $\overline{z}_j$ to 0:

$$\frac{\partial O_j}{\partial \overline{z}_j} = -2 \sum_{i=1}^{n} y_{ij} (\overline{X}_i - \overline{z}_j) = \overline{0} \quad \forall j \in \{1 \ldots k\} \tag{4.76}$$
The points that do not belong to cluster j drop out in the above condition because yij = 0
for such points. As a result, z j is simply the mean of the points in its cluster. Therefore,
we need to alternately assign points to their closest prototypes, and set the prototypes
to the centroids of the clusters defined by the assignment; these are exactly the steps of
the well-known k-means algorithm. The centroid computation is a continuous optimization
step, whereas cluster assignment is a discrete optimization step (which is greatly simplified
by the decomposition approach of coordinate descent).
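The two alternating blocks can be sketched in a few lines of Python. The following is a minimal illustration (with prototypes initialized to randomly chosen data points, an assumption not specified above), not production clustering code:

```python
import numpy as np

def k_means(X, k, n_iters=50, seed=0):
    """Block coordinate descent for k-means: alternate between the discrete
    assignment block Y = [y_ij] and the continuous prototype block z_1..z_k."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    z = X[rng.choice(n, size=k, replace=False)].copy()  # initial prototypes
    for _ in range(n_iters):
        # Discrete block: fix prototypes, assign each point to its nearest one.
        dists = ((X[:, None, :] - z[None, :, :]) ** 2).sum(axis=2)
        assign = dists.argmin(axis=1)
        # Continuous block: fix assignments, set each prototype to the mean of
        # the points assigned to it (the optimality condition for O_j above).
        for j in range(k):
            members = X[assign == j]
            if len(members) > 0:
                z[j] = members.mean(axis=0)
    return z, assign
```

Each pass performs one discrete optimization step followed by one continuous optimization step, exactly mirroring the coordinate-descent view described above.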
4.11 Summary
This chapter introduced the basic optimization models in machine learning. We discussed
the conditions for optimality, as well as the cases in which a global optimum is guaranteed.
Optimization problems in machine learning often have objective functions which can be
separated into components across individual data points. This property enables the use of
efficient sampling methods like stochastic gradient descent. Optimization models in machine
learning are significantly different from traditional optimization in terms of the need to maximize performance on out-of-sample data rather than on the original optimization problem defined on the training data. Several examples of optimization in machine learning, such as linear regression, the support vector machine, and logistic regression, were discussed. Generalizations to multiclass models were also discussed. An alternative to stochastic gradient
descent is coordinate descent, which can be more efficient in some situations.
4.13 Exercises
1. Find the saddle points, minima, and maxima of the following functions:
(a) $F(x) = x^2 - 2x + 2$
(b) $F(x, y) = x^2 - 2x - y^2$
2. Suppose that $\overline{y}$ is a d-dimensional vector with very small norm $\epsilon = \|\overline{y}\|$. Consider a continuous and differentiable objective function $J(\overline{w})$ with zero gradient and Hessian $H$ at $\overline{w} = \overline{w}_0$. Show that $\overline{y}^T H \overline{y}$ is approximately equal to twice the change in $J(\overline{w})$ caused by perturbing $\overline{w} = \overline{w}_0$ by $\epsilon$ in the direction $\overline{y}/\|\overline{y}\|$.
3. Suppose that an optimization function $J(\overline{w})$ has a gradient of 0 at $\overline{w} = \overline{w}_0$. Furthermore, the Hessian of $J(\overline{w})$ at $\overline{w} = \overline{w}_0$ has both positive and negative eigenvalues. Show how you would use the Hessian to (i) find a vector direction along which infinitesimal movements in either direction from $\overline{w}_0$ decrease $J(\overline{w})$; and (ii) find a vector direction along which infinitesimal movements in either direction from $\overline{w}_0$ increase $J(\overline{w})$. Is $\overline{w}_0$ a maximum, minimum, or saddle point?
4. We know that the maximum of two convex functions is a convex function. Is the minimum of two convex functions convex? Is the intersection of two convex sets convex? Is the union of two convex sets convex? Justify your answer in each case.
5. Either prove each statement or give a counterexample: (i) If f (x) and g(x) are convex,
then F (x, y) = f (x) + g(y) is convex. (ii) If f (x) and g(x) are convex, then F (x, y) =
f (x) · g(y) is convex.
6. Hinge-loss without margin: Suppose that we modified the hinge-loss on page 184 by removing the constant value within the maximization function as follows:

$$J = \sum_{i=1}^{n} \max\{0, -y_i(\overline{W} \cdot \overline{X}_i^T)\} + \frac{\lambda}{2}\|\overline{W}\|^2$$
This loss function is referred to as the perceptron criterion. Derive the stochastic
gradient descent updates for this loss function.
7. Compare the perceptron criterion of the previous exercise to the hinge-loss in terms of its sensitivity to the magnitude of $\overline{W}$. State one non-informative weight vector $\overline{W}$ that will always be an optimal solution to the optimization problem of the previous exercise. Use this observation to explain why a perceptron (without suitable modifications) can sometimes provide much poorer solutions than an SVM when the points of the two classes cannot be separated by a linear hyperplane.
8. Consider an unconstrained quadratic program of the form $\overline{w}^T A \overline{w} + \overline{b}^T \overline{w} + c$, where $\overline{w}$ is a d-dimensional vector of optimization variables, and the d × d matrix A is positive semidefinite. The constant vector $\overline{b}$ is d-dimensional. Show that a global minimum exists for this quadratic program if and only if $\overline{b}$ lies in the column space of A.
9. The text of the book discusses a stochastic gradient descent update of the Weston-Watkins SVM, but not a mini-batch update. Consider a setting in which the mini-batch S contains training pairs of the form $(\overline{X}, c)$, where each $c \in \{1, \ldots, k\}$ is the categorical class label. Show that the mini-batch stochastic gradient-descent step for each separator $\overline{W}_r$ at learning rate $\alpha$ is:

$$\overline{W}_r \Leftarrow \overline{W}_r(1 - \alpha\lambda) + \alpha \sum_{(\overline{X}, c) \in S,\, r = c} \overline{X}^T \Big[\sum_{j \neq r} \delta(j, \overline{X})\Big] - \alpha \sum_{(\overline{X}, c) \in S,\, r \neq c} \overline{X}^T \delta(r, \overline{X}) \tag{4.77}$$
10. Consider the following function $f(x, y) = x^2 + 2y^2 + axy$. For what values of a (if any) is the function $f(x, y)$ concave, convex, and indefinite?
12. Consider the $L_1$-loss function for binary classification, where for feature-class pair $(\overline{X}_i, y_i)$ and d-dimensional parameter vector $\overline{W}$, the point-specific loss for the ith instance is defined as follows:

$$L_i = \|y_i - \overline{W} \cdot \overline{X}_i^T\|_1$$

Here, we have $y_i \in \{-1, +1\}$, and $\overline{X}_i$ is a d-dimensional row vector of features. The norm used above is the $L_1$-norm instead of the $L_2$-norm of least-squares classification. Discuss why the loss function can be written as follows for $y_i \in \{-1, +1\}$:

$$L_i = \|1 - y_i \overline{W} \cdot \overline{X}_i^T\|_1$$
Here, λ is the regularization parameter, and α is the learning rate. Compare this
update with the hinge-loss update for SVMs.
14. Let $\overline{x}$ be an $n_1$-dimensional vector, and W be an $n_2 \times n_1$-dimensional matrix. Show how to use the vector-to-vector chain rule to compute the vector derivative of $W(\overline{x} \odot \overline{x} \odot \overline{x})$ with respect to $\overline{x}$, where $\odot$ denotes the elementwise product. Is the resulting vector derivative a scalar, vector, or matrix? Now repeat this exercise for $G(W(\overline{x} \odot \overline{x} \odot \overline{x}) - \overline{y})$, where $\overline{y}$ is a constant vector in $n_2$ dimensions, and $G(\cdot)$ is a function summing the absolute values of the elements of its argument into a scalar.
16. Incremental linear regression with added points: Suppose that you have a data
matrix D and target vector $\overline{y}$ in linear regression. You have done all the hard work to invert $(D^T D)$ and then compute the closed-form solution $\overline{W} = (D^T D)^{-1} D^T \overline{y}$.
Now you are given an additional training point (X, y), and are asked to compute the
updated parameter vector W . Show how you can do this efficiently without having
to invert a matrix from scratch. Use this result to provide an efficient strategy for
incremental linear regression. [Hint: Matrix inversion lemma.]
17. Incremental linear regression with added features: Suppose that you have a
data set with a fixed number of points, but with an ever-increasing number of di-
mensions (as data scientists make an ever-increasing number of measurements and
surveys). Provide an efficient strategy for incremental linear regression with regular-
ization. [Hint: There are multiple ways to express the closed-form solution in linear
regression because of the push-through identity of Problem 1.2.13.]
$$f(x, y) = ax^2 + by^2 + 2cxy + dx + ey + f$$
Show that $f(x, y)$ is convex if and only if a and b are non-negative, and c is at most equal to the geometric mean of a and b in absolute magnitude.
22. Show that the functions $f(\overline{x}) = \langle \overline{x}, \overline{x} \rangle$ and $g(\overline{x}) = \sqrt{\langle \overline{x}, \overline{x} \rangle}$ are both convex. With regard to inner products, you are allowed to use only the basic axioms, and the Cauchy-Schwarz/triangle inequality.
23. Two-sided matrix least-squares: Let A be an n × m matrix and B be a k × d matrix. You want to find the m × k matrix X so that $J = \|C - AXB\|_F^2$ is minimized, where C is a known n × d matrix. Derive the derivative of J with respect to X and the
optimality conditions. Show that one possible solution to the optimality conditions is
X = A+ CB + , where A+ and B + represent the Moore-Penrose pseudo-inverses of A
and B, respectively. [Hint: Compute the scalar derivatives with respect to individual
elements of X and then convert to matrix calculus form. Also see Exercises 47–51 of
Chapter 2.]
24. Suppose that you replace the sum-of-squared-Euclidean objective with a sum-of-
Manhattan objective for the k-means algorithm (pp. 198). Show that block coordinate
descent results in the k-medians clustering algorithm, where each dimension of the
“centroid” representative is chosen as the median of the cluster along that dimension
and assignment of points to representatives is done using the Manhattan distance
instead of Euclidean distance. [Interesting fact: Many other representative-based clus-
tering variants like k-modes and k-medoids are coordinate descent algorithms.]
25. Consider the cubic polynomial objective function f (x) = ax3 + bx2 + cx + d. Under
what conditions does this objective function not have a critical point? Under what conditions is it strictly increasing on $(-\infty, +\infty)$?
26. Consider the cubic polynomial objective function f (x) = ax3 + bx2 + cx + d. Under
what conditions does this objective have exactly one critical point? What kind of
critical point is it? Give an example of such an objective function.
27. Let f (x) be a univariate polynomial of degree n. What is the maximum number of
critical points of this polynomial? What is the maximum number of minima, maxima,
and saddle points?
28. What is the maximum number of critical points of a multivariate polynomial of degree
n in d dimensions? Give an example of a polynomial where this maximum is met.
29. Suppose that $\overline{h}$ and $\overline{x}$ are column vectors, and $W_1$, $W_2$, and $W_3$ are matrices satisfying $\overline{h} = W_1 W_2 \overline{x} - W_2^2 W_3 \overline{x} + W_1 W_2 W_3 \overline{x}$. Derive an expression for $\frac{\partial \overline{h}}{\partial \overline{x}}$.
30. Consider a situation in which $\overline{h}_i = W_i W_{i-1} \overline{h}_{i-1}$, for $i \in \{1 \ldots n\}$. Here, each $W_i$ is a matrix and each $\overline{h}_i$ is a vector. Use the vector-centric chain rule to derive an expression for $\frac{\partial \overline{h}_i}{\partial \overline{h}_0}$.
Chapter 5

Advanced Optimization Solutions
“The journey of a thousand miles begins with one step.” –Lao Tzu
5.1 Introduction
The previous chapter introduced several basic algorithms for gradient descent. However,
these algorithms do not always work well because of the following reasons:
• Flat regions and local optima: The objective functions of machine learning algorithms
might have local optima and flat regions in the loss surface. As a result, the learning
process might be too slow or arrive at a poor solution.
• Differential curvature: The directions of gradient descent are only instantaneous direc-
tions of best movement, which usually change over steps of finite length. Therefore, a
steepest direction of descent no longer remains the steepest direction, after one makes
a finite step in that direction. If the step is too large, the different components of
the gradient might flip signs, and the objective function might worsen. A direction is said to show high curvature if the gradient changes rapidly in that direction. Clearly,
directions of high curvature cause uncertainty in the outcomes of gradient descent.
• Non-differentiable objective functions: Some objective functions are non-differentiable,
which causes problems for gradient descent. If differentiability is violated at a relatively
small number of points and the loss function is informative for the large part, one can
use gradient descent with minor modifications. More challenging cases arise when the
objective functions have steep cliffs or flat surfaces in large regions of the space, and
the gradients are not informative at all.
The simplest approach to address both flat regions and differential curvature is to adjust
the gradients in some way to account for poor convergence. These methods implicitly use
the curvature to adjust the gradients of the objective function with respect to different
parameters. Examples of such techniques include the pairing of vanilla gradient-descent
methods with computational algorithms like the momentum method, RMSProp, or Adam.
Another class of methods uses second-order derivatives to explicitly measure the cur-
vature; after all, a second derivative is the rate of change in gradient, which is a direct
measure of the unpredictability of using a constant gradient direction over a finite step.
The second-derivative matrix, also referred to as the Hessian, contains a wealth of infor-
mation about directions along which the greatest curvature occurs. Therefore, the Hessian
is used by many second-order techniques like the Newton method in order to adjust the
directions of movement by using a trade-off between the steepness of the descent and the
curvature along a direction.
Finally, we discuss the problem of non-differentiable objective functions. Consider the
L1 -loss function, which is non-differentiable at some points in the parameter space:
f (x1 , x2 ) = |x1 | + |x2 |
The point (x1 , x2 ) = (0, 0) is a non-differentiable point of the optimization. This type of set-
ting can be addressed easily by having special rules for the small number of non-differentiable
points in the space. However, in some cases, non-informative loss surfaces contain only flat
regions and vertical cliffs. For example, trying to directly optimize a ranking-based objective
function will cause non-differentiability in large regions of the space. Consider the following
objective function containing training points X 1 . . . X n , of which a subset S belong to a
positive class (e.g., fraud instances versus normal instances):
$$J(\overline{W}) = \sum_{i \in S} \text{Rank}(\overline{W} \cdot \overline{X}_i)$$
Here, the function “Rank” simply computes a value from 1 through n, based on sorting the
values of W · X i over the n training points and returning the rank of each X i . Minimizing
the function J(W ) tries to set W to ensure that positive examples are always ranked before
negative examples. This kind of objective function will contain only flat surfaces and vertical
cliffs with respect to W , because the ranks can suddenly change at specific values of the
parameter vector W . In most regions, the ranks will not change on perturbing W slightly,
and therefore J(W ) will have a zero gradient in most regions. This type of setting can cause
serious problems for gradient descent because the gradients are not informative at all. In
such cases, more complex methods like the proximal gradient method need to be used. This
chapter will discuss several such options.
This chapter is organized as follows. The next section will discuss the challenges as-
sociated with optimization of differentiable functions. Methods that modify the first-order
derivative of the loss function to account for curvature are discussed in Section 5.3. The
Newton method is introduced in Section 5.4. Applications of the Newton method to machine
learning are discussed in Section 5.5. The challenges associated with the Newton method are
discussed in Section 5.6. Computationally efficient approximations of the Newton method
are discussed in Section 5.7. The optimization of non-differentiable functions is discussed in
Section 5.8. A summary is given in Section 5.9.
[Figure 5.1: Univariate objective functions and their flat regions: (a) local optima with flat regions, annotated with a local minimum and the global minimum; (b) only a global optimum with a flat region]
Computing the derivative and setting it to zero yields the following condition:
the d-dimensional function created by the sum of these functions has $\prod_{i=1}^{d} k_i$ local/global minima. For example, a 10-dimensional function, which is a sum of 10 instances of the function represented in Equation 5.2.1 (over different variables), would have $2^{10} = 1024$ minima, obtained by setting each of the 10 dimensions to any one of the values from {1, 3.366}.
Clearly, if one does not know the number and location of the local minima, it is hard to be
confident about the optimality of the point to which gradient descent converges.
Another problem is the presence of flat regions in the objective function. For example,
the objective function in Figure 5.1(a) has a flat region between a local minimum and a
local maximum. This type of situation is quite common and is possible even in objective
functions where there are no local optima. Consider the following objective function:
$$F(x) = \begin{cases} -(x/5)^3 & \text{if } x \le 5 \\ x^2 - 13x + 39 & \text{if } x > 5 \end{cases}$$
The objective function is shown in Figure 5.1(b). This objective function has a flat region
in the range [−1, +1], where the absolute value of the gradient is less than 0.1. On the other
hand, the gradient increases rapidly for values of x > 5. Why are flat regions problematic?
The main issue is that the speed of descent depends on the magnitude of the gradient (if
the learning rate is fixed). In such cases, the optimization procedure will take a long time
to cross flat regions of the space. This will make the optimization process excruciatingly
slow. As we will see later, techniques like momentum methods use analogies from physics in
order to inherit the rate of descent from previous steps as a type of momentum. The basic
idea is that if you roll a marble down a hill, it gathers speed as it rolls down, and it is often
able to navigate local potholes and flat regions better because of its momentum. We will
discuss this principle in more detail in Section 5.3.1.
[Figure 5.2: The effect of the shape of the loss function on steepest-gradient descent: (a) loss function is the circular bowl $L = x^2 + y^2$; (b) loss function is the elliptical bowl $L = x^2 + 4y^2$]
of gradient descent because it tells us that some directions have more consistent gradients
that do not change rapidly. Consistent gradients are more desirable from the perspective of
making gradient-descent steps of larger sizes.
In the case of the circular bowl of Figure 5.2(a), the gradient points directly at the
optimum solution, and one can reach the optimum in a single step, as long as the correct
step-size is used. This is not quite the case in the loss function of Figure 5.2(b), in which
the gradients are often more significant in the y-direction as compared to the x-direction.
Furthermore, the gradient never points to the optimal solution, as a result of which many
course corrections are needed over the descent. A salient observation is that the steps along
the y-direction are large, but subsequent steps undo the effect of previous steps. On the
other hand, the progress along the x-direction is consistent but tiny. In other words, the
long-term progress along each direction is very limited; therefore, it is possible to get into
situations where very little progress is made even after training for a long time.
The above example represents a very simple quadratic, convex, and additively separable
function, which represents a straightforward scenario compared to any real-world setting
in machine learning. In fact, with very few exceptions, the path of steepest descent in most
objective functions is only an instantaneous direction of best movement, and is not the
correct direction of descent in the longer term. In other words, small steps with “course
corrections” are always needed; the only way to reach the optimum with steepest-descent
updates is by using an extremely large number of tiny updates and course corrections, which
is obviously very inefficient. At first glance, this might seem almost ominous, but it turns
out that there are numerous solutions of varying complexity to address these issues. The
simplest example is feature normalization.
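To make the contrast concrete, the following sketch (with an illustrative starting point and step-size of our choosing) runs a steepest-descent step on each of the two bowls of Figure 5.2; the step-size that lands exactly on the optimum of the circular bowl overshoots violently along the y-direction of the elliptical bowl:

```python
import numpy as np

def gradient_descent(grad, w0, alpha, n_steps):
    """Plain steepest descent: w <- w - alpha * grad(w)."""
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w = w - alpha * grad(w)
    return w

# Circular bowl L = x^2 + y^2: the gradient points straight at the optimum,
# and step-size alpha = 0.5 reaches (0, 0) in a single step.
w_circle = gradient_descent(lambda w: 2.0 * w, [30.0, 30.0], alpha=0.5, n_steps=1)

# Elliptical bowl L = x^2 + 4y^2: the same step-size overshoots along y,
# because the curvature (and hence the gradient) is larger in that direction.
grad_ellipse = lambda w: np.array([2.0 * w[0], 8.0 * w[1]])
w_ellipse = gradient_descent(grad_ellipse, [30.0, 30.0], alpha=0.5, n_steps=1)
```

A step-size small enough to keep the y-direction stable then makes only tiny progress along x, which is exactly the zigzagging trade-off described above.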
the expenditure of various nations, together with the happiness index. The goal is to predict
the happiness index y of the nation as a function of the guns per capita x1 and the ounces
per capita of butter x2 . An example of a toy data set of three points is shown in Table 5.1.
A linear regression model uses the coefficient w1 for guns and the coefficient w2 for butter
in order to predict the happiness index from guns and butter:
$$y = w_1 x_1 + w_2 x_2$$
Note that this objective function is far more sensitive to w2 as compared to w1 . This is
caused by the fact that the butter feature has a much larger variance than the gun feature,
which shows up in the coefficients of the objective function. As a result, the gradient will
often bounce along the w2 direction, while making tiny progress along the w1 direction.
However, if we standardize each column in Table 5.1 to zero mean and unit variance, the coefficients of $w_1^2$ and $w_2^2$ will become much more similar. As a result, the bouncing behavior of gradient descent is reduced. In this particular case, the interaction terms of the form $w_1 w_2$
will cause the ellipse to be oriented at an angle to the original axes. This causes additional
challenges in terms of bouncing of gradient descent along directions that are not parallel to
the original axes. Such interaction terms can be addressed by a procedure called whitening,
and it is an application of the method of principal component analysis (cf. Section 7.4.6 of
Chapter 7).
[Figure 5.4: A loss function $f(x, y)$ in the shape of a sloping valley; the direction of least curvature runs along the gently sloping valley floor]
The specific effect of curvature is particularly evident when one encounters loss functions
in the shape of sloping or winding valleys. An example of a sloping valley is shown in
Figure 5.4. A valley is a dangerous topography for a gradient-descent method, particularly
if the bottom of the valley has a steep and rapidly changing surface (which creates a narrow
valley). In narrow valleys, the gradient-descent method will bounce violently along the steep
sides of the valley without making much progress in the gently sloping direction, where the
greatest long-term gains are present. As we will see later in this chapter, many computational
methods magnify the components of the gradient along consistent directions of movement
(to discourage back-and-forth bouncing). In some cases, the steepest descent directions are
modified using such ad hoc methods, whereas in others, the curvature is explicitly used
with the help of second-order derivatives. The first of these methods will be the topic of
discussion in the next section.
In momentum-based descent, the vector V inherits a fraction β of the velocity from its pre-
vious step in addition to the current gradient, where β ∈ (0, 1) is the momentum parameter:
$$\overline{V} \Leftarrow \beta \overline{V} - \alpha \frac{\partial J}{\partial \overline{W}}; \qquad \overline{W} \Leftarrow \overline{W} + \overline{V}$$
Setting β = 0 specializes to straightforward gradient descent. Larger values of β ∈ (0, 1) help
the approach pick up a consistent velocity V in the correct direction. The parameter β is
also referred to as the momentum parameter or the friction parameter. The word “friction”
is derived from the fact that small values of β act as “brakes,” much like friction.
Momentum helps the gradient descent process in navigating flat regions and local optima,
such as the ones shown in Figure 5.1. A good analogy for momentum-based methods is to
visualize them in a similar way as a marble rolling down a bowl. As the marble picks up speed, it is able to navigate flat regions of the surface quickly and escape from local potholes in the bowl, because the gathered momentum carries it through.

[Figure 5.5: Effect of momentum in navigating complex loss surfaces. The annotation “GD” indicates pure gradient descent without momentum, which slows down in flat regions and gets trapped in local optima. Momentum helps the optimization process retain speed in flat regions of the loss surface and avoid local optima]
Figure 5.5, which shows a marble rolling down a complex loss surface (picking up speed as
it rolls down), illustrates this concept. The use of momentum will often cause the solution
to slightly overshoot in the direction where velocity is picked up, just as a marble will
overshoot when it is allowed to roll down a bowl. However, with the appropriate choice
of β, it will still perform better than a situation in which momentum is not used. The
momentum-based method will generally perform better because the marble gains speed as
it rolls down the bowl; the quicker arrival at the optimal solution more than compensates
for the overshooting of the target. Overshooting is desirable to the extent that it helps avoid
local optima. The parameter β controls the amount of friction that the marble encounters
while rolling down the loss surface. While increased values of β help in avoiding local optima,
it might also increase oscillation at the end. In this sense, the momentum-based method
has a neat interpretation in terms of the physics of a marble rolling down a complex loss
surface. Setting β > 1 can cause instability and divergence, because gradient descent can
pick up speed in an uncontrolled way.
In addition, momentum-based methods help in reducing the undesirable effects of cur-
vature in the loss surface of the objective function. Momentum-based techniques recognize
that zigzagging is a result of highly contradictory steps that cancel out one another and
reduce the effective size of the steps in the correct (long-term) direction. An example of this
scenario is illustrated in Figure 5.2(b). Simply attempting to increase the size of the step in
order to obtain greater movement in the correct direction might actually move the current
solution even further away from the optimum solution. In this point of view, it makes a lot
more sense to move in an “averaged” direction of the last few steps, so that the zigzagging is
smoothed out. This type of averaging is achieved by using the momentum from the previous
steps. Oscillating directions do not contribute consistent velocity to the update.
With momentum-based descent, the learning is accelerated, because one is generally
moving in a direction that often points closer to the optimal solution and the useless “side-
ways” oscillations are muted. The basic idea is to give greater preference to consistent
directions over multiple steps, which have greater importance in the descent. This allows
the use of larger steps in the correct direction without causing overflows or “explosions”
in the sideways direction. As a result, learning is accelerated.

[Figure 5.6: An example of the use of momentum: (a) the relative directions of the gradient and momentum components from the starting point; (b) updates without momentum; (c) updates with momentum]

An example of the use of momentum is illustrated in Figure 5.6. It is evident from Figure 5.6(a) that momentum
increases the relative component of the gradient in the correct direction. The corresponding
effects on the updates are illustrated in Figure 5.6(b) and (c). It is evident that momentum-
based updates can reach the optimal solution in fewer updates. One can also understand
this concept by visualizing the movement of a marble down the valley of Figure 5.4. As the
marble gains speed down the gently sloping valley, the effects of bouncing along the sides
of the valley will be muted over time.
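A minimal sketch of this update pair on the elliptical bowl of Figure 5.2(b) follows; the step-size, friction parameter, and iteration count are illustrative choices of our own:

```python
import numpy as np

def descend(grad, w0, alpha, beta, n_steps):
    """Momentum-based descent: V <- beta*V - alpha*grad(w); w <- w + V.
    Setting beta = 0 recovers plain gradient descent."""
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(n_steps):
        v = beta * v - alpha * grad(w)
        w = w + v
    return w

# Elliptical bowl L = x^2 + 4y^2: oscillations along y tend to cancel out in
# the velocity, while the consistent x-gradient accumulates speed.
grad = lambda w: np.array([2.0 * w[0], 8.0 * w[1]])
w_plain = descend(grad, [30.0, 30.0], alpha=0.05, beta=0.0, n_steps=100)
w_momentum = descend(grad, [30.0, 30.0], alpha=0.05, beta=0.8, n_steps=100)
```

Both runs converge here, but the velocity term is what smooths out the zigzagging along the steep y-direction while amplifying the consistent progress along x.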
5.3.2 AdaGrad
In the AdaGrad algorithm [38], one keeps track of the aggregated squared magnitude of
the partial derivative with respect to each parameter over the course of the algorithm. The
square-root of this value is proportional to the root-mean-squared slope for that parameter
(although the absolute value will increase with the number of epochs because of successive
aggregation).
Let Ai be the aggregate value for the ith parameter. Therefore, in each iteration, the
following update is performed with respect to the objective function J:
$$A_i \Leftarrow A_i + \left(\frac{\partial J}{\partial w_i}\right)^2 \quad \forall i \tag{5.1}$$
Scaling the derivative inversely with $\sqrt{A_i}$ yields the update $w_i \Leftarrow w_i - \frac{\alpha}{\sqrt{A_i}}\left(\frac{\partial J}{\partial w_i}\right)$ for learning rate $\alpha$. This is a kind of “signal-to-noise” normalization because $A_i$ only measures the historical magnitude of the gradient rather than its sign; it
because Ai only measures the historical magnitude of the gradient rather than its sign; it
encourages faster relative movements along gently sloping directions with consistent sign
of the gradient. If the gradient component along the ith direction keeps wildly fluctuating
between +100 and −100, this type of magnitude-centric normalization will penalize that
component far more than another gradient component that consistently takes on the value
in the vicinity of 0.1 (but with a consistent sign). For example, in Figure 5.6, the movements
along the oscillating direction will be de-emphasized, and the movement along the consistent
direction will be emphasized. However, absolute movements along all components will tend
to slow down over time, which is the main problem with the approach. The slowing down is
caused by the fact that Ai is the aggregate value of the entire history of partial derivatives.
This will lead to diminishing values of the scaled derivative. As a result, the progress of
AdaGrad might prematurely become too slow, and it will eventually (almost) stop making
progress. Another problem is that the aggregate scaling factors depend on ancient history,
which can eventually become stale. It turns out that the exponential averaging of RMSProp
can address both issues.
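The AdaGrad bookkeeping can be sketched as follows; the learning rate and test function are illustrative choices, and the update divides each gradient component by $\sqrt{A_i}$ as described above:

```python
import numpy as np

def adagrad(grad, w0, alpha=1.0, n_steps=500, eps=1e-8):
    """AdaGrad: accumulate squared partial derivatives in A_i (Equation 5.1)
    and divide each gradient component by sqrt(A_i)."""
    w = np.array(w0, dtype=float)
    A = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad(w)
        A += g ** 2                         # aggregate over the entire history
        w -= alpha * g / (np.sqrt(A) + eps)
    return w

# Elliptical bowl L = x^2 + 4y^2: both coordinates now make progress at a
# comparable pace, because each is normalized by its own gradient history.
w = adagrad(lambda w: np.array([2.0 * w[0], 8.0 * w[1]]), [5.0, 5.0])
```

Because `A` only ever grows, the effective step-sizes shrink monotonically, which is the premature-slowdown problem discussed above.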
5.3.3 RMSProp
The RMSProp algorithm [61] uses a similar motivation as AdaGrad for performing the “signal-to-noise” normalization with the magnitude $\sqrt{A_i}$ of the gradients. However,
instead of simply adding the squared gradients to estimate Ai , it uses exponential averaging.
Since one uses averaging to normalize rather than aggregate values, the progress is not slowed
prematurely by a constantly increasing scaling factor Ai . The basic idea is to use a decay
factor ρ ∈ (0, 1), and weight the squared partial derivatives occurring t updates ago by
ρt . Note that this can be easily achieved by multiplying the current squared aggregate
(i.e., running estimate) by ρ and then adding (1 − ρ) times the current (squared) partial
derivative. The running estimate is initialized to 0. This causes some (undesirable) bias in
early iterations, which disappears over the longer term. Therefore, if Ai is the exponentially
averaged value of the ith parameter wi , we have the following way of updating Ai :
$$A_i \Leftarrow \rho A_i + (1 - \rho)\left(\frac{\partial J}{\partial w_i}\right)^2 \quad \forall i \tag{5.2}$$
The square-root of this value for each parameter is used to normalize its gradient. Then,
the following update is used for (global) learning rate α:
$$w_i \Leftarrow w_i - \frac{\alpha}{\sqrt{A_i}}\left(\frac{\partial J}{\partial w_i}\right) \quad \forall i$$
If desired, one can use $\sqrt{A_i} + \epsilon$ in the denominator instead of $\sqrt{A_i}$ to avoid ill-conditioning. Here, $\epsilon$ is a small positive value such as $10^{-8}$. Another advantage of RMSProp over AdaGrad
is that the importance of ancient (i.e., stale) gradients decays exponentially with time. The
drawback of RMSProp is that the running estimate Ai of the second-order moment is biased
in early iterations because it is initialized to 0.
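Relative to an AdaGrad implementation, only the accumulator line changes; the decay factor, learning rate, and test function below are illustrative choices of our own:

```python
import numpy as np

def rmsprop(grad, w0, alpha=0.1, rho=0.9, n_steps=400, eps=1e-8):
    """RMSProp: exponentially average the squared partial derivatives
    (Equation 5.2) so that stale gradients decay and progress is not
    throttled by an ever-growing scaling factor."""
    w = np.array(w0, dtype=float)
    A = np.zeros_like(w)
    for _ in range(n_steps):
        g = grad(w)
        A = rho * A + (1 - rho) * g ** 2    # running estimate, not a raw sum
        w -= alpha * g / (np.sqrt(A) + eps)
    return w

w = rmsprop(lambda w: np.array([2.0 * w[0], 8.0 * w[1]]), [5.0, 5.0])
```

Since `A` is a running estimate rather than a lifetime sum, the effective step-size stays roughly constant instead of decaying to zero, at the cost of the early-iteration bias noted above.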
5.3.4 Adam
The Adam algorithm uses a similar “signal-to-noise” normalization as AdaGrad and RM-
SProp; however, it also incorporates momentum into the update. In addition, it directly
addresses the initialization bias inherent in the exponential smoothing of pure RMSProp.
216 CHAPTER 5. ADVANCED OPTIMIZATION SOLUTIONS
As in the case of RMSProp, let Ai be the exponentially averaged value of the ith pa-
rameter wi . This value is updated in the same way as RMSProp with the decay parameter
ρ ∈ (0, 1):
$$A_i \Leftarrow \rho A_i + (1 - \rho)\left(\frac{\partial J}{\partial w_i}\right)^2 \quad \forall i \tag{5.3}$$
At the same time, an exponentially smoothed value of the gradient is maintained for which
the ith component is denoted by Fi . This smoothing is performed with a different decay
parameter $\rho_f$:

$$F_i \Leftarrow \rho_f F_i + (1 - \rho_f)\left(\frac{\partial J}{\partial w_i}\right) \quad \forall i \tag{5.4}$$
This type of exponential smoothing of the gradient with $\rho_f$ is a variation of the momentum
method discussed in Section 5.3.1 (which is parameterized by a friction parameter β instead
of ρf ). Then, the following update is used at learning rate αt in the tth iteration:
$$w_i \Leftarrow w_i - \frac{\alpha_t}{\sqrt{A_i}} F_i \quad \forall i$$
There are two key differences from the RMSProp algorithm. First, the gradient is replaced
with its exponentially smoothed value in order to incorporate momentum. Second, the
learning rate αt now depends on the iteration index t, and is defined as follows:
$$\alpha_t = \alpha \underbrace{\left(\frac{\sqrt{1 - \rho^t}}{1 - \rho_f^t}\right)}_{\text{Adjust Bias}} \tag{5.5}$$
Technically, the adjustment to the learning rate is actually a bias correction factor that is
applied to account for the unrealistic initialization of the two exponential smoothing mech-
anisms, and it is particularly important in early iterations. Both Fi and Ai are initialized
to 0, which causes bias in early iterations. The two quantities are affected differently by the
bias, which accounts for the ratio in Equation 5.5. It is noteworthy that each of $\rho^t$ and $\rho_f^t$
converges to 0 for large $t$ because $\rho, \rho_f \in (0, 1)$. As a result, the initialization bias correction
factor of Equation 5.5 converges to 1, and $\alpha_t$ converges to $\alpha$. The default suggested values
of $\rho_f$ and $\rho$ are 0.9 and 0.999, respectively, according to the original Adam paper [72]. Refer
to [72] for details of other criteria (such as parameter sparsity) used for selecting $\rho$ and $\rho_f$.
Like other methods, Adam uses $\sqrt{A_i + \epsilon}$ (instead of $\sqrt{A_i}$) in the denominator of the update
for better conditioning. The Adam algorithm is extremely popular because it incorporates
most of the advantages of other algorithms, and often performs competitively with respect
to the best of the other methods [72].
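Putting Equations 5.3, 5.4, and 5.5 together, one Adam step can be sketched as follows. This is an illustrative Python rendition with our own function name, not the reference implementation of [72]:

```python
import numpy as np

def adam_update(w, grad, F, A, t, alpha=0.001, rho_f=0.9, rho=0.999, eps=1e-8):
    """One Adam step at iteration t (t starts at 1); F and A start at 0."""
    F = rho_f * F + (1 - rho_f) * grad          # smoothed gradient (Equation 5.4)
    A = rho * A + (1 - rho) * grad ** 2         # second moment (Equation 5.3)
    alpha_t = alpha * np.sqrt(1 - rho ** t) / (1 - rho_f ** t)  # Equation 5.5
    w = w - alpha_t * F / np.sqrt(A + eps)
    return w, F, A
```

The bias-correction factor in `alpha_t` compensates for the zero initialization of both `F` and `A`, and converges to 1 as `t` grows.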
individual loss improvements. In the special case of quadratic loss functions, the Newton
method requires a single step.
$$H_{ij} = \frac{\partial^2 J(W)}{\partial w_i \partial w_j}$$
Note that the partial derivatives use all pairwise parameters in the denominator. Therefore,
for a neural network with d parameters, we have a d × d Hessian matrix H, for which the
(i, j)th entry is Hij .
The Hessian can also be defined as the Jacobian of the gradient with respect to the
weight vector. As discussed in Chapter 4, a Jacobian is a vector-to-vector derivative in
matrix calculus, and therefore the result is a matrix. The derivative of an m-dimensional
column vector with respect to a d-dimensional column vector is a $d \times m$ matrix in the
denominator layout of matrix calculus, whereas it is an m × d matrix in the numerator
layout (see page 170). The Jacobian is an m × d matrix, and therefore conforms to the
numerator layout. In this book, we are consistently using the denominator layout, and
therefore, the Jacobian of the m-dimensional vector h with respect to the d-dimensional
vector w is defined as the transpose of the vector-to-vector derivative:
$$\text{Jacobian}(h, w) = \left[\frac{\partial h}{\partial w}\right]^T = \underbrace{\left[\frac{\partial h_i}{\partial w_j}\right]}_{m \times d \text{ matrix}} \qquad (5.6)$$
However, the transposition does not really matter in the case of the Hessian, which is
symmetric. Therefore, the Hessian can also be defined as follows:
$$H = \left[\frac{\partial \nabla J(W)}{\partial W}\right]^T = \frac{\partial \nabla J(W)}{\partial W} \qquad (5.7)$$
The Hessian can be viewed as the natural generalization of the second derivative to mul-
tivariate data. Like the univariate Taylor series expansion of the second derivative, it can
be used for the multivariate Taylor-series expansion by replacing the scalar second deriva-
tive with the Hessian. Recall that the (second-order) Taylor-series expansion of a univariate
function f (w) about the scalar w0 may be defined as follows (cf. Section 1.5.1 of Chapter 1):
$$f(w) \approx f(w_0) + (w - w_0) f'(w_0) + \frac{(w - w_0)^2}{2} f''(w_0) \qquad (5.8)$$
It is noteworthy that the Taylor approximation is accurate when |w − w0 | is small, and
it starts losing its accuracy for non-quadratic functions when |w − w0 | increases (as the
contribution of the higher-order terms increases as well). One can also write a quadratic
approximation of the multivariate loss function J(W ) in the vicinity of parameter vector
W 0 by using the following Taylor expansion:
$$J(W) \approx J(W_0) + [W - W_0]^T [\nabla J(W_0)] + \frac{1}{2} [W - W_0]^T H [W - W_0] \qquad (5.9)$$
As in the case of the univariate expansion, the accuracy of this approximation falls off
with increasing value of $\|W - W_0\|$, which is the Euclidean distance between $W$ and $W_0$.
Note that the Hessian $H$ is computed at $W_0$. Here, the parameter vectors $W$ and $W_0$
are d-dimensional column vectors. This is a quadratic approximation, and one can simply
set the gradient to 0, which results in the following optimality condition for the quadratic
approximation:

$$\nabla J(W_0) + H[W - W_0] = 0$$

The optimality condition above only finds a critical point, and the convexity of the function
is important to ensure that this critical point is a minimum. One can rearrange the above
optimality condition to obtain the following Newton update:
$$W^* \Leftarrow W_0 - H^{-1}[\nabla J(W_0)] \qquad (5.10)$$
One interesting characteristic of this update is that it is directly obtained from an opti-
mality condition, and therefore there is no learning rate. In other words, this update is
approximating the loss function with a quadratic bowl and moving exactly to the bottom
of the bowl in a single step; the learning rate is already incorporated implicitly. Recall from
Figure 5.2 that first-order methods bounce along directions of high curvature. Of course,
the bottom of the quadratic approximation is not the bottom of the true loss function, and
therefore multiple Newton updates will be needed. Thus, the basic Newton method for
non-quadratic functions initializes $W$ to an initial point $W_0$, and repeatedly performs the
following steps until convergence:
1. Compute the gradient ∇J(W ) and the Hessian H at the current parameter vector W .
2. Perform the Newton update:

$$W \Leftarrow W - H^{-1}[\nabla J(W)]$$
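These two steps can be sketched directly in Python. The helper below is illustrative (the names are ours), and it solves the linear system $Hs = \nabla J(W)$ rather than inverting $H$ explicitly:

```python
import numpy as np

def newton_minimize(grad_fn, hess_fn, w0, num_iters=20):
    """Repeat the basic Newton update W <= W - H^{-1} [grad J(W)]."""
    w = np.asarray(w0, dtype=float)
    for _ in range(num_iters):
        g = grad_fn(w)                   # step 1: gradient at current W
        H = hess_fn(w)                   # step 1: Hessian at current W
        w = w - np.linalg.solve(H, g)    # step 2: Newton update
    return w
```

For a quadratic $J(W) = \frac{1}{2}W^T A W - b^T W$ with positive definite $A$, a single iteration reaches the optimum $A^{-1}b$, consistent with the discussion above.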
Figure 5.7: The effect of pre-multiplication of steepest-descent direction with the inverse
Hessian
the learning steps towards low-curvature directions. This situation also arises in valleys like
the ones shown in Figure 5.4. Multiplication with the inverse Hessian will tend to favor
the gently sloping (but low curvature) direction, which is a better direction of long-term
movement. Furthermore, if the Hessian is negative semi-definite at a particular point (rather
than positive semi-definite), the Newton method might move in the wrong direction towards
a maximum (rather than a minimum). Unlike gradient descent, the Newton method only
finds critical points rather than minima.
Figure 5.8: A large Newton step can worsen the objective function for non-quadratic
functions, because the quadratic approximation increasingly deviates from the true function.
A line search can ameliorate the worsening
In other words, a single step suffices to reach the optimum point of this quadratic function.
This is because the second-order Taylor “approximation” of a quadratic function is exact,
and the Newton method solves this approximation in each iteration. Of course, real-world
functions are not quadratic, and therefore multiple steps are typically needed.
It is assumed that $w_1$ and $w_2$ are expressed¹ in radians. Note that the optimum of this
objective function is still $[w_1, w_2] = [0, 0]$, since the value of $J(0, 0)$ is $-1$ at this point,
where each additive term of the above expression takes on its minimum value. We will
again start at [w1 , w2 ] = [1, 1], and show that one iteration no longer suffices in this case.
In this case, we can show that the gradient and Hessian are as follows:

$$\nabla J(1, 1) = \begin{bmatrix} 2 + \sin(2) \\ 8 + \sin(2) \end{bmatrix} = \begin{bmatrix} 2.91 \\ 8.91 \end{bmatrix}$$

$$H = \begin{bmatrix} 2 + \cos(2) & \cos(2) \\ \cos(2) & 8 + \cos(2) \end{bmatrix} = \begin{bmatrix} 1.584 & -0.416 \\ -0.416 & 7.584 \end{bmatrix}$$

The corresponding Newton update is as follows:

$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \Leftarrow \begin{bmatrix} 1 \\ 1 \end{bmatrix} - H^{-1} \nabla J(1, 1) = \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 2.1745 \\ 1.296 \end{bmatrix} = \begin{bmatrix} -1.1745 \\ -0.296 \end{bmatrix}$$
Note that we do reach closer to an optimal solution, although we certainly do not reach
the optimum point. This is because the objective function is not quadratic in this case,
and one is only reaching the bottom of the approximate quadratic bowl of the objective
function. However, Newton’s method does find a better point in terms of the true objective
function value. The approximate nature of the Hessian is why one must use either exact
or approximate line search to control the step size. Note that if we used a step-size of 0.6
instead of the default value of 1, one would obtain the following solution:
$$\begin{bmatrix} w_1 \\ w_2 \end{bmatrix} \Leftarrow \begin{bmatrix} 1 \\ 1 \end{bmatrix} - 0.6 \begin{bmatrix} 2.1745 \\ 1.296 \end{bmatrix} = \begin{bmatrix} -0.30 \\ 0.22 \end{bmatrix}$$
Although this is only a very rough approximation to the optimal step size, it still reaches
much closer to the true optimal value of [w1 , w2 ] = [0, 0]. It is also relatively easy to show
that this set of parameters yields a much better objective function value. This step would
need to be repeated in order to reach closer and closer to an optimal solution.
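The numbers in this example can be checked numerically. The gradient and Hessian displayed above are consistent with the objective $J(w_1, w_2) = w_1^2 + 4w_2^2 - \cos(w_1 + w_2)$; we assume this form here purely to reproduce the computation, and the sketch below is illustrative:

```python
import numpy as np

# Assumed objective, consistent with the displayed gradient and Hessian:
# J(w1, w2) = w1^2 + 4*w2^2 - cos(w1 + w2), with minimum J(0, 0) = -1.
def grad(w):
    w1, w2 = w
    return np.array([2 * w1 + np.sin(w1 + w2), 8 * w2 + np.sin(w1 + w2)])

def hess(w):
    w1, w2 = w
    c = np.cos(w1 + w2)
    return np.array([[2 + c, c], [c, 8 + c]])

w0 = np.array([1.0, 1.0])
step = np.linalg.solve(hess(w0), grad(w0))   # H^{-1} grad at [1, 1]
w_damped = w0 - 0.6 * step                   # step size 0.6, as in the text
```

The resulting `step` matches the vector $[2.1745, 1.296]$ quoted above (up to rounding), and `w_damped` lands near $[-0.30, 0.22]$.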
¹This ensures simplicity, as all calculus operations assume that angles are expressed in radians.
$$\nabla J(W) = D^T D W - D^T y \qquad (5.12)$$
The Hessian is obtained by computing the Jacobian of this gradient. The second term
of the gradient is a constant and therefore further differentiating it will yield 0; we need
only differentiate the first term. On computing the vector-to-vector derivative of the first
term of the gradient with respect to $W$, we obtain the fact that the Hessian is $D^T D$. This
observation can be verified directly using the matrix calculus identity (i) of Table 4.2(b) in
Chapter 4. We summarize this observation as follows:
Observation 5.5.1 (Hessian of Squared Loss) Let $J(W) = \frac{1}{2}\|DW - y\|^2$ be the loss
function of linear regression for an $n \times d$ data matrix $D$, a $d$-dimensional column vector $W$
of coefficients, and an $n$-dimensional column vector $y$ of targets. Then, the Hessian of the loss
function is given by $D^T D$.
It is also helpful to view the Hessian as the sum of point-specific Hessians, since the Hessian
of any linearly additive function is the sum of the Hessians of the individual terms:
Observation 5.5.2 (Point-Specific Hessian of Squared Loss) Let $J_i = \frac{1}{2}(W \cdot \overline{X}_i^T - y_i)^2$
be the loss function of linear regression for a single training pair $(\overline{X}_i, y_i)$. Then, the
point-specific Hessian of the squared loss $J_i$ is given by the outer-product $\overline{X}_i^T \overline{X}_i$.

Note that $D^T D$ is simply the sum over all $\overline{X}_i^T \overline{X}_i$, since any matrix multiplication can be
decomposed into the sum of outer-products (Lemma 1.2.1 of Chapter 1):

$$D^T D = \sum_{i=1}^n \overline{X}_i^T \overline{X}_i$$
This is consistent with the fact that the Hessian of the full data-specific loss function is the
sum of the point-specific Hessians.
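This decomposition is easy to verify numerically; the following is a minimal sketch with randomly generated data:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((5, 3))   # n = 5 rows (points), d = 3 columns (features)

# D^T D equals the sum of the point-wise outer products X_i^T X_i.
outer_sum = sum(np.outer(D[i], D[i]) for i in range(D.shape[0]))
```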
One can now combine the Hessian and gradient to obtain the Newton update. A neat
result is that the Newton update for least-squares regression and classification simplifies to
the closed-form solution of linear regression discussed in Chapter 4. Given the current
vector $W$, the Newton update is as follows (based on Equation 5.10):

$$W \Leftarrow W - (D^T D)^{-1}[D^T D W - D^T y] = (D^T D)^{-1} D^T y$$

Note that the right-hand side is free of $W$, and therefore we need only a single "update" step
in closed form. This solution is identical to Equation 4.39 of Chapter 4! This equivalence
5.5. NEWTON METHODS IN MACHINE LEARNING 223
is not surprising. The closed-form solution of Chapter 4 is obtained by setting the gradient
of the loss function to 0. The Newton method also sets the gradient of the loss function to
0 after representing it using a second-order Taylor expansion (which is exact for quadratic
functions).
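A quick numerical check (illustrative, with random data) confirms that a single Newton "update" from an arbitrary starting point lands exactly on the closed-form least-squares solution:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((20, 4))    # n = 20 points, d = 4 features
y = rng.standard_normal(20)

W = rng.standard_normal(4)                       # arbitrary starting point
grad = D.T @ D @ W - D.T @ y                     # Equation 5.12
W_new = W - np.linalg.solve(D.T @ D, grad)       # one Newton step (Hessian D^T D)
W_closed = np.linalg.solve(D.T @ D, D.T @ y)     # closed-form solution of Chapter 4
```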
Problem 5.5.1 Derive the Newton update for least-squares regression, when Tikhonov
regularization with parameter $\lambda > 0$ is used. Show that the final solution is $W^* = (D^T D + \lambda I)^{-1} D^T y$,
which is the same regularized solution derived in Chapter 4.
$$J(W) = \frac{1}{2} \sum_{i=1}^n \max\left\{0, 1 - y_i [W \cdot \overline{X}_i^T]\right\}^2 \qquad [L_2\text{-loss SVM}]$$
We have omitted the regularization term for simplicity. This loss can be decomposed as
$J(W) = \sum_i J_i$, where $J_i$ is the point-specific loss. The point-specific loss for the ith point
can be expressed in a form corresponding to identity (v) of Table 4.2(a) in Chapter 4:
$$J_i = f_i(W \cdot \overline{X}_i^T) = \frac{1}{2} \max\left\{0, 1 - y_i [W \cdot \overline{X}_i^T]\right\}^2$$
Note the use of the function fi (·) in the above expression, which is defined for L2 -loss SVMs
as follows:
$$f_i(z) = \frac{1}{2} \max\{0, 1 - y_i z\}^2$$
This function will eventually need to be differentiated during gradient descent:
$$\frac{\partial f_i(z)}{\partial z} = f_i'(z) = -y_i \max\{0, 1 - y_i z\}$$
Therefore, we have $J_i = f_i(z_i)$, where $z_i = W \cdot \overline{X}_i^T$. The derivative of $J_i = f_i(z_i)$ with
respect to $W$ is computed using the chain rule:

$$\frac{\partial J_i}{\partial W} = \frac{\partial f_i(z_i)}{\partial z_i} \frac{\partial z_i}{\partial W} = -y_i \max\{0, 1 - y_i (W \cdot \overline{X}_i^T)\} \overline{X}_i^T$$

Note that this derivative is in the same form as identity (v) of Table 4.2(a). In order to
compare the gradients of least-squares classification and the L2 -SVM, we restate them next
to each other:
$$\frac{\partial J_i}{\partial W} = -y_i (1 - y_i (W \cdot \overline{X}_i^T)) \overline{X}_i^T \qquad [\text{Least-Squares Classification}]$$

$$\frac{\partial J_i}{\partial W} = -y_i \max\{0, 1 - y_i (W \cdot \overline{X}_i^T)\} \overline{X}_i^T \qquad [L_2\text{-SVM}]$$
The least-squares classification and the $L_2$-SVM have a similar gradient, except that the
contributions of instances that are correctly classified in a confident way (i.e., instances
satisfying $y_i (W \cdot \overline{X}_i^T) \geq 1$) are not included in the SVM. One can use $y_i^2 = 1$ to rewrite the
gradient of the $L_2$-SVM in terms of the indicator function as follows:
$$\frac{\partial J_i}{\partial W} = \underbrace{(W \cdot \overline{X}_i^T - y_i)\, I([1 - y_i (W \cdot \overline{X}_i^T)] > 0)}_{\text{scalar}} \underbrace{\overline{X}_i^T}_{\text{vector}} \qquad [L_2\text{-SVM}]$$
The binary indicator function I(·) takes on the value of 1 when the condition inside it
is satisfied. Therefore, the overall gradient of J(W ) with respect to W can be written as
follows:
$$\nabla J(W) = \sum_{i=1}^n \frac{\partial J_i}{\partial W} = \sum_{i=1}^n \underbrace{(W \cdot \overline{X}_i^T - y_i)\, I([1 - y_i (W \cdot \overline{X}_i^T)] > 0)}_{\text{scalar}} \underbrace{\overline{X}_i^T}_{\text{vector}} = D^T \Delta_w (DW - y)$$
Here, $\Delta_w$ is an $n \times n$ diagonal matrix in which the $(i, i)$th entry contains the indicator
function $I([1 - y_i (W \cdot \overline{X}_i^T)] > 0)$ for the ith training instance.
Next, we focus on the computation of the Hessian. We would first like to compute the
Jacobian of the point-specific gradient $\frac{\partial J_i}{\partial W}$ in order to compute the point-specific Hessian,
and then add up the point-specific Hessians. An important point is that the gradient is the
product of a scalar $s = -y_i \max\{0, 1 - y_i (W \cdot \overline{X}_i^T)\}$ (dependent on $W$) and the vector $\overline{X}_i^T$
(independent of $W$). This fact simplifies the computation of the point-specific Hessian $H_i$
(i.e., transposed vector derivative of the gradient), using the product-of-variables identity
in Table 4.2(b):
$$H_i = \overline{X}_i^T \left[\frac{\partial s}{\partial W}\right]^T = \overline{X}_i^T y_i^2\, I([1 - y_i (W \cdot \overline{X}_i^T)] > 0)\, \overline{X}_i = I([1 - y_i (W \cdot \overline{X}_i^T)] > 0)\, [\overline{X}_i^T \overline{X}_i] \qquad [\text{Setting } y_i^2 = 1]$$

Summing up the point-specific Hessians, the full Hessian can be expressed compactly in terms of the data matrix:

$$H = D^T \Delta_w D$$
Here, Δw is the same n × n binary diagonal matrix Δw that is used in the expression for
the gradient. The value of Δw will change over time during learning, as different training
instances move in and out of correct classification and therefore contribute in varying ways
to Δw . The key point is that rows drop in and out in terms of their contributions to the
gradient and the Hessian, as W changes. This is the reason that we have subscripted Δ
with w to indicate that it depends on the parameter vector.
Therefore, at any given value of the parameter vector, the Newton update of the L2 -loss
SVM is as follows:
$$W \Leftarrow W - H^{-1}[\nabla J(W)] = W - (D^T \Delta_w D)^{-1}[D^T \Delta_w (DW - y)] = \underbrace{W - W}_{0} + (D^T \Delta_w D)^{-1} D^T \Delta_w y = (D^T \Delta_w D)^{-1} D^T \Delta_w y$$
This form is almost identical to least-squares classification, except that we are dropping the
instances that are correctly classified in a strong way. At first glance, it might seem that the
L2 -SVM also requires a single iteration like least-squares regression, because the vector W
has disappeared on the right-hand side. However, this does not mean that the right-hand
side is independent of W . The matrix Δw does depend on the weight vector, and will change
once W is updated. Therefore, one must recompute Δw in each iteration and repeat the
above step to convergence.
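The resulting iterative scheme can be sketched as follows. This is an illustrative implementation (our own names, with a small ridge term to guard against the singular-Hessian issue raised in Problem 5.5.3), not production code:

```python
import numpy as np

def l2svm_newton(D, y, num_iters=30, ridge=1e-8):
    """Repeat W <= (D^T Delta_w D)^{-1} D^T Delta_w y, recomputing Delta_w.

    Delta_w depends on W, so the active rows (margin-violating instances)
    must be recomputed in every iteration.
    """
    n, d = D.shape
    W = np.zeros(d)
    for _ in range(num_iters):
        active = (1 - y * (D @ W)) > 0       # diagonal of Delta_w as a boolean mask
        if not active.any():                 # degenerate case: Hessian is singular
            break
        Da, ya = D[active], y[active]
        W = np.linalg.solve(Da.T @ Da + ridge * np.eye(d), Da.T @ ya)
    return W
```

Each iteration solves a least-squares problem restricted to the rows that currently violate the margin; rows drop in and out of this set as $W$ changes.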
The second point is that line search becomes important in each update of the L2 -SVM,
as we are no longer dealing with a quadratic function. Therefore, we can add line search to
compute the learning rate αt in the tth iteration. This results in the following update:
$$W \Leftarrow W - \alpha_t (D^T \Delta_w D)^{-1}[D^T \Delta_w D W - D^T \Delta_w y] = W(1 - \alpha_t) + \alpha_t (D^T \Delta_w D)^{-1} D^T \Delta_w y$$
Note that it is possible for line search to obtain a value of αt > 1, and therefore the coefficient
(1 − αt ) of the first term can be negative. One can also derive a form of the update for the
regularized SVM. We leave this problem as a practice exercise.
Problem 5.5.2 Derive the Newton update without line-search for the $L_2$-SVM, when
Tikhonov regularization with parameter $\lambda > 0$ is used. Show that the iterative update
of the Newton method is $W \Leftarrow (D^T \Delta_w D + \lambda I)^{-1} D^T \Delta_w y$. All notations are the same as
those used for the $L_2$-SVM in this section.
It is noteworthy that Newton's update uses the quadratic Taylor expansion of the non-quadratic
objective function of the $L_2$-SVM; the second-order Taylor expansion is, therefore,
only an approximation. On the other hand, least-squares regression already has a quadratic
objective function, and its second-order Taylor approximation is exact. This point of view is
critical in understanding why certain objective functions like least-squares regression require
a single Newton update, whereas others like the SVM do not.
Problem 5.5.3 Discuss why the Hessian is more likely to become singular towards the end
of learning in the Newton method for the L2 -SVM. How would you address the problem
caused by the non-invertibility of the Hessian? Also discuss the importance of line search in
these cases.
training pairs, and therefore stacking up all the d-dimensional rows results in an n × d
matrix D. The resulting loss function (cf. Section 4.8.3) is as follows:
$$J(W) = \sum_{i=1}^n \log(1 + \exp(-y_i [W \cdot \overline{X}_i^T]))$$
We start by defining a function for logistic loss in order to enable the (eventual) use of the
chain rule:
$$f_i(z) = \log(1 + \exp(-y_i z)) \qquad (5.14)$$
When $z_i$ is set to $W \cdot \overline{X}_i^T$, the function $f_i(z_i)$ contains the loss for the ith training point.
The derivative of $f_i(z_i)$ is as follows:

$$\frac{\partial f_i(z_i)}{\partial z_i} = -y_i \frac{\exp(-y_i z_i)}{1 + \exp(-y_i z_i)} = -y_i \underbrace{\frac{1}{1 + \exp(y_i z_i)}}_{p_i}$$
The quantity $p_i = 1/(1 + \exp(y_i z_i))$ in the above expression is interpreted as the
probability that the model makes² a mistake, when $z_i = W \cdot \overline{X}_i^T$. Therefore, one can express
the derivative of $f_i(z_i)$ as follows:
the derivative of fi (zi ) as follows:
$$\frac{\partial f_i(z_i)}{\partial z_i} = -y_i p_i$$
∂zi
With this machinery and notations, one can write the objective function of logistic regression
in terms of the individual losses:
$$J(W) = \sum_{i=1}^n f_i(W \cdot \overline{X}_i^T) = \sum_{i=1}^n f_i(z_i)$$
Then, one can compute the gradient of the loss function using the chain rule as follows:
$$\nabla J(W) = \sum_{i=1}^n \underbrace{\frac{\partial f_i(z_i)}{\partial z_i}}_{-y_i p_i} \underbrace{\frac{\partial z_i}{\partial W}}_{\overline{X}_i^T} = -\sum_{i=1}^n y_i p_i \overline{X}_i^T \qquad (5.15)$$

The derivative of $z_i = W \cdot \overline{X}_i^T$ with respect to $W$ is based on identity (v) of Table 4.2(a).
To represent the gradient compactly using matrices, one can introduce an $n \times n$ diagonal
matrix $\Delta_w^p$, in which the ith diagonal entry contains the probability $p_i$:

$$\nabla J(W) = -D^T \Delta_w^p y \qquad (5.16)$$

One can view $\Delta_w^p$ as a soft version of the binary matrix $\Delta_w$ used for the $L_2$-SVM. Therefore,
we have added the superscript $p$ to the matrix $\Delta_w^p$ in order to indicate that it is a
probabilistic matrix.
The Hessian is given by the Jacobian of the gradient:

$$H = \frac{\partial \nabla J(W)}{\partial W} = -\sum_{i=1}^n \frac{\partial [y_i p_i \overline{X}_i^T]}{\partial W} = -\sum_{i=1}^n y_i \frac{\partial [p_i \overline{X}_i^T]}{\partial W} \qquad (5.17)$$

Using $\frac{\partial p_i}{\partial z_i} = -y_i p_i (1 - p_i)$ together with the chain rule and $y_i^2 = 1$, this simplifies to the following:

$$H = \sum_{i=1}^n p_i (1 - p_i)\, \overline{X}_i^T \overline{X}_i$$
²This conclusion follows from the modeling assumption in logistic regression that the probability of a
correct prediction is $\hat{p}_i = 1/(1 + \exp(-y_i z_i))$. It can be easily shown that $p_i + \hat{p}_i = 1$.
Now observe that this form is the weighted sum of matrices, where each matrix is the outer-
product between a vector and itself. This form is also used in the spectral decomposition of
matrices (cf. Equation 3.43 of Chapter 3), in which the weighting is handled by a diagonal
matrix. Consequently, we can convert the Hessian to a form using the data matrix D as
follows:
$$H = D^T \Lambda_w^u D \qquad (5.20)$$

Here, $\Lambda_w^u$ is a diagonal matrix of uncertainties in which the ith diagonal entry is simply
$p_i(1 - p_i)$, where $p_i$ is the probability of making a mistake on the ith training instance with
weight vector $W$. When a point is classified with probability close to 0 or 1, the value of
$p_i(1 - p_i)$ will be close to 0. On the other hand, if the model is unsure about the class label of
$\overline{X}_i$, the value of $p_i(1 - p_i)$ will be high. Note that $\Lambda_w^u$ depends on the value of the parameter vector,
and we have added the notations $w, u$ to it in order to emphasize that it is an uncertainty
matrix that depends on the parameter vector. It is helpful to note that the Hessian of
logistic regression is similar in form to the Hessian $D^T D$ in the "parent problem" of linear
regression and the Hessian $D^T \Delta_w D$ in the $L_2$-SVM. The $L_2$-SVM explicitly drops rows
that are correctly classified in a confident way, whereas logistic regression gives each row a
soft weight depending on the level of uncertainty (rather than correctness) in classification.
One can now derive an expression for the Newton update for logistic regression by
plugging in the expressions for the Hessian and the gradient. At any given value of the
parameter vector $W$, the update is as follows:

$$W \Leftarrow W + (D^T \Lambda_w^u D)^{-1} D^T \Delta_w^p y$$

This iterative update needs to be executed to convergence. Note that $\Delta_w^p$ simply weights each
class label from {−1, +1} by the probability of making a mistake for that training instance.
Therefore, instances with larger mistake probabilities are emphasized in the update. This is
also an important difference from the L2 -SVM where only incorrect or marginally classified
instances are used, and other “confidently correct” instances are discarded. Furthermore,
the update of logistic regression uses the "uncertainty weight" in the matrix $\Lambda_w^u$. Finally,
it is common to use line search in conjunction with learning rate $\alpha$ in order to modify the
aforementioned update to the following:

$$W \Leftarrow W + \alpha (D^T \Lambda_w^u D)^{-1} D^T \Delta_w^p y$$
Problem 5.5.4 Derive the Newton update for logistic regression, when Tikhonov regularization
with parameter $\lambda$ is used. Show that the update is modified to the following:

$$W \Leftarrow W + \alpha (D^T \Lambda_w^u D + \lambda I)^{-1} [D^T \Delta_w^p y - \lambda W]$$

The notations here are the same as those in the discussion of this section.
It is evident that all the updates are very similar. One can explain these differences in
terms of the similarities and differences of the loss functions. For example, when the L2 -
SVM is compared to least-squares classification, it is primarily different in terms of assuming
zero loss for points that are classified correctly in a sufficiently “confident” way (i.e., meet
the margin requirement). Similarly, when we compare the Hessian and the gradient used in
the case of the L2 -SVM to that used in least-squares classification, a binary diagonal matrix
Δw is used to remove the effect of these correctly classified points (whereas least-squares
classification includes these points as well). The impact of changing the loss function is
more complex in the case of logistic regression; points that are correctly classified with high
probability are de-emphasized in the gradient, and points that the model is certain about
(whether correct or incorrect) are de-emphasized in the Hessian. Furthermore, unlike the
L2 -SVM, logistic regression uses soft weighting rather than hard weighting. All these con-
nections are naturally related to the connections among their loss functions (cf. Figure 4.9
of Chapter 4). The logistic regression update is considered a soft and iterative version of
the closed-form solution to least-squares regression — as a result, the Newton method for
logistic regression is sometimes also referred to as the iteratively re-weighted least-squares
algorithm.
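The iteratively re-weighted least-squares view translates directly into code. The sketch below is illustrative (the function name is ours; a small Tikhonov term keeps the Hessian invertible, as in Problem 5.5.4):

```python
import numpy as np

def logistic_newton(D, y, num_iters=20, lam=1e-4):
    """Newton/IRLS updates W <= W - H^{-1} grad for logistic regression.

    p_i is the mistake probability; the Hessian weights each row by the
    uncertainty p_i * (1 - p_i), and lam is a small Tikhonov regularizer.
    """
    n, d = D.shape
    W = np.zeros(d)
    for _ in range(num_iters):
        z = D @ W
        p = 1.0 / (1.0 + np.exp(y * z))          # mistake probabilities p_i
        grad = -D.T @ (p * y) + lam * W          # regularized gradient (cf. Equation 5.15)
        H = D.T @ ((p * (1 - p))[:, None] * D) + lam * np.eye(d)   # uncertainty weights
        W = W - np.linalg.solve(H, grad)
    return W
```

Each iteration is a weighted least-squares solve, which is exactly why the method is called iteratively re-weighted least squares.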
One can also understand all these updates in the context of a unified framework, where
the regularized loss function for many machine learning models can be expressed as follows:
$$J = \sum_{i=1}^n f_i(W \cdot \overline{X}_i^T) + \frac{\lambda}{2} \|W\|^2$$
5.6. NEWTON METHOD: CHALLENGES AND SOLUTIONS 229
Note that each $f_i(\cdot)$ also uses the observed value $y_i$ to compute the loss, and can also be
written as $L(y_i, W \cdot \overline{X}_i^T)$. All the updates can be written in a single unified form as discussed
in the result below:
in the result below:
Lemma 5.5.1 (Unified Newton Update for Machine Learning) Let the objective
function for a machine learning problem with d-dimensional parameter vector W , and n × d
data matrix D containing rows (feature vectors) X 1 . . . X n be as follows:
$$J = \sum_{i=1}^n L(y_i, W \cdot \overline{X}_i^T) + \frac{\lambda}{2} \|W\|^2$$
Here, $y = [y_1 \ldots y_n]^T$ is the observed dependent variable vector for matrix $D$.
Then, the regularized Newton update can be written in the following form:

$$W \Leftarrow W - (D^T \Delta_2 D + \lambda I)^{-1} [D^T \Delta_1 \mathbf{1}_n + \lambda W]$$

Here, $\mathbf{1}_n$ is an $n$-dimensional column vector of 1s, $\Delta_2$ is an $n \times n$ diagonal matrix whose diagonal entries contain the second derivative
$L''(y_i, z_i)$ [with respect to $z_i = W \cdot \overline{X}_i^T$] evaluated at each $(\overline{X}_i, y_i)$, and $\Delta_1$ is an $n \times n$
diagonal matrix whose diagonal entries contain the corresponding first derivative $L'(y_i, z_i)$
evaluated at each $(\overline{X}_i, y_i)$.
We leave the proof of this lemma as an exercise for the reader (see Exercise 14).
Figure 5.9: Examples of saddle points: (a) 1-dimensional saddle point, and (b) 2-dimensional saddle point
(i.e., a critical point) of a gradient-descent method because its gradient is zero, but it is not
a minimum (or maximum). A saddle point is an inflection point, which appears to be either
a minimum or a maximum depending on which direction we approach it from. Therefore,
the quadratic approximation of the Newton method will result in vastly different shapes
depending on the precise location of current parameter vector with respect to a nearby
saddle point. A 1-dimensional function with a saddle point is the following:
f (x) = x3
This function is shown in Figure 5.9(a), and it has an inflection point at x = 0. Note that
a quadratic approximation at x > 0 will look like an upright bowl, whereas a quadratic
approximation at x < 0 will look like an inverted bowl. The second-order Taylor approxi-
mations at x = 1 and x = −1 are as follows:
$$F(x) = 1 + 3(x - 1) + \frac{6(x - 1)^2}{2} = 3x^2 - 3x + 1 \qquad [\text{At } x = 1]$$

$$G(x) = -1 + 3(x + 1) - \frac{6(x + 1)^2}{2} = -3x^2 - 3x - 1 \qquad [\text{At } x = -1]$$
It is not difficult to verify that one of these functions is an upright bowl (convex function)
with a minimum and no maximum, whereas another is an inverted bowl (concave function)
with a maximum and no minimum. Therefore, the Newton optimization will behave in an
unpredictable way, depending on the current value of the parameter vector. Furthermore,
even if one reaches x = 0 in the optimization process, both the second derivative and the
first derivative will be zero. Therefore, a Newton update will take the 0/0 form and become
indeterminate. Such a point is a degenerate point from the perspective of numerical optimization.
In general, a degenerate critical point is one where the Hessian is singular (along with the
first-order condition that the gradient is zero). The problem is complicated by the fact that
a degenerate critical point can be either a true optimum or a saddle point. For example,
the function h(x) = x4 has a degenerate critical point at x = 0 in which both first-order
and second-order derivatives are 0. However, the point x = 0 is a true minimum.
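These observations about $f(x) = x^3$ can be verified with a short computation (an illustrative sketch; the names are ours). Note in particular that the Newton update $x \Leftarrow x - f'(x)/f''(x)$ simplifies to $x/2$ for this function, so the iterations are drawn toward the degenerate point $x = 0$:

```python
def f(x): return x ** 3
def fp(x): return 3 * x ** 2    # f'(x)
def fpp(x): return 6 * x        # f''(x)

def taylor2(x, x0):
    """Second-order Taylor approximation of f around x0."""
    return f(x0) + (x - x0) * fp(x0) + 0.5 * (x - x0) ** 2 * fpp(x0)

# Upright bowl at x0 = 1 (f'' = 6 > 0), inverted bowl at x0 = -1 (f'' = -6 < 0).
# Newton's update x <= x - f'(x)/f''(x) = x/2 halves x in every iteration,
# so the method is attracted to the critical point x = 0.
x = 1.0
for _ in range(10):
    x = x - fp(x) / fpp(x)
```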
It is also instructive to examine the case of a saddle point in a multivariate function,
where the Hessian is not singular. An example of a 2-dimensional function with a saddle
point is as follows:
g(x, y) = x2 − y 2
This function is shown in Figure 5.9(b). The saddle point is (0, 0). The Hessian of this
function is as follows:
$$H = \begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}$$
It is easy to see that the shape of this function resembles a riding saddle. In this case, ap-
proaching from the x direction or from the y direction will result in very different quadratic
approximations. In one case, the function will appear to be a minimum, and in another case,
the function will appear to be a maximum. Furthermore, the saddle point [0, 0] will be a sta-
tionary point from the perspective of a Newton update, even though it is not an extremum.
Saddle points occur frequently in regions between two hills of the loss function, and they
present a problematic topography for the Newton method. Interestingly, straightforward
gradient-descent methods are often able to escape from saddle points [54], because they are
simply not attracted by such points. On the other hand, Newton’s method is indiscrimi-
nately attracted to all critical points (such as maxima or saddle points). High-dimensional
objective functions seem to contain a large number of saddle points compared to true op-
tima (see Exercise 14). The Newton method does not always perform better than gradient
descent, and the specific topography of a particular loss function may have an important
role to play. The Newton method is needed for loss functions with complex curvatures,
but without too many saddle points. Note that the pairing of computational algorithms
(like Adam) with gradient-descent methods already changes the steepest direction in a way
that incorporates several advantages of second-order methods in an implicit way. Therefore,
real-world practitioners often prefer gradient-descent methods in combination with compu-
tational algorithms like Adam. Recently, some methods have been proposed [32] to address
saddle points in second-order methods.
This is a quadratic objective function, and the individual losses are the three terms of the
above expression. The aggregate loss can also be written as $J = 14w^2 + 3$. Therefore, the
loss functions of the three individual points and the aggregate loss are both quadratic. This
is the reason that the Newton method converges to the optimal solution in a single step in
least-squares classification/regression; the Taylor "approximation" is exact.
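A short computation confirms both the algebraic simplification and the single-step convergence (an illustrative sketch):

```python
# Least-squares version: J(w) = (1-w)^2 + (1-2w)^2 + (1+3w)^2, which
# simplifies to 14 w^2 + 3; the Hessian is the constant 28.
J = lambda w: (1 - w) ** 2 + (1 - 2 * w) ** 2 + (1 + 3 * w) ** 2
Jp = lambda w: -2 * (1 - w) - 4 * (1 - 2 * w) + 6 * (1 + 3 * w)   # J'(w) = 28 w
Jpp = 28.0

w0 = 5.0
w1 = w0 - Jp(w0) / Jpp   # a single Newton step lands on the exact minimum w = 0
```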
Let us now examine how this objective function would be modified by the $L_2$-SVM:
$$J = \max\{(1 - w), 0\}^2 + \max\{(1 - 2w), 0\}^2 + \max\{(1 + 3w), 0\}^2$$
This objective function is no longer quadratic because of the use of the maximization func-
tion within the loss. As a result, the Taylor approximation is no longer exact, and a finite
step will lead to a point where the Taylor approximation deteriorates. Note that different
points contribute non-zero values at different values of w. Therefore, for any Newton step of
finite size, points may drop off or add into the loss, which can cause unexpected results. For
example, as one reaches near an optimal solution many misclassified training points may
be the result of noise and errors in the training data. In this situation, the Newton method
will define the update of the weight vector based on such unreliable training points. This is
one of the reasons that line search is important in the Newton method. Another solution
is to use the trust region method.
A key question is how the radius $\delta_t$ should be selected. The radius $\delta_t$ is either
increased or decreased by comparing the improvement $F(a_t) - F(a_{t+1})$ of the Taylor
approximation $F(W)$ to the improvement $J(a_t) - J(a_{t+1})$ of the true objective function:
$$I_t = \frac{J(a_t) - J(a_{t+1})}{F(a_t) - F(a_{t+1})} \qquad [\text{Improvement Ratio}]$$
Intuitively, we would like the true objective function to improve as much as possible, and
not just the Taylor approximation. The value of the improvement ratio It is usually less than
1, as one is optimizing the Taylor approximation rather than the true objective function.
For example, choosing extremely small values of δt will lead to improvement ratios near 1,
but it is not helpful in terms of making sufficient progress.
Therefore, the change in $\delta_t$ from iteration to iteration is accomplished by using the
improvement ratio as a hint about whether it is too conservative or too liberal. Similarly,
the trust constraint $\|W - a_t\| \leq \delta_t$ needs to be satisfied tightly by the optimization solution
$W = a_{t+1}$ in order to increase the size of the trust region in the next iteration. If the
improvement ratio is too small (say, less than 0.25), then the trust radius δt needs to be
reduced by a factor of 2 in the next iteration. If the ratio is too large (say, greater than 0.75)
and a full step of δt was used in the current iteration (i.e., tightly satisfied trust constraint),
the trust radius δt needs to be increased. Otherwise, the trust radius does not change.
Furthermore, if the improvement ratio is smaller than a critical threshold (say, negative), then
the current step is not accepted; we set $a_{t+1} = a_t$ and the optimization problem is
solved again with a smaller trust radius. This process is repeated to convergence. An example
of the implementation of logistic regression with a trust-region method is given in [80].
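The radius-adjustment rules described above can be sketched as follows. The thresholds 0.25 and 0.75 and the halving factor mirror the text; the doubling factor and the function interface are illustrative choices of our own:

```python
def update_trust_radius(delta, improvement_ratio, step_norm):
    """Adjust the trust radius delta based on the improvement ratio I_t.

    Returns the new radius and whether the current step should be accepted.
    step_norm is ||W - a_t|| for the proposed solution W = a_{t+1}.
    """
    accept = improvement_ratio > 0.0            # reject steps that worsen J
    if improvement_ratio < 0.25:
        delta = delta / 2.0                     # model not trustworthy: shrink
    elif improvement_ratio > 0.75 and step_norm >= delta - 1e-12:
        delta = 2.0 * delta                     # full step and good model: grow
    return delta, accept
```

Note that the radius grows only when the trust constraint was tightly satisfied, and stays unchanged for intermediate improvement ratios.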
Figure 5.10: The eigenvectors of the Hessian of a quadratic function represent the orthogonal
axes of the quadratic ellipsoid and are also mutually orthogonal. The eigenvectors of the
Hessian are orthogonal conjugate directions. The generalized definition of conjugacy may
result in non-orthogonal directions.
appropriate basis transformation of variables (cf. Section 3.4.4 of Chapter 3). These variables
represent directions in the data that do not interact with one another. Such noninteracting
directions are extremely convenient for optimization because they can be independently
optimized with line search. Since it is possible to find such directions only for quadratic
loss functions, we will first discuss the conjugate gradient method under the assumption
that the objective function J(W ) is quadratic. Later, we will discuss the generalization to
non-quadratic functions.
A quadratic and convex loss function J(W ) has an ellipsoidal contour plot of the type
shown in Figure 5.10, and has a constant Hessian over all regions of the optimization space.
The orthonormal eigenvectors q 0 . . . q d−1 of the symmetric Hessian represent the axes direc-
tions of the ellipsoidal contour plot. One can rewrite the loss function in a new coordinate
space defined by the eigenvectors as the basis vectors (cf. Section 3.4.4 of Chapter 3) to
create an additively separable sum of univariate quadratic functions in the different vari-
ables. This is because the new coordinate system creates a basis-aligned ellipse, which does
not have interacting quadratic terms of the type xi xj . Therefore, each transformed variable
can be optimized independently of the others. Alternatively, one can work with the original
variables (without transformation), and simply perform line search along each eigenvector
of the Hessian to select the step size. The nature of the movement is illustrated in Fig-
ure 5.10(a). Note that movement along the jth eigenvector does not disturb the work done
along other eigenvectors, and therefore d steps are sufficient to reach the optimal solution
in quadratic loss functions.
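The following sketch illustrates this property on a synthetic positive definite Hessian H and vector b (not taken from the text): exact line search along the d eigenvectors of H minimizes the quadratic J(w) = (1/2) w^T H w − b^T w in exactly d steps.

```python
import numpy as np

# Minimize J(w) = 0.5 w^T H w - b^T w by exact line search along the
# eigenvectors of the Hessian H. H and b are synthetic examples.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4 * np.eye(4)            # symmetric positive definite Hessian
b = rng.standard_normal(4)

w = np.zeros(4)
_, Q = np.linalg.eigh(H)               # orthonormal eigenvectors q_0 ... q_{d-1}
for j in range(4):
    q = Q[:, j]
    grad = H @ w - b                   # gradient of the quadratic at w
    alpha = -(q @ grad) / (q @ H @ q)  # exact line-search step size along q
    w = w + alpha * q                  # does not disturb earlier directions

# After d = 4 steps, w is the exact minimizer H^{-1} b.
assert np.allclose(w, np.linalg.solve(H, b))
```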
Although it is impractical to compute the eigenvectors of the Hessian, there are other
efficiently computable directions satisfying similar properties; this key property is referred
to as mutual conjugacy of vectors. Note that two eigenvectors q_i and q_j of the Hessian satisfy
q_i^T q_j = 0 because of the orthogonality of the eigenvectors of a symmetric matrix. Furthermore,
since q_j is an eigenvector of H, we have Hq_j = λ_j q_j for some scalar eigenvalue λ_j. Multiplying
both sides with q_i^T, we can easily show that the eigenvectors of the Hessian satisfy
q_i^T H q_j = λ_j q_i^T q_j = 0; that is, they are mutually H-orthogonal (conjugate).
5.7. COMPUTATIONALLY EFFICIENT VARIATIONS OF NEWTON METHOD 235
Note that the second-order term in the above objective function uses the diagonal matrix
Δ, where W contains the coordinates of the parameter vector in the basis corresponding to
the conjugate directions. Of course, we do not need to be explicit about performing a basis
transformation into an additively separable objective function. Rather, one can separately
optimize along each of these d H-orthogonal directions (in terms of the original variables)
to solve the quadratic optimization problem in d steps. Each of these optimization steps
can be performed using line search along an H-orthogonal direction. Hessian eigenvectors
represent a rather special set of H-orthogonal directions that are also orthogonal; conjugate
directions other than Hessian eigenvectors, such as those shown in Figure 5.10(b), are not
mutually orthogonal. Therefore, conjugate gradient descent optimizes a quadratic objective
function by implicitly transforming the loss function into a non-orthogonal basis with an
additively separable representation of the objective function, in which each additive term is
a univariate quadratic. One can state this observation as follows:
Observation 5.7.1 (Properties of H-Orthogonal Directions) Let H be the Hessian
of a quadratic objective function. If any set of d H-orthogonal directions are selected for
movement, then one is implicitly moving along separable variables in a transformed repre-
sentation of the function. Therefore, at most d steps are required for quadratic optimization.
The independent optimization along each non-interacting direction (with line search) en-
sures that the component of the gradient along each conjugate direction will be 0. Strictly
convex loss functions have linearly independent conjugate directions (see Exercise 9). In
other words, the final gradient will have zero dot product with d linearly independent di-
rections; this is possible only when the final gradient is the zero vector (see Exercise 10),
which implies optimality for a convex function. In fact, one can often reach a near-optimal
solution in far fewer than d updates.
How can one identify conjugate directions? The simplest approach is to use general-
ized Gram-Schmidt orthogonalization on the Hessian of the quadratic function in order to
generate H-orthogonal directions (cf. Problem 2.7.1 of Chapter 2 and Exercise 11 of this
236 CHAPTER 5. ADVANCED OPTIMIZATION SOLUTIONS
Premultiplying both sides with q Tt H and using the conjugacy condition to set the left-hand
side to 0, one can solve for βt :
β_t = (q_t^T H [∇J(W_{t+1})]) / (q_t^T H q_t)    (5.22)
This leads to an iterative update process, which initializes q 0 = −∇J(W 0 ), and computes
q t+1 iteratively for t = 0, 1, 2, . . . T :
1. Update W t+1 ⇐ W t + αt q t . Here, the step size αt is computed using line search to
minimize the loss function.
2. Set q_{t+1} = −∇J(W_{t+1}) + [(q_t^T H [∇J(W_{t+1})]) / (q_t^T H q_t)] q_t. Increment t by 1.
It can be shown [99, 114] that q t+1 satisfies conjugacy with respect to all previous q i . A
systematic road-map of this proof is provided in Exercise 12.
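The two update steps above can be sketched as follows for a quadratic J(W) = (1/2) W^T H W − b^T W; the matrix H and vector b are synthetic examples, not from the text:

```python
import numpy as np

# Conjugate gradient on the quadratic J(W) = 0.5 W^T H W - b^T W using
# the two steps above; H and b are synthetic examples.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)
b = rng.standard_normal(5)

def grad(W):                                 # gradient of the quadratic
    return H @ W - b

W = np.zeros(5)
q = -grad(W)                                 # q_0 = -grad J(W_0)
for t in range(5):
    g = grad(W)
    if np.linalg.norm(g) < 1e-12:            # already converged
        break
    alpha = -(q @ g) / (q @ H @ q)           # exact line search along q_t
    W = W + alpha * q                        # step 1: parameter update
    g_new = grad(W)
    beta = (q @ (H @ g_new)) / (q @ H @ q)   # Equation 5.22
    q = -g_new + beta * q                    # step 2: next conjugate direction

# At most d = 5 mutually conjugate steps reach the exact optimum.
assert np.allclose(W, np.linalg.solve(H, b))
```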
The conjugate-gradient method is also referred to as Hessian-free optimization. However,
the above updates do not seem to be Hessian-free, because the matrix H appears in
them. In practice, the underlying computations only need the projection of the
Hessian along particular directions; we will see that these can be computed indirectly using
the method of finite differences without explicitly computing the individual elements of the
Hessian. Let v be the vector direction for which the projection Hv needs to be computed.
The method of finite differences computes the loss gradient at the current parameter vector
W and at W + δv for some small value of δ in order to perform the approximation:

Hv ≈ [∇J(W + δv) − ∇J(W)] / δ
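As a sketch, the finite-difference approximation Hv ≈ [∇J(W + δv) − ∇J(W)]/δ can be checked on a simple non-quadratic function; the test function J and the value of δ below are illustrative assumptions:

```python
import numpy as np

# Hessian-free evaluation of the projection H v via finite differences:
#   H v ~ [grad J(W + delta v) - grad J(W)] / delta.
# The test function J(W) = sum(W^4)/4 is an illustrative assumption;
# its gradient is W^3 and its Hessian is diag(3 W^2).

def grad_J(W):
    return W ** 3

def hessian_vector_product(grad_fn, W, v, delta=1e-5):
    return (grad_fn(W + delta * v) - grad_fn(W)) / delta

W = np.array([1.0, 2.0, -1.0])
v = np.array([0.5, -1.0, 2.0])
Hv_approx = hessian_vector_product(grad_J, W, v)
Hv_exact = (3 * W ** 2) * v     # exact: diag(3 W^2) @ v
assert np.allclose(Hv_approx, Hv_exact, atol=1e-3)
```

Note that only two gradient evaluations are needed, and no individual element of the Hessian is ever formed.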
in terms of how one can create a modified algorithm for non-quadratic functions. Do we
first create a quadratic approximation at a point and then solve it for a few iterations with
the Hessian (quadratic approximation) fixed at that point, or do we change the Hessian
every iteration along with the change in parameter vector? The former is referred to as the
linear conjugate gradient method, whereas the latter is referred to as the nonlinear conjugate
gradient method.
In the nonlinear conjugate gradient method, the mutual conjugacy (i.e., H-
orthogonality) of the directions will deteriorate over time, as the Hessian changes from
one step to the next. This can have an unpredictable effect on the overall progress from
one step to the next. Furthermore, the computation of conjugate directions needs to be
restarted every few steps, as the mutual conjugacy deteriorates. If the deterioration occurs
too fast, the restarts occur very frequently, and one does not gain much from conjugacy.
On the other hand, each quadratic approximation in the linear conjugate gradient method
can be solved exactly, and will typically be (almost) solved in far fewer than d iterations.
Therefore, one can make similar progress to the Newton method in each iteration. As long
as the quadratic approximation is of high quality, the required number of approximations is
often not too large. The nonlinear conjugate gradient method has been extensively used in
traditional machine learning from a historical perspective [19], although recent work [86, 87]
has advocated the use of linear conjugate methods. Experimental results in [86, 87] suggest
that linear conjugate gradient methods have some advantages.
The above update can be improved with an optimized learning rate αt for non-quadratic
loss functions working with (inverse) Hessian approximations like Gt :
The optimized learning rate αt is identified with line search. The line search does not
need to be performed exactly (unlike in the conjugate gradient method), because maintenance
of conjugacy is no longer critical. Nevertheless, approximate conjugacy of the early set of
directions is maintained by the method when starting with the identity matrix. One can
(optionally) reset Gt to the identity matrix every d iterations (although this is rarely done).
It remains to be discussed how the matrix Gt+1 is approximated from Gt . For this
purpose, the quasi-Newton condition, also referred to as the secant condition, is needed:
G_{t+1} v_t = q_t

Here, q_t = W_{t+1} − W_t is the change in the parameter vector, and v_t = ∇J(W_{t+1}) − ∇J(W_t) is the corresponding change in the gradient.
The subscript of the norm is annotated by “w” to indicate that it is a weighted form
of the norm. This weight is an “averaged” form of the Hessian, and we refer the reader
to [99] for details of how the averaging is done. Note that one is not constrained to using
the weighted Frobenius norm, and different variations of how the norm is constructed lead
to different variations of the quasi-Newton method. For example, one can pose the same
objective function and secant condition in terms of the Hessian rather than the inverse
Hessian, and the resulting method is referred to as the Davidon–Fletcher–Powell (DFP)
method. In the following, we will stick to the use of the inverse Hessian, which is the BFGS
method.
Since the weighted norm uses the Frobenius matrix norm (along with a weight matrix)
the above is a quadratic optimization problem with linear constraints. Such constrained
optimization problems are discussed in Chapter 6. In general, when there are linear equality
constraints paired with a quadratic objective function, the structure of the optimization
problem is quite simple, and closed-form solutions can sometimes be found. This is because
the equality constraints can often be eliminated along with corresponding variables (using
methods like Gaussian elimination), and an unconstrained, quadratic optimization problem
can be defined in terms of the remaining variables. These problems sometimes turn out
to have closed-form solutions like least-squares regression. In this case, the closed-form
solution to the above optimization problem is as follows:
Here, the (column) vectors q t and v t represent the parameter change and the gradient
change; the scalar Δt = 1/(q Tt v t ) is the inverse of the dot product of these two vectors.
The update in Equation 5.28 can be made more space efficient by expanding it, so that fewer
temporary matrices need to be maintained. Interested readers are referred to [83, 99, 104]
for implementation details and derivation of these updates.
Even though BFGS benefits from approximating the inverse Hessian, it does need to
carry over a matrix Gt of size O(d2 ) from one iteration to the next. The limited memory
BFGS (L-BFGS) reduces the memory requirement drastically from O(d2 ) to O(d) by not
carrying over the matrix Gt from the previous iteration. In the most basic version of the L-
BFGS method, the matrix Gt is replaced with the identity matrix in Equation 5.28 in order
to derive Gt+1 . A more refined choice is to store the m ≈ 30 most recent vectors q t and v t .
Then, L-BFGS is equivalent to initializing Gt−m+1 to the identity matrix and recursively
applying Equation 5.28 m times to derive Gt+1 . In practice, the implementation is optimized
to directly compute the direction of movement from the vectors without explicitly storing
large intermediate matrices from Gt−m+1 to Gt .
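The following sketch implements the standard closed form of the BFGS inverse-Hessian update and verifies the secant condition numerically; the particular vectors are arbitrary examples, and the code is an illustration of the widely used closed form (see [83, 99, 104]) rather than a verbatim transcription of Equation 5.28:

```python
import numpy as np

def bfgs_inverse_update(G, q, v):
    """Standard closed-form BFGS update of the inverse-Hessian estimate G.

    q is the parameter change and v the gradient change; Delta = 1/(q^T v).
    Sketch of the widely used closed form; see [83, 99, 104] for details.
    """
    Delta = 1.0 / (q @ v)
    I = np.eye(len(q))
    return ((I - Delta * np.outer(q, v)) @ G @ (I - Delta * np.outer(v, q))
            + Delta * np.outer(q, q))

# The updated matrix satisfies the secant condition G_{t+1} v_t = q_t.
q = np.array([0.3, -0.7, 1.1])
v = np.array([1.0, -0.2, 0.4])
G_next = bfgs_inverse_update(np.eye(3), q, v)
assert np.allclose(G_next @ v, q)
```

The secant condition holds by construction: the factor (I − Δ v q^T) annihilates v, leaving Δ q (q^T v) = q.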
Figure 5.12: Subgradients in one and two dimensions. Any vector residing on the hyperplane,
which originates at the contact point between the loss function and the hyperplane, is a
subgradient. The vertical direction is the loss function value in each case.
Definition 5.8.1 (Subgradient) Let J(w) be a multivariate, convex loss function in d di-
mensions. The subgradient at point w0 is a d-dimensional vector v that satisfies the following
for any w:
J(w) ≥ J(w0 ) + v · (w − w0 )
Note that the notion of subgradient is primarily used in a convex function rather than
an arbitrary function (as in conventional gradients). Although it is possible to also apply
the above definition for nonconvex functions, the definition loses its usefulness in those
cases. The subgradient is not unique unless the function is differentiable at that point.
At differentiable points, the subgradient is simply the gradient. It can be shown that any
convex combination of subgradients is a subgradient.
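These properties can be checked numerically for the simple convex function J(w) = |w|. The helper below is a hypothetical utility that tests the defining inequality of Definition 5.8.1 on a grid of points:

```python
import numpy as np

def is_subgradient(v, w0, J, ws):
    """Check Definition 5.8.1: J(w) >= J(w0) + v (w - w0) on test points ws."""
    return all(J(w) >= J(w0) + v * (w - w0) - 1e-12 for w in ws)

ws = np.linspace(-2.0, 2.0, 401)
J = abs                                     # convex, non-differentiable at 0
assert is_subgradient(0.5, 0.0, J, ws)      # any v in [-1, 1] works at w0 = 0
assert is_subgradient(-1.0, 0.0, J, ws)     # boundary value is also valid
assert not is_subgradient(1.2, 0.0, J, ws)  # outside [-1, 1]: not one
assert is_subgradient(1.0, 0.7, J, ws)      # at w0 > 0 the subgradient is +1
```

At the non-differentiable point w0 = 0, every slope in [−1, +1] passes the test, which mirrors the non-uniqueness discussed above.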
Problem 5.8.1 Show using Definition 5.8.1 that if v 1 and v 2 are subgradients of J(w) at
w = w0 , then λv 1 + (1 − λ)v 2 is also a subgradient of J(w) for any λ ∈ (0, 1).
The above practice problem shows that the set of subgradients is a closed convex set.
Furthermore, if the zero vector is a subgradient at w0 , then Definition 5.8.1 implies that we
have J(w) ≥ J(w0 ) for all w. In other words, w0 is an optimal solution. In the following,
we mention some key properties of subgradients:
2. For convex functions, the optimality condition for a particular value of the optimiza-
tion variables w0 is that the set of subgradients at w0 must include the zero vector.
3. At any point w0 , the sum of any subgradient of J1 (w0 ) and any subgradient of J2 (w0 )
is a subgradient of (J1 + J2 )(w0 ). In other words, we can decompose the subgradient
of a separably additive function into its constituent subgradients. This property is
relevant to loss functions of various machine learning algorithms that add up loss
contributions of individual training points.
While it might not be immediately obvious, we have already used the subgradient method
(implicitly) in the hinge-loss SVM in Chapter 4. We repeat the objective function of the
hinge-loss SVM here (cf. page 184), which is based on the training pairs (X i , yi ):
J = Σ_{i=1}^n max{0, 1 − y_i (W · X_i^T)} + (λ/2) ‖W‖²    [Hinge-loss SVM]
As evident from Figure 4.9 of Chapter 4, the use of the maximization function causes
non-differentiability at the sharp “hinge” of the hinge-loss function; these are values of
W where the second argument of the max-function is 0 for any training point. So what
happens at these points? The update of the SVM uses only those training points where
the second argument is not zero. Therefore, at the non-differentiable points, the gradient is
simply set to 0, which is a valid subgradient. Therefore, the primal updates of the hinge-loss
SVM implicitly use the subgradient method, although the use is straightforward and natural.
In this case, the subgradient does not point in a direction of instantaneous movement
that worsens the objective function (for infinitesimal steps). This is not the case for more
aggressive uses of the subgradient method.
Minimize J = (1/2) ‖DW − y‖²  +  λ Σ_{j=1}^d |w_j|
            [Prediction Error]    [L1-Regularization]
Here D is an n × d data matrix whose rows contain the training instances, and y is an n-
dimensional column vector containing the target variables. The column vector W contains
the coefficients. Note that the regularization term now uses the L1 -norm of the coefficient
vector rather than the L2 -norm. The function J is non-differentiable for any W in which
even a single component wj is 0. Specifically, if wj is infinitesimally larger than 0, then
the partial derivative of |wj | is +1, whereas if wj is infinitesimally smaller than 0, then the
partial derivative of |wj | is −1. In subgradient methods, the partial derivative of |wj | at 0 is selected
randomly from [−1, +1], whereas the derivative at values different from 0 is computed in the
same way as the gradient. Let the subgradient of wj be denoted by sj . Then, for step-size
α > 0, the update is as follows:
W ⇐ W − α D^T (DW − y) − αλ [s_1, s_2, . . . s_d]^T
In this particular case, movement along the subgradient might worsen the objective function
value because of the random choice of sj from [−1, +1]. Therefore, one always maintains
5.8. NON-DIFFERENTIABLE OPTIMIZATION FUNCTIONS 243
the best possible value of W best that was obtained in any iteration. At the beginning of the
process, both W and W best are initialized to the same random vector. After each update
of W , the objective function value is evaluated at W , and W best is set to the recently
updated W if the objective function value provided by W is better than that provided by
the stored value of W best . At the end of the process, the vector W best is
returned by the algorithm as the final solution. Note that sj = 0 is also a subgradient at
wj = 0, and it is a choice that is sometimes used.
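The overall procedure can be sketched as follows on small synthetic data; the data, step size, and regularization value are illustrative assumptions:

```python
import numpy as np

# Subgradient descent for J(W) = 0.5 ||DW - y||^2 + lam ||W||_1 with
# best-iterate tracking; the data and parameters are synthetic examples.
rng = np.random.default_rng(2)
D = rng.standard_normal((50, 5))
y = D @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]) + 0.1 * rng.standard_normal(50)
lam, alpha = 0.5, 0.002

def J(W):
    return 0.5 * np.sum((D @ W - y) ** 2) + lam * np.sum(np.abs(W))

W = rng.standard_normal(5)
J0 = J(W)
W_best, J_best = W.copy(), J0              # W and W_best start together
for _ in range(500):
    s = np.sign(W)                         # subgradient of |w_j| away from 0
    zero = (W == 0)
    s[zero] = rng.uniform(-1, 1, np.count_nonzero(zero))  # random in [-1, 1]
    W = W - alpha * (D.T @ (D @ W - y) + lam * s)
    if J(W) < J_best:                      # a step may worsen J: keep best
        W_best, J_best = W.copy(), J(W)

assert J_best <= J0                        # best iterate never worse than start
```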
Minimize J = (1/2) ‖DW − y‖²  +  λ Σ_{j=1}^d |w_j|
            [Prediction Error]    [L1-Regularization]
As discussed in Section 4.10 of Chapter 4, coordinate descent can sometimes get stuck for
non-differentiable functions. However, a sufficient condition for coordinate descent to work
for convex loss functions is that the non-differentiable portion can be decomposed into sep-
arable univariate functions (cf. Lemma 4.10.1 of Chapter 4). In this case, the regularization
term is clearly a sum of separable and convex functions. Therefore, one can use coordinate
descent without getting stuck at a local optimum. The subgradient with respect to all the
variables is as follows:
∇J = DT (DW − y) + λ[s1 , s2 , . . . sd ]T (5.30)
Here, each si is a subgradient drawn from [−1, +1]. Since we are optimizing with respect to
only the ith variable, we only need to set the ith component of ∇J to zero. Let di be the
ith column of D. Furthermore, let r denote the n-dimensional residual vector y − DW . One
can then write the optimality condition for the ith component in terms of these variables
as follows:
d_i^T (y − DW) − λ s_i = 0
d_i^T r − λ s_i = 0
d_i^T r + w_i d_i^T d_i − λ s_i = w_i d_i^T d_i

The left-hand side is free of w_i because the term d_i^T r contributes −w_i d_i^T d_i, which cancels
with w_i d_i^T d_i. Therefore, we obtain the coordinate update for w_i:

w_i ⇐ w_i + (d_i^T r − λ s_i) / ‖d_i‖²    (5.31)
The value of the subgradient si is defined in the same way as in the previous section. The
main problem is that each si could be chosen to be any value between −1 and +1 when the
updated value of wi is close enough to 0; only one of these values will arrive at the optimal
solution. How can one determine the exact value of si that optimizes the objective function
in such cases? This is achieved by the use of soft thresholding of such “close enough” values
of wi to 0. Soft thresholding of wi automatically sets the value of si to an appropriate
intermediate value between −1 and +1. Therefore, the value of each wi is set as follows:
w_i ⇐ 0,  if −λ/‖d_i‖² ≤ w_i + (d_i^T r)/‖d_i‖² ≤ λ/‖d_i‖²
w_i ⇐ w_i + (d_i^T r − λ·sign(w_i))/‖d_i‖²,  otherwise    (5.32)
As in any form of coordinate-descent, one cycles through the variables one by one until
convergence is reached. The elastic-net combines both L1 - and L2 -regularization, and we
leave the derivation of the resulting updates as a practice problem.
Problem 5.8.2 (Elastic-Net Regression) Consider the problem of elastic-net regression
with the following objective function:

Minimize J = (1/2) ‖DW − y‖² + λ_1 Σ_{j=1}^d |w_j| + (λ_2/2) Σ_{j=1}^d w_j²

Show that the coordinate descent update is similar to that of Equation 5.31, except that the
denominator ‖d_i‖² is replaced by ‖d_i‖² + λ_2.
The main challenge in coordinate descent is to avoid getting stuck in a local optimum
because of non-differentiability (see Figure 4.10 of Chapter 4 for an example). In many
cases, one can use variable transformations to convert the objective function to a well-
behaved form (cf. Lemma 4.10.1) in which convergence to a global optimum is guaranteed.
An example is the graphical lasso [48], which implicitly uses variable transformations.
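Returning to the L1-regularized regression problem, the coordinate-descent scheme of Equation 5.32 can be sketched on small synthetic data (the data and the value of λ are illustrative assumptions); note how soft thresholding drives irrelevant coefficients exactly to zero:

```python
import numpy as np

# Coordinate descent with the soft-thresholding update of Equation 5.32
# for J(W) = 0.5 ||DW - y||^2 + lam ||W||_1 (synthetic example data).
rng = np.random.default_rng(3)
D = rng.standard_normal((60, 4))
y = D @ np.array([2.0, 0.0, -1.0, 0.0]) + 0.05 * rng.standard_normal(60)
lam = 5.0

W = np.zeros(4)
for _ in range(100):                          # cycle through coordinates
    for i in range(4):
        d_i = D[:, i]
        r = y - D @ W                         # residual for the current W
        z = W[i] + (d_i @ r) / (d_i @ d_i)    # unregularized coordinate update
        t = lam / (d_i @ d_i)                 # threshold lam / ||d_i||^2
        W[i] = 0.0 if abs(z) <= t else z - t * np.sign(z)

# Soft thresholding drives the two irrelevant coefficients exactly to zero.
assert W[1] == 0.0 and W[3] == 0.0
assert abs(W[0] - 2.0) < 0.5 and abs(W[2] + 1.0) < 0.5
```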
In other words, we are trying to minimize the function H(·) in the proximity of w by adding
a quadratic penalty term to penalize distance from w. Therefore, the proximity operator
will try to find a “better” u than w, but only in the proximity of w because distance from
w is quadratically penalized. Now let us examine what happens with a few examples:
• When H(w) is a constant, P_{H,α}(w) = w. This is because the H(·) term cannot be
improved by moving away from the current point, and the quadratic penalty encourages
staying at w.
u = w − α ∂H(u)/∂u    (5.34)
Note that this step is similar to gradient-descent except that the gradient of H(·) is
computed at u rather than w. However, the quadratic penalization ensures that the
step-size is relatively small, and the computation of the gradient of H(u) happens only
in the proximity of w. This is a key motivational point. The proximity operator makes
sensible moves when H(·) is differentiable. However, it works for non-differentiable
functions as well.
Armed with this definition of the proximal operator, one can then write the proximal gra-
dient algorithm in terms of repeating the following two iterative steps as follows:
1. Make a standard gradient-descent step on the differentiable function G(·) with step-
size α:
w ⇐ w − α ∂G(w)/∂w
2. Make a proximal descent step on the non-differentiable function H(·) with step-size α:
w ⇐ P_{H,α}(w) = argmin_u [ α H(u) + (1/2) ‖u − w‖² ]
Note that if the function H(·) is differentiable, then the approach roughly simplifies to
alternate gradient descent on G(·) and H(·).
Another key point is in terms of how hard it is to compute the proximal operator.
The approach is only used for problems with “simple” proximal operators that are easy to
compute; furthermore, the underlying functions have a small number of non-differentiable
points. A typical example of such a non-differentiable function is the L1 -norm of a vector.
For this reason, the proximal method is less general than the subgradient method; however,
when it works, it provides better performance.
Figure 5.13: An illustrative comparison of the subgradient and the proximal gradient method
in terms of typical behavior.
Minimize J = (1/2) ‖DW − y‖²  +  λ Σ_{j=1}^d |w_j|
                 [G(W)]              [H(W)]
A key point is the definition of the proximal operator on the function H(W), which is the
L1-norm of W. The proximal operator for H(w) with step-size α is as follows:
[P_{H,α}(w)]_j = w_j + αλ,  if w_j < −αλ
[P_{H,α}(w)]_j = 0,         if −αλ ≤ w_j ≤ αλ    (5.35)
[P_{H,α}(w)]_j = w_j − αλ,  if w_j > αλ
Note that the proximity operator essentially shrinks each wj by exactly αλ as long as it
is far away from the non-differentiable point. However, if it is close enough to the non-
differentiable point then it simply moves to 0. This is the main difference from the subgra-
dient method, which always updates by exactly αλ in either direction at all differentiable
points, and updates by a random sample from [−αλ, αλ] at the non-differentiable point. As
a result, the subgradient method is more likely to oscillate around non-differentiable points
as compared to the proximal gradient method. An illustrative comparison of the “typi-
cal” convergence behavior of the subgradient and proximal gradient method is shown in
Figure 5.13. In most cases, the proximal gradient method converges significantly faster
than the subgradient method. The faster convergence is because of the thresholding approach
used in the neighborhood of non-differentiable points. This approach is referred to
as the iterative soft thresholding algorithm, or ISTA for short.
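ISTA can be sketched as follows on small synthetic data (the data and parameter values are illustrative assumptions); each iteration takes a gradient step on G(W) followed by the soft-thresholding step of Equation 5.35:

```python
import numpy as np

# ISTA for J(W) = G(W) + H(W) with G(W) = 0.5 ||DW - y||^2 and
# H(W) = lam ||W||_1; the data and parameters are synthetic examples.
rng = np.random.default_rng(4)
D = rng.standard_normal((60, 4))
y = D @ np.array([2.0, 0.0, -1.0, 0.0]) + 0.05 * rng.standard_normal(60)
lam = 5.0
alpha = 1.0 / np.linalg.norm(D.T @ D, 2)     # step size 1/L for stability

def prox_l1(w, t):
    """Soft thresholding: proximal operator of t ||.||_1 (Equation 5.35)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

W = np.zeros(4)
for _ in range(500):
    W = W - alpha * (D.T @ (D @ W - y))      # gradient step on G(W)
    W = prox_l1(W, alpha * lam)              # proximal step on H(W)

assert W[1] == 0.0 and W[3] == 0.0           # exact zeros from thresholding
```

Unlike the subgradient method, the thresholding step sets small coordinates exactly to zero instead of oscillating around the non-differentiable point.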
results in a highly non-informative function for the purposes of optimization. This function is
not only non-differentiable at several points, but its staircase-like nature makes the gradient
zero at all differentiable points. In other words, a gradient descent procedure would not know
which direction to proceed. This type of problem does not occur with objective functions like
the L1 -norm (which enables the use of a subgradient method). In such cases, it makes sense
to design a surrogate loss function for the optimization problem at hand. This approach is
inherently not a new one; almost all objective functions for classification are surrogate loss
functions anyway. Strictly speaking, a classification problem should be directly optimizing
the classification accuracy with respect to the parameter W . However, the classification
accuracy is another staircase-like function. Therefore, all the models we have seen so far use
some form of surrogate loss, such as the least-squares (classification) loss, the hinge loss, and
the logistic loss. Extending such methods to ranking problems is therefore not a fundamental
innovation at least from a methodological point of view. However, the solutions to ranking
objective functions have their own unique characteristics. In the following, we examine some
surrogate objective functions designed for the ranking problem in classification.
Most classification objective functions are designed to penalize inaccurate classification
by using some surrogate loss, such as the hinge-loss (which is a one-sided penalty on deviations from
the target values of +1 and −1). Ranking-based objective functions are based on exactly
the same principle. The only difference is that we penalize the deviation from an ideal
ranking with a surrogate loss function. Two examples of such loss functions correspond
to the pairwise and the listwise approaches. In the following, we discuss a simple pairwise
approach for defining the loss function.
For each such pair in the ranking support vector machine, the goal is to learn a d-dimensional
weight vector W , so that W · X_i^T > W · X_j^T when X_i is ranked above X_j. Therefore, given
an unseen set of test instances Z_1 . . . Z_t, we can compute each W · Z_i^T, and rank the test
instances on the basis of this value.
In the traditional support vector machine, we always impose a margin requirement by
penalizing points that are uncomfortably close to the decision boundary. Correspondingly,
in the ranking SVM, we penalize pairs where the difference between W · X_i^T and W · X_j^T is
not sufficiently large. Therefore, we would like to impose the following stronger requirement:
W · (X_i − X_j)^T > 1

Any violations of this condition are penalized by 1 − W · (X_i − X_j)^T in the objective function.
Therefore, one can formulate the problem as follows:
Minimize J = Σ_{(X_i, X_j) ∈ D_R} max{0, 1 − W · (X_i − X_j)^T} + (λ/2) ‖W‖²
Here, λ > 0 is the regularization parameter. Note that one can replace each pair (X i , X j )
with the new set of features X i − X j . In other words, each U p is of the form U p = X i − X j
for a ranked pair (X i , X j ) in the training data. Then, the ranking SVM formulates the fol-
lowing optimization problem for the t different pairs in the training data with corresponding
features U 1 . . . U t :
Minimize J = Σ_{i=1}^t max{0, 1 − W · U_i^T} + (λ/2) ‖W‖²
Note that the only difference from a traditional support-vector machine is that the class
variable yi is missing in this optimization formulation. However, this change is extremely
easy to incorporate in all the optimization techniques discussed in Section 4.8.2 of Chapter 4.
In each case, the class variable yi is replaced by 1 in the corresponding gradient-descent
steps of various methods discussed in Section 4.8.2.
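The resulting training procedure can be sketched as follows. The synthetic pairs below are generated from a hypothetical true scoring vector, and the averaged-subgradient step is one simple way to implement the updates (an illustrative choice, not prescribed by the text):

```python
import numpy as np

# Ranking SVM by (sub)gradient descent on the pairwise hinge loss
#   J = sum_p max{0, 1 - W . U_p} + (lam/2) ||W||^2,  U_p = X_i - X_j.
rng = np.random.default_rng(5)
W_true = np.array([1.0, -1.0, 2.0])          # hypothetical true scorer
X = rng.standard_normal((40, 3))
scores = X @ W_true
pairs = [(i, j) for i in range(40) for j in range(40) if scores[i] > scores[j]]
U = np.array([X[i] - X[j] for i, j in pairs])   # difference features U_p

lam, alpha = 0.01, 0.1
W = np.zeros(3)
for _ in range(300):
    active = (U @ W) < 1                 # pairs violating the margin
    g = lam * W
    if active.any():
        g = g - U[active].mean(axis=0)   # averaged subgradient over violators
    W = W - alpha * g

# Rank agreement with the true scorer on held-out points should be high.
Z = rng.standard_normal((20, 3))
zs, zt = Z @ W, Z @ W_true
agree = np.mean([(zs[i] > zs[j]) == (zt[i] > zt[j])
                 for i in range(20) for j in range(20) if i != j])
assert agree > 0.75
```

Note that no class label appears anywhere: each difference feature U_p plays the role of a training point with implicit label +1.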
Here, the key point is that even though the number of solutions is extremely large, the
optimal substructure property allows us to consider only a small subset of them. For ex-
ample, the number of paths from the source to sink in a graph may be exponentially large,
but one can easily compute all shortest paths containing at most 2 nodes from the source
to all nodes. Because of the optimal substructure property, these paths can be extended
to paths containing at most 3 nodes in linear time. This process can be repeated for an
increasing number of nodes, until the number of nodes in the graph is reached. One gener-
ally implements dynamic programming via an iterative table-filling approach where smaller
subproblems are solved first and their solutions are saved. Larger problems are then solved
as a function of the known solutions of the smaller problems using the optimal substructure
property. In order to elucidate this point, we will use the example of optimizing the number
of operations in chain matrix multiplication.
Note that the values on the right-hand side are computed earlier than the ones on the left
using iterative table filling, where we compute all N [i, j] in cases where (j − i) is 1, 2, and so
on in that order till j − i is (m − 1). There are at most O(m2 ) slots in the table to fill, and
each slot computation needs the evaluation of the right-hand side of Equation 5.36. This
evaluation requires a minimization over at most (m − 1) possibilities, each of which requires
two table lookups of the evaluations of smaller subproblems. Therefore, each evaluation of
Equation 5.36 requires O(m) time, and the overall complexity is O(m3 ). One can summarize
this algorithm as follows:
Initialize N [i, i] = 0 and Split[i, i] = −1 for all i;
for δ = 1 to m − 1 do
for i = 1 to m − δ do
N [i, i + δ] = mink∈[i+1,i+δ] {N [i, k − 1] + N [k, i + δ] + ni nk ni+δ };
Split[i, i + δ] = argmink∈[i+1,i+δ] {N [i, k − 1] + N [k, i + δ] + ni nk ni+δ };
endfor;
endfor
One also needs to keep track of the optimal split position for each pair [i, j] in a separate
table Split[i, j] in order to reconstruct the nesting. For example, one will first access
k = Split[1, m] in order to divide the matrices into the two groups A1 . . . Ak−1 and Ak . . . Am .
Subsequently Split[1, k − 1] and Split[k, m] will be accessed again to find the top-level nest-
ing for the individual subproblems. This process will be repeated until we reach singleton
matrices.
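The table-filling algorithm can be sketched as follows. Note that this sketch uses the standard convention that matrix A_i has shape dims[i] × dims[i+1] and records the end of the first group as the split, so the cost term and split indexing are written slightly differently from the text's n_i n_k n_{i+δ}:

```python
# Chain matrix multiplication by iterative table filling (a sketch using
# the standard convention that matrix A_i has shape dims[i] x dims[i+1]).

def matrix_chain_order(dims):
    m = len(dims) - 1                       # number of matrices
    N = [[0] * m for _ in range(m)]         # N[i][j]: min cost for A_i..A_j
    split = [[-1] * m for _ in range(m)]
    for delta in range(1, m):               # subchains of increasing length
        for i in range(m - delta):
            j = i + delta
            best = None
            for k in range(i, j):           # split: A_i..A_k and A_{k+1}..A_j
                cost = (N[i][k] + N[k + 1][j]
                        + dims[i] * dims[k + 1] * dims[j + 1])
                if best is None or cost < best:
                    best, split[i][j] = cost, k
            N[i][j] = best
    return N[0][m - 1], split

# Example: shapes 10x30, 30x5, 5x60; the best nesting is (A1 A2) A3 with
# cost 10*30*5 + 10*5*60 = 4500, and split[0][2] = 1 records that split.
cost, split = matrix_chain_order([10, 30, 5, 60])
assert cost == 4500 and split[0][2] == 1
```

Each slot is filled from smaller subproblems already in the table, exactly as described above, for an overall O(m³) cost.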
The word “dynamic programming” is used in settings beyond pure optimization. Many
types of iterative table filling that achieve polynomial complexity by avoiding repeated
operations are considered dynamic programming (even when no optimization occurs). For
example, the backpropagation algorithm (cf. Chapter 11) uses the summation operation in
the dynamic-programming recursion, but it is still considered dynamic programming. One
can easily change the shortest-path algorithm between a source-sink pair to an algorithm for
finding the number of paths between a source-sink pair (in a graph without cycles) with a
small change to the form of the key table-filling step. Instead of computing the shortest path
using each incident node i on source node s, one can compute the sum of the number of paths from
each incident node i (on the source) to the sink. The key point is that an additive version of
the substructure property holds, where the number of paths from source to sink is equal
to the sum of the number of paths from node i (incident on source) to sink. However, this
is not an optimization problem. Therefore, the dynamic programming principle can also
be viewed as a general computer programming paradigm that works in problem settings
beyond optimization by exploiting any version of the substructure property — in general,
the substructure property needs to be able to compute the statistics of superstructures from
those of substructures via bottom-up table filling.
5.9 Summary
This chapter introduces a number of advanced optimization methods for use when simpler
gradient-descent methods are not very effective. The simplest approach is to modify
gradient descent methods, and incorporate several ideas from second-order methods into
the descent process. The second approach is to directly use second-order methods such
as the Newton technique. While the Newton technique can solve quadratic optimization
problems in a single step, it can be used to solve non-quadratic problems with the use of local
quadratic approximations. Several variations of the Newton method, such as the conjugate
gradient method and the quasi-Newton method, can be used to make it computationally
efficient. Finally, non-differentiable optimization problems present significant challenges in
various machine learning settings. The simplest approach is to change the loss function to a
differentiable surrogate. Other solutions include the use of the subgradient and the proximal
gradient methods.
5.11 Exercises
1. Consider the loss function L = x^2 + y^{10}. Implement a simple steepest-descent algorithm
to plot the coordinates as they vary from the initialization point to the optimal value
of 0. Consider two different initialization points of (0.5, 0.5) and (2, 2) and plot the
trajectories in the two cases at a constant learning rate. What do you observe about
the behavior of the algorithm in the two cases?
2. As shown in this chapter with examples like Figure 5.2, the number of steps taken by
gradient descent is very sensitive to the scaling of the variables. In this exercise, we will
show that the Newton method is completely insensitive to the scaling of the variables.
Let x be the set of optimization variables for a particular optimization problem (OP).
Suppose we transform x to y by the linear scaling y = Bx with invertible matrix B,
and pose the same optimization problem in terms of y. The objective function might
be non-quadratic. Show that the sequences x_0, x_1 ... x_r and y_0, y_1 ... y_r obtained by
iteratively applying Newton’s method will be related as follows:

y_k = B x_k   ∀k ∈ {1 ... r}
[As a side note, the preprocessing and scaling of features is extremely common in
machine learning, which also affects the scaling of the optimization variables.]
3. Write down the second-order Taylor expansion of each of the following functions about
x = 0: (a) x^2; (b) x^3; (c) x^4; (d) cos(x).
4. Suppose that you have the quadratic function f(x) = ax^2 + bx + c with a > 0. It is well
known that this quadratic function takes on its minimum value at x = −b/2a. Show
that a single Newton step starting at any point x = x0 will always lead to x = −b/2a
irrespective of the starting point x0 .
5. Consider the objective function f(x) = [x(x − 2)]^2 + x^2. Write the Newton update for
this objective function starting at x = 1.
252 CHAPTER 5. ADVANCED OPTIMIZATION SOLUTIONS
6. Consider the objective function f(x) = Σ_{i=1}^{4} x^i. Write the Newton update starting
at x = 1.
7. Is it possible for a Newton update to reach a maximum rather than a minimum?
Justify your answer. In what types of functions is the Newton method guaranteed to
reach a maximum rather than a minimum?
8. Consider the objective function f (x) = sin(x) − cos(x), where the angle x is measured
in radians. Write the Newton update starting at x = π/8.
9. The Hessian H of a strongly convex quadratic function always satisfies x^T H x > 0
for any non-zero vector x. For such problems, show that all conjugate directions are
linearly independent.
10. Show that if the dot product of a d-dimensional vector v with d linearly independent
vectors is 0, then v must be the zero vector.
11. The chapter uses steepest descent directions to iteratively generate conjugate direc-
tions. Suppose we pick d arbitrary directions v 0 . . . v d−1 that are linearly independent.
Show that (with appropriate choice of β_{ti}) we can start with q_0 = v_0 and generate
successive conjugate directions in the following form:

q_{t+1} = v_{t+1} + Σ_{i=0}^{t} β_{ti} q_i
Discuss why this approach is more expensive than the one discussed in the chapter.
12. The definition of β_t in Section 5.7.1 ensures that q_t is conjugate to q_{t+1}. This exercise
systematically shows that any direction q_i for i ≤ t satisfies q_i^T H q_{t+1} = 0.
[Hint: Prove (b), (c), and (d) jointly with induction on t while using (a).]
(a) Recall from Equation 5.23 that H q_i = [∇J(W_{i+1}) − ∇J(W_i)]/δ_i for quadratic
loss functions, where δ_i depends on the ith step-size. Combine this condition with
Equation 5.21 to show the following for all i ≤ t:

δ_i [q_i^T H q_{t+1}] = −[∇J(W_{i+1}) − ∇J(W_i)]^T [∇J(W_{t+1})] + δ_i β_t (q_i^T H q_t)

Also show that [∇J(W_{t+1}) − ∇J(W_t)] · q_i = δ_t q_i^T H q_t.
(b) Show that ∇J(W_{t+1}) is orthogonal to each q_i for i ≤ t.
(c) Show that the loss gradients at W_0 ... W_{t+1} are mutually orthogonal.
(d) Show that q_i^T H q_{t+1} = 0 for i ≤ t. [The case for i = t is trivial.]
13. Consider the use of the Newton method for a regularized L2-loss SVM, and a wide
data matrix D. Discuss how you can make the update in the chapter text more efficient
by inverting a smaller matrix. [Hint: Use the push-through identity of Problem 1.2.13
by defining D_w = √(Δ_w) D. The notations are the same as in the text.]
14. Saddle points proliferate in high dimensions: Consider the univariate function
f(x) = x^3 − 3x, and its natural multivariate extension:

F(x_1 ... x_d) = Σ_{i=1}^{d} f(x_i)

Show that this function has one minimum, one maximum, and 2^d − 2 saddle points.
Argue why high-dimensional functions have proliferating saddle points.
15. Give a proof of the unified Newton update for machine learning in Lemma 5.5.1.
16. Preparing for backpropagation: Consider a directed-acyclic graph G (i.e., graph
without cycles) with source node s and sink t. Each edge is associated with a length
and a multiplier. The length of a path from s to t is equal to the sum of the edge
lengths on the path and the multiplier of the path is the product of the corresponding
edge multipliers. Devise dynamic programming algorithms to find (i) the longest path
from s to t, (ii) the shortest path from s to t, (iii) the average path length from s to
t, and (iv) the sum of the path-multipliers of all paths from s to t. [Part (iv) is the
core idea behind the backpropagation algorithm.]
17. Give an example of a univariate cubic objective function along with two possible start-
ing points for Newton’s method, which terminate in maxima and minima, respectively.
18. Linear regression with L1-loss minimizes ‖Dw − y‖_1 for data matrix D and target
vector y. Discuss why the Newton method cannot be used in this case.
Chapter 6
Constrained Optimization and Duality
“Virtuous people often revenge themselves for the constraints to which they
submit by the boredom that they inspire.”– Confucius
6.1 Introduction
In many machine learning settings, such as nonnegative regression and box regression, the
optimization variables are constrained. Therefore, one needs to find an optimal solution
only over the region of the optimization space that satisfies these constraints. This region
is referred to as the feasible region in optimization parlance. The straightforward use of a
gradient-descent procedure does not work, because an unconstrained step might move the
optimization variables outside the feasible region of the optimization problem. In general,
there are two approaches to addressing optimization constraints:
1. Primal approach: In the primal approach, one attempts to modify gradient descent
so as to stay within the feasible regions of the space. Many of the methods discussed
in the previous chapters, such as gradient descent, coordinate descent, and Newton’s
method, can be modified to stay within feasible regions of the space.
2. Dual approach: The dual approach uses Lagrangian relaxation in order to create a
new dual problem in which primal constraints are converted into dual variables. In
many cases, the structure of the dual problem is simpler to solve. However, the dual
problem is often constrained as well, and might require similar optimization methods
(to the primal methods above) that can work with constraints.
This chapter discusses both primal and dual methods for constrained optimization.
Some techniques like penalty methods incorporate aspects of both primal and dual methods.
The complexity of an optimization problem depends on the structure of its constraints.
Luckily, many machine learning applications involve two simple types of constraints:
1. Linear and convex constraints: Linear constraints are of the form F (w) ≤ b or of the
form G(w) = c, where F (w) and G(w) are linear functions. A more general type of
constraint is the convex constraint of the form H(w) ≤ d, where H(w) is convex.
2. Norm constraints: Many machine learning problems are norm constrained, where we
wish to minimize or maximize F(w) subject to the constraint that ‖w‖^2 = 1. This
problem arises in spectral clustering and principal component analysis.
This chapter is organized as follows. The next section will introduce constrained methods
for (primal) gradient descent. Methods for coordinate descent are discussed in Section 6.3.
The approach of Lagrangian relaxation is introduced in Section 6.4. Penalty methods are
discussed in Section 6.5. Methods for norm-constrained optimization are discussed in Sec-
tion 6.6. A discussion of the relative advantages of primal and dual methods is provided in
Section 6.7. A summary is given in Section 6.8.
Figure 6.1: The projected gradient-descent method. Steepest descent first moves outside the
feasible region and then projects back to the nearest point inside the feasible region
2. Project w onto its nearest point in the set C. This projection can be expressed as an
optimization problem of the following form:
w ⇐ argmin_{v ∈ C} ‖w − v‖^2
This step is required only when the first step moves w outside the feasible region.
These two steps are iterated to convergence. When the set C is convex and the objective
function F (w) is convex, this approach can be shown to converge to an optimal solution.
Note that the second step is itself an optimization problem, albeit with a simpler structure.
The projected gradient descent method is pictorially illustrated in Figure 6.1.
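A minimal sketch of these two iterated steps, for the illustrative special case where C is the unit ball ‖w‖ ≤ 1 (so that the projection step has a closed form); the objective F(w) = ‖w − c‖^2, the target c, and the step size are assumptions made for the example:

```python
import numpy as np

# Projected gradient descent: minimize the convex F(w) = ||w - c||^2
# subject to w lying in the unit ball ||w|| <= 1.  The projection step
# has a closed form here: scale w back to the ball whenever it leaves.

def project_unit_ball(w):
    norm = np.linalg.norm(w)
    return w if norm <= 1.0 else w / norm

def projected_gradient_descent(c, lr=0.1, steps=200):
    w = np.zeros_like(c)          # feasible initial point
    for _ in range(steps):
        grad = 2.0 * (w - c)      # gradient of ||w - c||^2
        w = project_unit_ball(w - lr * grad)
    return w

c = np.array([3.0, 4.0])          # unconstrained optimum outside the ball
w_star = projected_gradient_descent(c)
print(w_star)  # converges to the boundary point c / ||c|| = [0.6, 0.8]
```

Since both the objective and the set are convex, the iterates settle on the boundary point of the ball nearest to c, as the convergence guarantee above predicts.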
Figure 6.2: Projected gradient descent with different types of linear constraints
For example, consider the minimization of x^2 + y^2 subject to the constraint x + y = 1. One
can substitute y = 1 − x, and drop both y and the constraint to create the following
unconstrained objective function:

J = x^2 + (1 − x)^2
It is easy to verify that the optimal value of x is 1/2. When we have a larger number
of constraints, it is necessary to use row reduction in order to create row echelon form.
Subsequently, one can express the variables for which leading non-zero entries exist in
the row-reduced form of A in terms of all the remaining free variables (for which leading
non-zero entries do not exist). As a result, an unconstrained objective function can be
expressed only in terms of the free variables. An example of this type of elimination is
shown in Section 2.5.4 of Chapter 2. Subsequently, one can use simple gradient descent on
the unconstrained objective in order to solve the optimization problem.
In spite of the possibility of eliminating a subset of the variables (and the constraints)
using Gaussian elimination, one can also use projected gradient descent with equality con-
straints. An example of a 2-dimensional hyperplane space in three dimensions is shown in
Figure 6.2. Note that one need not separate out the two iterative steps of steepest direction
movement and projection in this special case. Rather, the gradient can be directly projected
onto the linear hyperplane in order to perform the descent. The corresponding projection of
the steepest-descent direction on the 2-dimensional hyperplane is illustrated in Figure 6.2.
It is helpful to work out what the steepest-descent direction means in algebraic terms.
Consider a situation where one is minimizing F (w) subject to the constraint system Aw = b.
Here, w is a d-dimensional column vector, and A is an m × d matrix with m ≤ d. Therefore,
the vector b is m-dimensional. Note that it is important for m ≤ d, or else the set of
constraints might be infeasible. For simplicity, we will assume that the rows of A are linearly
independent.
Consider the situation where the current parameter vector w = w_t. Assume that w_t
is already feasible and therefore it satisfies the constraints Aw_t = b of the optimization
problem. Then, the current steepest-descent direction is given by g_t = ∇F(w_t). Note that
if Ag_t ≠ 0, then the point w_t − αg_t will no longer be feasible. This is because we will have
A[w_t − αg_t] = b − αAg_t ≠ b. This situation is shown in Figure 6.2, where the steepest-descent
direction moves off the feasible hyperplane.
Therefore, in order for the steepest-descent step to stay feasible, the vector g_t needs to
be projected onto the hyperplane Aw = 0, so that the projected vector g^⊥ satisfies Ag^⊥ = 0.
In other words, projected steepest descent needs to project g_t onto the right null space of A.
In cases when the rows of A are not linearly independent, the computation of g^⊥ can
also be achieved easily by Gram-Schmidt orthogonalization (cf. Section 2.7.1 of Chapter 2)
of the m rows of A to create r < m orthonormal vectors v_1 ... v_r. Then, g^⊥ can be computed
as follows:

ḡ = Σ_{i=1}^{r} [g_t · v_i] v_i
g^⊥ = g_t − ḡ
Subsequently, the iterative projected gradient descent steps can be written as follows:
1. Compute g_t = ∇F(w_t) and compute g^⊥ from g_t as discussed above.
2. Update w_{t+1} ⇐ w_t − α g^⊥ and increment t by 1.
The above two steps are repeated to convergence. The procedure can be initialized with
any feasible value of the vector w = w0 . The initial feasible value can be found by solving
the system of equations Aw = b using any of the methods discussed in Chapter 2.
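The two iterative steps above can be sketched as follows; the objective F(w) = ‖w‖^2, the single constraint w_1 + w_2 + w_3 = 3, and the step size are illustrative assumptions (with one constraint row, the Gram-Schmidt step reduces to a normalization):

```python
import numpy as np

# Projected steepest descent for: minimize ||w||^2 subject to Aw = b.
# The gradient is projected onto the null space of A, so every iterate
# stays on the constraint hyperplane.  The optimum here is (1, 1, 1).

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])

# Orthonormalize the rows of A (with a single row, just normalize it).
V = A / np.linalg.norm(A, axis=1, keepdims=True)

def project_to_null_space(g):
    # g_perp = g - sum_i (g . v_i) v_i
    return g - V.T @ (V @ g)

w = np.array([3.0, 0.0, 0.0])      # a feasible starting point
lr = 0.1
for _ in range(200):
    g = 2.0 * w                     # gradient of ||w||^2
    w = w - lr * project_to_null_space(g)

print(w)            # approaches [1, 1, 1]
print(A @ w - b)    # stays ~0: every iterate remains feasible
```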
Problem 6.2.1 Suppose that you use line search to determine the step-size α in each it-
eration of projected gradient descent for convex functions with linear equality constraints.
Show that successive directions of projected descent are always orthogonal to one another.
columns of A, which is a column-wise projection matrix. Here, we project in the span of the rows of A, and
therefore the formula of Equation 2.17 has been modified by transposing A.
260 CHAPTER 6. CONSTRAINED OPTIMIZATION AND DUALITY
Minimize J(w) = (1/2) w^T Q w + p^T w + q
subject to:
Aw = b
Here, Q is a d × d positive definite matrix, p and w are d-dimensional column vectors,
and q is a scalar. This objective function is strictly convex, since it has a positive-definite
Hessian Q everywhere. For simplicity in discussion, we assume that the matrix A has linearly
independent rows. Therefore, A is an m × d matrix with m ≤ d, and the vector b is m-
dimensional.
We already know from Section 4.6.2.1 that unconstrained quadratic programs with pos-
itive definite Hessians have closed-form solutions. Since equality constraints can always be
eliminated with Gaussian elimination, it stands to reason that one should be able to find
a closed-form solution in this case as well. After all, the projection of a strictly convex
function on a linear hyperplane Aw = b will continue to be strictly convex as well, and
therefore we should be able to find a closed form solution in this case. However, to achieve
this goal, we need to use a variable transformation so that the objective function contains
linearly separable variables (cf. Section 3.4.4 of Chapter 3). This process is similar to that of
converting a univariate quadratic function into vertex form. First, we express Q = P Δ P^T,
where Δ is a diagonal matrix with strictly positive entries. Therefore, both the matrices √Δ
and Δ^{−1/2} can be defined. The objective function can be rewritten as follows:

J(w) = (1/2) w^T Q w + p^T w + q
     = (1/2) w^T [P Δ P^T] w + p^T w + q
     = (1/2) ‖√Δ P^T w + Δ^{−1/2} P^T p‖^2 + [q − (1/2) p^T [P Δ^{−1} P^T] p]

Here, P Δ^{−1} P^T = Q^{−1}. Note that the modified constant term is defined by
q' = q − (1/2) p^T [P Δ^{−1} P^T] p. In order to
solve the problem, we make the following variable transformation:
w' = √Δ P^T w + Δ^{−1/2} P^T p    (6.2)

This variable transformation is invertible, since we can express w in terms of w' as well by
left-multiplying both sides with P Δ^{−1/2}:

P Δ^{−1/2} w' = w + P Δ^{−1} P^T p = w + Q^{−1} p

In other words, w can be expressed in terms of w' as follows:

w = P Δ^{−1/2} w' − Q^{−1} p    (6.3)

The linear constraints Aw = b can be expressed in terms of the new variables w' as follows:

Aw = b
A[P Δ^{−1/2} w' − Q^{−1} p] = b
[A P Δ^{−1/2}] w' = b + A Q^{−1} p

where we denote A' = A P Δ^{−1/2} and b' = b + A Q^{−1} p.
Therefore, we again obtain linear constraints with new matrices/vectors A' and b'. In other
words, the optimization problem can be expressed in the following form:

Minimize J(w') = (1/2) ‖w'‖^2 + q'
subject to:
A' w' = b'

Note that the rows of A' are linearly independent like those of A because A' is obtained by
multiplying A with square matrices of full rank. This is exactly the optimization problem
discussed in Section 2.8 of Chapter 2, where the right-inverse of A' can be used to find a
solution for w':

w' = A'^T (A' A'^T)^{−1} b'    (6.4)
What does this mean in terms of the original coefficients and optimization variables? By
substituting A' = A P Δ^{−1/2}, it can be shown that A' A'^T = A(P Δ^{−1} P^T)A^T = A Q^{−1} A^T.
One can therefore obtain w in terms of the original coefficients:

w = P Δ^{−1/2} w' − Q^{−1} p
  = P Δ^{−1/2} [Δ^{−1/2} P^T A^T (A Q^{−1} A^T)^{−1} b'] − Q^{−1} p
  = Q^{−1} A^T [A Q^{−1} A^T]^{−1} b' − Q^{−1} p
  = Q^{−1} {A^T [A Q^{−1} A^T]^{−1} [b + A Q^{−1} p] − p}    (6.5)
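This closed form is easy to check numerically against a direct solve of the KKT conditions; the random instance below is purely illustrative:

```python
import numpy as np

# Numerical check of the closed form
#   w = Q^{-1} { A^T [A Q^{-1} A^T]^{-1} [b + A Q^{-1} p] - p }
# against the KKT system of  min (1/2) w^T Q w + p^T w + q  s.t. Aw = b.

rng = np.random.default_rng(0)
d, m = 5, 2
M = rng.standard_normal((d, d))
Q = M @ M.T + d * np.eye(d)        # positive definite Hessian
p = rng.standard_normal(d)
A = rng.standard_normal((m, d))    # generic rows: linearly independent
b = rng.standard_normal(m)

Qinv = np.linalg.inv(Q)
w_closed = Qinv @ (A.T @ np.linalg.solve(A @ Qinv @ A.T, b + A @ Qinv @ p) - p)

# KKT stationarity and feasibility: [Q A^T; A 0] [w; lam] = [-p; b]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
w_kkt = np.linalg.solve(K, np.concatenate([-p, b]))[:d]

print(np.allclose(w_closed, w_kkt))  # True
print(np.allclose(A @ w_closed, b))  # constraints hold: True
```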
The fact that one can find a closed-form solution to the problem of convex quadratic pro-
gramming with equality constraints implies that one can also solve the problem of least-
squares regression with equality constraints. After all, the objective function of linear re-
gression is a convex quadratic function as well. Consider an n × d data matrix D containing
the feature variables, and an n-dimensional response vector y. Assume that we have some
domain-specific insight about the data because of which the d-dimensional coefficient vector
w is subject to the linear system of constraints Aw = b. Here, A is an m × d matrix with
m ≤ d, and the resulting constrained regression problem is as follows:
Minimize J(w) = (1/2) ‖Dw − y‖^2 + (λ/2) ‖w‖^2
subject to:
Aw = b
This objective function is exactly in the same form as the convex quadratic program of
Section 6.2.1.1. This implies that we can use the closed-form solution of Equation 6.5. The
key point is to be able to transform the problem to the same form. We leave this transformation
as an exercise.
Problem 6.2.2 Show that one can express the solution to equality-constrained linear re-
gression in the same form as the solution to the quadratic optimization formulation of
Section 6.2.1.1 by using Q = D^T D + λI and p = −D^T y in Equation 6.5.
One can adapt the Newton method to any convex function with linear equality constraints
(even if the objective function is not quadratic). The overall idea is the same as that dis-
cussed in Chapter 5. Consider the case where we are trying to minimize the arbitrary convex
function J(w) subject to the equality constraints Aw = b. Here, A is an m × d matrix, and
w is a d-dimensional vector of optimization variables. The Newton method first initializes
w = w_0 to a feasible point on the hyperplane Aw = b. Then, we start with t = 0, and each
iteration minimizes the second-order Taylor approximation of J(w) at w = w_t subject to
Aw = b in order to obtain w_{t+1}.
Note that the second-order Taylor approximation can always be expressed in the form of
Equation 6.5, and therefore its closed-form solution can be plugged in directly. This iterative
approach can converge to the optimal solution in fewer steps than gradient descent.
Here, it is important to note that we are solving one optimization problem as a subproblem
of another; clearly, the subproblem has to be simple for the approach to make sense. As it
turns out, this subproblem is indeed much easier than the original problem because it is a
linear programming problem; it has a linear objective function and linear constraints. Such
problems can be solved efficiently with off-the-shelf solvers, and we refer the reader to [16] for
an introduction to linear optimization. Therefore, the conditional gradient method simply
solves the above optimization problem repeatedly to convergence.
The main issue with the above optimization problem is that minimizing the objective
function does not necessarily lead to the optimum point, as we are using the instantaneous
gradient at wt in order to determine wt+1 . Obviously, the gradient will change as we move
from wt to wt+1 , and the objective function might even start worsening as one approaches
wt+1 . This problem can be partially addressed as follows. We first solve the above optimiza-
tion problem to find a tentative value of wt+1 . At this point, we only obtain a direction of
movement q t = wt+1 − wt . Subsequently, the update is modified to wt + αt q t , where αt is
selected using line search. However, in this case, α_t would need to be selected to ensure both
feasibility and an optimum solution.
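A minimal sketch of the conditional gradient (Frank-Wolfe) method for the illustrative special case where the feasible region is the box [−1, 1]^d: the linear subproblem is then solved in closed form at a vertex, and the classical step size α_t = 2/(t + 2) stands in for the line search (the objective and dimensions are assumptions made for the example):

```python
import numpy as np

# Conditional gradient (Frank-Wolfe) over the box [-1, 1]^d:
# each iteration solves the linear subproblem  min_s  grad . s  over
# the box (its solution is a vertex), then moves partway toward it.

def frank_wolfe_box(grad, w0, steps=500):
    w = w0.copy()
    for t in range(steps):
        g = grad(w)
        s = -np.sign(g)             # box vertex minimizing g . s
        alpha = 2.0 / (t + 2.0)     # classical diminishing step size
        w = w + alpha * (s - w)     # convex combination stays feasible
    return w

c = np.array([0.5, -0.5])           # interior optimum of ||w - c||^2
w = frank_wolfe_box(lambda w: 2.0 * (w - c), np.zeros(2))
print(w)  # approaches [0.5, -0.5]
```

Every iterate is a convex combination of feasible points, so feasibility is maintained automatically; the trade-off is the slow zig-zagging convergence discussed above.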
1. Perform an unconstrained gradient-descent step on w.
2. Find the components in w for which the interval bounds (box constraints) are violated,
and set each such component to the end-point of the interval that is violated.
The above two steps are applied iteratively to convergence. One must take care to select
the initialization points within the feasible box.
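The two steps above can be sketched as follows for an illustrative quadratic objective; with box constraints, the projection step is simply a component-wise clip:

```python
import numpy as np

# Projected gradient descent with box constraints l <= w <= u:
# gradient step, then clip each violated component to the nearer
# end-point of its interval.

def box_projected_descent(grad, l, u, w0, lr=0.1, steps=500):
    w = np.clip(w0, l, u)           # start inside the feasible box
    for _ in range(steps):
        w = np.clip(w - lr * grad(w), l, u)
    return w

# Minimize ||w - c||^2 with c = (2, -3) over the box [0, 1] x [0, 1]:
# the unconstrained optimum lies outside, so the solution is (1, 0).
c = np.array([2.0, -3.0])
w = box_projected_descent(lambda w: 2.0 * (w - c),
                          l=np.array([0.0, 0.0]),
                          u=np.array([1.0, 1.0]),
                          w0=np.array([0.5, 0.5]))
print(w)  # converges to [1.0, 0.0]
```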
Problem 6.2.3 (Linear Regression with Box Constraints) The linear regression prob-
lem optimizes the following objective function:
J = Dw − y2
[Figure: steepest descent moves the iterates w_1, w_2, w_3 outside the convex boundary of the
feasible region, and each infeasible iterate is projected back.]
The dual problem for support vector machines is also a convex optimization problem with
box constraints. This problem is discussed in Section 6.4.4.1.
Problem 6.2.4 Consider the problem in which you want to use the L2 -loss SVM as the ob-
jective function (see page 184). However, you have the additional domain-specific knowledge
that all coefficients are nonnegative (possibly because of known positive correlations between
features and class label). Discuss how you would solve the L2 -SVM optimization problem.
[Figure: a descent step moves outside the feasible region, raising the question of where to
project back.]
2. Extract the violated constraints A_v w ≤ b_v. We assume that the rows of A_v are linearly
independent because the rows of A are linearly independent.
3. Update w_{t+1} ⇐ w_{t+1} + A_v^T (A_v A_v^T)^{−1} [b_v − A_v w_{t+1}]. Note that A_v w_{t+1} can be shown
to be exactly equal to b_v by multiplying both sides of the above equation by A_v. This
update can also be derived by applying an origin translation to w_{t+1} in order to use
the right-inverse results of Section 2.8 in Chapter 2; then one can add back w_{t+1}. We
need to translate the origin to w_{t+1} because we want to find the closest point to w_{t+1}
on A_v w = b_v, whereas the right-inverse in Section 2.8 finds the most concise solution
to A_v w = b_v (i.e., closest point to the origin). However, translating the origin in this
way transforms the vector b_v to [b_v − A_v w_{t+1}], and therefore the weight vector in
translated space is A_v^T (A_v A_v^T)^{−1} [b_v − A_v w_{t+1}]. Adding back w_{t+1} yields the update.
4. Increment t by 1 and go back to step 1.
These steps are iterated to convergence. Here, a key point is that the projection step does
not result in violation of the other (already satisfied) constraints. This is because the nearest
point in a convex set is guaranteed to lie on the intersection of all the violated constraints,
when the constraints are linearly independent.
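A minimal sketch of this projection step; the constraint system is an illustrative pair of inequalities whose violated rows are projected onto as equalities:

```python
import numpy as np

# After a descent step lands at an infeasible w, project back onto
# the violated constraints  A_v w = b_v  via
#   w <- w + A_v^T (A_v A_v^T)^{-1} (b_v - A_v w).

def project_onto_violated(w, A, b):
    violated = A @ w > b + 1e-12          # rows of Aw <= b that fail
    if not violated.any():
        return w
    Av, bv = A[violated], b[violated]
    return w + Av.T @ np.linalg.solve(Av @ Av.T, bv - Av @ w)

A = np.array([[1.0, 1.0],     # w1 + w2 <= 1
              [1.0, -1.0]])   # w1 - w2 <= 1
b = np.array([1.0, 1.0])

w = np.array([2.0, 0.0])      # violates both constraints
w_proj = project_onto_violated(w, A, b)
print(w_proj)                 # lands on the intersection: [1, 0]
print(A @ w_proj - b)         # violated rows now hold with equality
```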
A key question arises as to how one can use the approach when the rows of the matrix
A are not linearly independent. Here, an important observation is that we only need each
violated set A_v to contain linearly independent rows, rather than the much stronger criterion
of requiring this from the full set A. Therefore, the approach will often work even in cases
where there is a modest level of linear dependence between rows of A, and one never
encounters any matrix A_v containing linearly dependent rows. One way of discouraging the
rows of A_v from being linearly dependent is to use line search on α_t, and to restrict the
step-size so that the violated constraints are never linearly dependent. With this modification,
the aforementioned approach can be used directly. However, convergence to an optimal
solution is not guaranteed by such an approach, although the approach tends to work well
in practice.
One problem with this approach is that the linearized constraints need not define a bounded
convex region. For example, if the constraint is of the form ‖w‖^2 ≤ 1 (which is a bounded
disk of radius one), then its linearized approximation is ‖w_t‖^2 + 2w_t^T(w − w_t) ≤ 1. In other
words, the linearized constraint is simply the tangent to the concentric circle passing through
w_t, and the side containing the center of the circle (which is the origin in this case) is included
as the feasible space. Depending on the nature of the objective function, the solution to
the subproblem might be unbounded because the feasible region on one side of the tangent
is unbounded. One can handle this issue in several ways, such as adding additional box
constraints in order to limit the step-size. However, even adding box constraints might
sometimes result in a value of wt+1 that does not satisfy the original constraints. In such
cases, one possible solution is to perform a line search on the segment between w_t and
w_{t+1} and reduce the step-size, so that the solution stays feasible. There are, however, many
other ways in which these issues are handled, and we refer the reader to [99] for a detailed
discussion.
Here, H_F^t represents the Hessian of F(·) at the point w_t. This Hessian is positive semi-
definite, since we are only dealing with convex functions. If the Hessian H_F^t is positive
definite, the problem will have a bounded global minimum even without constraints. Al-
though quadratic programs are harder to solve as subproblems than linear programs, they
are much easier to solve than many other nonlinear programs (see Exercise 7). Many of the
methods discussed in later sections (such as Lagrangian relaxation) can be used for solving
convex quadratic programs effectively. The main issue is that the solution to the linearized
problem may not be feasible for the original constraints to the problem. We refer the reader
to [21, 99] for a detailed discussion of solution methods. In particular, a practical line-search
method discussed by [99] is very useful in this context.
Figure 6.5: Fixing variables results in an interval constraint over the remaining variable
when the feasible region is convex
coordinate descent, we optimize a single variable w_i from the vector w, while holding all the
other parameters fixed to their values in w_t in the tth iteration. This leads to the following
update in the tth iteration:

w_{t+1} = argmin_{ith component of w} F(w)   [All parameters except w_i are fixed to those in w_t]

Here, i is the index of the optimized variable, and the other variables are fixed to the
corresponding values in w_t.
If no improvement occurs during a cycle of optimizing each variable, the current solution
is a coordinate-wise optimum (and, for smooth convex functions, a global optimum). In
block coordinate descent, a block of
variables is optimized at a given time, and one cycles through the different blocks one at a
time.
Coordinate descent is particularly suitable for constrained optimization. This is because
the variable-at-a-time optimization significantly simplifies the structure of the resulting sub-
problem; in fact, the problem reduces to the univariate case. Although block coordinate
descent does not yield univariate optimization problems, it still results in significant simpli-
fication. Very often, the constraints that tie together different variables can be dropped in
an iteration, since some of the variable values are fixed in an iteration. A specific example
of this situation is the k-means algorithm discussed in Section 4.10.3 of Chapter 4.
Note that the constraints are both quadratic and linear, and therefore the problem is more
complex than the linear constraints considered in the previous section. Now consider the
case in which one is performing coordinate descent, and we are trying to compute the
optimum value of w_1 so that F(w_1, w_2, w_3) is minimized (while holding w_2 and w_3 fixed). The
values of w2 and w3 are set to 2 and 0, respectively. Plugging in these values of w2 and w3 ,
we obtain the following pair of constraints:
Note that the first constraint implies that w1 ∈ [−1, 3] and the second constraint implies
that w1 ∈ (−∞, 1]. Therefore, by combining the constraints, we obtain the fact that the
variable w1 must lie in [−1, +1]. Furthermore, the objective function can be simplified to
G(w1 ) = F (w1 , 2, 0). Therefore, the subproblem reduces to optimizing a univariate convex
function G(w1 ) over an interval.
How does one optimize a univariate convex function over an interval? One possibility
is to simply set the derivative of the convex function (with respect to the only variable w
being optimized) to 0, and obtain a value of the variable w by solving the resulting equation.
At this point, one must check the two ends of the interval in order to check whether the
optimum lies at one of the two ends. The reason that one is able to use this simple approach
is the convexity of the objective function. Alternatively, one can use the line
search methods discussed in Section 4.4.3 of Chapter 4. One cycles through the variables
using this iterative approach, until convergence is reached.
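For a quadratic coordinate subproblem, this recipe (solve G'(w) = 0, then take the nearer interval end-point if the root is infeasible, which is valid for convex G) is a one-liner; the quadratic below is an illustrative example:

```python
# Minimize the convex quadratic G(w) = a*w^2 + b*w + c (a > 0) over
# [lo, hi]: the unconstrained minimizer solves G'(w) = 0 at -b/(2a),
# and convexity lets us simply clamp it to the interval.

def minimize_quadratic_on_interval(a, b, c, lo, hi):
    w = -b / (2.0 * a)
    return min(max(w, lo), hi)       # clamp to [lo, hi]

# G(w) = (w - 2)^2 = w^2 - 4w + 4 over [-1, 1]: the unconstrained
# minimum w = 2 is infeasible, so the end-point 1 is optimal.
print(minimize_quadratic_on_interval(1.0, -4.0, 4.0, -1.0, 1.0))  # 1.0
```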
Depending on the structure of the objective function and optimization variables, the
univariate subproblem in coordinate descent often has a very simple structure. Therefore,
even when one is faced with an arbitrarily complex problem, it is worthwhile trying ideas
from coordinate descent for the purposes of optimization. In some cases, coordinate descent
can even provide good heuristic solutions to difficult optimization problems like mixed in-
teger programs. This is because the subproblems are often much easier to solve than the
original formulation. A specific example is the case of the k-means algorithm, which has
integer constraints on the variables (cf. Section 4.10.3 of Chapter 4). However, there are
also cases in which coordinate descent fails (see Exercise 19).
Minimize J = (1/2) ‖Dw − y‖^2 + (λ/2) ‖w‖^2
subject to:
l_i ≤ w_i ≤ u_i, ∀i ∈ {1 . . . d}
w_i ⇐ (w_i ‖d_i‖^2 + d_i^T r) / (‖d_i‖^2 + λ)
Here, r = y − Dw is the n-dimensional vector of residuals. In this case, the only difference
is that we use the additional truncation operator T_i(·) after each coordinate descent step in
order to bring the variable back into the relevant bounds.
w_i ⇐ T_i( (w_i ‖d_i‖^2 + d_i^T r) / (‖d_i‖^2 + λ) )
In other words, each coordinate is immediately truncated to its lower and upper bounds
after the coordinate update.
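A sketch of this truncated coordinate-descent loop on an illustrative random instance (the data, bounds, and iteration count are assumptions made for the example):

```python
import numpy as np

# Truncated coordinate descent for box-constrained ridge regression:
#   min (1/2)||Dw - y||^2 + (lam/2)||w||^2,  l_i <= w_i <= u_i.
# Each coordinate update is followed by truncation to its bounds.

rng = np.random.default_rng(1)
n, d, lam = 50, 4, 0.1
D = rng.standard_normal((n, d))
y = rng.standard_normal(n)
l, u = -0.1 * np.ones(d), 0.1 * np.ones(d)   # a tight illustrative box

w = np.zeros(d)
for _ in range(100):                  # cycle through the coordinates
    for i in range(d):
        d_i = D[:, i]
        r = y - D @ w                 # current residual vector
        w_new = (w[i] * (d_i @ d_i) + d_i @ r) / (d_i @ d_i + lam)
        w[i] = min(max(w_new, l[i]), u[i])   # truncation T_i(.)

print(w)                              # every coordinate lies in [-0.1, 0.1]
```

Since each step exactly minimizes the coordinate subproblem over its interval, the objective value never increases across iterations.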
P = Minimize F(w)
subject to:
f_i(w) ≤ 0, ∀i ∈ {1 . . . m}
This problem is referred to as the primal problem in optimization parlance, and we introduce
the notation P to denote its optimal solution. The Lagrangian relaxation methodology is
particularly useful when the functions F(w) and each f_i(w) are convex. The Lagrangian
relaxation is defined with the use of nonnegative Lagrangian multipliers α = [α_1 ... α_m]^T:

L(α) = Minimize_w [ F(w) + Σ_{i=1}^{m} α_i f_i(w) ]
subject to:
No constraints on w
We have introduced the notation L(α) to indicate the solution to the relaxed problem at
any particular value of the parameter vector α. Note that the minimization is only with
respect to the parameters in w and not the parameters in α, which is fixed (and therefore a
part of the argument of L(α)). It is important to note that each αi is nonnegative to ensure
that violations of the constraints are penalized. When a constraint is violated, we will have
fi (w) > 0, and the penalty αi fi (w) will also be nonnegative. Although L(α) is defined over
any value of α, it makes sense to consider only nonnegative values of α. For example, if the
value of αi is negative, then violation of the ith constraint will be rewarded.
In the case of equality constraints, the Lagrange multipliers do not have any nonnega-
tivity constraints. Consider the following equality-constrained optimization problem:
Minimize F(w)
subject to:
f_i(w) = 0, ∀i ∈ {1 . . . m}
Each equality constraint can be converted to a pair of inequality constraints fi (w) ≤ 0 and
−fi (w) ≤ 0 with nonnegative Lagrangian multipliers αi,1 and αi,2 , respectively. Then, the
Lagrangian relaxation contains terms of the form fi (w)(αi,1 − αi,2 ). One can instead treat
αi = αi,1 − αi,2 as the sign unconstrained Lagrange multiplier. Most of the discussion in
this chapter will, however, be centered around inequality constraints.
Let us examine why the Lagrangian relaxation problem provides a lower bound on the
solution to the original optimization problem. Let w∗ be the optimal solution to the original
optimization problem, and α be any nonnegative vector of Lagrangian parameters. Since
w∗ is also a feasible solution to the original problem, it follows that each fi (w∗ ) is no larger
than zero. Therefore, the “penalty” αi fi (w∗ ) ≤ 0. In other words, the penalties can become
rewards for primal-feasible solutions like w∗ , if the penalties are non-zero. Therefore, we
have:
        L(α) = Minimize_w F(w) + Σ_{i=1}^m αi fi(w)
             ≤ F(w∗) + Σ_{i=1}^m αi fi(w∗)    [w∗ might not be optimal for relaxation]
             ≤ F(w∗) = P    [since Σ_{i=1}^m αi fi(w∗) ≤ 0]
In other words, the value of L(α) for any nonnegative vector α is always no larger than the
optimal solution to the primal. One can tighten this bound by maximizing L(α) over all
nonnegative α and formulating the dual problem with objective function D:
[Figure omitted: two surface plots of H(X, Y) against the minimization variable X and the maximization variable Y; the saddle point is marked in the second plot]
Figure 6.6: Examples of two minimax functions with a single minimization variable and a
single maximization variable. The first is neither concave nor convex in either variable. The
second is convex in the minimization variable and concave in the maximization variable,
and has a well-defined saddle point
We summarize the relationship between the primal and the dual as follows:
        D = max_{α≥0} L(α) = L(α∗) ≤ P
This result is referred to as weak duality. It is noteworthy that the Lagrangian opti-
mization problem is a minimax problem containing disjoint minimization and maximization
variables, and that the minimization and maximization are performed in a specific order.
The ordering of the minimization and maximization in a minimax optimization problem does matter.
Problem 6.4.1 Consider the 2-dimensional function G(x, y) = sin(x + y). Show that
min_x max_y G(x, y) = 1 and max_y min_x G(x, y) = −1.
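The claim in Problem 6.4.1 can be checked numerically with a grid approximation over one full period (the grid itself is an assumption of this sketch; the exact result follows analytically):

```python
import numpy as np

xs = np.linspace(0, 2 * np.pi, 1001)
ys = np.linspace(0, 2 * np.pi, 1001)
G = np.sin(xs[:, None] + ys[None, :])   # G[i, j] = sin(x_i + y_j)

min_max = G.max(axis=1).min()   # min over x of (max over y of G)
max_min = G.min(axis=0).max()   # max over y of (min over x of G)
print(round(min_max, 3), round(max_min, 3))  # 1.0 -1.0
```

Intuitively, for any fixed x the inner maximization over y can always reach sin = 1, while for any fixed y the inner minimization over x can always reach sin = −1, so the two orderings give different values.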
The ordering effects of minimization and maximization in minimax problems can be for-
malized in terms of John von Neumann’s minimax theorem [37] in mathematics. It states
that “min-max” is an upper bound on “max-min” of a function containing both minimiza-
tion and maximization variables. Furthermore, strict equality occurs when the function is
convex in its minimization variables and also concave in the maximization variables. For
example, the function H(x, y) = sin(x + y) is neither concave nor convex in either x or y.
The corresponding plot is shown in Figure 6.6(a). As shown in Problem 6.4.1, the order
of minimization and maximization matters in this case. On the other hand, the function
H(x, y) = x² − y² is convex in the minimization variable x and concave in the maximization
variable y. This function is shown in Figure 6.6(b). Therefore, this function has a single
saddle point, which is the optimal solution to both minimax problems.
Armed with this understanding of the importance of ordering of minimization and maxi-
mization in minimax problems, we revisit the effect of this ordering on the Lagrangian relax-
ation. We denote the minimax optimization function of Lagrangian relaxation as H(w, α):
        H(w, α) = F(w) + Σ_{i=1}^m αi fi(w)                    (6.6)
Here, w contains the minimization variables and α contains the maximization variables.
While the dual computes maxα≥0 minw H(w, α) (which is a lower bound on the primal),
reversing the order to minw maxα≥0 H(w, α) always yields the original (primal) optimization
problem irrespective of whether the original problem has a convex objective function or
convex constraints. We summarize this result below:
Lemma 6.4.1 (Minimax Primal Formulation) Let H(w, α) of Equation 6.6 represent
the Lagrangian relaxation of the unrelaxed primal formulation with constraints. Then, the
unconstrained minimax problem minw maxα≥0 H(w, α) is equivalent to the original, unre-
laxed primal formulation irrespective of the convexity structure of the original problem.
Proof: Consider the Lagrangian objective function H(w, α) of Equation 6.6. Then, the
value of maxα≥0 H(w, α) is ∞ at any fixed value of w that violates one or more of the
original primal constraints. This is achieved by setting the corresponding αi of the violated
constraint to ∞. Therefore, the primal problem of minw maxα≥0 H(w, α) will never yield
a solution for w at (minimax) optimality that violates constraints of the form fi (w) ≤ 0.
In other words, minimax optimality of minw maxα≥0 H(w, α) always yields solutions for w
satisfying each fi (w) ≤ 0.
For any value of w satisfying each fi (w) ≤ 0, the contribution of the penalty term to
H(w, α) is non-positive because αi fi (w) ≤ 0 for each i. Therefore, for any such fixed value
of w satisfying primal constraints, the function H(w, α) will be maximized with respect to
α only when the value of αi is set to zero for each i satisfying fi (w) < 0. This ensures that
the corresponding value of αi fi(w) is zero, and therefore the contribution of the penalty
term Σ_{i=1}^m αi fi(w) to H(w, α) is 0 at minimax optimality.
The above two facts imply that the optimization of F (w) with respect to the primal
constraints is the same problem as minw maxα H(w, α). At optimality of the second problem,
the primal constraints are satisfied, and the objective function is the same as well (since the
penalty contribution drops to 0).
We make some key observations about the Lagrangian relaxation H(w, α) of Equation 6.6:
1. Dual is a minimax problem: The dual problem of Lagrangian optimization is
based on the relaxation of Equation 6.6 in which the minimax optimization is done in
a specific order:
        D = max_{α≥0} min_w H(w, α)    (6.7)
3. Duality results of Lagrangian relaxation can be derived from the more gen-
eral minimax theorem in mathematics: The weak duality result that D ≤ P can
also be derived from John von Neumann’s minimax theorem of optimization [37]. The
minimax theorem of optimization is designed for general minimax functions containing
a disjoint set of minimization and maximization variables (of which the Lagrangian
relaxation is a special case). The theorem states that max-min is always bounded
above by min-max, which implies that D ≤ P . Furthermore, the minimax theorem
also states that strict equality D = P occurs when the optimization function is con-
vex in the minimization (primal) variables and concave in the maximization (dual)
variables.
What types of optimization problems are such that their Lagrangian relaxations show strict
equality between primal and dual solutions? First, the function H(w, α) is linear in the
maximization variables, and therefore concavity with respect to maximization variables is
always satisfied. Second, the function H(w, α) is a sum of F (w) and nonnegative multiples
of the various fi (w) for i ∈ {1 . . . m}. Therefore, if F (w) and each of fi (w) are convex in w,
then H(w, α) will be convex in the minimization variables. This is the primary pre-condition
for strong duality:
P = Minimize F (w)
subject to:
fi (w) ≤ 0, ∀i ∈ {1 . . . m}
Let F (w) and each fi (w) be convex functions. Then, the optimal objective function value of
the dual problem created using Lagrangian relaxation is almost always the same as that
of the primal.
We use the qualification “almost always,” because we also need a relatively weak condition
referred to as Slater’s condition, which states that at least one strictly feasible point exists
satisfying fi (w) < 0 for each i. For most machine learning problems, these conditions hold
by default. For simplicity in presentation, we will drop this condition in the subsequent ex-
position. Many optimization problems in machine learning such as support vector machines
and logistic regression satisfy strong duality.
We refer to these primal and dual optimization problems as OP1 and OP2, respectively.
We make the following observation, which is true irrespective of the convexity structure of
the primal optimization problem:
The Kuhn-Tucker conditions are obtained by combining the primal feasibility conditions,
dual feasibility conditions, complementary slackness conditions, and stationarity conditions.
For convex objective functions, these represent the first-order conditions that are both
necessary and sufficient for optimality:
Theorem 6.4.1 (Kuhn-Tucker Optimality Conditions) Consider an optimization prob-
lem in which we wish to minimize the convex objective function F (w), subject to convex
constraints of the form fi (w) ≤ 0 for i ∈ {1 . . . m}. Then, a solution w is optimal for the
primal and a solution α is optimal for the dual, if and only if:
• Feasibility: w is feasible for the primal by satisfying each fi (w) ≤ 0 and α is feasible
for the dual by being nonnegative.
• Complementary slackness: We have αi fi (w) = 0 for each i ∈ {1 . . . m}.
• Stationarity: The primal and dual variables are related as follows:
        ∇F(w) + Σ_{i=1}^m αi ∇fi(w) = 0
Note that one does not have to worry about second-order optimality conditions in the case
of convex optimization problems. The Kuhn-Tucker optimality conditions are useful because
they provide an alternative approach to solving the optimization problem by simply finding
a feasible solution to a set of constraints as follows:
Observation 6.4.1 For a convex optimization problem, any pair (w, α) that satisfies pri-
mal feasibility fi (w) ≤ 0, dual feasibility αi ≥ 0, complementary slackness αi fi (w) = 0, and
the stationarity conditions is an optimal solution to the original optimization problem.
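As a concrete illustration of Observation 6.4.1, the following sketch checks all four conditions for a toy one-variable problem (chosen for illustration, not from the text): minimize F(w) = (w − 2)² subject to f(w) = w − 1 ≤ 0, with the candidate pair w = 1, α = 2:

```python
w, alpha = 1.0, 2.0        # candidate primal-dual pair

F_grad = 2.0 * (w - 2.0)   # gradient of F(w) = (w - 2)^2
f_val = w - 1.0            # constraint value f(w) = w - 1
f_grad = 1.0               # gradient of f(w)

primal_feasible = f_val <= 0.0
dual_feasible = alpha >= 0.0
comp_slack = abs(alpha * f_val) < 1e-12            # alpha * f(w) = 0
stationary = abs(F_grad + alpha * f_grad) < 1e-12  # grad F + alpha grad f = 0
print(primal_feasible, dual_feasible, comp_slack, stationary)  # True True True True
```

Since all four conditions hold, w = 1 is optimal: the unconstrained minimizer w = 2 is infeasible, and the constrained optimum sits on the boundary of the feasible region.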
The stationarity conditions relate the primal and dual variables, and therefore they are often
useful for eliminating primal variables from the Lagrangian. We will also refer to them as
primal-dual (PD) constraints, because they relate primal and dual variables at optimality.
The stationarity conditions are often used to formulate the minimax dual purely in terms
of the dual variable (and therefore create a pure maximization problem). We discuss this
general procedure in the next section.
The primal variables w can often be eliminated from L(α) by setting the gradients of
H(w, α) with respect to the primal variables w to zero. Setting the gradient with respect to
primal variables to zero will result in exactly as many conditions as the number of primal
variables. These are exactly the stationarity conditions of the previous section, which repre-
sent a subset of the Kuhn-Tucker optimality conditions. We also refer to these conditions as
primal-dual (PD) constraints, because they relate the primal and dual variables. The (PD)
constraints can be used to substitute for (and eliminate) the primal variables w, and obtain
a pure maximization objective function L(α), which is expressed in terms of α. In some
cases, the feasibility and complementary slackness conditions are also used in the elimina-
tion process. At the end of the day, the process of generating the dual from the primal is
almost purely a mechanical and algebraic process based on the Kuhn-Tucker conditions.
While the specific mechanics might vary somewhat at the detailed level, the basic principle
remains the same across different problems. In Section 6.4.3, we will provide an example of
this procedure with the L1 -loss support vector machine. Furthermore, guided exercises (i.e.,
exercises broken up into simpler steps), are also available on the L2 -loss SVM and logistic
regression, and the reader is advised to work them out in the same sequence as they occur.
6.4.2.1 Inferring the Optimal Primal Solution from Optimal Dual Solution
One needs to compute the optimal primal variables in order to have an interpretable solu-
tion. Therefore, a natural question arises as to how one can infer an optimal primal solution
w from the optimal dual solution α. In this context, the (PD) constraints (i.e., the station-
arity conditions) are very helpful, because they can be used to substitute in the values of the
optimal dual variables and solve for the primal variables (although the algebraic approach
might vary slightly across problems).
        J = (1/λ) Σ_{i=1}^n max{0, 1 − yi [W · Xi]} + (1/2)‖W‖²    [Hinge-loss SVM]
Note that this objective function is cosmetically different from Equation 4.51 by the scaling
factor of 1/λ. We have made this cosmetic adjustment because one often uses the notation
corresponding to the slack penalty C = 1/λ in the literature on dual SVM optimization,
which is what we will use in subsequent restatements of this formulation. In order to create
the dual, we would like to reformulate the problem as a constrained optimization problem,
while simplifying the objective function without the maximization operator. This is achieved
with the use of slack variables ξ1 . . . ξn as follows:
        Minimize J = (1/2)‖W‖² + C Σ_{i=1}^n ξi
        subject to:
        ξi ≥ 1 − yi [W · Xi]  ∀i ∈ {1 . . . n}    [Margin Constraints]
        ξi ≥ 0  ∀i ∈ {1 . . . n}    [Nonnegativity Constraints]
Ideally, we would like ξi = max{0, 1 − yi [W · Xi]}. Note that the constraints do allow
values of ξi larger than max{0, 1 − yi [W · Xi]}, but such values can never be optimal.
The first set of constraints is referred to as the set of “margin” constraints, because they
define the margins for the predicted values of yi beyond which points are not penalized. For
example, if W · Xi has the same sign as yi and its absolute value is “sufficiently” positive
by a margin of 1, ξi will drop to 0. Therefore, the point is not penalized. Strictly speaking,
the constraints need to be converted to “≤” form by multiplying with −1, but we can take
care of it during the relaxation by multiplying the penalties with −1. We introduce the
Lagrangian multiplier αi for the ith of n margin constraints and the multiplier γi for the
ith nonnegativity constraint on ξi . With these notations, the Lagrangian relaxation is as
follows:
        L_D(α, γ) = Minimize_{W,ξ} Jr = (1/2)‖W‖² + C Σ_{i=1}^n ξi − Σ_{i=1}^n αi (ξi − 1 + yi (W · Xi)) − Σ_{i=1}^n γi ξi
                                                                    [Relax margin constraints]            [Relax ξi ≥ 0]
Here, Jr is the relaxed objective function. Since the relaxed constraints are inequalities, it
follows that both αi and γi must be nonnegative for the relaxation to make sense. Therefore,
when we optimize over the dual variables such as αi and γi , the optimization problem
has a box constraint structure, which makes it somewhat simpler to solve. In this type
of dual problem, one first minimizes over primal variables (with dual variables fixed) to
obtain LD (α, γ) and then maximizes LD (α, γ) over the dual variables, while imposing box
constraints on them. One can express this type of minimax optimization problem as follows:

        Maximize_{α≥0, γ≥0} L_D(α, γ) = Maximize_{α≥0, γ≥0} Minimize_{W,ξ} Jr
As discussed in the previous section, the general approach to solving the dual is to use the
(PD) constraints to eliminate the primal variables in order to create a pure maximization
problem in terms of the dual variables. The (PD) constraints are obtained by setting the
gradient of the minimax objective with respect to the primal variables to 0. This gives us
exactly as many constraints as the number of primal variables, which is precisely what we
need for eliminating all of them:
        ∂Jr/∂W = W − Σ_{i=1}^n αi yi Xi = 0    [Gradient with respect to W is 0]    (6.10)
        ∂Jr/∂ξi = C − αi − γi = 0,  ∀i ∈ {1 . . . n}    (6.11)
The equations resulting from the partial derivatives with respect to ξi are independent of
ξi, but they are still useful in eliminating ξi from Jr. This is because the coefficient of ξi
in Jr is (C − αi − γi), which turns out to be 0 based on Equation 6.11. The ability to
drop ξi is a direct result of the linearity of Jr in ξi; the linear coefficient of ξi in Jr is also
its derivative, which is set to 0 as an optimality condition. Furthermore, based on Equation 6.10,
we can substitute W = Σ_{i=1}^n αi yi Xi everywhere it occurs in Jr. By dropping the terms
involving ξi and substituting for W, Jr is simplified as follows:
        Jr = (1/2)‖W‖² + Σ_{i=1}^n αi (1 − yi (W · Xi)),    [Dropping terms with ξi]
           = (1/2)‖Σ_{j=1}^n αj yj Xj‖² + Σ_{i=1}^n αi (1 − yi Σ_{j=1}^n αj yj Xi · Xj),    [Substituting W = Σ_{j=1}^n αj yj Xj]
           = Σ_{i=1}^n αi − (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj Xi · Xj    [Algebraic simplification]
This objective function is expressed purely in terms of the dual variables. Furthermore, the
variable γi has dropped out of the optimization formulation. Nevertheless, the constraint
γi ≥ 0 also needs to be modified by substituting γi as C − αi (cf. Equation 6.11):
γi = C − αi ≥ 0
Therefore, the variables αi satisfy the box constraints 0 ≤ αi ≤ C. We can multiply the
objective function by −1 in order to turn the maximization problem into a minimization
problem:
        Minimize_{0≤α≤C} (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj Xi · Xj − Σ_{i=1}^n αi
The dual problem (in minimization form) is always convex (see Exercise 12). In this
particular case, one can see this directly, because the quadratic term is of the form αT Hα,
where H is a positive semidefinite matrix of similarities between points. To this effect, we
assert the following result:
Observation 6.4.2 The quadratic term Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj Xi · Xj in the dual SVM can
be expressed in the form αT BB^T α, where B is an n × d matrix in which the ith row of B
contains yi Xi. In other words, the ith row of B simply contains the ith data instance, after
multiplying it with the class label yi ∈ {−1, +1}.
This result can be shown by simply expanding the (i, j)th term of αT BB T α. As shown in
Lemma 3.3.14 of Chapter 3, matrices of the form BB T are always positive semidefinite.
Therefore, this is a convex optimization problem.
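A numerical illustration of Observation 6.4.2 with randomly generated toy data (the data itself is an assumption of this sketch): the matrix H = BB^T agrees entrywise with the similarity form in the quadratic term and has no negative eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))        # 8 points in 3 dimensions
y = rng.choice([-1.0, 1.0], size=8)    # class labels
B = y[:, None] * X                     # ith row of B is y_i * X_i
H = B @ B.T                            # H[i, j] = y_i y_j (X_i . X_j)

# H agrees entrywise with the similarity form in Observation 6.4.2 ...
print(np.allclose(H, (y[:, None] * y[None, :]) * (X @ X.T)))  # True
# ... and all eigenvalues are nonnegative (up to roundoff)
print(np.linalg.eigvalsh(H).min() >= -1e-10)  # True
```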
6.4.3.1 Inferring the Optimal Primal Solution from Optimal Dual Solution
As discussed in Section 6.4.2.1, the (PD) constraints can be used to infer the primal variables
from the dual variables. In the particular case of the SVM, the constraints correspond to
Equations 6.10–6.11. Among these constraints, Equation 6.10 is in a particularly useful
form, because it directly yields all the primal variables in terms of the dual variables:
        W = Σ_{i=1}^n αi yi Xi
One can obtain the slack variables ξi by using the constraints among the primal variables
and substituting the inferred value of W .
        Minimize L_D = (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj Xi · Xj − Σ_{i=1}^n αi
        subject to:
        0 ≤ αi ≤ C  ∀i ∈ {1 . . . n}

The partial derivative of L_D with respect to each αk is as follows:

        ∂L_D/∂αk = yk Σ_{s=1}^n ys αs Xk · Xs − 1  ∀k ∈ {1 . . . n}    (6.12)
One problem is that an update might lead to some of the values of αk violating the feasi-
bility constraints. In such a case, we project such infeasible components of α to the feasible
box, as shown in Figure 6.3. In other words, the value of each αk is reset to 0 if it becomes
negative, and it is reset to C if it exceeds C. Therefore, one starts by setting the vector
of Lagrangian parameters α = [α1 . . . αn ] to an n-dimensional vector of 0s and uses the
following update steps with learning rate η:
        repeat
           αk ⇐ αk + η (1 − yk Σ_{s=1}^n ys αs Xk · Xs)  for each k ∈ {1 . . . n};
              { update is equivalent to α ⇐ α − η ∂L_D/∂α }
           for each k ∈ {1 . . . n} do begin
              αk ⇐ min{αk, C};
              αk ⇐ max{αk, 0};
           endfor;
        until convergence
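A minimal sketch of this projected gradient procedure for the hinge-loss SVM dual; the toy data, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

def svm_dual_projected_gradient(X, y, C=1.0, eta=0.005, iters=2000):
    n = X.shape[0]
    # K[k, s] = y_k y_s (X_k . X_s): the gradient of L_D is K @ alpha - 1
    K = (y[:, None] * y[None, :]) * (X @ X.T)
    alpha = np.zeros(n)                          # start from the zero vector
    for _ in range(iters):
        alpha = alpha + eta * (1.0 - K @ alpha)  # step along -grad of L_D
        alpha = np.clip(alpha, 0.0, C)           # project onto the box [0, C]
    return alpha

rng = np.random.default_rng(2)
X = np.vstack([rng.standard_normal((10, 2)) + 2.0,
               rng.standard_normal((10, 2)) - 2.0])
y = np.array([1.0] * 10 + [-1.0] * 10)
alpha = svm_dual_projected_gradient(X, y)
W = (alpha * y) @ X          # recover W = sum_i alpha_i y_i X_i
print((np.sign(X @ W) == y).mean())   # training accuracy on separable data
```

The last line recovers the primal solution via the stationarity condition of Equation 6.10.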
It is noteworthy that the gradient-descent procedure updates all the components
α1 . . . αn simultaneously. This is the main difference from coordinate descent, which updates
a single component at a time and chooses a specific learning rate for that component,
so that the particular value of αk is optimized. This is the point of discussion in the next
section.
Problem 6.4.2 (Relaxation of L2-SVM) Consider the following formulation for the
L2-SVM:

        Minimize J = (1/2)‖W‖² + C Σ_{i=1}^n ξi²
        subject to:
        ξi ≥ 1 − yi [W · Xi],  ∀i ∈ {1 . . . n}
In comparison with the hinge-loss SVM, the parameter ξi is squared in the objective function,
and the nonnegativity constraints on ξi have been dropped. Discuss why dropping the
nonnegativity constraints on ξi does not affect the optimal solution in this case. Write the
minimax Lagrangian relaxation containing both primal and dual variables. Use the Lagrange
parameter αi for the ith slack constraint to enable comparison with the hinge-loss SVM.
Problem 6.4.4 (Dual Formulation of L2 -SVM) Use the Lagrangian relaxation and
the primal-dual constraints in the previous two exercises to eliminate the primal variables
from the minimax formulation. Show that the dual problem of the L2 -SVM is as follows:
        Maximize_{α≥0} Σ_{i=1}^n αi − (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj (Xi · Xj + δij/(2C))
Here, δij is 1 if i = j, and 0 otherwise. Note that the main difference from the dual
formulation of the hinge-loss SVM is the addition of δij/(2C) to the dot product Xi · Xj, in
order to constrain the magnitude of αi² in a soft way rather than via the explicit constraint
αi ≤ C.
Problem 6.4.5 (Optimization Algorithm for L2 -SVM Dual) Carefully examine the
gradient-descent and coordinate-descent pseudo-codes for the hinge-loss SVM in Sec-
tions 6.4.4.1 and 6.4.4.2. The actual updates of each αk always contain terms with X k · X s
as a multiplicative factor for each s. Show that the gradient descent and coordinate descent
algorithms for the dual L2 -SVM are exactly the same as the hinge-loss SVM, except that the
dot product X k · X s within each update equation is substituted with [X k · X s + (δks /2C)].
The value of δks is 1 if k = s, and 0, otherwise. Furthermore, the values of αi are not reset
to C when they are larger than C.
One can create the dual even in cases where the optimization problem is unconstrained to begin
with. There are several approaches for achieving this goal, one of which uses Lagrangian
relaxation. For example, a dual approach for logistic regression uses a parametrization
approach to construct the dual [68]. We refer the reader to the bibliographic notes for
discussions of other forms of duality.
Here, it is important to understand that an optimization problem need not be formu-
lated in a unique way. An unconstrained optimization problem can always be recast as a
constrained problem by simply introducing additional variables for various terms in the ob-
jective function, and defining those variables within the constraints. The way in which the
dual was generated for the hinge-loss SVM already provides a hint for the kinds of formula-
tions that are more friendly to creating dual problems. For example, the SVM formulation
in Section 4.8.2 of Chapter 4 does not use slack variables, whereas the dual SVM of the
previous section introduces slack variables for specific portions of the objective function,
and then defines those slack variables within the constraints. This approach of generating
additional variables for specific terms within the objective function provides a natural way
to create a Lagrangian relaxation. Therefore, we summarize the basic approach for creating
a Lagrangian relaxation of an unconstrained problem:
Introduce new variables in lieu of specific parts of the objective function, and
define those variables within the constraints.
Here, it is important to understand that there is more than one way of defining the new
variables. Each choice yields a different dual, and the structure of some duals may be
friendlier to optimization than others. Learning to define the right variables and constraints
is often a matter of skill and experience.
Consider the following simple 2-variable optimization problem without constraints:

        Minimize J = (x − 1)²/2 + (y − 2)²/2
One can easily solve this problem in any number of ways, including the use of gradient
descent, or by simply setting each partial derivative to 0. In either case, one obtains an
optimal solution x = 1, and y = 2 with a corresponding objective function value of 0.
However, it is instructive to formulate the dual of this optimization problem. In this case,
we choose to introduce two new variables ξ = x−1 and β = y−2. The resulting optimization
problem is as follows:
Minimize J = ξ 2 /2 + β 2 /2
subject to:
ξ =x−1
β =y−2
It is noteworthy that the constraints are equality constraints, and therefore the Lagrange
multipliers would not have nonnegativity constraints either. We introduce the Lagrange
multiplier α1 with the first constraint and the multiplier α2 with the second constraint. The
corresponding Lagrangian relaxation then becomes the following:

        L(α1, α2) = Minimize_{x,y,ξ,β} ξ²/2 + β²/2 + α1 (ξ − x + 1) + α2 (β − y + 2)
Note that the minimization is performed only over the primal variables, and L(α1 , α2 ) needs
to be maximized over the dual variables. In order to eliminate the four primal variables, we
need to set the partial derivative with respect to each to zero, and obtain four stationarity
constraints, which we also refer to as (PD) constraints. However, in this particular case, the
(PD) constraints have a simple form:
        ∂J/∂ξ = ξ + α1,    ∂J/∂β = β + α2
        ∂J/∂x = −α1,       ∂J/∂y = −α2
Setting the first two derivatives with respect to ξ and β to 0 allows us to replace ξ and β
with −α1 and −α2, respectively. Meanwhile, setting the second two derivatives with respect
to x and y to 0 yields α1 = α2 = 0, which allows us to drop the penalty portions of the
objective function. However, we need to include² the constraints that are independent of the
primal variables (i.e., α1 = α2 = 0) within the dual formulation. This yields the following
trivial dual problem:

        Maximize L(α1, α2) = −α1²/2 − α2²/2
        subject to:
        α1 = α2 = 0
In this case, the feasible space contains only one point with an objective function value of 0.
Therefore, the optimal dual objective function value is 0 at α1 = α2 = 0. Furthermore, since
ξ and β are equal to −α1 and −α2 (according to the stationarity constraints), it follows
that we have ξ = x − 1 = 0 and β = y − 2 = 0. Note that this solution of x = 1 and y = 2
can be obtained by simply setting the derivative of the primal objective function to 0.
        J = (1/2) Σ_{i=1}^n (yi − W · Xi)² + (λ/2)‖W‖²                    (6.13)
This is again an unconstrained problem, but we somehow want to create the Lagrangian
relaxation for it in order to generate the dual. In order to do so, we create new variables
and new constraints by introducing a new variable ξi = yi − W · Xi for the error of each
data point. The corresponding optimization problem is as follows:
        Minimize J = (1/2) Σ_{i=1}^n ξi² + (λ/2)‖W‖²
        subject to:
        ξi = yi − W · Xi,  ∀i ∈ {1 . . . n}
2 As discussed in the previous section, this situation also arose with the hinge-loss SVM when the
constraint C − αi − γi = 0 contains only dual variables. In that case, the constraint C − αi − γi = 0 was
implicitly included in the formulation by using it to eliminate γi from the dual.
We introduce the dual variable αi for the ith constraint, which results in the following dual
objective function:
        L(α) = Minimize_{W,ξ} J = (1/2) Σ_{i=1}^n ξi² + (λ/2)‖W‖² + Σ_{i=1}^n αi (−ξi + yi − W · Xi)
Next, we will generate the primal-dual (PD) constraints by differentiating the objective
function with respect to all the primal variables and setting the results to zero.
        ∂J/∂W = λW − Σ_{i=1}^n αi Xi = 0
        ∂J/∂ξi = ξi − αi = 0,  ∀i ∈ {1 . . . n}
Substituting ξi = αi and W = Σ_{j=1}^n αj Xj/λ, we obtain the following for L(α) purely in
terms of the dual variables:

        L(α) = (1/2) Σ_{i=1}^n αi² + (1/(2λ)) Σ_{i=1}^n Σ_{j=1}^n αi αj Xi · Xj + Σ_{i=1}^n αi (−αi + yi − Xi · [Σ_{j=1}^n αj Xj]/λ)
             = Σ_{i=1}^n αi yi − Σ_{i=1}^n αi²/2 − (1/(2λ)) Σ_{i=1}^n Σ_{j=1}^n αi αj Xi · Xj
One can rewrite the above objective function in matrix form by replacing the d-dimensional
row vectors X 1 . . . X n with a single n × d matrix D whose rows contain these vectors in
the same order. Furthermore, the scalar variables are converted to vector forms such as
α = [α1 . . . αn ]T and y = [y1 . . . yn ]T :
        L(α) = αT y − ‖α‖²/2 − (1/(2λ)) αT DD^T α
             = αT y − (1/(2λ)) αT (DD^T + λI) α
One can simply set the gradient of the objective function to 0 in order to solve for α in
closed form. By using matrix calculus to compute the gradient of the objective function, we
obtain the following:
        (DD^T + λI) α = λy
        α = λ (DD^T + λI)^{−1} y
It now remains to relate the optimal dual variables to the optimal primal variables by
using the primal-dual constraints. From the (PD) constraints, we already know that
W^T = Σ_{j=1}^n αj Xj^T/λ = D^T α/λ. This yields the following optimal solution for the primal variable W:

        W^T = D^T α/λ = D^T (DD^T + λI)^{−1} y

At first glance, this solution seems to be different from the familiar closed-form solution
W^T = (D^T D + λI)^{−1} D^T y of regularized least-squares regression. However, the two solutions
are really equivalent, and one can derive this result from the push-through identity (cf. Problem 1.2.13
of Chapter 1). Specifically, the following can be shown:

        D^T (DD^T + λI)^{−1} y = (D^T D + λI)^{−1} D^T y
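A quick numerical check of this equivalence on randomly generated toy data (the data is an assumption of this sketch): the dual route α = λ(DD^T + λI)^{−1}y followed by W = D^Tα/λ reproduces the direct regularized least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal((20, 4))   # n x d data matrix with rows X_i
y_vec = rng.standard_normal(20)
lam = 0.5
n, d = D.shape

# dual route: alpha = lambda (D D^T + lambda I)^{-1} y, then W = D^T alpha / lambda
alpha = lam * np.linalg.solve(D @ D.T + lam * np.eye(n), y_vec)
W_dual = D.T @ alpha / lam

# direct route: (D^T D + lambda I)^{-1} D^T y
W_primal = np.linalg.solve(D.T @ D + lam * np.eye(d), D.T @ y_vec)
print(np.allclose(W_dual, W_primal))  # True
```

Note that the dual route solves an n × n system while the direct route solves a d × d system; the dual form is attractive when d is much larger than n.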
        Minimize J = (1/2)‖W‖² + C Σ_{i=1}^n log(1 + exp(ξi))
        subject to:
        ξi = −yi (W · Xi),  ∀i ∈ {1 . . . n}
Discuss why this objective function is the same as Equation 4.56 with an appropriate choice
of C. Assume that the other notations are the same as Equation 4.56. Formulate a La-
grangian relaxation of this problem, where αi is the dual variable used for the ith constraint
associated with X i .
Since the Lagrange multiplier is sign-unconstrained in this case, and the constraints are
equality constraints, one could obtain either of two possible answers to the previous problem
with different signs of αi . This issue is also applicable to the next problem, where you might
get the results in the statement of the exercise with the sign of αi flipped.
        W = Σ_{i=1}^n yi αi Xi

        αi = C / (1 + exp(−ξi))

Now discuss why αi must lie in the range (0, C) based on the primal-dual constraints (just
like the hinge-loss SVM).
The similarity of the logistic dual with the hinge-loss SVM dual is not particularly surprising,
given the fact that we have shown the similarity of the primal logistic regression objective
function with that of the hinge-loss SVM, especially for the critical, difficult-to-classify
points (see Section 4.8.4 of Chapter 4).
Problem 6.4.8 Show that the dual of logistic regression can be expressed in minimization
form as follows:
        Minimize_α (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj (Xi · Xj) + Σ_{i=1}^n αi log(αi) + Σ_{i=1}^n (C − αi) log(C − αi)
Note that the objective function of logistic regression only makes sense for αi ∈ (0, C)
because the logarithm function can only have positive arguments. In practice, one explicitly
adds the constraints αi ∈ (0, C) to avoid an undefined objective function. This makes
the entire formulation very similar to the hinge-loss SVM dual, and the pseudo-code in
Section 6.4.4.1 can be used directly, but with box-constraint updates that keep each αi
strictly within (0, C). Another difference is that αk is updated as follows:
        αk ⇐ αk + η [ log((C − αk)/αk) − yk Σ_{s=1}^n ys αs Xk · Xs ]
The term log([C − αk ]/αk ) replaces 1 in the pseudo-code, and it tries to keep αk in the
middle of the range (0, C).
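A minimal sketch of this modified update for the logistic dual (the toy data, step size, and the small constant ε are assumptions of this sketch); clipping to [ε, C − ε] keeps each αk strictly inside (0, C) so that the logarithms stay defined:

```python
import numpy as np

def logistic_dual_gradient(X, y, C=1.0, eta=0.005, iters=2000, eps=1e-6):
    n = X.shape[0]
    K = (y[:, None] * y[None, :]) * (X @ X.T)   # K[k, s] = y_k y_s (X_k . X_s)
    alpha = np.full(n, C / 2)                   # start in the middle of (0, C)
    for _ in range(iters):
        # log((C - alpha_k)/alpha_k) replaces the constant 1 of the hinge-loss SVM
        alpha = alpha + eta * (np.log((C - alpha) / alpha) - K @ alpha)
        alpha = np.clip(alpha, eps, C - eps)    # keep strictly inside (0, C)
    return alpha

rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((10, 2)) + 2.0,
               rng.standard_normal((10, 2)) - 2.0])
y = np.array([1.0] * 10 + [-1.0] * 10)
alpha = logistic_dual_gradient(X, y)
W = (alpha * y) @ X          # primal solution from the (PD) constraint
print((np.sign(X @ W) == y).mean())
```

The log term acts as a barrier: it grows without bound as αk approaches either 0 or C, which is what pushes αk toward the middle of the range.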
learning, when using trust-region optimization in conjunction with the Newton method
(cf. Section 5.6.3.1 of Chapter 5). This problem is stated as follows:
        Minimize F(w)
        subject to:
        ‖w − a‖² ≤ δ²
The first step is to solve the optimization problem while ignoring the constraint. If the
optimal solution already satisfies the constraint (in spite of the fact that it was not used),
then we need to do nothing else. We can simply terminate. On the other hand, if the
constraint is violated, then we formulate the following relaxed version of the problem with
penalty parameter α > 0:
        Minimize F(w) + α (max{‖w − a‖² − δ², 0})²
Note that there is no penalty or gain when the constraint is satisfied. This ensures that the
objective function value of the relaxed problem is the same as that of the original problem
as long as one operates in the feasible space. Choosing very small values of α might result
in violation of the constraints. On the other hand, choosing large enough values of α will
always result in feasible solutions, in which the penalty does not contribute anything to the
objective function. An important observation about penalty functions is as follows:
Observation 6.5.1 Consider a penalty-based variation of a constrained optimization prob-
lem in which violation of constraints is penalized and added to the objective function. Fur-
thermore, feasible points have zero penalties (or gains). If the optimal solution to the penalty-
based relaxation is feasible for the constraints in the original problem, then that solution is
also optimal for the original problem.
The above observation is the key to the success of penalty-based methods. We simply need
to start with a small value of α and test successively larger values of α until
the relaxation yields a feasible solution. For example, one can start at α = 1 and
solve the relaxed optimization problem. If the constraints are satisfied, we terminate and report
the corresponding value of the parameter vector w as optimal. If the solution is not feasible,
one doubles the value of α and performs gradient descent again to find the best value
of the parameter vector w. One can use the parameter vector w at
the end of an iteration as the starting point for gradient descent in the next iteration (with
increased α). This reduces the work in the next iteration. This approach of increasing α is
continued until no constraints are violated. It is also noteworthy that the relaxed objective
function is convex when the objective function and the constraints are convex.
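The doubling strategy can be sketched for the trust-region example above. The helper below is an illustrative assumption (not from the book): it uses plain gradient descent on the relaxation, warm-starts each round, and shrinks the step size as α grows to counter the worsening conditioning:

```python
import numpy as np

def penalty_method(F_grad, a, delta, w0, tol=1e-2, max_rounds=30):
    """Penalty-method sketch (hypothetical helper) for:
        minimize F(w)  subject to  ||w - a||^2 <= delta^2.
    The relaxation F(w) + alpha * max(||w - a||^2 - delta^2, 0)^2 is minimized
    by gradient descent; alpha starts at 1 and doubles until the minimizer is
    feasible, warm-starting each round from the previous solution.
    """
    w, alpha = np.asarray(w0, dtype=float), 1.0
    for _ in range(max_rounds):
        lr = 0.1 / (2.0 + 8.0 * alpha)        # smaller steps as alpha grows
        for _ in range(4000):
            gap = np.dot(w - a, w - a) - delta ** 2
            pen_grad = 4.0 * alpha * max(gap, 0.0) * (w - a)
            w = w - lr * (F_grad(w) + pen_grad)
        if np.dot(w - a, w - a) <= delta ** 2 + tol:
            return w, alpha                    # feasible: penalty contributes ~0
        alpha *= 2.0
    return w, alpha
```

For example, minimizing F(w) = ‖w‖² subject to ‖w − (3, 0)‖² ≤ 1 should approach the closest feasible point (2, 0) as α grows.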
The penalty-based approach can also be used in cases where these conditions are not met;
however, in those cases, one might not be able to obtain the global optimum. For a general
problem with inequality constraints $f_i(\overline{w}) \le 0$ for $i \in \{1 \ldots m\}$ and equality constraints
$h_i(\overline{w}) = 0$ for $i \in \{1 \ldots k\}$, the relaxed objective function is as follows:

$$\mbox{Minimize } R(\overline{w}, \alpha) = F(\overline{w}) + \frac{\alpha}{2}\left[\sum_{i=1}^{m} \max\{0,\, f_i(\overline{w})\}^2 + \sum_{i=1}^{k} h_i(\overline{w})^2\right] \qquad (6.17)$$
Note the difference between how equality and inequality constraints are treated. The penalty
parameter α is always greater than zero. We make the following observation:
Observation 6.5.2 (Convexity of Relaxation) If F (w) is convex, each fi (w) is convex,
and each hi (w) is linear, then the relaxed objective function of Equation 6.17 is convex for
α > 0.
The gradient of this objective function with respect to w can be computed as follows:
$$\nabla_{\overline{w}} R(\overline{w}, \alpha) = \nabla F(\overline{w}) + \alpha \sum_{i=1}^{m} \max\{f_i(\overline{w}),\, 0\}\, \nabla f_i(\overline{w}) + \alpha \sum_{i=1}^{k} h_i(\overline{w})\, \nabla h_i(\overline{w})$$
Barrier methods are designed for problems with inequality constraints of the following form:

$$\mbox{Minimize } F(\overline{w}) \quad \mbox{subject to: } f_i(\overline{w}) \ge 0, \;\; \forall i \in \{1 \ldots m\}$$
6.5. PENALTY-BASED AND PRIMAL-DUAL METHODS 289
Then, the barrier function B(w, α) is well-defined only for feasible values of the parameter
vector w, and it is defined as follows:
$$B(\overline{w}, \alpha) = F(\overline{w}) - \alpha \sum_{i=1}^{m} \log(f_i(\overline{w}))$$
This is an example of the use of the logarithmic barrier function, although other choices
(such as the inverse barrier function) exist. One observation is that the barrier function is
convex as long as F (w) is convex, and each fi (w) is concave. This is because the logarithm3
of a concave function is concave, and the negative logarithm is therefore convex. The sum
of convex functions is convex, and therefore the barrier function is convex. Note that we
require each fi (w) to be concave (rather than convex) because our inequality constraints
are of the form fi (w) ≥ 0 rather than fi (w) ≤ 0.
A key point is that each fi (w) must be strictly greater than zero for the objective
function to be meaningfully evaluated at a given step; one cannot compute the logarithm
of zero or negative values. Therefore, barrier methods start with a feasible solution w in the
interior of the feasible region. Furthermore, unlike penalty methods, one starts with large values of
α in early iterations, and this value is reduced over time. At any fixed value of α, gradient-
descent is performed on w to optimize the weight vector. Smaller values of α allow w to
approach closer to the boundary of the feasible region defined by the constraints. This is
because the barrier function always approaches ∞ near the boundary irrespective of the
value of α, but small values of α allow a closer approach. However, small values of α also
result in sharp ill-conditioning, and using small values of α early is bad for convergence.
Consequently, using high values of α in the initial phases is helpful in maintaining strict
feasibility of the weight vector w.
In cases where the true optimal solution is not near the boundary of the feasible region,
one will often approach the optimal solution quickly, and convergence is smooth. In these
cases, the constraints might even be redundant, and the unconstrained version of the prob-
lem will yield the same solution. In more difficult cases, the optimal weight vector might
lie near the boundary of the feasible region. As the feasible weight vector w approaches
close enough to the boundary fi (w) ≥ 0, the penalty contribution increases rapidly like a
“barrier” and increases to ∞ when one reaches the boundary fi (w) = 0. Therefore, we only
need relatively small values of α in order to ensure feasibility. However, at small values of
α, the function becomes ill-conditioned near the boundary. Therefore, the barrier method
starts with large values of α and gradually reduces it, while performing gradient descent
with respect to w and fixed α. The optimal vector w at the end of a particular iteration is
used as a starting point for the next iteration (with a smaller value of α).
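The loop just described (reduce α, warm-start, re-optimize) can be sketched as follows. The helper is an illustrative assumption (not the book's code); the backtracking safeguard, which halves any step that would leave the interior, is a standard interior-point device rather than something stated in the text:

```python
import numpy as np

def barrier_method(F_grad, fs, f_grads, w0, alpha0=10.0, shrink=0.1, rounds=5):
    """Log-barrier sketch (hypothetical helper) for:
        minimize F(w)  subject to  f_i(w) >= 0.
    B(w, alpha) = F(w) - alpha * sum_i log(f_i(w)).  Starting from a strictly
    feasible w0, alpha is reduced by 'shrink' each round, warm-starting from
    the previous optimum; backtracking keeps all iterates strictly feasible.
    """
    w, alpha = np.asarray(w0, dtype=float), alpha0
    for _ in range(rounds):
        lr = 0.01 * alpha              # smaller steps as the barrier sharpens
        for _ in range(4000):
            g = F_grad(w) - alpha * sum(gf(w) / f(w) for f, gf in zip(fs, f_grads))
            step = lr * g
            while any(f(w - step) <= 0 for f in fs):
                step = step * 0.5      # backtrack to stay in the interior
            w = w - step
        alpha *= shrink
    return w
```

For instance, minimizing F(w) = (w + 1)² subject to w ≥ 0 has its constrained optimum on the boundary w = 0, which the iterates approach from the interior as α shrinks.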
For gradient descent, the gradient of the objective function is as follows:
$$\nabla_{\overline{w}} B(\overline{w}, \alpha) = \nabla F(\overline{w}) - \alpha \sum_{i=1}^{m} \frac{\nabla f_i(\overline{w})}{f_i(\overline{w})}$$
3 Since the logarithm is concave, we know that:

$$\log[\lambda f_i(\overline{w}_1) + (1-\lambda) f_i(\overline{w}_2)] \ge \lambda \log[f_i(\overline{w}_1)] + (1-\lambda)\log[f_i(\overline{w}_2)] \qquad (6.18)$$

At the same time, we know that $f_i(\lambda\overline{w}_1 + (1-\lambda)\overline{w}_2) \ge \lambda f_i(\overline{w}_1) + (1-\lambda) f_i(\overline{w}_2)$ because $f_i(\cdot)$ is concave.
Since the logarithm is an increasing function, we can take the logarithm of both sides to show
that $\log[f_i(\lambda\overline{w}_1 + (1-\lambda)\overline{w}_2)] \ge \log[\lambda f_i(\overline{w}_1) + (1-\lambda) f_i(\overline{w}_2)]$. Combining this inequality with Equation 6.18
using transitivity, we can show that $\log[f_i(\lambda\overline{w}_1 + (1-\lambda)\overline{w}_2)] \ge \lambda \log[f_i(\overline{w}_1)] + (1-\lambda)\log[f_i(\overline{w}_2)]$. In other
words, $\log(f_i(\cdot))$ is concave. More generally, we just went through all the steps required to show that the
composition $g(f(\cdot))$ of two concave functions is concave as long as $g(\cdot)$ is non-decreasing. Closely related
results are available in Lemma 4.3.2.
Setting this gradient to zero yields the optimality condition. It is instructive to compare
this optimality condition with the primal-dual (PD) constraint of the Lagrangian
$L(\overline{w}, \overline{\alpha}) = F(\overline{w}) - \sum_i \alpha_i f_i(\overline{w})$:

$$\nabla_{\overline{w}} L(\overline{w}, \overline{\alpha}) = \nabla F(\overline{w}) - \sum_{i=1}^{m} \alpha_i \nabla f_i(\overline{w}) = 0$$
Here, we are using $\alpha_1 \ldots \alpha_m$ as the Lagrangian parameters, which can be distinguished from
the penalty parameter α by virtue of having a subscript. Furthermore, since the Lagrangian
relaxation is computed using the "≤" form of the constraint (which is $-f_i(\overline{w}) \le 0$), we have
a negative sign in front of each penalty term. Note that the value of $\alpha/f_i(\overline{w})$ is an estimate
of the Lagrangian multiplier $\alpha_i$, if one were to use the traditional Lagrangian relaxation
$L(\overline{w}, \overline{\alpha}) = F(\overline{w}) - \sum_i \alpha_i f_i(\overline{w})$. Interestingly, this means that we have $\alpha_i f_i(\overline{w}) = \alpha$. Note that
this is almost the complementary-slackness condition of Lagrangian relaxation, except that
we have substituted 0 with a small value α. Therefore, at small values of α, the optimality
conditions of the (traditional) dual relaxation are nearly satisfied when one views the barrier
function as a Lagrangian relaxation. The barrier method belongs to the class of interior
point methods that approach the optimal solution from the interior of the feasible space.
Therefore, one benefit of such methods is that they yield estimates of the Lagrangian dual
variables in addition to yielding the primal values.
Consider the following norm-constrained optimization problem, defined with respect to a
symmetric d × d matrix A:

$$\mbox{Minimize } \sum_{i=1}^{k} \overline{x}_i^T A \overline{x}_i$$

subject to:

$$\|\overline{x}_i\|^2 = 1, \;\; \forall i \in \{1 \ldots k\}; \qquad \overline{x}_1 \ldots \overline{x}_k \mbox{ are mutually orthogonal}$$
We relax only the norm constraints, and choose not to relax the orthogonality constraints,
although one can obtain an equivalent solution by relaxing all constraints. Note that the
Lagrangian multipliers are not constrained to be nonnegative because we are relaxing
equality constraints rather than inequality constraints.
We also add a negative sign in front of the multipliers for algebraic interpretability of the
Lagrangian multipliers as eigenvalues (as we will show later). Correspondingly, one can write
the Lagrangian relaxation as follows:
$$L(\overline{\alpha}) = \mbox{Minimize}_{[\overline{x}_1 \ldots \overline{x}_k \;\mbox{orthogonal}]} \; \sum_{i=1}^{k} \overline{x}_i^T A \overline{x}_i - \sum_{i=1}^{k} \alpha_i (\|\overline{x}_i\|^2 - 1)$$
Setting the gradient of the Lagrangian with respect to each xi to 0, one obtains the following:
$$A\overline{x}_i = \alpha_i \overline{x}_i, \;\; \forall i \in \{1 \ldots k\}$$
As discussed earlier, we need to use the primal-dual (PD) constraints to eliminate the primal
variables, and obtain an optimization problem in terms of the dual variables. Note that the
constraint $A\overline{x}_i = \alpha_i \overline{x}_i$ implies that the feasible space for each $\alpha_i$ is restricted to the d eigenvalues
of A. Note that the orthogonality constraints on the vectors $\overline{x}_1 \ldots \overline{x}_k$ are automatically
satisfied because the eigenvectors of the symmetric matrix A are orthonormal. Using the
(PD) constraints to substitute Axi = αi xi within the Lagrangian relaxation, we obtain the
following:
$$L(\overline{\alpha}) = \mbox{Minimize}_{[\overline{x}_1 \ldots \overline{x}_k \;\mbox{orthogonal}]} \; \sum_{i=1}^{k} \alpha_i \overline{x}_i^T \overline{x}_i - \sum_{i=1}^{k} \alpha_i (\|\overline{x}_i\|^2 - 1) = \mbox{Minimize}_{[\mbox{Eigenvalues of } A]} \sum_{i=1}^{k} \alpha_i$$
Clearly, the above objective function is minimized over the smallest eigenvalues of A. There-
fore, one obtains the following trivial dual problem:
$$\mbox{Maximize } L(\overline{\alpha}) = \sum_{i=1}^{k} \alpha_i \quad \mbox{subject to: } \alpha_1 \ldots \alpha_k \mbox{ are the } k \mbox{ smallest eigenvalues of } A$$
Note that the dual problem has a single point in its feasible set. The primal solutions
$\overline{x}_1 \ldots \overline{x}_k$ correspond to the smallest eigenvectors of A because of the (PD) constraints
$A\overline{x}_i = \alpha_i \overline{x}_i$. A key point is that even though we assumed that the matrix A is symmetric,
we did not assume that it is positive semi-definite. Therefore, the objective function might
not be convex. In other words, strong duality is not guaranteed, and there might be a gap
between the primal and dual solutions. One way of checking optimality of the derived primal
solution is to explicitly check if a gap exists. In other words, we substitute the derived primal
solution into the primal objective function and compare it with the dual objective function
value at optimality. On making this substitution, we find that the primal objective function
is also the sum of the smallest k eigenvalues. Therefore, there is no gap between the derived
primal and dual solutions. The result of this section, therefore, provides an example of how
it is sometimes possible to use Lagrangian relaxation even in the case of objective functions
that are not convex. This section also provides a detailed proof of the norm-constrained
optimization problem introduced in Section 3.4.5.
$$\mbox{Maximize } \sum_{i=1}^{k} \overline{x}_i^T A \overline{x}_i$$

subject to:

$$\|\overline{x}_i\|^2 = 1, \;\; \forall i \in \{1 \ldots k\}; \qquad \overline{x}_1 \ldots \overline{x}_k \mbox{ are mutually orthogonal}$$
As in the case of the minimization version of the problem, it is important for the matrix A
to be symmetric (because of orthogonality constraints). The approach to the maximization
variant of the problem is very similar, and one can show that the best solution is obtained
by choosing the largest eigenvectors of A. We leave the proof of this result as an exercise
for the reader.
Problem 6.6.1 Show that the optimal solution to the maximization variant of norm-constrained
optimization with objective function $\sum_{i=1}^{k} \overline{x}_i^T A \overline{x}_i$ corresponds to the largest k
eigenvectors of the symmetric matrix A.
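Both variants can be checked numerically. The sketch below (an illustration, not from the book) uses NumPy's symmetric eigensolver and evaluates $\sum_i \overline{x}_i^T A \overline{x}_i = \mbox{tr}(X^T A X)$ at the eigenvector solutions and at a random orthonormal set:

```python
import numpy as np

# Numerical check: for a random symmetric A (not necessarily PSD), the minimum
# of sum_i x_i^T A x_i over k orthonormal vectors is the sum of the k smallest
# eigenvalues, and the maximization variant picks the k largest instead.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2.0                    # symmetric test matrix
evals, evecs = np.linalg.eigh(A)       # ascending eigenvalues, orthonormal columns

k = 2
obj = lambda X: np.trace(X.T @ A @ X)  # sum_i x_i^T A x_i for orthonormal columns
min_val = obj(evecs[:, :k])            # eigenvectors of the k smallest eigenvalues
max_val = obj(evecs[:, -k:])           # eigenvectors of the k largest eigenvalues
```

Any other orthonormal set of k columns (e.g., the Q-factor of a random matrix) yields an objective value between these two extremes.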
Dual formulations are often expressed in terms of pairwise dot products, which makes it
possible to use a similarity function between objects that are not inherently multidimensional. Such techniques are
referred to as kernel methods. However, the idea that dual objective functions are essential
for the use of kernel methods is a widespread misconception. As we will see in Chapter 9,
there is a systematic way in which every primal objective function discussed in this chapter
and the previous chapters can be recast in terms of similarities. This approach uses a
fundamental idea in linear algebra, known as the representer theorem. Note that the dual
problems are often constrained optimization problems like the primal (albeit with simple
box constraints). Therefore, all that the dual formulation achieves is to provide another
perspective to the problem, which might have (relatively minor) benefits.
For example, consider the issue of computational efficiency for a problem with n data
points and d dimensions. The scatter matrix (used in the primal) has O(d2 ) entries, whereas
the similarity matrix (used in the dual) has O(n2 ) entries. Therefore, the primal is often
cheaper to solve when the dimensionality is smaller than the number of points. This situation
is quite common. On the other hand, if the number of points is smaller than the dimen-
sionality, the dual methods can be cheaper. However, some principles like the representer
theorem (cf. Chapter 9) enable techniques for the primal, which are of similar complexity
as the dual.
Another point to be kept in mind is that most gradient descent methods arrive at an
approximately optimal objective function value. After all, there are many practical chal-
lenges associated with computational optimization, and one often arrives at a numerically
approximate solution. However, the primal has the advantage that the level of final approx-
imation is guaranteed, because we are directly optimizing the objective function we wanted
in the first place. On the other hand, the final dual solution needs to be mapped to a pri-
mal solution via the primal-dual constraints. For example, on computing the dual variables
$\alpha_1 \ldots \alpha_n$ in the hinge-loss SVM, the primal solution $\overline{W}$ is computed as $\overline{W} = \sum_i \alpha_i y_i \overline{X}_i$.
Optimizing the dual objective function approximately might provide an arbitrarily poor
solution for the primal. Although the primal and dual objective function values are exactly
the same at optimality (for convex objective functions like the SVM), this is not the case
for approximately optimal solutions; the approximate dual objective function value (which
is a function of α1 . . . αn ) might be quite different from the final objective function value
when translated to the primal solution. Finally, intermediate primal solutions are more
interpretable than dual solutions. This interpretability has an advantage from a practical
point of view, and early termination is easier in the event of computational constraints.
The dual approach has been historically favored in models like support vector machines.
However, there is no inherent reason to do so, given the vast number of simple methods
available for primal optimization. Our recommendation is to always use a primal method
where possible.
6.8 Summary
Many optimization problems have constraints in them, which makes the solution methodol-
ogy more challenging. Several methods for handling constrained optimization were discussed
in this chapter, such as projected gradient descent, coordinate descent, and Lagrangian re-
laxation. Penalty-based and barrier methods combine ideas from primal and dual formula-
tions. Among these methods, primal methods have some advantages because of their better
interpretability. Nevertheless, dual problems can also work well in some settings, where the
number of points is fewer than the number of variables.
6.10 Exercises
1. Suppose you want to find the rectangle of largest area that can be inscribed in a circle
of radius 1. Formulate a 2-variable optimization problem with constraints to solve this
problem. Discuss how you can convert this problem into a single-variable optimization
problem without constraints.
2. Consider the following optimization problem:
$$\mbox{Minimize } x^2 + 2x + y^2 + 3y \quad \mbox{subject to: } x + y = 1$$
$$\mbox{Minimize } \sum_{i=1}^{d} c_i w_i \quad \mbox{subject to: } A\overline{w} \le \overline{b}$$
$$\mbox{Minimize } \frac{1}{2} \overline{w}^T Q \overline{w} + \overline{c}^T \overline{w} \quad \mbox{subject to: } A\overline{w} \le \overline{b}$$
Compute the dual of this optimization formulation by using analogous steps to those
discussed in the chapter. How would you handle the additional constraint in the dual
formulation during gradient descent?
9. As you will learn in Chapter 9, the primal formulation for least-squares regression can
be recast in terms of similarities sij between pairs of data points as follows:
$$J = \frac{1}{2} \sum_{i=1}^{n} \Big(y_i - \sum_{p=1}^{n} \beta_p s_{pi}\Big)^2 + \frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j s_{ij}$$
Here, sij is the similarity between points i and j. Convert this unconstrained opti-
mization problem into a constrained problem, and formulate the dual of the problem
in terms of sij .
10. Let $\overline{z} \in \mathbb{R}^d$ lie outside the ellipsoid $\overline{x}^T A \overline{x} + \overline{b}^T \overline{x} + c \le 0$, where A is a d × d positive
semi-definite matrix and $\overline{x} \in \mathbb{R}^d$. We want to find the closest projection of $\overline{z}$ on this
convex ellipsoid to enable projected gradient descent. Use Lagrangian relaxation to
show that the projection point $\overline{z}_0$ must satisfy the following:

$$\overline{z} - \overline{z}_0 \propto 2A\overline{z}_0 + \overline{b}$$
$$\mbox{Minimize } x^2 - y^2 - 2xy + z^2 \quad \mbox{subject to: } x^2 + y^2 + z^2 \le 2$$
Imagine that we are using coordinate descent in which we are currently optimizing the
variable x, when y and z are set to 1 and 0, respectively. Solve for x. Then, solve for y by
setting x and z to their current values. Finally, solve for z in the same way. Perform
another full cycle of coordinate descent to confirm that coordinate descent cannot
improve further. Provide an example of a solution with a better objective function
value. Discuss why coordinate descent was unable to find an optimal solution.
12. Consider the dual objective function in Lagrangian relaxation, as a function of only
the dual variables:
$$L(\overline{\alpha}) = \mbox{Minimize}_{\overline{w}} \left[F(\overline{w}) + \sum_{i=1}^{m} \alpha_i f_i(\overline{w})\right]$$
The notations here for F (·) and fi (·) are the same as those used in Section 6.4. Show
that L(α) is always concave in α, irrespective of the convexity structure of the original
optimization problem.
13. Nonnegative box regression: Formulate the Lagrangian dual (purely in terms of
dual variables) for L2 -regularized linear regression Dw ≈ y with n × d data matrix
D, regressand vector y, and with nonnegativity constraints w ≥ 0 on the parameter
vector.
14. Hard Regularization: Consider the case where instead of Tikhonov regularization,
you solve the linear regression problem of minimizing Ax−b2 subject to the spherical
constraint x ≤ r. Formulate the Lagrangian dual of the problem with variable α ≥ 0.
Show that the primal and dual variables are related at optimality in a similar way to
Tikhonov regularization:
$$\overline{x} = (A^T A + \alpha I)^{-1} A^T \overline{b}$$
Under what conditions is α equal to 0? If α is non-zero, show that it is equal to the
solution to the following secular equation:
$$\overline{b}^T A (A^T A + \alpha I)^{-2} A^T \overline{b} = r^2$$
15. Propose a (primal) gradient-descent algorithm for the hard regularization model of
the previous exercise. Use the projected gradient-descent method. The key point is in
knowing how to perform the projection step.
16. Best subset selection: Consider an n × d data matrix D in which you want to find
the best subset of k features that are related to the n-dimensional regressand vector y.
Therefore, the following mixed integer program is formulated with d-dimensional real
vector w, d-dimensional binary vector z, and an a priori (constant) upper bound M
on each coefficient in w. The optimization problem is to minimize Dw − y2 subject
to the following constraints:
$$\overline{z} \in \{0, 1\}^d, \quad |\overline{w}| \le M\overline{z} \;\;\mbox{(element-wise)}, \quad \overline{1}^T \overline{z} = k$$
The notation 1 denotes a d-dimensional vector of 1s. Propose an algorithm using
block coordinate descent for this problem, where each optimized block contains just
two integer variables and all real variables.
17. Duality Gap: Suppose that you are running the dual gradient descent algorithm
for the SVM, and you have the (possibly suboptimal) dual variables α1 . . . αn in the
current iteration. Propose a quick computational procedure to estimate an upper
bound on how far this dual solution is from optimality. [Hint: The current dual solution
can be used to construct a primal solution.]
18. State whether the following minimax functions f (x, y) satisfy John von Neumann’s
strong duality condition, where x is the minimization variable and y is the max-
imization variable: (i) f (x, y) = x2 + 3xy − y 4 , (ii) f (x, y) = x2 + xy + y 2 , (iii)
f (x, y) = sin(y − x), and (iv) f (x, y) = sin(y − x) for 0 ≤ x ≤ y ≤ π/2.
21. Formulate a variation of an SVM with hinge loss, in which the binary target (drawn
from −1 or +1) is known to be non-negatively correlated with each feature based on
prior knowledge. Propose a variation of the gradient descent method by using only
feasible directions.
Chapter 7
Singular Value Decomposition
“The SVD is absolutely a high point of linear algebra.”– Gilbert Strang and Kae
Borre
7.1 Introduction
In Chapter 3, we learned that certain types of matrices, which are referred to as positive
semidefinite matrices, can be expressed in the following form:
$$A = V \Delta V^T$$
Here, V is a d × d matrix with orthonormal columns, and Δ is a d × d diagonal matrix
with nonnegative eigenvalues of A. The orthogonal matrix V can also be viewed as a rota-
tion/reflection matrix, the diagonal matrix Δ as a nonnegative scaling matrix along axes
directions, and the matrix V T is the inverse of V . By factorizing the matrix A into simpler
matrices, we are expressing a linear transform as a sequence of simpler linear transforma-
tions (such as rotation and scaling). This chapter will study the generalization of this type
of factorization to arbitrary matrices. This generalized form of factorization is referred to
as singular value decomposition.
Singular value decomposition generalizes the factorization approach to arbitrary matri-
ces that might not even be square. Given an n × d matrix B, singular value decomposition
decomposes it as follows:
$$B = Q \Sigma P^T$$
Here, B is an n × d matrix, Q is an n × n matrix with orthonormal columns, Σ is an
n × d rectangular diagonal matrix with nonnegative entries, and P is a d × d matrix with
orthonormal columns. The notion of a rectangular diagonal matrix is discussed in Figure 1.3
of Chapter 1 in which only entries with indices of the form (i, i) (i.e., with the same row
and column indices) are non-zero. The columns of Q and the columns of P are referred to
as left singular vectors and right singular vectors, respectively. The entries of Σ are referred
to as singular values, and they are arranged in non-increasing order (by convention). We
emphasize that the diagonal matrix Σ is nonnegative.
Singular value decomposition has some insightful linear algebra properties in terms of
enabling the discovery of all four fundamental subspaces of the matrix B. Furthermore, if
exact decomposition is not essential, singular value decomposition provides the ability to
approximate B very well with small portions of the factor matrices Q, P , and Σ. This is an
optimization-centric view of singular value decomposition. The optimization-centric view
naturally generalizes to the broader concept of low-rank matrix factorization, which lies at
the heart of many machine learning applications (cf. Chapter 8).
We will first approach singular value decomposition simply from a linear algebra point
of view, as a way of exploring the row and column spaces of a matrix. This view is, how-
ever, incomplete because it does not provide an understanding of the compression-centric
properties of singular value decomposition. Therefore, we will also present singular value de-
composition in terms of the optimization-centric view together with its natural applications
to compression and dimensionality reduction.
This chapter is organized as follows. In the next section, we will introduce singular value
decomposition from the point of view of linear algebra. An optimization-centric view of
singular value decomposition is presented in Section 7.3. Both these views expose somewhat
different properties of singular value decomposition. Singular value decomposition (SVD)
has numerous applications in machine learning, and an overview is provided in Section 7.4.
Numerical algorithms for singular value decomposition are introduced in Section 7.5. A
summary is given in Section 7.6.
Lemma 7.2.1 Let B be a square, m × m matrix. Then, the following results are true:
(i) if $\overline{p}$ is an eigenvector of $B^T B$ with eigenvalue λ, then $B\overline{p}$ is an eigenvector of $BB^T$
with the same eigenvalue λ; (ii) if $\overline{q}$ is an eigenvector of $BB^T$ with eigenvalue λ, then
$B^T \overline{q}$ is an eigenvector of $B^T B$ with the same eigenvalue λ.
Proof: We only show the first part of the above result, because the proof of the second
part is exactly identical by working with B T instead of B throughout the proof. If p is an
eigenvector of B T B with eigenvalue λ, we have the following:
$$B^T B \overline{p} = \lambda \overline{p}$$
$$BB^T[B\overline{p}] = \lambda[B\overline{p}] \qquad \{\mbox{Pre-multiplying with } B\}$$
Proof: This proof works by defining each $\overline{q}_i$ as a function of $\overline{p}_i$. Let there be r ≤ m non-zero
eigenvalues. In the case when $\overline{p}_i$ is associated with a non-zero eigenvalue, we define
$\overline{q}_i = B\overline{p}_i/\sqrt{\lambda_i}$, and Lemma 7.2.1 ensures that each $\overline{q}_i$ is a unit eigenvector of $BB^T$. The
extracted eigenvectors $\overline{q}_1 \ldots \overline{q}_r$ for non-zero eigenvalues are orthogonal to one another:

$$\overline{q}_i^T \overline{q}_j = (B\overline{p}_i)^T (B\overline{p}_j)/\sqrt{\lambda_i \lambda_j} = \overline{p}_i^T ([B^T B]\overline{p}_j)/\sqrt{\lambda_i \lambda_j} = \frac{\lambda_j}{\sqrt{\lambda_i \lambda_j}}\, \overline{p}_i^T \overline{p}_j = 0$$
Proof: Corollary 7.2.1 ensures that for any ordered set p1 . . . pm of eigenvectors of B T B,
an ordered set q 1 . . . q m of eigenvectors of BB T exists, so that the following is satisfied for
each i ∈ {1 . . . m}:
$$\overline{q}_i \sqrt{\lambda_i} = B\overline{p}_i$$
One can write the m vector-centric relationships as a single matrix-centric relationship:

$$[\overline{q}_1, \ldots, \overline{q}_m]\Sigma = B[\overline{p}_1 \ldots \overline{p}_m]$$

Here, Σ is an m × m diagonal matrix whose (i, i)th entry is $\sqrt{\lambda_i}$. One can write the above
relationship in the following form:

$$Q\Sigma = BP$$
Here, P is an m×m orthogonal matrix with columns containing p1 . . . pm , and Q is an m×m
orthogonal matrix with columns containing q 1 . . . q m . Post-multiplication of both sides with
P T and setting P P T = I yields QΣP T = B. Therefore, a singular value decomposition of
a square matrix B always exists.
Consider the following matrix B and its derived scatter matrix $B^T B$:

$$B = \begin{bmatrix} 14 & 8 & -6 \\ 21 & 11 & 14 \\ 16 & -6 & 2 \end{bmatrix}, \qquad B^T B = \begin{bmatrix} 893 & 247 & 242 \\ 247 & 221 & 94 \\ 242 & 94 & 236 \end{bmatrix}$$
Upon performing this multiplication, we obtain a matrix Q whose columns are proportional
to $[1, 2, 1]^T$, $[1, -1, 1]^T$, and $[-1, 0, 1]^T$, although the matrix Q is obtained in terms of unit-normalized
columns. Therefore, the SVD of matrix B can be expressed as $Q\Sigma P^T$ as follows:

$$B = \underbrace{\begin{bmatrix} 1/\sqrt{6} & 1/\sqrt{3} & -1/\sqrt{2} \\ 2/\sqrt{6} & -1/\sqrt{3} & 0 \\ 1/\sqrt{6} & 1/\sqrt{3} & 1/\sqrt{2} \end{bmatrix}}_{Q} \underbrace{\begin{bmatrix} 4\sqrt{66} & 0 & 0 \\ 0 & 9\sqrt{2} & 0 \\ 0 & 0 & 2\sqrt{33} \end{bmatrix}}_{\Sigma} \underbrace{\begin{bmatrix} 3/\sqrt{11} & 1/\sqrt{6} & 1/\sqrt{66} \\ 1/\sqrt{11} & -1/\sqrt{6} & -7/\sqrt{66} \\ 1/\sqrt{11} & -2/\sqrt{6} & 4/\sqrt{66} \end{bmatrix}^T}_{P^T}$$
One important point is that we derived Q from P , rather than independently diagonalizing
BB T and B T B, and doing the latter might lead to incorrect results because of sign depen-
dence between Q and P . For example, one could use −Q and −P as the decomposition
matrices without changing the product of the matrices. However, we cannot use −Q and P
to create an SVD. The signs of matching pairs of singular vectors are also interdependent.
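The derivation of Q from P can be carried out numerically. The sketch below (an illustrative helper, assuming all eigenvalues of $B^T B$ are non-zero) reproduces the worked example above:

```python
import numpy as np

def svd_from_gram(B):
    """Sketch of the construction in the proof: diagonalize B^T B to obtain P
    and the singular values, then derive Q via q_i = B p_i / sqrt(lambda_i).
    Assumes all eigenvalues are non-zero (i.e., B is invertible)."""
    lam, P = np.linalg.eigh(B.T @ B)     # ascending eigenvalues
    order = np.argsort(lam)[::-1]        # non-increasing order, by convention
    lam, P = lam[order], P[:, order]
    sigma = np.sqrt(np.maximum(lam, 0))
    Q = (B @ P) / sigma                  # q_i = B p_i / sqrt(lambda_i), column-wise
    return Q, sigma, P
```

Because Q is derived from P (rather than from an independent diagonalization of $BB^T$), the sign interdependence between matching singular vectors is handled automatically, and $Q\Sigma P^T$ reconstructs B exactly.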
7.2. SVD: A LINEAR ALGEBRA PERSPECTIVE 303
Note that the above matrix has no diagonalization, since it is nilpotent (see Exercise 26
of Chapter 3). However, it has a valid singular value decomposition. Furthermore, even
though this matrix only has zero eigenvalues, it has a non-zero singular value of 7, con-
taining one of the key scaling factors of the transformation. In fact, SVD has the neat
property of relating arbitrary (square) matrices to positive semidefinite ones with the use of
polar decomposition, which explicitly separates out the rotreflection matrix from the scaling
(positive semidefinite) matrix:
Lemma 7.2.2 (Polar Decomposition) Any square matrix can be expressed in the form
U S, where U is an orthogonal matrix, and S is a symmetric positive semidefinite matrix.
Proof: One can write the SVD of a square matrix as QΣP T = (QP T )(P ΣP T ). The matrix
QP T can be set to U , and it is orthogonal because of the closure of orthogonal matrices
under multiplication (cf. Chapter 2). Furthermore, S can be set to P ΣP T , which is positive
semidefinite because of the nonnegativity of Σ.
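A quick numerical illustration of this construction follows (a sketch on a random matrix; note that `numpy.linalg.svd` returns Q, the singular values, and $P^T$):

```python
import numpy as np

# Polar decomposition via SVD, following the proof: B = (Q P^T)(P Sigma P^T).
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
Q, s, Pt = np.linalg.svd(B)       # Q, singular values, and P^T
U = Q @ Pt                        # orthogonal (rotreflection) factor
S = Pt.T @ np.diag(s) @ Pt        # symmetric positive semidefinite factor
```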
The polar decomposition is geometrically insightful, because it tells us that every ma-
trix multiplication causes an anisotropic scaling along orthogonal directions with nonnega-
tive scale factors, followed by rotreflection. When the rotreflection component is missing,
the resulting matrix is positive semidefinite. The matrix U is also the nearest orthogonal
matrix to B, just as $[\cos(\theta), \sin(\theta)]^T$ is the nearest unit vector to the point with polar
coordinates $r[\cos(\theta), \sin(\theta)]^T$.
Problem 7.2.1 Let B be a symmetric and square matrix, which is negative semidefinite.
Show that the singular value decomposition of B is of the form B = QΣP T , where Q = −P .
The important point of the previous exercise is to emphasize the fact that the singular
values need to be nonnegative. We provide another exercise to emphasize this fact:
Problem 7.2.2 Suppose that somebody gave you an m × m matrix B and a decomposition
of the form B = QΣP T , where Q and P are both orthogonal matrices of size m×m, and Σ is
an m × m diagonal matrix. However, you are told that some of the entries of Σ are negative.
Discuss how you would adjust the decomposition in order to convert it into a standard form
of singular value decomposition.
Problem 7.2.3 Suppose that the eigendecomposition of a 3 × 3 symmetric matrix A can
be written as follows:
$$A = V \Delta V^T = \begin{bmatrix} v_{11} & v_{12} & v_{13} \\ v_{21} & v_{22} & v_{23} \\ v_{31} & v_{32} & v_{33} \end{bmatrix} \begin{bmatrix} 5 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix} \begin{bmatrix} v_{11} & v_{21} & v_{31} \\ v_{12} & v_{22} & v_{32} \\ v_{13} & v_{23} & v_{33} \end{bmatrix}$$
What is the singular value decomposition of this matrix?
The number of non-zero singular values yields the rank of the original matrix.
Lemma 7.2.3 Let B be an m × m matrix with rank k ≤ m. Let the singular value decom-
position of B be B = QΣP T , where Q, Σ, and P T are m × m matrices. Then, exactly m − k
singular values must be zeros.
304 CHAPTER 7. SINGULAR VALUE DECOMPOSITION
The matrices Q, P , and Σ are all of size m × m, as is normally the case for square SVD.
The matrix P1 is of size d × d, and Q1 is of size n × n. The matrices P2 and Q2 are of
sizes (m − d) × (m − d) and (m − n) × (m − n), respectively. The matrix Σ1 is of size
min{n, d} × min{n, d}.
Proof Sketch: Consider the first case above where B = [D 0] and d < n. In such a case,
B T B will only have a single non-zero block of size d × d in the upper-left corner. As a result,
it will have at most d non-zero eigenvalues, the square-roots of which can be used to create
the d × d diagonal matrix Σ1 . The eigenvectors of its upper-left block will be contained in
the d × d matrix P1 . Let the (n − d) × (n − d) matrix P2 be created by stacking up any set
of (n − d) orthonormal column vectors in $\mathbb{R}^{(n-d)}$. It remains to show that if the matrices P and
Σ are constructed from P1 , P2 , and Σ1 using the block structure shown on the right-hand
side of the first relationship above, then (i) P will contain both the non-zero and zero
eigenvectors of $B^T B$, and (ii) the matrix $\Sigma^2$ contains the eigenvalues of $B^T B$. This can
be achieved by showing that the ith column of P is a right-eigenvector of $B^T B$ with the
corresponding eigenvalue contained in the ith diagonal entry of $\Sigma^2$. The result holds because
for i ≤ d, the eigenvectors and eigenvalues are inherited from eigenvectors of the upper-left
block of B T B with size d×d. These eigenvectors are contained in P1 and the padding simply
adds (n − d) zero values both to the ith column of B T B and to the ith column of P . For
i > d, any n-dimensional vector with zero values in the first d components can be shown
to be an eigenvector of B T B (with 0 eigenvalue) because of the block structure of B T B.
Furthermore, the matrix P can be shown to be orthogonal because both of its blocks are
orthogonal matrices. The matrix Q can be extracted from B, Σ, and P using the methods
discussed in the proof of Theorem 7.2.1. Therefore, one can create an SVD respecting the
block diagonal structure in the first case of the statement of the lemma (when n > d). The
second case for n < d can be proven using a similar argument.
Instead of using singular value decomposition on the padded matrix B, one can directly
decompose the matrix D by pulling out portions of the block structure of the padded SVD:

D = Q \begin{bmatrix} \Sigma_1 \\ 0 \end{bmatrix} P_1^T    [When d < n]

D = Q_1 \begin{bmatrix} \Sigma_1 & 0 \end{bmatrix} P^T    [When n < d]
Both Q and P are square, and only the n × d diagonal matrix Σ is rectangular in both
relationships. The square submatrix Σ1 is of size min{n, d} × min{n, d}, and the n × d
matrix Σ is obtained by padding it with |n − d| zero rows or columns. Unlike the SVD
of B, the right singular vectors and left singular vectors of D are no longer of the same
dimensionality. The left singular vector matrix is always of size n × n, whereas the right
singular vector matrix is always of size d × d. This is the standard form of rectangular
singular value decomposition. However, other variations of singular value decomposition
are even more economical, and will be discussed in the next section.
D = QΣP^T
Here, Q is an n × n matrix with orthonormal columns containing the left singular vectors, Σ
is an n × d rectangular “diagonal” matrix with diagonal entries containing the nonnegative
singular values in non-increasing order, and P is a d × d matrix with orthonormal columns
containing the right singular vectors.
We present a number of important properties of the right singular vectors and left singular
vectors below. These properties follow directly from the discussion in the previous section:
1. The n columns of Q, which are referred to as the left singular vectors, correspond
to the n eigenvectors of the n × n matrix DD^T. Note that these eigenvectors are
orthonormal because DD^T is a symmetric matrix.

2. The d columns of P, which are referred to as the right singular vectors, correspond to the
d eigenvectors of the d × d matrix D^T D. These eigenvectors are orthonormal because
D^T D is a symmetric matrix.
306 CHAPTER 7. SINGULAR VALUE DECOMPOSITION
3. The diagonal entries of the n × d rectangular diagonal matrix Σ contain the singular
values, which are the square-roots of the min{n, d} largest eigenvalues of D^T D or
DD^T.
4. By convention, the columns of Q, P , and Σ are ordered by non-increasing singular
value.
The above form of singular value decomposition is also referred to as full singular value
decomposition. Note that either Q or P will be larger than the original matrix D when
n ≠ d, and the n × d matrix Σ is of the same size as the original matrix. In fact, the larger
of Q and P will contain |n − d| unmatched singular vectors that are not represented in the
min{n, d} diagonal entries of Σ. This would seem wasteful.
A more economical form of the decomposition is economy singular value decomposition,
which can be derived from the spectral decomposition of the matrix. Let σ_{rr} be the (r, r)th
entry of Σ, q_r be the rth column of Q, and p_r be the rth column of P. Then, the matrix
product QΣP^T can be decomposed into the sum of rank-1 matrices:

D = Q\Sigma P^T = \sum_{r=1}^{\min\{n,d\}} \sigma_{rr} \, q_r p_r^T    (7.1)
The right-hand side of the above result is obtained by simply applying one of the funda-
mental ways of characterizing matrix multiplication (cf. Lemma 1.2.1 of Chapter 1) to the
product of the matrices (QΣ) and P T . The above form of the decomposition is also referred
to as the spectral decomposition of the matrix D. Each of the min{n, d} terms (i.e., the
n × d matrix σ_{rr} q_r p_r^T) in the above summation is referred to as a latent component of the
original n × d matrix D, because it represents one of the independent, hidden (or latent)
pieces of the matrix D. Note that each q_r p_r^T is a rank-1
matrix of size n × d, because it is obtained from the product of an n-dimensional column
vector with a d-dimensional row vector. The above form of the spectral decomposition pro-
vides the insight necessary to propose a form of SVD, referred to as economy singular value
decomposition. The idea is that each term of Equation 7.1 can be used to create one of the
p = min{n, d} columns of each of the decomposed matrices:
Definition 7.2.2 (Economy Singular Value Decomposition) Consider an n × d ma-
trix D with real-valued entries, where p = min{n, d}. Such a matrix can always be factorized
into three matrices as follows:
D = QΣP T
Here, Q is an n × p matrix with orthonormal columns containing the left singular vectors,
Σ is a p × p diagonal matrix with diagonal entries containing nonnegative singular values
in non-increasing order, and P is a d × p matrix with orthonormal columns containing the
right singular vectors.
One of the two matrices Q and P may no longer be square, as we are shedding unmatched
singular vectors from the larger of the two matrices in full singular value decomposition.
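The difference between the full and economy variants can be checked numerically. The following is a minimal sketch (not from the book's text) using NumPy, whose `full_matrices` flag controls whether the unmatched singular vectors are retained:

```python
import numpy as np

# Compare the factor shapes produced by full vs. economy SVD.
rng = np.random.default_rng(0)
n, d = 7, 4
D = rng.standard_normal((n, d))

# Full SVD: Q is n x n, P^T is d x d, and Sigma is rectangular (n x d).
Q, sigma, Pt = np.linalg.svd(D, full_matrices=True)
assert Q.shape == (n, n) and Pt.shape == (d, d) and sigma.shape == (min(n, d),)

# Economy SVD: the larger factor is trimmed to n x p with p = min(n, d).
Qe, sigma_e, Pte = np.linalg.svd(D, full_matrices=False)
p = min(n, d)
assert Qe.shape == (n, p) and Pte.shape == (p, d)

# Both variants reconstruct D exactly (up to floating-point error).
Sigma_full = np.zeros((n, d))
np.fill_diagonal(Sigma_full, sigma)
assert np.allclose(Q @ Sigma_full @ Pt, D)
assert np.allclose(Qe @ np.diag(sigma_e) @ Pte, D)
```

Note that the unmatched columns of the full variant multiply only zero entries of Σ, which is why discarding them loses nothing.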
One can further reduce the size of the decomposition by observing that some of the
min{n, d} values of σrr might be zero. Such a situation will occur in the case of a matrix
D with rank k that is strictly smaller than min{n, d}. In such cases, one can keep only the
k < min{n, d} strictly positive singular values without affecting the sum. Assume that the
singular values are ordered by non-increasing value, so that σ11 ≥ σ22 ≥ . . . ≥ σkk . In such
a case, we can write the above decomposition as follows:
D = \sum_{r=1}^{k} \sigma_{rr} \, q_r p_r^T    (7.2)
Note that the above summation uses all the k strictly positive singular values. This leads
to a slightly different form of singular value decomposition, which is referred to as compact
singular value decomposition or reduced singular value decomposition. Compact singular
value decomposition is defined as follows:
Definition 7.2.3 (Compact Singular Value Decomposition) Consider an n × d ma-
trix D with real-valued entries, which has rank k ≤ min{n, d}. Such a matrix can always be
factorized into three matrices as follows:
D = QΣP^T

Here, Q is an n × k matrix with orthonormal columns containing the left singular vectors,
Σ is a k × k diagonal matrix containing the strictly positive singular values, and P is a
d × k matrix with orthonormal columns containing the right singular vectors.
Instead of only dropping the additive components for which σrr = 0, we might also drop
those components for which σrr is very small. In other words, we keep the top-k values of σrr
in the decomposition (like compact SVD), except that k might be smaller than the number
of non-zero singular values. In such a case, we obtain an approximation Dk of the original
matrix D, which is also referred to as the rank-k approximation of the n × d matrix D:
D \approx D_k = \sum_{r=1}^{k} \sigma_{rr} \, q_r p_r^T    (7.4)
Note that Equation 7.4 for truncated singular value decomposition is the same as that for
compact singular value decomposition (cf. Equation 7.2); the only difference is that the
value of k is no longer chosen to ensure zero information loss. Consequently, we can express
truncated singular value decomposition as a matrix factorization as follows:
D \approx D_k = Q_k \Sigma_k P_k^T    (7.5)
Here, Q_k is an n × k matrix with columns containing the top-k left singular vectors, Σ_k is
a k × k diagonal matrix containing the top-k singular values, and P_k is a d × k matrix with
columns containing the top-k right singular vectors. It is not difficult to see that the matrix
D_k is of rank k, and therefore it is viewed as a low-rank approximation of D.
Almost all forms of matrix factorization, including singular value decomposition, are
low-rank approximations of the original matrix. Truncated singular value decomposition
can retain a surprisingly large level of accuracy using values of k that are much smaller
than min{n, d}. This is because only a very small proportion of the singular values are
large in real-world matrices. In such cases, D_k becomes an excellent approximation of D by
retaining only the components with large singular values.
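The truncation step and its error can be sketched numerically. The following NumPy fragment (an illustration, not the book's code) builds D_k from the top-k triplets and checks that the squared error equals the energy in the discarded singular values:

```python
import numpy as np

# Rank-k truncation: keep the top-k singular triplets and reconstruct
# D_k = Q_k Sigma_k P_k^T.
rng = np.random.default_rng(1)
D = rng.standard_normal((50, 30))
k = 5

Q, sigma, Pt = np.linalg.svd(D, full_matrices=False)
Dk = Q[:, :k] @ np.diag(sigma[:k]) @ Pt[:k, :]

# The truncated matrix has rank exactly k.
assert np.linalg.matrix_rank(Dk) == k

# The squared Frobenius error equals the sum of the squared
# discarded singular values.
error = np.linalg.norm(D - Dk, 'fro') ** 2
assert np.allclose(error, np.sum(sigma[k:] ** 2))
```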
A useful property of truncated singular value decomposition is that it is also possible
to create a lower dimensional representation of the data by changing the basis to Pk , so
that each d-dimensional data point is now represented in only k dimensions. In other words,
we change the axes so that the basis vectors correspond to the columns of P_k. This
transformation is achieved by post-multiplying the data matrix D with P_k to obtain the n × k
matrix U_k. By post-multiplying Equation 7.5 with P_k and using P_k^T P_k = I_k, we obtain the
following:

U_k = D P_k = Q_k \Sigma_k    (7.6)
Each row of Uk contains a reduced k-dimensional representation of the corresponding row in
D. Therefore, we can obtain a reduced representation of the data either by post-multiplying
the data matrix with the matrix containing the dominant right singular vectors (i.e., using
DPk ), or we can simply scale the dominant left singular vectors with the singular values
(i.e., using Qk Σk ). Both these types of methods are used in real applications, depending on
whether n or d is larger.
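The equivalence of the two routes to the reduced representation can be verified directly. This is a small sketch (illustrative, not from the text) comparing the projection D P_k with the scaled left singular vectors Q_k Σ_k:

```python
import numpy as np

# Two equivalent ways of obtaining the reduced k-dimensional
# representation U_k of the rows of D.
rng = np.random.default_rng(2)
D = rng.standard_normal((40, 10))
k = 3

Q, sigma, Pt = np.linalg.svd(D, full_matrices=False)
Pk = Pt[:k, :].T                      # d x k: top-k right singular vectors
Uk_project = D @ Pk                   # projection of the rows of D
Uk_scale = Q[:, :k] * sigma[:k]       # Q_k Sigma_k via column-wise scaling

assert np.allclose(Uk_project, Uk_scale)
```

In practice one uses whichever route is cheaper, depending on whether n or d is larger (as the text notes).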
The reduction in dimensionality can be very significant in some domains such as images
and text. Image data are often represented by matrices of numbers corresponding to pixels.
For example, an image corresponding to an 807 × 611 matrix of numbers is illustrated
in Figure 7.1(a). Only the first 75 singular values are represented in Figure 7.1(b). The
remaining 611 − 75 = 536 singular values are not shown because they are very small. The
[Figure: (a) an 807 × 611 image; (b) plot of the magnitude of the first 75 singular values against their rank]
Figure 7.1: The rapid decay in singular values for an 807 × 611 image
rapid decay in singular values is quite evident in the figure. It is this rapid decay that
enables effective truncation without loss of accuracy. In the text domain, each document is
represented as a row in a matrix with as many dimensions as the number of words. The
value of each entry is the frequency of the word in the corresponding document. Note that
this matrix is sparse, which is a standard use-case for SVD. The word-frequency matrix D
might have n = 10^6 and d = 10^5. In such cases, truncated SVD might often yield excellent
approximations of the matrix by using k ≈ 400. This represents a drastic level of reduction
in the dimensionality of representation. The use of SVD in text is also referred to as latent
semantic analysis because of its ability to discover latent (hidden) topics represented by the
rank-1 matrices of the spectral decomposition.
The squared Frobenius norm is a special case of the Frobenius inner product. The Frobenius
orthogonality of matrices can be viewed in a similar way to the pairwise orthogonality of
vectors by simply converting each matrix into a vector representation. One simply flattens
all the entries of each matrix into a vector and computes the dot product between them.
Many of the norm properties of sums of pairwise orthogonal vectors are also inherited by
matrices. This is not particularly surprising because one can view the set of all n×d matrices
as a vector space in Rn×d and an inner product that behaves similarly to the dot product.
For example, the Frobenius inner product also satisfies the Pythagorean theorem:
Lemma 7.2.5 Let A and B be two n × d matrices that are Frobenius orthogonal. Then, the
squared Frobenius norm of (A + B) can be expressed in terms of the Frobenius norms of A
and B as follows:
\|A + B\|_F^2 = \|A\|_F^2 + \|B\|_F^2
Proof: The above result is relatively easy to show by expressing the squared Frobenius
norm in terms of the trace of the matrix:

\|A + B\|_F^2 = \text{tr}[(A + B)^T (A + B)] = \text{tr}(A^T A) + \text{tr}(B^T B) + 2\,\text{tr}(A^T B) = \|A\|_F^2 + \|B\|_F^2

The cross-term tr(A^T B) is the Frobenius inner product of A and B, which vanishes because A and B are Frobenius orthogonal.
Corollary 7.2.2 Let A1 . . . Ak be any set of k matrices of the same size that are all Frobe-
nius orthogonal to one another. Then, the squared Frobenius norm of the sum of these
matrices can be expressed in terms of the Frobenius norms of the individual matrices as
follows:
\left\| \sum_{i=1}^{k} A_i \right\|_F^2 = \sum_{i=1}^{k} \|A_i\|_F^2
One can generalize the above result to the case where a weighted sum of the matrices is
used. We leave the proof of the generalized result as an exercise:
Corollary 7.2.3 Let A1 . . . Ak be any set of k matrices of the same size that are all Frobe-
nius orthogonal to one another. Then, the Frobenius norm of a linear combination of these
matrices can be expressed in terms of the Frobenius norms of the individual matrices as
follows:
\left\| \sum_{i=1}^{k} \sigma_i A_i \right\|_F^2 = \sum_{i=1}^{k} \sigma_i^2 \|A_i\|_F^2
Note that we used the orthogonality of q_i and q_j in the above proof, but we did not use the
orthogonality of p_i and p_j. This lemma can be shown to be true under the weaker condition
that either of the vector pairs (q_i, q_j) and (p_i, p_j) is orthogonal.
The matrix q_i p_i^T in the spectral decomposition is the outer-product of two vectors with unit
norm. The Frobenius norm of such a matrix can be shown to be 1.

Lemma 7.2.7 Let p_i and q_i be a pair of vectors with unit norm. The Frobenius norm of
the rank-1 matrix of the form D_i = q_i p_i^T is 1.

Proof: The squared Frobenius norm of D_i can be expressed in terms of the trace as follows:

\|D_i\|_F^2 = \text{tr}(D_i^T D_i) = \text{tr}(p_i q_i^T q_i p_i^T) = \text{tr}(p_i p_i^T) = p_i^T p_i = 1

Here, we used the fact that q_i^T q_i = 1, together with the identity tr(p_i p_i^T) = p_i^T p_i and the unit norm of p_i.
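Both this unit-norm property and the Frobenius-orthogonality results above can be checked numerically. The following is an illustrative sketch (not from the text) using the spectral components of a random matrix:

```python
import numpy as np

# Check: each spectral component q_r p_r^T has Frobenius norm 1, the
# components are pairwise Frobenius orthogonal, and the weighted-sum
# Pythagorean identity of Corollary 7.2.3 holds.
rng = np.random.default_rng(3)
D = rng.standard_normal((8, 5))
Q, sigma, Pt = np.linalg.svd(D, full_matrices=False)

components = [np.outer(Q[:, r], Pt[r, :]) for r in range(5)]
for Ci in components:
    assert np.allclose(np.linalg.norm(Ci, 'fro'), 1.0)  # unit Frobenius norm
for i in range(5):
    for j in range(i + 1, 5):
        # Frobenius inner product <A, B> = tr(A^T B) vanishes for i != j
        assert np.allclose(np.trace(components[i].T @ components[j]), 0.0)

# || sum_r sigma_r C_r ||_F^2 = sum_r sigma_r^2 ||C_r||_F^2 = sum_r sigma_r^2
lhs = np.linalg.norm(sum(s * C for s, C in zip(sigma, components)), 'fro') ** 2
rhs = np.sum(sigma ** 2)
assert np.allclose(lhs, rhs)
```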
Let us now take a moment to examine the spectral decomposition of the matrix created
by truncated SVD. We replicate the spectral decomposition of rank-k truncated SVD from
Equation 7.4 here:
D \approx D_k = Q_k \Sigma_k P_k^T = \sum_{r=1}^{k} \sigma_{rr} \, q_r p_r^T    (7.7)
Here, it is evident that the spectral decomposition on the right-hand side is a sum of
pairwise Frobenius orthogonal matrices. Each of these matrices has a Frobenius norm of 1, but they
are weighted by σ_{rr}. Therefore, taking the squared Frobenius norm of all expressions in Equation 7.7,
we obtain the following (based on Corollary 7.2.3):

\|D\|_F^2 \approx \|D_k\|_F^2 = \left\| \sum_{r=1}^{k} \sigma_{rr} \, q_r p_r^T \right\|_F^2 = \sum_{r=1}^{k} \sigma_{rr}^2 \underbrace{\|q_r p_r^T\|_F^2}_{=1} = \sum_{r=1}^{k} \sigma_{rr}^2
Therefore, we obtain the result that the squared Frobenius norm of the rank-k approxima-
tion is equal to the sum of the squares of the top-k singular values. The squared Frobenius
norm of a matrix is referred to as its energy (cf. Section 1.2.6 of Chapter 1). Therefore, the
lost energy is equal to the sum of the squares of the smallest singular values (excluding the
top-k singular values), which is also a measure of the squared error of the approximation. In
fact, Section 7.3 shows that SVD provides a rank-k approximation of the matrix D, which
has the smallest squared error among the universe of all possible rank-k approximations.
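The energy bookkeeping above can be confirmed directly. The following is a minimal NumPy sketch (illustrative, not the book's code):

```python
import numpy as np

# Energy accounting for truncated SVD: the energy of D_k is the sum of
# the top-k squared singular values, and the lost energy is the rest.
rng = np.random.default_rng(4)
D = rng.standard_normal((20, 12))
k = 4

Q, sigma, Pt = np.linalg.svd(D, full_matrices=False)
Dk = Q[:, :k] @ np.diag(sigma[:k]) @ Pt[:k, :]

# Retained energy = sum of top-k squared singular values.
assert np.allclose(np.linalg.norm(Dk, 'fro') ** 2, np.sum(sigma[:k] ** 2))
# Lost energy = sum of the remaining squared singular values.
assert np.allclose(np.linalg.norm(D - Dk, 'fro') ** 2, np.sum(sigma[k:] ** 2))
# Total energy of D = sum of all squared singular values.
assert np.allclose(np.linalg.norm(D, 'fro') ** 2, np.sum(sigma ** 2))
```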
[Figure: 3-dimensional scatter plot of the data points together with the three eigenvectors of D^T D]
Figure 7.2: Most of the energy of the data is retained in the projection along the one or two
largest eigenvectors of the 3 × 3 matrix D^T D
[Figure: an origin-centered ellipsoid]
Figure 7.3: SVD models the data to be distributed in an ellipsoid centered at the origin
The frequencies of the words in each document of the data matrix D are illustrated below:
D = \begin{array}{l|cccccc}
 & \text{lion} & \text{tiger} & \text{cheetah} & \text{jaguar} & \text{porsche} & \text{ferrari} \\
\hline
\text{Document-1} & 2 & 2 & 1 & 2 & 0 & 0 \\
\text{Document-2} & 2 & 3 & 3 & 3 & 0 & 0 \\
\text{Document-3} & 1 & 1 & 1 & 1 & 0 & 0 \\
\text{Document-4} & 2 & 2 & 2 & 3 & 1 & 1 \\
\text{Document-5} & 0 & 0 & 0 & 1 & 1 & 1 \\
\text{Document-6} & 0 & 0 & 0 & 2 & 1 & 2 \\
\end{array}
Note that this matrix represents topics related to both cars and cats. The first three doc-
uments are primarily related to cats, the fourth is related to both, and the last two are
primarily related to cars. The word “jaguar” is ambiguous because it could correspond to
either a car or a cat. We perform an SVD of rank-2 to capture the two latent components
in the collection, which is as follows:
D \approx Q_2 \Sigma_2 P_2^T

\approx \begin{pmatrix}
-0.41 & 0.17 \\
-0.65 & 0.31 \\
-0.23 & 0.13 \\
-0.56 & -0.20 \\
-0.10 & -0.46 \\
-0.19 & -0.78
\end{pmatrix}
\begin{pmatrix} 8.4 & 0 \\ 0 & 3.3 \end{pmatrix}
\begin{pmatrix}
-0.41 & -0.49 & -0.44 & -0.61 & -0.10 & -0.12 \\
0.21 & 0.31 & 0.26 & -0.37 & -0.44 & -0.68
\end{pmatrix}

= \begin{pmatrix}
1.55 & 1.87 & 1.67 & 1.91 & 0.10 & 0.04 \\
2.46 & 2.98 & 2.66 & 2.95 & 0.10 & -0.03 \\
0.89 & 1.08 & 0.96 & 1.04 & 0.01 & -0.04 \\
1.81 & 2.11 & 1.91 & 3.14 & 0.77 & 1.03 \\
0.02 & -0.05 & -0.02 & 1.06 & 0.74 & 1.11 \\
0.10 & -0.02 & 0.04 & 1.89 & 1.28 & 1.92
\end{pmatrix}
The reconstructed matrix is a very good approximation of the original data matrix D. One
can also obtain a 2-dimensional embedding of each row of D as DP2 = Q2 Σ2 :
D P_2 = Q_2 \Sigma_2 \approx \begin{pmatrix}
-3.46 & 0.57 \\
-5.44 & 1.03 \\
-1.95 & 0.41 \\
-4.74 & -0.66 \\
-0.83 & -1.49 \\
-1.57 & -2.54
\end{pmatrix}
It is clear that the reduced representations of the first three rows are quite similar, which
is not surprising; after all, the corresponding documents belong to similar topics. At the
same time, the reduced representations of the last two rows are also similar. The fourth row
seems to be somewhat different because it contains a combination of two topics. Therefore,
the latent components seem to capture the hidden “concepts” in the data matrix. In this
case, these hidden concepts correspond to cats and cars.
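The worked example above can be reproduced programmatically. The following NumPy sketch uses the same document-term matrix; note that singular vectors are unique only up to reflection, so the computed factors may differ in sign from the printed ones:

```python
import numpy as np

# Document-term matrix from the example (rows = documents, columns =
# lion, tiger, cheetah, jaguar, porsche, ferrari).
D = np.array([
    [2, 2, 1, 2, 0, 0],   # Document-1
    [2, 3, 3, 3, 0, 0],   # Document-2
    [1, 1, 1, 1, 0, 0],   # Document-3
    [2, 2, 2, 3, 1, 1],   # Document-4
    [0, 0, 0, 1, 1, 1],   # Document-5
    [0, 0, 0, 2, 1, 2],   # Document-6
], dtype=float)

Q, sigma, Pt = np.linalg.svd(D, full_matrices=False)
D2 = Q[:, :2] @ np.diag(sigma[:2]) @ Pt[:2, :]   # rank-2 reconstruction
embedding = D @ Pt[:2, :].T                      # 2-d embedding of each row

# The rank-2 error equals the energy in the discarded singular values.
assert np.allclose(np.linalg.norm(D - D2, 'fro') ** 2, np.sum(sigma[2:] ** 2))

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Two "cat" documents are closer in the embedding than a cat document
# and a car document.
assert cos(embedding[0], embedding[1]) > cos(embedding[0], embedding[5])
```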
[Figure: the n × d data matrix D factorized into Q_k (n × k, top-k basis vectors of the columns of D), Σ_k (k × k, importance of latent components), and P_k^T (k × d, top-k basis vectors of the rows of D)]
Figure 7.4: Interpretation of SVD in terms of the basis vectors of rows and columns of D
The matrix Q contains the n-dimensional basis vectors of the columns of D in its columns. In
other words, SVD simultaneously finds the basis sets of both the (transposed) rows and the
columns of the data matrix. The square of the ith diagonal entry of the matrix Σ provides
a quantification of the energy of the 1-dimensional data set D p_i obtained by projecting the data
along the ith right singular vector. Directions with larger scatter obviously retain more
information about the data set. For example, when the singular value σ_{ii} is small, each
value in D p_i tends to be close to zero. When truncated SVD is used instead of compact
SVD, we are restricting ourselves to finding approximate basis sets rather than exact basis
sets. In other words, we can use these basis sets to represent all the rows in the data
matrix approximately, but not exactly. This ability of truncated SVD to simultaneously
find approximate bases for the row space and column space is shown in Figure 7.4. Note
that each of the k pieces σ_{ii} q_i p_i^T represents a portion of D corresponding to a latent (or
hidden) component of the matrix. Truncated SVD, therefore, represents a matrix in terms
of its dominant hidden components.
SVD can also be interpreted from a transformation-centric point of view, especially
when it is performed on square matrices. Consider a square d × d matrix A, which is used
to transform the d-dimensional rows of the n × d data matrix D into the d-dimensional
rows of the n × d matrix DA. One can replace A with its SVD QΣP T , which corresponds
to a sequence of rotation/reflection, anisotropic scaling, and another rotation/reflection.
This seems very similar to what happens in diagonalization of positive semidefinite ma-
trices. The only difference is that the two rotations/reflections cancel each other out
in positive semidefinite matrices, whereas they do not cancel each other out in SVD.
SVD implies that any linear transformation can be expressed as a combination of rota-
tion/reflection and scaling. Another way of viewing this point is that if we have an n × d
data matrix D, whose scatter plot is an origin-centered ellipsoid in d-dimensions, and
we multiply it with an arbitrary d × d matrix A to create the matrix DA, the result-
ing scatter plot will still be a re-scaled and re-oriented ellipsoid! Both the left and right
singular vectors will affect the final orientation, and the singular values will affect the
scaling. An example of a transformation of a 2-dimensional scatter plot is illustrated in
Figure 7.5.
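The rotate-scale-rotate view of a square matrix can be checked numerically. This is an illustrative sketch (not the book's code) that factors an arbitrary 2 × 2 matrix via SVD:

```python
import numpy as np

# Any square matrix A factors into an orthogonal map (rotation or
# reflection), an axis-aligned anisotropic scaling, and another
# orthogonal map.
rng = np.random.default_rng(5)
A = rng.standard_normal((2, 2))

Q, sigma, Pt = np.linalg.svd(A)

# Q and P are orthogonal, and the singular values are nonnegative.
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Pt @ Pt.T, np.eye(2))
assert np.all(sigma >= 0)
assert np.allclose(Q @ np.diag(sigma) @ Pt, A)

# Applying A to the rows of a data matrix D: D A = ((D Q) Sigma) P^T,
# i.e., rotate/reflect, scale anisotropically, then rotate/reflect again.
D = rng.standard_normal((100, 2))
assert np.allclose(D @ A, ((D @ Q) * sigma) @ Pt)
```

Because each stage maps an origin-centered ellipsoid to another origin-centered ellipsoid, the composite transformation does too, which matches the scatter-plot intuition in the text.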
Both the aforementioned interpretations are rooted in linear algebra. SVD can also be
interpreted from an optimization-centric point of view, wherein it tries to find an approx-
imate factorization that preserves the maximum energy from the data set. In Section 7.3,
we will explore this optimization-centric interpretation, which is a gateway to more general
forms of matrix factorization (cf. Chapter 8).
[Figure 7.5: a 2-dimensional scatter plot of D transformed by multiplication with a 2 × 2 matrix A]
One can generalize the uniqueness result of Lemma 7.2.8 to rectangular singular value decomposition, as
long as we use the compact variant of singular value decomposition in which only non-zero
singular values are included.
In addition, truncated SVD will also be unique, as long as the retained singular values in
the decomposition are distinct. Truncated singular value decomposition is very likely to
be unique in real applications, because most of the (exact or approximate) ties in singular
values often occur at the lower-order singular values at or near zero. The truncation process
often removes most of these singular values.
D \approx U V^T    (7.9)
If the original matrix D has rank larger than k, the above decomposition is only approximate
(like truncated SVD). One can convert any three-way factorization like SVD into a two-way
factorization as follows:
D \approx \underbrace{(Q\Sigma)}_{U} \underbrace{P^T}_{V^T}
In the case of SVD, it is natural to absorb the diagonal matrix within Q, because U = QΣ
provides the coordinates of the data point in the k-dimensional basis space corresponding to
the columns of V = P . When converting a three-way decomposition into a two-way decom-
position, the general preference is to keep the normalization of the right factor and absorb
the diagonal matrix in the left factor. However, the reality is that the 2-way decomposition
has a much lower level of uniqueness as compared to 3-way decomposition. For example,
one could absorb Σ in V T instead of U . Furthermore, one could scale U and V in all sorts
of ways without affecting the product U V T . For example, if we multiply each entry of U
by 2, we can divide each entry of V by 2 to get the same product U V T . Furthermore, we
can apply this trick to just a particular (say, rth) column of each of U and V to get the
same result. In this sense, two-way factorizations are often ambiguously defined, unless one
takes care to have clear normalization rules for one of the factors. Nevertheless, two-way
factorizations are extremely useful in other forms of dimensionality reduction (like nonneg-
ative matrix factorization) because of the simplicity in working with only two matrices in
optimization formulations. Many forms of factorization use optimization models over two
factors, which are relatively simple from the perspective of optimization algorithms like
gradient descent. The good news is that two-way factorizations can always be converted to a
standardized three-way factorization like SVD by using the procedure discussed below.
7.3. SVD: AN OPTIMIZATION PERSPECTIVE 317
In singular value decomposition, the (r, r)th diagonal entry is chosen in such a way that
the rth columns of the left-most factor matrix Q and the right-most factor matrix P become
normalized to unit norm. In other words, the diagonal matrix contains the scaling factors
which create the ambiguity in 2-way factorization in terms of their distribution between U
and V . Consider a two-way matrix factorization D ≈ U V T into n × k and d × k matrices
U and V , respectively. We can convert it into a near-unique (ignoring column reflection)
three-way matrix factorization of the following form:
D \approx Q\Sigma P^T    (7.10)

Q\Sigma P^T = U V^T    (7.11)
It is noteworthy that all diagonal entries of Σ are always nonnegative because of how the
normalization is done. The optimization-centric view of SVD, which is discussed in the next
section, uses two-way factorization in order to create compact optimization formulations.
In general, two-way decompositions are more common in optimization-centric matrix fac-
torization, because it is simpler to work with fewer matrices (and optimization variables).
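One standard procedure for this conversion (a sketch under the assumption that QR decompositions are used to orthonormalize the factors; the text does not spell out this particular recipe) takes QR decompositions of U and V and then an SVD of the small k × k core matrix:

```python
import numpy as np

# Convert a two-way factorization U V^T into a three-way form
# Q Sigma P^T with orthonormal columns and nonnegative diagonal.
rng = np.random.default_rng(6)
n, d, k = 30, 20, 4
U = rng.standard_normal((n, k))
V = rng.standard_normal((d, k))

Qu, Ru = np.linalg.qr(U)          # U = Qu Ru, Qu has orthonormal columns
Qv, Rv = np.linalg.qr(V)          # V = Qv Rv
Qc, sigma, Ptc = np.linalg.svd(Ru @ Rv.T)   # SVD of the k x k core

Q = Qu @ Qc                       # n x k, orthonormal columns
P = Qv @ Ptc.T                    # d x k, orthonormal columns

assert np.allclose(Q.T @ Q, np.eye(k))
assert np.allclose(P.T @ P, np.eye(k))
assert np.all(sigma >= 0)         # diagonal entries are nonnegative
# The product is unchanged: Q Sigma P^T = U V^T.
assert np.allclose(Q @ np.diag(sigma) @ P.T, U @ V.T)
```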
One can also impose constraints in order to control the properties of the factorization. Controlling the properties
of the factorization is the key to being able to use them in different types of machine learning
models, and these properties will be explored in Chapter 8. The optimization perspective
is useful in all these cases. The most important result that arises from optimization-centric
analysis is the following:
Truncated SVD provides the best possible rank-k approximation of a matrix in
terms of squared error.
An important point is that SVD also happens to provide a factorization D ≈ U V T = QΣP T ,
which is such that the columns of each of U and V are orthogonal. However, even if we
allow factorizations D ≈ U V T in which the columns of each of U and V are not necessarily
orthogonal, one would not gain anything from this relaxation in terms of accuracy. In other
words, even for the optimization problem of minimizing the squared error of unconstrained
low-rank factorization of D into U and V T , one of the alternative optima is a pair of matrices
U and V , such that the columns of each of the matrices are orthogonal. This section will
show this beautiful property of SVD by approaching it from an optimization perspective.
In the following exposition, we will consistently work with the two-way factorization
D ≈ U V T rather than the three-way factorization D ≈ QΣP T . Here, D is an n × d matrix,
U is an n × k matrix, and V is a d × k matrix. The hyperparameter k is the rank of the
factorization. In such a case, the columns of each of U and V are mutually orthogonal,
although there is some ambiguity in how these columns are scaled. Therefore, we will make
the assumption that the columns of V are scaled to unit norm.
The optimization problem, which we denote by (OP), is to maximize the energy retained in the
transformed representation DV, subject to the columns of V being orthonormal:

\text{Maximize } \|DV\|_F^2 = \sum_{r=1}^{k} \|D V_r\|^2 = \sum_{r=1}^{k} V_r^T [D^T D] V_r \quad \text{subject to } V^T V = I_k
Note that this optimization problem is the same as the norm-constrained optimization
problem introduced in Section 6.6 of Chapter 6. The solution to this problem corresponds
to the top-k eigenvectors of D^T D. Recall from the previous section that the eigenvalues
of D^T D are σ_{11}^2 . . . σ_{rr}^2, which are the same as the squares of the singular values of D.
Furthermore, the energy retained in DV is equal to \sum_{r=1}^{k} σ_{rr}^2 based on the discussion in
Section 6.6 of Chapter 6. This is consistent with the energy retained by truncated singular
value decomposition (cf. Section 7.2.4). We have, therefore, just shown that the energy
retained by truncated SVD is as large as possible among all possible
orthonormal basis systems V. We summarize this result as follows:
Lemma 7.3.1 The optimal solution V for the optimization problem (OP) is obtained by
setting the columns of V to the largest eigenvectors of D^T D.
We can also show that the transformed representation U = DV contains the (scaled)
eigenvectors of DDT .
Lemma 7.3.2 Let U = DV be the transformed representation of the data, when V is
obtained using (OP). Then U contains the scaled eigenvectors of DDT .
Proof: Let the n-dimensional column vector U_r contain the rth column of DV. This is
equal to D V_r, where V_r contains the rth column of V. In other words, we have:

U_r = D V_r

Since V_r is an eigenvector of D^T D with eigenvalue σ_{rr}^2, it follows that (DD^T) U_r = D (D^T D V_r) = σ_{rr}^2 D V_r = σ_{rr}^2 U_r. In other words, U_1 . . . U_k are the eigenvectors of DD^T. The only difference is that the
columns of V are scaled to unit norm, whereas those of U are not.
Since DDT is a symmetric matrix, its eigenvectors U 1 . . . U k will be mutually orthogonal
as well. Note that this optimization model only uses the assumption that the columns of
V are orthogonal, and we were able to automatically derive the fact that the columns of
U = DV are mutually orthogonal.
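Lemma 7.3.2 can be verified numerically. The following is an illustrative NumPy sketch (not the book's code), using the fact that the top-k eigenvectors of D^T D are the top-k right singular vectors of D:

```python
import numpy as np

# With V holding the top-k eigenvectors of D^T D, the columns of
# U = D V are (unnormalized) eigenvectors of D D^T, and they are
# mutually orthogonal.
rng = np.random.default_rng(7)
D = rng.standard_normal((25, 10))
k = 3

_, sigma, Pt = np.linalg.svd(D, full_matrices=False)
V = Pt[:k, :].T     # top-k eigenvectors of D^T D
U = D @ V

for r in range(k):
    # (D D^T) U_r = sigma_r^2 U_r
    assert np.allclose(D @ (D.T @ U[:, r]), sigma[r] ** 2 * U[:, r])

# The Gram matrix of U is diagonal: its columns are orthogonal,
# though not of unit norm.
gram = U.T @ U
assert np.allclose(gram, np.diag(np.diag(gram)))
```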
As shown in Figure 8.1 of Chapter 8, a necessary condition for optimality of this matrix
factorization problem is as follows:

DV − U V^T V = 0
The solution with orthonormal columns of V (obtained via QR decomposition of any optimal
V_0), satisfies V^T V = I, and, therefore, the condition simplifies to U = DV. Substituting
for U in the optimization formulation, the unconstrained matrix factorization problem has
the same objective function value as that of minimizing \|D − U V^T\|_F^2 = \|D − D V V^T\|_F^2
subject to V^T V = I_k. The sum of the squared Frobenius norms of DV and D − D V V^T
can be shown¹ to be the constant \|D\|_F^2, and therefore this minimization problem reduces
to the maximization of the Frobenius norm of DV. This is exactly the problem (OP) of
the previous section. Therefore, the unconstrained minimization formulation with residuals
also yields the top eigenvectors of DDT and DT D for U and V , respectively, as one of the
alternate optima. In other words, we have the following important result:
Theorem 7.3.1 Truncated singular value decomposition provides one of the alternate op-
tima to unconstrained matrix factorization.
For example, probabilistic matrix factorization methods use a log-likelihood function rather
than the Frobenius norm as the optimization function. Similarly, various types of nonneg-
ative matrix factorization impose nonnegativity constraints on U and V . Logistic matrix
factorization methods apply a logistic function on the entries of U V T in order to materialize
the probability that a particular entry is 1. Such an approach works well for matrices with
binary entries. Therefore, the optimization framework of unconstrained matrix factorization
provides a starting point for factorizations with different properties. These methods will be
discussed in detail in Chapter 8. Most matrix factorization formulations are not convex.
Nevertheless, gradient descent works quite well in these cases.
¹The matrix D V V^T is Frobenius orthogonal to (D − D V V^T), and \|D V V^T\|_F = \|DV\|_F because V^T V = I. Therefore, the sum of the squared Frobenius norms of DV and D − D V V^T is simply \|D\|_F^2.
When the data is not mean-centered up front, PCA and SVD will yield different results.
In PCA, we first mean-center the data set by subtracting the d-dimensional mean-vector of
the full data set D from each row as follows:
M = D - \underbrace{\overline{1}\,\overline{\mu}}_{n \times d}
Here, 1 is a column vector of n ones, and μ is a d-dimensional row vector containing the
mean values of each of the d dimensions. Therefore, 1 μ is an n × d matrix in which each
row is the mean vector μ. We compute the covariance matrix C as follows:
C = \frac{M^T M}{n}
The covariance matrix C is a d × d matrix, in which the (i, j)th entry is simply the
covariance between the dimensions i and j. The diagonal entries are the dimension-specific
variances. Like the scatter matrix D^T D in SVD, the covariance matrix in PCA is also
positive semidefinite. The covariance matrix may be approximately diagonalized at rank-k as
follows:
C ≈ V ΔV T
Here, V is a d × k matrix with columns containing the top-k eigenvectors, and Δ is a k × k
diagonal matrix with the diagonal entries containing the top-k eigenvalues (which are always
nonnegative for the positive semidefinite matrix C ∝ M^T M). The (r, r)th diagonal entry is
therefore denoted by the nonnegative value λ_r^2, and it represents the rth eigenvalue. As we
will see later, the value of λ_r^2 is equal to the variance of the rth column of the k-dimensional
projection DV of the matrix D. Instead of referring to the eigenvectors as singular vectors
(as in SVD), they are referred to as principal components in PCA. Note that if one were to
perform singular value decomposition on the mean-centered matrix M , the right singular
vectors are the PCA eigenvectors, and the rth singular value σ_{rr} of SVD is related to the
eigenvalue λ_r^2 of PCA as follows:

\lambda_r^2 = \frac{\sigma_{rr}^2}{n}
The additional factor of n in the denominator comes from dividing M T M by n to obtain
the covariance matrix. The n × k matrix U containing the k-dimensional representation of
the n rows of D is defined by projecting the rows of M on the columns of V :
U = MV
1. The matrix U is mean-centered just like the mean-centered data set M. In other
words, the reduced representation of the data is also mean-centered. Note that the
sum of the rows of U is given by \overline{1}^T U = \overline{1}^T [M V] = [\underbrace{\overline{1}^T M}_{\overline{0}}] V = \overline{0}.
2. The covariance matrix of U is the diagonal matrix Δ. Consider the case in which
the matrix V contains the k columns v_1 . . . v_k. Since the matrix U is mean-centered,
its covariance matrix is as follows:

\frac{U^T U}{n} = V^T \frac{[M^T M]}{n} V = [v_1 \ldots v_k]^T (C [v_1 \ldots v_k]) = [v_1 \ldots v_k]^T [\lambda_1^2 v_1 \ldots \lambda_k^2 v_k] = \Delta

In the above simplification, we used the fact that each v_i is an eigenvector of the
covariance matrix C, and that these k vectors are orthonormal. Therefore, v_i · v_j is
1 when i = j, and 0, otherwise. As a result, the diagonal entries of Δ will contain
λ_1^2 . . . λ_k^2.
3. The retained variance in the data is given by Σki=1 λ2i . This is easy to show because
the covariance matrix of U is Δ. Therefore, the sum of its diagonal entries, which is
Σki=1 λ2i , yields the retained variance.
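The three properties above can be checked numerically. The following sketch (our own illustrative data; variable names are not from the book) verifies that U = M V is mean-centered, that its covariance is the diagonal matrix of eigenvalues, and that the PCA eigenvalues equal the squared singular values of M divided by n:

```python
import numpy as np

# Illustrative sketch: verify the PCA properties on a small random data set.
rng = np.random.default_rng(0)
D = rng.normal(size=(100, 5)) @ rng.normal(size=(5, 5))  # n x d data matrix
n, k = D.shape[0], 2

M = D - D.mean(axis=0)                   # mean-centered matrix
C = (M.T @ M) / n                        # covariance matrix
eigvals, V = np.linalg.eigh(C)           # eigenpairs in ascending order
order = np.argsort(eigvals)[::-1][:k]    # pick the top-k eigenpairs
lam2, Vk = eigvals[order], V[:, order]

U = M @ Vk                               # k-dimensional projection

# Property 1: U is mean-centered (its rows sum to the zero vector).
assert np.allclose(U.sum(axis=0), 0.0)

# Property 2: the covariance of U is the diagonal matrix of eigenvalues.
assert np.allclose((U.T @ U) / n, np.diag(lam2))

# Relation to SVD of M: sigma_rr^2 / n equals the PCA eigenvalue.
sigma = np.linalg.svd(M, compute_uv=False)[:k]
assert np.allclose(sigma**2 / n, lam2)
```

The same check also confirms the retained variance, since the trace of the covariance of U is the sum of the top-k eigenvalues.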
All of the above results show that PCA has very similar properties to SVD. In order to
completely reconstruct the data from U and V T , one also needs to store the mean vector
μ, which was used to mean-center the data. In other words, the original (uncentered) data
set can be reconstructed by using the following approach:
D ≈ Dpca = U V T + 1 μ (7.12)
The amount of overhead for storing μ is small, and it asymptotically vanishes for large data
sets.
The mean-centering of PCA helps in improving the accuracy of the approximation. In
order to understand this point, we have shown an example of a 3-dimensional data set that
is not originally mean-centered in Figure 7.6. Most of the data is distributed near a plane far
Figure 7.6: A 3-dimensional data set lying near a 2-dimensional PCA hyperplane that is far from the origin (axes: Feature X, Feature Y, and Feature Z)
away from the origin (before preprocessing or mean-centering). In this case, a 2-dimensional
hyperplane can approximate the data quite well, where the mean-centering process ensures
that the PCA hyperplane passes through the mean of the original data set. This is not
the case for SVD, which will struggle to approximate the data without using all three
dimensions. It can be explicitly shown that the accuracy of PCA is at least as good as that
of SVD for the same number of eigenvectors.
Problem 7.3.1 Consider an n×d data set D, whose rank-k approximations using truncated
SVD and PCA are Dsvd and Dpca , respectively (see Equation 7.12). Then, the information
loss in PCA can never be larger than that in SVD:
‖D − Dpca ‖2F ≤ ‖D − Dsvd ‖2F
For mean-centered data, the accuracy of the two methods is identical because Dpca = Dsvd .
The geometric intuition for the above exercise is that PCA finds a k-dimensional hyperplane
that must pass through the mean of the data, whereas SVD finds the k-dimensional hyper-
plane passing through the origin. The former provides better reconstruction. However, as
the next exercise shows, the difference is usually not too large.
Problem 7.3.2 Show that the squared error of SVD at a truncation rank of (k + 1) is no
larger than the squared error of PCA at a truncation rank of k for any k ≥ 1.
A hint for solving the above problem is to show using Lemma 2.6.2 of Chapter 2 that
the mean-corrected reconstruction Dpca (cf. Equation 7.12) has rank at most (k + 1). The
SVD of D at rank-(k + 1) will provide a better rank-(k + 1) reconstruction because of its
optimality properties.
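Both inequalities in Problems 7.3.1 and 7.3.2 can be observed numerically. The sketch below (our own shifted data set) compares the squared Frobenius errors of rank-k SVD, rank-k PCA, and rank-(k + 1) SVD:

```python
import numpy as np

# Illustrative sketch: for data far from mean-centered, rank-k PCA is at
# least as accurate as rank-k SVD, and rank-(k+1) SVD beats rank-k PCA.
rng = np.random.default_rng(1)
D = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.2]) + 10.0  # shifted data
mu = D.mean(axis=0)

def svd_reconstruct(A, k):
    # Best rank-k approximation via truncated SVD.
    Q, s, Pt = np.linalg.svd(A, full_matrices=False)
    return Q[:, :k] * s[:k] @ Pt[:k, :]

k = 1
D_svd = svd_reconstruct(D, k)                # truncated SVD of D itself
D_pca = svd_reconstruct(D - mu, k) + mu      # PCA reconstruction (Eq. 7.12)

err_svd = np.linalg.norm(D - D_svd) ** 2
err_pca = np.linalg.norm(D - D_pca) ** 2
err_svd_k1 = np.linalg.norm(D - svd_reconstruct(D, k + 1)) ** 2

assert err_pca <= err_svd + 1e-9      # Problem 7.3.1
assert err_svd_k1 <= err_pca + 1e-9   # Problem 7.3.2
```

The geometric reason is the one stated above: PCA fits the best affine k-flat (which passes through the mean), and affine flats include linear flats through the origin as a special case.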
Figure 7.7: SVD reconstruction at different ranks. The reconstruction at rank-200 is nearly
identical to that of the full-rank image
When there are multiple colors in the image, each color channel is processed as a separate matrix. An
image matrix is often of full rank, although the lower ranks have very small singular values.
Figure 7.7 illustrates the case of an image of size 807 × 611 in which the 611th singular value
is non-zero. The rank of the image matrix is therefore 611, and the full-rank reconstruction
of Figure 7.7(d) is identical to the original image. Obviously, there are no space advantages
of full-rank reconstruction, and one must use truncation. Using a rank that is too low, such
as 5, loses a lot of information, and the resulting image shows few useful
details (cf. Figure 7.7(a)). An SVD of rank-50 loses only a small amount of detail, as shown
in Figure 7.7(b). Furthermore, an SVD of rank-200 is virtually indistinguishable from the
original image (cf. Figure 7.7(c)).
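The trade-off between rank and reconstruction error can be reproduced on any matrix with decaying singular values. The sketch below uses a synthetic matrix in place of the 807 × 611 image (which is not available here); the sizes and noise level are our own illustrative choices:

```python
import numpy as np

# Sketch of SVD truncation: a matrix with rapidly decaying singular
# values plus small "grainy" noise, standing in for an image matrix.
rng = np.random.default_rng(2)
A = sum(10.0 / (i + 1) * np.outer(rng.normal(size=120), rng.normal(size=80))
        for i in range(30)) + 0.01 * rng.normal(size=(120, 80))

Q, s, Pt = np.linalg.svd(A, full_matrices=False)

def truncate(k):
    # Rank-k reconstruction from the top-k singular triplets.
    return Q[:, :k] * s[:k] @ Pt[:k, :]

# Relative reconstruction error drops as the truncation rank grows.
errs = {k: np.linalg.norm(A - truncate(k)) / np.linalg.norm(A)
        for k in (5, 20, 50)}
assert errs[5] > errs[20] > errs[50]
```

As in Figure 7.7, a very low rank loses detail, while an intermediate rank is nearly indistinguishable from the full matrix because the discarded singular values are tiny.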
With certain types of images, noisy artifacts of the image can even be removed by the
SVD truncation at intermediate values of the rank. This is because the dropping of the lower-
order components leads to the discarding of the grainy noise components rather than the
informative portions of the image. Therefore, the “lossiness” of the low-rank reconstruction
is sometimes useful. This issue will be discussed in the next section.
1. The r non-zero right singular vectors of D define an orthogonal basis for the row space
of D. This is because the vector DT x = P ΣT [QT x] = [P ΣT ]y can always be shown
to be a linear combination of the non-zero right singular vectors [non-zero columns of
P ΣT ] for any x ∈ Rn .
2. The r non-zero left singular vectors of D define an orthogonal basis for the column
space of D. This is because the vector Dx = QΣ[P T x] = [QΣ]z can always be shown
to be a linear combination of the non-zero left singular vectors [non-zero columns of
QΣ] for any x ∈ Rd .
3. The (d − r) zero right singular vectors contained in the columns of P define an orthog-
onal basis for the right null space of D, because the right null space is the orthogonal
complementary space to the row space of D.
4. The (n−r) zero left singular vectors contained in the columns of Q define an orthogonal
basis for the left null space of D. This is because the left null space is the orthogonal
complementary space to the column space of D.
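The four subspace properties above can be verified on a small low-rank matrix. The sketch below (our own example) uses the book's convention D = QΣP T with full matrices, so the trailing columns of P and Q span the two null spaces:

```python
import numpy as np

# Sketch: check the four fundamental subspaces read off from a full SVD.
rng = np.random.default_rng(3)
n, d = 6, 4
D = rng.normal(size=(n, 2)) @ rng.normal(size=(2, d))  # rank-2 matrix

Q, s, Pt = np.linalg.svd(D)        # full SVD: Q is n x n, Pt is d x d
P = Pt.T
rank = int(np.sum(s > 1e-10))

# Property 1: every vector D^T x lies in the span of the first `rank`
# right singular vectors (projection onto that span is the identity here).
x = rng.normal(size=n)
row_vec = D.T @ x
assert np.allclose(P[:, :rank] @ (P[:, :rank].T @ row_vec), row_vec)

# Property 3: the remaining d - rank columns of P span the right null space.
assert np.allclose(D @ P[:, rank:], 0.0)

# Property 4: the remaining n - rank columns of Q span the left null space.
assert np.allclose(Q[:, rank:].T @ D, 0.0)
```

Property 2 (the column space) is checked the same way with D x and the leading columns of Q.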
Problem 7.4.1 In Chapter 2, we showed that the row rank of a matrix is the same as its
column rank. This value is referred to as the matrix rank, which is used throughout this
chapter. Discuss why the existence of SVD provides an alternative proof that the row rank
of a matrix is the same as its column rank.
D = QΣP T
The matrix Σ−1 is obtained by replacing each diagonal entry in Σ by its reciprocal.
Figure 7.8: Whitening of an ellipsoidal data distribution; points A and B are marked before and after the PCA-based transformation
This type of preprocessing is also used in unsupervised applications like outlier detection.
In fact, whitening is arguably more important in unsupervised applications because one does
not have labels to provide guidance about the relative importance of different directions in
the data. An example of the whitening of an ellipsoidal data distribution is illustrated in
Figure 7.8. The resulting data distribution has a spherical shape.
1. First, the n×d data matrix D is mean-centered, and PCA is used to transform it to an
n×k data matrix Uk = DVk . Here, Vk contains the top-k eigenvectors of the covariance
matrix. Each column of Uk is normalized to unit variance. This type of approach will
tend to increase the absolute distance of outliers from the data mean when they
deviate along low-variance directions. In fact, for low-dimensional data, the value of
the rank, k, might be the full dimensionality but the distortion of the ellipsoidal data
distribution to a spherical distribution will change the relative propensity of different
points to be considered outliers.
2. The squared distance of each point from the data mean is reported as its outlier score.
Although some of the low-variance principal components may be dropped (in order to
avoid directions in which variances are caused by computational errors), the primary goal
of whitening is to change the relative importance of the independent directions, so as to
emphasize relative variations along the principal directions. It is the distortion of the shape of
the data distribution that is the key to the discovery of non-obvious outliers. For example,
the point A is further from the center of the original data distribution, as compared to
point B. However, the point A is aligned along the elongated axis of the data distribution,
and therefore it is much more consistent with the overall shape of the distribution. This pattern
becomes more obvious when we apply principal component analysis to the data distribution.
This tends to separate B from the data distribution, and the distance from the center of
the data distribution provides an outlier score that is larger for point B as compared to
point A. The resulting method is referred to as soft PCA because it uses soft distortions of
the data distribution rather than truncation of low-variance directions. In fact, low-variance
directions are more important in this case for discovering outliers. This approach is also
referred to as the Mahalanobis method because the distance of each point from the center
of the data distribution after PCA-based normalization is equivalent to the Mahalanobis
distance. Intuitively, the Mahalanobis distance is the exponent of the Gaussian distribution,
which assumes that the original data has an ellipsoidal shape. The whitening along principal
component directions simply discovers which points are unlikely to belong to this Gaussian
distribution.
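The effect described above is easy to demonstrate: a point that deviates along a low-variance direction receives a larger Mahalanobis score than a point that is farther away in Euclidean terms but aligned with the elongated axis. The data below is our own illustrative example:

```python
import numpy as np

# Sketch of the "soft PCA" / Mahalanobis idea on an elongated point cloud.
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 2)) @ np.diag([5.0, 0.5])  # elongated distribution
A = np.array([12.0, 0.0])   # far away, but along the long axis
B = np.array([0.0, 3.0])    # closer, but along the short axis

mu = X.mean(axis=0)
C = np.cov(X, rowvar=False)

def maha2(x):
    # Squared Mahalanobis distance (x - mu) C^{-1} (x - mu)^T.
    diff = x - mu
    return diff @ np.linalg.inv(C) @ diff

assert np.linalg.norm(A - mu) > np.linalg.norm(B - mu)  # A is farther...
assert maha2(A) < maha2(B)                              # ...but B is the outlier
```

Equivalently, one can whiten the data along the principal directions and use the ordinary Euclidean distance from the mean; the scores are the same.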
Definition 7.4.1 (Mahalanobis Distance) Let X be a d-dimensional row vector from
the data set and μ be the mean (row vector) of a data set. Let C be the d × d covariance
matrix of a d-dimensional data set in which the (i, j)th entry is the covariance between the
dimensions i and j. Then, the squared Mahalanobis distance of the point X is given by the
following:
M aha(X, μ)2 = (X − μ)C −1 (X − μ)T (7.14)
At first glance, the Mahalanobis distance seems to have little to do with PCA or to nor-
malization of points by the standard deviations along principal components. However, the
key point is that the covariance matrix can be expressed as V ΔV T , where the columns of
V contain the eigenvectors. Then, the Mahalanobis distance can be expressed in terms of
the eigenvectors as follows:
right singular vectors. The decomposition D = QΣP T provides the left singular vectors
in the columns of Q and the right singular vectors in the columns of P . The (scaled) left
singular vectors QΣ provide the embedding, whereas the right singular vectors P provide
the basis. Either of the two matrices can be used to compute the transformed data in
the standard version of SVD. While the direct extraction of the right singular vectors
is more common because of the intuitive appeal of a basis, the left singular vectors can
directly provide the embeddings (without worrying about a basis). In some application-
centric settings, no multidimensional representation of the data is available, but only a
similarity matrix S is available. For example, S might represent the pairwise similarities
between a set of small graph objects. In such cases, one can assume that the provided
similarity matrix corresponds to DDT for some unknown n × n matrix D, whose rows
contain the multidimensional representations of the graph objects. Note that the matrix
D might have as many as n dimensions because any set of n objects (together with the
origin) always defines an n-dimensional plane. In such cases, one can simply diagonalize S
as follows:
S = DDT = QΣ2 QT = (QΣ)(QΣ)T
The n × n matrix QΣ provides the multidimensional embeddings of the points in its
rows. If the similarity matrix S was derived by using dot products on multidimensional
data, then the resulting representation will provide the vanilla SVD embedding of D.
Note that any rotated representation DV of D will provide the same embedding, because
(DV )(DV )T = D(V V T )DT = DDT . We cannot control the basis in which the unknown
matrix D is represented in the final embedding QΣ; singular value decomposition happens
to choose the basis in which the columns of the embedded representation are orthogonal.
This type of approach works only when the similarity matrix S is positive semidefinite,
because the eigenvalues in Σ2 need to be nonnegative. Such similarity matrices are referred
to as kernel matrices. You will learn more about kernel matrices in Chapter 9.
This type of approach is referred to as feature engineering, because we can convert any
arbitrary object (e.g., graph) to a multidimensional representation by using the pairwise
similarities between them. For example, one can combine the Mahalanobis method for outlier
detection (see previous section) with the feature engineering approach discussed in this
section. Consider a set of n graphs with an n × n similarity matrix S. We wish to identify
the graphs that should be labeled as outliers. One can extract the embedding QΣ from the
diagonalization S = QΣ2 QT . By whitening the representation, one obtains the embedding
Q in which each column has unit variance. The distance of each row in Q from the mean
of the rows of Q provides the kernel Mahalanobis outlier score. Even for multidimensional
data, one can extract more insightful features by replacing the dot products in DDT with
other similarity functions between points. In Chapter 9, we will provide specific examples
of such similarity functions.
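The embedding-from-similarities construction can be sketched directly: diagonalize a positive semidefinite S and read the embedding off the rows of QΣ. The hidden data generating S below is our own example:

```python
import numpy as np

# Sketch: recover an embedding from a similarity (kernel) matrix alone.
rng = np.random.default_rng(5)
D = rng.normal(size=(8, 3))     # hidden multidimensional representation
S = D @ D.T                     # positive semidefinite similarity matrix

eigvals, Q = np.linalg.eigh(S)
eigvals = np.clip(eigvals, 0.0, None)   # clip tiny negative round-off
embedding = Q * np.sqrt(eigvals)        # rows of Q Sigma

# The embedding reproduces the similarities exactly:
assert np.allclose(embedding @ embedding.T, S)

# Any rotated representation D V yields the same similarity matrix,
# which is why the basis of the hidden data cannot be recovered:
V, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose((D @ V) @ (D @ V).T, S)
```

If some eigenvalue of S were genuinely negative, no such embedding would exist, which is why S must be a kernel matrix.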
in finding the left singular vectors. In most cases, we only need to find the top-k singular
vectors, where k ≪ min{n, d}. If we use DT D to compute the d × k matrix P containing
the right singular vectors, the left singular vectors will be contained in the n × k matrix
Q = DP Σ−1 . The k × k matrix Σ is computed by placing the square root of the eigenvalues
on the diagonal of the matrix. We can assume that we are only interested in non-zero
singular vectors, and therefore Σ is invertible. On the other hand, if the left singular vectors
are found by diagonalizing DDT , then the matrix P can be computed as P = DT QΣ−1 .
This approach is inefficient for sparse matrices. For example, consider a text data set
in which each row contains about 100 non-zero values, but the dimensionality d of the
row is 105 . Similarly, the collection contains 106 documents, which is not large by modern
standards. The number of non-zero entries in D is 108 , which is much smaller than the
total number of entries in D. Therefore, this is a sparse matrix for which special data
structures can be used. On the other hand, DT D is a dense matrix that contains 1010
entries. Therefore, it is inefficient to work with DT D as compared to D. In the following,
we present the generalization of the power method discussed in Section 3.5 that works with
D rather than DT D. It is noteworthy that this method is not optimized for efficiency, but it
provides the starting points for understanding some efficient methods such as the Lanczos
algorithm [52]. In recent years, methods based on QR decomposition have become more
popular. A specific example is the Golub and Kahan algorithm [52].
p1 ⇐ DT (Dp1 ) / ‖DT (Dp1 )‖
The projection of the data matrix D on the vector p1 has an energy that is equal to the
square of the first singular value. Therefore, the first singular value σ11 is obtained by using
the L2 -norm of the vector Dp1 . The first column q 1 of Q is obtained by a single execution
of the following step:
q 1 ⇐ Dp1 /σ11 (7.18)
The above result is a 1-dimensional simplification of Q = DP Σ−1 . This completes the
determination of the first set of singular vectors and singular values. The next eigenvector
and eigenvalue pair is obtained by making use of the spectral decomposition of Equation 7.1.
One possibility is to remove the rank-1 component contributed by the first set of singular
vectors by adjusting the data matrix as follows:
D ⇐ D − σ11 q 1 pT1
Once the impact of the first component has been removed, we can repeat the process to
obtain the second set of singular vectors from the modified matrix. The main problem with
this approach is that the removal of spectral components hurts the sparsity of D.
Therefore, in order to avoid hurting the sparsity of D, one need not explicitly remove
the rank-1 matrix q 1 pT1 from D. Rather, the original matrix D is used, and the second set
of singular vectors can be computed by using the following iterative step (that removes the
effect of the first component within the iterations):
p2 ⇐ (DT − σ11 p1 q T1 )([D − σ11 q 1 pT1 ]p2 )
p2 ⇐ p2 /‖p2 ‖
When computing a quantity like [D − σ11 q 1 pT1 ]p2 , one computes Dp2 and q 1 [pT1 p2 ] sepa-
rately. Note that the order of operations in q 1 [pT1 p2 ] is preferred to the order [q 1 pT1 ]p2 to
ensure that one never has to store large and dense matrices. Therefore, the associativity
property of matrix multiplication comes in handy to ensure that one is always working with
multiplications between vectors, or between vectors and sparse matrices. This basic idea
can be generalized to finding the kth singular vector:
pk ⇐ (DT − Σk−1r=1 σrr pr q Tr ) ([D − Σk−1r=1 σrr q r pTr ] pk )
pk ⇐ pk /‖pk ‖
In each case, it is possible to control the order of multiplication of matrices, so that one
never has to work with dense matrices. The singular value σkk is the norm of Dpk . The kth
left singular vector can be derived from the kth right singular vector as follows:
Dpk
qk ⇐ (7.20)
σkk
The entire process is repeated m times to obtain the rank-m singular value decomposition.
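The iterations above can be sketched in code. The version below (our own implementation of the scheme just described; parameter choices are illustrative) works only with products against D and against vectors, never forming DT D or the dense deflated matrix:

```python
import numpy as np

# Sketch of the deflation-free power method: subtract the effect of
# previously found components inside the iterations, using only
# vector-times-matrix products so that sparsity of D is preserved.
rng = np.random.default_rng(6)
D = rng.normal(size=(50, 20))

def top_k_svd(D, k, iters=500):
    n, d = D.shape
    sigmas, ps, qs = [], [], []
    for _ in range(k):
        p = rng.normal(size=d)
        p /= np.linalg.norm(p)
        for _ in range(iters):
            y = D @ p
            for s, q, pr in zip(sigmas, qs, ps):
                y -= s * q * (pr @ p)       # [D - s q p^T] p as vector ops
            z = D.T @ y
            for s, q, pr in zip(sigmas, qs, ps):
                z -= s * pr * (q @ y)       # [D^T - s p q^T] y
            p = z / np.linalg.norm(z)
        y = D @ p                            # sigma is the norm of D p_k
        for s, q, pr in zip(sigmas, qs, ps):
            y -= s * q * (pr @ p)
        sigma = np.linalg.norm(y)
        sigmas.append(sigma); ps.append(p); qs.append(y / sigma)
    return np.array(sigmas), ps, qs

sigmas, ps, qs = top_k_svd(D, 3)
assert np.allclose(sigmas, np.linalg.svd(D, compute_uv=False)[:3], rtol=1e-4)
```

Note how every subtraction is of the form (scalar) × (vector), so the rank-1 matrices σrr q r pTr are never materialized; this is the associativity trick described above.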
7.6 Summary
Singular value decomposition is one of the most fundamental techniques among a class
of methods called matrix factorization. We present a linear algebra and an optimization
perspective on singular value decomposition. These perspectives provide different insights:
• The linear algebra perspective is helpful in showing that a singular value decomposition
exists, and the singular vectors are the eigenvectors of DDT and DT D.
• The optimization perspective shows that singular value decomposition provides a ma-
trix factorization with the least error. The optimization perspective can be generalized
to other forms of matrix factorization, which is the subject of the next chapter.
Singular value decomposition has numerous applications in machine learning, such as
least-squares regression. The basic ideas in singular value decomposition also provide
the foundations for kernel methods, which are discussed in Chapter 9.
singular value decomposition are discussed in [52, 130]. The noise reduction properties of
singular value decomposition are discussed in [7]. The use of singular value decomposition
methods in outlier detection is discussed in [4].
7.8 Exercises
1. Use SVD to show the push-through identity of Problem 1.2.13 for any n × d matrix
D and scalar λ > 0:
(λId + DT D)−1 DT = DT (λIn + DDT )−1
This exercise is almost the same as Problem 1.2.13 in Chapter 1.
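A quick numerical check of the push-through identity (not a substitute for the SVD-based proof the exercise asks for; the matrix below is our own random example):

```python
import numpy as np

# Sanity check of (lambda I_d + D^T D)^{-1} D^T = D^T (lambda I_n + D D^T)^{-1}
rng = np.random.default_rng(7)
n, d, lam = 6, 4, 0.5
D = rng.normal(size=(n, d))

lhs = np.linalg.inv(lam * np.eye(d) + D.T @ D) @ D.T
rhs = D.T @ np.linalg.inv(lam * np.eye(n) + D @ D.T)
assert np.allclose(lhs, rhs)
```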
2. Let D be an n × d data matrix, and y be an n-dimensional column vector containing
the dependent variables of linear regression. The Tikhonov regularization solution to
linear regression (cf. Section 4.7.1 of Chapter 4) predicts the dependent variables of a
test instance Z using the following equation:
Prediction(Z) = Z W = Z(DT D + λI)−1 DT y
Here, the vectors Z and W are treated as 1 × d and d × 1 matrices, respectively. Show
using the result of Exercise 1, how you can write the above prediction purely in terms
of similarities between training points or between Z and training points.
3. Suppose that you are given a truncated SVD D ≈ QΣP T of rank-k. Show how you can
use this solution to derive an alternative rank-k decomposition Q′ Σ′ P ′T in which the
unit columns of Q′ (or/and P ′ ) might not be mutually orthogonal and the truncation
error is the same.
4. Suppose that you are given a truncated SVD D ≈ QΣP T of rank-k. Two of the non-
zero singular values are identical. The corresponding right singular vectors are [1, 0, 0]T
and [0, 1, 0]T . Show how you can use this solution to derive an alternative rank-k SVD
Q′ Σ′ P ′T for which the truncation error is the same. At least some columns of matrices
Q′ and P ′ need to be non-trivially different from the corresponding columns in Q and
P (i.e., the ith column of Q′ should not be derivable from the ith column of Q by
simply multiplying with either −1 or +1). Give a specific example of how you might
manipulate the right singular vectors to obtain a non-trivially different solution.
5. Suppose that you are given a particular solution x = x0 that satisfies the system of
equations Ax = b. Here, A is an n × d matrix, x is a d-dimensional vector of variables,
and b is an n-dimensional vector of constants. Show that all possible solutions to this
system of equations are of the form x0 + v, where v is any vector drawn from a vector
space V. Show that V can be found easily using SVD. [Hint: Think about the system
of equations Ax = 0.]
6. Consider the n × d matrix D. Construct the (n + d) × (n + d) matrix B as follows:
B = [ 0  DT ]
    [ D  0  ]
Note that the matrix B is square and symmetric. Show that diagonalizing B yields all
the information needed for constructing the SVD of D. [Hint: Relate the eigenvectors
of B to the singular vectors of SVD.]
12. Recall from Chapter 3 that the determinant of a square matrix is equal to the product
of its eigenvalues. Show that the determinant of a square matrix is also equal to the
product of its singular values but only in absolute magnitude. Show that the squared
Frobenius norm of the inverse of a d × d square matrix A is equal to the sum of the
squared inverses of the singular values of A.
13. Show using SVD that a square matrix A is symmetric (i.e., A = AT ) if and only if
AAT = AT A.
14. Suppose that you are given the following valid SVD of a matrix:
    ⎡ 1    0     0   ⎤ ⎡ 2 0 0 ⎤ ⎡ 1    0     0   ⎤
D = ⎢ 0  1/√2  1/√2 ⎥ ⎢ 0 1 0 ⎥ ⎢ 0  1/√2 −1/√2 ⎥
    ⎣ 0  1/√2 −1/√2 ⎦ ⎣ 0 0 1 ⎦ ⎣ 0  1/√2  1/√2 ⎦
Is the SVD of this matrix unique? You may ignore multiplication of singular vectors
by −1 as violating uniqueness. If the SVD is unique, discuss why this is the case. If
the SVD is not unique, provide an alternative SVD of this matrix.
15. State a simple way to find the SVD of (a) a diagonal matrix with both positive and
negative entries that are all different; and (b) an orthogonal matrix. Is the SVD unique
in these cases?
16. Show that the largest singular value of (A + B) is at most the sum of the largest
singular values of each of A and B. Also show that the largest singular value of AB
is at most the product of the largest singular values of A and B. Finally, show that
the largest singular value of a matrix is a convex function of the matrix entries.
17. If A is a square matrix, use SVD to show that AAT and AT A are similar. What
happens when A is rectangular?
18. The Frobenius norm of a matrix A is defined as the trace of either AAT or AT A. Let
P be a d × k matrix with orthonormal columns. Let D be an n × d data matrix. Show
that the squared Frobenius norm of DP is the same as that of DP P T . Interpret the
matrices DP and DP P T in terms of their relationship with D, when P contains the
top-k right singular vectors of the SVD of D.
19. Consider two data matrices D1 and D2 that share the same scatter matrix D1T D1 =
D2T D2 but are otherwise different. We aim to show that the columns of one are
rotreflections of the other and vice versa. Show that a partially shared (full) singular value
decomposition can be found for D1 and D2 , so that D1 = Q1 ΣP T and D2 = Q2 ΣP T .
Use this fact to show that D2 = Q12 D1 for some orthogonal matrix Q12 .
20. Let A = a bT be a rank-1 matrix for vectors a, b ∈ Rn . Find the non-zero eigenvectors,
eigenvalues, singular vectors, and singular values of A.
21. What are the singular values of (i) a d × d Givens rotation matrix, (ii) a d × d
Householder reflection matrix, (iii) a d × d projection matrix of rank r, (iv) a 2 × 2
shear matrix A = [aij ] with 1s along the diagonal, and a value of a12 = 2 in the upper
right corner.
22. Consider an n × d matrix A with linearly independent columns and non-zero sin-
gular values σ1 . . . σd . Find the non-zero singular values of AT (AAT )5 , AT (AAT )5 A,
A(AT A)−2 AT , and A(AT A)−1 AT . Do you recognize the last of these matrices? Which
of these matrices have economy SVDs with zero singular values in addition to the non-
zero singular values?
23. Suppose that you have the n × 3 scatterplot matrix D of an ellipsoid in 3-dimensions,
whose three axes have lengths 3, 2, and 1, respectively. The axes directions of this
ellipsoid are [1, 1, 0], [1, −1, 0], and [0, 0, 1]. You multiply the scatter plot matrix D
with a 3 × 3 transformation matrix A to obtain the scatter plot D′ = DA of a new
ellipsoid, in which the axes [1, 1, 1], [1, −2, 1], and [1, 0, −1] have lengths 12, 6, and
5, respectively. Write the singular value decompositions of two possible matrices that
can perform the transformation. You should be able to write down the SVDs with
very little numerical calculation. [The answer to this question is not unique, as the
specific mapping of points between the two ellipsoids is not known. For example, an
axis direction in the original ellipsoid may or may not match with an axis direction
in the transformed ellipsoid.]
24. Regularization impact: Consider the regularized least-squares regression prob-
lem of minimizing ‖Ax − b‖2 + λ‖x‖2 for d-dimensional optimization vector x, n-
dimensional vector b, nonnegative scalar λ, and n × d matrix A. There are several
ways of showing that the norm of the optimum solution x = x∗ is non-increasing with
increasing λ (and this is also intuitively clear from the nature of the optimization
formulation). Use SVD to show that the optimum solution x∗ = (AT A + λId )−1 AT b
has non-increasing norm with increasing λ.
25. The function f (λ) arises commonly in spherically constrained least-squares regression:
f (λ) = bT A(AT A + λI)−2 AT b
Show that f (λ) can be expressed in the following form, where σ11 . . . σrr are the
non-zero singular values of A, and each ci is the dot product of b with the ith left
singular vector of A:
f (λ) = Σri=1 [σii ci /(σ2ii + λ)]2
28. Generalized singular value decomposition: The generalized singular value de-
composition of an n × d matrix D is given by D = QΣP T , where QT S1 Q = I and
Matrix Factorization
“He who knows only his own side of the case knows little of that. His reasons
may be good, and no one may have been able to refute them. But if he is equally
unable to refute the reasons on the opposite side, if he does not so much as know
what they are, he has no ground for preferring either opinion.”–John Stuart Mill
8.1 Introduction
Just as multiplication can be generalized from scalars to matrices, the notion of factorization
can also be generalized from scalars to matrices. Exact matrix factorizations need to satisfy
the size and rank constraints that are imposed on matrix multiplication. For example, when
an n × d matrix A is factorized into two matrices B and C (i.e., A = BC), the matrices
B and C must be of sizes n × k and k × d for some constant k. For exact factorization to
occur, the value of k must be equal to at least the rank of A. This is because the rank of
A is at most equal to the minimum of the ranks of B and C. In practice, it is common to
perform approximate factorization with much smaller values of k than the rank of A.
As with scalars, the factorization of a matrix is not unique. For example, the scalar 12 can
be factorized into 2 and 6, or it can be factorized into 3 and 4. If we allow real factors,
there are an infinite number of possible factorizations of a given scalar. The same is true of
matrices, where even the sizes of the factors might vary. For example, consider the following
factorizations of the same matrix:
[ 3 6 ]   [ 1 ]             [ 1 1 ] [ 2 4 ]
[ 3 6 ] = [ 1 ] [ 3 6 ]  =  [ 1 1 ] [ 1 2 ]
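Both factorizations of this matrix (as we read them from the displayed example) can be checked in a line or two:

```python
import numpy as np

# Verify that both factorizations reproduce the same 2 x 2 matrix.
A = np.array([[3.0, 6.0], [3.0, 6.0]])
B1, C1 = np.array([[1.0], [1.0]]), np.array([[3.0, 6.0]])          # rank-1 factors
B2, C2 = np.array([[1.0, 1.0], [1.0, 1.0]]), np.array([[2.0, 4.0], [1.0, 2.0]])

assert np.allclose(B1 @ C1, A)
assert np.allclose(B2 @ C2, A)
```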
It is clear that a given matrix can be factorized in an unlimited number of ways. However,
factorizations with certain types of properties are more useful than others. There are two
types of properties that are commonly desired in decompositions:
1. Linear algebra properties with exact decomposition: In these cases, one tries to create
decompositions in which the individual components of the factorization have specific
linear algebra/geometric properties such as orthogonality, triangular nature of the
matrix, and so on. These types of properties are useful for various linear algebra ap-
plications like basis construction. All the decompositions that we have seen so far, such
as LU decomposition, QR decomposition, and SVD, have linear algebra properties.
2. Optimization and compression properties with approximate decomposition: In these
cases, one is attempting to factorize a much larger matrix into two or more smaller
matrices. Truncated SVD is an example of this type of factorization. Consider the
n × d matrix D, which is truncated to rank-k to create the following factorization:
D ≈ Qk Σk PkT (8.1)
Here, Qk is an n×k orthogonal matrix, Σk is a k ×k diagonal matrix with nonnegative
entries, and Pk is a d × k orthogonal matrix. The total number of entries in all three
matrices is (n+d+k)k, which is often much smaller than the nd entries in the original
matrix for large values of n and d. For example, if n = d = 106 and k = 1000, the
number of entries in D is 1012 , whereas the total number of entries in the factorized
matrices is approximately 2×109 , which is only 0.2% of the original number of entries.
Singular value decomposition is one of the few factorizations that is useful both in terms
of its linear algebra properties (when used in exact form), and in terms of its compression
properties (when used in truncated form). The value k is referred to as the rank of the
factorization. The optimization view of matrix factorization D ≈ U V T is particularly useful
in machine learning by instantiating D, U , and V as follows:
1. When D is a document-term matrix containing frequencies of words (columns of D)
in documents (rows of D), the rows of U provide latent representations of documents,
whereas the rows of V provide latent representations of words.
2. A rating is a numerical score that a user gives to an item (e.g., movie). Recommender
systems collect ratings of users for items in order to make predictions of ratings for
items they have not yet evaluated. When D is a user-item matrix of ratings, the rows
correspond to users and the columns correspond to items. The entries of D contain
ratings. Matrix factorization decomposes the incomplete matrix D ≈ U V T using only
observed ratings. The rows of U provide latent representations of users, whereas the
rows of V are the latent representations of items. The matrix U V T reconstructs the
entire ratings matrix (including predictions for missing ratings).
3. Let D ≈ U V T be a graph adjacency matrix, so that the (i, j)th entry of D contains
the weight of edge between nodes i and j. In such a case, the rows of both U and V
are the latent representations of nodes. The latent representations of U and V can be
used for applications like clustering and link prediction (cf. Chapters 9 and 10).
In the optimization-centric view, one can impose specific properties on the decomposed ma-
trices as constraints of the optimization problem (such as nonnegativity of matrix entries).
These specific properties are often useful in various types of applications.
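The optimization-centric view D ≈ U V T can be made concrete with a minimal gradient-descent sketch (our own illustrative sizes, rank, and step size; not an algorithm prescribed by the book):

```python
import numpy as np

# Minimal sketch: factorize D into U V^T by gradient descent on the
# squared Frobenius error ||D - U V^T||_F^2.
rng = np.random.default_rng(8)
D = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))  # rank-4 target
k, lr = 4, 0.001

U = rng.normal(scale=0.1, size=(30, k))
V = rng.normal(scale=0.1, size=(20, k))
for _ in range(5000):
    R = D - U @ V.T                            # residual matrix
    U, V = U + lr * R @ V, V + lr * R.T @ U    # gradient steps on U and V

assert np.linalg.norm(D - U @ V.T) / np.linalg.norm(D) < 0.05
```

Constrained variants (e.g., nonnegative U and V ) modify the same objective, which is the subject of the later sections of this chapter.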
This chapter is organized as follows. The next section provides an overview of the
optimization-centric view to matrix factorization. Unconstrained matrix factorization meth-
ods are discussed in Section 8.3. Nonnegative matrix factorization methods are introduced
in Section 8.4. Weighted matrix factorization methods are introduced in Section 8.5. Lo-
gistic and maximum-margin matrix factorizations are discussed in Section 8.6. Generalized
low-rank models are introduced in Section 8.7. Methods for shared matrix factorization are
discussed in Section 8.8. Factorization machines are discussed in Section 8.9. A summary is
given in Section 8.10.
8.2 Optimization-Based Matrix Factorization
Constraints are used to ensure specific properties of the factor matrices. A commonly used
constraint is that of nonnegativity of the matrices U and V . The simplest possible objec-
tive function, which is used in SVD, is D − U V T 2F . Other objective functions such as
log-likelihood and I-divergence are also used to create probabilistic models. Most matrix
factorization formulations are not convex; nevertheless, gradient descent works quite well
in these cases.
In some cases, it is possible to weight specific matrix entries in the objective function. In
fact, for certain types of matrices, it makes more sense to interpret the entry of the matrix
as a weight. This is common in the case of implicit feedback data in recommender systems,
where all entries are assumed to be binary, and the values of non-zero entries are treated
as weights. For example, a matrix containing the quantities of sales of products (column
identifiers) to various users (row identifiers) is treated as binary, depending on whether
or not users have bought the products, with the quantities serving as weights. This
approach is also sometimes used with
frequency matrices in the text domain [101].
Logistic matrix factorization methods apply a logistic function on the entries of U V T
in order to materialize the probability that a particular entry is 1. Such an approach works
well for matrices in which the nonnegative values should be treated as frequencies of binary
values. The basic idea here is to assume that the entries of D are frequencies obtained
by repeatedly sampling each entry in the matrix with probabilities present in the matrix
$P = \text{Sigmoid}(UV^T)$. The sigmoid function is defined as follows:

$$\text{Sigmoid}(x) = \frac{1}{1 + \exp(-x)}$$
Note that D and P are two matrices of the same size, and the frequencies in D will be
roughly proportional to the entries in P :
D ∼ Instantiation of frequencies obtained by sampling from P
The optimization model maximizes a log-likelihood function based on this probabilistic
model. A surprisingly large number of applications in machine learning can be shown to
be special cases of matrix factorization, especially if one is willing to incorporate complex
objective functions and constraints in the factorization. Matrix factorization is used for
feature engineering, clustering, kernel methods, link prediction, and recommendations. In
each case, the secret is to choose an appropriate objective function and corresponding
constraints for the problem at hand. As a specific example, the k-means algorithm is a special case of matrix factorization, albeit with some special constraints:

$$\text{Minimize } J = \|D - UV^T\|_F^2 \quad \text{subject to } u_{ir} \in \{0, 1\},\ \sum_{r=1}^{k} u_{ir} = 1$$

This is a mixed integer matrix factorization problem, because the entries of U are constrained
to be binary values. An equivalent optimization formulation is given in Section 4.10.3 of
Chapter 4. In this case, each row of U can be shown to contain exactly a single 1, correspond-
ing to the cluster membership of that row. Each column of V contains the d-dimensional
centroid of one of the k clusters. As discussed in Section 4.10.3 of Chapter 4, this opti-
mization problem can be solved using block coordinate descent, which is identical to the
k-means algorithm. The fact that k-means is a special case of matrix factorization illustrates how expressive the family of matrix factorization methods is in its relationship to a wide variety of machine learning methods. This chapter will, therefore,
explore multiple ways of performing matrix factorization together with their applications.
The columns of U and V need not be orthonormal sets for an optimum solution to exist. Given an optimum pair $(U_0, V_0)$,
one could change the basis of the column space of V0 to a non-orthogonal one and adjust U0
to the corresponding coordinates in the non-orthogonal basis system, so that the product
U0 V0T does not change. To understand this point, we recommend the reader to solve the
following problem:
Problem 8.3.1 Let $D \approx Q_k \Sigma_k P_k^T$ be the rank-k SVD of D. The results in Chapter 7 show that $(U, V) = (Q_k \Sigma_k, P_k)$ represents an optimal pair of rank-k solution matrices to the unconstrained matrix factorization problem that is posed in this section. Show that $(U, V) = (Q_k \Sigma_k R_k^T, P_k R_k^{-1})$ is an alternative optimal solution to the unconstrained matrix factorization problem for any $k \times k$ invertible matrix $R_k$.
$$\frac{\partial J}{\partial u_{iq}} = \sum_{j=1}^{d} (e_{ij})(-v_{jq}) \quad \forall i \in \{1 \ldots n\},\ q \in \{1 \ldots k\}$$

$$\frac{\partial J}{\partial v_{jq}} = \sum_{i=1}^{n} \left(x_{ij} - \sum_{s=1}^{k} u_{is} \cdot v_{js}\right)(-u_{iq}) = \sum_{i=1}^{n} (e_{ij})(-u_{iq}) \quad \forall j \in \{1 \ldots d\},\ q \in \{1 \ldots k\}$$
One can also express these derivatives in terms of matrices. Let E = [eij ] be the n×d matrix
of errors. In the denominator layout of matrix calculus, the derivatives can be expressed as
follows:
$$\frac{\partial J}{\partial U} = -(D - UV^T)V = -EV$$

$$\frac{\partial J}{\partial V} = -(D - UV^T)^T U = -E^T U$$
The above matrix calculus identity can be verified by using the relatively tedious process of expanding the (i, q)th and (j, q)th entries of each of the above matrices on the right-hand side, and showing that they are equivalent to the corresponding scalar derivatives $\partial J/\partial u_{iq}$ and $\partial J/\partial v_{jq}$. An alternative approach that directly uses the matrix calculus identities of Chapter 4
is given in Figure 8.1. The reader may choose to skip over this derivation without loss of
continuity.
The optimality conditions for this optimization problem are obtained by setting these derivatives to 0, which yields $DV = UV^TV$ and $D^TU = VU^TU$. These optimality conditions can be shown to hold for the solution obtained
from SVD U = Qk Σk and V = Pk .
Problem 8.3.2 Let $Q_k \Sigma_k P_k^T$ be the rank-k truncated SVD of matrix D. Show that the solution $U = Q_k \Sigma_k$ and $V = P_k$ satisfies the optimality conditions $DV = UV^TV$ and $D^TU = VU^TU$.
A useful hint for solving the above problem is to use the spectral decomposition of SVD as
a sum of rank-1 matrices.
Although the optimality condition leads to the standard SVD solution D ≈ [Qk Σk ]PkT ,
one can also find an optimal solution by using gradient descent. The updates for gradient
descent are as follows:
$$U \Leftarrow U - \alpha \frac{\partial J}{\partial U} = U + \alpha EV$$

$$V \Leftarrow V - \alpha \frac{\partial J}{\partial V} = V + \alpha E^T U$$
Here, α > 0 is the learning rate.
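These updates can be sketched in a few lines of code. The following minimal implementation (the function name and hyperparameters are ours, not from the text) performs batch gradient descent on $J = \frac{1}{2}\|D - UV^T\|_F^2$:

```python
import numpy as np

def gradient_descent_mf(D, k, alpha=0.01, iters=6000, seed=0):
    """Minimize J = 0.5 * ||D - U V^T||_F^2 by batch gradient descent.

    Implements U <= U + alpha * E V and V <= V + alpha * E^T U,
    where E = D - U V^T is the error matrix.
    """
    n, d = D.shape
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((d, k))
    for _ in range(iters):
        E = D - U @ V.T                       # n x d error matrix
        # tuple assignment: both right-hand sides use the old U and V
        U, V = U + alpha * E @ V, V + alpha * E.T @ U
    return U, V
```

On a matrix of exact rank k, the product $UV^T$ converges to D, although (as discussed below) the columns of U are generally not orthogonal.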
The optimization model is identical to that of SVD. If the aforementioned gradient
descent method is used (instead of the power iteration method of the previous chapter), one
will typically obtain solutions that are equally good in terms of objective function value,
but for which the columns of U (or V) are not mutually orthogonal. The power iteration method yields solutions with orthogonal columns. Although the standardized SVD solution
with orthonormal columns is typically not obtained by gradient descent, the k columns of
U will span the same subspace as the columns of Qk , and the columns of V will span the
same subspace as the columns of Pk .
The gradient-descent approach can be implemented efficiently when the matrix D is
sparse by sampling entries from the matrix for making updates. This is essentially a stochas-
tic gradient descent method. In other words, we sample an entry (i, j) and compute its error
(Footnote: This will occur under the assumption that the top-k eigenvalues of $D^TD$ are distinct. Tied eigenvalues result in a non-unique solution for SVD, which might sometimes result in some differences in the subspace corresponding to the smallest eigenvalue within the rank-k solution.)
[Figure 8.1: Matrix calculus derivation of the gradients]

Consider the following objective function for factorizing the n × d matrix D with rank-k matrices U and V:

$$J = \frac{1}{2}\|D - UV^T\|_F^2$$

Matrix calculus can be used for computing derivatives with respect to U and V after decomposing the Frobenius norm into row-wise vector norms or column-wise vector norms, depending on whether we wish to compute the derivative of J with respect to U or V. Let $\overline{X}_i$ be the ith row of D (row vector), $\overline{d}_j$ be the jth column of D (column vector), $\overline{u}_i$ be the ith row of U (row vector), and $\overline{v}_j$ be the jth row of V (row vector). Then, the Frobenius norm can be decomposed in row-wise fashion as follows:

$$J = \frac{1}{2}\sum_{i=1}^{n} \|\overline{X}_i - \overline{u}_i V^T\|^2$$

To compute the derivative of the two non-constant terms with respect to $\overline{u}_i$, we can use identities (i) and (ii) of Table 4.2(a) in Chapter 4. This yields the following:

$$\frac{\partial J}{\partial \overline{u}_i} = -(\overline{X}_i - \overline{u}_i V^T)V$$

Stacking these row-wise derivatives yields $\frac{\partial J}{\partial U} = -(D - UV^T)V$. In order to compute the derivative with respect to V, one will need to decompose the squared Frobenius norm in J in column-wise fashion as follows:

$$J = \frac{1}{2}\sum_{j=1}^{d} \|\overline{d}_j - U\overline{v}_j^T\|^2$$

One can again use identities (i) and (ii) of Table 4.2(a) to show the following:

$$\frac{\partial J}{\partial \overline{v}_j} = -(\overline{d}_j - U\overline{v}_j^T)^T U$$

As in the previous case, one can put together the derivatives for different rows of V to obtain $\frac{\partial J}{\partial V} = -(D - UV^T)^T U$.
$e_{ij} = x_{ij} - \sum_{s=1}^{k} u_{is} v_{js}$. Subsequently, we make the following updates to the ith row $\overline{u}_i$ of U and the jth row $\overline{v}_j$ of V, which are also referred to as latent factors:

$$\overline{u}_i \Leftarrow \overline{u}_i + \alpha e_{ij} \overline{v}_j$$
$$\overline{v}_j \Leftarrow \overline{v}_j + \alpha e_{ij} \overline{u}_i$$
One cycles through the sampled entries of the matrix (making the above updates) until
convergence. The fact that we can sample entries of the matrix for updates means that
we do not need fully specified matrices in order to learn the latent factors. This basic idea
forms the foundations of recommender systems.
Problem 8.3.3 (Regularized Matrix Factorization) Let D be an n × d matrix that we want to factorize using a rank-k decomposition into U and V. Suppose that we add the regularization terms $\frac{\lambda}{2}(\|U\|_F^2 + \|V\|_F^2)$ to the objective function $\frac{1}{2}\|D - UV^T\|_F^2$. Show that the gradient descent updates need to be modified as follows:

$$U \Leftarrow U(1 - \alpha\lambda) + \alpha EV$$
$$V \Leftarrow V(1 - \alpha\lambda) + \alpha E^T U$$
The entries of the matrices U and V can be initialized as follows. First, all n × k entries in
U are independently sampled from a standard normal distribution, and then each column is normalized to unit norm. Note that the matrix U contains roughly orthogonal columns,
if n is large (see Exercise 18 of Chapter 1). The matrix V is selected to be DT U . This
approach ensures that U V T yields U U T D, where U U T is (roughly) a projection matrix
because of the approximate orthogonality of U . Thus, the initialized product is already
closely related to the target matrix.
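This initialization scheme can be written compactly; the helper name below is ours:

```python
import numpy as np

def init_factors(D, k, seed=0):
    """Initialize U with random unit-norm columns (roughly orthogonal for
    large n) and set V = D^T U, so that U V^T = U U^T D is approximately
    a projection of D onto the column space of U."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    U = rng.standard_normal((n, k))
    U /= np.linalg.norm(U, axis=0)   # normalize each column to a unit vector
    V = D.T @ U
    return U, V
```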
[Figure 8.2: An incompletely specified ratings matrix with six users (Tom, Jim, Joe, Ann, Jill, Sue) and six movies (Goodfellas, Godfather, Gladiator, Spartacus, Scarface, Ben-Hur); only a small subset of the ratings is observed.]

The rating of user i for item j is predicted as follows:

$$\hat{x}_{ij} = \sum_{s=1}^{k} u_{is} \cdot v_{js} = \overline{u}_i \cdot \overline{v}_j$$
Note the “hat” symbol (i.e., circumflex) on the rating on the left-hand side to indicate that
it is a predicted value rather than an observed value. The error eij of the prediction is
eij = xij − x̂ij for ratings that are observed.
One can then formulate the objective function in terms of the observed entries in D as
follows:
$$\text{Minimize } J = \frac{1}{2}\sum_{(i,j) \in S}\left(x_{ij} - \sum_{s=1}^{k} u_{is} \cdot v_{js}\right)^2 + \frac{\lambda}{2}\sum_{i=1}^{n}\sum_{s=1}^{k} u_{is}^2 + \frac{\lambda}{2}\sum_{j=1}^{d}\sum_{s=1}^{k} v_{js}^2$$
The main difference from the objective function in the previous section is the use of only
observed entries in S for squared error computation, and the use of regularization. As in
the previous section, we can compute the partial derivative of the objective function with
respect to the various parameters as follows:
$$\frac{\partial J}{\partial u_{iq}} = \sum_{j:(i,j) \in S}(e_{ij})(-v_{jq}) + \lambda u_{iq} \quad \forall i \in \{1 \ldots n\},\ q \in \{1 \ldots k\}$$

$$\frac{\partial J}{\partial v_{jq}} = \sum_{i:(i,j) \in S}(e_{ij})(-u_{iq}) + \lambda v_{jq} \quad \forall j \in \{1 \ldots d\},\ q \in \{1 \ldots k\}$$
One can also define these errors in matrix calculus notation. Let E be an n × d error matrix,
which is defined to be eij for each observed entry (i, j) ∈ S and 0 for each missing entry
in the ratings matrix. Note that (unlike vanilla SVD), the error matrix E is already sparse
because the vast majority of entries are not specified.
$$\frac{\partial J}{\partial U} = -EV + \lambda U$$

$$\frac{\partial J}{\partial V} = -E^T U + \lambda V$$
Note that the form of the derivative is exactly identical to traditional SVD except for the
regularization term and the difference in how the error matrix is defined (to account for
missing ratings). Then, the gradient-descent updates for the matrices U and V are as follows:
$$U \Leftarrow U - \alpha\frac{\partial J}{\partial U} = U(1 - \alpha\lambda) + \alpha EV$$

$$V \Leftarrow V - \alpha\frac{\partial J}{\partial V} = V(1 - \alpha\lambda) + \alpha E^T U$$
Here, α > 0 is the learning rate. The matrix E can be explicitly materialized as a sparse error
matrix, and the above updates can be achieved using only sparse matrix multiplications.
Although this approach is referred to as singular value decomposition in the literature on
recommender systems (because of the relationship of unconstrained matrix factorization
with the SVD optimization model), one will typically not obtain orthogonal columns of U
and V with this approach.
In stochastic gradient descent, one samples an observed entry $(i, j) \in S$, computes its error $e_{ij}$, and updates only the corresponding latent factors:

$$\overline{u}_i \Leftarrow \overline{u}_i(1 - \alpha\lambda) + \alpha e_{ij}\overline{v}_j$$
$$\overline{v}_j \Leftarrow \overline{v}_j(1 - \alpha\lambda) + \alpha e_{ij}\overline{u}_i$$
Here, α > 0 is the learning rate. Note that exactly 2k entries in the matrices U and V are
updated for each observed entry in S. Therefore, a single cycle of stochastic gradient descent
through all the observed ratings will make exactly 2k|S| updates.
One starts by initializing the matrices U and V to uniform random values in $(0, M/\sqrt{k})$, where M is the maximum value of a rating. This type of initialization ensures that the initial product $UV^T$ yields
values in a similar order of magnitude as the original ratings matrix. One then performs
the aforementioned updates to convergence. Stochastic gradient descent tends to converge
faster than gradient descent, and is often the method of choice in recommender systems.
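A minimal stochastic gradient descent loop over the observed ratings might look as follows (all names and hyperparameters are illustrative, not from the text):

```python
import numpy as np

def sgd_recommender(entries, n, d, k, M, alpha=0.005, lam=0.0, epochs=3000, seed=0):
    """SGD for rank-k factorization of a partially observed n x d matrix.

    `entries` is a list of observed triples (i, j, x_ij).  Each observed
    entry triggers the updates
        u_i <= u_i (1 - alpha*lam) + alpha * e_ij * v_j
        v_j <= v_j (1 - alpha*lam) + alpha * e_ij * u_i,
    so a full cycle makes exactly 2k|S| scalar updates.
    """
    rng = np.random.default_rng(seed)
    U = rng.uniform(0, M / np.sqrt(k), (n, k))   # initialization in (0, M/sqrt(k))
    V = rng.uniform(0, M / np.sqrt(k), (d, k))
    for _ in range(epochs):
        for i, j, x in entries:
            e = x - U[i] @ V[j]                  # error of the sampled entry
            U[i], V[j] = (U[i] * (1 - alpha * lam) + alpha * e * V[j],
                          V[j] * (1 - alpha * lam) + alpha * e * U[i])
    return U, V
```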
Another approach is to set each partial derivative to zero and solve for the corresponding parameter while holding the other parameters fixed:

$$\frac{\partial J}{\partial u_{iq}} = \sum_{j:(i,j) \in S}(e_{ij})(-v_{jq}) + \lambda u_{iq} = 0$$

$$u_{iq}\left(\lambda + \sum_{j:(i,j) \in S} v_{jq}^2\right) = \sum_{j:(i,j) \in S}(e_{ij} + u_{iq}v_{jq})v_{jq}$$

$$u_{iq} = \frac{\sum_{j:(i,j) \in S}(e_{ij} + u_{iq}v_{jq})v_{jq}}{\lambda + \sum_{j:(i,j) \in S} v_{jq}^2}$$

In the second step of the above algebraic manipulation, we added the quantity $\sum_{j:(i,j) \in S} u_{iq}v_{jq}^2$ to both sides in order to create a stable form of the update. The final form of the algebraic equation contains $u_{iq}$ on both sides, and therefore it provides an iterative update. One can also derive a similar iterative update for each $v_{jq}$. The updates for the various values of $u_{iq}$ and $v_{jq}$ need to be performed sequentially as follows:

$$u_{iq} \Leftarrow \frac{\sum_{j:(i,j) \in S}(e_{ij} + u_{iq}v_{jq})v_{jq}}{\lambda + \sum_{j:(i,j) \in S} v_{jq}^2} \quad \forall i, q$$

$$v_{jq} \Leftarrow \frac{\sum_{i:(i,j) \in S}(e_{ij} + u_{iq}v_{jq})u_{iq}}{\lambda + \sum_{i:(i,j) \in S} u_{iq}^2} \quad \forall j, q$$
One simply starts with random values of the parameters in the matrices U and V, and performs the above updates. One cycles through the (n + d) · k parameters in U and V with these updates until convergence is reached.
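The sequential updates above can be sketched as follows (a didactic implementation; the names are ours, and no attempt is made at efficiency):

```python
import numpy as np

def coordinate_update_cycle(D, observed, U, V, lam):
    """One cycle of the sequential updates for all u_iq and v_jq.

    `observed` is a boolean mask of the entries in S.  Each parameter is
    set to (sum (e_ij + u_iq v_jq) v_jq) / (lam + sum v_jq^2), with sums
    restricted to observed entries (and symmetrically for v_jq)."""
    n, k = U.shape
    d = V.shape[0]
    for i in range(n):
        cols = np.where(observed[i])[0]
        for q in range(k):
            e = D[i, cols] - U[i] @ V[cols].T          # errors in row i
            num = ((e + U[i, q] * V[cols, q]) * V[cols, q]).sum()
            U[i, q] = num / (lam + (V[cols, q] ** 2).sum())
    for j in range(d):
        rows = np.where(observed[:, j])[0]
        for q in range(k):
            e = D[rows, j] - U[rows] @ V[j]            # errors in column j
            num = ((e + U[rows, q] * V[j, q]) * U[rows, q]).sum()
            V[j, q] = num / (lam + (U[rows, q] ** 2).sum())
    return U, V
```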
1. The quantities of items bought by a user are nonnegative values. The rows of the matrix correspond to users, and the columns correspond to items. The (i, j)th entry corresponds to the number of units bought by user i for item j.
Why is the nonnegativity of factors useful? As we will see later, the nonnegativity of the
factor matrices results in a very high level of interpretability of the factorization. Secondly,
the nonnegativity of the factor matrices plays a role in regularizing the factorization. While
the error of the factorization always increases by adding constraints such as nonnegativ-
ity, the predictions obtained from the factorization often improve for out-of-sample data
(such as making predictions with missing data). This is an example of how the goals of
optimization in machine learning are often different from those of traditional optimization
(cf. Section 4.5.3 of Chapter 4).
$$\text{Minimize } J = \frac{1}{2}\|D - UV^T\|_F^2 + \frac{\lambda}{2}\|U\|_F^2 + \frac{\lambda}{2}\|V\|_F^2$$

subject to:

$$U \geq 0, \quad V \geq 0$$
It is evident that this problem differs from unconstrained matrix factorization only in terms
of the addition of the nonnegativity constraints. These are box constraints, which are par-
ticularly easy to address in the context of constrained optimization (cf. Section 6.3.2 of
Chapter 6).
$$\frac{\partial J}{\partial U} = -(D - UV^T)V + \lambda U$$

$$\frac{\partial J}{\partial V} = -(D - UV^T)^T U + \lambda V$$
Therefore, the gradient-descent updates (without worrying about the nonnegativity con-
straints) are as follows:
$$U \Leftarrow U - \alpha\frac{\partial J}{\partial U} = U(1 - \alpha\lambda) + \alpha(D - UV^T)V$$

$$V \Leftarrow V - \alpha\frac{\partial J}{\partial V} = V(1 - \alpha\lambda) + \alpha(D - UV^T)^T U$$
The main difference is that we add two steps to the updates to ensure nonnegativity of each
matrix entry:
U ⇐ max{U, 0}, V ⇐ max{V, 0}
This procedure is based on the ideas discussed in Section 6.3.2 on box constraints. In
practice, projected gradient descent is used rarely for nonnegative matrix factorization.
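For completeness, a minimal sketch of the projected gradient approach (illustrative names and step sizes):

```python
import numpy as np

def nmf_projected_gd(D, k, alpha=0.02, lam=0.0, iters=6000, seed=0):
    """Projected gradient descent for nonnegative factorization: an
    unconstrained gradient step followed by projection onto the box
    U >= 0, V >= 0."""
    rng = np.random.default_rng(seed)
    n, d = D.shape
    U = rng.uniform(0, 1, (n, k))
    V = rng.uniform(0, 1, (d, k))
    for _ in range(iters):
        E = D - U @ V.T
        U, V = (U * (1 - alpha * lam) + alpha * E @ V,
                V * (1 - alpha * lam) + alpha * E.T @ U)
        U = np.maximum(U, 0)     # projection step for nonnegativity
        V = np.maximum(V, 0)
    return U, V
```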
A more popular approach is based on a Lagrangian relaxation of the nonnegativity constraints, with nonnegative Lagrange multipliers $\alpha_{ir}$ for the constraints on U and $\beta_{jr}$ for the constraints on V:

$$L = \frac{1}{2}\|D - UV^T\|_F^2 + \frac{\lambda}{2}\|U\|_F^2 + \frac{\lambda}{2}\|V\|_F^2 - \sum_{i=1}^{n}\sum_{r=1}^{k} u_{ir}\alpha_{ir} - \sum_{j=1}^{d}\sum_{r=1}^{k} v_{jr}\beta_{jr} \tag{8.4}$$
As discussed in Section 6.4, the first step is to compute the gradient of the Lagrangian
relaxation with respect to the (minimization) optimization variables uis and vjs . Therefore,
we have:
$$\frac{\partial L}{\partial u_{is}} = -(DV)_{is} + (UV^TV)_{is} + \lambda u_{is} - \alpha_{is} \quad \forall i \in \{1, \ldots, n\},\ s \in \{1, \ldots, k\} \tag{8.6}$$

$$\frac{\partial L}{\partial v_{js}} = -(D^TU)_{js} + (VU^TU)_{js} + \lambda v_{js} - \beta_{js} \quad \forall j \in \{1, \ldots, d\},\ s \in \{1, \ldots, k\} \tag{8.7}$$
These partial derivatives are set to zero in order to obtain the following conditions:

$$-(DV)_{is} + (UV^TV)_{is} + \lambda u_{is} - \alpha_{is} = 0 \quad \forall i \in \{1, \ldots, n\},\ s \in \{1, \ldots, k\} \tag{8.8}$$

$$-(D^TU)_{js} + (VU^TU)_{js} + \lambda v_{js} - \beta_{js} = 0 \quad \forall j \in \{1, \ldots, d\},\ s \in \{1, \ldots, k\} \tag{8.9}$$
We would like to eliminate the Lagrangian parameters and set up the optimization condi-
tions purely in terms of U and V . In this context, the complementary slackness components
of the Kuhn-Tucker optimality conditions turn out to be very helpful. These conditions are
uis αis = 0 and vjs βjs = 0 over all parameters. By multiplying Equation 8.8 with uis and
multiplying Equation 8.9 with vjs , one obtains a condition purely in terms of the primal
variables:
$$-(DV)_{is}u_{is} + (UV^TV)_{is}u_{is} + \lambda u_{is}^2 - \underbrace{\alpha_{is}u_{is}}_{0} = 0 \quad \forall i \in \{1, \ldots, n\},\ s \in \{1, \ldots, k\} \tag{8.10}$$

$$-(D^TU)_{js}v_{js} + (VU^TU)_{js}v_{js} + \lambda v_{js}^2 - \underbrace{\beta_{js}v_{js}}_{0} = 0 \quad \forall j \in \{1, \ldots, d\},\ s \in \{1, \ldots, k\} \tag{8.11}$$
One can rewrite these optimality conditions, so that a single parameter occurs on one side
of the condition:
$$u_{is} = \frac{[(DV)_{is} - \lambda u_{is}]u_{is}}{(UV^TV)_{is}} \quad \forall i \in \{1, \ldots, n\},\ s \in \{1, \ldots, k\} \tag{8.12}$$

$$v_{js} = \frac{[(D^TU)_{js} - \lambda v_{js}]v_{js}}{(VU^TU)_{js}} \quad \forall j \in \{1, \ldots, d\},\ s \in \{1, \ldots, k\} \tag{8.13}$$
The aforementioned conditions can be used in order to perform iterative updates. A small value of $\epsilon$ is typically added to the denominator to avoid ill conditioning. Therefore, the iterative approach starts by initializing the parameters in U and V to nonnegative random values in (0, 1) and then uses the following updates:

$$u_{is} \Leftarrow \frac{[(DV)_{is} - \lambda u_{is}]u_{is}}{(UV^TV)_{is} + \epsilon} \quad \forall i, s$$

$$v_{js} \Leftarrow \frac{[(D^TU)_{js} - \lambda v_{js}]v_{js}}{(VU^TU)_{js} + \epsilon} \quad \forall j, s$$
These iterations are then repeated to convergence. Improved initialization provides signifi-
cant advantages [76].
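These multiplicative updates can be sketched as follows; with λ = 0 they reduce to the classical Lee-Seung updates for the Frobenius norm objective. The function name and defaults are illustrative:

```python
import numpy as np

def nmf_multiplicative(D, k, lam=0.0, iters=3000, eps=1e-9, seed=0):
    """Multiplicative updates derived from the optimality conditions:
        u_is <= u_is * ((D V)_is - lam*u_is) / ((U V^T V)_is + eps)
    and symmetrically for v_js."""
    rng = np.random.default_rng(seed)
    n, d = D.shape
    U = rng.uniform(0.1, 1, (n, k))
    V = rng.uniform(0.1, 1, (d, k))
    for _ in range(iters):
        # max with 0 guards against a negative numerator when lam > 0
        U *= np.maximum(D @ V - lam * U, 0) / (U @ (V.T @ V) + eps)
        V *= np.maximum(D.T @ U - lam * V, 0) / (V @ (U.T @ U) + eps)
    return U, V
```

Note that the updates preserve nonnegativity automatically, since each factor is multiplied by a nonnegative ratio.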
As in all other forms of matrix factorization, it is possible to convert the factoriza-
tion U V T into the three-way factorization QΣP T by using the approach discussed in
Section 7.2.7 of Chapter 7. For a nonnegative factorization, it makes sense to use L1 -
normalization on each column of U and V , so that the columns of the resulting matrices Q
and P each sum to 1. This type of normalization makes nonnegative factorization similar to
a closely related factorization known as Probabilistic Latent Semantic Analysis (PLSA). The main
difference between PLSA and nonnegative matrix factorization is that the former uses a
maximum likelihood optimization function (or I-divergence objective) whereas nonnegative
matrix factorization (typically) uses the Frobenius norm. Refer to Section 8.4.5.
One can interpret a rank-k factorization as the sum of k rank-1 factorizations, where $\overline{U}_r$ and $\overline{V}_r$ denote the rth columns of U and V, respectively:

$$D \approx \sum_{r=1}^{k} \overline{U}_r \overline{V}_r^T \tag{8.16}$$
[Figure 8.3(a): A rank-2 nonnegative factorization D ≈ UV^T of the cats/cars matrix. The rows of D correspond to documents X1 through X6, and the columns to the terms lion, tiger, cheetah, jaguar, porsche, and ferrari.]

    D (documents x terms):     U (documents x topics):    V^T (topics x terms):
    X1:  2 2 1 2 0 0           X1:  2 0                   CATS: 1 1 1 1 0 0
    X2:  2 3 3 3 0 0           X2:  3 0                   CARS: 0 0 0 1 1 1
    X3:  1 1 1 1 0 0           X3:  1 0
    X4:  2 2 2 3 1 1           X4:  2 1
    X5:  0 0 0 1 1 1           X5:  0 1
    X6:  0 0 0 2 1 2           X6:  0 2
[Figure 8.3(b): The corresponding three-way factorization D ≈ QΣP^T, in which Q and P are obtained by L1-normalizing the columns of U and V, and the diagonal matrix Σ = diag(32, 12) absorbs the normalization factors for the cats and cars topics, respectively.]
    D =
                  lion  tiger  cheetah  jaguar  porsche  ferrari
    Document-1     2     2       1       2        0        0
    Document-2     2     3       3       3        0        0
    Document-3     1     1       1       1        0        0
    Document-4     2     2       2       3        1        1
    Document-5     0     0       0       1        1        1
    Document-6     0     0       0       2        1        2
This matrix represents topics related to both cars and cats. The first three documents are
related to cats, the fourth is related to both, and the last two are related to cars. The
polysemous word “jaguar” is present in documents of both topics.
A highly interpretable nonnegative factorization of rank-2 is shown in Figure 8.3(a). We
have shown an approximate decomposition containing only integers for simplicity, although
the optimal solution would (almost always) be dominated by floating point numbers in prac-
tice. It is clear that the first latent concept is related to cats and the second latent concept
is related to cars. Furthermore, documents are represented by two non-negative coordinates
indicating their affinity to the two topics. Correspondingly, the first three documents have
strong positive coordinates for cats, the fourth has strong positive coordinates in both, and
[Figure 8.4: The factorization of Figure 8.3(a) expressed as the sum of two rank-1 matrices, one per latent component. The columns correspond to the terms lion, tiger, cheetah, jaguar, porsche, and ferrari.]

    Latent component (cats):       Latent component (cars):
    X1:  2 2 2 2 0 0               X1:  0 0 0 0 0 0
    X2:  3 3 3 3 0 0               X2:  0 0 0 0 0 0
    X3:  1 1 1 1 0 0               X3:  0 0 0 0 0 0
    X4:  2 2 2 2 0 0               X4:  0 0 0 1 1 1
    X5:  0 0 0 0 0 0               X5:  0 0 0 1 1 1
    X6:  0 0 0 0 0 0               X6:  0 0 0 2 2 2
the last two belong only to cars. The matrix V tells us that the vocabularies of the various topics are as follows:

Cats: lion, tiger, cheetah, jaguar
Cars: jaguar, porsche, ferrari
It is noteworthy that the polysemous word “jaguar” is included in the vocabulary of both
topics, and its usage is automatically inferred from its context (i.e., other words in doc-
ument) during the factorization process. This fact becomes especially evident when we
decompose the original matrix into two rank-1 matrices according to Equation 8.16. This
decomposition is shown in Figure 8.4 in which the rank-1 matrices for cats and cars are
shown. It is particularly interesting that the occurrences of the polysemous word “jaguar”
are nicely divided up into the two topics, which roughly correspond with their usage in
these topics.
As discussed in Section 7.2.7 of Chapter 7, any two-way matrix factorization can be
converted into a standardized three-way factorization. In the case of nonnegative matrix
factorization, it is common to use L1 -normalization, rather than L2 -normalization (which is
used in SVD). The three-way normalized representation is shown in Figure 8.3(b), and
it tells us a little bit more about the relative frequencies of the two topics. Since the
diagonal entry in Σ is 32 for cats in comparison with 12 for cars, it indicates that the
topic of cats is more dominant than cars. This is consistent with the observation that
more documents and terms in the collection are associated with cats as compared to
cars.
The I-divergence version of nonnegative matrix factorization uses the following objective function:

$$\text{Minimize } J = \sum_{i=1}^{n}\sum_{j=1}^{d}\left(D_{ij}\log\left(\frac{D_{ij}}{(UV^T)_{ij}}\right) - D_{ij} + (UV^T)_{ij}\right)$$

subject to:

$$U \geq 0, \quad V \geq 0$$
This formulation takes on its minimum value at $D = UV^T$. The reader is advised to solve the following problem to obtain more insight on this point:
Problem 8.4.1 Consider the following function F(x):

$$F(x) = a \cdot \log(a/x) - a + x$$

Here, a is a constant. Show that the function achieves its minimum value at x = a.
In nonnegative matrix factorization, the function F(x) is applied to each reconstructed entry x and (corresponding) observed entry a, and then this value is aggregated over all matrix entries. In the case of the Frobenius norm, the function $(x - a)^2$ is used instead of F(x). In both cases, the objective function tries to make x as close to a as possible. The
model requires the following iterative solution for U = [uis ] and V = [vjs ]:
$$u_{is} \Leftarrow u_{is}\frac{\sum_{j=1}^{d}[D_{ij}v_{js}/(UV^T)_{ij}]}{\sum_{j=1}^{d} v_{js}} \quad \forall i, s$$

$$v_{js} \Leftarrow v_{js}\frac{\sum_{i=1}^{n}[D_{ij}u_{is}/(UV^T)_{ij}]}{\sum_{i=1}^{n} u_{is}} \quad \forall j, s$$
The two-way factorization can be converted into a normalized three-way factorization using
the approach discussed in Section 7.2.7 of Chapter 7. The three-way factorization can be
interpreted from the perspective of a probabilistic generative model, which is identical to
probabilistic latent semantic analysis.
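The I-divergence updates can be sketched as follows (an illustrative implementation with names of our choosing):

```python
import numpy as np

def nmf_idivergence(D, k, iters=3000, eps=1e-12, seed=0):
    """Multiplicative updates for the I-divergence objective:
        u_is <= u_is * (sum_j D_ij v_js / (U V^T)_ij) / (sum_j v_js)
    and symmetrically for v_js."""
    rng = np.random.default_rng(seed)
    n, d = D.shape
    U = rng.uniform(0.1, 1, (n, k))
    V = rng.uniform(0.1, 1, (d, k))
    for _ in range(iters):
        R = D / (U @ V.T + eps)              # ratio matrix D_ij / (UV^T)_ij
        U *= (R @ V) / (V.sum(axis=0) + eps)
        R = D / (U @ V.T + eps)              # recompute after updating U
        V *= (R.T @ U) / (U.sum(axis=0) + eps)
    return U, V
```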
In weighted matrix factorization, we have a weight wij associated with the (i, j)th entry
of the n × d matrix D = [xij ] to be factorized. As in the case of unconstrained matrix
factorization, we assume that the two factor matrices are the n × k matrix U and the d × k
matrix V . Then, the objective function of weighted matrix factorization is as follows:
$$\text{Minimize } J = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{d} w_{ij}\left(x_{ij} - \sum_{s=1}^{k} u_{is} \cdot v_{js}\right)^2 + \frac{\lambda}{2}\sum_{i=1}^{n}\sum_{s=1}^{k} u_{is}^2 + \frac{\lambda}{2}\sum_{j=1}^{d}\sum_{s=1}^{k} v_{js}^2$$
Note that this objective function is different from that of unconstrained matrix factorization
only in terms of the weights wij of the entries. The partial derivative of the objective function
with respect to the various parameters can be expressed in terms of the error eij = xij − x̂ij
of the factorization as follows:
$$\frac{\partial J}{\partial u_{iq}} = \sum_{j=1}^{d}(w_{ij}e_{ij})(-v_{jq}) + \lambda u_{iq} \quad \forall i \in \{1 \ldots n\},\ q \in \{1 \ldots k\}$$

$$\frac{\partial J}{\partial v_{jq}} = \sum_{i=1}^{n}(w_{ij}e_{ij})(-u_{iq}) + \lambda v_{jq} \quad \forall j \in \{1 \ldots d\},\ q \in \{1 \ldots k\}$$
The main difference from unconstrained matrix factorization is in terms of weighting the
errors with wij . In order to express the aforementioned derivatives in matrix form, we define
E as an n × d error matrix in which the (i, j)th entry is eij . Furthermore W = [wij ] is an
n × d matrix containing the weights of various entries.
$$\frac{\partial J}{\partial U} = -(W \odot E)V + \lambda U$$

$$\frac{\partial J}{\partial V} = -(W \odot E)^T U + \lambda V$$

Here, the notation $\odot$ indicates elementwise multiplication between two matrices of exactly
the same size. The weight matrix W essentially controls the importance of the errors of
the individual entries in gradient descent. One can, therefore, express the gradient descent
updates for the matrices U and V as follows:
$$U \Leftarrow U - \alpha\frac{\partial J}{\partial U} = U(1 - \alpha\lambda) + \alpha(W \odot E)V$$

$$V \Leftarrow V - \alpha\frac{\partial J}{\partial V} = V(1 - \alpha\lambda) + \alpha(W \odot E)^T U$$
Here, α > 0 is the learning rate.
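A sketch of these weighted updates (the function name and defaults are ours):

```python
import numpy as np

def weighted_mf(D, W, k, alpha=0.02, lam=0.0, iters=4000, seed=0):
    """Gradient descent for weighted factorization.  The Hadamard product
    W * E reweights each entry's error before the usual updates."""
    rng = np.random.default_rng(seed)
    n, d = D.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((d, k))
    for _ in range(iters):
        WE = W * (D - U @ V.T)   # elementwise weighting of the errors
        U, V = (U * (1 - alpha * lam) + alpha * WE @ V,
                V * (1 - alpha * lam) + alpha * WE.T @ U)
    return U, V
```

Setting all weights to 1 recovers the unconstrained updates of Section 8.3.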
[Figure 8.5: Deriving the n × d data value matrix D = [x_ij] and the n × d weight matrix W = [w_ij] from the raw quantity matrix Q = [q_ij].]
not). Setting the entries to binary values makes sense in cases where the final prediction of
the factorization is intended to be binary (e.g., recommend an item or link). In such a case,
a new binary data matrix D is used in lieu of Q, where the (i, j)th entry xij of D is set to 1
when the corresponding entry of Q = [qij ] is non-zero. In other applications, the values of
the raw data matrix Q are “damped” before factorization. In other words, each raw entry
qij is replaced with the damped value xij = f (qij ), where f (·) is a damping function like the
square-root or logarithm. An example of such an approach is the GloVe embedding [101]
for factorization of matrices derived from text (cf. Section 8.5.5). The weight matrix W
is also derived as a function of the quantity matrix. This overall process is illustrated in
Figure 8.5.
The choice of the weight matrix W = [wij ] is, however, more application-specific. In
some cases, the weight matrix is set to the entries in D when they are non-zero. However,
the zero entries also need to be set to specific weights. Typically, the weight of a zero entry
is either set to a constant value or it is set to a column-specific value. Allowing non-zero
weights on zero entries amounts to using negative sampling in the context of stochastic
gradient descent. As we will see later, this type of negative sampling is important in most
applications. While the weight matrix is technically dense because of the non-zero weights
on zero entries, it can still be represented in compressed form. This is because all zero entries
in a column have the same weight, and therefore one only needs to store the column-specific
weight.
Why is this type of weighted matrix factorization more desirable than vanilla nonnegative
matrix factorization? The reason is that the data matrix Q is sparse, and the vast majority
of entries are 0s. In such cases, the fact that an entry is non-zero is more important than the
specific magnitude of that value. Factorizing the values of the matrix Q might sometimes
cause problems when the different entries of the matrix vary by orders of magnitude. This
type of situation can occur in word matrices with large variations in word frequencies,
or in graphs with power-law frequency distributions. If one simply performs value-based
factorization, the preponderance of the (relatively unimportant) zero entries and the large
magnitudes of a very small number of entries might play too large a role in the factorization.
As a result, the modeling of most of the important entries in the matrix will be poor. As
discussed in [65] in the case of ratings matrices, the general principle for treating raw numerical values is that they should be interpreted as confidence values for binary preferences, rather than as values to be predicted directly.
Of course, a zero value in a sparse matrix does not necessarily indicate zero confidence, which
is why one must resort to setting some non-zero weights on default values. In this section, we
will provide several application-specific examples of scenarios in which sparse values should
be treated as weights. Another useful property of weighted matrix factorization is that it
allows a very efficient trick for parameter learning when most of the entries are zeros.
$$\overline{u}_i \Leftarrow \overline{u}_i(1 - \alpha\lambda) + \alpha e_{ij}\overline{v}_j$$
$$\overline{v}_j \Leftarrow \overline{v}_j(1 - \alpha\lambda) + \alpha e_{ij}\overline{u}_i$$
Here, α > 0 is the learning rate, $\overline{u}_i$ represents the ith row of the n × k matrix U, and $\overline{v}_j$ represents the jth row of the d × k matrix V. Note that exactly 2k entries in the matrices
U and V are updated for each entry in the matrix.
This type of weighted matrix factorization is particularly efficient when the vast majority
of the (raw) entries in the matrix are zeros. In many applications, the number of entries in
the n × d matrix D might be very large, but the number of non-zero entries is several orders
of magnitude lower. This is common in applications such as graphs in which a 106 × 106
adjacency matrix might have only 10 non-zero entries in each row. Therefore, the weight
matrix is also sparse, and one only needs to keep track of positive sampling probabilities
(i.e., non-zero entries in the original matrix D). All the zero weights are aggregated into
a single negative sampling probability. The stochastic gradient descent procedure works as
follows:
1. A coin is tossed with probability equal to the negative sampling rate. If the coin toss
is a success, a random entry is treated as a negative entry, and an update is performed
with the random entry (assuming that the observed value of the random entry is zero).
2. If the coin toss in the previous step is a failure, then a positive entry is sampled with
probability proportional to its weight. Subsequently, stochastic gradient descent is
performed with the positive entry.
The above weighting scheme ensures that zero entries in the raw matrix Q have a non-
zero weight. These non-zero weights of zero entries define the negative sampling probability,
when added over the various zero entries. It is suggested in [65] to use a value of θ = 40.
Then, the weighted matrix factorization of D is used to find the factor matrices U and V .
The entries (i, j) with large values of (U V T )ij are suggested recommendations of item j for
user i.
a few weights can dominate the factorization. This is obviously undesirable. Therefore, one possibility is to define $w_{ij}$ with logarithmic damping, as suggested in [65]:

$$w_{ij} = 1 + \theta \cdot \log(1 + q_{ij}/\epsilon)$$
The value of θ can be tuned by testing its accuracy over a set of entries that are excluded
from the sampling process in stochastic gradient descent.
Note that a binary data matrix D is no longer used in this application. The weight wij is
defined as follows:
$$w_{ij} = \begin{cases} \min\left\{1, (q_{ij}/M)^{\alpha}\right\} & q_{ij} > 0 \\ 0 & q_{ij} = 0 \end{cases}$$
The values of M and α are recommended to be 100 and 3/4, respectively, based on empirical
considerations. It is possible to enhance this basic model in a number of ways, such as by
the use of bias variables.
Note that GloVe sets the negative sampling probability to zero, and therefore depends almost entirely on the variation among the different non-zero values of $x_{ij}$. This is an unusual and controversial design choice, and it is very different from almost all other known techniques for weighted matrix factorization. It is significant that GloVe does not try to extract binary values of $x_{ij}$ from the original quantity matrix $Q = [q_{ij}]$. Trying to use binary values of $x_{ij}$ would be disastrous in the case of GloVe (cf. Exercise 9). A directly competing method, referred to as word2vec, places great emphasis on negative sampling in order to achieve high-quality results [91, 92]. If the values of $q_{ij}$ do not vary significantly in a given
collection, it is possible for GloVe to provide overfitted results (cf. Section 8.5.2.1). This
does not seem to be the case in practice, as GloVe seems to provide reasonably good results
(based on independent evaluations by researchers and practitioners). This might possibly
be a result of the fact that there is sufficient variation among the non-zero frequency counts
(even after damping) in word-word context matrices. This property might not be the case
in other domains, and therefore one should generally be cautious of factorizations that do
not use some type of negative sampling.
The value of m is set in a domain-specific way, and it is often a small integer such as 5. Under the assumption that the matrix is sparse, the sum of the weights of the negative entries in each row is roughly $m(\sum_{s=1}^{d} q_{is})$. Therefore, the weights of the negative entries are m times the weights of the positive entries, where m is a user-driven parameter. Implicitly, the negative entries
are underweighted by this approach, because a sparse matrix will often contain negative
entries that are hundreds of times the number of positive entries, whereas m is a small value
such as 5.
A key point in logistic matrix factorization is that we would like the (learned) probability matrix P = [pij ] to have a large value of pij when xij is 1, and a small value of pij when xij is 0. This can be achieved with a log-likelihood objective function, which is defined as follows:

J = − Σ_{i=1}^{n} Σ_{j=1}^{d} w_ij [x_ij log(p_ij) + (1 − x_ij) log(1 − p_ij)]
Here, it is evident that this loss function is always nonnegative, and it takes on its minimum value of 0 when pij = xij . Recall that each pij is the (i, j)th entry of F (U V T ), and it is defined as follows:

p_ij = 1 / (1 + exp(−u_i · v_j))
Here, ui is the ith row of the n × k matrix U , and v j is the jth row of the d × k matrix V .
Therefore, one can substitute this value of pij in the objective function in order to obtain
the following loss function for logistic matrix factorization:
J = − Σ_{i=1}^{n} Σ_{j=1}^{d} w_ij [ x_ij log(1 / (1 + exp(−u_i · v_j))) + (1 − x_ij) log(1 / (1 + exp(u_i · v_j))) ]
Now that we have set up the objective function of logistic matrix factorization, it remains
to derive the gradient descent steps.
∂J/∂u_i = Σ_{j=1}^{d} [∂J/∂(u_i · v_j)] ∂(u_i · v_j)/∂u_i = Σ_{j=1}^{d} [∂J/∂(u_i · v_j)] v_j

∂J/∂v_j = Σ_{i=1}^{n} [∂J/∂(u_i · v_j)] ∂(u_i · v_j)/∂v_j = Σ_{i=1}^{n} [∂J/∂(u_i · v_j)] u_i
Note that the partial derivative of ui · v j with respect to either ui or v j is obtained using
identity (v) of Table 4.2(a). It is relatively easy to compute the partial derivative of J with
respect to ui · v j because the objective function is defined as a function of this quantity. By
computing this derivative and substituting in the above equations, we obtain the following:
∂J/∂u_i = − Σ_{j=1}^{d} w_ij x_ij v_j / (1 + exp(u_i · v_j)) + Σ_{j=1}^{d} w_ij (1 − x_ij) v_j / (1 + exp(−u_i · v_j))

∂J/∂v_j = − Σ_{i=1}^{n} w_ij x_ij u_i / (1 + exp(u_i · v_j)) + Σ_{i=1}^{n} w_ij (1 − x_ij) u_i / (1 + exp(−u_i · v_j))
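The gradients above translate directly into matrix form, because the entry-wise derivative ∂J/∂(u_i · v_j) simplifies to w_ij (p_ij − x_ij). The following is a minimal dense-matrix sketch of one gradient-descent step (the function name and step size are illustrative choices, not from the text):

```python
import numpy as np

def logistic_mf_step(X, W, U, V, alpha=0.05):
    """One full gradient-descent step of logistic matrix factorization.
    X: n x d binary matrix, W: n x d weight matrix, U: n x k, V: d x k."""
    P = 1.0 / (1.0 + np.exp(-U @ V.T))   # p_ij = sigmoid(u_i . v_j)
    G = W * (P - X)                      # entry-wise derivative dJ/d(u_i . v_j)
    return U - alpha * (G @ V), V - alpha * (G.T @ U)
```

Each sum over j (respectively i) in the gradients is expressed here as a single matrix product with V (respectively U), which is why the two sums of each gradient collapse into the one residual matrix G.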
Unlike logistic matrix factorization, the predicted value ŷij is intended to match a quantity
yij from {−1, +1}, rather than a value from {0, 1}. An important point here is that entries
with large absolute values of ŷij are not penalized, as long as their sign is correct. This is
because this factorization predicts the original entries by using the sign function on U V T
instead of using the absolute deviation from observed values:
Y ≈ sign(U V T )
This is a margin-based objective function, because an entry escapes penalty only when its predicted value matches the sign of the original binary value with a sufficient margin of 1.
Then, the overall objective function of maximum margin factorization (without regulariza-
tion) can be expressed as follows:
J = Σ_{i=1}^{n} Σ_{j=1}^{d} w_ij max{0, 1 − y_ij [u_i · v_j]}
As in the case of logistic matrix factorization, one can use the chain rule to compute the
derivative.
∂J/∂u_i = Σ_{j=1}^{d} [∂J/∂(u_i · v_j)] v_j = − Σ_{j: y_ij(u_i·v_j)<1} w_ij y_ij v_j

∂J/∂v_j = Σ_{i=1}^{n} [∂J/∂(u_i · v_j)] u_i = − Σ_{i: y_ij(u_i·v_j)<1} w_ij y_ij u_i
It is common to use L2 -regularization, in which case the gradients above are adjusted by λui
and λv j , respectively. Here, λ > 0 is the regularization parameter. Therefore, at learning
rate α > 0, the gradient-descent updates of maximum-margin matrix factorization are as
follows:
u_i ⇐ u_i(1 − αλ) + α Σ_{j: y_ij(u_i·v_j)<1} w_ij y_ij v_j   ∀i

v_j ⇐ v_j(1 − αλ) + α Σ_{i: y_ij(u_i·v_j)<1} w_ij y_ij u_i   ∀j
Just as SVMs and logistic regression provide very similar results for classification of bi-
nary labels, logistic matrix factorization and maximum margin matrix factorization provide
similar results for factorization of binary matrices.
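A batch version of these updates can be sketched in the same style as logistic matrix factorization (again, names and hyperparameter defaults are illustrative):

```python
import numpy as np

def max_margin_mf_step(Y, W, U, V, alpha=0.05, lam=0.01):
    """One gradient-descent step of maximum-margin matrix factorization.
    Y: n x d matrix in {-1, +1}, W: n x d weight matrix, U: n x k, V: d x k."""
    active = (Y * (U @ V.T) < 1).astype(float)   # entries violating the margin
    G = W * Y * active                           # only active entries contribute
    U_new = U * (1 - alpha * lam) + alpha * (G @ V)
    V_new = V * (1 - alpha * lam) + alpha * (G.T @ U)
    return U_new, V_new
```

The mask of margin-violating entries implements the restricted sums over {j : y_ij(u_i · v_j) < 1} and {i : y_ij(u_i · v_j) < 1} in the updates above.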
Table 8.1: A demographic data set containing heterogeneous data types in different columns

Age          Gender    Zip Code       Race               Education Level
(Numerical)  (Binary)  (Categorical)  (Categorical)      (Ordinal)
32           F         10598          Caucasian          Bachelors
41           M         10532          African American   Bachelors
36           M         10562          Filipino           High School
32           F         10532          Hispanic           Masters
29           F         10532          Native American    Doctorate
to encounter data matrices in which different features of the matrix might be numerical,
binary, categorical, ordinal, and so on. Table 8.1 illustrates a table of demographic data
containing heterogeneous data types in which the different columns correspond to different
data types. A natural question that arises is how one might possibly create a factorization
of a table with such bewilderingly different data types.
In this case, we have an n×d matrix D = [xij ] of heterogeneous data types and W = [wij ]
is an n × d matrix of weights. An important distinction from the scenarios we have seen
so far is that the data type of xij depends on the column index j. In order to perform the
factorization, we use an n×k matrix U and an r×k matrix V . In most forms of factorization,
the number of rows in V is equal to the number of columns d in D, whereas we have r > d
in this case. Why is r > d? The reason is that some data types like categorical data require
multiple rows for a single column of D, which is not the case in any of the models we have
seen so far. Therefore, the “reconstructed matrix” U V T does not have the same size as
the original matrix D; the reconstruction has the same number of rows as D, but it might
have a much larger number of columns. Therefore, a one-to-one correspondence of columns
between D and U V T is no longer possible, and it is sensitive to the data type at hand.
The jth column in D is associated with multiple columns in U V T , and it is assumed that
these columns are located consecutively in U V T with column indices in the range [lj , hj ].
When the column index j of D corresponds to a numeric, ordinal, or binary variable, we
will have lj = hj and therefore a single column of U V T corresponds to a single column
of D. However, for some data types like categorical data, we will have hj > lj . For each
column j in D, we define a loss function that is specific to the column at hand, and it uses
hj − lj + 2 arguments. The loss function Lj (·) of the jth column of D is defined as follows:
1. The first argument of the loss function for any entry (i, j) of the jth column of D is
the observed value of the entry xij .
3. The loss value Lij for the (i, j)th entry of D is defined using the loss function Lj
specific to column j:
L_ij = L_j(x_ij , z_{i,lj} . . . z_{i,hj})
The nature of the loss function depends heavily on the data type at hand. We have
already seen some examples of loss functions for binary and numerical variables. In
the following, we will also introduce some loss functions for categorical and ordinal
variables.
The overall objective function of the factorization can be expressed as a function of entry-
specific weights and additional regularization:
Minimize J = Σ_{i=1}^{n} Σ_{j=1}^{d} w_ij L_ij + (λ/2) (‖U‖_F² + ‖V‖_F²)
We have already seen how the loss functions for numerical and binary factorization were
derived directly from their counterparts in linear regression and binary classification. Cor-
respondingly, we can also derive the loss functions of categorical and ordinal values from
their counterparts in multinomial logistic regression and ordinal regression.
In other words, we use the ordered thresholds y1 . . . ym to define (m + 1) buckets on the real
line. The (i, j)th entry is mapped to an ordinal value depending on which bucket it falls in
on the real line. In the following discussion, we also assume (for notational convenience) that
y0 = −∞ and ym+1 = +∞. Although these (trivial) end-point intercepts do not need to be
learned, they help in reducing unnecessary case-wise analysis. For example, the prediction
x̂ij can now be collapsed into a single case as follows:
x̂ij = qth ordinal value, where q ∈ [1, m + 1] satisfies y_{q−1} ≤ z_{i,oj} ≤ y_q
There are many possible ways in which one can set up the loss function for ordinal entries.
One possible way is to use the proportional odds model in which we view the ordinal predic-
tion model as that of summing the losses of m different binary predictions for the (i, j)th
entry– the qth prediction checks whether xij and zi,oj end up on the same side of yq . Note
that this is the same approach used in binary logistic matrix factorization, except that we
have to learn multiple intercepts y1 . . . ym in this case. Then, we compute the probability
that the (i, j)th entry lies on either side of yb as follows:
P_ij(x_ij ≤ y_b) = 1 / (1 + exp(z_{i,oj} − y_b))

P_ij(x_ij > y_b) = 1 / (1 + exp(−z_{i,oj} + y_b))
It is easy to verify that the sum of the above two probabilities is 1. Note that larger values
of yb will increase the probability Pij (xij ≤ yb ), which makes sense in this case. At b = 0
and b = m + 1, the values of yb are fixed to −∞ and +∞, respectively. In such cases, it can
be easily verified that each of the aforementioned probabilities is either a 0 or 1.
Suppose that the observed value of the ordinal variable xij lies between the current
values of ys and ys+1 for some s ∈ {0, . . . , m}. Then, we would like P (xij > yb ) to be as
large as possible for b ≤ s and we would like P (xij ≤ yb ) to be as large as possible for b > s.
This is achieved with the use of the following loss function:
L_ij = − Σ_{b=1}^{s} log[P_ij(x_ij > y_b)] − Σ_{b=s+1}^{m} log[P_ij(x_ij ≤ y_b)]
Note that if s is 0, the first set of terms vanish. Similarly, if s is m, the second set of terms
vanish. This loss function is very similar to binary logistic prediction; the main difference
from binary logistic modeling is that we have m different binary predictions corresponding
to each threshold ys , and we want to reward predictions on the correct side of each of m
thresholds. The loss function contains the sum of m different (negative) rewards. Here, it is
important to note that each ys is a variable. Therefore, the gradient-descent procedure not
only has to update the factor matrices, but it also has to update the thresholds y1 . . . ym .
In all problems we have seen so far, one can always substitute hinge loss wherever
logistic loss is used. This is because of the similarity of these loss functions (cf. Figure 4.9
of Chapter 4). Suppose that the ordinal variable xij lies between the current values of ys
and ys+1 for some s ∈ {0, . . . , m}. As in the case of logistic model, we can view the loss
function as the sum of m different losses for the m different binary predictions (one for each
non-trivial threshold yb ). The loss function penalizes cases in which zi,oj either lies on the
wrong side of each yb , or it lies on the correct side (but without sufficient margin). This is
achieved by defining the loss function Lij as follows:
L_ij = Σ_{b=1}^{s} max(1 − z_{i,oj} + y_b, 0) + Σ_{b=s+1}^{m} max(1 + z_{i,oj} − y_b, 0)
The hinge loss has the advantage of having a simpler derivative. This model is also available
in the Julia package discussed in [128].
∂J/∂u_iq = − Σ_{j=1}^{d} e^D_ij v_jq − β Σ_{p=1}^{m} e^M_ip w_pq + λ u_iq   ∀i ∈ {1 . . . n}, ∀q ∈ {1 . . . k}

∂J/∂v_jq = − Σ_{i=1}^{n} e^D_ij u_iq + λ v_jq   ∀j ∈ {1 . . . d}, ∀q ∈ {1 . . . k}

∂J/∂w_pq = − β Σ_{i=1}^{n} e^M_ip u_iq + λ w_pq   ∀p ∈ {1 . . . m}, ∀q ∈ {1 . . . k}
These gradients can be used to update the entire set of (n + m + d)k parameters with a
step-size of α. This approach corresponds to vanilla gradient descent. It is also possible to
use stochastic gradient descent, which effectively computes the gradients with respect to
residual errors in randomly sampled entries of the matrices. One can sample any entry in
either the document-term matrix or the adjacency matrix, and then perform the gradient-
descent step with respect to the error in this single entry. The probability of sampling each entry is fixed irrespective of which matrix it is drawn from. Consider a case in which the (i, j)th entry in the document-term matrix is sampled with error e^D_ij. Then, the following updates are executed for each q ∈ {1 . . . k} and step-size α:

u_iq ⇐ u_iq(1 − α·λ) + α e^D_ij v_jq
v_jq ⇐ v_jq(1 − α·λ) + α e^D_ij u_iq

On the other hand, if the (i, p)th entry in the adjacency matrix is sampled with error e^M_ip, then the following updates are performed for each q ∈ {1 . . . k} and step-size α:

u_iq ⇐ u_iq(1 − α·λ) + α·β e^M_ip w_pq
w_pq ⇐ w_pq(1 − α·λ) + α·β e^M_ip u_iq
One can set up an objective function that minimizes the sum of the squares of the errors
over all three matrices. It is even possible to weight the different types of errors differently,
depending on the application at hand. We leave the derivation of the gradient descent steps
as an exercise for the reader.
Problem 8.8.1 Write down the sum-of-squared objective function for the factorization of
the matrices D1 , D2 , and A, as discussed above. Derive the gradient descent steps for the
entries in these matrices. You may introduce any notation as needed for this problem.
All the settings used for shared matrix factorization are very similar; we have a set of matri-
ces in which some of the modalities are shared, and we wish to extract latent representations
of the shared relationships implicit in these matrices. The key in this entire process is to
use shared latent factors between different modalities so that they are able to incorporate
the impact of these relationships in an indirect (i.e., latent) way within the extracted em-
bedding. A single set of factors is introduced for each shared modality, and each matrix is
factorized. The sum-of-squared objective function is used to determine the gradient-descent
updates.
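The sampling scheme can be sketched as follows, assuming squared-error residuals for the two factorizations D ≈ U V^T and M ≈ U W^T that underlie the gradients shown earlier (the function name and epoch structure are my own):

```python
import random
import numpy as np

def shared_mf_sgd_epoch(D, M, U, V, W, alpha=0.02, beta=1.0, lam=0.01):
    """One epoch of entry-wise SGD for the shared factorizations D ~ U V^T
    (document-term) and M ~ U W^T (adjacency), with the row factors U shared.
    A sketch under a squared-error objective; symbols follow the gradients above."""
    n, d = D.shape
    _, m = M.shape
    entries = [("D", i, j) for i in range(n) for j in range(d)]
    entries += [("M", i, p) for i in range(n) for p in range(m)]
    random.shuffle(entries)                      # visit entries in random order
    for which, i, j in entries:
        if which == "D":
            e = D[i, j] - U[i] @ V[j]            # residual e^D_ij
            u_old = U[i].copy()
            U[i] += alpha * (e * V[j] - lam * U[i])
            V[j] += alpha * (e * u_old - lam * V[j])
        else:
            e = M[i, j] - U[i] @ W[j]            # residual e^M_ip
            u_old = U[i].copy()
            U[i] += alpha * (beta * e * W[j] - lam * U[i])
            W[j] += alpha * (beta * e * u_old - lam * W[j])
    return U, V, W
```

Because every update touches only one row of two factor matrices, the cost of an epoch is linear in the number of entries processed, which is the main attraction of the stochastic approach.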
[Figure 8.6 depicts a sparse binary regressor matrix whose columns correspond to movies (TERMINATOR, SPIDERMAN, SHREK, GANDHI), descriptive keywords (family fun, ludicrous, boring, comics, action), and users (SAYANI, DAVID, MARK, JOSE, ANN, JIM), together with a numerical RATING regressand for each row.]
Figure 8.6: An example of a sparse regression modeling problem with heterogeneous attributes
Unfortunately, the sparsity of the data in Figure 8.6 ensures that a least-squares regres-
sion method does rather poorly. For example, each row might contain only three or four
non-zero entries. In such cases, linear regression may not be able to model the dependent
variable very well, because the presence of a small number of non-zero entries provides lit-
tle information. Therefore, a second possibility is to use higher-order interactions between
the attributes in which we use the simultaneous presence of multiple entries for model-
ing. As a practical matter, one typically chooses to use second-order interactions between
attributes, which corresponds to second-order polynomial regression. However, as we will
discuss below, an attempt to do so leads to overfitting, which is exacerbated by the sparse
data representation.
Let d1 . . . dr be the number of attributes in each of the r data modalities, such as text, images, network data, and so on. Therefore, the total number of attributes is given by p = Σ_{k=1}^{r} d_k. We represent the variables of the row by x1 . . . xp , most of which are 0s, and
a few might be non-zero. In many natural applications in the recommendation domain, the
values of xi might be binary. Furthermore, it is assumed that a target variable is available
for each row. In the example of Figure 8.6, the target variable is the rating associated with
each row, although it could be any type of dependent variable in principle.
Consider the use of a regression methodology in this setting. For example, the simplest possible prediction would be to use linear regression with the variables x1 . . . xp :

ŷ(x) = b + Σ_{i=1}^{p} w_i x_i    (8.18)
Here, b is the bias variable and wi is the regression coefficient of the ith attribute. This is in
an almost identical form to the linear regression discussed in Chapter 4, except that we have
explicitly used a global bias variable b. Although this form can provide reasonable results in
some cases, it is often not sufficient for sparse data in which a lot of information is captured
by the correlations between various attributes. For example, in a recommender system, the
co-occurrence of a user-item pair is far more informative than the separate coefficients of
users and items. Therefore, the key is to use a second-order regression coefficient sij , which
captures the coefficient of the interaction between the ith and jth attribute.
ŷ(x) = b + Σ_{i=1}^{p} w_i x_i + Σ_{i=1}^{p} Σ_{j=i+1}^{p} s_ij x_i x_j    (8.19)
Note that one could also include the second-order term Σ_{i=1}^{p} s_ii x_i², although xi is often drawn from sparse domains with little variation in non-zero values of xi , and the addition of such a term is not always helpful. For example, if the value of xi is binary (as is common), the coefficient of x_i² would be redundant with respect to that of xi .
One observation is that the above model is very similar to what one would obtain with
the use of kernel regression with a second-order polynomial kernel. In sparse domains like
text, such kernels often overfit the data, especially when the dimensionality is large and the
data is sparse. Even for an application in a single domain (e.g., short-text tweets), the value of d is greater than 10^5, and therefore the number of second-order coefficients is more than 10^10. With any training data set containing fewer than 10^10 points, one would perform quite poorly. This problem is exacerbated by sparsity, in which pairs of attributes co-occur rarely
in the training data, and may not generalize to the test data. For example, in a recommender
application, a particular user-item pair may occur only once in the entire training data, and
it will not occur in the test data if it occurs in the training data. In fact, all the user-item
pairs that occur in the test data will not have occurred in the training data. How, then,
does one learn the interaction coefficients sij for such user-item pairs? Similarly, in a short-
text mining application, the words “movie” and “film” may occur together, and the words
“comedy” and “film” may also occur together, but the words “comedy” and “movie” might
never have occurred together in the training data. What does one do, if the last pair occurs
in the test data?
A key observation is that one can use the learned values of sij for the other two pairs
(i.e., “comedy”/“film” and “movie”/“‘film”) in order to make some inferences about the
interaction coefficient for the pair “comedy” and “movie.” How does one achieve this goal?
The key idea is to assume that the d × d matrix S = [sij ] of second-order coefficients has a
low-rank structure for some d × k matrix V = [vis ]:
S =VVT (8.20)
Here, k is the rank of the factorization. Intuitively, one can view Equation 8.20 as a kind of
regularization constraint on the (massive number of) second-order coefficients in order to
prevent overfitting. Therefore, if vi = [vi1 . . . vik ] is the k-dimensional row vector represent-
ing the ith row of V , we have:
sij = vi · vj (8.21)
By substituting Equation 8.21 in the prediction function of Equation 8.19, one obtains the
following:
ŷ(x) = b + Σ_{i=1}^{p} w_i x_i + Σ_{i=1}^{p} Σ_{j=i+1}^{p} (v_i · v_j) x_i x_j    (8.22)
The variables to be learned are b, the different values of wi , and each of the vectors vi .
Although the number of interaction terms might seem large, most of them will evaluate
to zero in sparse settings in Equation 8.22. This is one of the reasons that factorization
machines are designed to be used only in sparse settings where most of the terms of Equa-
tion 8.22 evaluate to 0. A crucial point is that we only need to learn the O(d · k) parameters
represented by v1 . . . vd in lieu of the O(d²) parameters in [sij ]d×d .
A natural approach to solve this problem is to use the stochastic gradient-descent
method, in which one cycles through the observed values of the dependent variable to
compute the gradients with respect to the error in the observed entry. The update step with respect to any particular model parameter θ ∈ {b, wi , vis } depends on the error e(x) = y(x) − ŷ(x) between the predicted and observed values:

θ ⇐ θ(1 − α · λ) + α · e(x) ∂ŷ(x)/∂θ    (8.23)
Here, α > 0 is the learning rate, and λ > 0 is the regularization parameter. The partial
derivative in the update equation is defined as follows:
∂ŷ(x)/∂θ = 1                                            if θ is b
∂ŷ(x)/∂θ = x_i                                          if θ is w_i       (8.24)
∂ŷ(x)/∂θ = x_i Σ_{j=1}^{p} v_js · x_j − v_is · x_i²     if θ is v_is
The term L_s = Σ_{j=1}^{p} v_js · x_j in the third case is noteworthy. To avoid redundant effort, this term can be pre-stored while evaluating ŷ(x) for computation of the error term e(x) = y(x) − ŷ(x). This is because Equation 8.22 can be algebraically rearranged as follows:

ŷ(x) = b + Σ_{i=1}^{p} w_i x_i + (1/2) Σ_{s=1}^{k} ( [Σ_{j=1}^{p} v_js · x_j]² − Σ_{j=1}^{p} v_js² · x_j² )

     = b + Σ_{i=1}^{p} w_i x_i + (1/2) Σ_{s=1}^{k} ( L_s² − Σ_{j=1}^{p} v_js² · x_j² )
Furthermore, the parameters vi and wi do not need to be updated when xi = 0. This allows
for an efficient update process in sparse settings, which is linear in both the number of
non-zero entries and the value of k.
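The efficient prediction and the sparse update rule can be sketched together as follows (the function names are mine; dense NumPy vectors are used for clarity, with updates restricted to the non-zero features):

```python
import numpy as np

def fm_predict(x, b, w, V):
    """O(p*k) factorization-machine prediction using the rearranged form
    y = b + w.x + 0.5 * sum_s (L_s^2 - sum_j v_js^2 x_j^2), L_s = sum_j v_js x_j."""
    L = V.T @ x                                   # the k pre-stored terms L_s
    y = b + w @ x + 0.5 * (L @ L - np.sum((V ** 2).T @ (x ** 2)))
    return y, L

def fm_sgd_update(x, y, b, w, V, alpha=0.01, lam=0.001):
    """One stochastic update for a single training pair (x, y); only the
    parameters attached to non-zero features are touched."""
    y_hat, L = fm_predict(x, b, w, V)
    e = y - y_hat
    b = b * (1 - alpha * lam) + alpha * e                     # d y_hat/d b = 1
    nz = x != 0                                               # non-zero features only
    w[nz] = w[nz] * (1 - alpha * lam) + alpha * e * x[nz]     # d y_hat/d w_i = x_i
    # d y_hat/d v_is = x_i * L_s - v_is * x_i^2
    grad_V = np.outer(x[nz], L) - V[nz] * (x[nz] ** 2)[:, None]
    V[nz] = V[nz] * (1 - alpha * lam) + alpha * e * grad_V
    return b, w, V
```

The pre-stored vector L is reused by both the prediction and the gradient of the v_is parameters, which is precisely the redundancy-avoidance trick described above.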
Factorization machines can be used for any (massively sparse) classification or regression
task; ratings prediction in recommender systems is only one example of a natural applica-
tion. Although the model is inherently designed for regression, binary classification can be
handled by applying the logistic function to the numerical predictions to derive the probability that y(x) is +1 (rather than −1). The prediction function of Equation 8.22 is modified to a form used in logistic regression:

P[y(x) = 1] = 1 / (1 + exp(−[b + Σ_{i=1}^{p} w_i x_i + Σ_{i=1}^{p} Σ_{j=i+1}^{p} (v_i · v_j) x_i x_j]))    (8.25)
This form is the same as the logistic regression approach discussed in Chapter 4. The main
difference is that we are also using second-order interactions within the prediction function.
A log-likelihood criterion can be optimized to learn the underlying model parameters with
a gradient-descent approach [47, 107, 108].
The description in this section is based on second-order factorization machines that
are popularly used in practice. In third-order polynomial regression, we would have O(p³)
additional regression coefficients of the form wijk , which correspond to interaction terms of
the form xi xj xk . These coefficients would define a massive third-order tensor, which can
be compressed with tensor factorization. Although higher-order factorization machines have
also been developed, they are often impractical because of greater computational complexity
and overfitting. A software library, referred to as libFM [108], provides an excellent set of
factorization machine implementations. The main task in using libFM is an initial feature
engineering effort, and the effectiveness of the model mainly depends on the skill of the
analyst in extracting the correct set of features. Other useful libraries include fastFM [11]
and libMF [144], which have some fast learning methods for factorization machines.
8.10 Summary
Matrix factorization is one of the most fundamental tools in machine learning; it is exploited both for its useful linear algebra properties and for the compression provided by the underlying factors. One of the most fundamental forms of factorization is singular value decomposition, in which the columns of the different factor matrices are mutually orthogonal.
More general forms of the matrix factorization modify the optimization model to allow dif-
ferent types of objective functions, constraints, and data types. Certain types of constraints
like nonnegativity have a regularization effect, and they help in creating more interpretable
matrix factorizations. Methods like logistic matrix factorization, maximum margin factor-
ization, and generalized low-rank models are designed to deal with different data types.
Shared matrix factorization and factorization machines are designed to factorize multiple
matrices. In general, the broader theme of matrix factorization provides a very wide variety
of tools that can be harnessed for various machine learning scenarios.
8.12 Exercises
1. Biased matrix factorization: Consider the factorization of an incomplete n × d
matrix D into an n × k matrix U and a d × k matrix V :
D ≈ UV T
Suppose you add the constraint that all entries of the penultimate column of U and
the final column of V are fixed to 1. Discuss the similarity of this model to that of
the addition of bias to classification models. How is gradient descent modified?
2. In the scenario of Exercise 1, will the Frobenius norm on observed ratings be better
optimized with or without constraints on the final columns of U and V ? Why might
it be desirable to add such a constraint during the estimation of missing entries?
3. Suppose that you have a symmetric n × n matrix D of similarities, which has missing
entries. You decide to recover the missing entries by using the symmetric factorization
D ≈ U U T . Here, U is an n × k matrix, and k is the rank of the factorization.
(a) Write the objective function for the optimization model using the Frobenius norm
and L2 -regularization.
(b) Derive the gradient-descent steps in terms of matrix-centric updates.
(c) Discuss the conditions under which an exact factorization will not exist, irrespec-
tive of how large a value of k is used for the factorization.
4. Derive the gradient-descent updates for L1 -loss matrix factorization in which the objective function is J = ‖D − U V T ‖₁.
7. Show that the k-means formulation in Section 4.10.3 of Chapter 4 is identical to the
formulation of Section 8.2.1. [Hint: Propose a one-to-one mapping of optimization
variables in the two problems. Show that the constraints and the objective functions
are equivalent in the two cases.]
9. Suppose that you use GloVe on a quantity matrix Q = [qij ] in which each count qij is
either 0 or 10000. A sizeable number of counts are 0s. Show that GloVe can discover
a trivial factorization with zero error in which each word has the same embedded
representation.
10. Derive the gradient update equations for using factorization machines in binary clas-
sification with logistic loss and hinge loss.
11. Suppose you want to perform the rank-k factorization D ≈ U V T of the n × d matrix
D using gradient descent. Propose an initialization method for U and V using QR
decomposition of k randomly chosen columns of D.
12. Suppose that you have a sparse non-negative matrix D of size n × d. What can you
say about the dot product of any pair of columns as a consequence of sparsity? Use
this fact along with the intuition derived from the previous exercise to initialize U
using k randomly sampled columns of D for non-negative matrix factorization. In this
case, the initialized matrices U and V need to be non-negative.
13. Nonlinear matrix factorization of positive matrices: Consider a nonlinear
model for matrix factorization of positive matrices D = [xij ], where D = F (U V T ),
and F (x) = x2 is applied in element-wise fashion. The vectors ui and v j represent
the ith and jth rows of U and V , respectively. The loss function is ‖D − F (U V T )‖²_F. Show that the gradient descent steps are as follows:

u_i ⇐ u_i + α Σ_j (u_i · v_j)(x_ij − F (u_i · v_j)) v_j

v_j ⇐ v_j + α Σ_i (u_i · v_j)(x_ij − F (u_i · v_j)) u_i
14. Out-of-sample factor learning: Suppose that you learn the optimal matrix factor-
ization D ≈ U V T of n × d matrix D, where U, V are n × k and d × k matrices, respec-
tively. Now you are given a new out-of-sample t×d data matrix Do with rows collected
using the same methodology as the rows of D (and with the same d attributes). You
are asked to quickly factorize this out-of-sample data matrix into Do ≈ Uo V T with the objective of minimizing ‖Do − Uo V T ‖²_F , where V is fixed to the matrix learned from the earlier in-sample factorization. Show that the problem can be decomposed
into t linear regression problems, and the optimal solution Uo is given by:
UoT = V + DoT
Here, V + is the pseudoinverse of V . Show that the rank-k approximation of Do ≈
Uo V T is given by Do Pv , where Pv = V (V T V )−1 V T is the d × d projection matrix
induced by V . Propose a fast solution approach using QR decomposition of V and
back-substitution with a triangular equation system. How does this problem relate to
the alternating minimization approach?
15. Out-of-sample factor learning: Consider the same scenario as Exercise 14, where
you are trying to learn the out-of-sample factor matrix Uo for in-sample data matrix
D ≈ U V T and out-of-sample data matrix Do . The factor matrix V is fixed from in-
sample learning. Closed-form solutions, such as the one in Exercise 14, are rare in most
matrix factorization settings. Discuss how the gradient-descent updates discussed in
this chapter can be modified so that Uo can be learned directly. Specifically discuss the
case of (i) unconstrained matrix factorization, (ii) nonnegative matrix factorization,
and (iii) logistic matrix factorization.
16. Suppose that you have a user-item ratings matrix with numerical/missing values. Fur-
thermore, users have rated each other’s trustworthiness with binary/missing values.
(a) Show how you can use shared matrix factorization for estimating the rating of a
user on an item that they have not already rated.
(b) Show how you can use factorization machines to achieve similar goals as (a).
17. Propose an algorithm for finding outlier entries in a matrix with the use of matrix
factorization.
18. Suppose that you are given the linkage of a large Website with n pages, in which
each page contains a bag of words drawn from a lexicon of size d. Furthermore, you
are given information on how m users have rated each page on a scale of 1 to 5.
The ratings data is incomplete. Propose a model to create an embedding for each
Webpage by combining all three pieces of information. [Hint: This is a shared matrix
factorization problem.]
19. True or false: A zero error non-negative matrix factorization (NMF) U V T of an
n × d non-negative matrix D always exists, where U is an n × k matrix and V is a
d × k matrix, as long as k is chosen large enough. At what value of k can you get an
exact NMF of the following matrix?
D = [ 1 1 ]
    [ 1 0 ]
20. True or false: Suppose you have the exact non-negative factorization (NMF) U V T
of a matrix D, so that each column of V is constrained to sum to 1. Subject to this
normalization rule, the NMF of D is unique.
21. Discuss why the following algorithm will work in computing the matrix factorization
Dn×d ≈ U V T after initializing Un×k and Vd×k randomly:
25. Suppose that you have a very large and dense matrix D of low rank that you cannot
hold in memory, and you want to factorize it as D ≈ U V T . Propose a method for
factorization that uses only sparse matrix multiplication. [Hint: Read the section on
recommender systems.]
26. Temporal matrix factorization: Consider a sequence of n × d matrices D1 . . . Dt
that are slowly evolving over t time stamps. Show how one can create an optimization
model to infer a single n × k static factor matrix that does not change over time,
and multiple d × k dynamic factor matrices, each of which is time-specific. Derive the
gradient descent steps to find the factor matrices.
Chapter 9
“The worst form of inequality is to try to make unequal things equal.” – Aristotle
9.1 Introduction
A dot-product similarity matrix is an alternative way to represent a multidimensional data
set. In other words, one can convert an n × d data matrix D into an n × n similarity matrix
S = DDT (which contains n² pairwise dot products between points). One can use S instead
of D for machine learning algorithms. The reason is that the similarity matrix contains
almost the same information about the data as the original matrix. This equivalence is the
genesis of a large class of methods in machine learning, referred to as kernel methods. This
chapter builds the linear algebra framework required for understanding this important class
of methods in machine learning. The real utility of such methods arises when the similarity
matrix is chosen differently from the use of dot products (and the data matrix is sometimes
not even available).
This chapter is organized as follows. The next section discusses how similarity matrices
are alternative representations of data matrices. The efficient recovery of data matrices from
similarity matrices is discussed in Section 9.3. The different types of linear algebra operations
on similarity matrices are discussed in Section 9.4. The implementation of machine learning
algorithms with similarity matrices is discussed in Section 9.5. The representer theorem is
discussed in Section 9.6. The choice of similarity matrices that promote linear separation is
discussed in Section 9.7. A summary is given in Section 9.8.
sij = Xi · Xj = Σ_{k=1}^d xik xjk
One can write the above similarity relationship in matrix form as well:
S = DDT
How does one recover the original data set D from the similarity matrix? First, note that
the recovery can never be unique. This is because dot products are invariant to rotation and
reflection. Therefore, rotating a data set about the origin or reflecting it along any axis will
result in the same similarity matrix. For example, consider a d × d matrix P with orthonormal columns, which is essentially a rotation/reflection matrix. Then, the rotated/reflected version of D is as follows:

D′ = DP

Then, the similarity matrix S′ using D′ can be shown to be equal to S as follows:

S′ = D′(D′)ᵀ = (DP)(DP)ᵀ = D(PPᵀ)Dᵀ = DDᵀ = S

In other words, the similarity matrices using D and D′ are the same.
It is noteworthy that both (DP)(DP)ᵀ and DDᵀ represent symmetric factorizations of the similarity matrix S. A symmetric factorization of an n × n matrix S is a factorization of the form S = UUᵀ, where U is an n × k matrix. For exact factorization, the value of k will be equal to the rank of the similarity matrix S. The ith row of U in any symmetric factorization UUᵀ of S yields a valid set of features of the ith data point.
The simplest way to perform symmetric factorization of a similarity matrix is with the
use of eigendecomposition. First, note that if S was indeed created using dot products on the
data matrix D, it is of the form DDT and is therefore positive semidefinite (cf. Lemma 3.3.14
of Chapter 3). Therefore, one can diagonalize it with nonnegative eigenvalues of which at
most min{n, d} are non-zero. To emphasize the nonnegativity of the eigenvalues, we will
represent the diagonal matrix as Σ²:

S = QΣ²Qᵀ = (QΣ)(QΣ)ᵀ

Here, U = QΣ plays the role of the symmetric factor.
Therefore, QΣ is the extracted representation from the similarity matrix S, and it will
contain at most min{n, d} non-zero columns. Specifically, the ith row of QΣ contains the
embedded representation of the ith data point (based on the ordering of rows/columns of
the similarity matrix S). Note that the eigenvectors and eigenvalues of DDT = QΣ2 QT are
(respectively) the left singular vectors and squared singular values of D.
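This extraction can be illustrated in a few lines of Python. The sketch below (our own illustration using NumPy; the variable names are not from the book) builds a dot-product similarity matrix, eigendecomposes it, and verifies that the embedding QΣ reproduces all pairwise dot products:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((6, 3))       # n = 6 points in d = 3 dimensions
S = D @ D.T                           # n x n dot-product similarity matrix

# Eigendecomposition S = Q Sigma^2 Q^T with nonnegative eigenvalues
evals, Q = np.linalg.eigh(S)
evals = np.clip(evals, 0.0, None)     # clip tiny negative round-off values
U = Q * np.sqrt(evals)                # U = Q Sigma; the ith row embeds point i

# The embedding reproduces all pairwise dot products of the original data
assert np.allclose(U @ U.T, S, atol=1e-8)

# At most min(n, d) = 3 columns of U carry non-zero energy
num_nonzero = int(np.sum(np.linalg.norm(U, axis=0) > 1e-6))
```

Note that only min{n, d} = 3 of the n = 6 columns of U are non-zero, which reflects the relationship between the eigendecomposition of DDᵀ and the SVD of D.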
The eigendecomposition of the similarity matrix provides one of an infinite number of
possible embeddings obtained from factorization of the similarity matrix, and it is one of
the most compact ones in terms of the number of non-zero columns. The compactness
can be improved further by dropping smaller eigenvectors. As another example, one can
extract a symmetric Cholesky factorization S = LLᵀ, and use the rows of the matrix L as the engineered representations of the points (see Section 3.3.9), although a small positive value might need to be added to each diagonal entry of S to make it positive definite. Another example is the symmetric square-root matrix, which can also be extracted from the eigendecomposition as S = QΣ²Qᵀ = (QΣQᵀ)(QΣQᵀ)ᵀ = (√S)², where √S = QΣQᵀ. Choosing any particular
embedding among these will not affect the predictions of any machine learning algorithm
that relies on dot products (or Euclidean distances), because they remain the same whether
we use eigendecomposition, Cholesky factorization, or the square-root matrix.
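This equivalence can be checked numerically. The sketch below (our own NumPy illustration, with a tiny ridge added so that the Cholesky factorization succeeds) constructs three different symmetric factors of the same S and confirms that all of them induce identical dot products:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((5, 5))
S = D @ D.T + 1e-9 * np.eye(5)        # tiny ridge so Cholesky succeeds

# Three different symmetric factorizations S = U U^T
evals, Q = np.linalg.eigh(S)
sig = np.sqrt(np.clip(evals, 0.0, None))
U_eig = Q * sig                       # eigendecomposition embedding Q Sigma
U_chol = np.linalg.cholesky(S)        # Cholesky factor L
U_sqrt = (Q * sig) @ Q.T              # symmetric square-root matrix Q Sigma Q^T

# All three embeddings induce exactly the same dot products (and hence the
# same Euclidean distances), so downstream predictions are unaffected
for U in (U_eig, U_chol, U_sqrt):
    assert np.allclose(U @ U.T, S, atol=1e-6)
```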
Problem 9.2.1 (Alternative Embeddings Are Orthogonally Related) Show that if
the rank-k similarity matrix S of size n × n can be expressed as either U1 U1T or U2 U2T with
n × k matrices, then (i) a full SVD of each of U1 and U2 can be constructed so that the
left singular vectors and the singular values are the same in the two cases, and (ii) an
orthogonal matrix P12 can be found so that U2 = U1 P12 .
S′ = S + δI = QΔQᵀ + δI = Q(Δ + δI)Qᵀ

Here, all diagonal entries of Δ + δI are nonnegative, and the embedding can be extracted as Q√(Δ + δI). The modification of the similarity matrix is often not a significant one from the perspective of application-centric interpretability. By doing so, we are only translating the (less important) self-similarity values
to make them sufficiently large, while keeping the pairwise similarity values unchanged.
Intuitively, the self-similarities among points are always larger than those across different
points (on the average), when working with dot products.
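The diagonal translation can be sketched concretely (our own NumPy illustration; the small 2 × 2 matrix is chosen to be indefinite on purpose). The shift δ is taken as the magnitude of the most negative eigenvalue, which is the smallest value that makes S + δI positive semidefinite:

```python
import numpy as np

# An indefinite symmetric "similarity" matrix (not a valid Gram matrix)
S = np.array([[1.0, 2.0],
              [2.0, 1.0]])
evals, Q = np.linalg.eigh(S)            # eigenvalues: -1 and 3

delta = -evals.min()                    # smallest shift making S PSD (here 1.0)
S_shift = S + delta * np.eye(2)         # translate only the self-similarities

# Embedding from the shifted matrix: Q * sqrt(Delta + delta I)
U = Q * np.sqrt(evals + delta)
assert np.allclose(U @ U.T, S_shift, atol=1e-9)

# The off-diagonal (pairwise) similarities are unchanged by the shift
assert np.allclose(S_shift - np.diag(np.diag(S_shift)),
                   S - np.diag(np.diag(S)))
```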
Problem 9.2.2 Let X and Y be two d-dimensional points. Show that the average of the two
dot-product self-similarities X · X and Y · Y is at least as large as the pairwise dot-product
similarity X · Y .
Stated differently, the above problem implies that if we have a 2 × 2 symmetric similarity
matrix in which the sum of diagonal entries is smaller than the sum of off-diagonal entries,
the matrix would not be positive semidefinite.
Problem 9.2.3 Show that the sum of the entries in a similarity matrix S can be expressed
as y T Sy for appropriately chosen column vector y. What can you infer about the sign of the
sum of the values in a similarity matrix that is positive semidefinite?
Problem 9.2.4 Let y be an n-dimensional column vector. Show that the expression y T Sy
represents the squared norm of some vector in the multidimensional space containing the
embedding induced by the n × n similarity matrix S.
An implicit assumption in the discussion so far is that the dimensionality of the feature space is bounded above by the size of the similarity matrix. However,
it is also possible to (implicitly) specify a similarity matrix of infinite size, by defining
similarities between each pair of objects from an infinite domain as a function in closed
form. For example, consider a situation where one wants to engineer new features Φ(x)
from multidimensional vectors x ∈ Rd . In such a case, one can define a similarity function
between multidimensional objects x and y (different from the dot product) by the following simplified Gaussian kernel with unit variance:

s(x, y) = exp(−‖x − y‖²/2)
By providing a closed-form expression, one has effectively defined a similarity matrix be-
tween all pairs of objects x and y in the infinite set Rd . Since the dimensionality of the
eigenvectors increases with similarity matrix size, it is possible¹ for the resulting eigenvectors to also be infinite dimensional. Such spaces of infinite-dimensional vectors are natural
generalizations of finite-dimensional Euclidean spaces, and are referred to as Hilbert spaces.
However, even in these abstract cases of infinite-dimensional representations, it is possible
to represent a specific data set of finite size containing n objects in n-dimensional space—
the key point is that an n-dimensional projection of this infinite-dimensional space always
exists that contains all n objects (and the origin). After all, any set of n vectors defines
an (at most) n-dimensional subspace. The eigendecomposition of the sample matrix of size
n × n discovers precisely this subspace, which is the data-specific feature space. For most
machine learning problems, only the data-specific feature space is needed. We emphasize
this point below:
For a finite similarity matrix of size n × n, one can always extract an (at most)
n-dimensional engineered representation using eigendecomposition of the simi-
larity matrix. This is true even when the dimensionality of the true feature space
induced by a (closed-form) similarity function over an infinite domain of points
is much larger.
The key point is that as long as we do not need to know the representations of points outside
our finite data set of n points, one can restrict the dimensionality of representation to an
n-dimensional subspace (and much lower in many cases). In this chapter, whenever we refer
to kernel feature space, we refer to the data-specific feature space whose dimensionality is
bounded above by the number of points. The eigendecomposition of the similarity matrix
to extract features is also referred to as kernel SVD.
Definition 9.2.2 (Kernel SVD) The embedding QΣ extracted by the eigendecomposition
S = QΣ2 QT of an n × n positive semidefinite similarity matrix S is referred to as kernel
SVD. The n × n matrix QΣ contains the embedding of the ith data point in its ith row
for each i ∈ {1 . . . n}. When S already contains dot products between points from Rd , the
approach specializes to standard SVD.
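Kernel SVD can be illustrated directly (our own NumPy sketch; the function name gaussian_kernel_matrix is an assumption, not the book's). Even though the Gaussian kernel's underlying feature space is infinite dimensional, the n points are fully represented in at most n dimensions:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    # Closed-form similarity: exp(-||x - y||^2 / (2 sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(2)
X = rng.standard_normal((8, 2))
S = gaussian_kernel_matrix(X)

# Kernel SVD: data-specific feature space of dimensionality at most n = 8
evals, Q = np.linalg.eigh(S)
U = Q * np.sqrt(np.clip(evals, 0.0, None))   # rows = embedded points

# The finite embedding exactly reproduces the kernel similarities
assert np.allclose(U @ U.T, S, atol=1e-8)
```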
All kernel methods in machine learning implicitly transform the data using kernel SVD
via a method referred to as the “kernel trick.” However, we will revisit some traditional
applications of kernels like SVMs and show how to implement them using explicit eigende-
composition of the similarity matrix. Although this approach is unusual, it is instructive
and has some advantages over the alternative that avoids this eigendecomposition.
¹For some closed-form functions like the dot product, only d components of the eigenvectors will be non-zero, whereas for others like the Gaussian kernel, the entire set of infinite components will be needed.
• (In-sample embedding): Diagonalize Sin = QΣ²Qᵀ. If there are fewer than p non-zero eigenvectors, then extract all k < p non-zero eigenvectors in the p × k matrix Qk and the k × k diagonal matrix Σk. This step requires O(p² · k) time and O(p²) space. Since p is typically a small constant of the order of a few thousand, this step is extremely fast and space-efficient irrespective of the base number of objects.
• (Out-of-sample embedding): Let Uk denote the (unknown) n × k matrix containing the embeddings of all n points. Rather than treating in-sample and out-of-sample points differently, we will use the properties of the similarity matrix in transformed space to derive all rows in a uniform way. The dot products of the n points in Uk and the in-sample points in QkΣk can be computed as the matrix product of Uk and (QkΣk)ᵀ. This set of n × p dot products is contained in the matrix Sa, because it is assumed that Sa contains the dot products of the embedded representations of all points and the in-sample points. Therefore, we have the following:

Sa ≈ Uk(QkΣk)ᵀ   (9.1)
The approximation is caused by the fact that the embedding of all points might require n-dimensional space, whereas we have restricted ourselves to at most p dimensions defined by the in-sample points. By postmultiplying each side with QkΣk⁻¹ and using QkᵀQk = Ik, we obtain the following:

Uk ≈ Sa Qk Σk⁻¹   (9.2)
It is noteworthy that the p in-sample rows in Uk are the same as the p rows in Qk Σk .
It is interesting to note that we are able to represent n points in at most p dimensions,
whereas the data-specific feature space for the full data might have dimensionality as large as
n. What we have effectively done is to use the fact that a hyperplane defined by p points (and
the origin) in feature space is an at most p-dimensional projection of the n-dimensional data-
specific feature space, and it can be represented in at most k ≤ p coordinates. Therefore,
we first find the exact k-dimensional representation of a subset of p points (where k ≤ p);
then, we project the remaining (n − p) points from n-dimensional feature space to the k-
dimensional subspace in which the p points lie. Therefore, the remaining (n − p) points lose
some accuracy of representation, which is expected in a sampling method. In fact, it is even
possible to drop some of the smaller non-zero eigenvectors from the in-sample embedding
for better efficiency.
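The two steps above can be sketched as follows (our own NumPy illustration; the in-sample size p and the data sizes are arbitrary demo choices). With dot-product similarities and a low-rank data matrix, the Nyström embedding is exact:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, p = 200, 4, 30
D = rng.standard_normal((n, d))          # hidden low-rank data
S_a = D @ D[:p].T                        # n x p similarities to in-sample points

# In-sample embedding: diagonalize the small p x p block S_in
S_in = S_a[:p]                           # equals D[:p] @ D[:p].T
evals, Q = np.linalg.eigh(S_in)
keep = evals > 1e-8                      # the k non-zero eigenvalues (k <= p)
Qk, sig = Q[:, keep], np.sqrt(evals[keep])

# Out-of-sample embedding: U_k ~ S_a Q_k Sigma_k^{-1}
Uk = S_a @ (Qk / sig)

# The p in-sample rows of U_k coincide with Q_k Sigma_k
assert np.allclose(Uk[:p], Qk * sig, atol=1e-6)
# The embedding reproduces the full similarity matrix (exact here, since the
# in-sample points span the row space of D)
assert np.allclose(Uk @ Uk.T, D @ D.T, atol=1e-5)
```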
One can assume that the matrix S is symmetric, and therefore the observed set of similarities
O can be grouped into symmetric pairs of entries satisfying sij = sji . It is desired to learn
an n × k embedding U for user-specified rank k, so that for any observed entry (i, j) the
dot product of the ith row of U and the jth row of U is as close as possible to the (i, j)th
entry sij of S. In other words, the value of ‖S − UUᵀ‖²F, restricted to the observed entries, should be as small as possible. This problem can be formulated only over the “observed” entries in O as follows:
Minimize J = (1/2) Σ_{(i,j)∈O} ( sij − Σ_{p=1}^k uip ujp )² + (λ/2) Σ_{i=1}^n Σ_{p=1}^k uip²
Therefore, we have changed the optimization model of Section 9.2.4, so that it is formulated
only over a tiny subset of entries in S. Furthermore, regularization becomes particularly
important in these cases as the subset of entries to be used is small. This problem is similar
to the determination of factors in recommendation problems, and is a natural candidate for
gradient-descent methods.
The main difference is that the factorization is symmetric.
Let eij = sij − Σ_{p=1}^k uip ujp be the error of any entry (i, j) from set O at a particular
value of the parameter matrix U . On computing the partial derivative of J with respect to
uim , one obtains the following:
∂J/∂uim = Σ_{j:(i,j)∈O} ( sij + sji − 2 Σ_{p=1}^k uip ujp )(−ujm) + λuim   ∀i ∈ {1 . . . n}, m ∈ {1 . . . k}
        = Σ_{j:(i,j)∈O} (eij + eji)(−ujm) + λuim   ∀i ∈ {1 . . . n}, m ∈ {1 . . . k}
        = −2 Σ_{j:(i,j)∈O} eij ujm + λuim   ∀i ∈ {1 . . . n}, m ∈ {1 . . . k}
Note that sij and sji are either both present or both absent from the observed entries because of the symmetry assumption. It is possible to express these partial derivatives in matrix form. Let E = [eij] be an error matrix, in which the (i, j)th entry is set to the error for any observed entry (i, j) in O, and 0, otherwise. When a small number of entries are observed, this matrix is a sparse matrix. It is not difficult to see that the entire n × k matrix of partial derivatives [∂J/∂uim] is given by −2EU + λU. This suggests that one should randomly initialize the matrix U of parameters, and use the following gradient-descent steps:

U ⇐ U(1 − αλ) + 2αEU
Here, α > 0 is the step size, and the updates are repeated until convergence. Note that the error matrix E is sparse, and therefore it makes sense to compute only those entries that are present in O before converting E to a sparse data structure.
To determine the optimal rank k of the factorization, one can hold out a small subset
O1 ⊂ O of the observed entries, which are not used for learning U . These entries are used to
test the squared error Σ_{(i,j)∈O1} eij² of the matrix U learned using various values of k. The
value of k at which the error of the held out entries is minimized is used. Furthermore, one
can also use the held-out entries to determine the stopping criterion for the gradient-descent
approach. The gradient-descent procedure is terminated when the error on the held-out
entries begins to rise. The recovered matrix U provides a k-dimensional embedding of the
data, which can be used in conjunction with machine learning algorithms.
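The batch gradient-descent procedure can be sketched as follows (our own NumPy illustration; the step size, regularization weight, and fraction of observed entries are arbitrary demo choices, and the dense mask stands in for a sparse data structure):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 3
U_true = rng.standard_normal((n, k))
S = U_true @ U_true.T                  # ground-truth similarity matrix

# A random symmetric subset O of observed entries (mask[i, j] == mask[j, i])
mask = rng.random((n, n)) < 0.3
mask = mask | mask.T

# Gradient descent: U <= U (1 - alpha * lam) + 2 alpha E U, where E holds the
# errors on observed entries and 0 elsewhere (sparse in practice)
alpha, lam = 0.003, 0.01
U = 0.1 * rng.standard_normal((n, k))
for _ in range(5000):
    E = mask * (S - U @ U.T)
    U = U * (1 - alpha * lam) + 2 * alpha * (E @ U)

# Root-mean-squared error over the observed entries
err = float(np.sqrt(np.mean((mask * (S - U @ U.T)) ** 2)))
```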
The use of a fixed set of pre-computed entries in O allows the leveraging of gradient-
descent methods. On the other hand, if we use stochastic gradient descent, we can simply
sample any position in S and compute the similarity value on the fly. This type of approach
does have the advantage that one does not have to cycle through the same set of entries in
O. Presumably, the number of entries in the similarity matrix is so large that even when one
samples as many entries as possible (with replacement) for stochastic gradient descent, most
entries would not be visited more than once (or at all). Therefore, the stochastic gradient
descent step boils down to the following step, which is executed repeatedly:
Randomly sample an index pair (i, j) and compute the similarity value sij;
Compute the error eij = sij − Σ_{p=1}^k uip ujp;
Update u⁺im ⇐ uim(1 − αλ) + 2α · eij · ujm for all m ∈ {1 . . . k};
Update u⁺jm ⇐ ujm(1 − αλ) + 2α · eij · uim for all m ∈ {1 . . . k};
Update uim ⇐ u⁺im and ujm ⇐ u⁺jm for all m ∈ {1 . . . k};
The similarity values are computed on the fly, as entries are sampled. The algorithm can
be used even for similarity matrices that are not positive semidefinite. The diagonal entries
will be learned automatically to create the closest positive semidefinite approximation.
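The repeatedly executed step can be sketched as follows (our own NumPy illustration; the hidden matrix Z that generates similarities on the fly is a stand-in for an arbitrary similarity function, and the temporary copies play the role of the u⁺ variables):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 20, 2
Z = rng.standard_normal((n, k))        # hidden data generating the similarities

def similarity(i, j):
    # computed on the fly when an entry is sampled; a dot product stands in
    # here for any similarity function
    return float(Z[i] @ Z[j])

alpha, lam = 0.01, 0.001
U = 0.1 * rng.standard_normal((n, k))
for _ in range(150000):
    i, j = rng.integers(0, n, size=2)
    s_ij = similarity(i, j)
    e_ij = s_ij - float(U[i] @ U[j])
    ui, uj = U[i].copy(), U[j].copy()  # simultaneous update via u+ copies
    U[i] = ui * (1 - alpha * lam) + 2 * alpha * e_ij * uj
    U[j] = uj * (1 - alpha * lam) + 2 * alpha * e_ij * ui

rmse = float(np.sqrt(np.mean((Z @ Z.T - U @ U.T) ** 2)))
```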
Problem 9.3.1 Let S be an n × n symmetric matrix that is not positive semidefinite. It has r ≪ n negative eigenvalues of sizes λ1 . . . λr. Show that the objective function J = ‖S − UUᵀ‖²F is always at least Σ_{p=1}^r λp², irrespective of the value of k in the n × k matrix U. What is the minimum value of k at which this error is guaranteed?
U ⇐ U + αEV
V ⇐ V + αEᵀU

Here, ΔU = αEV and ΔV = αEᵀU are the respective update increments.
How does one use the decomposition components to create the embedding? There are several
choices; for example, one can use only V to create the embedding. However, one can also
concatenate the ith row of U and the ith row of V to create a 2k-dimensional embedding of
the ith object. Using both the matrices U and V recognizes the fact that rows and columns
capture different aspects of the similarity between objects. For example, in an asymmetric
S ≈ Qk Σk Pkᵀ   (9.5)

S = U ΔU⁻¹   (9.6)
The columns of U contain the eigenvectors, and they are not necessarily orthonormal. In such
a case, one can extract the top-k columns of U (corresponding to the largest eigenvalues) to
create a k-dimensional embedding. Asymmetric decompositions are particularly useful for
asymmetric similarity matrices that arise in a number of real-world applications:
1. In a social network, one user might follow another (or like another user), but the “simi-
larity” relationship might not be reciprocated. Similarly, hyperlinks between Webpages
can be viewed as directed indicators of similarity.
2. Even an undirected graph might have an asymmetric similarity network, if the edge
weights are normalized in an asymmetric way. As we will see in Chapter 10, an adjacency matrix of an undirected graph can be converted into a stochastic transition matrix by normalizing each row to sum to 1, and the right eigenvectors of this transition matrix provide an embedding, referred to as the Shi-Malik embedding [115].
On the other hand, the symmetric decomposition of a symmetric normalization of the
same adjacency matrix leads to a related embedding known as the Ng-Jordan-Weiss
embedding [98]. Both embeddings are different forms of spectral decomposition, which
are used for applications like spectral clustering (see Section 10.5.1 of Chapter 10).
multidimensional embedding Φ(oi ), so that the (i, j)th entry sij in matrix S is defined by
the following dot product:
sij = Φ(oi ) · Φ(oj )
With these notations, we will define the basic operations between two points:
In other words, the total energy in the data set is equal to the sum of the diagonal entries
of the similarity matrix!
The norm can be used to normalize a similarity matrix, so that all engineered points
Φn(oi) lie on a unit ball. Note that dot products become cosine similarities for unit-normalized points. Unlike dot products, cosine similarities are invariant to normalization. Consider
the case where we have an unnormalized similarity matrix S = [sij ] corresponding to en-
gineered representation Φ(·), and we want to normalize these points to Φn (·) on the unit
ball.
Φn(oi) · Φn(oj) = cosine[Φ(oi), Φ(oj)] = Φ(oi) · Φ(oj) / (‖Φ(oi)‖ ‖Φ(oj)‖) = sij / (√sii √sjj)
Each entry sij is replaced with the above normalized value. Note that a normalized similarity
matrix will contain only 1s along the diagonal. This is because one has effectively normalized
the data-specific kernel features to lie on a unit ball in Rn .
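The normalization can be sketched in a few lines (our own NumPy illustration). Each entry is divided by the square roots of the corresponding diagonal entries, after which the diagonal becomes 1 and all entries are cosines:

```python
import numpy as np

rng = np.random.default_rng(6)
D = rng.standard_normal((5, 3))
S = D @ D.T

# Replace each entry by s_ij / (sqrt(s_ii) * sqrt(s_jj))
root_diag = np.sqrt(np.diag(S))
S_norm = S / np.outer(root_diag, root_diag)

# A normalized similarity matrix contains only 1s along the diagonal,
# and its entries are cosines (bounded in magnitude by 1)
assert np.allclose(np.diag(S_norm), 1.0)
assert np.all(np.abs(S_norm) <= 1 + 1e-9)
```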
In other words, the squared norm of the mean is equal to the average value of the entries in the
similarity matrix. This value is always nonnegative according to Problem 9.2.3.
The total variance σ 2 (S) of the data set (over all dimensions) in the embedded space
(induced by similarity matrix S) is obtained by subtracting the squared norm of the mean
from the normalized energy (i.e., energy averaged over number of dimensions):
σ²(S) = Energy(S)/n − ‖μ‖² = Σ_{i=1}^n sii/n − Σ_{i=1}^n Σ_{j=1}^n sij/n²
Note that the variance is the difference between the average diagonal entry and average
matrix entry.
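This identity is easy to verify numerically (our own NumPy illustration): the variance computed purely from the similarity matrix matches the total variance computed directly from the data:

```python
import numpy as np

rng = np.random.default_rng(7)
D = rng.standard_normal((50, 4))
S = D @ D.T
n = S.shape[0]

# sigma^2(S) = (average diagonal entry) - (average matrix entry)
var_from_S = np.trace(S) / n - S.sum() / n**2

# Direct check: total variance (over all dimensions) of the data itself
var_direct = ((D - D.mean(axis=0)) ** 2).sum() / n

assert np.isclose(var_from_S, var_direct)
```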
Problem 9.4.1 The variance of a data set containing n points can be shown to be proportional to the sum of squared pairwise distances between points (over all n(n − 1)/2 pairs). Use this result to show that the variance σ²(S) in the data induced by similarity matrix S can be expressed in the following form for appropriately chosen n-dimensional vectors yr for r ∈ {1, 2, . . . , n(n − 1)/2}:

σ²(S) ∝ Σ_{r=1}^{n(n−1)/2} yrᵀ S yr
The above problem also makes it evident why the variance will be nonnegative, given that
the similarity matrix S is positive semidefinite.
Data matrices are often centered in machine learning as a preprocessing step to various
tasks. A specific example is kernel PCA. It is also possible to recognize when a similarity
matrix is mean-centered as follows:
Observation 9.4.1 (Recognizing Mean-Centered Similarities) The sum of the ith
row (or column) of a similarity matrix is the dot product between the embedding of the ith
point and the sum of all vectors in the embedding. Therefore, all rows and columns of a
mean-centered similarity matrix sum to 0.
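Mean-centering a similarity matrix can be performed directly with the centering matrix (I − M/n), where M is the n × n matrix of 1s (as in the derivation later in this section). The sketch below (our own NumPy illustration) verifies Observation 9.4.1:

```python
import numpy as np

rng = np.random.default_rng(8)
D = rng.standard_normal((6, 3))
S = D @ D.T
n = S.shape[0]

# Mean-center in embedded space: S_c = (I - M/n) S (I - M/n)
C = np.eye(n) - np.ones((n, n)) / n
S_c = C @ S @ C

# All rows and columns of a mean-centered similarity matrix sum to 0
assert np.allclose(S_c.sum(axis=0), 0.0, atol=1e-9)
assert np.allclose(S_c.sum(axis=1), 0.0, atol=1e-9)

# S_c equals the Gram matrix of the mean-centered data
Dc = D - D.mean(axis=0)
assert np.allclose(S_c, Dc @ Dc.T)
```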
Figure 9.1: (a) Points A and C seem close in the original 3-dimensional data; (b) points A and C are actually far away in the ISOMAP embedding.
One can left-multiply and right-multiply both sides of Equation 9.8 with (I − M/n) to
obtain the following:
! " ! " ! " ! "
M M M T M
I− Δ I− = I− [1n z T + z 1n − 2S] I −
n n n n
0 1 T 0 1
Note that M 1 = n1. As a result, it is easy to show that I − M
n 1n = 0 and 1n I − M
n =
T
0 . We can use these results to simplify the above equation as follows:
! " ! " ! " ! "
M M M M
I− Δ I− = −2 I − [S] I −
n n n n
= −2S, [Using Equation 9.10]
correspond to distances (and similarities) along a curved manifold rather than straight line
distances. It can be argued that geodesic distances are more accurate representations of true
distances as compared to straight-line distances in real applications. Such distances can be
computed by using an approach that is derived from a non-linear dimensionality reduction
and embedding method, known as ISOMAP. The approach consists of two steps:
1. Compute the k-nearest neighbors of each point. Construct a weighted graph G with
nodes representing data points, and edge weights (costs) representing distances of
these k-nearest neighbors.
2. For any pair of points X̄ and Ȳ, report Dist(X̄, Ȳ) as the cost of the shortest path between the corresponding nodes in G. Any shortest-path algorithm, such as the Dijkstra algorithm, can be used [8].
Subsequently, a squared distance matrix is constructed. This distance matrix is converted
into a similarity matrix using the approach of this section. Subsequently, the eigenvectors
of this matrix are used to create the ISOMAP embedding. A 3-dimensional example is
illustrated in Figure 9.1(a), in which the data is arranged along a spiral. In this figure, data points A and C seem much closer to each other than either is to data point B. However, in the ISOMAP embedding of Figure 9.1(b), the data point B is much closer to each of A and C than A and C are to each other. This example shows how ISOMAP yields a drastically different view of similarities and distances, as compared to the pure use of Euclidean distances.
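The two steps, followed by the double-centering conversion of squared distances into similarities and an eigendecomposition, can be combined into a minimal end-to-end sketch (our own illustrative implementation; the function name isomap and the helix test data are assumptions, not the book's):

```python
import heapq
import numpy as np

def isomap(D, n_neighbors=10, out_dim=2):
    # 1) k-nearest-neighbor graph, 2) geodesic = shortest path (Dijkstra),
    # 3) similarities via double centering, 4) eigendecomposition embedding
    n = D.shape[0]
    dist = np.sqrt(((D[:, None, :] - D[None, :, :]) ** 2).sum(-1))
    graph = [[] for _ in range(n)]
    for i in range(n):
        for j in np.argsort(dist[i])[1:n_neighbors + 1]:
            graph[i].append((j, dist[i, j]))
            graph[j].append((i, dist[i, j]))      # keep the graph symmetric
    G = np.full((n, n), np.inf)                   # geodesic distance matrix
    for s in range(n):
        G[s, s] = 0.0
        heap = [(0.0, s)]
        while heap:
            d0, u = heapq.heappop(heap)
            if d0 > G[s, u]:
                continue
            for v, w in graph[u]:
                if d0 + w < G[s, v]:
                    G[s, v] = d0 + w
                    heapq.heappush(heap, (d0 + w, v))
    # classical double centering: S = -(1/2) C G^2 C with C = I - ones/n
    C = np.eye(n) - np.ones((n, n)) / n
    S = -0.5 * C @ (G ** 2) @ C
    evals, Q = np.linalg.eigh(S)
    idx = np.argsort(evals)[::-1][:out_dim]       # largest eigenvalues
    return Q[:, idx] * np.sqrt(np.clip(evals[idx], 0.0, None))

# A helix in 3 dimensions: its geodesic structure is essentially 1-dimensional
t = np.linspace(0, 3 * np.pi, 80)
X = np.stack([np.cos(t), np.sin(t), t / 3], axis=1)
Y = isomap(X, n_neighbors=6, out_dim=2)
```

In the resulting embedding, the first coordinate carries almost all of the variance, confirming that the curved manifold is unrolled into a nearly straight line.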
The overall pipeline is as follows: similarity matrix → eigendecompose → kernel features → scale or drop features → enhanced features → apply any model (flexible options) → final result.
Since lower-order eigenvectors are often dropped anyway, it is possible to use any sampling method that preserves information only about the dominant eigenvectors. A specific example is Nyström sampling, which subsamples a set of s objects in order to create an
s-dimensional representation. Typically, the value of s is independent of the data set size,
although it depends on the complexity of the underlying data distribution (e.g., number of
clusters). Then, the approach proceeds as follows:
Draw a subsample of s objects from the data set;
Use the Nyström method (cf. Section 9.3.1) to create an s-dimensional
representation of all objects denoted by the n × s matrix Us ;
Apply any existing clustering algorithm on Us ;
It is also possible to use stochastic gradient descent (cf. Section 9.3.2) to extract the embedding matrix Us. Furthermore, sampling-based methods can often be repeated to create multiple models. The averaged model from these multiple models is referred to as an ensemble, and it often provides superior results.
Diagonalize S = QΣ2 QT ;
Extract the n-dimensional embeddings in rows of QΣ;
Drop any zero columns from QΣ to create Q0 Σ0 ;
Report the outlier score of each row of Q0 as the Euclidean distance of
that row from the mean computed over all rows of Q0 ;
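The pseudocode above can be sketched directly (our own NumPy illustration with a planted outlier; following the pseudocode, scoring is performed on the rows of Q0, not Q0Σ0):

```python
import numpy as np

rng = np.random.default_rng(9)
D = np.vstack([rng.standard_normal((40, 2)),
               [[8.0, 8.0]]])               # the last point is a planted outlier
S = D @ D.T

# Diagonalize S = Q Sigma^2 Q^T and drop the zero columns to obtain Q0
evals, Q = np.linalg.eigh(S)
Q0 = Q[:, evals > 1e-6]

# Outlier score: Euclidean distance of each row of Q0 from the mean row
scores = np.linalg.norm(Q0 - Q0.mean(axis=0), axis=1)
```

With this seed, the planted outlier receives the largest score.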
Problem 9.5.1 Write the pseudocode for kernel outlier detection with Nyström sampling.
Utest = St Q0 Σ0⁻¹   (9.12)
The matrix Utest contains the engineered representations of the test objects. Therefore, we
present the algorithm for kernel SVMs as follows:
Diagonalize S = QΣ2 QT ;
Extract the n-dimensional embedding in rows of QΣ;
Drop any zero eigenvectors from QΣ to create Q0 Σ0 ;
{ The n rows of Q0 Σ0 and their class labels constitute training data }
Apply linear SVM on Q0 Σ0 and class labels to learn model M;
Convert test-train similarity matrix St to representation matrix Utest using Equation 9.12;
Apply M on each row of Utest to yield predictions;
The above implementation is identical to the kernel SVM that is implemented using the
kernel trick (cf. Section 9.5.2.1). One can substitute the SVM with any learning algorithm like
logistic regression or least-squares classification, which is one of the advantages of explicit
feature engineering. One can also use this approach in conjunction with Nyström sampling
in order to improve efficiency.
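The explicit feature-engineering approach can be sketched end to end (our own NumPy illustration; as the text notes, the SVM can be substituted with another linear model, and here regularized least-squares classification is used for brevity; the function name gaussian_kernel and the ring-shaped demo data are assumptions):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(10)
# Two concentric rings: not linearly separable in the input space
t = rng.uniform(0, 2 * np.pi, 200)
r = np.where(np.arange(200) < 100, 1.0, 3.0)
X = np.stack([r * np.cos(t), r * np.sin(t)], 1) \
    + 0.05 * rng.standard_normal((200, 2))
y = np.where(r == 1.0, 1.0, -1.0)

# Explicit kernel feature engineering: S = Q Sigma^2 Q^T, keep Q0 Sigma0
S = gaussian_kernel(X, X)
evals, Q = np.linalg.eigh(S)
keep = evals > 1e-8
Q0, sig0 = Q[:, keep], np.sqrt(evals[keep])
U = Q0 * sig0

# Any linear model applies; here regularized least-squares classification
lam = 1e-3
w = np.linalg.solve(U.T @ U + lam * np.eye(U.shape[1]), U.T @ y)
train_acc = float(np.mean(np.sign(U @ w) == y))

# Test objects via Equation 9.12: U_test = S_t Q0 Sigma0^{-1}
X_test = np.array([[0.0, 1.0], [3.0, 0.0]])   # one point on each ring
U_test = gaussian_kernel(X_test, X) @ (Q0 / sig0)
pred = np.sign(U_test @ w)
```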
Problem 9.5.2 Show how the kernel SVM approach discussed in this section can be effi-
ciently implemented with Nyström sampling.
algorithm determines the centroids of the clusters as the representatives of the next iteration.
The kernel k-means algorithm computes the dot product of each point to the various cluster
centroids in transformed space and re-assigns each point to its closest centroid in the next
iteration. How can one compute the dot product between an embedded object Φ(oi) and the centroid μj of cluster Cj (in transformed space)? This can be achieved as follows:

Φ(oi) · μj = Φ(oi) · ( Σ_{q∈Cj} Φ(oq) ) / |Cj| = ( Σ_{q∈Cj} Φ(oi) · Φ(oq) ) / |Cj| = ( Σ_{q∈Cj} siq ) / |Cj|
Therefore, for any given object oi , we only need to compute its average kernel similarity to
all points in that cluster. Instead of the centroids, the approach does require the explicit
maintenance of assignments of each point to various clusters in order to recompute the
assignments for the next iteration. As in all k-means algorithms, the approach is iterated
to convergence. For a data set containing n points, the approach requires O(n2 ) time in
each iteration of the k-means algorithm, which can be quite costly for large data sets. The
approach also requires the computation of the entire kernel matrix, which might require
O(n2 ) storage. However, if the similarity function can be computed efficiently, then one
does not need to store the kernel matrix a priori, but simply recompute individual entries
on the fly when they are needed.
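The iterative procedure can be sketched as follows (our own NumPy illustration; the initialization by randomly chosen seed objects is our choice, and the distance to a centroid also includes the centroid's self-similarity term, which is needed for correct re-assignment):

```python
import numpy as np

def kernel_kmeans(S, k, n_iter=50, seed=0):
    # Kernel k-means using only the similarity matrix S; assignments (not
    # explicit centroids) are maintained.  For point i and cluster C_j:
    #   ||Phi(o_i) - mu_j||^2 = s_ii - 2 avg_{q in C_j} s_iq
    #                              + avg_{p,q in C_j} s_pq
    n = S.shape[0]
    rng = np.random.default_rng(seed)
    seeds = rng.choice(n, size=k, replace=False)
    # initial assignment: nearest seed object in transformed space
    d_seed = np.diag(S)[:, None] - 2 * S[:, seeds] + np.diag(S)[seeds][None, :]
    assign = d_seed.argmin(axis=1)
    for _ in range(n_iter):
        d2 = np.full((n, k), np.inf)
        for j in range(k):
            members = np.flatnonzero(assign == j)
            if len(members) == 0:
                continue
            dot_i_mu = S[:, members].mean(axis=1)           # Phi(o_i) . mu_j
            mu_norm2 = S[np.ix_(members, members)].mean()   # ||mu_j||^2
            d2[:, j] = np.diag(S) - 2 * dot_i_mu + mu_norm2
        new_assign = d2.argmin(axis=1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign

# Two well-separated blobs, clustered using only S = D D^T
rng = np.random.default_rng(11)
D = np.vstack([rng.standard_normal((30, 2)) + [6.0, 0.0],
               rng.standard_normal((30, 2)) - [6.0, 0.0]])
labels = kernel_kmeans(D @ D.T, k=2)
```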
This algorithm is identical to the approach discussed in Section 9.5.1.1, when the em-
bedding Q0 Σ0 is used in Section 9.5.1.1 and the k-means algorithm is used in the final step.
However, a disadvantage of the kernel trick is that it can be paired with only a restricted
subset of clustering algorithms (e.g., k-means) that use similarity functions between points.
Not all clustering algorithms are equally friendly to the use of the kernel trick. Furthermore,
one can perform no further engineering or normalization of the extracted features, if they
are being used only indirectly via the kernel trick.
Minimize LD = (1/2) Σ_{i=1}^n Σ_{j=1}^n αi αj yi yj sij − Σ_{i=1}^n αi

subject to:

0 ≤ αi ≤ C   ∀i ∈ {1 . . . n}
The notations of this problem are the same as those in Section 6.4.4.1. Each αi is the ith
dual variable. The quantity C is the slack penalty. The only difference from the objective
function of Section 6.4.4.1 is that we have replaced the training point X i with an engineered
point Φ(oi ). However, the dot product between Φ(oi ) and Φ(oj ) is simply sij , and therefore
the engineered points can be made to disappear from the formulation and be replaced by
similarities. The partial derivative of LD with respect to αk is as follows:
∂LD/∂αk = yk Σ_{q=1}^n yq αq skq − 1   ∀k ∈ {1 . . . n}   (9.13)
This is a convex optimization problem with box constraints. Therefore, one starts by setting
the vector of Lagrangian parameters α = [α1 . . . αn ] to an n-dimensional vector of 0s and
uses the following update steps with learning rate η:
repeat
   Update αk ⇐ αk + η( 1 − yk Σ_{q=1}^n yq αq skq ) for each k ∈ {1 . . . n};
      { This update is equivalent to α ⇐ α − η ∂LD/∂α }
   for each k ∈ {1 . . . n} do begin
      αk ⇐ min{αk, C};
      αk ⇐ max{αk, 0};
   endfor;
until convergence
After the variables α1 . . . αn have been learned, the test objects are predicted using their similarities with the training objects. This is because the classification of an unseen test instance Z̄ is given by the sign of W · Φ(Z̄), where our analysis in Chapter 6 shows that W = Σ_{j=1}^n αj yj Φ(X̄j). Here, Φ(X̄j) is the engineered representation of the jth training point. Therefore, the prediction of Z̄ is given by the sign of Σ_{j=1}^n αj yj Φ(X̄j) · Φ(Z̄), which is simply the weighted sum of the similarities of the training instances with the test instance, where the weight of the jth similarity is yj αj.
One can also express this result in terms of training-test similarity matrices. Let St
be the t × n similarity matrix of training-test similarities between test objects and training
objects. Let γ be an n-dimensional column vector in which the jth component is yj αj . Then,
the analysis of the previous paragraph shows that the prediction of the t test instances is
given by the sign of each element in the t-dimensional vector St γ.
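The projected-gradient loop and the prediction rule sign(St γ) can be sketched as follows (our own NumPy illustration; the data, step size, and slack penalty are arbitrary demo choices, and a bias-free variant is used for simplicity, with plain dot products playing the role of the similarities):

```python
import numpy as np

rng = np.random.default_rng(12)
# Linearly separable 2-d data; S contains plain dot products
X = np.vstack([rng.standard_normal((40, 2)) + [2.5, 2.5],
               rng.standard_normal((40, 2)) - [2.5, 2.5]])
y = np.concatenate([np.ones(40), -np.ones(40)])
S = X @ X.T

# Projected gradient steps on the dual with box constraints 0 <= alpha_i <= C
eta, C_slack, n = 0.001, 1.0, len(y)
alpha = np.zeros(n)
for _ in range(2000):
    # components 1 - y_k sum_q y_q alpha_q s_kq (negative dual gradient)
    step = 1 - y * (S @ (alpha * y))
    alpha = np.clip(alpha + eta * step, 0.0, C_slack)

# Prediction of test instances: sign of S_t gamma with gamma_j = y_j alpha_j
gamma = alpha * y
X_test = np.array([[2.0, 2.0], [-2.0, -3.0]])
pred = np.sign(X_test @ X.T @ gamma)
train_acc = float(np.mean(np.sign(S @ gamma) == y))
```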
The optimization model discussed in this section is identical to the one discussed in
Section 9.5.1.3 (although the computational procedures are very different in the two cases).
However, the approach in Section 9.5.1.3 is more flexible, because it cleanly decouples feature
engineering from model building. Therefore, it can be used more easily with any off-the-shelf
classification model or computational procedure.
1. The parameters of the optimization problem can be expressed as one or more vectors in the same multidimensional space as the individual data points. For example, the weight vector W in problems like the SVM lies in the same multidimensional space as the data points.

2. The objective function can be expressed in terms of (i) dot products between pairs of data points, (ii) dot products between the parameter vectors and points, and (iii) dot products between the parameter vectors themselves (e.g., L2-regularizer).
Under these circumstances, a representer theorem can be used to transform any machine
learning problem on multidimensional vectors into a formulation that uses only similarities
between points. Interestingly, all the linear classification models we have seen so far satisfy
this property. In other words, the optimization formulation can be converted into one that
uses only similarities between objects.
Consider the L2 -regularized form of all linear models discussed in Chapter 4 over the
training pairs (X̄1, y1) . . . (X̄n, yn), where each X̄i is a row vector. Furthermore, the prediction of yi is computed as ŷi = f(W · X̄iᵀ) for some function f(·) that depends on the nature of the target variable (e.g., numeric, binary, or categorical). The loss function can be written as L(yi, W · X̄iᵀ) in each case, because the objective function compares W · X̄iᵀ to yi in each case to decide the loss. The overall objective function, including the regularizer, may be written as follows:

Minimize J = Σ_{i=1}^n L(yi, W · X̄iᵀ) + (λ/2) ‖W‖²   (9.14)
Consider a situation in which the training data points have dimensionality d, but all of them
lie on a 2-dimensional plane. Note that the optimal linear separation of points on this plane
can always be achieved with the use of a 1-dimensional line on this 2-dimensional plane.
Furthermore, this separator is more concise than any higher dimensional separator and will
therefore be preferred by the L2 -regularizer. A 1-dimensional separator of training points
lying on a 2-dimensional plane is shown in Figure 9.3(a). Although it is also possible to get
the same separation of training points using any 2-dimensional plane (e.g., Figure 9.3(b))
passing through the 1-dimensional separator of Figure 9.3(a), such a separator would not be
preferred by an L2 -regularizer because of its lack of conciseness. In other words, given a set of
training data points (row vectors from a data matrix) denoted by X 1 . . . X n , the separator
W , defined as a column vector, always lies in the space spanned by these vectors (after
converting them to column vectors). We state this result below, which is a very simplified
version of the representer theorem, and is specific to linear models with L2 -regularizers.
Theorem 9.6.1 (Simplified Representer Theorem) Let J be any optimization problem
of the following form:
\[ \text{Minimize } J = \sum_{i=1}^{n} L(y_i, W^T \cdot X_i) + \frac{\lambda}{2} \|W\|^2 \]
Then, any optimum solution W^* to the aforementioned problem lies in the subspace spanned
by the training points X_1^T ... X_n^T. In other words, there must exist real values \beta_1 ... \beta_n such
that the following is true:
\[ W^* = \sum_{i=1}^{n} \beta_i X_i^T \]
Proof: Suppose that W^* cannot be expressed in the subspace spanned by the training
points. Then, let us decompose W^* into the portion W = \sum_{i=1}^{n} \beta_i X_i^T spanned by the
training points and an additional orthogonal residual W_\perp. In other words, we have:
\[ W^* = W + W_\perp \qquad (9.15) \]
9.6. THE LINEAR ALGEBRA OF THE REPRESENTER THEOREM 401
[Figure 9.3: (a) a linear separator on the same 2-D plane as the points and the origin
(separator in the linear span of the training points); (b) an identical separation whose
separator is not in the linear span of the training points, rejected by the regularizer
for lack of conciseness]
Figure 9.3: Both the linear separators in (a) and (b) provide exactly the same separation
of training points, except that the one in (a) can be expressed as a linear combination of
the training points. The separator in (b) will always be rejected by the regularizer for its
lack of conciseness. The key point of the representer theorem is that a separator W with
an identical separation can always be found in the plane (subspace) of the training points.
Then, it suffices to show that W^* can be optimal only when W_\perp is the zero vector.
Each dot product (W_\perp \cdot X_i) has to be 0, because W_\perp is orthogonal to the subspace spanned by the
various training points. The optimal objective J^* can be written as follows:
\[ J^* = \sum_{i=1}^{n} L(y_i, {W^*}^T \cdot X_i) + \frac{\lambda}{2} \|W^*\|^2 = \sum_{i=1}^{n} L(y_i, (W + W_\perp)^T \cdot X_i) + \frac{\lambda}{2} \|W + W_\perp\|^2 \]
\[ = \sum_{i=1}^{n} L(y_i, W^T \cdot X_i + \underbrace{W_\perp^T \cdot X_i}_{0}) + \frac{\lambda}{2} \|W\|^2 + \frac{\lambda}{2} \|W_\perp\|^2 \]
\[ = \sum_{i=1}^{n} L(y_i, W^T \cdot X_i) + \frac{\lambda}{2} \|W\|^2 + \frac{\lambda}{2} \|W_\perp\|^2 \]
Here, \|W + W_\perp\|^2 = \|W\|^2 + \|W_\perp\|^2 because the cross-term vanishes by orthogonality.
It is noteworthy that \|W_\perp\|^2 must be 0, or else W will be a better solution than W^*.
Therefore, W^* = W lies in the subspace spanned by the training points.
Intuitively, the representer theorem states that for a particular family of loss functions, one
can always find an optimal linear separator within the subspace spanned by the training
points (see Figure 9.3), and the regularizer ensures that this is the concise way to do it.
After all, even though the embedding of an object might be infinite-dimensional, each data
object only lies in an n-dimensional projection of this space for a data set of size n. This
n-dimensional projection is defined by the span of the n vectors X_1^T ... X_n^T. The parameter
vector W also lies in this n-dimensional subspace; this is the essence of the
representer theorem.
The representer theorem provides a boilerplate method to create an optimization model
that is expressed as a function of dot products:
For any given optimization model of the form of Equation 9.14, plug in W =
\sum_{i=1}^{n} \beta_i X_i^T to obtain a new optimization problem parameterized by \beta_1 ... \beta_n,
and expressed only in terms of dot products between training points. Furthermore,
the same approach is also used while evaluating W^T \cdot Z for a test instance Z.
Consider what happens when one evaluates W^T \cdot X_i in order to plug it into the loss function:
\[ W^T \cdot X_i = \Big( \sum_{p=1}^{n} \beta_p X_p^T \Big)^T \cdot X_i = \sum_{p=1}^{n} \beta_p \, X_p \cdot X_i \qquad (9.16) \]
\[ \|W\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j \, X_i \cdot X_j \qquad (9.17) \]
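These substitutions are easy to verify numerically. The following sketch (with a small synthetic data matrix; all values are illustrative) confirms that once W is expressed through its representer coefficients, both the dot products W · X_i and the regularizer depend on the data only through the Gram matrix S = DD^T:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5
D = rng.standard_normal((n, d))   # rows are the training points X_1 ... X_n
beta = rng.standard_normal(n)     # representer coefficients beta_1 ... beta_n

W = D.T @ beta                    # W = sum_p beta_p X_p^T (a column vector)
S = D @ D.T                       # Gram matrix of dot products, s_ij = X_i . X_j

direct_dots = D @ W               # W . X_i evaluated directly for each i
kernel_dots = S @ beta            # Equation 9.16: sum_p beta_p s_pi

direct_norm = W @ W               # ||W||^2 evaluated directly
kernel_norm = beta @ S @ beta     # Equation 9.17: sum_i sum_j beta_i beta_j s_ij

assert np.allclose(direct_dots, kernel_dots)
assert np.isclose(direct_norm, kernel_norm)
```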
In order to kernelize the problem, all we have to do is to substitute the dot product with
the similarity value s_{ij} = X_i \cdot X_j from the n × n similarity matrix S. Note that each X_i is
really the embedded representation \Phi(o_i) of an object. Therefore, one obtains the following
optimization objective function:
\[ J = \sum_{i=1}^{n} L\Big(y_i, \sum_{p=1}^{n} \beta_p s_{pi}\Big) + \frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j s_{ij} \qquad \text{[General form]} \]
In other words, all we need to do is to substitute each W^T \cdot X_i in the loss function with
\sum_p \beta_p s_{pi}. Therefore, one obtains the following form for least-squares regression:
\[ J = \frac{1}{2} \sum_{i=1}^{n} \Big(y_i - \sum_{p=1}^{n} \beta_p s_{pi}\Big)^2 + \frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j s_{ij} \qquad \text{[Least-squares regression]} \]
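Setting the gradient of this least-squares objective with respect to \beta to zero admits the closed-form solution \beta = (S + \lambda I)^{-1} \bar{y} (this closed form also appears in Exercise 7). A minimal sketch with synthetic data, checking stationarity of the solution:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, lam = 20, 3, 0.5
D = rng.standard_normal((n, d))                      # hidden data matrix
y = D @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

S = D @ D.T                                          # dot-product similarity matrix
beta = np.linalg.solve(S + lam * np.eye(n), y)       # representer coefficients

# Stationarity check: the gradient of J w.r.t. beta is S (S beta - y) + lam S beta
grad = S @ (S @ beta - y) + lam * (S @ beta)
assert np.allclose(grad, 0.0, atol=1e-8)

# Test-time prediction uses only the test-training similarities S_t:
D_test = rng.standard_normal((5, d))
S_t = D_test @ D.T
pred = S_t @ beta                                    # t-dimensional prediction vector
```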
By substituting W^T \cdot X_i = \sum_p \beta_p s_{pi} into the loss functions of various classifiers for binary
data, one can obtain corresponding optimization formulations:
\[ J = \sum_{i=1}^{n} \max\Big\{0, 1 - y_i \sum_{p=1}^{n} \beta_p s_{pi}\Big\} + \frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j s_{ij} \qquad \text{[SVM]} \]
\[ J = \sum_{i=1}^{n} \log\Big(1 + \exp\Big(-y_i \sum_{p=1}^{n} \beta_p s_{pi}\Big)\Big) + \frac{\lambda}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_i \beta_j s_{ij} \qquad \text{[Logistic Regression]} \]
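Since these objectives are functions of \beta alone, they can be minimized with ordinary gradient descent on \beta. The following hedged sketch applies gradient descent to the kernelized logistic-regression objective on synthetic labels; the step size and iteration count are illustrative choices, not recommendations:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, lam, alpha = 30, 2, 0.1, 0.001
D = rng.standard_normal((n, d))
y = np.where(D[:, 0] + D[:, 1] > 0, 1.0, -1.0)   # toy binary labels in {-1, +1}
S = D @ D.T                                      # dot-product similarity matrix

def objective(beta):
    t = S @ beta                                 # t_i = sum_p beta_p s_pi
    return np.sum(np.logaddexp(0.0, -y * t)) + 0.5 * lam * beta @ S @ beta

beta = np.zeros(n)
history = [objective(beta)]
for _ in range(300):
    t = S @ beta
    dloss_dt = -y / (1.0 + np.exp(y * t))        # derivative of log-loss w.r.t. t_i
    grad = S @ dloss_dt + lam * (S @ beta)       # chain rule through t = S beta
    beta -= alpha * grad
    history.append(objective(beta))

assert history[-1] < history[0]                  # the objective has decreased
```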
W can be expressed as the summation \sum_i \beta_i X_i^T over all engineered training instances, and
the dot product with the engineered test instance Z simply extracts the corresponding
row from S_t; the entries in that row are the values Z \cdot X_i. Therefore, the entire set of t test instances
can be predicted as the t-dimensional vector S_t \beta. In the case of the classification problem,
one needs to use the sign of each element of this vector as the predicted class label.
Problem 9.7.1 Consider two points (x_1, y_1) and (x_2, y_2) in 2-dimensional space and the
dot product similarity s = x_1 \cdot x_2 + y_1 \cdot y_2. Now imagine that you modify the similarity to the
superlinear function s' = (1 + s)^2. Show that s' can be expressed as the dot product between
(x_1^2, y_1^2, x_1 y_1 \sqrt{2}, x_1 \sqrt{2}, y_1 \sqrt{2}, 1) and (x_2^2, y_2^2, x_2 y_2 \sqrt{2}, x_2 \sqrt{2}, y_2 \sqrt{2}, 1).
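The identity in this problem is easy to check numerically with arbitrary coordinates (the values below are made up):

```python
import math

def embed(x, y):
    # 6-dimensional embedding whose dot products realize (1 + s)^2
    r2 = math.sqrt(2.0)
    return [x * x, y * y, x * y * r2, x * r2, y * r2, 1.0]

x1, y1, x2, y2 = 0.3, -1.2, 2.0, 0.7
s = x1 * x2 + y1 * y2                       # ordinary dot-product similarity
lhs = (1.0 + s) ** 2                        # superlinear similarity
rhs = sum(a * b for a, b in zip(embed(x1, y1), embed(x2, y2)))
assert abs(lhs - rhs) < 1e-12
```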
Now consider a situation where a 100,000 × 100,000 similarity matrix S has two non-zero
eigenvalues, and the 2-dimensional embedding (x_1, x_2) extracted by eigendecomposition
of S exhibits the property that all members of one class lie inside the ellipse x_1^2 + 4 x_2^2 ≤
10, whereas all members of the second class lie outside this ellipse. Discuss why the two
classes will become linearly separable if the embedding is extracted by eigendecomposition of
a modified similarity matrix S' in which we add 1 to each similarity entry and then square it.
What is the dimensionality of the modified embedding (i.e., the number of non-zero eigenvalues
of S')?
Two other functions are commonly used for increasing the embedding dimensionality and
capturing nonlinearity, such as the Gaussian kernel of Table 9.1.
Of course, applying a superlinear function does not always help, because it could lead to
overfitting. The level of sensitivity of the superlinear function depends on its parameters
(such as the bandwidth σ of the Gaussian function), which are often chosen in a
data-driven manner. For example, one can test the classification accuracy on out-of-sample
data in order to select σ². A critical fact about many of these functions is that they do not
destroy the positive semidefinite nature of the underlying similarity matrix. In Section 9.7.1,
we will discuss some of these transformations.
The above ideas are used frequently for multidimensional data, where the similarity
value sij is often set to a superlinear function of the dot product. Let X 1 . . . X n be the
n points, and the similarity sij be defined by the kernel function K(X i , X j ). Then, the
common kernel functions used for multidimensional data are defined in Table 9.1.
Each of the kernels in Table 9.1 has parameters associated with it, which need to be
learned in a data-driven manner. Note that the above kernels are similar to the techniques
discussed for modifying the similarity matrix. These modifications improve the level of sepa-
ration among different classes. The dimensionality of the embedding depends on the nature
of the kernel function. For example, the Gaussian kernel leads to an infinite-dimensional
embedding to represent all possible data pairs in Rn × Rn , although the data-specific em-
bedding is always n-dimensional and can be materialized for a data set containing n points
(using the eigendecomposition methods discussed earlier in this chapter).
Next, we list the Schur’s product theorem as a practice exercise in a step-by-step manner,
because it was used in one of the above results to show that S1 S2 is positive semidefinite,
when S1 and S2 are positive semidefinite.
Problem 9.7.2 (Schur's Product Theorem) Let S_1 = AA^T and S_2 = BB^T be two
positive semidefinite matrices. Let a_i be the ith row of A and b_i be the ith row of B.
• Show that for any vector x, one can express x^T (S_1 \odot S_2) x in the following form:
\[ x^T (S_1 \odot S_2) x = \sum_i \sum_j x_i x_j [a_i \cdot a_j][b_i \cdot b_j] \]
• Suppose that the qth components of a_i and b_i are a_{iq} and b_{iq}, respectively. Show that one
can simplify the above expression to the following:
\[ x^T (S_1 \odot S_2) x = \sum_i \sum_j x_i x_j \Big[\sum_k a_{ik} a_{jk}\Big]\Big[\sum_l b_{il} b_{jl}\Big] \]
Discuss why this expression is always nonnegative, and therefore the matrix S_1 \odot S_2
is positive semidefinite.
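The conclusion of this exercise can also be confirmed numerically: the elementwise (Hadamard) product of two randomly generated positive semidefinite matrices has no negative eigenvalues, up to round-off. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))
B = rng.standard_normal((6, 5))
S1 = A @ A.T                       # positive semidefinite by construction
S2 = B @ B.T                       # positive semidefinite by construction
H = S1 * S2                        # Hadamard (elementwise) product

min_eig = np.linalg.eigvalsh(H).min()
assert min_eig >= -1e-10           # nonnegative up to numerical round-off
```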
Problem 9.7.3 Show that adding a non-negative value c to each entry of a positive semidef-
inite matrix does not affect its positive semidefinite property.
The aforementioned properties of positive semidefinite matrices map to properties of
positive semidefinite kernels (expressed in closed form as in Table 9.1). First, we define the notion of
a (closed-form) positive semidefinite kernel function:
Definition 9.7.1 A kernel function is positive semidefinite if and only if all possible ma-
trices created by samples of the arguments of that function are positive semidefinite.
For example, in order to show that the polynomial kernel of Table 9.1 is positive semidefinite,
we will have to show that any n × n similarity matrix P = [pij ] created from arbitrary
X 1 . . . X n ∈ Rd using the function pij = (c + X i · X j )h and c ≥ 0 is positive semidefinite.
The value of n can also be arbitrary, whereas h is a positive integer.
Lemma 9.7.1 (Polynomial Kernel Is Positive Semidefinite) The n × n similarity
matrix P = [pij ] defined by the polynomial kernel pij = (X i ·X j +c)h for any X 1 . . . X n ∈ Rd
and c ≥ 0 is positive semidefinite.
Proof: Let S = [sij ] be an n × n matrix in which sij = X i · X j . We already know that
the matrix S is positive semidefinite because it is a dot product (Gram) matrix. Let C
be an n × n matrix containing c in each entry. Since c ≥ 0, it follows that C is positive
semidefinite, and the matrix C + S is positive semidefinite as well. The polynomial kernel
P can be expressed in the following form:
\[ P = (C + S) \odot (C + S) \odot \cdots \odot (C + S) \qquad (h \text{ factors}) \]
From Schur’s product theorem, the matrix P is positive semidefinite as well.
From Definition 9.7.1, this means that the polynomial kernel is positive semidefinite. One
can also show that the Gaussian kernel is positive semidefinite.
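Lemma 9.7.1 can likewise be spot-checked numerically by building a polynomial-kernel matrix from arbitrary points and inspecting its spectrum (the data and parameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((15, 3))   # 15 arbitrary points in R^3
c, h = 1.0, 3
P = (X @ X.T + c) ** h             # p_ij = (X_i . X_j + c)^h, applied elementwise

min_eig = np.linalg.eigvalsh(P).min()
assert min_eig >= -1e-8            # no significantly negative eigenvalues
```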
9.8 Summary
Many forms of data are not multidimensional, and examples include discrete sequences and
graphs. In such cases, one might have similarities available between objects, but one might
not have any multidimensional representation of the data. Eigendecomposition methods
from linear algebra help in converting such similarity matrices to multidimensional embed-
dings. This chapter discusses the use of the similarity matrix in lieu of the multidimensional
representation in order to implement machine learning algorithms with similarities rather
than multidimensional representations. Similarity-based representations also allow the flat-
tening of nonlinear relationships in the data, so that linear learners become more effective.
9.10 Exercises
1. Suppose that you are given a 10 × 10 binary matrix of similarities between objects.
The similarities between all pairs of the first four objects are 1, and the similarities
between all pairs of the next six objects are also 1. All other similarities are 0.
Derive an embedding of each object.
2. Suppose that you have two non-disjoint sets of objects A and B. The set A ∩ B is a
modestly large sample of objects. You are given all similarities between pairs of objects,
one drawn from each of the two sets. Discuss how you can efficiently approximate the
entire similarity matrix over the entire set A∪B. It is known that the similarity matrix
is symmetric. [Hint: Think of the connections of this setting with matrix factorization.
After all, all embeddings are extracted as symmetric matrix factorizations. Here, we
are given only a block of the similarity matrix.]
3. Suppose that S1 and S2 are n × n positive semidefinite matrices of ranks k1 and k2 ,
respectively, where k2 > k1 . Show that S1 − S2 can never be positive semidefinite.
4. Suppose you are given a binary matrix of similarities between objects, in which most
entries are 0s. Discuss how you can adapt the logistic matrix factorization approach
of Chapter 8 to make it more suitable to symmetric matrix factorization.
5. Suppose that you were given an incomplete matrix of similarities between objects
belonging to two sets A and B that are completely disjoint (unlike Exercise 2). Discuss
how you can find an embedding for each of the objects in the two sets. Are the
embeddings of the objects in set A comparable to those in the set B?
6. A centered vector is one whose elements sum to 0. Show that for any valid squared-distance
matrix \Delta = [\delta_{ij}^2] defined on a Euclidean space, the following must be true
for any centered vector y:
\[ y^T \Delta y \leq 0 \]
(a) Suppose that you are given a symmetric matrix Δ in which all entries along the
diagonal are 0s, and it always satisfies y T Δy ≤ 0 for all centered y. Show that
all entries of Δ must be nonnegative by using an appropriate choice of vector y.
(b) Discuss why a distance matrix Δ of (squared) Euclidean distances is always
indefinite, unless it is a trivial matrix of 0s.
7. You have an n × n (dot-product) similarity matrix S between training points and a t × n
similarity matrix S_t between test and training points. The n-dimensional column
vector of class variables is y. Furthermore, the true n × d data matrix is D (i.e.,
S = DD^T), but you are not shown this matrix. As discussed in Chapter 4, the d-
dimensional coefficient vector W of linear regression is given by the following:
\[ W = (D^T D + \lambda I)^{-1} D^T y \]
(a) Show that the predictions of the test instances can be expressed purely in terms
of the similarity matrices as follows:
\[ p = S_t (S + \lambda I)^{-1} y \]
(b) The previous exercise performs differentiation with respect to the weight vector.
Show the result of (a) using the representer-based loss function discussed in this
chapter, and differentiating with respect to β.
(c) Take a moment to examine the coefficient vector obtained using the dual ap-
proach in Equation 6.14 of Chapter 6 and compare it to one in this exercise.
What do you observe?
8. Derive the gradient descent steps for the primal formulation of logistic regression using
the similarity matrix S and the representer theorem.
9. A student is given a square and symmetric similarity matrix S that is not positive
semidefinite. The student computes the following new matrix:
\[ S' = I - S + S^2 \]
Discuss whether S' is positive semidefinite.
S = S1 S 2 + S 2 S 3 + S 3 S 1
\[ W \Leftarrow (D_w^T D_w + \lambda I_d)^{-1} D_w^T y \]
where \Delta_w is a diagonal matrix in which the ith diagonal entry is 1 only if the ith training instance
of D is margin-violating. How would you compute \Delta_w with representer coefficients?
Use the push-through identity to show that this update is equivalent to the following
with representer coefficients β:
Note that one can implicitly implement this update using the following:
16. Consider loss functions of the following form (same notations as the text):
\[ \text{Minimize } J = \sum_{i=1}^{n} L(y_i, W \cdot X_i) + \frac{\lambda}{2} \|W\|^2 \]
Now imagine that you only had similarities S = [s_{ij}] available to you. Show that W
can be updated indirectly by updating its representer coefficients \beta as follows:
\[ \beta_i \Leftarrow \beta_i (1 - \alpha \lambda) - \alpha \frac{\partial L(y_i, t_i)}{\partial t_i} \quad \forall i \in \{1 \ldots n\} \]
Here, we define t_i = \sum_{p=1}^{n} s_{ip} \beta_p.
Chapter 10

The Linear Algebra of Graphs
“If people do not believe that mathematics is simple, it is only because they do
not realize how complicated life is.” – John von Neumann
10.1 Introduction
Graphs are encountered in many real-world settings, such as the Web, social networks, and
communication networks. Furthermore, many machine learning applications are conceptu-
ally represented as optimization problems on graphs. Graph matrices have a number of
useful algebraic properties, which can be leveraged in machine learning. There are close
connections between kernels and the linear algebra of graphs; a classical application that
naturally belongs to both fields is spectral clustering (cf. Section 10.5).
This chapter is organized as follows. The next section introduces the basics of graphs
and representations with adjacency matrices. The structural properties of the powers of
adjacency matrices are discussed in Section 10.3. The eigenvectors and eigenvalues of graph
matrices are discussed in Section 10.4. The linear algebra of graph clustering is explored
in Section 10.5, whereas the linear algebra of graph ranking algorithms is explored in Sec-
tion 10.6. The linear algebra of graphs with poor connectivity properties is discussed in
Section 10.7. Machine learning applications of graphs are discussed in Section 10.8. A sum-
mary is given in Section 10.9.
[Figure 10.1: examples of undirected graphs, including (b) a weighted graph of a chemical
compound, with edge weights corresponding to bond strengths, and (c) a social network of
friends containing a close group of friends]

[Figure 10.2: a directed graph on the vertices 1 through 9]
The objects in a graph are referred to as vertices, and the relationships among them are
referred to as edges. A vertex is also sometimes referred to as a node. Throughout this book,
we use the term “vertex” and “node” interchangeably. A graph G is denoted by the pair
(V, E), where V is a set of vertices (nodes), and E is a set of edges. If the graph contains
n vertices, it is assumed that the vertices are V = {1 . . . n}. Similarly, each edge (i, j) ∈ E
represents a connection between the vertices i and j.
Graphs may be directed or undirected. In directed graphs, each edge has a direction.
For example, Web links have a direction from the source page to the destination page. The
source of an edge is referred to as its tail and the destination is referred to as its head.
Therefore, edges are shown using arrows, where the head corresponds to the end containing
an arrowhead. An example of a directed graph is illustrated in Figure 10.2. On the other
hand, edges do not have direction in undirected graphs. For example, a Facebook friendship
link or a chemical bond does not have direction. All the graphs illustrated in Figure 10.1
are undirected graphs. An undirected graph may be converted into a directed graph by
replacing each undirected edge with a pair of directed edges in opposite directions.
Finally, graphs may be unweighted or weighted. In an unweighted graph, an edge may be
present or absent between two vertices, and there is no “strength” associated with a specific
edge. In algebraic terms, the representation is binary and the relationship between a pair
of vertices has a value of either 1 or 0, depending on whether or not an edge is present
between the pair. On the other hand, in many applications, the relationship might have a
weight associated with it. For example, a chemical bond has a strength corresponding to
the number of shared electrons. Correspondingly, weights are shown in Figure 10.1(b). In
an email network, the weight of an edge from one participant to another might correspond
to the number of messages sent along that edge. Since weighted graphs are more general,
the graphs discussed in this chapter are always associated with nonnegative weights.
In an undirected graph, the degree of a vertex is defined as the number of incident edges
at that vertex. For example, in Figure 10.1(c), the degree of the vertex corresponding to
Sam is 4. Since every edge is incident on two vertices, the sum of the degrees of the vertices
in an undirected graph with m edges is always equal to 2m. In the case of a directed graph,
it makes sense to talk about the indegree and the outdegree of a vertex. The indegree of a
vertex is the number of incoming edges at a vertex, whereas the outdegree is the number
of outgoing edges. For example, the indegree of vertex 1 in Figure 10.2 is 1, whereas its
outdegree is 2. The sum of the indegrees over all vertices and the sum of the outdegrees
over all vertices are both equal to the number of edges m. This is because each edge is
incident on exactly one vertex as an incoming edge, and it is incident on one vertex as an
outgoing edge. All definitions of vertex degrees can be generalized to the weighted case by
adding the weights of the edges instead of using a default weight of 1 for each edge.
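These degree definitions translate directly into row and column sums of the adjacency matrix. A small sketch with an invented 4-vertex directed graph:

```python
# Rows index the tail of each edge, columns index the head.
adj = [
    [0, 1, 0, 1],   # vertex 0 points to vertices 1 and 3
    [0, 0, 1, 0],   # vertex 1 points to vertex 2
    [1, 0, 0, 0],   # vertex 2 points to vertex 0
    [0, 0, 1, 0],   # vertex 3 points to vertex 2
]
n = len(adj)
outdeg = [sum(row) for row in adj]                            # row sums
indeg = [sum(adj[i][j] for i in range(n)) for j in range(n)]  # column sums
m = sum(outdeg)                                               # number of edges

# The indegrees and the outdegrees each sum to the number of edges m.
assert sum(indeg) == sum(outdeg) == m == 5
```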
[Figure 10.3: (a) a connected undirected graph; (b) a disconnected graph with two components]
A directed graph is strongly connected if a directed path exists between each ordered pair
of vertices; a directed graph may fail to be strongly connected even when its undirected
version is connected (because directed paths do not exist between specific ordered pairs of vertices). The graph
in Figure 10.2 is not strongly connected, because a directed path does not exist from vertex 7
to vertex 9. As we will see later, strongly connected graphs have useful algebraic properties.
The distance or shortest path between a pair of vertices in a directed graph is defined
as the least number of edges on a directed path between them. The diameter of a directed
graph is defined as the largest distance between two vertices in the graph. Note that the
distance from vertex i to vertex j might be different from that from vertex j to vertex i.
Therefore, one needs to compute the distances between all n(n − 1) ordered pairs of vertices
in the graph and compute the largest among them in order to compute the graph diameter.
If no directed path exists between a particular pair of vertices, then the diameter of the
graph is ∞. Therefore, a directed graph needs to be strongly connected in order for its
diameter to be finite. For example, the diameter of the directed graph in Figure 10.2 is ∞
because no directed path exists from vertex 7 to vertex 9.
In undirected graphs, the shortest path distance from vertex i to j is the same as that
from vertex j to i. If no path exists between a pair of vertices, it means that the graph is
disconnected, and the distance between this vertex pair is ∞. The diameter of an undirected
graph is the maximum of the shortest path distances between each pair of vertices. The
diameter of a disconnected graph [like Figure 10.3(b)] is ∞.
Figure 10.4: An undirected graph and its random walk graph. Note that asymmetric nor-
malization makes a symmetric adjacency matrix asymmetric
algebraic properties of the graph. Most real-world graphs have power-law degree distributions [43],
as a result of which a tiny fraction of the vertices often accounts for
the vast majority of the sum of the degrees of all vertices in the full graph. As a result,
the structure of the edges incident on these vertices dominates any type of analysis or the
results of a machine learning algorithm applied to the entire network. This is undesirable
because the structural behavior of high-degree nodes is often caused by spam and other
irrelevant/noisy edges.
Some forms of normalization have a probabilistic interpretation, which is useful in
real applications. The first type of normalization is asymmetric normalization, in which
every row is normalized to sum to 1. Therefore, we sum the elements of each row, and
divide each element in that row by this sum. This type of normalization
converts the adjacency matrix into the stochastic transition
matrix of a Markov chain. This transition matrix defines a random walk on the graph,
where the normalized outgoing edge weights at each vertex define the probabilities of traversing
the corresponding edges. The resulting graph is referred to as a random walk graph. Note that this type
of normalization results in asymmetric weights even for an undirected graph (for which the
unnormalized adjacency matrix is symmetric). An example of asymmetric normalization is
shown in Figure 10.4, where the original graph with binary edge weights is shown on the
left, whereas the normalized graph (i.e., random walk graph) is shown on the right. It is
noteworthy that this type of asymmetric normalization can also be applied to a directed
graph. In such a case, the weight of each edge is divided by the sum of the weights of the
outgoing edges at a vertex. The goal is again to interpret each edge weight as a random
walk probability out of a given vertex.
Symmetric normalization is generally defined for undirected graphs. Therefore, one starts
with a symmetric adjacency matrix and the goal is to preserve its symmetry in the normal-
ization process. In symmetric normalization, we sum up the nonnegative entries of the ith
row to create the sum δi . Since the matrix is symmetric, the sum of the elements of the ith
column is also δi . In other words, we have the following:
\[ \delta_i = \sum_{j=1}^{n} a_{ij} = \sum_{j=1}^{n} a_{ji} \]
In symmetric normalization, we divide each entry by the geometric mean of its row and
column sums. The resulting similarity value s_{ij} is defined as follows:
\[ s_{ij} \Leftarrow \frac{a_{ij}}{\sqrt{\delta_i \delta_j}} \]
Note that in asymmetric normalization, we always use pij ⇐ aij /δi , and therefore the sum
of each row is 1. Here, pij represents the probability of transition to vertex j from vertex i
in the random walk graph.
One can also represent the above normalizations algebraically in the form of matrix
multiplication. Let A = [aij ] be the original n×n (undirected) adjacency matrix, and Δ be a
diagonal n×n matrix in which the ith diagonal entry is δi = j aij . The matrix Δ is referred
to as the degree matrix of A. It is noteworthy that the degree matrix incorporates information
about the weights aij of edges; the values on the diagonal of Δ will be the aggregate weights
of incident edges rather than the number of incident edges. We occasionally refer to the
matrix Δ as the weighted degree matrix, although referring to it simply as “degree matrix” is
more common. Let P = [p_{ij}] be the asymmetrically normalized stochastic transition matrix,
and S = [s_{ij}] be the symmetrically normalized adjacency matrix. Then, the asymmetrically
and symmetrically normalized matrices are defined as follows:
\[ P = \Delta^{-1} A, \qquad S = \Delta^{-1/2} A \Delta^{-1/2} \]
As we will see, these two related matrices play an important role in many network applica-
tions including clustering, classification, and PageRank computation.
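Both normalizations can be sketched in a few lines for a toy undirected graph (the adjacency matrix below is invented for illustration):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # symmetric adjacency matrix
delta = A.sum(axis=1)                        # (weighted) vertex degrees

P = A / delta[:, None]                       # asymmetric: P = Delta^{-1} A
S = A / np.sqrt(np.outer(delta, delta))      # symmetric: S = Delta^{-1/2} A Delta^{-1/2}

assert np.allclose(P.sum(axis=1), 1.0)       # each row of P sums to 1 (stochastic)
assert np.allclose(S, S.T)                   # symmetric normalization preserves symmetry
```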
This is because there is only one way of walking from a vertex to itself in a specified number
of even steps, and only one way of walking across the two vertices in a specified number of
odd steps. An infinite summation over these matrices will not yield a converging summation.
Furthermore, the entries of Ar could themselves blow up over an infinite summation (in some
types of matrices).
However, by allowing for a decay factor γ < 1, it is possible for this summation to
converge. From a semantic perspective, this means that a walk of length r is weighted by
γ r . Even though there might be more walks of greater length, the decay factor ensures that
the infinite summation eventually converges. Clearly, the choice of γ required for convergence
depends on the structural properties of the graph; interestingly, these structural properties
can be captured by the eigendecomposition of the underlying adjacency matrix. Choosing
γ less than the reciprocal of the largest absolute eigenvalue of A ensures that the powers of
(γA)k do converge over an infinite summation. In other words, we are causing a decay at
the multiplicative factor of γ < 1, and then summing up the weights of all walks between a
pair of vertices.
This result follows from the fact that if \lambda is an eigenvalue of A, then \gamma\lambda is an eigenvalue
of the matrix A_\gamma = \gamma A, because \det(A_\gamma - \gamma\lambda I) = \gamma^n \det(A - \lambda I). As a result, by
choosing \gamma to be less than the reciprocal of the largest absolute magnitude of the eigenvalues
of A, the largest absolute magnitude of the eigenvalues of the matrix A_\gamma = \gamma A becomes strictly
less than 1. The following result can be easily shown:
Lemma 10.3.1 Let A be a matrix for which all eigenvalues have absolute magnitude less
than 1/γ for γ > 0. Then, the following can be shown:
\[ \lim_{r \to \infty} (\gamma A)^r = 0 \]
Proof Sketch: We denote γA with the matrix Aγ , and all its eigenvalues have absolute
magnitude less than 1. Any matrix can be converted into Jordan normal form (cf. Sec-
tion 3.3.3) with possibly complex eigenvalues as follows:
\[ A_\gamma = V J V^{-1} \]
Here, J is an upper-triangular matrix in Jordan normal form in which the diagonal entries
contain the (possibly complex) eigenvalues, each with magnitude less than 1. In such a case,
we can show the following:
\[ A_\gamma^r = V J^r V^{-1} \]
As r goes to ∞, the matrix J^r can be shown¹ to go to 0. Therefore, the matrix A_\gamma^r converges
to the zero matrix as well.
This result provides an approach for computing the decay-weighted sum of walks between
any pair of vertices.
Lemma 10.3.2 Given a directed graph with adjacency matrix A in which the largest absolute
eigenvalue is less than 1/\gamma, the weighted sum of all decayed walks between each pair of vertices
is contained in the matrix (I - \gamma A)^{-1} - I:
\[ \sum_{r=1}^{\infty} (\gamma A)^r = (I - \gamma A)^{-1} - I \]
¹The diagonal entries of the upper-triangular matrix J^r are powers of those in J, and therefore go to 0.
The remaining strictly triangular part is nilpotent, and, therefore, J^r will eventually go to 0.
Proof Sketch: All eigenvalues of γA have magnitude less than 1. Each eigenvalue of γA
has a corresponding eigenvalue in (I − γA), and the two eigenvalues sum to 1 with the
same eigenvector. Therefore all eigenvalues of (I − γA) are non-zero, and the matrix is non-
singular. In other words, we can multiply both sides of the above equation with (I − γA)
without affecting the correctness of the above result. Multiplying each side with (I − γA)
yields the matrix γA for both sides. This proves the result.
The matrix (I - \gamma A)^{-1} is very useful because the (i, j)th entry tells us about the level
of indirect connectivity from vertex i to vertex j, even when this pair of vertices is not directly
connected. This matrix contains all the Katz measures between pairs of vertices. The Katz
measure is generally used for undirected graphs, although it can also be applied to directed
graphs.
The Katz measure can be used for link prediction. In this problem, the goal is to discover
pairs of vertices between which links are likely to form in the future in a graph of interest
(e.g., a social network). Clearly, if many (short) walks exist between a pair of vertices, links
are more likely to form between them in the future. For example, one is more likely to form
links with the friends of one's friends in a social network.
This problem of link prediction can be viewed in a similar manner to that of the problem
of recommendations with implicit feedback. To solve this problem, we first set a small subset
of the non-zero entries in A to 0 to obtain A , and save those edges as a validation set Av .
We also add some of the zero edges in A to Av . One can compute the inverse of (I − γA )
at different values of γ and select the one at which the link predictions on the validation
set Av are the most accurate. Once the value of γ has been determined, we compute the
matrix (I − γA)−1 (with all edges included) in order to rank the edges in the order of the
likelihood of forming links.
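The Katz-based ranking of candidate links can be sketched in a few lines of NumPy. The 5-vertex graph below is a hypothetical example of our own; the only requirement from the discussion above is that γ be small enough for the Katz series to converge (i.e., γ < 1/λmax).

```python
import numpy as np

# Hypothetical 5-vertex undirected graph (adjacency matrix A).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# gamma must be below 1/lambda_max for the Katz series to converge.
lam_max = np.max(np.abs(np.linalg.eigvals(A)))
gamma = 0.5 / lam_max

# Katz matrix: entry (i, j) aggregates gamma^k-discounted walks from i to j.
K = np.linalg.inv(np.eye(5) - gamma * A)

# Rank the currently absent edges by their Katz score (link prediction).
candidates = [(i, j) for i in range(5) for j in range(i + 1, 5) if A[i, j] == 0]
ranked = sorted(candidates, key=lambda e: K[e], reverse=True)
```

In a full pipeline, γ would instead be tuned on the held-out validation set Av as described above.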
The powers of the adjacency matrix can also be used to characterize the diameter and
connectivity of both undirected and directed graphs. Note that if the diameter of a graph
is d, then a path of length at most d exists between each pair of vertices. In other words,
if rij ≤ d is the length of the shortest path between an arbitrary pair of vertices i and j,
then the (i, j)th entry of A^{rij} will be non-zero. This implies the following way of defining
the diameter of a graph in terms of the powers of the adjacency matrix.
Property 10.3.2 The diameter of a (directed or undirected) graph with adjacency matrix
A is the smallest value of d for which all entries of the matrix ∑_{k=0}^{d} A^k are non-zero. For
any connected, undirected graph or strongly connected, directed graph, the value of d is at
most n − 1.
When an undirected graph is not connected or a directed graph is not strongly connected,
the diameter of the graph is ∞. Since all graphs with finite diameter have a diameter value
at most n − 1, this provides a simple approach for testing the connectivity of a (directed or
undirected) graph with the use of powers of the adjacency matrix.
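A direct (if inefficient) implementation of this property is sketched below; it simply accumulates the powers of A until all entries of the running sum become non-zero, and reports an infinite diameter as None. The function name and test graphs are our own.

```python
import numpy as np

def diameter_via_powers(A):
    """Smallest d with all entries of sum_{k=0}^{d} A^k non-zero; None if disconnected."""
    n = A.shape[0]
    S = np.eye(n)       # running sum, starts at A^0 = I
    P = np.eye(n)       # current power A^k
    if np.all(S > 0):   # handles the trivial n = 1 case
        return 0
    for d in range(1, n):  # a finite diameter is at most n - 1
        P = P @ A
        S = S + P
        if np.all(S > 0):
            return d
    return None

# Undirected path graph 0-1-2-3 has diameter 3.
path = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)

# A disconnected graph (an edge plus an isolated vertex) has infinite diameter.
two_comp = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
```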
A simple approach for testing connectivity is to compute the matrix containing all Katz
measures and check whether all nondiagonal entries are non-zero.
Undirected and directed graphs that are not connected/strongly connected will have
missing edges between specific sets of vertices in the matrix ∑_{k=0}^{n−1} A^k. The missing edges in
∑_{k=0}^{n−1} A^k will result in a specific type of block structure of the non-zero entries. For example,
consider a directed graph in which the vertices can be divided into sets 1 and 2. Assume
that the vertices in each of the sets 1 and 2 are strongly connected. Furthermore, edges exist
from vertices in set 1 to vertices in set 2, but no edges exist in the other direction. This
type of graph is not strongly connected and its adjacency matrix can always be converted
into a block upper-triangular form by appropriately reordering the vertices:
A = [ A11  A12 ]
    [  0   A22 ]
Note that the blocks along the diagonal are always square, but the blocks above the diagonal
might not be square. It is not very difficult to verify that no matter how many times
we exponentiate the matrix A, the lower block of zeros will stay as zeros. Similarly, a
disconnected, undirected adjacency matrix can be converted into block diagonal form like
the one below:
A = [ A11   0  ]
    [  0   A22 ]
It is relatively easy to verify that exponentiating this matrix any number of times will only
result in the individual blocks being exponentiated:
A^k = [ A11^k    0    ]
      [   0    A22^k ]
The two blocks A11 and A22 do not interact with one another in matrix multiplication.
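The block behavior under powers is easy to verify numerically; the small block-diagonal example below is hypothetical.

```python
import numpy as np

# Two disconnected components in block-diagonal form: a triangle and an edge.
A11 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
A22 = np.array([[0, 1], [1, 0]], dtype=float)
A = np.block([[A11, np.zeros((3, 2))],
              [np.zeros((2, 3)), A22]])

# Cubing A only cubes the individual blocks; the zero blocks stay zero.
A3 = np.linalg.matrix_power(A, 3)
```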
Property 10.4.1 The adjacency matrices of directed graphs might have one or more com-
plex eigenvalues. Furthermore, a directed graph is not even guaranteed to have a diagonal-
izable adjacency matrix even after allowing for complex eigenvalues.
Without the unidirectional edge, the graph is symmetric and diagonalizable with real and
orthonormal eigenvectors. However, the addition of a single edge from node 1 to node
4 makes the adjacency matrix non-diagonalizable. On computing det(N − λI), it can be
shown that this matrix has the characteristic polynomial λ⁴ − 3λ², which corresponds to
the eigenvalues {−√3, √3, 0, 0}. Therefore, the eigenvalue 0 is repeated. However, there is
only one eigenvector [0, 1, 0, −1]T with eigenvalue 0, since the matrix has rank 3. Therefore,
this adjacency matrix is not diagonalizable.
As an example of a directed graph with a diagonalizable adjacency matrix and complex
eigenvalues, consider the following adjacency matrix, which contains a directed cycle of
three vertices:
A = [ 0 0 1 ]
    [ 1 0 0 ]          (10.1)
    [ 0 1 0 ]
Note that this matrix does have one real-valued eigenvector [1, 1, 1]^T with an eigenvalue of
1. The other two eigenvalues can be shown to be (−1 + i√3)/2 and (−1 − i√3)/2, which are
obviously complex. The corresponding eigenvectors are also complex. All eigenvalues can be
shown to be the real and complex cube roots of unity because the characteristic polynomial
is λ³ − 1. It is noteworthy that we did get at least one real eigenvector-eigenvalue pair.
Furthermore, this eigenvalue is the largest in absolute magnitude², although
the other two eigenvalues also have a magnitude of 1. This is not a coincidence. It can
be shown that the adjacency matrix of any strongly connected directed graph will have at
least one real eigenvector-eigenvalue pair, which is also the dominant pair. The adjacency
matrix of a strongly connected graph is said to be irreducible.
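One can check the cube-roots-of-unity claim for the directed 3-cycle directly:

```python
import numpy as np

# Adjacency matrix of the directed 3-cycle (Equation 10.1).
A = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

vals = np.linalg.eigvals(A)
# Every eigenvalue satisfies lambda^3 = 1 and has magnitude 1;
# exactly one eigenvalue (the value 1) is real.
```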
An adjacency matrix that is not irreducible is said to be reducible. Note that if the graph
is not strongly connected (i.e., its adjacency matrix is reducible), then either the graph is
completely disconnected, or its vertices can be partitioned into two sets (i.e., a cut can be
created) such that edges between the two sets point in only one direction. In other words,
the matrix can be expressed in the following block upper-triangular form:
A = [ A11  A12 ]          (10.2)
    [  0   A22 ]
In terms of walks on the directed graph, it means that once one moves from block 1 to block
2 via edges in A12 , it is impossible to come back to any vertex in block 1. In the above
matrix it is assumed that the vertices are ordered so that all vertices in one component
occur before all vertices in another component.
2 The magnitude of the complex number a + ib is √(a² + b²).
10.4. THE PERRON-FROBENIUS THEOREM 421
The Perron-Frobenius theorem applies only to strongly connected graphs, which are
represented by irreducible adjacency matrices. The primary focus of this result is on the
eigenvector of largest magnitude, which is also referred to as the principal eigenvector.
Note that the left eigenvectors and right eigenvectors of an asymmetric matrix (like a
directed graph) are different. The general version of the Perron-Frobenius theorem is stated
as follows:
Theorem 10.4.1 (Perron-Frobenius Theorem for Directed Graphs) Let A be a
square, irreducible adjacency matrix with nonnegative entries for a directed graph contain-
ing n vertices. Then, one of the largest eigenvalues of A (in absolute magnitude) is always
real-valued and positive (denoted by λmax ), and the multiplicity of this (positive) eigenvalue
is 1. However, other complex or negative eigenvalues could exist with absolute magnitude
λmax . The following results also hold true:
• The unique left eigenvector and unique right eigenvector corresponding to the real and
positive eigenvalue λmax contain only real and strictly positive entries.
• The real and positive eigenvalue λmax satisfies the following:
average_i ∑_j aij ≤ λmax ≤ max_i ∑_j aij

average_j ∑_i aij ≤ λmax ≤ max_j ∑_i aij
This means that for unweighted matrices, the largest eigenvalue lies between the aver-
age and maximum indegree (and outdegree). Therefore, the minimum of the maximum
indegree and maximum outdegree can be used to provide an upper bound on λmax .
• Special case of stochastic transition matrices: Since stochastic transition matri-
ces have a weighted outdegree of 1 for each vertex, the largest eigenvalue is 1 according
to the above results. The corresponding right eigenvector of a stochastic transition ma-
trix P is the n-dimensional column vector of 1s because each row of P sums to 1. The
corresponding left eigenvector, referred to as the PageRank vector, is the solution to
πT P = πT .
It is noteworthy that even though the largest eigenvalue in magnitude is positive, com-
plex or negative eigenvalues might exist with the same absolute magnitude λmax . As a
specific example, consider the directed cycle of three vertices in the adjacency matrix of
Equation 10.1. The eigenvalues of this matrix are the three real or complex cube-roots of
1, which are 1, (−1 + i√3)/2 and (−1 − i√3)/2. All three roots have an absolute magnitude
of 1. In general, it can be shown that a directed cycle of n vertices has n eigenvalues
corresponding to the n real and complex nth roots of 1. These values can be shown to be
exp(2i πt/n) = cos(2πt/n) + i sin(2πt/n) for t ∈ {0 . . . n − 1}. All real and complex eigen-
values have an absolute magnitude of 1. If the graph is reducible, there might be multiple
eigenvectors corresponding to the largest (positive) eigenvalue. Furthermore, the principal
eigenvector is no longer guaranteed to contain only positive entries in such a case.
A simplified version of the Perron-Frobenius theorem also applies to undirected graphs.
In undirected graphs, the adjacency matrix is symmetric and therefore all eigenvectors
are real. Furthermore, the graph is always strongly connected, when viewed as a directed
network (with two directed edges replacing each undirected edge). Correspondingly, one can
state the Perron-Frobenius theorem for undirected graphs as follows:
This means that for unweighted matrices, the largest eigenvalue lies between the aver-
age and maximum degree.
Finally, the stochastic transition matrices of undirected graphs are not symmetric; however,
they inherit some of the properties of undirected (symmetric) adjacency matrices from
which they are derived. For example, the stochastic transition matrices of undirected graphs
continue to have real eigenvectors and real eigenvalues like their symmetric counterparts.
Corollary 10.4.2 (Stochastic Transition Matrices of Undirected Graphs) Let P =
Δ−1 A be the normalized transition matrix of an undirected, connected graph with n × n ad-
jacency matrix A and degree matrix Δ. The following results are true:
• All eigenvectors and eigenvalues are real. The largest eigenvalue in absolute magnitude
has a value of 1 and occurs with multiplicity 1, although it is also possible to have an
eigenvalue with value −1.
• The single right eigenvector with eigenvalue of 1 corresponds to an n-dimensional
column of 1s. This vector is a valid right eigenvector because each row sums to 1 in
the stochastic transition matrix.
• The left eigenvector solution to π T P = π T is unique to within scaling. This vector is
referred to as the PageRank vector and all components are strictly positive.
Note that even though the eigenvector with the largest eigenvalue of 1 is unique, other eigenvectors
could exist with an eigenvalue of −1. Therefore, the eigenvector corresponding to the eigenvalue of
largest absolute magnitude is not unique. As a specific example, consider the stochastic transition matrix of an undirected
graph with two vertices and a single edge between them. Both the adjacency matrix A and
the stochastic transition matrix P have the following form:
A = P = [ 0 1 ]
        [ 1 0 ]
This graph has the same adjacency matrix as a directed cycle of two vertices. It has eigen-
values of −1 and +1.
10.5. THE RIGHT EIGENVECTORS OF GRAPH MATRICES 423
The eigenvectors of normalized adjacency matrices are used extensively in machine learn-
ing for applications like spectral clustering and ranking. The left eigenvectors are used for
applications like ranking nodes, whereas the right eigenvectors are used for spectral clus-
tering. This will be the focus of subsequent sections.
matrix. This discussion sets the stage for understanding spectral methods as variants of
this kernel method. We assume that the adjacency matrix (after removing the weak links)
is denoted by A = [aij ]. The n × n adjacency matrix A of an undirected graph is a sym-
metric similarity matrix, although it is not positive semidefinite because the sum of its
eigenvalues (i.e., matrix trace) is 0. Nevertheless, one can always diagonalize the symmetric
adjacency matrix as A = QΛQT and simply use the top-k columns of Q as the engineered
representation. Any clustering algorithm like k-means can be applied on the embedding
(cf. Section 9.5.1.1). The lack of positive semidefiniteness of the similarity matrix (i.e., the
adjacency matrix) might lead one to assume that this is not a kernel method. However, this
is not quite correct; we could also condition the matrix A by adding the absolute value γ > 0
of the most negative eigenvalue to each diagonal entry. The matrix A + γI = Q(Λ + γI)QT
is a positive semidefinite matrix with exactly the same eigenvectors. The eigenvalues are not
used. This is slightly different from kernel k-means (cf. Section 9.5.2.1), which implicitly
scales the eigenvectors with the square-root of the eigenvalues via the kernel trick. Spectral
clustering ignores the scaling effect of eigenvalues, and works with whitened representations
(cf. Section 7.4.6 of Chapter 7). Furthermore, it only uses the top-k eigenvectors, thereby
replacing soft eigenvalue weighting with discrete selection. These types of normalization and
feature selection differences always occur in cases where one uses feature engineering (as in
spectral methods) rather than the kernel trick.
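A quick numerical check of this eigenvalue-shifting argument, on a hypothetical small graph:

```python
import numpy as np

# Symmetric adjacency matrix of a small undirected graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

vals, Q = np.linalg.eigh(A)
gamma = abs(vals.min())    # |most negative eigenvalue|; trace(A) = 0 forces vals.min() < 0

K = A + gamma * np.eye(4)  # positive semidefinite, with the same eigenvectors as A
```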
The approach discussed above is a kernel method (related to spectral clustering), but it is
not precisely spectral clustering. One problem with using adjacency matrices (as similarity
matrices) directly is that the entries of the matrix are dominated by a few vertices. Most real-
world graphs satisfy power-law degree distributions [43] in which a tiny fraction (typically
less than 1%) of the vertices account for most of the edges in the graph. As a result, the
embedding is dominated by the topological structure of a small fraction of vertices, which
is undesirable. Spectral clustering solves this problem with vertex degree normalization.
In symmetric normalization, we compute the degree matrix Δ, which is an n × n diagonal
matrix containing the degree δi = ∑_j aij on the ith diagonal entry. Each entry aij is divided
by the geometric mean of δi and δj in order to reduce the influence of high-degree vertices.
As discussed in Section 10.2, the symmetrically normalized similarity matrix S is defined
as follows:
S = Δ−1/2 AΔ−1/2 (10.3)
We can diagonalize this similarity matrix S = Q(s) Λ Q(s)^T, and then use the top-k columns
of Q(s) (i.e., largest eigenvectors) as the n × k matrix Q(s),k containing the embedding. We
subscript the matrix Q(s),k containing the embedding with ‘(s)’ to emphasize symmetric
normalization. The ith row of Q(s),k contains the k-dimensional embedding of the ith ver-
tex. This embedding is referred to as the Ng-Jordan-Weiss embedding [98]. Any clustering
algorithm like k-means can be applied to this embedding. It is also common to normalize
each row of the n×k matrix Q(s),k to unit norm just before applying the k-means algorithm.
Note that normalizing each row of Q(s),k will result in an embedding matrix in which the
columns are no longer normalized.
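A minimal sketch of the Ng-Jordan-Weiss pipeline (up to, but not including, the final k-means step) might look as follows; the function name and the toy two-cluster graph are our own.

```python
import numpy as np

def njw_embedding(A, k):
    """Row-normalized top-k eigenvector embedding of S = D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))             # symmetric normalization
    _, vecs = np.linalg.eigh(S)                 # eigenvalues in ascending order
    Qk = vecs[:, -k:]                           # top-k (largest) eigenvectors
    return Qk / np.linalg.norm(Qk, axis=1, keepdims=True)  # unit-norm rows

# Two triangles joined by one weak edge: a natural 2-cluster graph.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

E = njw_embedding(A, 2)   # rows of E would now be fed to k-means
```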
A related variation of this approach is the Shi-Malik algorithm [115], which uses the
stochastic transition matrix instead of symmetrically normalized matrix. In other words,
the normalization is asymmetric:
P = Δ−1 A
One can also view P as a similarity matrix; however, the main problem is that it is asym-
metric; therefore, it does not even make sense to talk about positive semidefiniteness.
Note that this decomposition always contains real-valued eigenvectors and eigenvalues
according to the Perron-Frobenius theorem for stochastic transition matrices (cf. Corollary
10.4.2). The columns of Q(a) contain the right eigenvectors of P. We subscript the
matrix with ‘(a)’ to emphasize the fact that it is extracted from an asymmetric similarity
matrix. The top-k columns of Q(a) are extracted in order to create an n × k embedding
matrix Q(a),k . The ith row of this matrix contains the k-dimensional embedding of the
ith vertex, and it is referred to as the Shi-Malik embedding. Any off-the-shelf clustering
algorithm can be applied to this embedding. We make an observation about the top eigen-
vector in this embedding, which is also stated in the Perron-Frobenius result for stochastic
matrices (cf. Corollary 10.4.2):
Property 10.5.1 The largest eigenvector in the Shi-Malik embedding is a column of 1s.
This particular eigenvector is not very informative from a clustering point of view and is
sometimes discarded (although including it does not seem to make much of a difference).
Proof: We show that both of the above statements are true if and only if x is a generalized
eigenvector of A satisfying Ax = λΔx.
First, we note that P x = λx is true if and only if Δ−1 Ax = λx, which is the same as
saying that Ax = λΔx.
Second, we note that S[√Δ x] = λ[√Δ x] is true if and only if Δ−1/2 Ax = λ[√Δ x],
which is the same as saying that Ax = λΔx. This completes the proof.
Since the first eigenvector of the Shi-Malik embedding is a column of 1s, it follows that
the first eigenvector of the Ng-Jordan-Weiss embedding is proportional to √Δ [1, 1, . . . , 1]^T,
which is the same as [√δ1, √δ2, . . . , √δn]^T.
Corollary 10.5.1 The first eigenvector of the symmetrically normalized adjacency matrix
S = Δ−1/2 AΔ−1/2 is proportional to an n-dimensional vector containing the square-roots
of the weighted vertex degrees.
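Both Corollary 10.5.1 and the eigenvalue correspondence between P and S can be verified numerically (the small graph below is hypothetical):

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)

S = A / np.sqrt(np.outer(d, d))   # symmetric normalization
P = A / d[:, None]                # stochastic transition matrix

s_vals, s_vecs = np.linalg.eigh(S)
top = s_vecs[:, -1]               # eigenvector for the largest eigenvalue of S

# The top eigenvalue is 1, and its eigenvector is proportional to sqrt(degrees),
# so the ratio below is a constant vector (up to sign).
ratio = top / np.sqrt(d)
```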
An important point is that the Ng-Jordan-Weiss embedding normalizes the rows before ap-
plying k-means, whereas the Shi-Malik approach does not normalize the rows before apply-
ing k-means. In other words, the former is row-normalized before applying k-means, whereas
the latter is column-normalized before applying k-means. This implies the following unified
extraction of both embeddings using the symmetric kernel matrix S = Δ−1/2 AΔ−1/2 :
Lemma 10.5.2 Let R be the n × k matrix containing the top-k (unit normalized) eigenvec-
tors of the symmetric matrix S = Δ−1/2 AΔ−1/2 in its columns. Then, both the Shi-Malik
and Ng-Jordan-Weiss embeddings can be obtained from R using the following postprocessing
steps:
• The Ng-Jordan-Weiss embedding is obtained by normalizing each row of the n × k
matrix R to unit norm.
• The Shi-Malik embedding is obtained by normalizing each column of the n × k matrix
Δ−1/2 R to unit norm.
This observation shows that both methods are very similar, and differ only in terms of the
minor post-processing steps of scaling/normalizing the rows/columns of the same matrix.
L = Δ − A          (10.4)
The Laplacian of an undirected graph is symmetric because both Δ and A are symmetric.
The Laplacian is always a singular matrix because the sum of each row is 0. Therefore, one
of the eigenvalues of the Laplacian is 0.
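These properties are straightforward to confirm on a small hypothetical graph:

```python
import numpy as np

# Laplacian of the undirected path graph 0-1-2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A     # L = Delta - A

vals = np.linalg.eigvalsh(L)       # ascending, all real since L is symmetric
```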
The asymmetrically normalized Laplacian La applies a one-sided normalization to the
matrix L introduced above:

La = Δ−1 L = Δ−1 (Δ − A) = I − P

Here, P is the stochastic transition matrix P = Δ−1 A, which was introduced slightly earlier.
Therefore, the asymmetrically normalized Laplacian is closely connected to the stochastic
transition matrix P . This Laplacian is not symmetric, because P is not symmetric. The
symmetrically normalized Laplacian Ls is defined in a similar way, except that two-sided
normalization is applied to L:

Ls = Δ−1/2 LΔ−1/2 = I − Δ−1/2 AΔ−1/2 = I − S

Therefore, the symmetrically normalized Laplacian is closely related to the symmetric
similarity matrix S.
The graph Laplacian has some interesting interpretations in terms of embedding ver-
tices in multidimensional space. In the simplest case, consider a setting in which each
vertex i is mapped to the real number xi. Therefore, one can create a normalized vector
x = [x1 . . . xn ]T , so that xT x = 1. First, let us examine the unnormalized Laplacian L of
Equation 10.4. One can show the following result:
xT Lx = (1/2) ∑_{i=1}^{n} ∑_{j=1}^{n} aij (xi − xj)²
Proof: In order to show this result, we can expand the expressions on both sides of the
equation in the statement of the lemma and examine the coefficient of each term on both
sides. For i ≠ j, it can be shown that the coefficient of any term of the form xi xj is
−(aij + aji) on both sides. On the other hand, for any i, it can be shown that the coefficient
of xi² is (1/2) ∑_{j=1}^{n} (aij + aji).
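The identity can also be confirmed numerically for a random symmetric weighted graph (a sanity check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(7)

# Random symmetric weighted adjacency matrix with zero diagonal.
upper = np.triu(rng.random((5, 5)), 1)
A = upper + upper.T
L = np.diag(A.sum(axis=1)) - A

x = rng.standard_normal(5)
quad = x @ L @ x
pairwise = 0.5 * sum(A[i, j] * (x[i] - x[j]) ** 2
                     for i in range(5) for j in range(5))
```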
There are several observations that one can make from the above result. These observations
are used in various ways in machine learning applications on graphs:
2. The fact that the unnormalized graph Laplacian is positive semidefinite can be eas-
ily extended to the symmetrically normalized graph Laplacian (see Problem 10.5.1).
Even though the notion of positive semidefiniteness is defined only for symmetric
matrices, it can also be shown that the asymmetrically normalized graph Laplacian has
nonnegative eigenvalues and satisfies xT La x ≥ 0 for all x. In other words, it satisfies
an extended notion of positive semidefiniteness for asymmetric graphs. Each type of
Laplacian always has one of its eigenvalues as 0, because Laplacians are singular
matrices with a null space of dimension at least 1. The null space has dimension exactly 1
if the graph is connected. Since Laplacians always have nonnegative eigenvalues, it follows
that 0 is the smallest eigenvalue, which is unique for connected graphs.
The following result can be shown relatively easily by using Lemma 10.5.3 and some prop-
erties of determinants.
Problem 10.5.1 Let L, La = Δ−1 L, and Ls = Δ−1/2 LΔ−1/2 be the unnormalized, asym-
metrically normalized, and symmetrically normalized Laplacians of an undirected graph with
nonnegative weights on edges. Show the following: (i) The value of xT Ls x is always non-
negative for any x; (ii) the eigenvalues of Ls and La must be the same; and (iii) The value
of xT La x is always nonnegative.
2. In the event that the symmetric Laplacian Ls was chosen in the first step, perform
the additional step of normalizing each row of Qk to unit norm.
3. Apply a k-means algorithm to the embedding representations of the different vertices.
An important point is that we use the large eigenvectors of similarity matrices, whereas we
use the small eigenvectors of the Laplacian. This is not surprising because of the relationship
between the two. For example, the symmetrically normalized adjacency matrix S is related
to the symmetric Laplacian Ls as Ls = I−S. As a result, the eigenvectors of the two matrices
are identical and the corresponding eigenvalues sum to 1. We summarize these results in
terms of the equivalence of the kernel view and the Laplacian view of spectral clustering.
Proof: We show the result in the case of the symmetric Laplacian Ls. The proof for the
asymmetric Laplacian is similar. The eigenvector of any matrix S with eigenvalue λ is also
an eigenvector of the matrix (I − S) with eigenvalue 1 − λ. This is because Sx = λx is true
if and only if (I − S)x = (1 − λ)x is true. Therefore, large eigenvectors of S correspond to
small eigenvectors of Ls = (I − S).
It is noteworthy that the small eigenvectors of the unnormalized Laplacian L are not exactly
the same as the large eigenvectors of the adjacency matrix A. In any case, the use of
unnormalized variants is relatively uncommon in most practical applications.
xT Lx = (1/2) ∑_{i=1}^{n} ∑_{j=1}^{n} aij (xi − xj)²
Here, x is an n-dimensional vector, which contains one coordinate for each vertex. Note
that one can easily extend this result to a k-dimensional embedding of each vertex by using
k vectors x1 . . . xk . In this case, the n-dimensional vector xi contains the ith coordinate
of the n different vertices. In such a case, the sum over the k different values of xi^T L xi
provides the weighted sum-of-square Euclidean distances for the embedding. Therefore, a
clustering-friendly embedding may be defined by the following optimization model:
Minimize ∑_{i=1}^{k} xi^T L xi
subject to:
‖xi‖² = 1   ∀i ∈ {1 . . . k}
x1 . . . xk are mutually orthogonal
This optimization model tries to find a k-dimensional embedding of each vertex, so that
the weighted sum-of-square Euclidean distances is minimized. The above model is a special
case of the norm-constrained optimization problem discussed in Section 3.4.5 of Chapter 3.
As discussed in Section 3.4.5, the smallest k eigenvectors of L provide a solution to this
optimization problem. Note that one can use exactly the same model for the symmetrically
normalized Laplacian by replacing L with Ls in the above model. The case of the asymmetric
Laplacian is slightly different, because the problem boils down to the following:
Minimize ∑_{i=1}^{k} xi^T L xi
subject to:
x1 . . . xk are Δ-orthonormal
The only difference from the original optimization problem is that the vectors are Δ-
orthonormal. The notion of Δ-orthonormality is defined as follows:
xi^T Δ xj = { 1   i = j
            { 0   i ≠ j
Problem 10.5.2 Show that the optimum solution to the optimization model of asymmetric
spectral clustering corresponds to the smallest eigenvector of La = Δ−1 L.
The key hint in the above problem is to use a variable transformation, wherein each
xi = ∑_{j=1}^{n} βij pj is expressed as a linear combination of the basis system of unit eigenvectors
of Δ−1 L. Subsequently, one has to solve for the coefficients βij, while transforming the
objective function and the constraints in terms of these coefficient variables. Show that
the eigenvectors p1 . . . pn are Δ-orthogonal, and use this fact to simplify the objective function
in terms of the different values of βij. It can be shown that all values of βij will be either
1/√Δ or 0.
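The substitution suggested by the hint can be carried out concretely: with y = Δ^{1/2} x, the Δ-orthonormal problem becomes a standard eigenproblem for the symmetric normalized Laplacian, and mapping the eigenvectors back through Δ^{−1/2} yields Δ-orthonormal eigenvectors of La = Δ^{−1} L. A sketch on a hypothetical path graph:

```python
import numpy as np

# Undirected path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L = np.diag(d) - A

Ls = L / np.sqrt(np.outer(d, d))       # symmetric normalized Laplacian
vals, Y = np.linalg.eigh(Ls)           # ascending eigenvalues
X = Y / np.sqrt(d)[:, None]            # map back: x = Delta^{-1/2} y

La = L / d[:, None]                    # asymmetric normalized Laplacian
```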
In other words, both factorizations can be expressed in the form U V^T. We can also
approximately factorize either S or P into U V^T using the gradient-descent methods of Chapter 8.
Note that both U and V are n × k matrices, where k is the rank of the factorization. Sub-
sequently, the k-dimensional rows of U and V can be concatenated in order to create the
2k-dimensional embedded representations of each vertex. The result will not be exactly the
same as that of spectral clustering, but this generalized approach will often provide similar
results to spectral clustering. This generalized view is useful in cases where the direct appli-
cation of spectral clustering is not possible. For example, in the case of directed graphs, the
adjacency matrix may not be diagonalizable with real-valued eigenvectors and eigenvalues.
In such cases, we can use generalized forms of the factorization on the directed graph.
Let A be an adjacency matrix of a directed graph. As in the case of undirected adjacency
matrices, weak links are removed by keeping only those edges that are both among the
top-k incoming edges and the top-k outgoing edges of the two vertices at their end points.
Note that the matrix A will not be symmetric either before or after removal of the weak
links (since the graph is directed to begin with). Let δi^in be the weighted indegree of the ith
vertex, which is obtained by adding the (possibly non-binary) elements of the ith column
of A. Similarly, let δi^out be the weighted outdegree of the ith vertex, which is obtained by
summing the values in the ith row of A. One can create corresponding n × n diagonal matrices
denoted by Δin and Δout, whose diagonal elements are the δi^in and δi^out, respectively. Then,
each edge (i, j) is normalized using the geometric mean of outdegree at i and the indegree
at j:
aij ⇐ aij / √(δi^out δj^in)
One can also write this relationship in matrix form with the use of the normalized matrix N :
N = Δout^{−1/2} A Δin^{−1/2}
Once the matrix N has been computed, we can factorize it as N ≈ U V^T using any of the
methods discussed in Chapter 8. Here, U and V are n × k matrices, where k is the rank of
the factorization. The ith row of U provides the outgoing factor of the ith vertex (or sender
factor), and is related to some of the centrality measures discussed in the next section. The
ith row of V provides the incoming factor of the ith vertex (or receiver factor). One can
concatenate these representations to create a 2k-dimensional representation of each vertex.
Subsequently, this representation can be used for clustering.
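A concrete sketch of this pipeline, using a truncated SVD as one possible choice of U V^T factorization (the example graph is hypothetical):

```python
import numpy as np

# Directed graph with no isolated vertices (all in/out degrees positive).
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

d_out = A.sum(axis=1)
d_in = A.sum(axis=0)
N = A / np.sqrt(np.outer(d_out, d_in))   # N = Dout^{-1/2} A Din^{-1/2}

k = 2
Uf, s, Vt = np.linalg.svd(N)
U = Uf[:, :k] * np.sqrt(s[:k])           # outgoing (sender) factors
V = Vt[:k, :].T * np.sqrt(s[:k])         # incoming (receiver) factors

embedding = np.hstack([U, V])            # 2k-dimensional vertex representation
```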
1. The kernel view with adjacency matrices: The normalized variations of spectral
clustering can be viewed as special cases of the similarity-based clustering methods
with explicit feature engineering (cf. Section 9.5.1.1 of Chapter 9). The main difference
between spectral clustering and this family of methods is only in terms of how the
matrix is preprocessed to remove weak links. The main advantage of the kernel view
is that it provides a unified view with all the other kernel methods we have seen so
far in Chapter 9. A spectral method is simply an equal citizen of the vast family of
kernel methods such as kernel k-means – nothing more and nothing less. The main
distinguishing characteristic is the heuristic sparsening/normalization of the similarity
matrix and the normalization (whitening) of engineered features.
2. The Laplacian view: The use of the Laplacian is the dominant treatment in popular
expositions of spectral clustering [84]. Because of the different nature of this treatment
as compared to kernel or factorization methods, spectral clustering is often viewed in
a very different light than other related embedding methods. The Laplacian view is
often interpreted as an elegant discrete optimization problem of finding minimum cuts
in a normalized graph [115]. However, the actual optimization problem of spectral
clustering is only a continuous approximation of this problem, which could be an
arbitrarily poor approximation of the discrete version.
3. The matrix factorization view and its extensions: Asymmetric forms of spectral
clustering can be viewed in the context of matrix factorization. Most importantly, this
point of view provides a generalization of the approach to clustering directed graphs,
which is not possible with the vanilla version of spectral clustering.
However, there are several differences from the exposition in the previous section. First,
we will examine the characteristics of only the principal (i.e., largest) eigenvector, which
has an eigenvalue of 1 according to the Perron-Frobenius theorem. Second, since we will
be examining applications associated with the graph structure of the Web (rather than
clustering), the focus will be on directed graphs rather than undirected graphs. After all,
Web page linkage structure is not symmetric. Finally, the clustering application discussed
in the previous section always modifies the adjacency matrix in order to remove weak
links. This modification is not made for the applications discussed in this section. Rather, a
different type of modification is used, which focuses on making the directed graph strongly
connected.
In many applications such as the Web, one is looking for vertices with a high level of
prestige. Intuitively, a Web page has a high level of prestige if many Web pages point to it.
However, simply using the number of Web pages pointing to a page might be a deceptive
indicator of its prestige, because the pages pointing to it might be of low quality themselves.
Therefore, one typically wants to discover pages that are pointed to by other high prestige
pages. One can model this type of recursive relationship by using the notion of random
walks in the graph.
Imagine a directed graph with adjacency matrix A, weighted degree matrix Δ, and
stochastic transition matrix P = Δ−1 A. One can interpret each entry pij in P as the
probability of a transition of a random surfer from Web page i to Web page j, under the
assumption that the surfer selects the vertices outgoing from i using the vector of transi-
tion probabilities [pi1 , pi2 , . . . pin ]. Note that these transition probabilities sum to 1. The
PageRank-based prestige of a vertex (Web page) is defined as the steady-state probability
of a random surfer visiting that vertex (Web page). Interestingly, the process that we just
described is precisely the transition process of a Markov chain. A Markov chain consists of
a set of states (vertices) along with a set of transitions (edges with probabilities) among
them. Therefore, a Markov chain is perfectly described by the graph structures introduced
in this section. Furthermore, a Markov chain has a well-defined procedure for finding steady
state probabilities in terms of eigenvectors of its transition matrix.
A natural question is under what conditions a Markov chain has steady-state
probabilities that are independent of the starting vertex of the random surfer. Such
Markov chains are referred to as ergodic, and they must satisfy the following condition:
Definition 10.6.1 (Ergodic Markov Chain) A Markov chain is defined as ergodic, if
the graph corresponding to its transition matrix is strongly connected.
Graphs that are not strongly connected might have steady-state probabilities for a walk
that depend on the starting point of the walk. For example, consider the directed graph
in Figure 10.5(b), which is not strongly connected. In this case, starting a random walk at
vertex 1 will result in steady-state probabilities that are distributed only among the vertices
in the component A1. However, starting the walk at vertex 6 could lead to either of the
components A1 or A2.
An immediate problem that arises because of this requirement is that many real applica-
tions do not result in directed graphs that are strongly connected. For example, the Web is
certainly not a directed graph that is strongly connected. When you set up a new Website,
the chances are that you might point to many other Websites, but no one might know of
your Website or point to it. This can be a problem in terms of computing steady-state
probabilities.
This problem is solved with the use of restart probabilities. In each step, the random
surfer is allowed to reset to a completely random vertex in the network with probability
10.6. THE LEFT EIGENVECTORS OF GRAPH MATRICES 433
α < 1 and continue with the random walk with probability (1 − α). The value of α is a
hyper-parameter that is chosen in an application-specific manner. The transition matrix P
is obtained from the old transition matrix Po as follows:
P ⇐ (1 − α)Po + αM/n
Here, M is an n × n matrix containing 1s. The matrix M/n is a transition matrix in which
one can move from any vertex to another with probability 1/n. Therefore, this is a strongly-
connected restart matrix. The final transition matrix P is a weighted combination of the
original transition matrix and the restart matrix. Henceforth, we will always assume that
the transition matrix is strongly connected, and the restart matrix has been incorporated
as a preprocessing step.
The steady-state probability vector π = [π1 , . . . , πn ]T satisfies the following condition:

πi = Σj=1..n πj pji   ∀i ∈ {1 . . . n}

In matrix form, this condition is π T P = π T , and the vector π can be computed by repeatedly applying the power-iteration update π T ⇐ π T P until convergence.
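The steady-state computation described above can be sketched numerically. The following minimal example (the small transition matrix is hypothetical, chosen only for illustration) applies the power-iteration update until convergence:

```python
import numpy as np

# Hypothetical stochastic transition matrix of a small strongly
# connected graph (each row sums to 1).
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [0.5, 0.5, 0.0]])

pi = np.ones(3) / 3                 # start from the uniform distribution
for _ in range(1000):
    pi_next = pi @ P                # power-iteration update pi^T <= pi^T P
    if np.max(np.abs(pi_next - pi)) < 1e-12:
        pi = pi_next
        break
    pi = pi_next
# At convergence, pi satisfies pi^T P = pi^T and its entries sum to 1.
```

The fixed point of this update is precisely the dominant left eigenvector of P with eigenvalue 1, normalized to a probability distribution.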
Problem 10.6.1 (Efficient Computation) One problem with the use of restart is that
it makes the transition matrix P dense, even when the underlying graph is sparse. Discuss
how you can treat the restart component of P more carefully, so as to be able to compute π
using only sparse matrix operations.
A hint for solving the above problem is that the final transition matrix can be represented
in terms of the original transition matrix Po as (1 − α)Po + α 1n 1nT /n. Here, 1n is an n-
dimensional column vector of 1s.
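One way to exploit this hint is the following sketch (the small matrix is a hypothetical example): since π always sums to 1, the dense restart term π T (α 1n 1nT /n) reduces to the constant vector (α/n) 1nT, so each iteration needs only a sparse matrix-vector product.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical sparse original transition matrix Po (rows sum to 1).
Po = csr_matrix(np.array([[0.0, 1.0, 0.0],
                          [0.5, 0.0, 0.5],
                          [1.0, 0.0, 0.0]]))
n, alpha = 3, 0.15

pi = np.ones(n) / n
for _ in range(1000):
    # Since pi sums to 1, pi^T (alpha 1n 1n^T / n) = (alpha / n) 1n^T,
    # so the dense restart term collapses to a constant vector and only
    # the sparse product with Po remains. Po.T @ pi computes pi^T Po.
    pi_next = (1 - alpha) * (Po.T @ pi) + alpha / n
    if np.max(np.abs(pi_next - pi)) < 1e-12:
        pi = pi_next
        break
    pi = pi_next
```

The dense matrix (1 − α)Po + α 1n 1nT /n is never materialized; the cost per iteration is proportional to the number of edges.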
434 CHAPTER 10. THE LINEAR ALGEBRA OF GRAPHS
According to the Perron-Frobenius theorem, the largest eigenvalue of the adjacency matrix
lies between the average degree and the maximum degree of the graph. Like PageRank,
eigenvector centrality can be computed using the power method. For directed graphs, one
can also compute the notion of eigenvector prestige.
As in the case of PageRank, the largest eigenvector may not be unique if the graph is not
strongly connected. Therefore, the matrix A may need to be averaged with a restart matrix
M/n in order to make it strongly connected. Here, M is a matrix of 1s. As in PageRank,
the averaging is done in a weighted way with smoothing parameter α.
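A minimal sketch of eigenvector centrality via the power method follows; the adjacency matrix is a hypothetical example, and the restart averaging is omitted because this small undirected graph is already connected:

```python
import numpy as np

# Hypothetical adjacency matrix of a small connected undirected graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

x = np.ones(4)
for _ in range(500):                  # power method on A
    x = A @ x
    x = x / np.linalg.norm(x)         # rescale to avoid overflow

eigenvalue = x @ A @ x                # Rayleigh quotient at convergence
# By the Perron-Frobenius theorem, x is non-negative and the eigenvalue
# lies between the average degree (2) and the maximum degree (3).
```

The entries of x are the eigenvector centralities of the vertices; the vertex of degree 3 receives the largest value.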
Finally, it is interesting to explore the left eigenvectors of symmetrically normalized
matrices. Let A be the adjacency matrix of an undirected network, and Δ be its weighted
degree matrix in which the ith diagonal entry is the degree δi . Then, the symmetrically
normalized matrix is given by the following:
S = Δ−1/2 AΔ−1/2
Since the matrix S is symmetric, its left and right eigenvectors are the same. The principal
eigenvector of this matrix is proportional to the square-roots of the vertex degrees. This result is
given in Corollary 10.5.1. The degree of a vertex is its degree centrality.
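This property is easy to verify numerically; the following sketch uses a hypothetical three-vertex star graph:

```python
import numpy as np

# Hypothetical undirected star graph: vertex 0 joined to vertices 1, 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
deg = A.sum(axis=1)                            # weighted degrees
S = np.diag(deg ** -0.5) @ A @ np.diag(deg ** -0.5)

w, V = np.linalg.eigh(S)                       # S is symmetric
v = V[:, np.argmax(w)]                         # principal eigenvector
if v[0] < 0:                                   # fix the arbitrary sign
    v = -v

# v is proportional to the square roots of the vertex degrees.
target = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))
```

The largest eigenvalue of S equals 1, and the principal eigenvector matches the normalized vector of square-root degrees.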
Since degree centrality can be computed trivially from the adjacency matrix, it does not
make sense to use eigenvector computation. Nevertheless, it is interesting that all eigenvec-
tors relate in one way or another to centrality measures. The corresponding notion of degree
prestige is defined by the indegree of each vertex in a directed graph. It is possible to re-
late degree prestige to the eigenvectors of the adjacency matrix after careful normalization,
although it is not practically useful. Importantly, the normalization can be done only with
weighted indegrees (i.e., column sums) of A rather than outdegrees (i.e., row sums). Note
that S will not be symmetric, because A is not symmetric to begin with. Just as directed
graphs associate prestige measures with vertex indegrees, it is also possible to associate gre-
gariousness measures with vertex outdegrees. Intuitively, the gregariousness of a vertex is
defined by the propensity of that vertex to easily reach other vertices via directed paths.
This is a complementary idea to prestige measures, in which vertices with high prestige
are likely to be easily reached by other vertices via directed paths. In other words, the key
difference is in terms of the direction of the edges in the two cases. We leave the modeling
of this problem as a practice exercise.
Problem 10.6.2 (Gregariousness) Each of the three prestige measures (i.e., eigenvec-
tor, PageRank, and degree) has a corresponding gregariousness measure. The difference is
that prestige measures value in-linking vertices, whereas gregariousness measures value out-
linking vertices. For example, the degree gregariousness is the outdegree of a vertex. How
would you define the different gregariousness measures of a directed graph in terms of the
eigenvectors of an appropriately chosen matrix? This matrix should be defined as a function
of the (directed) adjacency matrix A, outdegree matrix Δout , and indegree matrix Δin .
In general, a graph having r connected components will have r square blocks along the
diagonal. Each block Pii along the diagonal is a stochastic transition matrix within that
subset of vertices.
The Perron-Frobenius theorem states that the largest eigenvector of the stochastic tran-
sition matrix of a connected graph is unique, and has an eigenvalue of 1. The uniqueness
of the largest eigenvector of the adjacency matrix no longer holds when the graph is not
connected. For example, in the case of the two-component adjacency matrix, an eigenvector
containing 1s for the first component and 0s for the second component has an eigenvalue of
1. This is because the block P11 is a transition matrix in its own right, and it is the only
part of P that will interact with the non-zero part of the eigenvector. Similarly, one can
create an eigenvector containing 0s for the first component and 1s for the second compo-
nent. This eigenvector also has an eigenvalue of 1. In other words, the dimensionality of the
eigenspace corresponding to an eigenvalue of 1 is equal to the number of connected com-
ponents. The eigenvectors corresponding to different connected components form a basis of
this eigenspace.
Property 10.7.1 The dimensionality of the vector space corresponding to the largest eigen-
vectors (i.e., eigenvectors with eigenvalue 1) of the stochastic transition matrix of an undi-
rected graph is equal to the number of connected components in it.
We further note that one can compute the number of connected components in a graph
using linear algebra.
Property 10.7.2 Let P be the stochastic transition matrix of an undirected graph. Then,
the dimensionality of the null space of (P − I) yields the number of connected components
in the graph.
It is evident from Property 10.7.1 that the eigenspace of the matrix P (for the eigenvalue
of 1) is the null space of (P − I). The null space of a matrix can be easily computed using
SVD.
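The component-counting idea in Property 10.7.2 can be sketched as follows, using a hypothetical graph with two components and counting the near-zero singular values of (P − I):

```python
import numpy as np

# Adjacency matrix of a hypothetical graph with two connected
# components: {0, 1} and {2, 3}.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A     # stochastic transition matrix

# The nullity of (P - I) equals the number of connected components.
# Count the near-zero singular values of (P - I) via SVD.
s = np.linalg.svd(P - np.eye(4), compute_uv=False)
num_components = int(np.sum(s < 1e-10))
```

Here num_components evaluates to 2, matching the two connected components of the graph.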
First, the creation of a stochastic transition matrix from a directed adjacency matrix is not
always possible because some vertices might not have any outgoing edges. In other words,
the corresponding row in the adjacency matrix contains 0s, although its column does contain
non-zero entries. It is impossible to normalize such rows to sum to 1. Therefore, the following
analysis will assume that such dead-end nodes do not exist. When such dead-end nodes do
exist, one can add a self-loop with probability of 1 to facilitate the following analysis. This
type of addition of self-loops is often performed in numerous machine learning algorithms
like PageRank and collective classification (in the presence of dead-end nodes). Another
assumption that we make to facilitate analysis is that the graph is fully connected if the
directions of edges are ignored.
When a graph is not strongly connected, the matrix will be block upper-triangular, in
which the diagonal contains square blocks of strongly connected components with some
additional non-zero entries above these diagonal blocks. These additional entries only allow
unidirectional walks between blocks (sets) of vertices. It is easiest to understand this type
of block structure in terms of random walks. Strongly connected graphs result in ergodic
Markov chains in which the steady-state probabilities do not depend on where the random
walk is started. Furthermore, all vertices have non-zero probability of being reached in
steady state.
Graphs that are not strongly connected do not satisfy these properties. Such graphs
contain vertices of two types:
1. The first type is the transient vertex set in the graph. The transient vertex set is a
maximal set of vertices, such that the direction of the edges between vertices belonging
to the transient set and all other vertices are always outgoing from vertices belonging
to the transient set. Note that the vertices in the transient set may or may not be
connected/strongly connected to one another, and therefore the edge structure on
the subgraph of transient vertices may be arbitrary. These vertices are referred to as
transient because a random walk will never visit these vertices in steady-state. Once
a walk exits this component, it is no longer possible for the walk to return to this
component in steady state. For example, in the case of Figure 10.5(a), vertex 9 is the
only transient vertex. On the other hand, in Figure 10.5(b), the vertices 1, 4, 5, 6, and
7 are transient vertices. The transient vertices are labeled by ‘T’ in these figures. The
reader should take a moment to verify that a random walk starting at any of these
vertices will eventually reach a vertex outside this set so that it becomes impossible to
ever visit any of these vertices again. The transient component is an essential property
of reducible graphs. If a connected graph does not have a transient component, then
it is also strongly connected.
Figure 10.5: Examples of directed graphs that are not strongly connected. Transient vertices
are labeled as ‘T’ and vertices belonging to absorbing components are labeled as ‘A1’
and ‘A2’
2. In addition to the transient vertex set, the graph contains l vertex-disjoint compo-
nents, which are referred to as absorbing components. Each absorbing component is
strongly connected in terms of the subgraph induced by its vertex set. Furthermore,
an absorbing component only has incoming edges from transient vertices, is not con-
nected to other absorbing components, and has no outgoing edges. Each absorbing
component has a non-zero probability to be visited in steady-state from a random
walk that starts at a random vertex in the network. However, the steady-state proba-
bility depends on the vertex at which the random walk starts. In Figure 10.5(a), there
is a single absorbing component containing all vertices except vertex 9. On the other
hand, Figure 10.5(b) contains two absorbing components, which are labeled by ‘A1’
and ‘A2.’ Note that starting the random walk at vertex 1 will always reach absorbing
component A1, but will never reach absorbing component A2. Starting the walk at
vertex 4 will allow both A1 and A2 to be reached.
Without loss of generality, we assume that the vertices of such a graph are ordered as follows.
The first block of this graph contains all the transient vertices. For this set, the entry aij can
be non-zero for any transient vertex i and any j from 1 to n. All other l absorbing
components are arranged in block diagonal form, as in the case of undirected matrices. For
example, a graph with a transient set and three absorbing components will have the following
block structure of the stochastic transition matrix on appropriate reordering the vertex
indices (to put the transient vertices first and the vertices of the absorbing components
contiguously in succession):
      ⎡ P11  P12  P13  P14 ⎤
P  =  ⎢  0   P22   0    0  ⎥
      ⎢  0    0   P33   0  ⎥
      ⎣  0    0    0   P44 ⎦
It is important to note that the square blocks P22 , P33 , and P44 correspond to the edges
within absorbing components, and are complete stochastic transition matrices. This matrix
has three eigenvectors in the eigenspace belonging to eigenvalue 1. Each of these eigenvectors
can be defined by an absorbing component. It is easier to discuss the left eigenvectors.
First one can compute the principal left eigenvectors of each of the blocks P22 , P33 , and
P44 (corresponding to the absorbing components) separately. Each such eigenvector of the
absorbing submatrix can be used to define a left eigenvector of P by setting the remaining
components of the larger eigenvector to zeros. Therefore, the Markov chains defined by such
reducible graphs have more than one solution to the steady-state equation π T P = π T .
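The multiplicity of the unit eigenvalue can be checked directly on a toy example; the matrix below is a hypothetical reducible chain with one transient vertex and two singleton absorbing components:

```python
import numpy as np

# Transition matrix of a reducible chain: vertex 0 is transient, and
# vertices 1 and 2 are (singleton) absorbing components with self-loops.
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

# The indicator vectors on the absorbing components are both left
# eigenvectors with eigenvalue 1, so the steady-state equation
# pi^T P = pi^T has more than one solution.
pi1 = np.array([0.0, 1.0, 0.0])
pi2 = np.array([0.0, 0.0, 1.0])
assert np.allclose(pi1 @ P, pi1)
assert np.allclose(pi2 @ P, pi2)

# Any convex combination is also a valid steady-state distribution.
mix = 0.3 * pi1 + 0.7 * pi2
assert np.allclose(mix @ P, mix)
```

The transient vertex receives zero probability in every steady-state solution, as expected.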
Although it is possible for directed graphs to be completely disconnected (i.e., discon-
nected irrespective of the direction of edges), this situation rarely arises in machine learning
applications. In this chapter, we ignore this case, because it is not very interesting from an
application-centric point of view. All connected graphs that are not strongly connected
(i.e., reducible graphs) will have at least some transient vertices that have a one-way
connection to strongly connected components. The basic structure of a reducible graph always appears in the form of Figure 10.6,
assuming that the graph is not completely disconnected. Note that the transient vertices
need not be strongly connected (or even connected) when considered as a subgraph. For
example, the transient vertices in Figure 10.5(b) (which are labeled by ‘T’) are not con-
nected to one another, when considered as a subgraph. Each of the satellite components in
Figure 10.6 is an absorbing component. If a random walk is performed on the full graph,
and the random walk happens to enter an absorbing component, the walk will never exit the
component. It is possible for a reducible network to contain only one absorbing component,
10.8. MACHINE LEARNING APPLICATIONS 439
Figure 10.6: If a graph is not strongly connected, it will have at least some transient vertices
with one-way connections to strongly connected components
as long as some of the vertices are transient. In fact, an adjacency matrix for a directed,
connected graph is reducible if and only if transient vertices exist in the graph.
We provide a number of extended properties of these types of reducible matrices. Let
P be the n × n stochastic transition matrix of a connected (but not strongly connected),
directed graph with at least one transient vertex and l absorbing components. We provide
an intuitive interpretation of the principal eigenspace in P :
Given an unlabeled vertex, perform a random walk by using the stochastic tran-
sition matrix of the adjacency matrix until a labeled vertex is reached. Output
the observed label of the destination (labeled) vertex as the predicted class label
of the source (unlabeled) vertex from which the random walk begins.
For better robustness, one can compute the probability that a vertex of each class is reached.
The intuition for this approach is that the walk is more likely to terminate at labeled vertices
in the proximity of the starting vertex i. Therefore, when many vertices of a particular class
are located in its proximity, then the vertex i is more likely to be labeled with that class.
In the particular case of Figure 10.7(a), any random walk starting from test vertex X will
always reach label ‘A’ first rather than label ‘B’ because of the topology of the graph.
However, it is also possible to select test vertices, where there is a non-zero probability of
reaching either the label ‘A’ or the label ‘B.’ For example, if one starts the random walk at
test vertex Y, then a vertex corresponding to either label ‘A’ or label ‘B’ could be reached
first.
An important assumption is that the graph needs to be label connected. In other words,
every unlabeled vertex needs to be able to reach a labeled vertex in the random walk.
For undirected graphs, this means that every connected component of the graph needs to
contain at least one labeled vertex. In the following discussion, it will be assumed that
the entire undirected graph is connected; any undirected graph with only one connected
component will always lead to a modified transition matrix that is label connected.
Since the approach is based on random walks, the first step is to create the (directed)
stochastic transition matrix P from the undirected n × n adjacency matrix A. Let Δ be the
(diagonal) degree matrix that contains the weighted degree δi = Σj aij = Σj aji on the ith
entry of its main diagonal. As in the case of all other applications of this chapter, the adja-
cency matrix is converted into the stochastic transition matrix by using left-normalization:
P = Δ−1 A (10.7)
Figure 10.7 (panels): (c) Original transition matrix (irreducible): unique PageRank vector
(probabilities not shown). (d) Modified transition matrix (reducible): no unique PageRank
vector (probabilities not shown). Vertices are labeled ‘A’ and ‘B’ according to their classes.
Since this transition matrix is derived from an undirected graph, it will always be an irre-
ducible matrix. The corresponding strongly connected graph is illustrated in Figure 10.7(b),
and the probabilities on the various edges are explicitly shown. The same illustration is
shown in Figure 10.7(c), but without the probabilities on the edges (to avoid clutter).
Although we can use this transition matrix to model the random walks in the graph, such
an approach will not provide the first stopping point of the walk. In other words, we need to
model the random walks in such a way that they always terminate at their first arrival at
labeled vertices. This can be achieved by removing outgoing edges from labeled vertices and
replacing them with self-loops. This results in a singleton absorbing component containing
only one vertex, which we refer to as an absorbing vertex. Such vertices are referred to as
absorbing vertices because they trap the random walk after an incoming transition. The
goal of creating such absorbing vertices is to ensure that a random walk is trapped by the
first labeled node it reaches.
The stochastic transition matrix P needs to be modified to account for the effect of
absorbing vertices. For each absorbing vertex i, the ith row of P is replaced with the ith
row of the identity matrix. Henceforth, we will assume that the matrix denoted by notation
P incorporates this modification (and is therefore not exactly equal to Δ−1 A according to
Equation 10.7). An example of the final transition graph is illustrated in Figure 10.7(d). The
resulting matrix is no longer irreducible because the resulting graph is no longer strongly
connected; no other vertex can be reached from an absorbing vertex. Note that this graph
has exactly the structure of Figure 10.6 because each of the absorbing components is a
singleton (labeled) vertex, and all unlabeled vertices are transient. Since the transition
matrix P is reducible, it does not have a unique eigenvector with eigenvalue 1. Rather, it
has as many principal eigenvectors with eigenvalue 1 as the number of absorbing vertices.
For any given starting vertex i, the steady-state probability distribution has positive
values only at labeled vertices. This is because a random walk will eventually reach an
absorbing vertex in a label-connected graph, and it will never emerge from that vertex.
Therefore, if one can estimate the steady-state probability distribution of labeled nodes for
a starting unlabeled vertex i, then the probability values of the labeled vertices in each class
can be aggregated. The class with the highest probability is reported as the relevant label
of the unlabeled vertex i.
Note that the (i, j)th entry of P r yields the probability that a random walk of length r
starting at vertex i terminates at vertex j. Because of the self-loops at absorbing
vertices, all walks of length less than r are automatically included in the probability (as the
remaining steps can be completed inside the self-loop). Therefore, P ∞ is the steady-state
matrix of probabilities, in which the (i, j)th entry of P ∞ provides the probability that a
walk starting at vertex i terminates at vertex j. For each row, we would like to aggregate
the probabilities of the labeled vertices belonging to each class. As we will see below, this
can be achieved with a simple matrix multiplication.
Let Y be an n × k matrix in which the (i, c)th entry is 1, if the ith vertex is labeled and
it belongs to the class c ∈ {1, . . . , k}. Then, the aggregation of the probabilities of labeled
vertices in each row of P ∞ is given by the matrix P ∞ Y . In other words, we can obtain an
n × k matrix Z of probabilities of the classes for the various vertices as follows:
Z = P ∞Y (10.8)
The class with the maximum probability in Z for unlabeled vertex (row) i may be reported
as its class label. This approach is also referred to as the rendezvous approach to label
propagation [9]. How is P ∞ computed? One possibility is to keep multiplying P with itself
in order to compute P ∞ . One can use eigendecomposition tricks to speed up the process.
However, we do not want to work with large matrices (like P ∞ ) of size n × n. Therefore, a
more efficient (but equivalent) approach is iterative label propagation [143].
In iterative label propagation, we initialize Z (0) = Y and then repeatedly use the following
update for increasing values of the iteration index t:

Z (t+1) ⇐ P Z (t)
It is easy to see that Z (∞) is the same as the value of Z in Equation 10.8. Furthermore, each
column of Z is a principal right eigenvector of P at convergence. Each such eigenvector
corresponds to a label, which may be associated with multiple absorbing components rather
than a single one. Furthermore, the absorbing components are singleton vertices in this case.
The following exercise generalizes this idea to finding principal eigenvectors of absorbing
components with no special structure.
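The iterative scheme above can be sketched end-to-end on a small example; the four-vertex path graph and its labels below are hypothetical:

```python
import numpy as np

# Path graph 0 - 1 - 2 - 3; vertices 0 and 3 are labeled with classes
# 0 and 1 respectively, while vertices 1 and 2 are unlabeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = np.diag(1.0 / A.sum(axis=1)) @ A          # stochastic transition matrix

# Make labeled vertices absorbing: replace their rows of P with the
# corresponding rows of the identity matrix.
for i in (0, 3):
    P[i, :] = 0.0
    P[i, i] = 1.0

Y = np.array([[1, 0],                          # one-hot labels; unlabeled
              [0, 0],                          # rows are all-zero
              [0, 0],
              [0, 1]], dtype=float)

Z = Y.copy()
for _ in range(2000):                          # iterative label propagation
    Z = P @ Z                                  # Z^(t+1) <= P Z^(t)

labels = np.argmax(Z, axis=1)                  # predicted class per vertex
```

Each row of Z converges to the absorption probabilities of the two labeled classes: the unlabeled vertex adjacent to the class-0 vertex absorbs at class 0 with probability 2/3, and symmetrically for the other side.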
Problem 10.8.1 (Right Principal Eigenvectors) Consider a directed graph with l ab-
sorbing components, which are already demarcated. Discuss how you can use the ideas in
iterative label propagation to create a basis of l principal right eigenvectors satisfying the
following property. The ith component of the jth principal eigenvector should be equal to the
probability that a walk starting at node i ends in absorbing component j.
K(X i , X j ) = exp(−‖X i − X j ‖2 /(2 · σ 2 ))   (10.10)
10.10. FURTHER READING 443
Here, σ is the bandwidth of the Gaussian kernel (cf. Table 9.1 of Chapter 9). For supervised
applications, the value of σ is often tuned using out-of-sample data. For unsupervised ap-
plications, the value of σ is chosen to be of the order of the median of all pairwise Euclidean
distances between points.
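A sketch of this kernel construction with the median-distance heuristic, using hypothetical randomly generated points:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))          # 20 hypothetical points in 3-D

# Pairwise Euclidean distances between all points.
diffs = X[:, None, :] - X[None, :, :]
D = np.sqrt((diffs ** 2).sum(axis=-1))

# Median heuristic: sigma of the order of the median pairwise distance.
sigma = np.median(D[np.triu_indices(20, k=1)])

# Gaussian kernel similarities of Equation 10.10.
K = np.exp(-D ** 2 / (2 * sigma ** 2))
```

The resulting matrix K is symmetric with unit diagonal, and its entries can be used as edge weights of a similarity graph (e.g., a mutual k-nearest neighbor graph).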
A graph is constructed, where the ith vertex corresponds to the data point X i . The
similarity value can be used to compute the mutual k-nearest neighbor graph, as discussed
in Section 10.5. The weight of an edge is set to the kernel similarity value introduced in
Equation 10.10. Subsequently any of the applications such as clustering or classification
(discussed in this chapter) can be used on this graph. In the case of classification, the
graph is constructed on both the labeled and the unlabeled vertices. Therefore, unlike
traditional classification, the classification benefits from unlabeled samples in the data. This
type of approach can be helpful when the number of labeled instances is small. The use of
unlabeled instances for better classification is referred to as semi-supervised classification.
Furthermore, one can use this approach not only for multidimensional data, but for any
type of data where a similarity function can be computed between the objects. After all,
the various forms of graph embeddings (e.g., spectral embeddings) can be viewed as special
cases of kernel methods. Like any kernel method, a similarity graph and its embeddings can
be used for any data type.
10.9 Summary
This chapter discusses the linear algebra of graphs. Many important structural properties of
graphs such as the walks between vertices and connectivity can be inferred from the linear
algebra of the adjacency matrix. The fundamental result underpinning the primary results
on spectral analysis of graphs is the Perron-Frobenius theorem. A particular type of matrix
associated with graphs is the stochastic transition matrix, whose principal eigenvector has
an eigenvalue of 1. The dominant left eigenvector and the dominant right eigenvector(s)
of the stochastic transition matrix have different types of applications. The dominant left
eigenvector is used for ranking, whereas the right eigenvectors are used for clustering. One
can also use a symmetrically normalized adjacency matrix for clustering, which is roughly
equivalent to the use of kernel methods. Different forms of normalization of the adjacency
matrix help in extracting different centrality and prestige measures. One can also compute
the eigenvectors of modified versions of stochastic transition matrices in order to perform
collective classification of graphs. All graph-based machine learning applications can be
generalized to arbitrary data types with the construction of suitable similarity graphs over
the underlying objects.
10.11 Exercises
1. Consider the n × n adjacency matrices A1 and A2 of two graphs. Suppose that the
graphs are known to be isomorphic. Two graphs are said to be isomorphic, if one graph
can be obtained from the other by reordering its vertices. Show that isomorphic graphs
have the same eigenvalues. [Hint: What is the nature of the relationship between their
adjacency matrices in algebraic form? You may introduce any new matrices as needed.]
2. Suppose that you were given the eigenvectors and eigenvalues of the stochastic tran-
sition matrix P of an undirected graph. Discuss how you can quickly compute P ∞
using these eigenvectors and eigenvalues.
3. Let Δ be the weighted degree matrix of an n × n (undirected) adjacency matrix
A, and e1 . . . en be the n eigenvectors of the stochastic transition matrix P = Δ−1 A.
Show that any pair of eigenvectors ei and ej are Δ-orthogonal. In other words, any
pair of eigenvectors ei and ej must satisfy the following:
eiT Δ ej = 0
4. Show that all eigenvectors (other than the first eigenvector) of the stochastic tran-
sition matrix of a connected, undirected graph will have both positive and negative
components.
5. Consider the adjacency matrix A of an n × n undirected graph, which is also bipartite.
In a bipartite graph, the n vertices can be divided into two vertex sets V1 and V2 of
respectively n1 and n2 vertices, so that all edges occur between vertices of V1 and
vertices of V2 . The adjacency matrix of such a graph always has the following form
for an n1 × n2 matrix B:
      ⎡ 0   B ⎤
A  =  ⎣ BT  0 ⎦
Even though A is symmetric, B might not be symmetric. Given the eigenvectors and
eigenvalues of A, show how you can perform the SVD of B quickly (and vice versa).
6. A complete directed graph is defined on n vertices and it contains all n(n − 1) possible
edges in both directions between each pair of vertices (other than self-loops). Each
edge weight is 1.
(a) Give a short reason why all eigenvalues must be real.
(b) Give a short reason why the eigenvalues must sum to 0.
(c) Show that this graph has one eigenvalue of (n − 1), and the remaining (n − 1)
eigenvalues are −1. [Express the adjacency matrix as 1 1T − I.]
7. A complete bipartite graph (see Exercise 5) is defined on 4 vertices, where 2 vertices
are contained in each partition. An edge of weight 1 exists in both directions between
each pair of vertices drawn from the two partitions. Find the eigenvalues of this graph.
Can you generalize this result to the case of a complete bipartite graph containing 2n
vertices, where n vertices are contained in each partition?
8. Suppose you create a symmetrically normalized adjacency matrix S = Δ−1/2 AΔ−1/2
for an undirected adjacency matrix A. You decide that some vertices are “important”
and they should get relative weight γ > 1 in an embedding that is similar to that in
spectral clustering, whereas other vertices only get a weight of 1.
10.11. EXERCISES 445
17. Consider two n × n symmetric matrices A and B, such that B is also positive definite.
Show that BA need not be symmetric, but it is diagonalizable with real eigenvalues.
[Hint: This is a generalization of the proof that stochastic transition matrices have
real eigenvalues by setting B to the inverse degree matrix and A to the adjacency
matrix.]
18. Suppose that A is the 20×20 binary adjacency matrix of a directed graph of 20 nodes.
Interpret the matrix (I −A20 )(I −A)−1 in terms of walks in the graph. Will this matrix
have any special properties for a strongly connected graph? Argue algebraically why
the following is true:
19. Exercise 13 of the previous chapter introduces symmetric non-negative matrix factor-
ization, which can also be used to factorize the symmetrically normalized adjacency
matrix S ≈ U U T , which is used in spectral clustering. Here, U is an n×k non-negative
factor matrix. Discuss why the top-r components of each column of U (in magnitude)
directly provide clustered bags of nodes of size r in the graph.
20. Find the PageRank of each node in (i) an undirected cycle of n nodes, and (ii) a single
central node connected with an undirected edge to each of (n − 1) nodes. In each case,
compute the PageRank at a restart probability of 0.
21. Signed network embedding: Suppose that you have a graph with both positive and
negative weights on edges. Propose modifications of the algorithms used to remove
“weak edges” and to symmetrically normalize the graph for spectral clustering. Will
the resulting graph be diagonalizable with orthogonal eigenvectors and real eigen-
values? Is there anything special about the first eigenvector? [This is an open-ended
question with multiple solutions.]
22. Heterogeneous network embedding: Consider a social network graph with di-
rected/undirected edges of multiple types (e.g., undirected friendship links, directed
messaging links, and directed “like” links). Propose a shared matrix factorization al-
gorithm (cf. Chapter 8) to extract an embedding of each node. How would you tune
the parameters? [This is an open-ended question with multiple solutions.]
Chapter 11
Optimization in Computational Graphs
“Science is the differential calculus of the mind. Art the integral calculus; they
may be beautiful when apart, but are greatest only when combined.”– Ronald
Ross
11.1 Introduction
A computational graph is a network of connected nodes, in which each node is a unit of
computation and stores a variable. Each edge joining two nodes indicates a relationship
between the corresponding variables. The graph may be either directed or undirected. In a
directed graph, a node computes its associated variable as a function of the variables in the
nodes that have edges incoming to it. In an undirected graph, the functional relationship
works in both directions. Most practical computational graphs (e.g., conventional neural
networks) are directed acyclic graphs, although many undirected probabilistic models in
machine learning can be implicitly considered computational graphs with cycles. Similarly,
the variables at the nodes might be continuous, discrete, or probabilistic, although most
real-world computational graphs work with continuous variables.
In many machine learning problems, parameters may be associated with the edges, which
are used as additional arguments to the functions computed at nodes connected to these
edges. These parameters are learned in a data-driven manner so that variables in the nodes
mirror relationships among attribute values in data instances. Each data instance contains
both input and target attributes. The variables in a subset of the input nodes are fixed
to input attribute values in data instances, whereas the variables in all other nodes are
computed using the node-specific functions. The variables in some of the computed nodes
are compared to observed target values in data instances, and edge-specific parameters are
modified to match the observed and computed values as closely as possible. By learning the
parameters along the edges in a data-driven manner, one can learn a function relating the
input and target attributes in the data.
In this chapter, we will primarily focus on directed acyclic graphs with continuous, de-
terministic variables. A feed-forward neural network is an important special case of this type
of computational graph. The inputs often correspond to the features in each data point,
whereas the output nodes might correspond to the target variables (e.g., class variable or
regressand). The optimization problem is defined over the edge parameters so that the pre-
dicted variables match the observed values in the corresponding nodes as closely as possible.
In other words, the loss function of a computational graph might penalize differences be-
tween predicted and observed values. In computational graphs with continuous variables,
one can use gradient descent for optimization. Almost all machine learning problems that
we have seen so far in this book, such as linear regression, logistic regression, SVMs, SVD,
PCA, and recommender systems, can be modeled as directed acyclic computational graphs
with continuous variables.
This chapter is organized as follows. The next section will introduce the basics of compu-
tational graphs. Section 11.3 discusses optimization in directed acyclic graphs. Applications
to neural networks are discussed in Section 11.4. A general view of computational graphs is
provided in Section 11.5. A summary is given in Section 11.6.
[Figure: A computational graph with input nodes x1, x2, and x3 (set to 2, 2, and 1), hidden nodes computing ln(x1 x2) = ln(4) = 1.386, exp(x1 x2 x3) = e^4 = 54.6, and √(w2 x2 + w3 x3) = √9 = 3, and an output node computing their product 1.386 · 54.6 · 3 ≈ 227]
The variables in the nodes are computed in the forward direction from the input to the output. For example, if the weights w2 and w3
are chosen to be 1 and 7, respectively, the global function f (x1 , x2 , x3 ) is as follows:
√
f (x1 , x2 , x2 ) = ln(x1 x2 ) · exp(x1 x2 x3 ) · x2 + 7x3
For [x1 , x2 , x3 ] = [2, 2, 1], the cascading sequence of computations is shown in the figure with
a final output value of approximately 227.1. However, if the observed value of the output is
only 100, it means that the weights need to be readjusted to change the computed function.
In this case, one can observe from inspection of the computational graph that reducing
either w2 or w3 will help reduce the output value. For example, if we change the weight w3
to −1, while keeping w2 = 1, the computed function becomes the following:
√
f (x1 , x2 , x2 ) = ln(x1 x2 ) · exp(x1 x2 x3 ) · x2 − x3
In this case, for the same set of inputs [x1 , x2 , x3 ] = [2, 2, 1], the computed output becomes
75.7, which is much closer to the true output value of 100. Therefore, it is clear that one must
use the mismatch of predicted values with observed outputs to adjust the computational
function, so that there is a better matching between predicted and observed outputs across
the data set. Although we adjusted w3 here by inspection, such an approach will not work
in very large computational graphs containing millions of weights.
The goal in machine learning is to learn parameters (like weights) using examples of
input-output pairs, while adjusting weights with the help of the observed data. The key
point is to convert the problem of adjusting weights into an optimization problem. The
computational graph may be associated with a loss function, which typically penalizes the
differences in the predicted outputs from observed outputs, and adjusts weights accordingly.
Since the outputs are functions of inputs and edge-specific parameters, the loss function
can also be viewed as a complex function of the inputs and edge-specific parameters. The
goal of learning the parameters is to minimize the loss, so that the input-output pairs in
the computational graph mimic the input-output pairs in the observed data. It should be
immediately evident that the problem of learning the weights is likely to be challenging if
the underlying computational graph is large with a complex topology.
The choice of loss function depends on the application at hand. For example, one can
model least-squares regression by using as many input nodes as the number of input variables
(regressors), and a single output node containing the predicted regressand. Directed edges
exist from each input node to this output node, and the parameter on each such edge
450 CHAPTER 11. OPTIMIZATION IN COMPUTATIONAL GRAPHS
[Figure 11.2: A single-layer computational graph that can perform linear regression: input nodes x1 . . . x5 are connected to a single output node with weights w1 . . . w5]
corresponds to the weight associated with that input variable (cf. Figure 11.2). The output
node computes the following function of the variables x1 . . . xd in the d input nodes:
ô = f(x1, x2, . . . , xd) = Σ_{i=1}^{d} wi xi
If the observed regressand is o, then the loss function simply computes (o − ô)^2, and adjusts
the weights w1 . . . wd so as to reduce this value. Typically, the derivative of the loss is
computed with respect to each weight in the computational graph, and the weights are
updated by using this derivative. One processes each training point one-by-one and updates
the weights. The resulting algorithm is identical to using stochastic gradient descent in the
linear regression problem (cf. Section 4.7 of Chapter 4). In fact, by changing the nature of
the loss function at the output node, it is possible to model both logistic regression and the
support vector machine.
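The point-by-point update just described can be sketched in a few lines. This is an illustrative implementation, not from the text; the function name, learning rate, and toy data are my own choices:

```python
import numpy as np

def sgd_linear_regression(X, targets, epochs=50, lr=0.01):
    """Stochastic gradient descent on the single-layer graph of Figure 11.2.

    Each prediction is o_hat = sum_i w_i * x_i, and the loss for one training
    point is (o - o_hat)^2, whose gradient w.r.t. w is -2 * (o - o_hat) * x.
    """
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, o in zip(X, targets):
            o_hat = w @ x                        # forward pass through the graph
            w += lr * 2 * (o - o_hat) * x        # gradient-descent update
    return w

# Toy data generated from known weights [2.0, -1.0]
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
targets = X @ np.array([2.0, -1.0])
w = sgd_linear_regression(X, targets)
```

On this noiseless data, the learned weights recover the generating weights closely.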
Problem 11.2.1 (Logistic Regression with Computational Graph) Let o be an ob-
served binary class label drawn from {−1, +1} and ô be the predicted real value by the
neural architecture of Figure 11.2. Show that the loss function log(1 + exp(−oô)) yields the
same loss function for each data instance as the logistic regression model in Equation 4.56
of Chapter 4. Ignore the regularization term in Chapter 4.
In the particular case of Figure 11.2, the choice of a computational graph for model represen-
tation does not seem to be useful because a single computational node is rather rudimentary
for model representation; indeed, one can directly compute gradients of the loss function
with respect to the weights without worrying about computational graphs at all! The main
usefulness of computational graphs is realized when the topology of computation is more
complex.
The nodes in the directed acyclic graph of Figure 11.2 are arranged in layers, because
all paths from an input node to any node in the network have the same length. This type
of architecture is common in computational graphs. Nodes that are reachable by a path of
a particular length i from input nodes are assumed to belong to layer i. At first glance,
11.2. THE BASICS OF COMPUTATIONAL GRAPHS 451
Figure 11.2 looks like a two-layer network. However, such networks are considered single-
layer networks, because the non-computational input layer is not counted among the number
of layers.
The value p1 represents the number of nodes in the first hidden layer. Here, the function
Φ(·) is referred to as an activation function. The final numerical value of the variable in a
particular node (i.e., h1r in this case) for a particular input is also sometimes referred to
as its activation for that input. In the case of linear regression, the activation function is
missing, which is also referred to as using the identity activation function or linear activation
function. However, computational graphs primarily gain better expressive power by using
nonlinear activation functions such as the following:
Φ(v) = 1/(1 + e^{−v})   [Sigmoid function]
Φ(v) = (e^{2v} − 1)/(e^{2v} + 1)   [Tanh function]
Φ(v) = max{v, 0}   [ReLU: Rectified Linear Unit]
Φ(v) = max{min[v, 1], −1}   [Hard tanh]
It is noteworthy that these functions are nonlinear, and nonlinearity is essential for greater
expressive power of networks with increased depth. Networks containing only linear activa-
tion functions are not any more powerful than single-layer networks.
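For concreteness, the four activation functions listed above can be implemented in element-wise fashion with NumPy; this is a sketch, and the function names are illustrative:

```python
import numpy as np

def sigmoid(v):
    # 1 / (1 + e^{-v})
    return 1.0 / (1.0 + np.exp(-v))

def tanh(v):
    # (e^{2v} - 1) / (e^{2v} + 1)
    return (np.exp(2 * v) - 1) / (np.exp(2 * v) + 1)

def relu(v):
    # max{v, 0}, applied element-wise
    return np.maximum(v, 0)

def hard_tanh(v):
    # max{min[v, 1], -1}
    return np.maximum(np.minimum(v, 1), -1)
```

All four accept scalars or arrays, since NumPy broadcasts the operations element-wise.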
In order to understand this point, consider a two-layer computational graph (not count-
ing the input layer) with 4-dimensional input vector x, 3-dimensional hidden-layer vector
h, and 2-dimensional output-layer vector o. Note that we are creating a column vector from
the node variables in each layer. Let W1 and W2 be two matrices of sizes 3 × 4 and 2 × 3
so that h = W1 x and o = W2 h. The matrices W1 and W2 contain the weight parame-
ters of each layer. Note that one can express o directly in terms of x without using h as
o = W2 W1 x = (W2 W1 )x. One can replace the matrix W2 W1 with a single 2 × 4 matrix W
without any loss of expressive power. In other words, this is a single-layer network! It is
not possible to use this type of approach to (easily) eliminate the hidden layer in the case
of nonlinear activation functions without creating extremely complex functions at individ-
ual nodes (thereby increasing node-specific complexity). This means that increased depth
results in increased complexity only when using nonlinear activation functions.
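The collapse of a purely linear two-layer network into a single layer can be verified numerically; the matrices below are random stand-ins for W1 and W2:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 4))   # input (dimension 4) -> hidden (dimension 3)
W2 = rng.normal(size=(2, 3))   # hidden (dimension 3) -> output (dimension 2)
x = rng.normal(size=4)

# Two-layer linear network: h = W1 x, o = W2 h
o_two_layer = W2 @ (W1 @ x)

# Equivalent single-layer network with W = W2 W1 (a single 2 x 4 matrix)
W = W2 @ W1
o_one_layer = W @ x

assert np.allclose(o_two_layer, o_one_layer)
```

Any nonlinear Φ inserted between the two matrix multiplications would break this equivalence, which is exactly why nonlinearity adds expressive power.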
452 CHAPTER 11. OPTIMIZATION IN COMPUTATIONAL GRAPHS
[Figure 11.3: A feed-forward network with two hidden layers and a single output layer, shown in (a) scalar notation and architecture and (b) vector notation and architecture]
In the case of Figure 11.3(a), the neural network contains three layers. Note that the
input layer is often not counted, because it simply transmits the data and no computation
is performed in that layer. If a neural network contains p1 . . . pk units in each of its k lay-
ers, then the (column) vector representations of these outputs, denoted by h1 . . . hk have
dimensionalities p1 . . . pk . Therefore, the number of units in each layer is referred to as the
dimensionality of that layer. It is also possible to create a computational graph in which the
variables in nodes are vectors, and the connections represent vector-to-vector functions. Fig-
ure 11.3(b) creates a computational graph in which the nodes are represented by rectangles
rather than circles. Rectangular representations of nodes correspond to nodes containing
vectors. The connections now contain matrices. The sizes of the corresponding connection
matrices are shown in Figure 11.3(b). For example, if the input layer contains 5 nodes and
the first hidden layer contains 3 nodes, the connection matrix is of size 5 × 3. However, as
we will see later, the weight matrix has size that is the transpose of the connection matrix
(i.e., 3 × 5) in order to facilitate matrix operations. Note that the computational graph in
the vector notation has a simpler structure, where the entire network contains only a single
path. The weights of the connections between the input layer and the first hidden layer are
contained in a matrix W1 with size p1 × d, whereas the weights between the rth hidden
layer and the (r + 1)th hidden layer are contained in the pr+1 × pr matrix Wr+1.
If the output layer contains s nodes, then the final matrix Wk+1 is of size s × pk . Note that
the weight matrix has transposed dimensions with respect to the connection matrix. The
d-dimensional input vector x is transformed into the outputs using the following recursive
equations:
h1 = Φ(W1 x)
hr+1 = Φ(Wr+1 hr)   ∀r ∈ {1 . . . k − 1}
o = Φ(Wk+1 hk)
Here, the activation functions are applied in element-wise fashion to their vector arguments.
Here, it is noteworthy that the final output is a recursively nested composition function
of the inputs, which is as follows:
o = Φ(Wk+1 Φ(Wk Φ(. . . Φ(W1 x) . . .)))
This type of neural network is harder to train than single-layer networks because one must
compute the derivative of a nested composition function with respect to each weight. In
11.3. OPTIMIZATION IN DIRECTED ACYCLIC GRAPHS 453
particular, the weights of earlier layers lie inside the recursive nesting, and are harder to learn
with gradient descent, because the methodology for computation of the gradient of weights
in the inner portions of the nesting (i.e., earlier layers) is not obvious, especially when the
computational graph has a complex topology. It is also noticeable that the global input-to-
output function computed by the neural network is harder to express in closed form neatly.
The recursive nesting makes the closed-form representation look extremely cumbersome.
A cumbersome closed-form representation causes challenges in derivative computation for
parameter learning.
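The recursive forward computation can be sketched as follows, assuming a tanh activation and randomly chosen weight matrices (both are illustrative choices, not prescribed by the text):

```python
import numpy as np

def forward(x, weight_matrices, phi=np.tanh):
    """Recursive forward pass: h_{r+1} = phi(W_{r+1} h_r), ending with the output."""
    h = x
    for W in weight_matrices:
        h = phi(W @ h)   # phi is applied in element-wise fashion
    return h

# A network with d = 4 inputs, hidden dimensionalities 3 and 3, and s = 2 outputs;
# each weight matrix has size p_{r+1} x p_r, transposed w.r.t. the connection matrix.
rng = np.random.default_rng(2)
Ws = [rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), rng.normal(size=(2, 3))]
o = forward(rng.normal(size=4), Ws)
```

The loop makes the nesting explicit: each added layer wraps one more Φ(W ·) around the expression for the output.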
f(x) = g(x) = 1/(1 + exp(−x))
[Figure 11.4: A simple computational graph with an input node and two computational nodes: input x feeds g(x) = x^2, whose output y = g(x) feeds f(y) = cos(y), so that o = f(g(x)) = cos(x^2)]
[Figure 11.5: A computational graph with input nodes x0 and y0, hidden layers containing two nodes xi and yi each, and a single output node o]
This simple graph already computes a rather awkward composition function. Trying to
find the derivative of this composition function becomes increasingly tedious with increasing
complexity of the graph.
Consider a case in which the functions g1 (·), g2 (·) . . . gk (·) are the functions computed in
layer m, and they feed into a particular layer-(m + 1) node that computes the multivariate
function f (·) that uses the values computed in the previous layer as arguments. Therefore,
the layer-(m+1) function computes f (g1 (·), . . . gk (·)). This type of multivariate composition
function already appears rather awkward. As we increase the number of layers, a function
that is computed several edges downstream will have as many layers of nesting as the length
of the path from the source to the final output. For example, if we have a computational
graph which has 10 layers, and 2 nodes per layer, the overall composition function would
have 2^10 nested “terms”. This makes the handling of closed-form functions of deep networks
unwieldy and impractical.
In order to understand this point, consider the function in Figure 11.5. In this case, we
have two nodes in each layer other than the output layer. The output layer simply sums its
inputs. Each hidden layer contains two nodes. The variables in the ith layer are denoted
by xi and yi , respectively. The input nodes (variables) use subscript 0, and therefore they
are denoted by x0 and y0 in Figure 11.5. The two computed functions in the ith layer are
F (xi−1 , yi−1 ) and G(xi−1 , yi−1 ), respectively.
In the following, we will write the expression for the variable in each node in order to
show the increasing complexity with increasing number of layers:
x1 = F (x0 , y0 )
y1 = G(x0 , y0 )
x2 = F (x1 , y1 ) = F (F (x0 , y0 ), G(x0 , y0 ))
y2 = G(x1 , y1 ) = G(F (x0 , y0 ), G(x0 , y0 ))
We can already see that the expressions have started looking unwieldy. On computing the
values in the next layer, this becomes even more obvious:
x3 = F(x2, y2) = F(F(F(x0, y0), G(x0, y0)), G(F(x0, y0), G(x0, y0)))
y3 = G(x2, y2) = G(F(F(x0, y0), G(x0, y0)), G(F(x0, y0), G(x0, y0)))
An immediate observation is that the complexity and length of the closed-form function
increases exponentially with the path lengths in the computational graphs. This type of
complexity further increases in the case when optimization parameters are associated with
the edges, and one tries to express the outputs/losses in terms of the inputs and the param-
eters on the edges. This is obviously a problem, if we try to use the boilerplate approach of
first expressing the loss function in closed form in terms of the optimization parameters on
the edges (in order to compute the derivative of the closed-form loss function).
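The exponential blow-up of the closed-form expression for the graph of Figure 11.5 can be demonstrated by naively expanding the recursion as a string; the helper below is a hypothetical sketch:

```python
def closed_form(layer, var):
    """Closed-form expression for x_i or y_i in the graph of Figure 11.5,
    built by substituting the previous layer's expressions verbatim."""
    if layer == 0:
        return var + "0"                 # "x0" or "y0"
    prev_x = closed_form(layer - 1, "x")
    prev_y = closed_form(layer - 1, "y")
    f = "F" if var == "x" else "G"
    return f"{f}({prev_x}, {prev_y})"

# The expression length more than doubles with every additional layer
lengths = [len(closed_form(i, "x")) for i in range(1, 8)]
```

For example, `closed_form(2, "x")` already yields `"F(F(x0, y0), G(x0, y0))"`, matching the expansion of x2 above.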
∂f(g(x))/∂x = [∂f(g(x))/∂g(x)] · [∂g(x)/∂x]   (11.2)
This variant is referred to as the univariate chain rule. Note that each term on the right-hand
side is a local gradient because it computes the derivative of a local function with respect
to its immediate argument rather than a recursively derived argument. The basic idea is
that a composition of functions is applied on the input x to yield the final output, and the
gradient of the final output is given by the product of the local gradients along that path.
Each local gradient only needs to worry about its specific input and output, which simplifies
the computation. An example is shown in Figure 11.4 in which the function f (y) is cos(y)
and g(x) = x^2. Therefore, the composition function is cos(x^2). On using the univariate
chain rule, we obtain the following:
∂o/∂x = [∂f(y)/∂y] · [∂g(x)/∂x] = [−sin(y)] · [2x] = −2x · sin(x^2)
Note that we can annotate each of the above two multiplicative components on the two
connections in the graph, and simply compute the product of these values. Therefore, for
a computational graph containing a single path, the derivative of one node with respect
to another is simply the product of these annotated values on the connections between the
two nodes. The example of Figure 11.4 is a rather simple case in which the computational
graph is a single path. In general, a computational graph with good expressive power will
not be a single path. Rather, a single node may feed its output to multiple nodes. For
example, consider the case in which we have a single input x, and we have k independent
computational nodes that compute the functions g1 (x), g2 (x), . . . gk (x). If these nodes are
connected to a single output node computing the function f () with k arguments, then the
[Figure 11.6: A simple computational function that illustrates the chain rule: input node x feeds f(x) = x^2; the result feeds g(y) = cos(y) and h(z) = sin(z), whose outputs feed the output node K(p, q) = p + q, so that o = [cos(x^2)] + [sin(x^2)]]
resulting function that is computed is f (g1 (x), . . . gk (x)). In such cases, the multivariate
chain rule needs to be used. The multivariate chain rule is defined as follows:
∂f(g1(x), . . . , gk(x))/∂x = Σ_{i=1}^{k} [∂f(g1(x), . . . , gk(x))/∂gi(x)] · [∂gi(x)/∂x]   (11.3)
It is easy to see that the multivariate chain rule of Equation 11.3 is a simple generalization
of that in Equation 11.2.
One can also view the multivariate chain rule in a path-centric fashion rather than a
node-centric fashion. For any pair of source-sink nodes, the derivative of the variable in the
sink node with respect to the variable in the source node is simply the sum of the expressions
arising from the univariate chain rule being applied to all paths existing between that pair
of nodes. This view leads to a direct expression for the derivative between any pair of nodes
(rather than the recursive multivariate rule). However, it leads to an excessive computation,
because the number of paths between a pair of nodes is exponentially related to the path
length. In order to show the repetitive nature of the operations, we work with a very simple
closed-form function with a single input x:
o = cos(x^2) + sin(x^2)
The resulting computational graph is shown in Figure 11.6. In this case, the multivariate
chain rule is applied to compute the derivative of the output o with respect to x. This is
achieved by summing the results of the univariate chain rule for each of the two paths from
x to o in Figure 11.6:
∂o/∂x = [∂K(p, q)/∂p] · g′(y) · f′(x) + [∂K(p, q)/∂q] · h′(z) · f′(x)
= 1 · [−sin(y)] · 2x + 1 · [cos(z)] · 2x
= −2x · sin(y) + 2x · cos(z)
= −2x · sin(x^2) + 2x · cos(x^2)
In this simple example, there are two paths, both of which compute the function f (x) = x2 .
As a result, the function f (x) is differentiated twice, once for each path. This type of
repetition can have severe effects for large multilayer networks containing many shared
nodes, where the same function might be differentiated hundreds of thousands of times as a
portion of the nested recursion. It is because of this repeated and wasteful approach to
the computation of the derivative that it is impractical to express the global function of a
computational graph in closed form and differentiate it explicitly.
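As a sanity check, the derivative obtained from the chain rule for the graph of Figure 11.6 can be verified against a finite-difference approximation; the test point and step size below are arbitrary choices:

```python
import math

def output(x):
    y = x * x                           # f(x) = x^2, shared by both paths
    return math.cos(y) + math.sin(y)    # o = cos(x^2) + sin(x^2)

def do_dx(x):
    # Multivariate chain rule: sum of the univariate chain rule over both paths
    return -2 * x * math.sin(x * x) + 2 * x * math.cos(x * x)

# Central finite difference as an independent check
x0, eps = 0.7, 1e-6
numeric = (output(x0 + eps) - output(x0 - eps)) / (2 * eps)
assert abs(numeric - do_dx(x0)) < 1e-6
```

The two-term structure of `do_dx` mirrors the two paths from x to o: each term is the product of the local derivatives along one path.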
One can summarize the path-centric view of the multivariate chain rule as follows (the
pathwise aggregation lemma): let P be the set of paths from node s to node t in the
computational graph, and let z(i, j) = ∂y(j)/∂y(i) denote the local derivative on edge (i, j).
Then, the derivative of y(t) with respect to y(s) is obtained by summing, over all paths
P ∈ P, the product of the local derivatives z(i, j) along each path.
This lemma can be easily shown by applying the multivariate chain rule (Equation 11.3)
recursively over the computational graph. Although the use of the pathwise aggregation
lemma is a wasteful approach for computing the derivative of y(t) with respect to y(s), it
enables a simple and intuitive exponential-time algorithm for derivative computation.
An Exponential-Time Algorithm
The pathwise aggregation lemma provides a natural exponential-time algorithm, which is
roughly similar to the steps one would go through by expressing the computational function
in closed form with respect to a particular variable and then differentiating it. Specifically,
the pathwise aggregation lemma leads to the following exponential-time algorithm to com-
pute the derivative of the output o with respect to a variable x in the graph:
1. Use the computational graph to compute the value y(i) of each node i in a forward phase.
2. Compute the local partial derivative z(i, j) = ∂y(j)/∂y(i) on each edge in the computational graph.
3. Let P be the set of all paths from an input node with value x to the output o. For each path P ∈ P, compute the product of the local derivatives z(i, j) on that path. The derivative ∂o/∂x is the sum of these products over all paths in P.
Figure 11.7: The chain rule aggregates the product of local derivatives along 2^5 = 32 paths
The local derivative on each edge is equal to the value of the complementary input, because
the partial derivative of the product of two variables with respect to one of them is the
other (complementary) variable:
∂h(i, j)/∂h(i − 1, 1) = h(i − 1, 2),   ∂h(i, j)/∂h(i − 1, 2) = h(i − 1, 1)
The pathwise aggregation lemma implies that the value of ∂o/∂x is obtained by summing
the product of the local derivatives (which are the complementary input values in this
particular case) along each of the 32 paths from the input to the output:
∂o/∂x = Σ_{[j1, j2, j3, j4, j5] ∈ {1,2}^5} h(1, j1) · h(2, j2) · h(3, j3) · h(4, j4) · h(5, j5)
= Σ_{All 32 paths} x · x^2 · x^4 · x^8 · x^16 = 32x^31
This result is, of course, consistent with what one would obtain on differentiating x^32 directly
with respect to x. However, an important observation is that it requires 2^5 aggregations to
compute the derivative in this way for a relatively simple graph. More importantly, we
repeatedly differentiate the same function computed in a node for aggregation. For example,
the differentiation of the variable h(3, 1) is performed 16 times because it appears in 16
paths from x to o.
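The 32-path aggregation can be reproduced directly by enumerating the paths, assuming (as in Figure 11.7) that each layer-i node computes the product of the two layer-(i − 1) nodes and that the two layer-1 nodes simply copy the input x:

```python
import itertools

x = 1.3
h = {}
# Layer 1: both nodes copy x; layer i >= 2: product of the two layer-(i-1) nodes
h[(1, 1)] = h[(1, 2)] = x
for i in range(2, 6):
    h[(i, 1)] = h[(i, 2)] = h[(i - 1, 1)] * h[(i - 1, 2)]
o = h[(5, 1)] * h[(5, 2)]   # the output equals x ** 32

# Pathwise aggregation: each local derivative is the complementary input, so a
# path's contribution is h(1, j1) * h(2, j2) * ... * h(5, j5) = x ** 31.
deriv = sum(
    h[(1, j1)] * h[(2, j2)] * h[(3, j3)] * h[(4, j4)] * h[(5, j5)]
    for j1, j2, j3, j4, j5 in itertools.product([1, 2], repeat=5)
)
```

Summing the 32 identical path contributions reproduces the direct derivative 32x^31.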
Obviously, this is an inefficient approach to compute gradients. For a network with
100 nodes in each layer and three layers, we will have a million paths. Nevertheless, this
is exactly what we do in traditional machine learning when our prediction function is a
complex composition function. Manually working out the details of a complex composition
function is tedious and impractical beyond a certain level of complexity. It is here that one
can apply dynamic programming (which is guided by the structure of the computational
graph) in order to store important intermediate results. By using such an approach, one
can minimize repeated computations, and achieve polynomial complexity.
[Figure 11.8: A computational graph with an input node x, hidden nodes numbered 1 through 10, and an output node o; edges are labeled with local partial derivatives such as z(4, 6) = ∂y(6)/∂y(4)]
Assume that the local derivative z(i, j) (of the variable in node j with respect to the variable
in node i) is associated with edge (i, j). In other words, if y(p) is the variable in the node
p, we have the following:
z(i, j) = ∂y(j)/∂y(i)   (11.7)
An example of such a computational graph is shown in Figure 11.8. In this case, we have
associated the edge (2, 4) with the corresponding partial derivative. We would like to
compute the product of z(i, j) over each path P ∈ P from source node s to output node t
and then add these products in order to obtain the partial derivative S(s, t) = ∂y(t)/∂y(s):
S(s, t) = Σ_{P ∈ P} Π_{(i,j) ∈ P} z(i, j)   (11.8)
Let A(i) be the set of nodes at the end points of outgoing edges from node i. We can
compute the aggregated value S(i, t) for each intermediate node i (between source node s
and output node t) using the following well-known dynamic programming update:
S(i, t) ⇐ Σ_{j ∈ A(i)} S(j, t) · z(i, j)   (11.9)
This computation can be performed backwards starting from the nodes directly incident on
t, since S(t, t) = ∂y(t)/∂y(t) is already known to be 1. This is because the partial derivative
of a variable with respect to itself is always 1. Therefore, one can describe the pseudocode
of this algorithm as follows:
Initialize S(t, t) = 1;
repeat
  Select an unprocessed node i such that the values of S(j, t) for all of its outgoing
  nodes j ∈ A(i) are available;
  Update S(i, t) ⇐ Σ_{j ∈ A(i)} S(j, t) · z(i, j);
until all nodes have been selected;
Note that the above algorithm always selects a node i for which the value of S(j, t) is
available for all nodes j ∈ A(i). Such a node is always available in directed acyclic graphs,
and the node selection order will always be in the backwards direction starting from node
t. Therefore, the above algorithm will work only when the computational graph does not
have cycles, and it is referred to as the backpropagation algorithm.
The algorithm discussed above is used by the network optimization community for com-
puting all types of path-centric functions between source-sink node pairs (s, t) on directed
acyclic graphs, which would otherwise require exponential time. For example, one can even
use a variation of the above algorithm to find the longest path in a directed acyclic graph [8].
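A minimal sketch of the dynamic programming update S(i, t) ⇐ Σ_{j ∈ A(i)} S(j, t) · z(i, j) on a small DAG follows; the graph, node functions, and function names are my own illustrative choices:

```python
import math

# A tiny DAG: edges 1->2, 1->3, 2->4, 3->4 with
#   y(2) = y(1)**2, y(3) = sin(y(1)), y(4) = y(2) * y(3)
def forward(x):
    y = {1: x}
    y[2] = y[1] ** 2
    y[3] = math.sin(y[1])
    y[4] = y[2] * y[3]
    # Local partial derivatives z(i, j) = dy(j)/dy(i) on each edge
    z = {(1, 2): 2 * y[1], (1, 3): math.cos(y[1]),
         (2, 4): y[3], (3, 4): y[2]}
    return y, z

def node_to_node_derivatives(z, out_edges, t, order):
    """Backwards DP: S(i, t) = sum over j in A(i) of S(j, t) * z(i, j)."""
    S = {t: 1.0}                     # S(t, t) = 1
    for i in reversed(order[:-1]):   # process nodes backwards from t
        S[i] = sum(S[j] * z[(i, j)] for j in out_edges[i])
    return S

x0 = 0.9
y, z = forward(x0)
S = node_to_node_derivatives(z, {1: [2, 3], 2: [4], 3: [4]}, t=4, order=[1, 2, 3, 4])
# By hand: d y(4) / d y(1) = 2x sin(x) + x^2 cos(x)
expected = 2 * x0 * math.sin(x0) + x0 ** 2 * math.cos(x0)
assert abs(S[1] - expected) < 1e-9
```

The same backward pass also yields the loss-to-node derivatives Δ(i) discussed later in this section: one simply initializes the output entry with the loss derivative instead of 1.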
[Figure: A computational graph with input nodes 1, 2, and 3 and output node 10; node 4 computes the product y(1) · y(2), node 5 computes the product y(1) · y(2) · y(3), node 6 computes the weighted sum w2 · y(2) + w3 · y(3), nodes 7, 8, and 9 compute sin[y(4)], cos[y(5)], and sin[y(6)], respectively, and node 10 computes the product y(7) · y(8) · y(9)]
Subsequently, the derivatives of y(10) with respect to all the variables on its incoming
nodes are computed. Since y(10) is expressed in terms of the variables y(7), y(8), and y(9)
incoming into it, this is easy to do, and the results are denoted by z(7, 10), z(8, 10), and
z(9, 10) (which is consistent with the notations used earlier in this chapter). Therefore, we
have the following:
z(7, 10) = ∂y(10)/∂y(7) = y(8) · y(9)
z(8, 10) = ∂y(10)/∂y(8) = y(7) · y(9)
z(9, 10) = ∂y(10)/∂y(9) = y(7) · y(8)
Subsequently, we can use these values in order to compute S(7, 10), S(8, 10), and S(9, 10)
using the recursive backpropagation update:
S(7, 10) = ∂y(10)/∂y(7) = S(10, 10) · z(7, 10) = y(8) · y(9)
S(8, 10) = ∂y(10)/∂y(8) = S(10, 10) · z(8, 10) = y(7) · y(9)
S(9, 10) = ∂y(10)/∂y(9) = S(10, 10) · z(9, 10) = y(7) · y(8)
Next, we compute the derivatives z(4, 7), z(5, 8), and z(6, 9) associated with all the edges
incoming into nodes 7, 8, and 9:
z(4, 7) = ∂y(7)/∂y(4) = cos[y(4)]
z(5, 8) = ∂y(8)/∂y(5) = −sin[y(5)]
z(6, 9) = ∂y(9)/∂y(6) = cos[y(6)]
These values can be used to compute S(4, 10), S(5, 10), and S(6, 10):
S(4, 10) = ∂y(10)/∂y(4) = S(7, 10) · z(4, 7) = y(8) · y(9) · cos[y(4)]
S(5, 10) = ∂y(10)/∂y(5) = S(8, 10) · z(5, 8) = −y(7) · y(9) · sin[y(5)]
S(6, 10) = ∂y(10)/∂y(6) = S(9, 10) · z(6, 9) = y(7) · y(8) · cos[y(6)]
In order to compute the derivatives with respect to the input values, one now needs to
compute the values of z(1, 4), z(2, 4), z(1, 5), z(2, 5), z(3, 5), z(2, 6), and z(3, 6):
z(1, 4) = ∂y(4)/∂y(1) = y(2)
z(2, 4) = ∂y(4)/∂y(2) = y(1)
z(1, 5) = ∂y(5)/∂y(1) = y(2) · y(3)
z(2, 5) = ∂y(5)/∂y(2) = y(1) · y(3)
z(3, 5) = ∂y(5)/∂y(3) = y(1) · y(2)
z(2, 6) = ∂y(6)/∂y(2) = w2
z(3, 6) = ∂y(6)/∂y(3) = w3
These partial derivatives can be backpropagated to compute S(1, 10), S(2, 10), and S(3, 10):
S(1, 10) = ∂y(10)/∂y(1) = S(4, 10) · z(1, 4) + S(5, 10) · z(1, 5)
= y(8) · y(9) · cos[y(4)] · y(2) − y(7) · y(9) · sin[y(5)] · y(2) · y(3)
S(2, 10) = ∂y(10)/∂y(2) = S(4, 10) · z(2, 4) + S(5, 10) · z(2, 5) + S(6, 10) · z(2, 6)
= y(8) · y(9) · cos[y(4)] · y(1) − y(7) · y(9) · sin[y(5)] · y(1) · y(3) + y(7) · y(8) · cos[y(6)] · w2
S(3, 10) = ∂y(10)/∂y(3) = S(5, 10) · z(3, 5) + S(6, 10) · z(3, 6)
= −y(7) · y(9) · sin[y(5)] · y(1) · y(2) + y(7) · y(8) · cos[y(6)] · w3
Note that the use of a backward phase has the advantage of computing the derivative of
y(10) (output node variable) with respect to all the hidden and input node variables. These
different derivatives have many sub-expressions in common, although the derivative compu-
tation of these sub-expressions is not repeated. This is the advantage of using the backwards
phase for derivative computation as opposed to the use of closed-form expressions.
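The backpropagated expression for S(1, 10) can be checked numerically, assuming the node functions implied by the local derivatives above (y(4) = y(1) · y(2), y(5) = y(1) · y(2) · y(3), y(6) = w2 · y(2) + w3 · y(3), y(7) = sin[y(4)], y(8) = cos[y(5)], y(9) = sin[y(6)], and y(10) = y(7) · y(8) · y(9)); the input values and weight values below are arbitrary:

```python
import math

def forward(y1, y2, y3, w2=1.0, w3=2.0):
    """Forward pass of the ten-node graph used in this section."""
    y4 = y1 * y2
    y5 = y1 * y2 * y3
    y6 = w2 * y2 + w3 * y3
    y7, y8, y9 = math.sin(y4), math.cos(y5), math.sin(y6)
    return y7 * y8 * y9   # y(10)

def S1(y1, y2, y3, w2=1.0, w3=2.0):
    """Backpropagated closed form for S(1, 10) = d y(10) / d y(1)."""
    y4, y5, y6 = y1 * y2, y1 * y2 * y3, w2 * y2 + w3 * y3
    y7, y8, y9 = math.sin(y4), math.cos(y5), math.sin(y6)
    return (y8 * y9 * math.cos(y4) * y2
            - y7 * y9 * math.sin(y5) * y2 * y3)

# Central finite difference in y(1) as an independent check
y1, y2, y3, eps = 0.3, 0.5, 0.7, 1e-6
numeric = (forward(y1 + eps, y2, y3) - forward(y1 - eps, y2, y3)) / (2 * eps)
assert abs(numeric - S1(y1, y2, y3)) < 1e-6
```

In practice, as the text emphasizes, one would carry Δ-values numerically backwards rather than form such closed-form expressions.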
Because of the tedious nature of the closed-form expressions for outputs, the algebraic
expressions for derivatives are also very long and awkward (no matter how we compute
them). One can see that this is true even for the simple, ten-node computational graph of
this section. For example, if one examines the derivative of y(10) with respect to each of
nodes y(1), y(2) and y(3), the algebraic expression wraps into multiple lines. Furthermore,
one cannot avoid the presence of repeated subexpressions within the algebraic derivative.
This is counter-productive because our original goal in the backwards algorithm was to
avoid the repeated computation endemic to traditional derivative evaluation with closed-
form expressions. Therefore, one does not algebraically compute these types of expressions
in real-world networks. One would first numerically compute all the node variables for a
specific set of numerical inputs from the training data. Subsequently, one would numerically
carry the derivatives backward, so that one does not have to carry the large algebraic ex-
pressions (with many repeated sub-expressions) in the backwards direction. The advantage
of carrying numerical expressions is that multiple terms get consolidated into a single nu-
merical value, which is specific to a particular input. By making the numerical choice, one
must repeat the backwards computation algorithm for each training point, but it is still a
better choice than computing the (massive) symbolic derivative in one shot and substituting
the values in different training points. This is the reason that such an approach is referred
to as numerical differentiation rather than symbolic differentiation. In much of machine
learning, one first computes the algebraic derivative (which is symbolic differentiation) be-
fore substituting numerical values of the variables in the expression (for the derivative) to
perform gradient-descent updates. This is different from the case of computational graphs,
where the backwards algorithm is numerically applied to each training point.
Here, it is noteworthy that the loss function is typically a closed-form function of the
variables in the nodes with indices t1 . . . tp, which is often either a least-squares function or a
logarithmic loss function (like the examples in Chapter 4). Therefore, each derivative of the
loss L with respect to y(ti ) is easy to compute. Furthermore, the value of each ∂y(t k)
∂y(i) for
k ∈ {1 . . . p} can be computed using the dynamic programming algorithm of the previous
∂yi
section. The value of ∂w ji
is a derivative of the local function at each node, which usually
has a simple form. Therefore, the loss-to-weight derivatives can be computed relatively
easily, once the node-to-node derivatives have been computed using dynamic programming.
Although one can apply the pseudocode of page 460 to compute ∂y(tk)/∂y(i) for each
k ∈ {1 . . . p}, it is more efficient to collapse all these computations into a single backwards
algorithm. In practice, one initializes the derivatives at the output nodes to the loss deriva-
tives ∂L/∂y(tk) for each k ∈ {1 . . . p} rather than the value of 1 (as shown in the pseudocode of
page 460). Subsequently, the entire loss derivative Δ(i) = ∂L/∂y(i) is propagated backwards.
Therefore, the modified algorithm for computing the loss derivative with respect to the node
variables as well as the edge variables is as follows:

Initialize Δ(tk) = ∂L/∂y(tk) for each k ∈ {1 . . . p};
repeat
  Select an unprocessed node i such that the values of Δ(j) of all of its outgoing
  nodes j ∈ A(i) are available;
  Update Δ(i) ⇐ Σ_{j∈A(i)} Δ(j) z(i, j);
until all nodes have been selected;
for each edge (j, i) with weight wji do compute ∂L/∂wji = Δ(i) ∂y(i)/∂wji;
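A minimal sketch of this backwards algorithm in Python follows; the four-node graph, the local derivatives z(i, j), and the loss gradient are invented for illustration:

```python
# Backward pass over a DAG: a node is processed only after all of its
# outgoing neighbours, and Delta(i) = sum_j Delta(j) * z(i, j).
def backward_pass(outgoing, z, loss_grad):
    """outgoing: dict node -> list of successor nodes A(i)
       z: dict (i, j) -> local derivative dy(j)/dy(i)
       loss_grad: dict output node t_k -> dL/dy(t_k)"""
    delta = dict(loss_grad)                      # initialize output nodes
    pending = [n for n in outgoing if n not in delta]
    while pending:
        for i in list(pending):
            # Delta(j) available for all outgoing nodes j in A(i)?
            if all(j in delta for j in outgoing[i]):
                delta[i] = sum(delta[j] * z[(i, j)] for j in outgoing[i])
                pending.remove(i)
    return delta

# Hypothetical graph: 1 -> 3, 2 -> 3, 3 -> 4, and a skip edge 1 -> 4.
outgoing = {1: [3, 4], 2: [3], 3: [4], 4: []}
z = {(1, 3): 2.0, (2, 3): -1.0, (3, 4): 0.5, (1, 4): 3.0}
delta = backward_pass(outgoing, z, {4: 1.0})     # dL/dy(4) = 1
# Delta(3) = 1*0.5 = 0.5; Delta(1) = 0.5*2 + 1*3 = 4.0; Delta(2) = -0.5
```

The final loss-to-weight step of the pseudocode would then multiply each Δ(i) with the local derivative ∂y(i)/∂wji.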
In the above algorithm, y(i) denotes the variable at node i. The key difference of this
algorithm from the algorithm on page 460 is in the nature of the initialization and the
addition of a final step computing the edge-wise derivatives. However, the core algorithm
for computing the node-to-node derivatives remains an integral part of this algorithm. In
fact, one can convert all the weights on the edges into additional “input” nodes containing
weight parameters, and also add computational nodes that multiply the weights with the
corresponding variables at the tail nodes of the edges. Furthermore, a computational node
can be added that computes the loss from the output node(s). For example, the architecture
of Figure 11.9 can be converted to that in Figure 11.10. Therefore, a computational graph
with learnable weights can be converted into an unweighted graph with learnable node
variables (on a subset of nodes). Performing only node-to-node derivative computation in
Figure 11.10 from the loss node to the weight nodes is equivalent to loss-to-weight derivative
computation. In other words, loss-to-weight derivative computation in a weighted graph is
equivalent to node-to-node derivative computation in a modified computational graph. The
derivative of the loss with respect to each weight can be denoted by the vector ∂L/∂W (in
matrix calculus notation), where W denotes the weight vector. Subsequently, the standard
gradient descent update can be performed:

W ⇐ W − α ∂L/∂W    (11.10)
Here, α is the learning rate. This type of update is performed to convergence by repeating
the process with different inputs in order to learn the weights of the computational graph.
[Figure 11.10: The graph of Figure 11.9 with the weights w2 and w3 moved into additional
weight input nodes; the graph contains input nodes, weight nodes, hidden nodes computing
functions such as sin[y(4)], cos[y(5)], and sin[y(6)] together with multiplications, an output
node, and a loss node computing L]

∂L/∂w2 = [∂L/∂y(10)] [∂y(10)/∂y(6)] [∂y(6)/∂w2] = (2/y(10)) · [y(7) · y(8) · cos[y(6)]] · y(2)
∂L/∂w3 = [∂L/∂y(10)] [∂y(10)/∂y(6)] [∂y(6)/∂w3] = (2/y(10)) · [y(7) · y(8) · cos[y(6)]] · y(3)
[Figure 11.11: (a) Vector-centric graph with single path (b) Vector-centric graph with multiple paths]
Consider a vector of variables v that feeds into a vector of variables h in a later layer, as
well as a loss L computed in a later layer. Then, using the denominator layout of matrix
calculus, the vector-to-vector derivative is the transpose of the Jacobian matrix, which was
introduced in Chapter 4:

∂h/∂v = Jacobian(h, v)^T

The (i, j)th entry of the above vector-to-vector derivative is simply ∂hj/∂vi. Since h is an m-
dimensional vector and v is a d-dimensional vector, the vector derivative is a d × m matrix.
As discussed in Section 4.6.3 of Chapter 4, the chain rule over a single vector-centric path
looks almost identical to the univariate chain rule over scalars, when one substitutes local
partial derivatives with Jacobians. In other words, we can derive the following vector-valued
chain rule for the single path of Figure 11.11(a):
∂L/∂v = (∂h/∂v) (∂L/∂h) = Jacobian(h, v)^T (∂L/∂h)

Here, Jacobian(h, v)^T is a d × m matrix and ∂L/∂h is an m × 1 vector.
Therefore, once the gradient of the loss is available with respect to a layer, it can be
backpropagated by multiplying it with the transpose of a Jacobian! Here the ordering of
the matrices is important, since matrix multiplication is not commutative.
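As a sketch of this rule, the following numpy snippet backpropagates through an invented layer h = tanh(Wv), whose Jacobian has entries (1 − hi²)Wij, and checks one component of the result against a finite difference:

```python
import numpy as np

# Backpropagating through h = tanh(W v) by multiplying with the Jacobian
# transpose. W, v, and the upstream gradient are arbitrary test values.
rng = np.random.default_rng(0)
d, m = 4, 3
W = rng.standard_normal((m, d))
v = rng.standard_normal(d)
g_h = rng.standard_normal(m)        # dL/dh coming from later layers

h = np.tanh(W @ v)
J = (1 - h**2)[:, None] * W         # Jacobian(h, v): entry (i, j) is dh_i/dv_j
g_v = J.T @ g_h                     # dL/dv = Jacobian(h, v)^T dL/dh

# Finite-difference check of the first component of g_v, treating the
# loss locally as the linear function g_h . h.
eps = 1e-6
v2 = v.copy()
v2[0] += eps
numeric = ((np.tanh(W @ v2) - h) @ g_h) / eps
assert abs(g_v[0] - numeric) < 1e-4
```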
The above provides the chain rule only for the case where the computational graph is
a single path. What happens when the computational graph has an arbitrary structure?
In such a case, we might have a situation where we have multiple nodes h1 . . . hs between
node v and a network in later layers, as shown in Figure 11.11(b). Furthermore, there are
connections between alternate layers, which are referred to as skip connections. Assume that
the vector hi has dimensionality mi . In such a case, the partial derivative turns out to be
a simple generalization of the previous case:
∂L/∂v = Σ_{i=1}^{s} (∂hi/∂v) (∂L/∂hi) = Σ_{i=1}^{s} Jacobian(hi, v)^T (∂L/∂hi)

Here, each Jacobian(hi, v)^T is a d × mi matrix and each ∂L/∂hi is an mi × 1 vector.
In most layered neural networks, we only have a single path and we rarely have to deal with
the case of branches. Such branches might, however, arise in the case of neural networks with
skip connections [see Figures 11.11(b) and 11.13(b)]. However, even in complicated network
architectures like Figures 11.11(b) and 11.13(b), each node only has to worry about its local
outgoing edges during backpropagation. Therefore, we provide a very general vector-based
algorithm below that can work even in the presence of skip connections.
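When a node has several outgoing paths (as with skip connections), the contributions from the paths are simply added; the following numpy sketch uses invented sizes and values:

```python
import numpy as np

rng = np.random.default_rng(1)
d, m1, m2 = 4, 3, 2
J1 = rng.standard_normal((m1, d))   # Jacobian(h1, v) for the first path
J2 = rng.standard_normal((m2, d))   # Jacobian(h2, v), e.g. a skip connection
g1 = rng.standard_normal((m1, 1))   # dL/dh1 (column vector)
g2 = rng.standard_normal((m2, 1))   # dL/dh2 (column vector)

# Sum the Jacobian-transpose contributions over all outgoing paths:
grad_v = J1.T @ g1 + J2.T @ g2      # dL/dv, a (d x 1) column vector
assert grad_v.shape == (d, 1)
```

Stacking the Jacobians and gradients vertically and multiplying once gives the same result, which is why the single-path and multi-path cases have the same form.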
Consider the case where we have p output nodes containing vector-valued variables,
which have indices denoted by t1 . . . tp, and the variables in them are y(t1) . . . y(tp). In such a
case, the loss function L might be a function of all the components in these vectors. Assume
that the ith node contains a column vector of variables denoted by y(i). Furthermore, in
the denominator layout of matrix calculus, each Δ(i) = ∂L/∂y(i) is a column vector with di-
mensionality equal to that of y(i). It is this vector of loss derivatives that will be propagated
backwards. The vector-centric algorithm for computing derivatives is as follows:

Initialize Δ(tk) = ∂L/∂y(tk) for each output node tk for k ∈ {1 . . . p};
repeat
  Select an unprocessed node i such that the values of Δ(j) of all of its outgoing
  nodes j ∈ A(i) are available;
  Update Δ(i) ⇐ Σ_{j∈A(i)} Jacobian(y(j), y(i))^T Δ(j);
until all nodes have been selected;
for the vector wi of weights on edges incoming to each node i do compute ∂L/∂wi = [∂y(i)/∂wi] Δ(i);
In the final step of the above pseudocode, the derivative of vector y(i) with respect to the
vector wi is computed, which is itself the transpose of a Jacobian matrix. This final step
converts a vector of partial derivatives with respect to node variables into a vector of partial
derivatives with respect to weights incoming at a node.
h(i) = Φ(a(i))
The variables h(i) and a(i) are shown in Figure 11.12. In this case, it is noteworthy that
there are several ways in which the computational graph can be created. For example,
one might create a computational graph in which each node contains the post-activation
value h(i), and therefore we are implicitly setting y(i) = h(i). A second choice is to create
a computational graph in which each node contains the pre-activation variable a(i) and
therefore we are setting y(i) = a(i). It is even possible to create a decoupled computational
graph containing both a(i) and h(i); in the last case, the computational graph will have twice
as many nodes as the neural network. In all these cases, a relatively straightforward special-
case/simplification of the pseudocodes in the previous section can be used for learning the
gradient:
11.4. APPLICATION: BACKPROPAGATION IN NEURAL NETWORKS 469
[Figure 11.12: Breaking up a neuron computing h = Φ(W · X) into a linear node computing
the pre-activation value ah = W · X and an activation node computing the post-activation
value h = Φ(ah)]
1. The post-activation value y(i) = h(i) could represent the variable in the ith computa-
tional node in the graph. Therefore, each computational node in such a graph first ap-
plies the linear function, and then applies the activation function. The post-activation
value is shown in Figure 11.12. In such a case, the value of z(i, j) = ∂y(j)/∂y(i) = ∂h(j)/∂h(i)
in the pseudocode of page 465 is wij Φ′j. Here, wij is the weight of the edge from i
to j and Φ′j = ∂Φ(a(j))/∂a(j) is the local derivative of the activation function at node j
with respect to its argument. The value of each Δ(tr) at output node tr is simply
the derivative of the loss function with respect to h(tr). The final derivative with
respect to the weight wji (in the final line of the pseudocode on page 465) is equal to
Δ(i) ∂h(i)/∂wji = Δ(i) h(j) Φ′i.
2. The pre-activation value (after applying the linear function), which is denoted by
a(i), could represent the variable in each computational node i in the graph. Note
the subtle distinction between the work performed in computational nodes and neural
network nodes. Each computational node first applies the activation function to each
of its inputs before applying a linear function, whereas these operations are performed
in the reverse order in a neural network. The structure of the computational graph
is roughly similar to the neural network, except that the first layer of computational
nodes does not contain an activation. In such a case, the value of z(i, j) = ∂y(j)/∂y(i) = ∂a(j)/∂a(i)
in the pseudocode of page 465 is Φ′i wij. Note that Φ(a(i)) is being differentiated with
respect to its argument in this case, rather than Φ(a(j)) as in the case of the post-
activation variables. The value of the loss derivative with respect to the pre-activation
variable a(tr) in the rth output node tr needs to account for the fact that it is a
pre-activation value, and therefore, we cannot directly use the loss derivative with
respect to post-activation values. Rather, the post-activation loss derivative needs to
be multiplied with the derivative Φ′tr of the activation function at that node. The final
derivative with respect to the weight wji (final line of pseudocode on page 465) is
equal to Δ(i) ∂a(i)/∂wji = Δ(i) h(j).
The use of pre-activation variables for backpropagation is more common than the use of
post-activation variables. Therefore, we present the backpropagation algorithm in a crisp
pseudocode with the use of pre-activation variables. Let tr be the index of the rth output
node. Then, the backpropagation algorithm with pre-activation variables may be presented
as follows:
Initialize Δ(tr) = ∂L/∂y(tr) = Φ′(a(tr)) ∂L/∂h(tr) for each output node tr with r ∈ {1 . . . k};
repeat
  Select an unprocessed node i such that the values of Δ(j) of all of its outgoing
  nodes j ∈ A(i) are available;
  Update Δ(i) ⇐ Φ′i Σ_{j∈A(i)} wij Δ(j);
until all nodes have been selected;
for each edge (j, i) with weight wji do compute ∂L/∂wji = Δ(i) h(j);
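As an illustrative sketch (not the book's code), the pre-activation pseudocode can be instantiated for an invented two-layer sigmoid network with squared loss:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# A two-layer network with sigmoid activations and squared loss,
# backpropagated with pre-activation deltas Delta(i) = dL/da(i).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 2))   # weights w_ji into the hidden layer
W2 = rng.standard_normal((1, 3))   # weights into the output layer
x = np.array([0.5, -1.0])
target = 1.0

# Forward phase: pre-activations a and post-activations h per layer.
a1 = W1 @ x
h1 = sigmoid(a1)
a2 = W2 @ h1
h2 = sigmoid(a2)
L = 0.5 * (h2[0] - target) ** 2

# Backward phase with pre-activation variables:
# Delta(output) = Phi'(a) * dL/dh, then Delta(i) = Phi'_i * sum_j w_ij Delta(j).
delta2 = (h2 - target) * h2 * (1 - h2)             # output node
delta1 = h1 * (1 - h1) * (W2.T @ delta2).ravel()   # hidden layer

# Loss-to-weight derivatives: dL/dw_ji = Delta(i) h(j).
grad_W2 = np.outer(delta2, h1)
grad_W1 = np.outer(delta1, x)
assert grad_W1.shape == W1.shape and grad_W2.shape == W2.shape
```

A finite-difference perturbation of any single weight reproduces the corresponding entry of these gradient matrices.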
It is also possible to use both pre-activation and post-activation variables as separate nodes
of the computational graph. In the next section, we will combine this approach with a
vector-centric representation.
Figure 11.13: Most neural networks have a layer-wise architecture, and therefore the vector-
centric architecture has a single path. However, if there are shortcuts across layers, it is
possible for the topology of the vector-centric architecture to be arbitrary
1 ResNet is a convolutional neural network in which the structure of the layer is spatial, and the operations
correspond to convolutions.
472 CHAPTER 11. OPTIMIZATION IN COMPUTATIONAL GRAPHS
[Figure 11.14: Forward and backward computations through a decoupled layer i, in which
the linear function and the activation function are separate nodes feeding into the loss]
First, the forward phase is performed on the inputs in order to compute the activations in
each layer. Subsequently, the gradients are computed in the backwards phase. For each pair
of matrix multiplication and activation function layers, the following forward and backward
steps need to be performed:
1. Let z_i and z_{i+1} be the column vectors of activations in the forward direction when
the matrix of linear transformations from the ith to the (i + 1)th layer is denoted by
W. Each element of the gradient g_i is the partial derivative of the loss function with
respect to a hidden variable in the ith layer. Then, we have the following:

z_{i+1} = W z_i [Forward]
g_i = W^T g_{i+1} [Backward]

2. Now consider a situation where the activation function Φ(·) is applied to each node
in layer (i + 1) to obtain the activations in layer (i + 2). Then, we have the following:

z_{i+2} = Φ(z_{i+1}) [Forward]
g_{i+1} = g_{i+2} ⊙ Φ′(z_{i+1}) [Backward]

Here, Φ(·) and its derivative Φ′(·) are applied in element-wise fashion to vector argu-
ments. The symbol ⊙ indicates elementwise multiplication.
Note the extraordinary simplicity once the activation is decoupled from the matrix multi-
plication in a layer. The forward and backward computations are shown in Figure 11.14.
Examples of different types of backpropagation updates for various forward functions are
shown in Table 11.1. Therefore, the backward propagation operation is just like forward
propagation. Given the vector of gradients in a layer, one only has to apply the operations
shown in the final column of Table 11.1 to obtain the gradients of the loss with respect to
the previous layer. In the table, the vector indicator function I(x > 0) is an element-wise
indicator function that returns a binary vector of the same size as x; the ith output
component is set to 1 when the ith component of x is larger than 0. The notation 1 denotes a
column vector of 1s.
Table 11.1: Examples of different functions and their backpropagation updates between
layers i and (i + 1). The hidden values and gradients in layer i are denoted by z i and g i .
Some of these computations use I(·) as the binary indicator function
M = g_i z_{i−1}^T
Since M is given by the product of a column vector and a row vector of sizes equal to two
successive layers, it is a matrix of exactly the same size as the weight matrix between the
two layers. The (q, p)th element of M yields the derivative of the loss with respect to the
weight between the pth element of z i−1 and qth element of z i .
Figure 11.15: Example of decoupled neural network with vector layers x, h1 , h2 , h3 , and o:
variable values are shown within the nodes
Edges that are missing in Figure 11.15 are assumed to have zero weight. In the following, we will provide the details of both the
forward and the backwards phase.
Forward phase: The first hidden layer h1 is related to the input vector x with the weight
matrix W as h1 = W x. We can reconstruct the weights matrix W and then compute h1 for
forward propagation as follows:
W = ⎡  2  −2   0 ⎤ ;   h1 = W x = ⎡  2  −2   0 ⎤ ⎡ 2 ⎤ = ⎡  2 ⎤
    ⎢ −1   5  −1 ⎥                ⎢ −1   5  −1 ⎥ ⎢ 1 ⎥   ⎢  1 ⎥
    ⎣  0   3  −2 ⎦                ⎣  0   3  −2 ⎦ ⎣ 2 ⎦   ⎣ −1 ⎦
The hidden layer h2 is obtained by applying the ReLU function in element-wise fashion to
h1 during the forward phase. Therefore, we obtain the following:
h2 = ReLU(h1) = ReLU ⎡  2 ⎤ = ⎡ 2 ⎤
                     ⎢  1 ⎥   ⎢ 1 ⎥
                     ⎣ −1 ⎦   ⎣ 0 ⎦
Subsequently, the 1 × 3 weight matrix W2 = [−1, 1, −3] is used to transform the 3-
dimensional vector h2 to the 1-dimensional “vector” h3 as follows:
h3 = W2 h2 = [−1, 1, −3] ⎡ 2 ⎤ = −1
                         ⎢ 1 ⎥
                         ⎣ 0 ⎦
The output o is obtained by applying the sigmoid function to h3 . In other words, we have
the following:
1 1
o= = ≈ 0.27
1 + exp(−h3 ) 1+e
The point-specific loss is L = −loge(0.27) ≈ 1.3.
Backwards phase: In the backward phase, we first start by initializing ∂L/∂o to −1/o, which
is −1/0.27. Then, the 1-dimensional “gradient” g3 of the hidden layer h3 is obtained by
using the backpropagation formula for the sigmoid function in Table 11.1:

g3 = (∂L/∂o) o(1 − o) = (−1/o) o(1 − o) = o − 1 = 0.27 − 1 = −0.73    (11.18)
11.5. A GENERAL VIEW OF COMPUTATIONAL GRAPHS 475
The gradient g2 of the hidden layer h2 is obtained by multiplying g3 with the transpose of
the weight matrix W2 = [−1, 1, −3]:

g2 = W2^T g3 = ⎡ −1 ⎤ (−0.73) = ⎡  0.73 ⎤
               ⎢  1 ⎥           ⎢ −0.73 ⎥
               ⎣ −3 ⎦           ⎣  2.19 ⎦
Based on the entry in Table 11.1 for the ReLU layer, the gradient g2 can be propagated
backwards to g1 = ∂L/∂h1 by copying the components of g2 to g1 when the corresponding
components in h1 are positive; otherwise, the components of g1 are set to zero. Therefore,
the gradient g1 = ∂L/∂h1 can be obtained by simply copying the first and second components
of g2 to the first and second components of g1, and setting the third component of g1 to 0.
In other words, we have the following:

g1 = ⎡  0.73 ⎤
     ⎢ −0.73 ⎥
     ⎣  0    ⎦
Note that we can also compute the gradient g0 = ∂L/∂x of the loss with respect to the input
layer x by simply computing g0 = W^T g1. However, this is not really needed for computing
loss-to-weight derivatives.
Similarly, one can compute the loss-to-weight derivative matrix M2 for the 1 × 3 matrix W2
between h2 and h3:

M2 = g3 h2^T = (−0.73) [2, 1, 0] = [−1.46, −0.73, 0]

Note that the size of the matrix M2 is identical to that of W2, although the weights of the
missing edges should not be updated.
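The numbers in this worked example can be verified with a short numpy script (using the same W, W2, and x as above):

```python
import numpy as np

W = np.array([[2., -2., 0.], [-1., 5., -1.], [0., 3., -2.]])
W2 = np.array([[-1., 1., -3.]])
x = np.array([2., 1., 2.])

# Forward phase
h1 = W @ x                       # [2, 1, -1]
h2 = np.maximum(h1, 0.0)         # ReLU: [2, 1, 0]
h3 = (W2 @ h2)[0]                # -1
o = 1.0 / (1.0 + np.exp(-h3))    # about 0.27
L = -np.log(o)                   # about 1.3

# Backward phase
g3 = o - 1.0                     # sigmoid layer with dL/do = -1/o: about -0.73
g2 = W2.T.ravel() * g3           # [0.73, -0.73, 2.19]
g1 = g2 * (h1 > 0)               # ReLU gate: [0.73, -0.73, 0]
M2 = g3 * h2                     # [-1.46, -0.73, 0]
M1 = np.outer(g1, x)             # loss-to-weight matrix for W, same shape as W

assert np.allclose(h1, [2, 1, -1]) and np.allclose(h2, [2, 1, 0])
assert abs(o - 0.2689) < 1e-3 and abs(g3 + 0.7311) < 1e-3
```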
[Figure 11.16: (a) An undirected computational graph with two fixed input nodes and two
computational nodes x and y, connected by weights w1, w2, and w3; (b) an undirected graph
with five input states a, b, c, d, e and two hidden states x and y]
One such variation arises in different types of neural networks like Kohonen self-organizing
maps, Hopfield networks, and Boltzmann machines. These neural networks use discrete and
probabilistic data types as variables within their nodes (implicitly or explicitly).
Another important variation is the use of undirected computational graphs. In undi-
rected computational graphs, each node computes a function of the variables in nodes
incident on it, and there is no direction to the links. This is the only difference between
an undirected computational graph and a directed computational graph. As in the case of
directed computational graphs, one can define a loss function on the observed variables in
the nodes. Examples of undirected computational graphs are shown in Figure 11.16. Some
nodes are fixed (for observed data) whereas others are computational nodes. The compu-
tation can occur in both directions of an edge as long as the value in the node is not fixed
externally.
It is harder to learn the parameters in undirected computational graphs, because the
presence of cycles creates additional constraints on the values of variables in the nodes. In
fact, it is not even necessary for there to be a set of variable values in nodes that satisfy
all the functional constraints implied by the computational graph. For example, consider
a computational graph with two nodes in which the variable of each node is obtained by
adding 1 to the variable on the other node. It is impossible to find a pair of values in the two
nodes that can satisfy both constraints (because both variable values cannot be larger than
the other by 1). Therefore, one would have to be satisfied with a best-fit solution in many
cases. This situation is different from a directed acyclic graph, where appropriate variables
values can always be defined over all values of the inputs and parameters (as long as the
function in each node is computable over its inputs).
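The impossibility in this two-node example is easy to see by simulation; the alternating update schedule below is an invented illustration:

```python
# Two nodes, each constrained to be one more than the other:
#   x = y + 1 and y = x + 1  =>  adding both gives 0 = 2, so no solution.
# Repeatedly applying the constraints drives both values upward forever.
x, y = 0.0, 0.0
for _ in range(5):
    x = y + 1
    y = x + 1
# After 5 rounds: x = 9, y = 10, and the gap of "each is larger by 1"
# is never resolved; a best-fit solution is all one can hope for.
assert (x, y) == (9.0, 10.0)
```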
Undirected computational graphs are often used in all types of unsupervised algorithms,
because the cycles in these graphs help in relating other hidden nodes to the input nodes.
For example, if the variables x and y are assumed to be hidden variables in Figure 11.16(b),
this approach learns weights so that the two hidden variables correspond to the compressed
representations of 5-dimensional data. The weights are often learned to minimize a loss
function (or energy function) that rewards large weights when connected nodes are highly
correlated in a positive way. For example, if variable x is heavily correlated with input a
in a positive way, then the weight between these two nodes should be large. By learning
these weights, one can compute the hidden representation of any 5-dimensional point by
providing it as an input to the network.
The level of difficulty in learning the parameters of a computational graph is regulated by
three characteristics of the graph. The first characteristic is the structure of the graph itself.
It is generally much easier to learn the parameters of computational graphs without cycles
(which are always directed). The second characteristic is whether the variable in a node
is continuous or discrete. It is much easier to optimize the parameters of a computational
graph with continuous variables with the use of differential calculus. Finally, the function
computed at a node can be either probabilistic or deterministic. The parameters of
deterministic computational graphs are almost always much easier to optimize with observed
data. All these variations are important, and they arise in different types of machine learning
applications. Some examples of different types of computational graphs in machine learning
are discussed below.
Table 11.2 shows several variations of the computational graph paradigm in machine learn-
ing, and their specific properties. It is evident that the methodology used for a particular
problem is highly dependent on the structure of the computational graph, its variables, and
the nature of the node-specific function. We refer the reader to [6] for the neural architec-
tures of the basic machine learning models discussed in this book (like linear regression,
logistic regression, matrix factorization, and SVMs).
2 When a problem is NP-hard, it means that a polynomial-time algorithm for such a problem is not known
(although it is unknown whether one exists). More specifically, finding a polynomial-time algorithm for such
a problem would automatically provide a polynomial-time algorithm for thousands of related problems for
which no one has been able to find a polynomial-time algorithm. The inability to find a
polynomial-time algorithm for a large class of related problems is generally assumed to be evidence of the fact that the entire
set of problems is hard to solve, and a polynomial time algorithm probably does not exist for any of them.
Table 11.2: Types of computational graphs for different machine learning problems. The
properties of the computational graph vary according to the application at hand
Model                        | Cycles?          | Variable          | Function       | Methodology
-----------------------------|------------------|-------------------|----------------|----------------------------
SVM, Logistic Regression,    | No               | Continuous        | Deterministic  | Gradient Descent
Linear Regression, SVD,      |                  |                   |                |
Matrix Factorization         |                  |                   |                |
Feedforward Neural Networks  | No               | Continuous        | Deterministic  | Gradient Descent
Kohonen Map                  | Yes              | Continuous        | Deterministic  | Gradient Descent
Hopfield Networks            | Yes (Undirected) | Discrete (Binary) | Deterministic  | Iterative (Hebbian Rule)
Boltzmann Machines           | Yes (Undirected) | Discrete (Binary) | Probabilistic  | Monte Carlo Sampling +
                             |                  |                   |                | Iterative (Hebbian)
Probabilistic Graphical      | Varies           | Varies            | Probabilistic  | Varies
Models                       |                  |                   | (largely)      |
11.6 Summary
This chapter introduces the basics of computational graphs for machine learning applica-
tions. Computational graphs often have parameters associated with their edges, which need
to be learned. Learning the parameters of a computational graph from observed data pro-
vides a route to learning a function from observed data (whether it can be expressed in
closed form or not). The most commonly used type of computational graph is a directed
acyclic graph. Traditional neural networks represent a class of models that is a special case
of this type of graph. However, other types of undirected and cyclic graphs are used to
represent other models like Hopfield networks and restricted Boltzmann machines.
11.8 Exercises
1. Problem 11.2.2 proposes a loss function for the L1 -SVM in the context of a computa-
tional graph. How would you change this loss function, so that the same computational
graph results in an L2 -SVM?
2. Repeat Exercise 1 with the changed setting that you want to simulate Widrow-Hoff
learning (least-squares classification) with the same computational graph. What will
be the loss function associated with the single output node?
3. The book discusses a vector-centric view of backpropagation in which backpropagation
in linear layers can be implemented with matrix-to-vector multiplications. Discuss how
you can deal with batches of training instances at a time (i.e., mini-batch stochastic
gradient descent) by using matrix-to-matrix multiplications.
4. Let f (x) be defined as follows:
Consider the function f(f(f(f(x)))). Write this function in closed form to ob-
tain an appreciation of the awkwardly long function. Evaluate the derivative of this
function at x = π/3 radians by using a computational graph abstraction.
5. Suppose that you have a computational graph with the constraint that specific sets of
weights are always constrained to be at the same value. Discuss how you can compute
the derivative of the loss function with respect to these weights. [Note that this trick
is used frequently in the neural network literature to handle shared weights.]
6. Consider a computational graph in which you are told that the variables on the edges
satisfy k linear equality constraints. Discuss how you would train the weights of such a
graph. How would your answer change if the variables satisfied box constraints? [The
reader is advised to refer to the chapter on constrained optimization for answering
this question.]
7. Discuss why the dynamic programming algorithm for computing the gradients will
not work in the case where the computational graph contains cycles.
8. Consider the neural architecture with connections between alternate layers, as shown
in Figure 11.13(b). Suppose that the recurrence equations of this neural network are
as follows:
h1 = ReLU(W1 x)
h2 = ReLU(W2 x + W3 h1 )
y = W4 h2

9. Consider a recurrent neural network with the following recurrence equations:

o = U ht
hp = tanh(W hp−1 + V xp) ∀p ∈ {1 . . . t}

The vector output o has dimensionality k, each hp has dimensionality m, and each
xp has dimensionality d. The “tanh” function is applied in element-wise fashion.
10. Show that if we use the loss function L(o) in Exercise 9, then the loss-to-node gradient
can be computed for the final layer ht as follows:

∂L(o)/∂ht = U^T ∂L(o)/∂o

The updates in earlier layers remain similar to Exercise 9, except that each o is replaced
by L(o). What is the size of each matrix ∂L(o)/∂hp?
11. Suppose that the output structure of the neural network in Exercise 9 is changed so
that there are k-dimensional outputs o1 . . . ot in each layer, and the overall loss is
L = Σ_{i=1}^{t} L(oi). The output recurrence is op = U hp. All other recurrences remain
the same. Show that the backpropagation recurrence of the hidden layers changes as
follows:

∂L/∂ht = U^T ∂L(ot)/∂ot
∂L/∂hp−1 = W^T Δp−1 (∂L/∂hp) + U^T ∂L(op−1)/∂op−1   ∀p ∈ {2 . . . t}

Furthermore, show the following for the weight matrices:

∂L/∂U = Σ_{p=1}^{t} (∂L(op)/∂op) hp^T,   ∂L/∂W = Σ_{p=2}^{t} Δp−1 (∂L/∂hp) hp−1^T,   ∂L/∂V = Σ_{p=1}^{t} Δp (∂L/∂hp) xp^T
What are the sizes of W1, W2, and ∂L/∂v?
[Figure 11.17: Computational graphs for Exercises 21 (a) and 22 (b): three input nodes x1,
x2, x3, with the local derivative ∂y(j)/∂y(i) marked on each edge (values such as −2, 3, 1,
−1, and 2 in the original figure), an output node with o = 0.1, and loss L = −log(o)]
20. SVD with neural networks: In the previous exercise, unconstrained matrix factor-
ization finds the same k-dimensional subspace as SVD. However, it does not find an
orthonormal basis in general like SVD (see Chapter 8). Provide an iterative training
method for the computational graph of the previous section by gradually increasing
the value of k so that an orthonormal basis is found.
21. Consider the computational graph shown in Figure 11.17(a), in which the local deriva-
tive ∂y(j)/∂y(i) is shown for each edge (i, j), where y(k) denotes the activation of node k.
The output o is 0.1, and the loss L is given by −log(o). Compute the value of ∂L/∂xi for
each input xi using both the path-wise aggregation lemma and the backpropagation
algorithm.
22. Consider the computational graph shown in Figure 11.17(b), in which the local deriva-
tive ∂y(j)/∂y(i) is shown for each edge (i, j), where y(k) denotes the activation of node k.
The output o is 0.1, and the loss L is given by −log(o). Compute the value of ∂L/∂xi for
each input xi using both the path-wise aggregation lemma and the backpropagation
algorithm.
23. Convert the weighted computational graph of Figure 11.2 into an unweighted graph
by defining additional nodes containing w1 . . . w5 along with appropriately defined
hidden nodes.
24. Multinomial logistic regression with neural networks: Propose a neural net-
work architecture using the softmax activation function and an appropriate loss func-
tion that can perform multinomial logistic regression. You may refer to Chapter 4 for
details of multinomial logistic regression.
25. Weston-Watkins SVM with neural networks: Propose a neural network archi-
tecture and an appropriate loss function that is equivalent to the Weston-Watkins
SVM. You may refer to Chapter 4 for details of the Weston-Watkins SVM.
Bibliography
42. A. Emmott, S. Das, T. Dietterich, A. Fern, and W. Wong. Systematic Construction of Anomaly
Detection Benchmarks from Real Data. arXiv:1503.01158, 2015.
https://arxiv.org/abs/1503.01158
43. M. Faloutsos, P. Faloutsos, and C. Faloutsos. On power-law relationships of the internet topol-
ogy. ACM SIGCOMM Computer Communication Review, pp. 251–262, 1999.
44. R. Fan, K. Chang, C. Hsieh, X. Wang, and C. Lin. LIBLINEAR: A library for large linear
classification. Journal of Machine Learning Research, 9, pp. 1871–1874, 2008.
http://www.csie.ntu.edu.tw/∼cjlin/liblinear/
45. R. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:
pp. 179–188, 1936.
46. P. Flach. Machine learning: the art and science of algorithms that make sense of data. Cam-
bridge University Press, 2012.
47. C. Freudenthaler, L. Schmidt-Thieme, and S. Rendle. Factorization machines: Factorized poly-
nomial regression models. GPSDAA, 2011.
48. J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the
graphical lasso. Biostatistics, 9(3), pp. 432–441, 2008.
49. M. Garey, and D. S. Johnson. Computers and intractability: A guide to the theory of NP-
completeness. New York, Freeman, 1979.
50. E. Gaussier and C. Goutte. Relation between PLSA and NMF and implications. ACM SIGIR
Conference, pp. 601–602, 2005.
51. H. Gavin. The Levenberg-Marquardt method for nonlinear least squares curve-fitting prob-
lems, 2011.
http://people.duke.edu/∼hpgavin/ce281/lm.pdf
52. G. Golub and C. F. Van Loan. Matrix computations, John Hopkins University Press, 2012.
53. I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT Press, 2016.
54. I. Goodfellow, O. Vinyals, and A. Saxe. Qualitatively characterizing neural network optimiza-
tion problems. arXiv:1412.6544, 2014. [Also appears in ICLR, 2015]
https://arxiv.org/abs/1412.6544
55. A. Grover and J. Leskovec. node2vec: Scalable feature learning for networks. ACM KDD
Conference, pp. 855–864, 2016.
56. T. Hastie, R. Tibshirani, and J. Friedman. The elements of statistical learning. Springer, 2009.
57. T. Hastie, R. Tibshirani, and M. Wainwright. Statistical learning with sparsity: the lasso and
generalizations. CRC Press, 2015.
58. K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level
performance on imagenet classification. IEEE International Conference on Computer Vision,
pp. 1026–1034, 2015.
59. M. Hestenes and E. Stiefel. Methods of conjugate gradients for solving linear systems. Journal
of Research of the National Bureau of Standards, 49(6), 1952.
60. G. Hinton. Connectionist learning procedures. Artificial Intelligence, 40(1–3), pp. 185–234,
1989.
61. G. Hinton. Neural networks for machine learning, Coursera Video, 2012.
62. K. Hoffman and R. Kunze. Linear algebra, Second Edition, Pearson, 1975.
63. T. Hofmann. Probabilistic latent semantic indexing. ACM SIGIR Conference, pp. 50–57, 1999.
64. C. Hsieh, K. Chang, C. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent
method for large-scale linear SVM. ICML, pp. 408–415, 2008.
65. Y. Hu, Y. Koren, and C. Volinsky. Collaborative filtering for implicit feedback datasets. IEEE
ICDM, pp. 263–272, 2008.
66. H. Yu and B. Wilamowski. Levenberg–Marquardt training. Industrial Electronics Handbook,
5(12), 1, 2011.
67. R. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks,
1(4), pp. 295–307, 1988.
68. T. Jaakkola, and D. Haussler. Probabilistic kernel regression models. AISTATS, 1999.
69. P. Jain, P. Netrapalli, and S. Sanghavi. Low-rank matrix completion using alternating mini-
mization. ACM Symposium on Theory of Computing, pp. 665–674, 2013.
70. C. Johnson. Logistic matrix factorization for implicit feedback data. NIPS Conference, 2014.
71. H. J. Kelley. Gradient theory of optimal flight paths. Ars Journal, 30(10), pp. 947–954, 1960.
72. D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
https://arxiv.org/abs/1412.6980
73. M. Knapp. Sines and cosines of angles in arithmetic progression. Mathematics Magazine, 82(5),
2009.
74. D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT
Press, 2009.
75. Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems.
Computer, 8, pp. 30–37, 2009.
76. A. Langville, C. Meyer, R. Albright, J. Cox, and D. Duling. Initializations for the nonnegative
matrix factorization. ACM KDD Conference, pp. 23–26, 2006.
77. D. Lay, S. Lay, and J. McDonald. Linear Algebra and its applications, Pearson, 2012.
78. Q. Le, J. Ngiam, A. Coates, A. Lahiri, B. Prochnow, and A. Ng. On optimization methods for
deep learning. ICML Conference, pp. 265–272, 2011.
79. D. Lee and H. Seung. Algorithms for non-negative matrix factorization. Advances in Neural
Information Processing Systems, pp. 556–562, 2001.
80. C. J. Lin, R. C. Weng, and S. S. Keerthi. Trust region Newton method for logistic regression.
Journal of Machine Learning Research, 9, pp. 627–650, 2008.
81. T.-Y. Liu. Learning to rank for information retrieval. Foundations and Trends in Information
Retrieval, 3(3), pp. 225–231, 2009.
82. B. London and L. Getoor. Collective classification of network data. Data Classification: Algo-
rithms and Applications, CRC Press, pp. 399–416, 2014.
83. D. Luenberger and Y. Ye. Linear and nonlinear programming, Addison-Wesley, 1984.
84. U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4), pp. 395–416,
2007.
85. S. Marsland. Machine learning: An algorithmic perspective, CRC Press, 2015.
86. J. Martens. Deep learning via Hessian-free optimization. ICML Conference, pp. 735–742, 2010.
87. J. Martens and I. Sutskever. Learning recurrent neural networks with hessian-free optimiza-
tion. ICML Conference, pp. 1033–1040, 2011.
88. J. Martens, I. Sutskever, and K. Swersky. Estimating the hessian by back-propagating curva-
ture. arXiv:1206.6464, 2012.
https://arxiv.org/abs/1206.6464
89. J. Martens and R. Grosse. Optimizing Neural Networks with Kronecker-factored Approximate
Curvature. ICML Conference, 2015.
90. P. McCullagh. Regression models for ordinal data. Journal of the Royal Statistical Society,
Series B (Methodological), pp. 109–142, 1980.
91. T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations
in vector space. arXiv:1301.3781, 2013.
https://arxiv.org/abs/1301.3781
92. T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed representations of
words and phrases and their compositionality. NIPS Conference, pp. 3111–3119, 2013.
93. T. Minka. A comparison of numerical optimizers for logistic regression. Unpublished Draft,
2003.
94. T. Mitchell. Machine learning, McGraw Hill, 1997.
95. K. Murphy. Machine learning: A probabilistic perspective, MIT Press, 2012.
96. G. Nemhauser, A. Kan, and N. Todd. Nondifferentiable optimization. Handbooks in Operations
Research and Management Sciences, 1, pp. 529–572, 1989.
97. Y. Nesterov. A method of solving a convex programming problem with convergence rate
O(1/k2 ). Soviet Mathematics Doklady, 27, pp. 372–376, 1983.
98. A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. NIPS
Conference, pp. 849–856, 2002.
99. J. Nocedal and S. Wright. Numerical optimization. Springer, 2006.
100. N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3),
pp. 127–239, 2014.
101. J. Pennington, R. Socher, and C. Manning. Glove: Global Vectors for Word Representation.
EMNLP, pp. 1532–1543, 2014.
102. J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector
machines. Advances in Kernel Methods: Support Vector Learning, MIT Press, pp. 185–208, 1998.
103. B. Perozzi, R. Al-Rfou, and S. Skiena. Deepwalk: Online learning of social representations.
ACM KDD Conference, pp. 701–710, 2014.
104. E. Polak. Computational methods in optimization: a unified approach. Academic Press, 1971.
105. B. Polyak and A. Juditsky. Acceleration of stochastic approximation by averaging. SIAM
Journal on Control and Optimization, 30(4), pp. 838–855, 1992.
106. N. Qian. On the momentum term in gradient descent learning algorithms. Neural Networks,
12(1), pp. 145–151, 1999.
107. S. Rendle. Factorization machines. IEEE ICDM Conference, pp. 995–1000, 2010.
108. S. Rendle. Factorization machines with libfm. ACM Transactions on Intelligent Systems and
Technology, 3(3), 57, 2012.
109. F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization
in the brain. Psychological Review, 65(6), 386, 1958.
110. D. Rumelhart, G. Hinton, and R. Williams. Learning internal representations by back-
propagating errors. In Parallel Distributed Processing: Explorations in the Microstructure of
Cognition, pp. 318–362, 1986.
111. T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. ICML Conference, pp.
343–351, 2013.
112. B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue
problem. Neural Computation, 10(5), pp. 1299–1319, 1998.
113. B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating
the Support of a High-Dimensional Distribution. Neural Computation, 13(7), pp. 1443–1472,
2001.
114. J. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain.
Technical Report, CMU-CS-94-125, Carnegie-Mellon University, 1994.
115. J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 22(8), pp. 888–905, 2000.
116. N. Shor. Minimization methods for non-differentiable functions (Vol. 3). Springer Science and
Business Media, 2012.
117. A. Singh and G. Gordon. A unified view of matrix factorization models. Joint European
Conference on Machine Learning and Knowledge Discovery in Databases, pp. 358–373, 2008.
118. B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization,
optimization, and beyond. Cambridge University Press, 2001.
119. J. Solomon. Numerical Algorithms: Methods for Computer Vision, Machine Learning, and
Graphics. CRC Press, 2015.
120. N. Srebro, J. Rennie, and T. Jaakkola. Maximum-margin matrix factorization. Advances in
neural information processing systems, pp. 1329–1336, 2004.
121. G. Strang. The discrete cosine transform. SIAM review, 41(1), pp. 135–147, 1999.
122. G. Strang. Introduction to linear algebra, Fifth Edition. Wellesley-Cambridge Press, 2016.
123. G. Strang. Linear algebra and its applications, Fourth Edition. Brooks Cole, 2011.
124. G. Strang and K. Borre. Linear algebra, geodesy, and GPS. Wellesley-Cambridge Press, 1997.
125. G. Strang. Linear algebra and learning from data. Wellesley-Cambridge Press, 2019.
126. J. Tenenbaum, V. De Silva, and J. Langford. A global geometric framework for nonlinear
dimensionality reduction. Science, 290 (5500), pp. 2319–2323, 2000.
127. A. Tikhonov and V. Arsenin. Solution of ill-posed problems. Winston and Sons, 1977.
128. M. Udell, C. Horn, R. Zadeh, and S. Boyd. Generalized low rank models. Foundations and
Trends in Machine Learning, 9(1), pp. 1–118, 2016.
https://github.com/madeleineudell/LowRankModels.jl
129. G. Wahba. Support vector machines, reproducing kernel Hilbert spaces and the randomized
GACV. Advances in Kernel Methods-Support Vector Learning, 6, pp. 69–87, 1999.
130. H. Wendland. Numerical linear algebra: An introduction. Cambridge University Press, 2018.
131. P. Werbos. Beyond Regression: New Tools for Prediction and Analysis in the Behavioral
Sciences. PhD thesis, Harvard University, 1974.
132. B. Widrow and M. Hoff. Adaptive switching circuits. IRE WESCON Convention Record, 4(1),
pp. 96–104, 1960.
133. C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. NIPS
Conference, 2000.
134. S. Wright. Coordinate descent algorithms. Mathematical Programming, 151(1), pp. 3–34, 2015.
135. T. T. Wu, and K. Lange. Coordinate descent algorithms for lasso penalized regression. The
Annals of Applied Statistics, 2(1), pp. 224–244, 2008.
136. H. Yu, F. Huang, and C. J. Lin. Dual coordinate descent methods for logistic regression and
maximum entropy models. Machine Learning, 85(1–2), pp. 41–75, 2011.
137. H. Yu, C. Hsieh, S. Si, and I. S. Dhillon. Scalable coordinate descent approaches to parallel
matrix factorization for recommender systems. IEEE ICDM, pp. 765–774, 2012.
138. R. Zafarani, M. A. Abbasi, and H. Liu. Social media mining: an introduction. Cambridge
University Press, 2014.
139. M. Zeiler. ADADELTA: an adaptive learning rate method. arXiv:1212.5701, 2012.
https://arxiv.org/abs/1212.5701
140. T. Zhang. On the dual formulation of regularized linear systems with convex risks. Machine
Learning, 46(1–3), pp. 81–129, 2002.
141. Y. Zhou, D. Wilkinson, R. Schreiber, and R. Pan. Large-scale parallel collaborative filtering
for the Netflix prize. Algorithmic Aspects in Information and Management, pp. 337–348, 2008.
142. J. Zhu and T. Hastie. Kernel logistic regression and the import vector machine. Advances in
neural information processing systems, 2002.
143. X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and
harmonic functions. ICML Conference, pp. 912–919, 2003.
144. https://www.csie.ntu.edu.tw/~cjlin/libmf/
Index
Symbols
(PD) Constraints, 275

A
Activation, 451
AdaGrad, 214
Adam Algorithm, 215
Additively Separable Functions, 128
Adjacency Matrix, 414
Affine Transform, 42
Affine Transform Definition, 43
Algebraic Multiplicity, 110
Alternating Least-Squares Method, 197
Alternating Least Squares, 349
Anisotropic Scaling, 49
Armijo Rule, 162
Asymmetric Laplacian, 426

B
Backpropagation, 459
Barrier Function, 289
Barrier Methods, 288
Basis, 54
Basis Change Matrix, 104
BFGS, 237, 251
Binary Search, 161
Block Coordinate Descent, 197, 349
Block Diagonal Matrix, 13, 419
Block Upper-Triangular Matrix, 419
Bold-Driver Algorithm, 160
Box Regression, 269

C
Cauchy-Schwarz Inequality, 6
Cayley-Hamilton Theorem, 106
Chain Rule for Vectored Derivatives, 175
Chain Rule of Calculus, 174
Characteristic Polynomial, 105
Cholesky Factorization, 119
Closed Convex Set, 155
Clustering Graphs, 423
Collective Classification, 440
Compact Singular Value Decomposition, 307
Competitive Learning Algorithm, 477
Complementary Slackness Condition, 275
Complex Eigenvalues, 107
Computational Graphs, 34
Condition Number of a Matrix, 85, 326
Conjugate Gradient Method, 233
Conjugate Transpose, 88
Connected Component of Graph, 413
Connected Graph, 413
Constrained Optimization, 255
Convergence in Gradient Descent, 147
Convex Objective Functions, 124, 154
Convex Sets, 154
Coordinate, 2, 55
Coordinate Descent, 194, 348
Coordinate Descent in Recommenders, 348
Cosine Law, 7
Covariance Matrix, 122
Critical Points, 143
Cycle, 413

D
Data-Specific Mercer Kernel Map, 383
Davidon–Fletcher–Powell, 238
Decision Boundary, 181
Decomposition of Matrices, 339
Defective Matrix, 110
Degree Centrality, 434
Degree Matrix, 416
Degree of a Vertex, 413
Degree Prestige, 434
Denominator Layout, 170
DFP, 238
Diagonal Entries of a Matrix, 13
Diameter of Graph, 414
Dimensionality Reduction, 307
Directed Acyclic Graphs, 413
Directed Graph, 412
Directed Link Prediction, 430
Discrete Cosine Transform, 77
Discrete Fourier Transform, 79, 89
Discrete Wavelet Transform, 60
Disjoint Vector Spaces, 61
Divergence in Gradient Descent, 148
Document-Term Matrix, 340
Duality, 255
Dynamic Programming, 248, 453

E
Economy Singular Value Decomposition, 306
Eigenspace, 110, 436
Eigenvalues, 104
Eigenvector Centrality, 434
Eigenvector Prestige, 434
Eigenvectors, 104
Elastic-Net regression, 244
Elementary Matrix, 22
Energy, 20, 311
Epoch, 165
Ergodic Markov Chain, 432
Euler Identity, 33, 87

F
Factorization of Matrices, 299, 339
Fat Matrix, 3
Feasible Direction method, 256
Feature Engineering, 329
Feature Preprocessing with PCA, 327
Feature Spaces, 383
Fields, 3
Finite-Difference Approximation, 159
Frobenius Inner Product, 309
Full Rank Matrix, 63
Full Row/Column Rank, 63
Full Singular Value Decomposition, 306
Fundamental Subspaces of Linear Algebra, 63, 325

G
Gaussian Elimination, 65
Gaussian Radial Basis Kernel, 405
Generalization, 165
Generalized Low-Rank Models, 365
Geometric Multiplicity, 111
Givens Rotation, 47
Global Minimum, 145
GloVe, 361
Golden-Section Search, 161
Gram-Schmidt Orthogonalization, 73
Gram Matrix, 72
Graphs, 411

H
Hard Tanh Activation, 451
Hessian, 127, 152, 217
Hessian-free Optimization, 233
Hinge Loss, 184
Homogeneous System of Equations, 327
Homophily, 440
Householder Reflection Matrix, 47
Huber Loss, 223
Hyperbolic Tangent Activation, 451

I
Idempotent Property, 83
Identity Activation, 451
Ill-Conditioned Matrices, 85
Implicit Feedback Data, 341
Indefinite Matrix, 118
Indegree of a Vertex, 413
Inflection Point, 143
Initialization, 163
Inner Product, 86, 309
Interior Point Methods, 288
Irreducible Matrix, 420
ISOMAP, 394
ISTA, 246

K
K-Means Algorithm, 197, 342
Katz Measure, 418
Kernel Feature Spaces, 383
Kernel K-Means, 395, 397
Kernel Methods, 122
Kernel PCA, 391
Kernel SVD, 384
Kernel SVM, 396, 398
Kernel Trick, 395, 397
Kuhn-Tucker Optimality Conditions, 274

L
L-BFGS, 237, 239, 251
Lagrangian Relaxation, 270
Laplacian, 426
Latent Components, 306
Latent Semantic Analysis, 323
Learning Rate Decay, 159
Learning Rate in Gradient Descent, 33, 146
Left Eigenvector, 108
Left Gram Matrix, 73
Left Inverse, 79
Left Null Space, 63
Leibniz formula, 100
Levenberg–Marquardt Algorithm, 251
libFM, 374
LIBLINEAR, 199
Linear Activation, 451
Linear Conjugate Gradient Method, 237
Linear Independence/Dependence, 53
Linear Kernel, 405
Linearly Additive Functions, 149
Linear Programming, 257
Linear Transform as Matrix Multiplication, 9
Linear Transform Definition, 42
Line Search, 160
Link Prediction, 360
Local Minimum, 145
Logarithmic Barrier Function, 289
Loss Function, 142

M
Mahalanobis Distance, 329
Manhattan Norm, 5
Markov Chain, 432
Matrix Calculus, 170
Matrix Decomposition, 339
Matrix Factorization, 299, 339
Matrix Inversion, 67
Matrix Inversion Lemma, 18
Maximum Margin Matrix Factorization, 364
Minimax Theorem, 272
Momentum-based Learning, 212
Moore-Penrose Pseudoinverse, 81, 179, 325
Multivariate Chain Rule, 175

N
Negative Semidefinite Matrix, 118
Newton Update, 218
Nilpotent Matrix, 14
Noise Removal with SVD, 324
Non-Differentiable Optimization, 239
Nonlinear Conjugate Gradient Method, 237
Nonnegative Matrix Factorization, 350
Nonsingular Matrix, 15
Norm, 5
Normal Equation, 56, 80
Null Space, 63
Nyström Technique, 385

O
One-Sided Inverse, 79
Open Convex Set, 155
Orthogonal Complementary Subspace, 62
Orthogonal Matrix, 17
Orthogonal Vectors, 7
Orthogonal Vector Spaces, 61
Orthonormal Vectors, 7
Outdegree of a Vertex, 413
Outer Product, 10
Outlier Detection, 328
Overfitting, 166

T
Tall Matrix, 3
Taylor Expansion, 31, 217
Trace, 20, 113
Triangle Inequality, 6
Triangular Matrix, 13
Triangular Matrix Inversion, 18
Truncated SVD, 307
Trust Region Method, 232
Tuning Hyperparameters, 168

U
Undirected Graph, 412
Unitary Matrix, 89
Univariate Optimization, 142
Upper Triangular Matrix, 13

V
Vector Space, 51, 87
Vertex Classification, 440
Von Mises Iterations, 133

W
Walk, 413
Weak Duality, 272
Weighted Graph, 412
Weston-Watkins SVM, 190
Whitening with PCA, 327
Wide Matrix, 3
Widrow-Hoff Update, 183
Woodbury Identity, 19
Word2vec, 364