LINEAR ALGEBRA
with Applications
Open Edition
BASE TEXTBOOK
VERSION 2019 – REVISION A
by W. Keith Nicholson
Creative Commons License (CC BY-NC-SA)
OPEN TEXTS
All digital forms of access to our high-quality open texts are entirely FREE! All content is reviewed for excellence and is wholly adaptable; custom editions are produced by Lyryx for those adopting Lyryx assessment. Access to the original source files is also open to anyone!

ONLINE ASSESSMENT
We have been developing superior online formative assessment for more than 15 years. Our questions are continuously adapted with the content and reviewed for quality and sound pedagogy. To enhance learning, students receive immediate personalized feedback. Student grade reports and performance statistics are also provided.

SUPPORT
Access to our in-house support team is available 7 days/week to provide prompt resolution to both student and instructor inquiries. In addition, we work one-on-one with instructors to provide a comprehensive system, customized for their course. This can include adapting the text, managing multiple sections, and more!

INSTRUCTOR SUPPLEMENTS
Additional instructor resources are also freely accessible. Product dependent, these supplements include: full sets of adaptable slides and lecture notes, solutions manuals, and multiple choice question banks with an exam building tool.
CONTRIBUTIONS
Author
W. Keith Nicholson, University of Calgary
LICENSE
Creative Commons License (CC BY-NC-SA): This text, including the art and illustrations, is available under the Creative Commons license (CC BY-NC-SA), allowing anyone to reuse, revise, remix and redistribute the text.
To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/
2 Matrix Algebra 35
2.1 Matrix Addition, Scalar Multiplication, and Transposition . . . . . . . . . . . . . . . . . . 35
2.2 Matrix-Vector Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.3 Matrix Multiplication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.4 Matrix Inverses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
2.5 Elementary Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
2.6 Linear Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
2.7 LU-Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
2.8 An Application to Input-Output Economic Models . . . . . . . . . . . . . . . . . . . . . 128
2.9 An Application to Markov Chains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Supplementary Exercises for Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
8 Orthogonality 415
8.1 Orthogonal Complements and Projections . . . . . . . . . . . . . . . . . . . . . . . . . . 415
8.2 Orthogonal Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
8.3 Positive Definite Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 433
8.4 QR-Factorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
8.5 Computing Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
B Proofs 611
D Polynomials 623
Index 665
Foreword
Mathematics education at the beginning university level is closely tied to the traditional publishers. In my
opinion, it gives them too much control of both cost and content. The main goal of most publishers is
profit, and the result has been a sales-driven business model as opposed to a pedagogical one. This results
in frequent new “editions” of textbooks motivated largely to reduce the sale of used books rather than to
update content quality. It also introduces copyright restrictions which stifle the creation and use of new
pedagogical methods and materials. The overall result is high cost textbooks which may not meet the
evolving educational needs of instructors and students.
To be fair, publishers do try to produce material that reflects new trends. But their goal is to sell books
and not necessarily to create tools for student success in mathematics education. Sadly, this has led to
a model where the primary choice for adapting to (or initiating) curriculum change is to find a different
commercial textbook. My editor once said that the text that is adopted is often everyone’s third choice.
Of course instructors can produce their own lecture notes, and have done so for years, but this remains
an onerous task. The publishing industry arose from the need to provide authors with copy-editing, edi-
torial, and marketing services, as well as extensive reviews of prospective customers to ascertain market
trends and content updates. These are necessary skills and services that the industry continues to offer.
Authors of open educational resources (OER) including (but not limited to) textbooks and lecture
notes, cannot afford this on their own. But they do have two great advantages: The cost to students is
significantly lower, and open licenses return content control to instructors. Through editable file formats
and open licenses, OER can be developed, maintained, reviewed, edited, and improved by a variety of
contributors. Instructors can now respond to curriculum change by revising and reordering material to
create content that meets the needs of their students. While editorial and quality control remain daunting
tasks, great strides have been made in addressing the issues of accessibility, affordability and adaptability
of the material.
For the above reasons I have decided to release my text under an open license, even though it was
published for many years through a traditional publisher.
Supporting students and instructors in a typical classroom requires much more than a textbook. Thus,
while anyone is welcome to use and adapt my text at no cost, I also decided to work closely with Lyryx
Learning. With colleagues at the University of Calgary, I helped create Lyryx almost 20 years ago. The
original idea was to develop quality online assessment (with feedback) well beyond the multiple-choice
style then available. Now Lyryx also works to provide and sustain open textbooks; working with authors,
contributors, and reviewers to ensure instructors need not sacrifice quality and rigour when switching to
an open text.
I believe this is the right direction for mathematical publishing going forward, and look forward to
being a part of how this new approach develops.
Preface
This textbook is an introduction to the ideas and techniques of linear algebra for first- or second-year
students with a working knowledge of high school algebra. The contents have enough flexibility to present
a traditional introduction to the subject, or to allow for a more applied course. Chapters 1–4 contain a one-
semester course for beginners whereas Chapters 5–9 contain a second semester course (see the Suggested
Course Outlines below). The text is primarily about real linear algebra with complex numbers being
mentioned when appropriate (reviewed in Appendix A). Overall, the aim of the text is to achieve a balance
among computational skills, theory, and applications of linear algebra. Calculus is not a prerequisite;
places where it is mentioned may be omitted.
As a rule, students of linear algebra learn by studying examples and solving problems. Accordingly,
the book contains a variety of exercises (over 1200, many with multiple parts), ordered as to their difficulty.
In addition, more than 375 solved examples are included in the text, many of which are computational in
nature. The examples are also used to motivate (and illustrate) concepts and theorems, carrying the student
from concrete to abstract. While the treatment is rigorous, proofs are presented at a level appropriate to
the student and may be omitted with no loss of continuity. As a result, the book can be used to give a
course that emphasizes computation and examples, or to give a more theoretical treatment (some longer
proofs are deferred to the end of the section).
Linear Algebra has application to the natural sciences, engineering, management, and the social sci-
ences as well as mathematics. Consequently, 18 optional “applications” sections are included in the text
introducing topics as diverse as electrical networks, economic models, Markov chains, linear recurrences,
systems of differential equations, and linear codes over finite fields. Additionally some applications (for
example linear dynamical systems, and directed graphs) are introduced in context. The applications sec-
tions appear at the end of the relevant chapters to encourage students to browse.
This text includes the basis for a two-semester course in linear algebra.
• Chapters 1–4 provide a standard one-semester course of 35 lectures, including linear equations, ma-
trix algebra, determinants, diagonalization, and geometric vectors, with applications as time permits.
At Calgary, we cover Sections 1.1–1.3, 2.1–2.6, 3.1–3.3, and 4.1–4.4 and the course is taken by all
science and engineering students in their first semester. Prerequisites include a working knowledge
of high school algebra (algebraic manipulations and some familiarity with polynomials); calculus is
not required.
• Chapters 5–9 contain a second semester course including Rn , abstract vector spaces, linear trans-
formations (and their matrices), orthogonality, complex matrices (up to the spectral theorem) and
applications. There is more material here than can be covered in one semester, and at Calgary we
cover Sections 5.1–5.5, 6.1–6.4, 7.1–7.3, 8.1–8.7, and 9.1–9.3 with a couple of applications as time
permits.
• Chapter 5 is a “bridging” chapter that introduces concepts like spanning, independence, and basis
in the concrete setting of Rn , before venturing into the abstract in Chapter 6. The duplication is
balanced by the value of reviewing these notions, and it enables the student to focus in Chapter 6
on the new idea of an abstract system. Moreover, Chapter 5 completes the discussion of rank and
diagonalization from earlier chapters, and includes a brief introduction to orthogonality in Rn , which
creates the possibility of a one-semester, matrix-oriented course covering Chapters 1–5 for students
not wanting to study the abstract theory.
CHAPTER DEPENDENCIES
The following chart suggests how the material introduced in each chapter draws on concepts covered in
certain earlier chapters. A solid arrow means that ready assimilation of ideas and techniques presented
in the later chapter depends on familiarity with the earlier chapter. A broken arrow indicates that some
reference to the earlier chapter is made but the chapter need not be covered.
• Two-stage definition of matrix multiplication. First, in Section 2.2 matrix-vector products are
introduced naturally by viewing the left side of a system of linear equations as a product. Second,
matrix-matrix products are defined in Section 2.3 by taking the columns of a product AB to be A
times the corresponding columns of B. This is motivated by viewing the matrix product as compo-
sition of maps (see next item). This works well pedagogically and the usual dot-product definition
follows easily. As a bonus, the proof of associativity of matrix multiplication now takes four lines.
• Matrices as transformations. Matrix-column multiplications are viewed (in Section 2.2) as trans-
formations Rn → Rm . These maps are then used to describe simple geometric reflections and rota-
tions in R2 as well as systems of linear equations.
• Early linear transformations. It has been said that vector spaces exist so that linear transformations
can act on them—consequently these maps are a recurring theme in the text. Motivated by the matrix
transformations introduced earlier, linear transformations Rn → Rm are defined in Section 2.6, their
standard matrices are derived, and they are then used to describe rotations, reflections, projections,
and other operators on R2 .
• Early diagonalization. As requested by engineers and scientists, this important technique is pre-
sented in the first term using only determinants and matrix inverses (before defining independence
and dimension). Applications to population growth and linear recurrences are given.
• Early dynamical systems. These are introduced in Chapter 3, and lead (via diagonalization) to
applications like the possible extinction of species. Beginning students in science and engineering
can relate to this because they can see (often for the first time) the relevance of the subject to the real
world.
• Bridging chapter. Chapter 5 lets students deal with tough concepts (like independence, spanning,
and basis) in the concrete setting of Rn before having to cope with abstract vector spaces in Chap-
ter 6.
• Examples. The text contains over 375 worked examples, which present the main techniques of the
subject, illustrate the central ideas, and are keyed to the exercises in each section.
• Exercises. The text contains a variety of exercises (nearly 1175, many with multiple parts), starting
with computational problems and gradually progressing to more theoretical exercises. Select solu-
tions are available at the end of the book or in the Student Solution Manual. A complete
Solution Manual is available for instructors.
• Applications. There are optional applications at the end of most chapters (see the list below).
While some are presented in the course of the text, most appear at the end of the relevant chapter to
encourage students to browse.
• Appendices. Because complex numbers are needed in the text, they are described in Appendix A,
which includes the polar form and roots of unity. Methods of proofs are discussed in Appendix B,
followed by mathematical induction in Appendix C. A brief discussion of polynomials is included
in Appendix D. All these topics are presented at the high-school level.
• Rigour. Proofs are presented as clearly as possible (some at the end of the section), but they are
optional and the instructor can choose how much he or she wants to prove. However the proofs are
there, so this text is more rigorous than most. Linear algebra provides one of the better venues where
students begin to think logically and argue concisely. To this end, there are exercises that ask the
student to “show” some simple implication, and others that ask her or him to either prove a given
statement or give a counterexample. I personally present a few proofs in the first semester course
and more in the second (see the Suggested Course Outlines).
• Major Theorems. Several major results are presented in the book. Examples: Uniqueness of the
reduced row-echelon form; the cofactor expansion for determinants; the Cayley-Hamilton theorem;
the Jordan canonical form; Schur’s theorem on block triangular form; the principal axes and spectral
theorems; and others. Proofs are included because the stronger students should at least be aware of
what is involved.
CHAPTER SUMMARIES
A standard treatment of gaussian elimination is given. The rank of a matrix is introduced via the row-
echelon form, and solutions to a homogeneous system are presented as linear combinations of basic solu-
tions. Applications to network flows, electrical networks, and chemical reactions are provided.
After a traditional look at matrix addition, scalar multiplication, and transposition in Section 2.1, matrix-
vector multiplication is introduced in Section 2.2 by viewing the left side of a system of linear equations
as the product Ax of the coefficient matrix A with the column x of variables. The usual dot-product
definition of a matrix-vector multiplication follows. Section 2.2 ends by viewing an m × n matrix A as a
transformation Rn → Rm . This is illustrated for R2 → R2 by describing reflection in the x axis, rotation of
R2 through π/2, shears, and so on.
In Section 2.3, the product of matrices A and B is defined by AB = [ Ab1 Ab2 · · · Abn ], where
the bi are the columns of B. A routine computation shows that this is the matrix of the transformation B
followed by A. This observation is used frequently throughout the book, and leads to simple, conceptual
proofs of the basic axioms of matrix algebra. Note that linearity is not required—all that is needed is some
basic properties of matrix-vector multiplication developed in Section 2.2. Thus the usual arcane definition
of matrix multiplication is split into two well motivated parts, each an important aspect of matrix algebra.
Of course, this has the pedagogical advantage that the conceptual power of geometry can be invoked to
illuminate and clarify algebraic techniques and definitions.
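The column-wise definition is easy to experiment with. The following sketch (our illustration, not part of the text) checks in NumPy that building AB one column at a time, as A times the corresponding column of B, reproduces the usual matrix product; the matrices here are arbitrary examples.

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Column j of AB is A times column j of B.
AB = np.column_stack([A @ B[:, j] for j in range(B.shape[1])])

assert np.allclose(AB, A @ B)  # agrees with the usual dot-product definition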
In Sections 2.4 and 2.5 matrix inverses are characterized, their geometrical meaning is explored, and
block multiplication is introduced, emphasizing those cases needed later in the book. Elementary ma-
trices are discussed, and the Smith normal form is derived. Then in Section 2.6, linear transformations
Rn → Rm are defined and shown to be matrix transformations. The matrices of reflections, rotations, and
projections in the plane are determined. Finally, matrix multiplication is related to directed graphs, matrix
LU-factorization is introduced, and applications to economic models and Markov chains are presented.
The cofactor expansion is stated (proved by induction later) and used to define determinants inductively
and to deduce the basic rules. The product and adjugate theorems are proved. Then the diagonalization
algorithm is presented (motivated by an example about the possible extinction of a species of birds). As
requested by our Engineering Faculty, this is done earlier than in most texts because it requires only deter-
minants and matrix inverses, avoiding any need for subspaces, independence and dimension. Eigenvectors
of a 2 × 2 matrix A are described geometrically (using the A-invariance of lines through the origin). Di-
agonalization is then used to study discrete linear dynamical systems and to discuss applications to linear
recurrences and systems of differential equations. A brief discussion of Google PageRank is included.
Vectors are presented intrinsically in terms of length and direction, and are related to matrices via coordi-
nates. Then vector operations are defined using matrices and shown to be the same as the corresponding
intrinsic definitions. Next, dot products and projections are introduced to solve problems about lines and
planes. This leads to the cross product. Then matrix transformations are introduced in R3 , matrices of pro-
jections and reflections are derived, and areas and volumes are computed using determinants. The chapter
closes with an application to computer graphics.
Subspaces, spanning, independence, and dimensions are introduced in the context of Rn in the first two
sections. Orthogonal bases are introduced and used to derive the expansion theorem. The basic properties
of rank are presented and used to justify the definition given in Section 1.2. Then, after a rigorous study of
diagonalization, best approximation and least squares are discussed. The chapter closes with an application
to correlation and variance.
This is a “bridging” chapter, easing the transition to abstract spaces. Concern about duplication with
Chapter 6 is mitigated by the fact that this is the most difficult part of the course and many students
welcome a repeat discussion of concepts like independence and spanning, albeit in the abstract setting.
In a different direction, Chapters 1–5 could serve as a solid introduction to linear algebra for students not
requiring abstract theory.
Building on the work on Rn in Chapter 5, the basic theory of abstract finite dimensional vector spaces is
developed emphasizing new examples like matrices, polynomials and functions. This is the first acquain-
tance most students have had with an abstract system, so not having to deal with spanning, independence
and dimension in the general context eases the transition to abstract thinking. Applications to polynomials
and to differential equations are included.
General linear transformations are introduced, motivated by many examples from geometry, matrix theory,
and calculus. Then kernels and images are defined, the dimension theorem is proved, and isomorphisms
are discussed. The chapter ends with an application to linear recurrences. A proof is included that the
order of a differential equation (with constant coefficients) equals the dimension of the space of solutions.
Chapter 8: Orthogonality.
The study of orthogonality in Rn , begun in Chapter 5, is continued. Orthogonal complements and pro-
jections are defined and used to study orthogonal diagonalization. This leads to the principal axes theo-
rem, the Cholesky factorization of a positive definite matrix, QR-factorization, and to a discussion of the
singular value decomposition, the polar form, and the pseudoinverse. The theory is extended to Cn in
Section 8.7 where hermitian and unitary matrices are discussed, culminating in Schur’s theorem and the
spectral theorem. A short proof of the Cayley-Hamilton theorem is also presented. In Section 8.8 the field
Z p of integers modulo p is constructed informally for any prime p, and codes are discussed over any finite
field. The chapter concludes with applications to quadratic forms, constrained optimization, and statistical
principal component analysis.
The matrix of a general linear transformation is defined and studied. In the case of an operator, the rela-
tionship between basis changes and similarity is revealed. This is illustrated by computing the matrix of a
rotation about a line through the origin in R3 . Finally, invariant subspaces and direct sums are introduced,
related to similarity, and (as an example) used to show that every involution is similar to a diagonal matrix
with diagonal entries ±1.
General inner products are introduced and distance, norms, and the Cauchy-Schwarz inequality are dis-
cussed. The Gram-Schmidt algorithm is presented, projections are defined and the approximation theorem
is proved (with an application to Fourier approximation). Finally, isometries are characterized, and dis-
tance preserving operators are shown to be composites of a translation and an isometry.
The work in Chapter 9 is continued. Invariant subspaces and direct sums are used to derive the block
triangular form. That, in turn, is used to give a compact proof of the Jordan canonical form. Of course the
level is higher.
Appendices
In Appendix A, complex arithmetic is developed far enough to find nth roots. In Appendix B, methods of
proof are discussed, while Appendix C presents mathematical induction. Finally, Appendix D describes
the properties of polynomials in elementary terms.
LIST OF APPLICATIONS
ACKNOWLEDGMENTS
Many colleagues have contributed to the development of this text over many years of publication, and I
especially thank the following instructors for their reviews of the 7th edition:
Robert Andre
University of Waterloo
Dietrich Burbulla
University of Toronto
Dzung M. Ha
Ryerson University
Mark Solomonovich
Grant MacEwan
Fred Szabo
Concordia University
Edward Wang
Wilfrid Laurier University
Petr Zizler
Mount Royal University
It is also a pleasure to recognize the contributions of several people. Discussions with Thi Dinh and
Jean Springer have been invaluable and many of their suggestions have been incorporated. Thanks are
also due to Kristine Bauer and Clifton Cunningham for several conversations about the new way to look
at matrix multiplication. I also wish to extend my thanks to Joanne Canape for being there when I had
technical questions. Thanks also go to Jason Nicholson for his help in various aspects of the book, partic-
ularly the Solutions Manual. Finally, I want to thank my wife Kathleen, without whose understanding and
cooperation, this book would not exist.
As we undertake this new publishing model with the text as an open educational resource, I would also
like to thank my previous publisher. The team who supported my text greatly contributed to its success.
Now that the text has an open license, we have a much more fluid and powerful mechanism to incorpo-
rate comments and suggestions. The editorial group at Lyryx invites instructors and students to contribute
to the text, and also offers to provide adaptations of the material for specific courses. Moreover the LaTeX
source files are available to anyone wishing to do the adaptation and editorial work themselves!
W. Keith Nicholson
University of Calgary
1. Systems of Linear Equations
Practical problems in many fields of study—such as biology, business, chemistry, computer science, eco-
nomics, electronics, engineering, physics and the social sciences—can often be reduced to solving a sys-
tem of linear equations. Linear algebra arose from attempts to find systematic methods for solving these
systems, so it is natural to begin this book by studying linear equations.
If a, b, and c are real numbers, the graph of an equation of the form
ax + by = c
is a straight line (if a and b are not both zero), so such an equation is called a linear equation in the
variables x and y. However, it is often convenient to write the variables as x1 , x2 , . . . , xn , particularly
when more than two variables are involved. An equation of the form
a1 x1 + a2 x2 + · · · + an xn = b
is called a linear equation in the n variables x1 , x2 , . . . , xn . Here a1 , a2 , . . . , an denote real numbers
(called the coefficients of x1 , x2 , . . . , xn , respectively) and b is also a number (called the constant term
of the equation). A finite collection of linear equations in the variables x1 , x2 , . . . , xn is called a system of
linear equations in these variables. Hence,
2x1 − 3x2 + 5x3 = 7
is a linear equation; the coefficients of x1 , x2 , and x3 are 2, −3, and 5, and the constant term is 7. Note that
each variable in a linear equation occurs to the first power only.
Given a linear equation a1 x1 + a2 x2 + · · · + an xn = b, a sequence s1 , s2 , . . . , sn of n numbers is called
a solution to the equation if
a1 s 1 + a2 s 2 + · · · + an s n = b
that is, if the equation is satisfied when the substitutions x1 = s1 , x2 = s2 , . . . , xn = sn are made. A
sequence of numbers is called a solution to a system of equations if it is a solution to every equation in
the system.
For example, x = −2, y = 5, z = 0 and x = 0, y = 4, z = −1 are both solutions to the system
x+y+ z=3
2x + y + 3z = 1
A system may have no solution at all, or it may have a unique solution, or it may have an infinite family of
solutions. For instance, the system x + y = 2, x + y = 3 has no solution because the sum of two numbers
cannot be 2 and 3 simultaneously. A system that has no solution is called inconsistent; a system with at
least one solution is called consistent. The system in the following example has infinitely many solutions.
Example 1.1.1
Show that, for arbitrary values of s and t,
x1 = t − s + 1
x2 = t + s + 2
x3 = s
x4 = t
is a solution of the system
x1 − 2x2 + 3x3 + x4 = −3
2x1 − x2 + 3x3 − x4 = 0
Solution. Simply substitute these values of x1 , x2 , x3 , and x4 in each equation:
x1 − 2x2 + 3x3 + x4 = (t − s + 1) − 2(t + s + 2) + 3s + t = −3
2x1 − x2 + 3x3 − x4 = 2(t − s + 1) − (t + s + 2) + 3s − t = 0
Because both equations are satisfied, it is a solution for all choices of s and t.
The quantities s and t in Example 1.1.1 are called parameters, and the set of solutions, described in
this way, is said to be given in parametric form and is called the general solution to the system. It turns
out that the solutions to every system of equations (if there are solutions) can be given in parametric form
(that is, the variables x1 , x2 , . . . are given in terms of new independent variables s, t, etc.). The following
example shows how this happens in the simplest systems where only one equation is present.
Example 1.1.2
Describe all solutions to 3x − y + 2z = 6 in parametric form.
Solution. Solving the equation for y in terms of x and z, we get y = 3x + 2z − 6. If s and t are
arbitrary then, setting x = s, z = t, we get solutions
x=s
y = 3s + 2t − 6 s and t arbitrary
z=t
The same family of solutions can be described in other ways: for example, solving the equation for x in terms of y = p and z = q gives
x = (1/3)(p − 2q + 6)
y = p        p and q arbitrary
z = q
When only two variables are involved, the solutions to systems of linear equations can be described geometrically because the graph of a linear equation ax + by = c is a straight line if a and b are not both zero. Moreover, a point P(s, t) with coordinates s and t lies on the line if and only if as + bt = c—that is when x = s, y = t is a solution to the equation. Hence the solutions to a system of linear equations correspond to the points P(s, t) that lie on all the lines in question.
In particular, if the system consists of just one equation, there must be infinitely many solutions because there are infinitely many points on a line. If the system has two equations, there are three possibilities for the corresponding straight lines:
1. The lines intersect at a single point. Then the system has a unique solution corresponding to that point.
2. The lines are parallel (and distinct) and so do not intersect. Then the system has no solution.
3. The lines are identical. Then the system has infinitely many solutions—one for each point on the (common) line.
These three situations are illustrated in Figure 1.1.1. In each case the graphs of two specific lines are plotted and the corresponding equations are indicated: (a) unique solution (x = 2, y = 1), where the lines x − y = 1 and x + y = 3 meet at P(2, 1); (b) no solution, where the lines x + y = 4 and x + y = 2 are parallel; (c) infinitely many solutions (x = t, y = 3t − 4). In the last case, the equations are 3x − y = 4 and −6x + 2y = −8, which have identical graphs.
With three variables, the graph of an equation ax + by + cz = d can be shown to be a plane (see Section 4.2) and so again provides a “picture” of the set of solutions. However, this graphical method has its limitations: When more than three variables are involved, no physical image of the graphs (called hyperplanes) is possible. It is necessary to turn to a more “algebraic” method of solution.
Before describing the method, we introduce a concept that simplifies the computations involved. Consider the following system
3x1 + 2x2 − x3 + x4 = −1
2x1 − x3 + 2x4 = 0
3x1 + x2 + 2x3 + 5x4 = 2
The array of numbers
3 2 −1 1 | −1
2 0 −1 2 | 0
3 1 2 5 | 2
occurring here is called the augmented matrix of the system. Each row of the matrix consists of the coefficients of the variables (in order) from the corresponding equation, together with the constant term. For clarity, the constants are separated by a vertical line. The augmented matrix is just a different way of describing the system of equations. The array of coefficients of the variables
3 2 −1 1
2 0 −1 2
3 1 2 5
is called the coefficient matrix of the system and
−1
0
2
is called the constant matrix of the system.
Elementary Operations
The algebraic method for solving systems of linear equations is described as follows. Two such systems
are said to be equivalent if they have the same set of solutions. A system is solved by writing a series of
systems, one after the other, each equivalent to the previous system. Each of these systems has the same
set of solutions as the original one; the aim is to end up with a system that is easy to solve. Each system
in the series is obtained from the preceding system by a simple manipulation chosen so that it does not
change the set of solutions.
As an illustration, we solve the system x + 2y = −2, 2x + y = 7 in this manner. At each stage, the
corresponding augmented matrix is displayed. The original system is
x + 2y = −2        1 2 | −2
2x + y = 7         2 1 | 7

First, subtract twice the first equation from the second. The resulting system is

x + 2y = −2        1 2 | −2
−3y = 11           0 −3 | 11

which is equivalent to the original (see Theorem 1.1.1). At this stage we obtain y = −11/3 by multiplying the second equation by −1/3. The result is the equivalent system

x + 2y = −2        1 2 | −2
y = −11/3          0 1 | −11/3

Finally, we subtract twice the second equation from the first to get another equivalent system.

x = 16/3           1 0 | 16/3
y = −11/3          0 1 | −11/3
Now this system is easy to solve! And because it is equivalent to the original system, it provides the
solution to that system.
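For readers who want to replay the computation, here is a minimal sketch (ours, not the book's) of the three elementary operations above, applied to the augmented matrix with exact fractions.

from fractions import Fraction

M = [[Fraction(1), Fraction(2), Fraction(-2)],   # x + 2y = -2
     [Fraction(2), Fraction(1), Fraction(7)]]    # 2x + y = 7

M[1] = [a - 2 * b for a, b in zip(M[1], M[0])]   # subtract twice row 1 from row 2
M[1] = [Fraction(-1, 3) * a for a in M[1]]       # multiply row 2 by -1/3
M[0] = [a - 2 * b for a, b in zip(M[0], M[1])]   # subtract twice row 2 from row 1

print(M)   # [[1, 0, 16/3], [0, 1, -11/3]]: x = 16/3, y = -11/3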
Observe that, at each stage, a certain operation is performed on the system (and thus on the augmented
matrix) to produce an equivalent system.
Theorem 1.1.1
Suppose that a sequence of elementary operations is performed on a system of linear equations.
Then the resulting system has the same set of solutions as the original, so the two systems are
equivalent.
In the illustration above, a series of such operations led to a matrix of the form
1 0 ∗
0 1 ∗
where the asterisks represent arbitrary numbers. In the case of three equations in three variables, the goal
is to produce a matrix of the form
1 0 0 ∗
0 1 0 ∗
0 0 1 ∗
This does not always happen, as we will see in the next section. Here is an example in which it does
happen.
Example 1.1.3
Find all solutions to the following system of equations.
3x + 4y + z = 1
2x + 3y = 0
4x + 3y − z = −2
Now subtract 3 times row 3 from row 1, and then add 2 times row 3 to row 2 to get

1 0 0 | −3/7
0 1 0 | 2/7
0 0 1 | 8/7

The corresponding equations are x = −3/7, y = 2/7, and z = 8/7, which give the (unique) solution.
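As a quick independent check (a sketch of ours using SymPy, not part of the text), a computer algebra system reproduces the unique solution:

from sympy import symbols, linsolve

x, y, z = symbols("x y z")
system = [3*x + 4*y + z - 1,      # 3x + 4y + z = 1
          2*x + 3*y,              # 2x + 3y = 0
          4*x + 3*y - z + 2]      # 4x + 3y - z = -2
print(linsolve(system, x, y, z))  # {(-3/7, 2/7, 8/7)}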
Every elementary row operation can be reversed by another elementary row operation of the same
type (called its inverse). To see how, we look at types I, II, and III separately:
Type I Interchanging two rows is reversed by interchanging them again.
Type II Multiplying a row by a nonzero number k is reversed by multiplying that row by 1/k.
Type III Adding k times row p to a different row q is reversed by adding −k times row p to row q
(in the new matrix). Note that p ≠ q is essential here.
To illustrate the Type III situation, suppose there are four rows in the original matrix, denoted R1 , R2 ,
R3 , and R4 , and that k times R2 is added to R3 . Then the reverse operation adds −k times R2 to R3 . The
following diagram illustrates the effect of doing the operation first and then the reverse:
R1        R1                     R1
R2   →    R2              →     R2
R3        R3 + kR2              (R3 + kR2 ) − kR2 = R3
R4        R4                     R4
The existence of inverses for elementary row operations and hence for elementary operations on a system
of equations, gives:
Proof of Theorem 1.1.1. Suppose that a system of linear equations is transformed into a new system
by a sequence of elementary operations. Then every solution of the original system is automatically a
solution of the new system because adding equations, or multiplying an equation by a nonzero number,
always results in a valid equation. In the same way, each solution of the new system must be a solution
to the original system because the original system can be obtained from the new one by another series of
elementary operations (the inverses of the originals). It follows that the original and new systems have the
same solutions. This proves Theorem 1.1.1.
Exercise 1.1.1 In each case verify that the following are solutions for all values of s and t.

a. x = 19t − 35
   y = 25 − 13t
   z = t
   is a solution of
   2x + 3y + z = 5
   5x + 7y − 4z = 0

b. x1 = 2s + 12t + 13
   x2 = s
   x3 = −s − 3t − 3
   x4 = t
   is a solution of
   2x1 + 5x2 + 9x3 + 3x4 = −1
   x1 + 2x2 + 4x3 = 1

Exercise 1.1.2 Find all solutions to the following in parametric form in two ways.

a. 3x + y = 2
b. 2x + 3y = 1
c. 3x − y + 2z = 5
d. x − 2y + 5z = 1

Exercise 1.1.7 Write the augmented matrix for each of the following systems of linear equations.

a. x − 3y = 5
   2x + y = 1
b. x + 2y = 0
   y = 1
c. x − y + z = 2
   x − z = 1
   y + 2x = 0
d. x + y = 1
   y + z = 0
   z − x = 2

Exercise 1.1.8 Write a system of linear equations that has each of the following augmented matrices.

a. 1 −1 6 | 0
   0 1 0 | 3
   2 −1 0 | 1
b. 2 −1 0 | −1
   −3 2 1 | 0
   0 1 1 | 3

Exercise 1.1.9 Find the solution of each of the following systems of linear equations using augmented matrices.

a. x − 3y = 1
   2x − 7y = 3
b. x + 2y = 1
   3x + 4y = −1
c. 2x + 3y = −1
   3x + 4y = 2
d. 3x + 4y = 1
   4x + 5y = −3

Exercise 1.1.10 Find the solution of each of the following systems of linear equations using augmented matrices.

a. x + y + 2z = −1
   2x + y + 3z = 0
   −2y + z = 2
b. 2x + y + z = −1
   x + 2y + z = 0
   3x − 2z = 5

Exercise 1.1.14 In each case either show that the statement is true, or give an example2 showing it is false.

a. If a linear system has n variables and m equations, then the augmented matrix has n rows.
b. A consistent linear system must have infinitely many solutions.
c. If a row operation is done to a consistent linear system, the resulting system must be consistent.
d. If a series of row operations on a linear system results in an inconsistent system, the original system is inconsistent.

Exercise 1.1.15 Find a quadratic a + bx + cx2 such that the graph of y = a + bx + cx2 contains each of the points (−1, 6), (2, 0), and (3, 2).

Exercise 1.1.16 Solve the system
3x + 2y = 5
7x + 5y = 1
by changing variables
x = 5x′ − 2y′
y = −7x′ + 3y′
and solving the resulting equations for x′ and y′.

Exercise 1.1.17 Find a, b, and c such that
(x2 − x + 3)/((x2 + 2)(2x − 1)) = (ax + b)/(x2 + 2) + c/(2x − 1)
[Hint: Multiply through by (x2 + 2)(2x − 1) and equate coefficients of powers of x.]

Exercise 1.1.18 A zookeeper wants to give an animal 42 mg of vitamin A and 65 mg of vitamin D per day. He has two supplements: the first contains 10% vitamin A and 25% vitamin D; the second contains 20% vitamin A and 25% vitamin D. How much of each supplement should he give the animal each day?

Exercise 1.1.19 Workmen John and Joe earn a total of $24.60 when John works 2 hours and Joe works 3 hours. If John works 3 hours and Joe works 2 hours, they get $23.90. Find their hourly rates.

Exercise 1.1.20 A biologist wants to create a diet from fish and meal containing 183 grams of protein and 93 grams of carbohydrate per day. If fish contains 70% protein and 10% carbohydrate, and meal contains 30% protein and 60% carbohydrate, how much of each food is required each day?
The algebraic method introduced in the preceding section can be summarized as follows: Given a system
of linear equations, use a sequence of elementary row operations to carry the augmented matrix to a “nice”
matrix (meaning that the corresponding equations are easy to solve). In Example 1.1.3, this nice matrix
took the form
1 0 0 ∗
0 1 0 ∗
0 0 1 ∗
The following definitions identify the nice matrices that arise in this process.
2 Such an example is called a counterexample. For example, if the statement is that “all philosophers have beards”, the
existence of a non-bearded philosopher would be a counterexample proving that the statement is false. This is discussed again
in Appendix B.
A matrix is said to be in row-echelon form (and will be called a row-echelon matrix) if it satisfies the following three conditions:
1. All zero rows (consisting entirely of zeros) are at the bottom.
2. The first nonzero entry from the left in each nonzero row is a 1, called the leading 1 for that row.
3. Each leading 1 is to the right of all leading 1s in the rows above it.
A row-echelon matrix is said to be in reduced row-echelon form (and will be called a reduced row-echelon matrix) if, in addition, it satisfies the following condition:
4. Each leading 1 is the only nonzero entry in its column.
The row-echelon matrices have a “staircase” form, as indicated by the following example (the asterisks
indicate arbitrary numbers).
0 1 ∗ ∗ ∗ ∗ ∗
0 0 0 1 ∗ ∗ ∗
0 0 0 0 1 ∗ ∗
0 0 0 0 0 0 1
0 0 0 0 0 0 0
The leading 1s proceed “down and to the right” through the matrix. Entries above and to the right of the
leading 1s are arbitrary, but all entries below and to the left of them are zero. Hence, a matrix in row-
echelon form is in reduced form if, in addition, the entries directly above each leading 1 are all zero. Note
that a matrix in row-echelon form can, with a few more row operations, be carried to reduced form (use
row operations to create zeros above each leading one in succession, beginning from the right).
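That final pass is mechanical enough to write down. The following sketch (our illustration; the text itself gives no code) takes a row-echelon matrix, stored as lists of Fractions, and creates zeros above each leading 1, working from the rightmost leading 1 back to the left.

from fractions import Fraction

def to_reduced_form(M):
    """Carry a row-echelon matrix M (leading entries already 1) to reduced form."""
    for i in reversed(range(len(M))):
        col = next((j for j, x in enumerate(M[i]) if x != 0), None)
        if col is None:
            continue                    # zero row: nothing to clear
        for r in range(i):              # clear the entries above this leading 1
            factor = M[r][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[i])]
    return M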
Example 1.2.1
The following matrices are in row-echelon form (for any choice of numbers in ∗-positions).
0 1 ∗ ∗      1 ∗ ∗ ∗      1 ∗ ∗      1 ∗ ∗
0 0 1 ∗      0 1 ∗ ∗      0 1 ∗      0 0 1
0 0 0 0      0 0 0 1      0 0 1
The choice of the positions for the leading 1s determines the (reduced) row-echelon form (apart
from the numbers in ∗-positions).
Theorem 1.2.1
Every matrix can be brought to (reduced) row-echelon form by a sequence of elementary row
operations.
In fact we can give a step-by-step procedure for actually finding a row-echelon matrix. Observe that
while there are many sequences of row operations that will bring a matrix to row-echelon form, the one
we use is systematic and is easy to program on a computer. Note that the algorithm deals with matrices in
general, possibly with columns of zeros.
Gaussian3 Algorithm4
Step 1. If the matrix consists entirely of zeros, stop—it is already in row-echelon form.
Step 2. Otherwise, find the first column from the left containing a nonzero entry (call it a),
and move the row containing that entry to the top position.
Step 3. Now multiply the new top row by 1/a to create a leading 1.
Step 4. By subtracting multiples of that row from rows below it, make each entry below the
leading 1 zero.
This completes the first row, and all further row operations are carried out on the remaining rows.
Step 5. Repeat steps 1–4 on the matrix consisting of the remaining rows.
The process stops when either no rows remain at step 5 or the remaining rows consist entirely of
zeros.
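A direct rendering of Steps 1–5 in Python might look as follows. This is our sketch, not the book's; it uses exact fractions so that the leading 1s are exact.

from fractions import Fraction

def gaussian_algorithm(matrix):
    """Carry a matrix to row-echelon form following Steps 1-5 above."""
    M = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    top = 0
    for col in range(cols):   # Step 2: first column from the left with a nonzero entry
        pivot = next((r for r in range(top, rows) if M[r][col] != 0), None)
        if pivot is None:
            continue          # column of zeros below the finished rows: move on
        M[top], M[pivot] = M[pivot], M[top]      # move that row to the top position
        a = M[top][col]
        M[top] = [x / a for x in M[top]]         # Step 3: create a leading 1
        for r in range(top + 1, rows):           # Step 4: zeros below the leading 1
            factor = M[r][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[top])]
        top += 1                                 # Step 5: repeat on the remaining rows
        if top == rows:
            break
    return M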
Observe that the gaussian algorithm is recursive: When the first leading 1 has been obtained, the
procedure is repeated on the remaining rows of the matrix. This makes the algorithm easy to use on a
computer. Note that the solution to Example 1.1.3 did not use the gaussian algorithm as written because
the first leading 1 was not created by dividing row 1 by 3. The reason for this is that it avoids fractions.
However, the general pattern is clear: Create the leading 1s from left to right, using each of them in turn
to create zeros below it. Here are two more examples.
3 Carl Friedrich Gauss (1777–1855) ranks with Archimedes and Newton as one of the three greatest mathematicians of all
time. He was a child prodigy and, at the age of 21, he gave the first proof that every polynomial has a complex root. In
1801 he published a timeless masterpiece, Disquisitiones Arithmeticae, in which he founded modern number theory. He went
on to make ground-breaking contributions to nearly every branch of mathematics, often well before others rediscovered and
published the results.
4 The algorithm was known to the ancient Chinese.
Example 1.2.2
Solve the following system of equations.
3x + y − 4z = −1
x + 10z = 5
4x + y + 6z = 1
Solution. The augmented matrix is

3 1 −4 | −1
1 0 10 | 5
4 1 6 | 1

Interchange rows 1 and 2 to put a leading 1 in the top left (this avoids fractions). Now subtract 3 times row 1 from row 2, and subtract 4 times row 1 from row 3. The result is

1 0 10 | 5
0 1 −34 | −16
0 1 −34 | −19

Now subtract row 2 from row 3. The result is

1 0 10 | 5
0 1 −34 | −16
0 0 0 | −3

This means that the following system of equations

x + 10z = 5
y − 34z = −16
0 = −3
is equivalent to the original system. In other words, the two have the same solutions. But this last
system clearly has no solution (the last equation requires that x, y and z satisfy 0x + 0y + 0z = −3,
and no such numbers exist). Hence the original system has no solution.
Example 1.2.3
Solve the following system of equations.
x1 − 2x2 − x3 + 3x4 = 1
2x1 − 4x2 + x3 =5
x1 − 2x2 + 2x3 − 3x4 = 4
The solution of Example 1.2.3 is typical of the general case. To solve a linear system, the augmented
matrix is carried to reduced row-echelon form, and the variables corresponding to the leading ones are
called leading variables. Because the matrix is in reduced form, each leading variable occurs in exactly
one equation, so that equation can be solved to give a formula for the leading variable in terms of the
nonleading variables. It is customary to call the nonleading variables “free” variables, and to label them
by new variables s, t, . . . , called parameters. Hence, as in Example 1.2.3, every variable xi is given by a
formula in terms of the parameters s and t. Moreover, every choice of these parameters leads to a solution
to the system, and every solution arises in this way. This procedure works in general, and has come to be
called
Gaussian Elimination
To solve a system of linear equations proceed as follows:
1. Carry the augmented matrix to a reduced row-echelon matrix using elementary row
operations.
2. If a row [ 0 0 0 · · · 0 1 ] occurs, the system is inconsistent.
3. Otherwise, assign the nonleading variables (if any) as parameters, and use the equations
corresponding to the reduced row-echelon matrix to solve for the leading variables in terms
of the parameters.
There is a variant of this procedure, wherein the augmented matrix is carried only to row-echelon form.
The nonleading variables are assigned as parameters as before. Then the last equation (corresponding to
the row-echelon form) is used to solve for the last leading variable in terms of the parameters. This last
leading variable is then substituted into all the preceding equations. Then, the second last equation yields
the second last leading variable, which is also substituted back. The process continues to give the general
solution. This procedure is called back-substitution. This procedure can be shown to be numerically
more efficient and so is important when solving very large systems.5
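Here is a sketch (ours) of back-substitution for the simplest case, where the row-echelon form has a leading 1 for every variable, so the solution is unique.

from fractions import Fraction

def back_substitute(R):
    """Solve an n x (n+1) augmented matrix R in row-echelon form with n leading 1s."""
    n = len(R)
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):       # last equation first
        x[i] = R[i][n] - sum(R[i][j] * x[j] for j in range(i + 1, n))
    return x

R = [[Fraction(1), Fraction(2), Fraction(-2)],        # echelon form of the system
     [Fraction(0), Fraction(1), Fraction(-11, 3)]]    # solved in Section 1.1
print(back_substitute(R))                             # [16/3, -11/3]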
Example 1.2.4
Find a condition on the numbers a, b, and c such that the following system of equations is
consistent. When that condition is satisfied, find all solutions (in terms of a, b, and c).
x1 + 3x2 + x3 = a
−x1 − 2x2 + x3 = b
3x1 + 7x2 − x3 = c
Solution. We use gaussian elimination except that now the augmented matrix
1 3 1 a
−1 −2 1 b
3 7 −1 c
5 With n equations where n is large, gaussian elimination requires roughly n3 /2 multiplications and divisions, whereas this number is roughly n3 /3 if back substitution is used.
has entries a, b, and c as well as known numbers. The first leading one is in place, so we create
zeros below it in column 1:
1 3 1 a
0 1 2 a+b
0 −2 −4 c − 3a
The second leading 1 has appeared, so use it to create zeros in the rest of column 2:
1 0 −5 −2a − 3b
0 1 2 a+b
0 0 0 c − a + 2b
Now the whole solution depends on the number c − a + 2b = c − (a − 2b). The last row
corresponds to an equation 0 = c − (a − 2b). If c ≠ a − 2b, there is no solution (just as in Example
1.2.2). Hence the system is consistent if and only if c = a − 2b. In this case the nonleading variable
x3 = t is a parameter, and the general solution is
x1 = 5t − (2a + 3b)    x2 = (a + b) − 2t    x3 = t.
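The row reduction in Example 1.2.4 can be replayed symbolically. The sketch below (ours, using SymPy) keeps a, b, and c as symbols, so the consistency condition appears in the last row.

from sympy import symbols, Matrix

a, b, c = symbols("a b c")
M = Matrix([[1, 3, 1, a], [-1, -2, 1, b], [3, 7, -1, c]])

M[1, :] = M[1, :] + M[0, :]         # zeros below the first leading 1
M[2, :] = M[2, :] - 3 * M[0, :]
M[0, :] = M[0, :] - 3 * M[1, :]     # zeros above and below the second leading 1
M[2, :] = M[2, :] + 2 * M[1, :]

print(M)   # last row [0, 0, 0, c - a + 2*b]: consistent exactly when c = a - 2b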
Rank
It can be proven that the reduced row-echelon form of a matrix A is uniquely determined by A. That is,
no matter which series of row operations is used to carry A to a reduced row-echelon matrix, the result
will always be the same matrix. (A proof is given at the end of Section 2.5.) By contrast, this is not
true for row-echelon matrices: Different series of row operations can carry the same matrix A to different row-echelon matrices. Indeed, the matrix

A =  1 −1 4
     2 −1 2

can be carried (by one row operation) to the row-echelon matrix

1 −1 4
0 1 −6

and then by another row operation to the (reduced) row-echelon matrix

1 0 −2
0 1 −6

However, it is true that the number r of leading 1s must be the same in each of these row-echelon matrices (this will be proved in Chapter 5). Hence, the number r depends only on A and not on the way in which A is carried to row-echelon form. This number r is called the rank of A: the rank of a matrix A is the number of leading 1s in any row-echelon matrix to which A can be carried by row operations.
Example 1.2.5
1 1 −1 4
Compute the rank of A = 2 1 3 0 .
0 1 −5 8
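One way to check such a computation (a sketch of ours, not the book's method) is to let SymPy row reduce A and count the leading 1s:

from sympy import Matrix

A = Matrix([[1, 1, -1, 4], [2, 1, 3, 0], [0, 1, -5, 8]])
R, pivots = A.rref()        # reduced row-echelon form and the pivot columns
print(R)
print(len(pivots))          # 2, so rank A = 2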
Suppose that rank A = r, where A is a matrix with m rows and n columns. Then r ≤ m because the
leading 1s lie in different rows, and r ≤ n because the leading 1s lie in different columns. Moreover, the
rank has a useful application to equations. Recall that a system of linear equations is called consistent if it
has at least one solution.
Theorem 1.2.2
Suppose a system of m equations in n variables is consistent, and that the rank of the augmented
matrix is r.
1. The set of solutions involves exactly n − r parameters.
2. If r < n, the system has infinitely many solutions.
3. If r = n, the system has a unique solution.
Proof. The fact that the rank of the augmented matrix is r means there are exactly r leading variables, and
hence exactly n − r nonleading variables. These nonleading variables are all assigned as parameters in the
gaussian algorithm, so the set of solutions involves exactly n − r parameters. Hence if r < n, there is at
least one parameter, and so infinitely many solutions. If r = n, there are no parameters and so a unique
solution.
Theorem 1.2.2 shows that, for any system of linear equations, exactly three possibilities exist:
1. No solution. This occurs when a row [ 0 0 · · · 0 1 ] occurs in the row-echelon form. This is
the case where the system is inconsistent.
2. Unique solution. This occurs when every variable is a leading variable.
3. Infinitely many solutions. This occurs when the system is consistent and there is at least one
nonleading variable, so at least one parameter is involved.
Example 1.2.6
Suppose the matrix A in Example 1.2.5 is the augmented matrix of a system of m = 3 linear
equations in n = 3 variables. As rank A = r = 2, the set of solutions will have n − r = 1 parameter.
The reader can verify this fact directly.
Many important problems involve linear inequalities rather than linear equations. For example, a
condition on the variables x and y might take the form of an inequality 2x − 5y ≤ 4 rather than an equality
2x − 5y = 4. There is a technique (called the simplex algorithm) for finding solutions to a system of such
inequalities that maximizes a function of the form p = ax + by where a and b are fixed constants.
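As a glimpse of how such a problem might be set up in software (a sketch of ours; the constraint data here are made up for illustration), SciPy's linprog solves linear programs of exactly this shape. Since linprog minimizes, the objective is negated to maximize p = 2x + 3y.

from scipy.optimize import linprog

# Maximize p = 2x + 3y subject to 2x - 5y <= 4, x + y <= 10, x >= 0, y >= 0.
result = linprog(c=[-2, -3],                  # negate to turn maximize into minimize
                 A_ub=[[2, -5], [1, 1]],
                 b_ub=[4, 10],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)                  # optimal (x, y) and the maximum p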
Exercise 1.2.1 Which of the following matrices are in reduced row-echelon form? Which are in row-echelon form?

a. 1 −1 2
   0 0 0
   0 0 1

b. 2 1 −1 3
   0 0 0 0

c. 1 −2 3 5
   0 0 0 1

d. 1 0 0 3 1
   0 0 0 1 1
   0 0 0 0 1

e. 1 1
   0 1

f. 0 0 1
   0 0 1

Exercise 1.2.2 Carry each of the following matrices to reduced row-echelon form.

a. 0 −1 2 1 2 1 −1
   0 1 −2 2 7 2 4
   0 −2 4 3 7 1 0
   0 3 −6 1 6 4 1

b. 0 −1 3 1 3 2 1
   0 −2 6 1 −5 0 −1
   0 3 −9 2 4 1 −1
   0 1 −3 −1 3 0 1

Exercise 1.2.3 The augmented matrix of a system of linear equations has been carried to the following by row operations. In each case solve the system.

a. 1 2 0 3 1 0 | −1
   0 0 1 −1 1 0 | 2
   0 0 0 0 0 1 | 3
   0 0 0 0 0 0 | 0

b. 1 −2 0 2 0 1 | 1
   0 0 1 5 0 −3 | −1
   0 0 0 0 1 6 | 1
   0 0 0 0 0 0 | 0

c. 1 2 1 3 1 | 1
   0 1 −1 0 1 | 1
   0 0 0 1 −1 | 0
   0 0 0 0 0 | 0

d. 1 −1 2 4 6 | 2
   0 1 2 1 −1 | −1
   0 0 0 1 0 | 1
   0 0 0 0 0 | 0

Exercise 1.2.4 Find all solutions (if any) to each of the following systems of linear equations.

a. x − 2y = 1
   4y − x = −2
b. 3x − y = 0
   2x − 3y = 1
c. 2x + y = 5
   3x + 2y = 6
d. 3x − y = 2
   2y − 6x = −4
e. 3x − y = 4
   2y − 6x = 1
f. 2x − 3y = 5
   3y − 2x = 2

Exercise 1.2.5 Find all solutions (if any) to each of the following systems of linear equations.

a. x + y + 2z = 8
   3x − y + z = 0
   −x + 3y + 4z = −4
b. −2x + 3y + 3z = −9
   3x − 4y + z = 5
   −5x + 7y + 2z = −14
c. x + y − z = 10
   −x + 4y + 5z = −5
   x + 6y + 3z = 15
d. x + 2y − z = 2
   2x + 5y − 3z = 1
   x + 4y − 3z = 3
e. 5x + y = 2
   3x − y + 2z = 1
   x + y − z = 5
f. 3x − 2y + z = −2
   x − y + 3z = 5
   −x + y + z = −1
g. x + y + z = 2
   x + z = 1
   2x + 5y + 2z = 7
h. x + 2y − 4z = 10
   2x − y + 2z = 5
   x + y − 2z = 7

Exercise 1.2.6 Express the last equation of each system as a sum of multiples of the first two equations. [Hint: Label the equations, use the gaussian algorithm.]

a. x1 + x2 + x3 = 1
   2x1 − x2 + 3x3 = 3
   x1 − 2x2 + 2x3 = 2
b. x1 + 2x2 − 3x3 = −3
   x1 + 3x2 − 5x3 = 5
   x1 − 2x2 + 5x3 = −35

Exercise 1.2.7 Find all solutions to the following systems.

a. 3x1 + 8x2 − 3x3 − 14x4 = 2
   2x1 + 3x2 − x3 − 2x4 = 1
   x1 − 2x2 + x3 + 10x4 = 0
   x1 + 5x2 − 2x3 − 12x4 = 1
b. x1 − x2 + x3 − x4 = 0
   −x1 + x2 + x3 + x4 = 0
   x1 + x2 − x3 + x4 = 0
   x1 + x2 + x3 + x4 = 0
c. x1 − x2 + x3 − 2x4 = 1
   −x1 + x2 + x3 + x4 = −1
   −x1 + 2x2 + 3x3 − x4 = 2
   x1 − x2 + 2x3 + x4 = 1
d. x1 + x2 + 2x3 − x4 = 4
   3x2 − x3 + 4x4 = 2
   x1 + 2x2 − 3x3 + 5x4 = 0
   x1 + x2 − 5x3 + 6x4 = −3

Exercise 1.2.8 In each of the following, find (if possible) conditions on a and b such that the system has no solution, one solution, and infinitely many solutions.

a. x − 2y = 1
   ax + by = 5
b. x + by = −1
   ax + 2y = 5
c. x − by = −1
   x + ay = 3
d. ax + y = 1
   2x + y = b

Exercise 1.2.9 In each of the following, find (if possible) conditions on a, b, and c such that the system has no solution, one solution, or infinitely many solutions.

a. 3x + y − z = a
   x − y + 2z = b
   5x + 3y − 4z = c
b. 2x + y − z = a
   2y + 3z = b
   x − z = c
c. −x + 3y + 2z = −8
   x + z = 2
   3x + 3y + az = b
d. x + ay = 0
   y + bz = 0
   z + cx = 0
e. 3x − y + 2z = 3
   x + y − z = 2
   2x − 2y + 3z = b
f. x + ay − z = 1
   −x + (a − 2)y + z = −1
   2x + 2y + (a − 2)z = 1

Exercise 1.2.10 Find the rank of each of the matrices in Exercise 1.2.1.

Exercise 1.2.11 Find the rank of each of the following matrices.

a. 1 1 2
   3 −1 1
   −1 3 4

b. −2 3 3
   3 −4 1
   −5 7 2

c. 1 1 −1 3
   −1 4 5 −2
   1 6 3 4

d. 3 −2 1 −2
   1 −1 3 5
   −1 1 1 −1

e. 1 2 −1 0
   0 a 1 − a a2 + 1
   1 2 − a −1 −2a2

f. 1 1 2 a2
   1 1 − a 2 0
   2 2 − a 6 − a 4
Exercise 1.2.12 Consider a system of linear equations with augmented matrix A and coefficient matrix C. In each case either prove the statement or give an example showing that it is false.

a. If there is more than one solution, A has a row of zeros.
b. If A has a row of zeros, there is more than one solution.

Exercise 1.2.13 Find a sequence of row operations carrying

b1 + c1  b2 + c2  b3 + c3
c1 + a1  c2 + a2  c3 + a3
a1 + b1  a2 + b2  a3 + b3

to

a1 a2 a3
b1 b2 b3
c1 c2 c3

Exercise 1.2.14 In each case, show that the reduced row-echelon form is as given.

a. p 0 a
   b 0 0
   q c r
   with abc ≠ 0; the reduced form is
   1 0 0
   0 1 0
   0 0 1

b. 1 a b + c
   1 b c + a
   1 c a + b
   where c ≠ a or b ≠ a; the reduced form is
   1 0 ∗
   0 1 ∗
   0 0 0

Exercise 1.2.15 Show that
ax + by + cz = 0
a1 x + b1 y + c1 z = 0
always has a solution other than x = 0, y = 0, z = 0.

Exercise 1.2.16 Find the circle x2 + y2 + ax + by + c = 0 passing through the following points.

a. (−2, 1), (5, 0), and (4, 1)
b. (1, 1), (5, −3), and (−3, −3)

Exercise 1.2.20 The scores of three players in a tournament have been lost. The only information available is the total of the scores for players 1 and 2, the total for players 2 and 3, and the total for players 3 and 1.

a. Show that the individual scores can be rediscovered.
b. Is this possible with four players (knowing the totals for players 1 and 2, 2 and 3, 3 and 4, and 4 and 1)?

Exercise 1.2.21 A boy finds $1.05 in dimes, nickels, and pennies. If there are 17 coins in all, how many coins of each type can he have?

Exercise 1.2.22 If a consistent system has more variables than equations, show that it has infinitely many solutions. [Hint: Use Theorem 1.2.2.]
A system of equations in the variables x1 , x2 , . . . , xn is called homogeneous if all the constant terms are
zero—that is, if each equation of the system has the form
a1 x1 + a2 x2 + · · · + an xn = 0
Clearly x1 = 0, x2 = 0, . . . , xn = 0 is a solution to such a system; it is called the trivial solution. Any
solution in which at least one variable has a nonzero value is called a nontrivial solution.
Example 1.3.1
Show that the following homogeneous system has nontrivial solutions.
x1 − x2 + 2x3 − x4 = 0
2x1 + 2x2 + x4 = 0
3x1 + x2 + 2x3 − x4 = 0
Solution. The reduction of the augmented matrix to reduced row-echelon form is outlined below.

1 −1 2 −1 0        1 −1 2 −1 0        1 0 1 0 0
2 2 0 1 0    →    0 4 −4 3 0    →    0 1 −1 0 0
3 1 2 −1 0        0 4 −4 2 0        0 0 0 1 0

The leading variables are x1 , x2 , and x4 , so the nonleading variable x3 becomes a parameter, say x3 = t. Then the general solution is x1 = −t, x2 = t, x3 = t, x4 = 0, and taking t ≠ 0 gives a nontrivial solution.
The existence of a nontrivial solution in Example 1.3.1 is ensured by the presence of a parameter in the
solution. This is due to the fact that there is a nonleading variable (x3 in this case). But there must be
a nonleading variable here because there are four variables and only three equations (and hence at most
three leading variables). This discussion generalizes to a proof of the following fundamental theorem.
Theorem 1.3.1
If a homogeneous system of linear equations has more variables than equations, then it has a
nontrivial solution (in fact, infinitely many).
Proof. Suppose there are m equations in n variables where n > m, and let R denote the reduced row-echelon
form of the augmented matrix. If there are r leading variables, there are n − r nonleading variables, and so
n − r parameters. Hence, it suffices to show that r < n. But r ≤ m because R has r leading 1s and m rows,
and m < n by hypothesis. So r ≤ m < n, which gives r < n.
Note that the converse of Theorem 1.3.1 is not true: if a homogeneous system has nontrivial solutions,
it need not have more variables than equations (the system x1 + x2 = 0, 2x1 + 2x2 = 0 has nontrivial
solutions but m = 2 = n.)
Theorem 1.3.1 is very useful in applications. The next example provides an illustration from geometry.
Example 1.3.2
We call the graph of an equation ax2 + bxy + cy2 + dx + ey + f = 0 a conic if the numbers a, b, and
c are not all zero. Show that there is at least one conic through any five points in the plane that are
not all on a line.
Solution. Let the coordinates of the five points be (p1 , q1 ), (p2 , q2 ), (p3 , q3 ), (p4 , q4 ), and
(p5 , q5 ). The graph of ax2 + bxy + cy2 + dx + ey + f = 0 passes through (pi , qi ) if
a pi2 + b pi qi + c qi2 + d pi + e qi + f = 0
This gives five equations, one for each i, linear in the six variables a, b, c, d, e, and f . Hence, there
is a nontrivial solution by Theorem 1.3.1. If a = b = c = 0, the five points all lie on the line with
equation dx + ey + f = 0, contrary to assumption. Hence, one of a, b, c is nonzero.
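The argument of Example 1.3.2 is easy to carry out numerically. In the sketch below (ours; the five points are hypothetical), each point contributes one homogeneous linear equation in the six coefficients a, b, c, d, e, f, and SymPy's nullspace returns a nontrivial solution, as Theorem 1.3.1 guarantees.

from sympy import Matrix

points = [(0, 0), (1, 0), (0, 1), (2, 3), (-1, 2)]        # made-up data
M = Matrix([[p**2, p*q, q**2, p, q, 1] for p, q in points])

basis = M.nullspace()      # 5 equations, 6 unknowns: nontrivial solutions exist
print(basis[0])            # coefficients a, b, c, d, e, f of one such conic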
As for rows, two columns are regarded as equal if they have the same number of entries and corresponding
entries are the same. Let x and y be columns with the same number of entries. As for elementary row
operations, their sum x + y is obtained by adding corresponding entries and, if k is a number, the scalar
product kx is defined by multiplying each entry of x by k. More precisely:
x1 y1 x1 + y1 kx1
x2 y2 x2 + y2 kx2
If x = .. and y = .. then x + y = .. and kx = .. .
. . . .
xn yn xn + yn kxn
A sum of scalar multiples of several columns is called a linear combination of these columns. For
example, sx + ty is a linear combination of x and y for any choice of numbers s and t.
Example 1.3.3
3 −1 6 −5 1
If x = and then 2x + 5y = + = .
−2 1 −4 5 1
22 Systems of Linear Equations
Example 1.3.4
1 2 3 0 1
Let x = 0 , y = 1 and z = 1 . If v = −1 and w = 1 , determine whether v
1 0 1 2 1
and w are linear combinations of x, y and z.
Solution. For v, we must determine whether numbers r, s, and t exist such that v = rx + sy + tz,
that is, whether
0 1 2 3 r + 2s + 3t
−1 = r 0 + s 1 + t 1 = s+t
2 1 0 1 r +t
Our interest in linear combinations comes from the fact that they provide one of the best ways to
describe the general solution of a homogeneous system of linear equations. When solving such a system
x1
x2
with n variables x1 , x2 , . . . , xn , write the variables as a column6 matrix: x = .. . The trivial solution
.
xn
0
0
is denoted 0 = .. . As an illustration, the general solution in Example 1.3.1 is x1 = −t, x2 = t, x3 = t,
.
0
and x4 = 0,where t is a parameter, and we would now express this by saying that the general solution is
−t
t
x=
t , where t is arbitrary.
0
Now let x and y be two solutions to a homogeneous system with n variables. Then any linear combi-
nation sx + ty of these solutions turns out to be again a solution to the system. More generally:
In
fact,suppose
thata typical equation in the system is a1 x1 + a2 x2 + · · · + an xn = 0, and suppose that
x1 y1
x2 y2
x = .. , y = .. are solutions. Then a1 x1 + a2 x2 + · · · + an xn = 0 and a1 y1 + a2 y2 + · · · + an yn = 0.
. .
xn y
n
sx1 + ty1
sx2 + ty2
Hence sx + ty = .. is also a solution because
.
sxn + tyn
A similar argument shows that Statement 1.1 is true for linear combinations of more than two solutions.
The remarkable thing is that every solution to a homogeneous system is a linear combination of certain
particular solutions and, in fact, these solutions are easily computed using the gaussian algorithm. Here is
an example.
Example 1.3.5
Solve the homogeneous system with coefficient matrix
1 −2 3 −2
A = −3 6 1 0
−2 4 4 −2
1
2 5
1 0
Here x1 =
0 and x2 = 3 are particular solutions determined by the gaussian algorithm.
5
0 1
Moreover, the algorithm gives a routine way to express every solution as a linear combination of basic
solutions as in Example 1.3.5, where the general solution x becomes
1
2 5 2 1
1 0 1 1 0
x = s
0 + t 3 = s 0 + 5t 3
5
0 1 0 5
Hence by introducing a new parameter r = t/5 we can multiply the original basic solution x2 by 5 and so
eliminate fractions. For this reason:
Convention:
Any nonzero scalar multiple of a basic solution will still be called a basic solution.
In the same way, the gaussian algorithm produces basic solutions to every homogeneous system, one
for each parameter (there are no basic solutions if the system has only the trivial solution). Moreover every
solution is given by the algorithm as a linear combination of these basic solutions (as in Example 1.3.5).
If A has rank r, Theorem 1.2.2 shows that there are exactly n − r parameters, and so n − r basic solutions.
This proves:
Theorem 1.3.2
Let A be an m × n matrix of rank r, and consider the homogeneous system in n variables with A as
coefficient matrix. Then:
1. The system has exactly n − r basic solutions, one for each parameter.
Example 1.3.6
Find basic solutions of the homogeneous system with coefficient matrix A, and express every
solution as a linear combination of the basic solutions, where
1 −3 0 2 2
−2 6 1 2 −5
A= 3 −9 −1 0
7
−3 9 2 6 −8
1 0 1
f. If there exists a solution, there are infinitely many
solutions.
1 −1
g. If there exist nontrivial solutions, the row-echelon 2 9
form of A has a row of zeros. a. y = 4
b. y = 2
0 6
h. If the row-echelon form of A has a row of zeros,
there exist nontrivial solutions. Exercise 1.3.5 For each of the following homogeneous
systems, find a set of basic solutions and express the gen-
i. If a row operation is applied to the system, the new eral solution as a linear combination of these basic solu-
system is also homogeneous. tions.
a. x1 + 2x2 − x3 + 2x4 + x5 = 0
Exercise 1.3.2 In each of the following, find all values
x1 + 2x2 + 2x3 + x5 = 0
of a for which the system has nontrivial solutions, and
2x1 + 4x2 − 2x3 + 3x4 + x5 = 0
determine all solutions in each case.
b. x1 + 2x2 − x3 + x4 + x5 = 0
a. x − 2y + z = 0 b. x + 2y + z = 0 −x1 − 2x2 + 2x3 + x5 = 0
x + ay − 3z = 0 x + 3y + 6z = 0 −x1 − 2x2 + 3x3 + x4 + 3x5 = 0
−x + 6y − 5z = 0 2x + 3y + az = 0
c. x1 + x2 − x3 + 2x4 + x5 = 0
c. x + y − z = 0 d. ax + y + z = 0 x1 + 2x2 − x3 + x4 + x5 = 0
ay − z = 0 x+y− z=0 2x1 + 3x2 − x3 + 2x4 + x5 = 0
x + y + az = 0 x + y + az = 0 4x1 + 5x2 − 2x3 + 5x4 + 2x5 = 0
1.4. An Application to Network Flow 27
a.
Does Theorem 1.3.1 imply that the system a. Show that there is a line through any pair of points
−z + 3y = 0 in the plane. [Hint: Every line has equation
has nontrivial solutions? Explain.
2x − 6y = 0 ax + by + c = 0, where a, b, and c are not all zero.]
b. Show that the converse to Theorem 1.3.1 is not b. Generalize and show that there is a plane ax+by+
true. That is, show that the existence of nontrivial cz + d = 0 through any three points in space.
solutions does not imply that there are more vari-
ables than equations. Exercise 1.3.10 The graph of
There are many types of problems that concern a network of conductors along which some sort of flow
is observed. Examples of these include an irrigation network and a network of streets or freeways. There
are often points in the system at which a net flow either enters or leaves the system. The basic principle
behind the analysis of such systems is that the total flow into the system must equal the total flow out. In
fact, we apply this principle at every junction in the system.
Junction Rule
At each of the junctions in the network, the total flow into that junction must equal the total flow
out.
This requirement gives a linear equation relating the flows in conductors emanating from the junction.
28 Systems of Linear Equations
Example 1.4.1
A network of one-way streets is shown in the accompanying diagram. The rate of flow of cars into
intersection A is 500 cars per hour, and 400 and 100 cars per hour emerge from B and C,
respectively. Find the possible flows along each street.
f1 + f2 + f3 = 500
f1 + f4 + f6 = 400
f3 + f5 − f6 = 100
f2 − f4 − f5 = 0
f1 = 400 − f4 − f6 f2 = f4 + f5 f3 = 100 − f5 + f6
This gives all solutions to the system of equations and hence all the possible flows.
Of course, not all these solutions may be acceptable in the real situation. For example, the flows
f1 , f2 , . . . , f6 are all positive in the present context (if one came out negative, it would mean traffic
flowed in the opposite direction). This imposes constraints on the flows: f1 ≥ 0 and f3 ≥ 0 become
f4 + f6 ≤ 400 f5 − f6 ≤ 100
Further constraints might be imposed by insisting on maximum values on the flow in each street.
1.5. An Application to Electrical Networks 29
In an electrical network it is often necessary to find the current in amperes (A) flowing in various parts of
the network. These networks usually contain resistors that retard the current. The resistors are indicated
by a symbol ( ), and the resistance is measured in ohms (Ω). Also, the current is increased at various
points by voltage sources (for example, a battery). The voltage of these sources is measured in volts (V),
7 This section is independent of Section 1.4
30 Systems of Linear Equations
and they are represented by the symbol ( ). We assume these voltage sources have no resistance. The
flow of current is governed by the following principles.
Ohm’s Law
The current I and the voltage drop V across a resistance R are related by the equation V = RI .
Kirchhoff’s Laws
1. (Junction Rule) The current flow into a junction equals the current flow out of that junction.
2. (Circuit Rule) The algebraic sum of the voltage drops (due to resistances) around any closed
circuit of the network must equal the sum of the voltage increases around the circuit.
When applying rule 2, select a direction (clockwise or counterclockwise) around the closed circuit and
then consider all voltages and currents positive when in this direction and negative when in the opposite
direction. This is why the term algebraic sum is used in rule 2. Here is an example.
Example 1.5.1
Find the various currents in the circuit shown.
Solution.
10Ω First apply the junction rule at junctions A, B, C, and D to obtain
A
I3
Junction A I1 = I2 + I3
10 V
Junction B I6 = I1 + I5
20Ω 5V 20 V
Junction C I2 + I4 = I6
I2
B
5Ω
D Junction D I3 + I5 = I4
I1 I6 C Note that these equations are not independent
I4
15 28
I1 = 20 I4 = 20
−1 12
I2 = 20 I5 = 20
16 27
I3 = 20 I6 = 20
The fact that I2 is negative means, of course, that this current is in the opposite direction, with a
1
magnitude of 20 amperes.
Exercise 1.5.2
Exercise 1.5.4 All resistances are 10Ω.
5V 5Ω
I6 I4
I1 10 V
I2 I2I
3
10Ω I5
5Ω I3 I1
10 V
20 V
Exercise 1.5.5 2Ω
Find the voltage x such that the current I1 = 0.
I1
1Ω
1Ω I2 5V
2V
I3
xV
32 Systems of Linear Equations
When a chemical reaction takes place a number of molecules combine to produce new molecules. Hence,
when hydrogen H2 and oxygen O2 molecules combine, the result is water H2 O. We express this as
H2 + O2 → H2 O
Individual atoms are neither created nor destroyed, so the number of hydrogen and oxygen atoms going
into the reaction must equal the number coming out (in the form of water). In this case the reaction is
said to be balanced. Note that each hydrogen molecule H2 consists of two atoms as does each oxygen
molecule O2 , while a water molecule H2 O consists of two hydrogen atoms and one oxygen atom. In the
above reaction, this requires that twice as many hydrogen molecules enter the reaction; we express this as
follows:
2H2 + O2 → 2H2 O
This is now balanced because there are 4 hydrogen atoms and 2 oxygen atoms on each side of the reaction.
Example 1.6.1
Balance the following reaction for burning octane C8 H18 in oxygen O2 :
C8 H18 + O2 → CO2 + H2 O
where CO2 represents carbon dioxide. We must find positive integers x, y, z, and w such that
Equating the number of carbon, hydrogen, and oxygen atoms on each side gives 8x = z, 18x = 2w
and 2y = 2z + w, respectively. These can be written as a homogeneous linear system
8x − z =0
18x − 2w = 0
2y − 2z − w = 0
which can be solved by gaussian elimination. In larger systems this is necessary but, in such a
simple situation, it is easier to solve directly. Set w = t, so that x = 19 t, z = 89 t, 2y = 16 25
9 t + t = 9 t.
But x, y, z, and w must be positive integers, so the smallest value of t that eliminates fractions is 18.
Hence, x = 2, y = 25, z = 16, and w = 18, and the balanced reaction is
It is worth noting that this problem introduces a new element into the theory of linear equations: the
insistence that the solution must consist of positive integers.
1.6. An Application to Chemical Reactions 33
Exercise 1.6.1 CH4 + O2 → CO2 + H2 O. This is the Exercise 1.6.3 CO2 + H2 O → C6 H12 O6 + O2 . This
burning of methane CH4 . is called the photosynthesis reaction—C6 H12 O6 is glu-
cose.
Exercise 1.6.2 NH3 + CuO → N2 + Cu + H2 O. Here
NH3 is ammonia, CuO is copper oxide, Cu is copper, Exercise 1.6.4 Pb(N3 )2 + Cr(MnO4 )2 → Cr2 O3 +
and N2 is nitrogen. MnO2 + Pb3 O4 + NO.
Exercise 1.1 We show in Chapter 4 that the graph of an Exercise 1.4 Show that any two rows of a matrix can be
equation ax + by + cz = d is a plane in space when not all interchanged by elementary row transformations of the
of a, b, and c are zero. other two types.
a b
a. By examining the possible positions of planes in Exercise 1.5 If ad 6= bc, show that has re-
c d
space, show that three equations in three variables
1 0
can have zero, one, or infinitely many solutions. duced row-echelon form .
0 1
b. Can two equations in three variables have a unique Exercise 1.6 Find a, b, and c so that the system
solution? Give reasons for your answer.
x + ay + cz = 0
Exercise 1.2 Find all solutions to the following systems bx + cy − 3z = 1
of linear equations. ax + 2y + bz = 5
Exercise 1.10 A restaurant owner plans to use x tables Exercise 1.13 Solve the following system of equations
seating 4, y tables seating 6, and z tables seating 8, for a for x and y.
x2 + xy − y2 = 1
total of 20 tables. When fully occupied, the tables seat
2x2 − xy + 3y2 = 13
108 customers. If only half of the x tables, half of the y
x2 + 3xy + 2y2 = 0
tables, and one-fourth of the z tables are used, each fully
occupied, then 46 customers will be seated. Find x, y, [Hint: These equations are linear in the new variables
and z. x1 = x2 , x2 = xy, and x3 = y2 .]
2. Matrix Algebra
In the study of systems of linear equations in Chapter 1, we found it convenient to manipulate the aug-
mented matrix of the system. Our aim was to reduce it to row-echelon form (using elementary row oper-
ations) and hence to write down all solutions to the system. In the present chapter we consider matrices
for their own sake. While some of the motivation comes from linear equations, it turns out that matrices
can be multiplied and added and so form an algebraic system somewhat analogous to the real numbers.
This “matrix algebra” is useful in ways that are quite different from the study of linear equations. For
example, the geometrical transformations obtained by rotating the euclidean plane about the origin can be
viewed as multiplications by certain 2 × 2 matrices. These “matrix transformations” are an important tool
in geometry and, in turn, the geometry provides a “picture” of the matrices. Furthermore, matrix algebra
has many other applications, some of which will be explored in this chapter. This subject is quite old and
was first studied systematically in 1858 by Arthur Cayley.1
A rectangular array of numbers is called a matrix (the plural is matrices), and the numbers are called the
entries of the matrix. Matrices are usually denoted by uppercase letters: A, B, C, and so on. Hence,
1
1 2 −1 1 −1
A= B= C= 3
0 5 6 0 2
2
are matrices. Clearly matrices come in various shapes depending on the number of rows and columns.
For example, the matrix A shown has 2 rows and 3 columns. In general, a matrix with m rows and n
columns is referred to as an m × n matrix or as having size m × n . Thus matrices A, B, and C above have
sizes 2 × 3, 2 × 2, and 3 × 1, respectively. A matrix of size 1 × n is called a row matrix, whereas one of
size m × 1 is called a column matrix. Matrices of size n × n for some n are called square matrices.
Each entry of a matrix is identified by the row and column in which it lies. The rows are numbered
from the top down, and the columns are numbered from left to right. Then the ( i , j ) -entry of a matrix is
1 Arthur Cayley (1821-1895) showed his mathematical talent early and graduated from Cambridge in 1842 as senior wran-
gler. With no employment in mathematics in view, he took legal training and worked as a lawyer while continuing to do
mathematics, publishing nearly 300 papers in fourteen years. Finally, in 1863, he accepted the Sadlerian professorship in Cam-
bridge and remained there for the rest of his life, valued for his administrative and teaching skills as well as for his scholarship.
His mathematical achievements were of the first rank. In addition to originating matrix theory and the theory of determinants,
he did fundamental work in group theory, in higher-dimensional geometry, and in the theory of invariants. He was one of the
most prolific mathematicians of all time and produced 966 papers.
35
36 Matrix Algebra
A special notation is commonly used for the entries of a matrix. If A is an m × n matrix, and if the
(i, j)-entry of A is denoted as ai j , then A is displayed as follows:
a11 a12 a13 · · · a1n
a21 a22 a23 · · · a2n
A = .. .. .. ..
. . . .
am1 am2 am3 · · · amn
This is usually denoted simply as A = ai j . Thus ai j is the entry in row i and column j of A. For example,
a 3 × 4 matrix in this notation is written
a11 a12 a13 a14
A = a21 a22 a23 a24
a31 a32 a33 a34
It is worth pointing out a convention regarding rows and columns: Rows are mentioned before columns.
For example:
• If an entry is denoted ai j , the first subscript i refers to the row and the second subscript j to the
column in which ai j lies.
Two points (x1 , y1 ) and (x2 , y2 ) in the plane are equal if and only if2 they have the same coordinates,
that is x1 = x2 and y1 = y2 . Similarly, two matrices A and B are called equal (written A = B) if and only if:
2 If
p and q are statements, we say that p implies q if q is true whenever p is true. Then “p if and only if q” means that both
p implies q and q implies p. See Appendix B for more on this.
2.1. Matrix Addition, Scalar Multiplication, and Transposition 37
Example 2.1.1
a b 1 2 −1 1 0
Given A = ,B= and C = discuss the possibility that A = B,
c d 3 0 1 −1 2
B = C, A = C.
Matrix Addition
If A = ai j and B = bi j , this takes the form
A + B = ai j + bi j
Example 2.1.2
2 1 3 1 1 −1
If A = and B = , compute A + B.
−1 2 0 2 0 6
Solution.
2+1 1+1 3−1 3 2 2
A+B = =
−1 + 2 2 + 0 0 + 6 1 2 6
Example 2.1.3
Find a, b, and c if a b c + c a b = 3 2 −1 .
Because corresponding entries must be equal, this gives three equations: a + c = 3, b + a = 2, and
c + b = −1. Solving these yields a = 3, b = −1, c = 0.
38 Matrix Algebra
0+X = X
A + (−A) = 0
holds for all matrices A where, of course, 0 is the zero matrix of the same size as A.
A closely related notion is that of subtracting matrices. If A and B are two m × n matrices, their
difference A − B is defined by
A − B = A + (−B)
Note that if A = ai j and B = bi j , then
A − B = ai j + −bi j = ai j − bi j
Example 2.1.4
3 −1 0 1 −1 1 1 0 −2
Let A = ,B= ,C= . Compute −A, A − B, and
1 2 −4 −2 0 6 3 1 1
A + B −C.
Solution.
−3 1 0
−A =
−1 −2 4
3−1 −1 − (−1) 0−1 2 0 −1
A−B = =
1 − (−2) 2−0 −4 − 6 3 2 −10
3 + 1 − 1 −1 − 1 − 0 0 + 1 − (−2) 3 −2 3
A + B −C = =
1−2−3 2 + 0 − 1 −4 + 6 − 1 −4 1 1
2.1. Matrix Addition, Scalar Multiplication, and Transposition 39
Example 2.1.5
3 2 1 0
Solve +X = where X is a matrix.
−1 1 −1 2
The reader should verify that this matrix X does indeed satisfy the original equation.
The solution in Example 2.1.5 solves the single matrix equation A + X = B directly via matrix subtrac-
tion: X = B − A. This ability to work with matrices as entities lies at the heart of matrix algebra.
It is important to note that the sizes of matrices involved in some calculations are often determined by
the context. For example, if
1 3 −1
A +C =
2 0 1
then A and C must be the same size (so that A +C makes sense), and that size must be 2 × 3 (so that the
sum is 2 × 3). For simplicity we shall often omit reference to such facts when they are clear from the
context.
Scalar Multiplication
In gaussian elimination, multiplying a row of a matrix by a number k means multiplying every entry of
that row by k.
If A = ai j , this is
kA = kai j
Thus 1A = A and (−1)A = −A for any matrix A.
The term scalar arises here because the set of numbers from which the entries are drawn is usually
referred to as the set of scalars. We have been using real numbers as scalars, but we could equally well
have been using complex numbers.
40 Matrix Algebra
Example 2.1.6
3 −1 4 1 2 −1
If A = and B = compute 5A, 12 B, and 3A − 2B.
2 0 1 0 3 2
Solution.
1 1
15 −5 20 1 1 −
5A = , 2B = 2 3 2
10 0 30 0 2 1
9 −3 12 2 4 −2 7 −7 14
3A − 2B = − =
6 0 18 0 6 4 6 −6 14
If A is any matrix, note that kA is the same size as A for all scalars k. We also have
0A = 0 and k0 = 0
because the zero matrix has every entry zero. In other words, kA = 0 if either k = 0 or A = 0. The converse
of this statement is also true, as Example 2.1.7 shows.
Example 2.1.7
If kA = 0, show that either k = 0 or A = 0.
Solution. Write A = ai j so that kA = 0 means kai j = 0 for all i and j. If k = 0, there is nothing to
do. If k 6= 0, then kai j = 0 implies that ai j = 0 for all i and j; that is, A = 0.
For future reference, the basic properties of matrix addition and scalar multiplication are listed in
Theorem 2.1.1.
Theorem 2.1.1
Let A, B, and C denote arbitrary m × n matrices where m and n are fixed. Let k and p denote
arbitrary real numbers. Then
1. A + B = B + A.
2. A + (B +C) = (A + B) +C.
5. k(A + B) = kA + kB.
6. (k + p)A = kA + pA.
7. (kp)A = k(pA).
8. 1A = A.
2.1. Matrix Addition, Scalar Multiplication, and Transposition 41
Proof. Properties 1–4 were given previously.
To check Property 5, let A = a i j and B = bi j denote
matrices of the same size. Then A + B = ai j + bi j , as before, so the (i, j)-entry of k(A + B) is
But this is just the (i, j)-entry of kA + kB, and it follows that k(A + B) = kA + kB. The other Properties
can be similarly verified; the details are left to the reader.
The Properties in Theorem 2.1.1 enable us to do calculations with matrices in much the same way that
numerical calculations are carried out. To begin, Property 2 implies that the sum
(A + B) +C = A + (B +C)
is the same no matter how it is formed and so is written as A + B +C. Similarly, the sum
A + B +C + D
B + D + A +C = A + B +C + D
In other words, the order in which the matrices are added does not matter. A similar remark applies to
sums of five (or more) matrices.
Properties 5 and 6 in Theorem 2.1.1 are called distributive laws for scalar multiplication, and they
extend to sums of more than two terms. For example,
k(A + B −C) = kA + kB − kC
(k + p − m)A = kA + pA − mA
Similar observations hold for more than three summands. These facts, together with properties 7 and
8, enable us to simplify expressions by collecting like terms, expanding, and taking common factors in
exactly the same way that algebraic expressions involving variables and real numbers are manipulated.
The following example illustrates these techniques.
Example 2.1.8
Simplify 2(A + 3C) − 3(2C − B) − 3 [2(2A + B − 4C) − 4(A − 2C)] where A, B, and C are all
matrices of the same size.
Transpose of a Matrix
Many results about a matrix A involve the rows of A, and the corresponding result for columns is derived
in an analogous way, essentially by replacing the word row by the word column throughout. The following
definition is made with such applications in mind.
In other words, the first row of AT is the first column of A (that is it consists of the entries of column 1 in
order). Similarly the second row of AT is the second column of A, and so on.
Example 2.1.9
Write down the transpose of each of the following matrices.
1 1 2 3 1 −1
A= 3 B= 5 2 6 C= 3 4 D= 1 3 2
2 5 6 −1 2 1
Solution.
5
1 3 5
A = 1 3 2 , B = 2 , C =
T T T
, and DT = D.
2 4 6
6
If A = ai j is a matrix, write AT = bi j . Then bi j is the jth element of the ith row of AT and so is the
jth element of the ith column of A. This means bi j = a ji , so the definition of AT can be stated as follows:
If A = ai j , then AT = a ji . (2.1)
Theorem 2.1.2
Let A and B denote matrices of the same size, and let k denote a scalar.
2. (AT )T = A.
3. (kA)T = kAT .
4. (A + B)T = AT + BT .
2.1. Matrix Addition, Scalar Multiplication, and Transposition 43
Property 1 is part
of the definition of A , and Property 2 follows from (2.1). As to Property 3: If
Proof. T
A = ai j , then kA = kai j , so (2.1) gives
(kA)T = ka ji = k a ji = kAT
Finally, if B = bi j , then A + B = ci j where ci j = ai j + bi j Then (2.1) gives Property 4:
T
(A + B)T = ci j = c ji = a ji + b ji = a ji + b ji = AT + BT
There is another useful way to think of transposition. If A = ai j is an m × n matrix, the elements
a11 , a22 , a33 , . . . are called the main diagonal of A. Hence the main diagonal extends down and to the
right from the upper left corner of the matrix A; it is shaded in the following examples:
a11 a12 a11 a12 a13
a21 a22 a11 a12 a13 a21 a22 a23 a11
a21 a22 a23 a21
a31 a32 a31 a32 a33
Thus forming the transpose of a matrix A can be viewed as “flipping” A about its main diagonal, or
as “rotating” A through 180◦ about the line containing the main diagonal. This makes Property 2 in
Theorem 2.1.2 transparent.
Example 2.1.10
T
1 2 2 3
Solve for A if 2A − 3
T = .
−1 1 −1 2
Note that Example 2.1.10 can also be solved by first transposing both sides, then solving for AT , and so
obtaining A = (AT )T . The reader should do this.
1 2
The matrix D = in Example 2.1.9 has the property that D = DT . Such matrices are important;
2 5
a matrix A is called symmetric if A = AT . A symmetric matrix A is necessarily square (if A is m × n, then
AT is n×m, so A = AT forces n = m). The name comes from the fact that these matrices exhibit a symmetry
44 Matrix Algebra
about the main diagonal. That is, entries that are directly across the main diagonal from each other are
equal.
a b c
For example, b′ d e is symmetric when b = b′ , c = c′ , and e = e′ .
c′ e′ f
Example 2.1.11
If A and B are symmetric n × n matrices, show that A + B is symmetric.
Example 2.1.12
Suppose a square matrix A satisfies A = 2AT . Show that necessarily A = 0.
T
2 1 1 −1 Exercise 2.1.9 If A is any 2 × 2 matrix, show that:
h. 3 −2
−1 0 2 3
1 0 0 1 0 0
a. A = a +b +c +
2 1 0 0 0 0 1 0
Exercise 2.1.3 Let A = ,
0 −1 0 0
d for some numbers a, b, c, and d.
3 −1 2 3 −1 0 1
B= ,C= ,
0 1 4 2 0
1 0 1 1 1 0
1 3 b. A = p +q +r +
1 0 1 0 1 0 0 1 0
D = −1 0 , and E = .
0 1 0 0 1
1 4 s for some numbers p, q, r, and s.
Compute the following (where possible). 1 0
a. 3A − 2B b. 5C
Exercise
2.1.10 Let A = 1 1 −1 ,
c. 3E T d. B + D B = 0 1 2 , and C = 3 0 1 . If
e. 4AT − 3C f. (A +C)T rA + sB + tC = 0 for some scalars r, s, and t, show that
necessarily r = s = t = 0.
g. 2B − 3E h. A − D
Exercise 2.1.11
i. (B − 2E)T
a. If Q + A = A holds for every m × n matrix A, show
Exercise 2.1.4 Find A if:
that Q = 0mn .
1 0 5 2
a. 5A − = 3A − b. If A is an m × n matrix and A + A′ = 0mn , show that
2 3 6 1
A′ = −A.
2 3
b. 3A − = 5A − 2
1 0 Exercise 2.1.12 If A denotes an m × n matrix, show that
A = −A if and only if A = 0.
Exercise 2.1.5 Find A in terms of B if:
Exercise 2.1.13 A square matrix is called a diagonal
matrix if all the entries off the main diagonal are zero. If
a. A + B = 3A + 2B b. 2A − B = 5(A + 2B)
A and B are diagonal matrices, show that the following
matrices are also diagonal.
Exercise 2.1.6 If X , Y , A, and B are matrices of the same
size, solve the following systems of equations to obtain a. A + B b. A − B
X and Y in terms of A and B.
c. kA for any number k
a. 5X + 3Y = A b. 4X + 3Y = A
2X +Y = B 5X + 4Y = B Exercise 2.1.14 In each case determine all s and t such
that the given matrix is symmetric:
Exercise 2.1.7 Find all matrices X and Y such that:
1 s s t
a. 3X −2Y = 3 −1 b. 2X − 5Y = 1 2 a. b.
−2 t st 1
s 2s st 2 s t
Exercise 2.1.8 Simplify the following expressions
c. t −1 s d. 2s 0 s + t
where A, B, and C are matrices.
t s2 s 3 3 t
a. 2 [9(A − B) + 7(2B − A)]
−2 [3(2B + A) − 2(A + 3B) − 5(A + B)] Exercise 2.1.15 In each case find the matrix A.
b. 5 [3(A − B + 2C) − 2(3C − B) − A] T 2 1
1 −1 0
+2 [3(3A − B +C) + 2(B − 2A) − 2C] a. A+3 = 0 5
1 2 4
3 8
46 Matrix Algebra
T
1 0 8 0 Exercise 2.1.20 A square matrix W is called skew-
b. 3AT +2 =
0 2 3 1 symmetric if W T = −W . Let A be any square matrix.
T T
c. 2A − 3 1 2 0 = 3AT + 2 1 −1 a. Show that A − AT is skew-symmetric.
T
1 0 1 1 b. Find a symmetric matrix S and a skew-symmetric
d. 2AT −5 = 4A − 9
−1 2 −1 0 matrix W such that A = S +W .
Exercise 2.1.16 Let A and B be symmetric (of the same c. Show that S and W in part (b) are uniquely deter-
size). Show that each of the following is symmetric. mined by A.
c. If the (3, 1)-entry of A is 5, then the (1, 3)-entry a. k(A1 + A2 + · · · + An ) = kA1 + kA2 + · · · + kAn for
of AT is −5. any number k
d. A and AT have the same main diagonal for every b. (k1 + k2 + · · · + kn )A = k1 A + k2 A + · · · + kn A for
matrix A. any numbers k1 , k2 , . . . , kn
Up to now we have used matrices to solve systems of linear equations by manipulating the rows of the
augmented matrix. In this section we introduce a different way of describing linear systems that makes
more use of the coefficient matrix of the system and leads to a useful way of “multiplying” matrices.
Vectors
It is a well-known fact in analytic geometry that two points in the plane with coordinates (a1 , a2 ) and
(b1 , b2 ) are equal if and only if a1 = b1 and a2 = b2 . Moreover, a similar condition applies to points
(a1 , a2 , a3 ) in space. We extend this idea as follows.
An ordered sequence (a1 , a2 , . . . , an ) of real numbers is called an ordered n -tuple. The word “or-
dered” here reflects our insistence that two ordered n-tuples are equal if and only if corresponding entries
are the same. In other words,
Thus the ordered 2-tuples and 3-tuples are just the ordered pairs and triples familiar from geometry.
There
are two commonly used ways to denote the n-tuples in R : As rows (r1 , r2 , . . . , rn ) or columns
n
r1
r2
.. ; the notation we use depends on the context. In any event they are called vectors or n-vectors and
.
rn
will be denoted using bold type such as x or v. For example, an m × n matrix A will be written as a row of
columns:
A = a1 a2 · · · an where a j denotes column j of A for each j.
If x and y are two n-vectors in Rn , it is clear that their matrix sum x + y is also in Rn as is the scalar
multiple kx for any real number k. We express this observation by saying that Rn is closed under addition
and scalar multiplication. In particular, all the basic properties in Theorem 2.1.1 are true of these n-vectors.
These properties are fundamental and will be used frequently below without comment. As for matrices in
general, the n × 1 zero matrix is called the zero n -vector in Rn and, if x is an n-vector, the n-vector −x is
called the negative x.
Of course, we have already encountered these n-vectors in Section 1.3 as the solutions to systems of
linear equations with n variables. In particular we defined the notion of a linear combination of vectors
and showed that a linear combination of solutions to a homogeneous system is again a solution. Clearly, a
linear combination of n-vectors in Rn is again in Rn , a fact that we will be using.
48 Matrix Algebra
Matrix-Vector Multiplication
Given a system of linear equations, the left sides of the equations depend only on the coefficient matrix A
and the column x of variables, and not on the constants. This observation leads to a fundamental idea in
linear algebra: We view the left sides of the equations as the “product” Ax of the matrix A and the vector
x. This simple change of perspective leads to a completely new way of viewing linear systems—one that
is very useful and will occupy our attention throughout this book.
To motivate the definition of the “product” Ax, consider first the following system of two equations in
three variables:
ax1 + bx2 + cx3 = b1
(2.2)
a′ x1 + b′ x2 + c′ x3 = b1
x1
a b c b1
and let A = , x = x2 , b = denote the coefficient matrix, the variable matrix, and
a′ b′ c′ b2
x3
the constant matrix, respectively. The system (2.2) can be expressed as a single vector equation
ax1 + bx2 + cx3 b1
=
a′ x1 + b′ x2 + c′ x3 b2
which in turn can be written as follows:
a b c b1
x1 + x2 + x3 =
a′ b′ c′ b2
Now observe that the vectors appearing on the left side are just the columns
a b c
a1 = , a2 = , and a3 =
a′ b′ c′
of the coefficient matrix A. Hence the system (2.2) takes the form
x1 a1 + x2 a2 + x3 a3 = b (2.3)
This shows that the system (2.2) has a solution if and only if the constant matrix b is a linear combination3
of the columns of A, and that in this case the entries of the solution are the coefficients x1 , x2 , and x3 in
this linear combination.
Moreover, this holds in general. If A is any m × n matrix, it is often convenient to view A as a row of
columns. That is, if a1 , a2 , . . . , an are the columns of A, we write
A = a1 a2 · · · an
and say that A = a1 a2 · · · an is given in terms of its columns.
Now consider any system oflinearequations with m × n coefficient matrix A. If b is the constant
x1
x2
matrix of the system, and if x = .. is the matrix of variables then, exactly as above, the system can
.
xn
3 Linear
combinations were introduced in Section 1.3 to describe the solutions of homogeneous systems of linear equations.
They will be used extensively in what follows.
2.2. Matrix-Vector Multiplication 49
x1 a1 + x2 a2 + · · · + xn an = b (2.4)
Example 2.2.1
3x1 + 2x2 − 4x3 = 0
Write the system x1 − 3x2 + x3 = 3 in the form given in (2.4).
x2 − 5x3 = −1
Solution.
3 2 −4 0
x1 1 + x2 −3 + x3 1 = 3
0 1 −5 −1
As mentioned above, we view the left side of (2.4) as the product of the matrix A and the vector x.
This basic idea is formalized in the following definition:
Ax = x1 a1 + x2 a2 + · · · + xn an
In other words, if A is m × n and x is an n-vector, the product Ax is the linear combination of the columns
of A where the coefficients are the entries of x (in order).
Note that if A is an m × n matrix, the product Ax is only defined if x is an n-vector and then the vector
Ax is an m-vector because this is true of each column a j of A. But in this case the system of linear equations
with coefficient matrix A and constant vector b takes the form of a single matrix equation
Ax = b
The following theorem combines Definition 2.5 and equation (2.4) and summarizes the above discussion.
Recall that a system of linear equations is said to be consistent if it has at least one solution.
Theorem 2.2.1
1. Every system of linear equations has the form Ax = b where A is the coefficient matrix, b is
the constant matrix, and x is the matrix of variables.
x1
x2
3. If a1 , a2 , . . . , an are the columns of A and if x = .. , then x is a solution to the linear
.
xn
system Ax = b if and only if x1 , x2 , . . . , xn are a solution of the vector equation
x1 a1 + x2 a2 + · · · + xn an = b
A system of linear equations in the form Ax = b as in (1) of Theorem 2.2.1 is said to be written in matrix
form. This is a useful way to view linear systems as we shall see.
Theorem 2.2.1 transforms the problem of solving the linear system Ax = b into the problem of ex-
pressing the constant matrix B as a linear combination of the columns of the coefficient matrix A. Such
a change in perspective is very useful because one approach or the other may be better in a particular
situation; the importance of the theorem is that there is a choice.
Example 2.2.2
2
2 −1 3 5 1
If A = 0 2 −3 1 and x =
0 , compute Ax.
−3 4 1 2
−2
2 −1 3 5 −7
Solution. By Definition 2.5: Ax = 2 0 + 1 2 + 0 −3 − 2 1 = 0 .
−3 4 1 2 −6
Example 2.2.3
Given columns a1 , a2 , a3 , and a4 in R3 , write 2a1 − 3a2 + 5a3 + a4 in the form Ax where A is a
matrix and x is a vector.
2
−3
Solution. Here the column of coefficients is x =
5 . Hence Definition 2.5 gives
1
Example 2.2.4
2
Let A = a1 a2 a3 a4 be the 3 × 4 matrix given in terms of its columns a1 = 0 ,
−1
1 3 3
a2 = 1 , a3 = −1 , and a4 = 1 . In each case below, either express b as a linear
1 −3 0
combination of a1 , a2 , a3 , and a4 , or show that it is not such a linear combination. Explain what
your answer means for the corresponding system Ax = b of linear equations.
1 4
a. b = 2 b. b = 2
3 1
Thus b is a linear combination of a1 , a2 , a3 , and a4 in this case. In fact the general solution is
x1 = 1 − 2s − t, x2 = 2 + s − t, x3 and x4 = t where s and t are arbitrary parameters. Hence
= s,
4
x1 a1 + x2 a2 + x3 a3 + x4 a4 = b = 2 for any choice of s and t. If we take s = 0 and t = 0, this
1
becomes a1 + 2a2 = b, whereas taking s = 1 = t gives −2a1 + 2a2 + a3 + a4 = b.
Example 2.2.5
Taking A to be the zero matrix, we have 0x = 0 for all vectors x by Definition 2.5 because every
column of the zero matrix is zero. Similarly, A0 = 0 for all matrices A because every entry of the
zero vector is zero.
52 Matrix Algebra
Example 2.2.6
1 0 0
If I = 0 1 0 , show that Ix = x for any vector x in R3 .
0 0 1
x1
Solution. If x = x2 then Definition 2.5 gives
x3
1 0 0 x1 0 0 x1
Ix = x1 0 + x2 1 + x3 0 = 0 + x2 + 0 = x2 = x
0 0 1 0 0 x3 x3
The matrix I in Example 2.2.6 is called the 3 × 3 identity matrix, and we will encounter such matrices
again in Example 2.2.11 below. Before proceeding, we develop some algebraic properties of matrix-vector
multiplication that are used extensively throughout linear algebra.
Theorem 2.2.2
Let A and B be m × n matrices, and let x and y be n-vectors in Rn . Then:
1. A(x + y) = Ax + Ay.
3. (A + B)x = Ax + Bx.
Proof. We prove (3); the other
verifications are similar and are left as exercises. Let A = a 1 a 2 · · · an
and B = b1 b2 · · · bn be given in terms of their columns. Since adding two matrices is the same
as adding their columns, we have
A + B = a1 + b1 a2 + b2 · · · an + bn
x1
x2
If we write x = .. Definition 2.5 gives
.
xn
(A + B)x = x1 (a1 + b1 ) + x2 (a2 + b2 ) + · · · + xn (an + bn )
= (x1 a1 + x2 a2 + · · · + xn an ) + (x1 b1 + x2 b2 + · · · + xn bn )
= Ax + Bx
Theorem 2.2.2 allows matrix-vector computations to be carried out much as in ordinary arithmetic. For
example, for any m × n matrices A and B and any n-vectors x and y, we have:
A(2x − 5y) = 2Ax − 5Ay and (3A − 7B)x = 3Ax − 7Bx
2.2. Matrix-Vector Multiplication 53
We will use such manipulations throughout the book, often without mention.
Linear Equations
Theorem 2.2.2 also gives a useful way to describe the solutions to a system
Ax = b
Ax = 0
called the associated homogeneous system, obtained from the original system Ax = b by replacing all
the constants by zeros. Suppose x1 is a solution to Ax = b and x0 is a solution to Ax = 0 (that is Ax1 = b
and Ax0 = 0). Then x1 + x0 is another solution to Ax = b. Indeed, Theorem 2.2.2 gives
Theorem 2.2.3
Suppose x1 is any particular solution to the system Ax = b of linear equations. Then every solution
x2 to Ax = b has the form
x2 = x0 + x1
for some solution x0 of the associated homogeneous system Ax = 0.
Example 2.2.7
Express every solution to the following system as the sum of a specific solution plus a solution to
the associated homogeneous system.
x1 − x2 − x3 + 3x4 = 2
2x1 − x2 − 3x3 + 4x4 = 6
x1 − 2x3 + x4 = 4
54 Matrix Algebra
Theorem 2.2.4
Let Ax = b be a system of equations with augmented matrix A b . Write rank A = r.
1. rank A b is either r or r + 1.
2. The system is consistent if and only if rank A b = r.
3. The system is inconsistent if and only if rank A b = r + 1.
Definition 2.5 is not always the easiest way to compute a matrix-vector product Ax because it requires
that the columns of A be explicitly identified. There is another way to find such a product which uses the
matrix A as a whole with no reference to its columns, and hence is useful in practice. The method depends
on the following notion.
To see how this relates to matrix products, let A denote a 3 × 4 matrix and let x be a 4-vector. Writing
x1
x2 a11 a12 a13 a14
x=
x3 and A = a21 a22 a23 a24
a31 a32 a33 a34
x4
in the notation of Section 2.1, we compute
x1
a11 a12 a13 a14 a11 a12 a13 a14
x2
Ax = a21 a22 a23 a24
x3 = x1 a21 + x2 a22 + x3 a23 + x4 a24
a31 a32 a33 a34 a31 a32 a33 a34
x4
a11 x1 + a12 x2 + a13 x3 + a14 x4
= a21 x1 + a22 x2 + a23 x3 + a24 x4
a31 x1 + a32 x2 + a33 x3 + a34 x4
From this we see that each entry of Ax is the dot product of the corresponding row of A with x. This
computation goes through in general, and we record the result in Theorem 2.2.5.
row i entry i As an illustration, we rework Example 2.2.2 using the dot product rule
instead of Definition 2.5.
Example 2.2.8
2
2 −1 3 5 1
If A = 0 2 −3 1 and x =
0 , compute Ax.
−3 4 1 2
−2
56 Matrix Algebra
Solution. The entries of Ax are the dot products of the rows of A with x:
2
2 −1 3 5 2 · 2 + (−1)1 + 3 · 0 + 5(−2) −7
1
Ax = 0 2 −3 1 0 =
0·2 + 2 · 1 + (−3)0 + 1(−2) = 0
−3 4 1 2 (−3)2 + 4·1 + 1 · 0 + 2(−2) −6
−2
Example 2.2.9
Write the following system of linear equations in the form Ax = b.
x1
5 −1 2 1 −3 8 x2
Solution. Write A = 1 1 3 −5 2 , b = −2 , and x =
x3 . Then the dot
−1 1 −2 0 −3 0 x4
x5
5x1 − x2 + 2x3 + x4 − 3x5
product rule gives Ax = x1 + x2 + 3x3 − 5x4 + 2x5 , so the entries of Ax are the left sides of
−x1 + x2 − 2x3 − 3x5
the equations in the linear system. Hence the system becomes Ax = b because matrices are equal if
and only corresponding entries are equal.
Example 2.2.10
If A is the zero m × n matrix, then Ax = 0 for each n-vector x.
Solution. For each k, entry k of Ax is the dot product of row k of A with x, and this is zero because
row k of A consists of zeros.
Example 2.2.11
For each n ≥ 2 we have In x = x for each n-vector x in Rn .
x1
x2
Solution. We verify the case n = 4. Given the 4-vector x =
x3 the dot product rule gives
x4
1 0 0 0 x1 x1 + 0 + 0 + 0 x1
0 1 0 0 x2 0 + x2 + 0 + 0 x2
I4 x =
0 0 1 0 x3 = 0 + 0 + x3 + 0 = x3 = x
0 0 0 1 x4 0 + 0 + 0 + x4 x4
In general, In x = x because entry k of In x is the dot product of row k of In with x, and row k of In
has 1 in position k and zeros elsewhere.
Example 2.2.12
Let A = a1 a2 · · · an be any m × n matrix with columns a1 , a2 , . . . , an . If e j denotes
column j of the n × n identity matrix In , then Ae j = a j for each j = 1, 2, . . . , n.
t1
t2
Solution. Write e j = .. where t j = 1, but ti = 0 for all i 6= j. Then Theorem 2.2.5 gives
.
tn
Ae j = t1 a1 + · · · + t j a j + · · · + tn an = 0 + · · · + a j + · · · + 0 = a j
Theorem 2.2.6
Let A and B be m × n matrices. If Ax = Bx for all x in Rn , then A = B.
Proof. Write A = a1 a2 · · · an and B = b1 b2 · · · bn and in terms of their columns. It is
enough to show that ak = bk holds for all k. But we are assuming that Aek = Bek , which gives ak = bk by
Example 2.2.12.
58 Matrix Algebra
We have introduced matrix-vector multiplication as a new way to think about systems of linear equa-
tions. But it has several other uses as well. It turns out that many geometric operations can be described
using matrix multiplication, and we now investigate how this happens. As a bonus, this description pro-
vides a geometric “picture” of a matrix by revealing the effect on a vector when it is multiplied by A. This
“geometric view” of matrices is a fundamental tool in understanding them.
Transformations
The set R2 hasa geometrical interpretation as the euclidean plane where
x2 a1
a vector in R2 represents the point (a1 , a2 ) in the plane (see Fig-
a2
a2
a1
ure 2.2.1). In this way we regard R2 as the set of all points in the plane.
Accordingly, we will refer to vectors in R2 as points, and denote their
a2
Example 2.2.13
Consider the transformation of R2 givenby reflection
in the
y a1
x axis. This operation carries the vector to its reflection
a1
a2
a2
a1
as in Figure 2.2.3. Now observe that
−a2
x
0
a1 1 0 a1
=
a1
−a2 0 −1 a2
−a2
a1
Figure 2.2.3 so reflecting in the x axis can be achieved by multiplying
a
2
1 0
by the matrix .
0 −1
4
This “arrow” representation of vectors in R2 and R3 will be used extensively in Chapter 4.
2.2. Matrix-Vector Multiplication 59
1 0
If we write A = , Example 2.2.13 shows that reflection in the x axis carries each vector x in
0 −1
R2 to the vector Ax in R2 . It is thus an example of a function
As such it is a generalization of the familiar functions f : R → R that carry a number x to another real
number f (x).
More generally, functions T : Rn → Rm are called transformations
from Rn to Rm . Such a transformation T is a rule that assigns to every
T vector x in Rn a uniquely determined vector T (x) in Rm called the image
x T (x) of x under T . We denote this state of affairs by writing
T
Rn Rm T : Rn → Rm or Rn −
→ Rm
that the action defines the transformation means that we regard two transformations S : Rn → Rm and
T : Rn → Rm as equal if they have the same action; more formally
Example 2.2.14
x1
x2 x1 + x2
The formula T 4 3
x3 = x2 + x3 defines a transformation R → R .
x3 + x4
x4
Example 2.2.13 suggests that matrix multiplication is an important way of defining transformations
Rn → Rm . If A is any m × n matrix, multiplication by A gives a transformation
Example 2.2.15
Let R π : R2 → R2 denote counterclockwise rotation about the origin through π
2 radians (that is,
2
0 −1
90◦ )5 . Show that R π is induced by the matrix .
2 1 0
Solution.
y a
The effect of R π is to rotate the vector x =
2 b
−b
counterclockwise through π2 to produce the vector R π (x) shown
R π (x) = 2
2 a
q in Figure 2.2.5. Since triangles 0px and 0qR π (x) are identical,
b 2
x=
a −b −b 0 −1 a
a b we obtain R π (x) = . But = ,
b 2 a a 1 0 b
a
x 0 −1
0 p
so we obtain R π (x) = Ax for all x in R2 where A = .
2 1 0
In other words, R π is the matrix transformation induced by A.
Figure 2.2.5 2
That is, the action of 1Rn on x is to do nothing to it. If In denotes the n × n identity matrix, we showed in
Example 2.2.11 that In x = x for all x in Rn . Hence 1Rn (x) = In x for all x in Rn ; that is, the identity matrix
In induces the identity transformation.
Here are two more examples of matrix transformations with a clear geometric description.
5 Radian measure for angles is based on the fact that 360◦ equals 2π radians. Hence π radians = 180◦ and π
radians = 90◦ .
2
2.2. Matrix-Vector Multiplication 61
Example 2.2.16
x ax a 0
If a > 0, the matrix transformation T = induced by the matrix A = is called
y y 0 1
an x -expansion of R2 if a > 1, and an x -compression if 0 < a< 1. The
reason for the names is
1 0
clear in the diagram below. Similarly, if b > 0 the matrix A = gives rise to y -expansions
0 b
and y -compressions.
y y y
x-compression x-expansion
1
3
x 2x 2x
y y y
x x x
0 0 a= 1 0 a= 3
2 2
Example 2.2.17
x x + ay
If a is a number, the matrix transformation T = induced by the matrix
y y
1 a
A= is called an x -shear of R2 (positive if a > 0 and negative if a < 0). Its effect is
0 1
illustrated below when a = 14 and a = − 41 .
y y y
Positive x-shear Negative x-shear
x x + 41 y x − 41 y
y y y
x x x
0 0 a= 1 0 a= − 14
4
y
We hasten to note that there are important geometric transformations
that are not matrix transformations. For example, if w is a fixed column in
x+2
Tw (x) =
y+1 Rn , define the transformation Tw : Rn → Rn by
x=
x
Tw (x) = x + w for all x in Rn
y
x
0 2
Then Tw is called translation by w. In particular, if w = in R2 , the
1
Figure 2.2.6
62 Matrix Algebra
x
effect of Tw on is to translate it two units to the right and one unit
y
up (see Figure 2.2.6).
The translation Tw is not a matrix transformation unless w = 0. Indeed, if Tw were induced by a matrix
A, then Ax = Tw (x) = x + w would hold for every x in Rn . In particular, taking x = 0 gives w = A0 = 0.
Exercise 2.2.6 If x0 and x1 are solutions to the homo- e. If A = a1 a2 a3 in terms of its columns, and
geneous system of equations Ax = 0, use Theorem 2.2.2 if b = 3a1 − 2a2 , then the system Ax = b has a so-
to show that sx0 + tx1 is also a solution for any scalars s lution.
and t (called a linear combination of x0 and x1 ).
f. If A = a1 a2 a3 in terms of its columns,
1 2 and if the system Ax = b has a solution, then
Exercise 2.2.7 Assume that A −1 = 0 = A 0 . b = sa1 + ta2 for some s, t.
2 3
g. If A is m × n and m < n, then Ax = b has a solution
2
Show that x0 = −1 is a solution to Ax = b. Find a for every column b.
3 h. If Ax = b has a solution for some column b, then
two-parameter family of solutions to Ax = b. it has a solution for every column b.
Exercise 2.2.8 In each case write the system in the form i. If x1 and x2 are solutions to Ax = b, then x1 − x2
Ax = b, use the gaussian algorithm to solve the system, is a solution to Ax = 0.
and express the solution as a particular solution plus a
linear combination of basic solutions to the associated j. Let A = a1 a2 a3 in terms of its columns. If
homogeneous system Ax = 0. s
a3 = sa1 + ta2 , then Ax = 0, where x = t .
−1
a. x1 − 2x2 + x3 + 4x4 − x5 = 8
−2x1 + 4x2 + x3 − 2x4 − 4x5 = −1
Exercise 2.2.11 Let T : R2 → R2 be a transformation.
3x1 − 6x2 + 8x3 + 4x4 − 13x5 = 1
In each case show that T is induced by a matrix and find
8x1 − 16x2 + 7x3 + 12x4 − 6x5 = 11
the matrix.
b. x1 − 2x2 + x3 + 2x4 + 3x5 = −4 a. T is a reflection in the y axis.
−3x1 + 6x2 − 2x3 − 3x4 − 11x5 = 11
−2x1 + 4x2 − x3 + x4 − 8x5 = 7 b. T is a reflection in the line y = x.
−x1 + 2x2 + 3x4 − 5x5 = 3
c. T is a reflection in the line y = −x.
d. T is a clockwise rotation through π2 .
1
Exercise 2.2.9 Given vectors a1 = 0 , The projection 3 2
1
Exercise
2.2.12
P : R → R is defined
x x
1 0 x
by P y = for all y in R3 . Show that P is
a2 = 1 , and a3 = −1 , find a vector b that is y
z z
0 1 induced by a matrix and find the matrix.
not a linear combination of a1 , a2 , and a3 . Justify your
Exercise 2.2.13 Let T : R3 → R3 be a transformation.
answer. [Hint: Part (2) of Theorem 2.2.1.]
In each case show that T is induced by a matrix and find
Exercise 2.2.10 In each case either show that the state- the matrix.
ment is true, or give an example showing that it is false.
a. T is a reflection in the x − y plane.
3 1 0 b. T is a reflection in the y − z plane.
a. is a linear combination of and .
2 0 1
Exercise 2.2.14 Fix a > 0 in R, and define Ta : R4 → R4
b. If Ax has a zero entry, then A has a row of zeros. by Ta (x) = ax for all x in R4 . Show that T is induced by
a matrix and find the matrix. [T is called a dilation if
c. If Ax = 0 where x 6= 0, then A = 0.
a > 1 and a contraction if a < 1.]
d. Every linear combination of vectors in Rn can be Exercise 2.2.15 Let A be m × n and let x be in Rn . If A
written in the form Ax. has a row of zeros, show that Ax has a zero entry.
64 Matrix Algebra
Exercise 2.2.16 If a vector b is a linear combination of Exercise 2.2.19 Suppose x1 is a solution to the system
the columns of A, show that the system Ax = b is consis- Ax = b. If x0 is any nontrivial solution to the associ-
tent (that is, it has at least one solution.) ated homogeneous system Ax = 0, show that x1 + tx0 , t a
scalar, is an infinite one parameter family of solutions to
Exercise 2.2.17 If a system Ax = b is inconsistent (no Ax = b. [Hint: Example 2.1.7 Section 2.1.]
solution), show that b is not a linear combination of the
columns of A. Exercise 2.2.20 Let A and B be matrices of the same
size. If x is a solution to both the system Ax = 0 and the
Exercise 2.2.18 Let x1 and x2 be solutions to the homo- system Bx = 0, show that x is a solution to the system
geneous system Ax = 0. (A + B)x = 0.
Exercise 2.2.21 If A is m × n and Ax = 0 for every x in
a. Show that x1 + x2 is a solution to Ax = 0. Rn , show that A = 0 is the zero matrix. [Hint: Consider
Ae j where e j is the jth column of In ; that is, e j is the
b. Show that tx1 is a solution to Ax = 0 for any scalar vector in Rn with 1 as entry j and every other entry 0.]
t. Exercise 2.2.22 Prove part (1) of Theorem 2.2.2.
Exercise 2.2.23 Prove part (2) of Theorem 2.2.2.
x1
x2
x = .. , Definition 2.5 reads
.
xn
Ax = x1 a1 + x2 a2 + · · · + xn an (2.5)
This was motivated as a way of describing systems of linear equations with coefficient matrix A. Indeed
every such system has the form Ax = b where b is the column of constants.
In this section we extend this matrix-vector multiplication to a way of multiplying matrices in gen-
eral, and then investigate matrix algebra for its own sake. While it shares several properties of ordinary
arithmetic, it will soon become clear that matrix arithmetic is different in a number of ways.
Matrix multiplication is closely related to composition of transformations.
2.3. Matrix Multiplication 65
6 When reading the notation S ◦ T , we read S first and then T even though the action is “first T then S ”. This annoying state
of affairs results because we write T (x) for the effect of the transformation T on x, with T on the left. If we wrote this instead
as (x)T , the confusion would not occur. However the notation T (x) is well established.
66 Matrix Algebra
Thus the product matrix AB is given in terms of its columns Ab1 , Ab2 , . . . , Abn : Column j of AB is the
matrix-vector product Ab j of A and the corresponding column b j of B. Note that each such product Ab j
makes sense by Definition 2.5 because A is m × n and each b j is in Rn (since B has n rows). Note also that
if B is a column matrix, this definition reduces to Definition 2.5 for matrix-vector multiplication.
Given matrices A and B, Definition 2.9 and the above computation give
A(Bx) = Ab1 Ab2 · · · Abn x = (AB)x
Theorem 2.3.1
Let A be an m × n matrix and let B be an n × k matrix. Then the product matrix AB is m × k and
satisfies
A(Bx) = (AB)x for all x in Rk
Here is an example of how to compute the product AB of two matrices using Definition 2.9.
Example 2.3.1
2 3 5 8 9
Compute AB if A = 1 4 7 and B = 7 2 .
0 1 8 6 1
8 9
Solution. The columns of B are b1 = 7 and b2 = 2 , so Definition 2.5 gives
6 1
2 3 5 8 67 2 3 5 9 29
Ab1 = 1 4 7 7 = 78 and Ab2 = 1 4 7 2 = 24
0 1 8 6 55 0 1 8 1 10
67 29
Hence Definition 2.9 above gives AB = Ab1 Ab2 = 78 24 .
55 10
2.3. Matrix Multiplication 67
Example 2.3.2
If A is m × n and B is n × k, Theorem 2.3.1 gives a simple formula for the composite of the matrix
transformations TA and TB :
TA ◦ TB = TAB
While Definition 2.9 is important, there is another way to compute the matrix product AB that gives
a way to calculate each individual entry. In Section 2.2 we defined the dot product of two n-tuples to be
the sum of the products of corresponding entries. We went on to show (Theorem 2.2.5) that if A is an
m × n matrix and x is an n-vector, then entry j of the product Ax is the dot product of row j of A with x.
This observation was called the “dot product rule” for matrix-vector multiplication, and the next theorem
shows that it extends to matrix multiplication in general.
Proof. Write B = b1 b2 · · · bn in terms of its columns. Then Ab j is column j of AB for each j.
Hence the (i, j)-entry of AB is entry i of Ab j , which is the dot product of row i of A with b j . This proves
the theorem.
Thus to compute the (i, j)-entry of AB, proceed as follows (see the diagram):
Go across row i of A, and down column j of B, multiply corresponding entries, and add the results.
A B AB
=
Note that this requires that the rows of A must be the same length as the columns of B. The following rule
is useful for remembering this and for deciding the size of the product matrix AB.
68 Matrix Algebra
Compatibility Rule
Let A and B denote matrices. If A is m × n and B is n′ × k, the product AB
A B can be formed if and only if n = n′ . In this case the size of the product
matrix AB is m × k, and we say that AB is defined, or that A and B are
m × n n′ × k
compatible for multiplication.
The diagram provides a useful mnemonic for remembering this. We adopt the following convention:
Convention
Whenever a product of matrices is written, it is tacitly assumed that the sizes of the factors are such that
the product is defined.
To illustrate the dot product rule, we recompute the matrix product in Example 2.3.1.
Example 2.3.3
2 3 5 8 9
Compute AB if A = 1 4 7 and B = 7 2 .
0 1 8 6 1
Solution. Here A is 3 × 3 and B is 3 × 2, so the product matrix AB is defined and will be of size
3 × 2. Theorem 2.3.2 gives each entry of AB as the dot product of the corresponding row of A with
the corresponding column of B j that is,
2 3 5 8 9 2·8+3·7+5·6 2·9+3·2+5·1 67 29
AB = 1 4 7 7 2 = 1 · 8 + 4 · 7 + 7 · 6 1 · 9 + 4 · 2 + 7 · 1 = 78 24
0 1 8 6 1 0·8+1·7+8·6 0·9+1·2+8·1 55 10
Example 2.3.4
Compute the (1, 3)- and (2, 4)-entries of AB where
2 1 6 0
3 −1 2
A= and B = 0 2 3 4 .
0 1 4
−1 0 5 8
Solution. The (1, 3)-entry of AB is the dot product of row 1 of A and column 3 of B (highlighted
in the following display), computed by multiplying corresponding entries and adding the results.
2 1 6 0
3 −1 2
0 2 3 4 (1, 3)-entry = 3 · 6 + (−1) · 3 + 2 · 5 = 25
0 1 4
−1 0 5 8
2.3. Matrix Multiplication 69
Example 2.3.5
5
If A = 1 3 2 and B = 6 , compute A2 , AB, BA, and B2 when they are defined.7
4
Solution. Here, A is a 1 × 3 matrix and B is a 3 × 1 matrix, so A2 and B2 are not defined. However,
the compatibility rule reads
A B B A
and
1×3 3×1 3×1 1×3
so both AB and BA can be formed and these are 1 × 1 and 3 × 3 matrices, respectively.
5
AB = 1 3 2 6 = 1 · 5 + 3 · 6 + 2 · 4 = 31
4
5 5·1 5·3 5·2 5 15 10
BA = 6 1 3 2 = 6 · 1 6 · 3 6 · 2 = 6 18 12
4 4·1 4·3 4·2 4 12 8
Unlike numerical multiplication, matrix products AB and BA need not be equal. In fact they need not
even be the same size, as Example 2.3.5 shows. It turns out to be rare that AB = BA (although it is by no
means impossible), and A and B are said to commute when this happens.
Example 2.3.6
6 9 1 2
Let A = and B = . Compute A2 , AB, BA.
−4 −6 −1 0
7 As for numbers, we write A2 = A · A, A3 = A · A · A, etc. Note that A2 is defined if and only if A is of size n × n for some n.
70 Matrix Algebra
6 9 6 9 0 0
Solution. A2 = = , so A2 = 0 can occur even if A 6= 0. Next,
−4 −6 −4 −6 0 0
6 9 1 2 −3 12
AB = =
−4 −6 −1 0 2 −8
1 2 6 9 −2 −3
BA = =
−1 0 −4 −6 −6 −9
Example 2.3.7
If A is any matrix, then IA = A and AI = A, and where I denotes an identity matrix of a size so that
the multiplications are defined.
If e j denotes column j of I, then Ae j = a j for each j by Example 2.2.12. Hence Definition 2.9
gives:
AI = A e1 e2 · · · en = Ae1 Ae2 · · · Aen = a1 a2 · · · an = A
The following theorem collects several results about matrix multiplication that are used everywhere in
linear algebra.
Theorem 2.3.3
Assume that a is any scalar, and that A, B, and C are matrices of sizes such that the indicated
matrix products are defined. Then:
Proof. Condition (1) is Example 2.3.7; we prove (2), (4), and (6) and leave (3) and (5) as exercises.
1. If C = c1 c2 · · · ck in terms of its columns, then BC = Bc1 Bc2 · · · Bck by Defini-
2.3. Matrix Multiplication 71
tion 2.9, so
A(BC) = A(Bc1 ) A(Bc2 ) · · · A(Bck ) Definition 2.9
= (AB)c1 (AB)c2 · · · (AB)ck ) Theorem 2.3.1
4. We know
(Theorem 2.2.2)
that (B +C)x = Bx +Cx holds for every column x. If we write
A = a1 a2 · · · an in terms of its columns, we get
(B +C)A = (B +C)a1 (B +C)a2 · · · (B +C)an Definition 2.9
= Ba1 +Ca1 Ba2 +Ca2 · · · Ban +Can Theorem 2.2.2
= Ba1 Ba2 · · · Ban + Ca1 Ca2 · · · Can Adding Columns
6. As in Section 2.1, write A = [ai j ] and B = [bi j ], so that AT = [a′i j ] and BT = [b′i j ] where a′i j = a ji and
b′ji = bi j for all i and j. If ci j denotes the (i, j)-entry of BT AT , then ci j is the dot product of row i of
BT with column j of AT . Hence
But this is the dot product of row j of A with column i of B; that is, the ( j, i)-entry of AB; that is,
the (i, j)-entry of (AB)T . This proves (6).
Property 2 in Theorem 2.3.3 is called the associative law of matrix multiplication. It asserts that the
equation A(BC) = (AB)C holds for all matrices (if the products are defined). Hence this product is the
same no matter how it is formed, and so is written simply as ABC. This extends: The product ABCD of
four matrices can be formed several ways—for example, (AB)(CD), [A(BC)]D, and A[B(CD)]—but the
associative law implies that they are all equal and so are written as ABCD. A similar remark applies in
general: Matrix products can be written unambiguously with no parentheses.
However, a note of caution about matrix multiplication must be taken: The fact that AB and BA need
not be equal means that the order of the factors is important in a product of matrices. For example ABCD
and ADCB may not be equal.
Warning
If the order of the factors in a product of matrices is changed, the product matrix may change
(or may not be defined). Ignoring this warning is a source of many errors by students of linear
algebra!
Properties 3 and 4 in Theorem 2.3.3 are called distributive laws. They assert that A(B +C) = AB + AC
and (B +C)A = BA +CA hold whenever the sums and products are defined. These rules extend to more
72 Matrix Algebra
than two terms and, together with Property 5, ensure that many manipulations familiar from ordinary
algebra extend to matrices. For example
Note again that the warning is in effect: For example A(B −C) need not equal AB −CA. These rules make
possible a lot of simplification of matrix expressions.
Example 2.3.8
Simplify the expression A(BC −CD) + A(C − B)D − AB(C − D).
Solution.
A(BC −CD) + A(C − B)D − AB(C − D) = A(BC) − A(CD) + (AC − AB)D − (AB)C + (AB)D
= ABC − ACD + ACD − ABD − ABC + ABD
=0
Example 2.3.9 and Example 2.3.10 below show how we can use the properties in Theorem 2.3.2 to
deduce other facts about matrix multiplication. Matrices A and B are said to commute if AB = BA.
Example 2.3.9
Suppose that A, B, and C are n × n matrices and that both A and B commute with C; that is,
AC = CA and BC = CB. Show that AB commutes with C.
Solution. Showing that AB commutes with C means verifying that (AB)C = C(AB). The
computation uses the associative law several times, as well as the given facts that AC = CA and
BC = CB.
(AB)C = A(BC) = A(CB) = (AC)B = (CA)B = C(AB)
Example 2.3.10
Show that AB = BA if and only if (A − B)(A + B) = A2 − B2 .
Hence if AB = BA, then (A − B)(A + B) = A2 − B2 follows. Conversely, if this last equation holds,
then equation (2.6) becomes
A2 − B2 = A2 + AB − BA − B2
This gives 0 = AB − BA, and AB = BA follows.
2.3. Matrix Multiplication 73
In Section 2.2 we saw (in Theorem 2.2.1) that every system of linear equations has the form
Ax = b
where A is the coefficient matrix, x is the column of variables, and b is the constant matrix. Thus the
system of linear equations becomes a single matrix equation. Matrix multiplication can yield information
about such a system.
Example 2.3.11
Consider a system Ax = b of linear equations where A is an m × n matrix. Assume that a matrix C
exists such that CA = In . If the system Ax = b has a solution, show that this solution must be Cb.
Give a condition guaranteeing that Cb is in fact a solution.
Solution. Suppose that x is any solution to the system, so that Ax = b. Multiply both sides of this
matrix equation by C to obtain, successively,
This shows that if the system has a solution x, then that solution must be x = Cb, as required. But
it does not guarantee that the system has a solution. However, if we write x1 = Cb, then
The ideas in Example 2.3.11 lead to important information about matrices; this will be pursued in the
next section.
Block Multiplication
where the blocks have been labelled as indicated. This is a natural way to partition A into blocks in view of
the blocks I2 and 023 that occur. This notation is particularly useful when we are multiplying the matrices
A and B because the product AB can be computed in block form as follows:
4 −2
I 0 X IX + 0Y X 5 6
AB = = = = 30
P Q Y PX + QY PX + QY 8
8 27
This is easily checked to be the product AB, computed in the conventional manner.
In other words, we can compute the product AB by ordinary matrix multiplication, using blocks as
entries. The only requirement is that the blocks be compatible. That is, the sizes of the blocks must be
such that all (matrix) products of blocks that occur make sense. This means that the number of columns
in each block of A must equal the number of rows in the corresponding block of B.
Theorem 2.3.5
B X B1 X1
Suppose matrices A = and A1 = are partitioned as shown where B and B1
0 C 0 C1
are square matrices of the same size, and C and C1 are also square of the same size. These are
compatible partitionings and block multiplication gives
B X B1 X1 BB1 BX1 + XC1
AA1 = =
0 C 0 C1 0 CC1
2.3. Matrix Multiplication 75
Example 2.3.12
I X
Obtain a formula for Ak where A = is square and I is an identity matrix.
0 0
2
I X I X I IX + X 0 I X
Solution. We have = A2 = = = A. Hence
0 0 0 0 0 02 0 0
A3 = AA2 = AA = A2 = A. Continuing in this way, we see that Ak = A for every k ≥ 1.
Block multiplication has theoretical uses as we shall see. However, it is also useful in computing
products of matrices in a computer with limited memory capacity. The matrices are partitioned into blocks
in such a way that each product of blocks can be handled. Then the blocks are stored in auxiliary memory
and their products are computed one by one.
Directed Graphs
The study of directed graphs illustrates how matrix multiplication arises in ways other than the study of
linear equations or matrix transformations.
A directed graph consists of a set of points (called vertices) connected by arrows (called edges). For
example, the vertices could represent cities and the edges available flights. If the graph has n vertices
v1 , v2 , . . . , vn , the adjacency matrix A = ai j is the n × n matrix whose (i, j)-entry ai j is 1 if there is an
edge from v j to vi (note the order),
and zero otherwise. For example, the adjacency matrix of the directed
1 1 0
graph shown is A = 1 0 1 .
1 0 0
A path of length r (or an r-path) from vertex j to vertex i is a sequence
v1 v2 of r edges leading from v j to vi . Thus v1 → v2 → v1 → v1 → v3 is a 4-path
from v1 to v3 in the given graph. The edges are just the paths of length 1,
so the (i, j)-entry ai j of the adjacency matrix A is the number of 1-paths
v3 from v j to vi . This observation has an important extension:
Theorem 2.3.6
If A is the adjacency matrix of a directed graph with n vertices, then the (i, j)-entry of Ar is the
number of r-paths v j → vi .
Hence, since the (2, 1)-entry of A2 is 2, there are two 2-paths v1 → v2 (in fact they are v1 → v1 → v2 and
v1 → v3 → v2 ). Similarly, the (2, 3)-entry of A2 is zero, so there are no 2-paths v3 → v2 , as the reader
76 Matrix Algebra
can verify. The fact that no entry of A3 is zero shows that it is possible to go from any vertex to any other
vertex in exactly three steps.
To see why Theorem 2.3.6 is true, observe that it asserts that
holds for each r ≥ 1. We proceed by induction on r (see Appendix C). The case r = 1 is the definition of
the adjacency matrix. So assume inductively that (2.7) is true for some r ≥ 1; we must prove that (2.7)
also holds for r + 1. But every (r + 1)-path
v j → vi is the result of an r-path v j → vk for some k, followed
by a 1-path vk → vi . Writing A = ai j and A = bi j , there are bk j paths of the former type (by induction)
r
and aik of the latter type, and so there are aik bk j such paths in all. Summing over k, this shows that there
are
ai1 b1 j + ai2 b2 j + · · · + ain bn j (r + 1)-paths v j → vi
T
But this sum is the dot product of the ith row ai1 ai2 · · · ain of A with the jth column b1 j b2 j · · · bn j
of Ar . As such, it is the (i, j)-entry of the matrix product Ar A = Ar+1 . This shows that (2.7) holds for
r + 1, as required.
(A + B)2 = A2 + 2AB + B2
a. tr (A + B) = tr A + tr B.
b. tr (kA) = k tr (A) for any number k. b. AB = BA if and only if
c. tr (AT ) = tr (A). d. tr (AB) = tr (BA).
(A + B)(A − B) = (A − B)(A + B)
e. tr (AAT ) is the sum of the squares of all entries of
A.
Exercise 2.3.35 In Theorem 2.3.3, prove
Three basic operations on matrices, addition, multiplication, and subtraction, are analogs for matrices of
the same operations for numbers. In this section we introduce the matrix analog of numerical division.
To begin, consider how a numerical equation ax = b is solved when a and b are known numbers. If
a = 0, there is no solution (unless b = 0). But if a 6= 0, we can multiply both sides by the inverse a−1 = a1
to obtain the solution x = a−1 b. Of course multiplying by a−1 is just dividing by a, and the property of
a−1 that makes this work is that a−1 a = 1. Moreover, we saw in Section 2.2 that the role that 1 plays in
arithmetic is played in matrix algebra by the identity matrix I. This suggests the following definition.
AB = I and BA = I
Example 2.4.1
−1 1 0 1
Show that B = is an inverse of A = .
1 0 1 1
Example 2.4.2
0 0
Show that A = has no inverse.
1 3
a b
Solution. Let B = denote an arbitrary 2 × 2 matrix. Then
c d
0 0 a b 0 0
AB = =
1 3 c d a + 3c b + 3d
8 Only square matrices have inverses. Even though it is plausible that nonsquare matrices A and B could exist such that
AB = Im and BA = In , where A is m × n and B is n × m, we claim that this forces n = m. Indeed, if m < n there exists a nonzero
column x such that Ax = 0 (by Theorem 1.3.1), so x = In x = (BA)x = B(Ax) = B(0) = 0, a contradiction. Hence m ≥ n.
Similarly, the condition AB = Im implies that n ≥ m. Hence m = n so A is square.
2.4. Matrix Inverses 81
The argument in Example 2.4.2 shows that no zero matrix has an inverse. But Example 2.4.2 also
shows that, unlike arithmetic, it is possible for a nonzero matrix to have no inverse. However, if a matrix
does have an inverse, it has only one.
Theorem 2.4.1
If B and C are both inverses of A, then B = C.
If A is an invertible matrix, the (unique) inverse of A is denoted A−1 . Hence A−1 (when it exists) is a
square matrix of the same size as A with the property that
AA−1 = I and A−1 A = I
These equations characterize A−1 in the following sense:
Inverse Criterion: If somehow a matrix B can be found such that AB = I and BA = I , then A
is invertible and B is the inverse of A; in symbols, B = A−1 .
This is a way to verify that the inverse of a matrix exists. Example 2.4.3 and Example 2.4.4 offer illustra-
tions.
Example 2.4.3
0 −1
If A = , show that A3 = I and so find A−1 .
1 −1
2 0 −1 0 −1 −1 1
Solution. We have A = = , and so
1 −1 1 −1 −1 0
3 2 −1 1 0 −1 1 0
A =A A= = =I
−1 0 1 −1 0 1
a b
The next example presents a useful formula for the inverse of a 2 × 2 matrix A = when it
c d
exists. To state it, we define the determinant det A and the adjugate adj A of the matrix A as follows:
a b a b d −b
det = ad − bc, and adj =
c d c d −c a
82 Matrix Algebra
Example 2.4.4
a b
If A = , show that A has an inverse if and only if det A 6= 0, and in this case
c d
1
A−1 = det A adj A
d −b
Solution. For convenience, write e = det A = ad − bc and B = adj A = . Then
−c a
AB = eI = BA as the reader can verify. So if e 6= 0, scalar multiplication by 1e gives
A( 1e B) = I = ( 1e B)A
Hence A is invertible and A−1 = 1e B. Thus it remains only to show that if A−1 exists, then e 6= 0.
We prove this by showing that assuming e = 0 leads to a contradiction. In fact, if e = 0, then
AB = eI = 0, so left multiplication by A−1 gives A−1 AB = A−1 0; that is, IB = 0, so B = 0. But this
implies that a, b, c, and d are all zero, so A = 0, contrary to the assumption that A−1 exists.
2 4
As an illustration, if A = then det A = 2 · 8 − 4 · (−3) = 28 6= 0. Hence A is invertible and
−3 8
−1 1 1 8 −4
A = det A adj A = 28 , as the reader is invited to verify.
3 2
The determinant and adjugate will be defined in Chapter 3 for any square matrix, and the conclusions
in Example 2.4.4 will be proved in full generality.
Matrix inverses can be used to solve certain systems of linear equations. Recall that a system of linear
equations can be written as a single matrix equation
Ax = b
where A and b are known and x is to be determined. If A is invertible, we multiply each side of the equation
on the left by A−1 to get
A−1 Ax = A−1 b
Ix = A−1 b
x = A−1 b
This gives the solution to the system of equations (the reader should verify that x = A−1 b really does
satisfy Ax = b). Furthermore, the argument shows that if x is any solution, then necessarily x = A−1 b, so
the solution is unique. Of course the technique works only when the coefficient matrix A has an inverse.
This proves Theorem 2.4.2.
2.4. Matrix Inverses 83
Theorem 2.4.2
Suppose a system of n equations in n variables is written in matrix form as
Ax = b
If the n × n coefficient matrix A is invertible, the system has the unique solution
x = A−1 b
Example 2.4.5
5x1 − 3x2 = −4
Use Example 2.4.4 to solve the system .
7x1 + 4x2 = 8
5 −3 x1 −4
Solution. In matrix form this is Ax = b where A = ,x= , and b = . Then
7 4 x2 8
1 4 3
det A = 5 · 4 − (−3) · 7 = 41, so A is invertible and A−1 = 41 by Example 2.4.4. Thus
−7 5
Theorem 2.4.2 gives
−1 1 4 3 −4 1 8
x = A b = 41 = 41
−7 5 8 68
8 68
so the solution is x1 = 41 and x2 = 41 .
An Inversion Method
If a matrix A is n × n and invertible, it is desirable to have an efficient technique for finding the inverse.
The following procedure will be justified in Section 2.5.
Example 2.4.6
Use the inversion algorithm to find the inverse of the matrix
2 7 1
A = 1 4 −1
1 3 0
Next subtract 2 times row 1 from row 2, and subtract row 1 from row 3.
1 4 −1 0 1 0
0 −1 3 1 −2 0
0 −1 1 0 −1 1
Given any n × n matrix A, Theorem 1.2.1 shows that A can be carried by elementary row operations to
a matrix R in reduced row-echelon form. If R = I, the matrix A is invertible (this will be proved in the next
section), so the algorithm produces A−1 . If R 6= I, then R has a row of zeros (it is square), so no system of
linear equations Ax = b can have a unique solution. But then A is not invertible by Theorem 2.4.2. Hence,
the algorithm is effective in the sense conveyed in Theorem 2.4.3.
2.4. Matrix Inverses 85
Theorem 2.4.3
If A is an n × n matrix, either A can be reduced to I by elementary row operations or it cannot. In
the first case, the algorithm produces A−1 ; in the second case, A−1 does not exist.
Properties of Inverses
1. If AB = AC, then B = C.
2. If BA = CA, then B = C.
Solution. Given the equation AB = AC, left multiply both sides by A−1 to obtain A−1 AB = A−1 AC.
Thus IB = IC, that is B = C. This proves (1) and the proof of (2) is left to the reader.
Properties (1) and (2) in Example 2.4.7 are described by saying that an invertible matrix can be “left
cancelled” and “right cancelled”, respectively. Note however that “mixed” cancellation does not hold in
general: If A is invertible and AB = CA, then B and C may not be equal, even if both are 2 × 2. Here is a
specific example:
1 1 0 0 1 1
A= , B= , C=
0 1 1 2 1 1
Sometimes the inverse of a matrix is given by a formula. Example 2.4.4 is one illustration; Example 2.4.8
and Example 2.4.9 provide two more. The idea is the Inverse Criterion: If a matrix B can be found such
that AB = I = BA, then A is invertible and A−1 = B.
Example 2.4.8
If A is an invertible matrix, show that the transpose AT is also invertible. Show further that the
inverse of AT is just the transpose of A−1 ; in symbols, (AT )−1 = (A−1 )T .
Solution. A−1 exists (by assumption). Its transpose (A−1 )T is the candidate proposed for the
inverse of AT . Using the inverse criterion, we test it as follows:
Hence (A−1 )T is indeed the inverse of AT ; that is, (AT )−1 = (A−1 )T .
86 Matrix Algebra
Example 2.4.9
If A and B are invertible n × n matrices, show that their product AB is also invertible and
(AB)−1 = B−1 A−1 .
Solution. We are given a candidate for the inverse of AB, namely B−1 A−1 . We test it as follows:
Hence B−1 A−1 is the inverse of AB; in symbols, (AB)−1 = B−1 A−1 .
Theorem 2.4.4
All the following matrices are square matrices of the same size.
1. I is invertible and I −1 = I .
Proof.
1. This is an immediate consequence of the fact that I 2 = I.
2. The equations AA−1 = I = A−1 A show that A is the inverse of A−1 ; in symbols, (A−1 )−1 = A.
The reversal of the order of the inverses in properties 3 and 4 of Theorem 2.4.4 is a consequence of
the fact that matrix multiplication is not commutative. Another manifestation of this comes when matrix
equations are dealt with. If a matrix equation B = C is given, it can be left-multiplied by a matrix A to yield
AB = AC. Similarly, right-multiplication gives BA = CA. However, we cannot mix the two: If B = C, it
1 1 0 0
need not be the case that AB = CA even if A is invertible, for example, A = ,B= = C.
0 1 1 0
Part 7 of Theorem 2.4.4 together with the fact that (AT )T = A gives
Corollary 2.4.1
A square matrix A is invertible if and only if AT is invertible.
Example 2.4.10
2 1
Find A if (AT − 2I)−1 = .
−1 0
The following important theorem collects a number of conditions all equivalent9 to invertibility. It will
be referred to frequently below.
1. A is invertible.
2. The homogeneous system Ax = 0 has only the trivial solution x = 0.
3. A can be carried to the identity matrix In by elementary row operations.
9 If
p and q are statements, we say that p implies q (written p ⇒ q) if q is true whenever p is true. The statements are called
equivalent if both p ⇒ q and q ⇒ p (written p ⇔ q, spoken “p if and only if q”). See Appendix B.
88 Matrix Algebra
4. The system Ax = b has at least one solution x for every choice of column b.
Proof. We show that each of these conditions implies the next, and that (5) implies (1).
(1) ⇒ (2). If A−1 exists, then Ax = 0 gives x = In x = A−1 Ax = A−1 0 = 0.
(2) ⇒ (3). Assume that (2) is true. Certainly A → R by row operations where R is a reduced, row-
echelon matrix. It suffices to show that R = In . Suppose that this is not the case. Then R has a row
of
zeros (being square).
Now consider the augmented
matrix A 0 of the system Ax = 0. Then
A 0 → R 0 is the reduced form, and R 0 also has a row of zeros. Since R is square there
must be at least one nonleading variable, and hence at least one parameter. Hence the system Ax = 0 has
infinitely many solutions, contrary to (2). So R = In after all.
(3) ⇒ (4). Consider the augmented matrix A b of the system Ax =b. Using (3), let A → In by a
sequence of row operations. Then these same operations carry A b → In c for some column c.
Hence the system Ax = b has a solution (in fact unique) by gaussian elimination. This proves (4).
(4) ⇒ (5). Write In = e1 e2 · · · en where e1 , e2 , . . . , en are the columns of In . For each
j = 1, 2, . . . , n, the system Ax = e j has a solution c j by (4), so Ac j = e j . Now let C = c1 c2 · · · cn
be the n × n matrix with these matrices c j as its columns. Then Definition 2.9 gives (5):
AC = A c1 c2 · · · cn = Ac1 Ac2 · · · Acn = e1 e2 · · · en = In
(5) ⇒ (1). Assume that (5) is true so that AC = In for some matrix C. Then Cx = 0 implies x = 0 (because
x = In x = ACx = A0 = 0). Thus condition (2) holds for the matrix C rather than A. Hence the argument
above that (2) ⇒ (3) ⇒ (4) ⇒ (5) (with A replaced by C) shows that a matrix C′ exists such that CC′ = In .
But then
A = AIn = A(CC′ ) = (AC)C′ = InC′ = C′
Thus CA = CC′ = In which, together with AC = In , shows that C is the inverse of A. This proves (1).
The proof of (5) ⇒ (1) in Theorem 2.4.5 shows that if AC = I for square matrices, then necessarily
CA = I, and hence that C and A are inverses of each other. We record this important fact for reference.
Corollary 2.4.1
If A and C are square matrices such that AC = I , then also CA = I . In particular, both A and C are
invertible, C = A−1 , and A = C−1 .
1. If AC = I then C = A−1 .
2. If CA = I then C = A−1 .
Observe that Corollary 2.4.1 is false if A and C are not square matrices. For example, we have
−1 1 −1 1
1 2 1 1 2 1
1 −1 = I2 but 1 −1 6= I3
1 1 1 1 1 1
0 1 0 1
2.4. Matrix Inverses 89
Corollary 2.4.2
An n × n matrix A is invertible if and only if rank A = n.
Example 2.4.11
A X A 0
Let P = and Q = be block matrices where A is m × m and B is n × n (possibly
0 B Y B
m 6= n).
a. Show that P is invertible if and only if A and B are both invertible. In this case, show that
−1
−1 A −A−1 X B−1
P =
0 B−1
b. Show that Q is invertible if and only if A and B are both invertible. In this case, show that
−1 A−1 0
Q =
−B−1YA−1 B−1
AC + XW = Im , BW = 0, and BD = In
Let T = TA : Rn → Rn denote the matrix transformation induced by the n × n matrix A. Since A is square,
it may very well be invertible, and this leads to the question:
To answer this, let T ′ = TA−1 : Rn → Rn denote the transformation induced by A−1 . Then
T ′ [T (x)] = A−1 [Ax] = Ix = x
for all x in Rn (2.8)
T [T ′ (x)] = A A−1 x = Ix = x
The first of these equations asserts that, if T carries x to a vector T (x), then T ′ carries T (x) right back to
x; that is T ′ “reverses” the action of T . Similarly T “reverses” the action of T ′ . Conditions (2.8) can be
stated compactly in terms of composition:
T ′ ◦ T = 1 Rn and T ◦ T ′ = 1 Rn (2.9)
When these conditions hold, we say that the matrix transformation T ′ is an inverse of T , and we have
shown that if the matrix A of T is invertible, then T has an inverse (induced by A−1 ).
The converse is also true: If T has an inverse, then its matrix A must be invertible. Indeed, suppose
S : Rn → Rn is any inverse of T , so that S ◦ T = 1Rn and T ◦ S = 1Rn . It can be shown that S is also a matrix
transformation. If B is the matrix of S, we have
BAx = S [T (x)] = (S ◦ T )(x) = 1Rn (x) = x = In x for all x in Rn
It follows by Theorem 2.2.6 that BA = In , and a similar argument shows that AB = In . Hence A is invertible
with A−1 = B. Furthermore, the inverse transformation S has matrix A−1 , so S = T ′ using the earlier
notation. This proves the following important theorem.
Theorem 2.4.6
Let T : Rn → Rn denote the matrix transformation induced by an n × n matrix A. Then
In this case, T has exactly one inverse (which we denote as T −1 ), and T −1 : Rn → Rn is the
transformation induced by the matrix A−1 . In other words
Here is an example.
Example 2.4.12
0 1
Find the inverse of A = by viewing it as a linear
1 0
y
Q1
x
y
= transformation R2 → R2 .
y
x
y=x x 0 1 x y
Solution. If x = the vector Ax = =
y 1 0 y x
x is the result of reflecting x in the line y = x (see the diagram).
y Hence, if Q1 : R2 → R2 denotes reflection in the line y = x, then
A is the matrix of Q1 . Now observe that Q1 reverses itself because
0 x
reflecting a vector x twice results in x. Consequently Q−1 1 = Q1 .
−1 −1 −1
Since A is the matrix of Q1 and A is the matrix of Q, it follows that A = A. Of course this
conclusion is clear by simply observing directly that A2 = I, but the geometric method can often
work where these other methods may be less straightforward.
1 2 0 0 0 Exercise 2.4.6 Find A when:
1 0 7 5
0 0 1 3 0 0
1 3 6
k.
1 −1 5
l.
0 0 1 5 0
1 −1 3 0 1 −1
2
0 0 0 1 7 a. A−1 = 2 1 1 b. A−1 = 1 2 1
1 −1 5 1
0 0 0 0 1 0 2 −2 1 0 1
Exercise 2.4.24 In each case assume that A is a square Exercise 2.4.29 Prove property 6 of Theorem 2.4.4:
matrix that satisfies the given condition. Show that A is If A is invertible and a =
6 0, then aA is invertible and
invertible and find a formula for A−1 in terms of A. (aA)−1 = 1a A−1
Exercise 2.4.30 Let A, B, and C denote n × n matrices.
a. A3 − 3A + 2I = 0. Using only Theorem 2.4.4, show that:
a. If A and AB are invertible, show that B is invertible Exercise 2.4.31 Let A and B denote invertible n × n ma-
using only (2) and (3) of Theorem 2.4.4. trices.
b. If AB is invertible, show that both A and B are in- a. If A−1 = B−1 , does it mean that A = B? Explain.
vertible using Theorem 2.4.5. b. Show that A = B if and only if A−1 B = I.
Exercise 2.4.26 In each case find the inverse of the ma- Exercise 2.4.32 Let A, B, and C be n × n matrices, with
trix A using Example 2.4.11. A and B invertible. Show that
a. If A commutes with C, then A−1 commutes with
−1 1 2 3 1 0
a. A = 0 2 −1 b. A = 5 2 0 C.
0 1 −1 1 3 −1 b. If A commutes with B, then A−1 commutes with
B−1 .
3 4 0 0
2 3 0 0
c. A =
1 −1 1 3
Exercise 2.4.33 Let A and B be square matrices of the
3 1 1 4 same size.
2 1 5 2 a. Show that (AB)2 = A2 B2 if AB = BA.
1 1 −1 0
d. A =
0 0
b. If A and B are invertible and (AB)2 = A2 B2 , show
1 −1
0 0 1 −2 that AB = BA.
1 0 1 1
c. If A = and B = , show that
Exercise 2.4.27 If A and B are invertible symmetric ma- 0 0 0 0
trices such that AB = BA, show that A−1 , AB, AB−1 , and (AB)2 = A2 B2 but AB 6= BA.
A−1 B−1 are also invertible and symmetric.
Exercise 2.4.34 Let A and B be n × n matrices for which
Exercise 2.4.28 Let A be an n × n matrix and let I be the
AB is invertible. Show that A and B are both invertible.
n × n identity matrix.
1 3 −1
Exercise 2.4.35 Consider A = 2 1 5 ,
a. If A2 = 0, verify that (I − A)−1 = I + A. 1 −7 13
b. If A3 = 0, verify that (I − A)−1 = I + A + A2. 1 1 2
B = 3 0 −3 .
1 2 −1 −2 5 17
c. Find the inverse of 0 1 3 .
0 0 1 a. Show that A is not invertible by finding a nonzero
1 × 3 matrix Y such that YA = 0.
d. If An = 0, find the formula for (I − A)−1 . [Hint: Row 3 of A equals 2(row 2) − 3(row 1).]
2.5. Elementary Matrices 95
a. If J is the 4 × 4 matrix with every entry 1, show a. Show that A−1 + B−1 = A−1 (A + B)B−1 .
that I − 12 J is self-inverse and symmetric.
b. If A + B is also invertible, show that A−1 + B−1 is
b. If X is n × m and satisfies X T X = Im , show that invertible and find a formula for (A−1 + B−1)−1 .
In − 2X X T is self-inverse and symmetric.
Exercise 2.4.42 Let A and B be n × n matrices, and let I
Exercise 2.4.39 An n × n matrix P is called an idempo- be the n × n identity matrix.
tent if P2 = P. Show that:
a. Verify that A(I + BA) = (I + AB)A and that
a. I is the only invertible idempotent. (I + BA)B = B(I + AB).
It is now clear that elementary row operations are important in linear algebra: They are essential in solving
linear systems (using the gaussian algorithm) and in inverting a matrix (using the matrix inversion algo-
rithm). It turns out that they can be performed by left multiplying by certain invertible matrices. These
matrices are the subject of this section.
Hence
0 1 1 0 1 5
E1 = , E2 = , and E3 =
1 0 0 9 0 1
are elementary of types I, II, and III, respectively, obtained from the 2 × 2 identity matrix by interchanging
rows 1 and 2, multiplying row 2 by 9, and adding 5 times row 2 to row 1.
96 Matrix Algebra
a b c
Suppose now that the matrix A = is left multiplied by the above elementary matrices E1 ,
p q r
E2 , and E3 . The results are:
0 1 a b c p q r
E1 A = =
1 0 p q r a b c
1 0 a b c a b c
E2 A = =
0 9 p q r 9p 9q 9r
1 5 a b c a + 5p b + 5q c + 5r
E3 A = =
0 1 p q r p q r
In each case, left multiplying A by the elementary matrix has the same effect as doing the corresponding
row operation to A. This works in general.
Lemma 2.5.1: 10
If an elementary row operation is performed on an m × n matrix A, the result is EA where E is the
elementary matrix obtained by performing the same operation on the m × m identity matrix.
Proof. We prove it for operations of type III; the proofs for types I and II are left as exercises. Let E be the
elementary matrix corresponding to the operation that adds k times row p to row q 6= p. The proof depends
on the fact that each row of EA is equal to the corresponding row of E times A. Let K1 , K2 , . . . , Km denote
the rows of Im . Then row i of E is Ki if i 6= q, while row q of E is Kq + kK p . Hence:
If i 6= q then row i of EA = Ki A = (row i of A).
Row q of EA = (Kq + kK p )A = Kq A + k(K p A)
= (row q of A) plus k (row p of A).
Thus EA is the result of adding k times row p of A to row q, as required.
The effect of an elementary row operation can be reversed by another such operation (called its inverse)
which is also elementary of the same type (see the discussion following (Example 1.1.3). It follows that
each elementary matrix E is invertible. In fact, if a row operation on I produces E, then the inverse
operation carries E back to I. If F is the elementary matrix corresponding to the inverse operation, this
means FE = I (by Lemma 2.5.1). Thus F = E −1 and we have proved
Lemma 2.5.2
Every elementary matrix E is invertible, and E −1 is also a elementary matrix (of the same type).
Moreover, E −1 corresponds to the inverse of the row operation that produces E .
The following table gives the inverse of each type of elementary row operation:
Example 2.5.1
Find the inverse of each of the elementary matrices
0 1 0 1 0 0 1 0 5
E1 = 1 0 0 , E2 = 0 1 0 , and E3 = 0 1 0 .
0 0 1 0 0 9 0 0 1
Solution. E1 , E2 , and E3 are of type I, II, and III respectively, so the table gives
0 1 0 1 0 0 1 0 −5
E1−1 = 1 0 0 = E1 , E2−1 = 0 1 0 , and E3−1 = 0 1 0 .
0 0 1 1 0 0 1
0 0 9
A → E1 A → E2 E1 A → E3 E2 E1 A → · · · → Ek Ek−1 · · · E2 E1 A = B
In other words,
A → UA = B where U = Ek Ek−1 · · · E2 E1
The matrix U = Ek Ek−1 · · · E2 E1 is invertible, being a product of invertible matrices by Lemma 2.5.2.
Moreover, U can be computed without finding the Ei as follows: If the above series of operations carrying
A → B is performed
on Im in place
of A,the result is Im → U Im = U . Hence this series of operations carries
the block matrix A Im → B U . This, together with the above discussion, proves
Theorem 2.5.1
Suppose A is m × n and A → B by elementary row operations.
Example 2.5.2
2 3 1
If A = , express the reduced row-echelon form R of A as R = UA where U is invertible.
1 2 1
Solution. Reduce the double matrix A I → R U as follows:
2 3 1 1 0 1 2 1 0 1 1 2 1 0 1
A I = → →
1 2 1 0 1 2 3 1 1 0 0 −1 −1 1 −2
1 0 −1 2 −3
→
0 1 1 −1 2
1 0 −1 2 −3
Hence R = and U = .
0 1 1 −1 2
Theorem 2.5.2
A square matrix is invertible if and only if it is a product of elementary matrices.
It follows from Theorem 2.5.1 that A → B by row operations if and only if B = UA for some invertible
matrix B. In this case we say that A and B are row-equivalent. (See Exercise 2.5.17.)
Example 2.5.3
−2 3
Express A = as a product of elementary matrices.
1 0
Theorem 2.5.3
Let A be an m × n matrix of rank r. There exist invertible matrices U and V of size m × m and
n × n, respectively, such that
Ir 0
UAV =
0 0 m×n
Moreover, if R is the reduced row-echelon form of A, then:
1. U can be computed by A Im → R U ;
T Ir 0
2. V can be computed by R In → V .
T
0 0 n×m
Ir 0
If A is an m × n matrix of rank r, the matrix is called the Smith normal form11 of A.
0 0
Whereas the reduced row-echelon form of A is the “nicest” matrix to which A can be carried by row
operations, the Smith canonical form is the “nicest” matrix to which A can be carried by row and column
operations. This is because doing row operations to RT amounts to doing column operations to R and then
transposing.
Example 2.5.4
1 −1 1 2
Ir 0
Given A = 2 −2 1 −1 , find invertible matrices U and V such that UAV = ,
0 0
−1 1 0 3
where r = rank A.
Hence
1 −1 0 −3 −1 1 0
R= 0 0 1 5 and U = 2 −1 0
0 0 0 0 −1 1 1
T Ir 0
In particular, r = rank R = 2. Now row-reduce R I4 → VT :
0 0
1 0 0 1 0 0 0 1 0 0 1 0 0 0
−1 0 0 0 1 0
0 0 1 0 0 0 1 0
→
0 1 0 0 0 1 0 0 0 0 1 1 0 0
−3 5 0 0 0 0 1 0 0 0 3 0 −5 1
whence
1 0 0 0 1 0 1 3
0 0 1 0 0 0 1 0
VT = 1 1
so V =
0 0 0 1 0 −5
3 0 −5 −1 0 0 0 1
I 0
Then UAV = 2 as is easily verified.
0 0
In this short subsection, Theorem 2.5.1 is used to prove the following important theorem.
Theorem 2.5.4
If a matrix A is carried to reduced row-echelon matrices R and S by row operations, then R = S.
Proof. Observe first that U R = S for some invertible matrix U (by Theorem 2.5.1 there exist invertible
matrices P and Q such that R = PA and S = QA; take U = QP−1 ). We show that R = S by induction on
2.5. Elementary Matrices 101
the number m of rows of R and S. The case m = 1 is left to the reader. If R j and S j denote column j in R
and S respectively, the fact that U R = S gives
Since U is invertible, this shows that R and S have the same zero columns. Hence, by passing to the
matrices obtained by deleting the zero columns from R and S, we may assume that R and S have no zero
columns.
But then the first column of R and S is the first column of Im because R and S are row-echelon, so
(2.11) shows that the first column of U is column 1 of Im . Now write U , R, and S in block form as follows.
1 X 1 X 1 Z
U= , R= , and S =
0 V 0 R′ 0 S′
Since U R = S, block multiplication gives V R′ = S′ so, since V is invertible (U is invertible) and both R′
and S′ are reduced row-echelon, we obtain R′ = S′ by induction. Hence R and S have the same number
(say r) of leading 1s, and so both have m–r zero rows.
In fact, R and S have leading ones in the same columns, say r of them. Applying (2.11) to these
columns shows that the first r columns of U are the first r columns of Im . Hence we can write U , R, and S
in block form as follows:
Ir M R1 R2 S1 S2
U= , R= , and S =
0 W 0 0 0 0
where R1 and S1 are r × r. Then block multiplication gives U R = R; that is, S = R. This completes the
proof.
Exercise 2.5.1 For each of the following elementary Exercise 2.5.2 In each case find an elementary matrix
matrices, describe the corresponding elementary row op- E such that B = EA.
eration and write the inverse.
2 1 2 1
a. A = ,B=
1 0 3 0 0 1 3 −1 1 −2
a. E = 0 1 0 b. E = 0 1 0
0 0 1 1 0 0 −1 2 1 −2
b. A = ,B=
0 1 0 1
1 0 0 1 0 0
c. E = 0 21 0 d. E = −2 1 0 1 1 −1 2
c. A = ,B=
0 0 1 0 0 1 −1 2 1 1
0 1 0 1 0 0 4 1 1 −1
d. A = ,B=
e. E = 1 0 0 f. E = 0 1 0 3 2 3 2
0 0 1 0 0 5
−1 1 −1 1
e. A = ,B=
1 −1 −1 1
102 Matrix Algebra
2 1 −1 3 Exercise 2.5.8 In each case factor A as a product of el-
f. A = ,B=
−1 3 2 1 ementary matrices.
1 1 2 3
1 2 a. A = b. A =
Exercise 2.5.3 Let A = and 2 1 1 2
−1 1
1 0 2 1 0 −3
−1 1
C= . c. A = 0 1 1 d. A = 0 1 4
2 1
2 1 6 −2 2 15
a. Find elementary matrices E1 and E2 such that Exercise 2.5.9 Let E be an elementary matrix.
C = E2 E1 A.
a. Show that E T is also elementary of the same type.
b. Show that there is no elementary matrix E such
that C = EA. b. Show that E T = E if E is of type I or II.
1 −1 2 1 2 1 1 1 −1 3 2
a. A = b. A = a. A = b. A =
−2 1 0 5 12 −1 −2 −2 4 2 1
1 −1 2 1
1 2 −1 0
c. A = 2 −1 0 3
c. A = 3 1 1 2
0 1 −4 1
1 −3 3 2
1 1 0 −1
2 1 −1 0 d. A = 3 2 1 1
d. A = 3 −1 2 1 1 0 1 3
1 −2 3 1
Exercise 2.5.13 Prove Lemma 2.5.1 for elementary ma-
Exercise 2.5.7 In each case find an invertible matrix U trices of:
such that UA = B, and express U as a product of elemen-
tary matrices. a. type I; b. type II.
2 1 3 1 −1 −2 Exercise 2.5.14
While
trying to invert A, A I
a. A = ,B= is carried to P Q by row operations. Show that
−1 1 2 3 0 1
P = QA.
2 −1 0 3 0 1 Exercise 2.5.15 If A and B are n × n matrices and AB is
b. A = ,B=
1 1 1 2 −1 0 a product of elementary matrices, show that the same is
true of A.
2.5. Elementary Matrices 103
Exercise 2.5.20 Let A and B be m × n and n × m matri- b. Prove that two m × n matrices are equivalent if
ces, respectively. If m > n, show that AB is not invertible. they have the same rank . [Hint: Use part (a) and
[Hint: Use Theorem 1.3.1 to find x 6= 0 with Bx = 0.] Theorem 2.5.3.]
104 Matrix Algebra
is called the matrix transformation induced by A. In Section 2.2, we saw that many important geometric
transformations were in fact matrix transformations. These transformations can be characterized in a
different way. The new idea is that of a linear transformation, one of the basic notions in linear algebra. We
define these transformations in this section, and show that they are really just the matrix transformations
looked at in another way. Having these two ways to view them turns out to be useful because, in a given
situation, one perspective or the other may be preferable.
Linear Transformations
T1 T (x + y) = T (x) + T (y)
T2 T (ax) = aT (x)
Of course, x + y and ax here are computed in Rn , while T (x) + T (y) and aT (x) are in Rm . We say that T
preserves addition if T1 holds, and that T preserves scalar multiplication if T2 holds. Moreover, taking
a = 0 and a = −1 in T2 gives
Hence T preserves the zero vector and the negative of a vector. Even more is true.
Recall that a vector y in Rn is called a linear combination of vectors x1 , x2 , . . . , xk if y has the form
y = a1 x1 + a2 x2 + · · · + ak xk
for some scalars a1 , a2 , . . . , ak . Conditions T1 and T2 combine to show that every linear transformation
T preserves linear combinations in the sense of the following theorem. This result is used repeatedly in
linear algebra.
The proof for any k is similar, using the previous case k − 1 and Conditions T1 and T2.
The method of proof in Theorem 2.6.1 is called mathematical induction (Appendix C).
Theorem 2.6.1 shows that if T is a linear transformation and T (x1 ), T (x2 ), . . . , T (xk ) are all known,
then T (y) can be easily computed for any linear combination y of x1 , x2 , . . . , xk . This is a very useful
property of linear transformations, and is illustrated in the next example.
Example 2.6.1
1 2 1 5 4
If T : R2 → R2
is a linear transformation, T = and T = , find T .
1 −3 −2 1 3
4 1 1
Solution. Write z = ,x= , and y = for convenience. Then we know T (x) and
3 1 −2
T (y) and we want T (z), so it is enough by Theorem 2.6.1 to express z as a linear combination of x
and y. That is, we want to find numbers a and b such that z = ax + by. Equating entries gives two
equations 4 = a + b and 3 = a − 2b. The solution is, a = 11 1 11 1
3 and b = 3 , so z = 3 x + 3 y. Thus
Theorem 2.6.1 gives
11 1 11 2 1 5 1 27
T (z) = 3 T (x) + 3 T (y) = 3 +3 =3
−3 1 −32
Example 2.6.2
If A is m × n, the matrix transformation TA : Rn → Rm , is a linear transformation.
and
TA (ax) = A(ax) = a(Ax) = aTA (x)
hold for all x and y in Rn and all scalars a. Hence TA satisfies T1 and T2, and so is linear.
106 Matrix Algebra
The remarkable thing is that the converse of Example 2.6.2 is true: Every linear transformation
T : Rn → Rm is actually a matrix transformation. To see why, we define the standard basis of Rn to be
the set of columns
{e1 , e2 , . . . , en }
x1
x2
of the identity matrix In . Then each ei is in Rn and every vector x = .. in Rn is a linear combination
.
xn
of the ei . In fact:
x = x1 e1 + x2 e2 + · · · + xn en
as the reader can verify. Hence Theorem 2.6.1 shows that
T (x) = T (x1 e1 + x2 e2 + · · · + xn en ) = x1 T (e1 ) + x2 T (e2 ) + · · · + xn T (en )
Now observe that each T (ei ) is a column in Rm , so
A = T (e1 ) T (e2 ) · · · T (en )
is an m × n matrix. Hence we can apply Definition 2.5 to get
x1
x2
T (x) = x1 T (e1 ) + x2 T (e2 ) + · · · + xn T (en ) = T (e1 ) T (e2 ) · · · T (en ) .. = Ax
.
xn
Since this holds for every x in Rn , it shows that T is the matrix transformation induced by A, and so proves
most of the following theorem.
Theorem 2.6.2
Let T : Rn → Rm be a transformation.
Proof. It remains to verify that the matrix A is unique. Suppose that T is induced by another matrix B.
Then T (x) = Bx for all x in Rn . But T (x) = Ax for each x, so Bx = Ax for every x. Hence A = B by
Theorem 2.2.6.
Hence we can speak of the matrix of a linear transformation. Because of Theorem 2.6.2 we may (and
shall) use the phrases “linear transformation” and “matrix transformation” interchangeably.
2.6. Linear Transformations 107
Example 2.6.3
x1 x1
x1
Define T : R3 → R2 by T x2 = for all x2 in R3 . Show that T is a linear
x2
x3 x3
transformation and use Theorem 2.6.2 to find its matrix.
x1 y1 x1 + y1
Solution. Write x = x2 and y = y2 , so that x + y = x2 + y2 . Hence
x3 y3 x3 + y3
x1 + y1 x1 y1
T (x + y) = = + = T (x) + T (y)
x2 + y2 x2 y2
Similarly, the reader can verify that T (ax) = aT (x) for all a in R, so T is a linear transformation.
Now the standard basis of R3 is
1 0 0
e1 = 0 , e2 = 1 , and e3 = 0
0 0 1
To illustrate how Theorem 2.6.2 is used, we rederive the matrices of the transformations in Exam-
ples 2.2.13 and 2.2.15.
Example 2.6.4
Let Q0 : R2 → R2 denote reflection in the x axis (as in Example 2.2.13) and let R π : R2 → R2
2
denote counterclockwise rotation through π2 about the origin (as in Example 2.2.15). Use
Theorem 2.6.2 to find the matrices of Q0 and R π .
2
y
Solution. Observe that Q0 and R π are linear by Example 2.6.2
2
0
(they are matrix transformations), so Theorem 2.6.2 applies
1
e2
1
to them. The standard basis of R2
is {e1 , e2 } where e1 =
1
0
0
0
0 e1 x points along the positive x axis, and e2 = points along
1
Figure 2.6.1 the positive y axis (see Figure 2.6.1).
108 Matrix Algebra
The reflection of e1 in the x axis is e1 itself because e1 points along the x axis, and the reflection
of e2 in the x axis is −e2 because e2 is perpendicular to the x axis. In other words, Q0 (e1 ) = e1 and
Q0 (e2 ) = −e2 . Hence Theorem 2.6.2 shows that the matrix of Q0 is
1 0
Q0 (e1 ) Q0 (e2 ) = e1 −e2 =
0 −1
Example 2.6.5
Let Q1 : R2 → R2 denote reflection in the line y = x. Show that
y x y
T =
Q1 is a matrix transformation, find its matrix, and use it to illustrate
y x
Theorem 2.6.2.
y=x
x y
e2 x Solution. Figure 2.6.2 shows that Q1 = . Hence
y
y x
x 0 1 y
e1 x Q1 = , so Q1 is the matrix transformation
0
y 1 0 x
Figure 2.6.2 0 1
induced by the matrix A = . Hence Q1 is linear (by
1 0
1 0
Example 2.6.2) and so Theorem 2.6.2 applies. If e1 = and e2 = are the standard basis
0 1
of R2 , then it is clear
geometrically that
Q1(e1 ) = e2 and Q1 (e2 ) = e1 . Thus (by Theorem 2.6.2)
the matrix of Q1 is Q1 (e1 ) Q1 (e2 ) = e2 e1 = A as before.
Theorem 2.6.3
T S
Let Rk −→ Rn −→ Rm be linear transformations, and let A and B be the matrices of S and T
respectively. Then S ◦ T is linear with matrix AB.
Example 2.6.6
π
Show that reflection in the x axis followed by rotation through 2 is reflection in the line y = x.
This conclusion can also be seen geometrically. Let x be a typical point in R2 , and assume that x
makes an angle α with the positive x axis. The effect of first applying Q0 and then applying R π is shown
2
in Figure 2.6.3. The fact that R π [Q0 (x)] makes the angle α with the positive y axis shows that R π [Q0 (x)]
2 2
is the reflection of x in the line y = x.
y
y y
R π [Q0 (x)] y=x
2
x x α x
α
0 x 0 α x 0 α x
Q0 (x) Q0 (x)
Figure 2.6.3
In Theorem 2.6.3, we saw that the matrix of the composite of two linear transformations is the product
of their matrices (in fact, matrix products were defined so that this is the case). We are going to apply
this fact to rotations, reflections, and projections in the plane. Before proceeding, we pause to present
useful geometrical descriptions of vector addition and scalar multiplication in the plane, and to give a
short review of angles and the trigonometric functions.
110 Matrix Algebra
x2 Some Geometry
2x =
2 As we have seen, it is convenient to view a vector x in R2 as an arrow
4
from the origin to the point x (see Section 2.2). This enables us to visualize
x=
1 whatsums and scalar multiples
mean
geometrically.
1 For example
consider
2
1 2 2 1 2 1 − 12
x= in R . Then 2x = , 2x = and − 2 x = , and
1
2x =
1
2 2 4 1 −1
1
these are shown as arrows in Figure 2.6.4.
0 x1
− 12 x =
− 12 Observe that the arrow for 2x is twice as long as the arrow for x and in
−1 the same direction, and that the arrows for 21 x is also in the same direction
as the arrow for x, but only half as long. On the other hand, the arrow
Figure 2.6.4
for − 12 x is half as long as the arrow for x, but in the opposite direction.
More generally, we have the following geometrical description of scalar
multiplication in R2 :
x2
x + y = 34
y = 13
x=
2
Let x be a vector in R2 . The arrow for kx is |k| times12 as long as
1
the arrow for x, and is in the same direction as the arrow for x if
0 x1 k > 0, and in the opposite direction if k < 0.
Figure 2.6.5
2 1
Now consider two vectors x = and y = in R2 . They are
1 3
x2 x+y
3
plotted in Figure 2.6.5 along with their sum x + y = . It is a routine
4
y matter to verify that the four points 0, x, y, and x + y form the vertices of a
x
parallelogram–that is opposite sides are parallel and of the same length.
(The reader should verify that the side from 0 to x has slope of 21 , as does
0 x1 the side from y to x + y, so these sides are parallel.) We state this as
follows:
Figure 2.6.6
y
Radian
measure Parallelogram Law
p of θ
1 θ Consider vectors x and y in R2 . If the arrows for x and y are drawn
0
x (see Figure 2.6.6), the arrow for x + y corresponds to the fourth
vertex of the parallelogram determined by the points x, y, and 0.
(radius 1, centre at the origin). The radian measure of θ is the length of the arc on the unit circle from the
positive x axis to p. Thus 360◦ = 2π radians, 180◦ = π , 90◦ = π2 , and so on.
The point p in Figure 2.6.7 is also closely linked to the trigonometric functions cosine and sine, written
cos θ and sinθ respectively. In fact these functions are defined to be the x and y coordinates of p; that is
cos θ
p= . This defines cos θ and sin θ for the arbitrary angle θ (possibly negative), and agrees with
sin θ
the usual values when θ is an acute angle 0 ≤ θ ≤ π2 as the reader should verify. For more discussion
of this, see Appendix A.
Rotations
Figure 2.6.9 A similar argument shows that Rθ (ax) = aRθ (x) for any scalar a, so
Rθ : R2 → R2 is indeed a linear transformation.
y 1
sin θ With linearity established we can find the matrix of Rθ . Let e1 =
e2 cos θ 0
Rθ (e2 )
0
Rθ (e1 ) and e2 = denote the standard basis of R2 . By Figure 2.6.10 we see
1
θ
1 1
sin θ
θ
e1
x that
0 cos θ cos θ − sin θ
Rθ (e1 ) = and Rθ (e2 ) =
sin θ cos θ
Figure 2.6.10
Hence Theorem 2.6.2 shows that Rθ is induced by the matrix
cos θ − sin θ
Rθ (e1 ) Rθ (e2 ) =
sin θ cos θ
112 Matrix Algebra
We record this as
Theorem 2.6.4
cos θ − sin θ
The rotation Rθ : R2 → R2 is the linear transformation with matrix .
sin θ cos θ
0 −1 −1 0
For example, R π and Rπ have matrices and , respectively, by Theorem 2.6.4.
2 1 0 0 −1
x
The first of these confirms the result in Example 2.2.15. The second shows that rotating a vector x =
y
−1 0 x −x
through the angle π results in Rπ (x) = = = −x. Thus applying Rπ is the same
0 −1 y −y
as negating x, a fact that is evident without Theorem 2.6.4.
Example 2.6.7
Let θ and φ be angles. By finding the matrix of the composite
y Rθ ◦ Rφ , obtain expressions for cos(θ + φ ) and sin(θ + φ ).
Rφ R
Rθ Rφ (x)
Solution. Consider the transformations R2 −→ R2 −→ θ
R2 . Their
composite Rθ ◦ Rφ is the transformation that first rotates the
plane through φ and then rotates it through θ , and so is the rotation
Rφ (x)
θ through the angle θ + φ (see Figure 2.6.11).
φ x
In other words
x
0 Rθ +φ = Rθ ◦ Rφ
Figure 2.6.11 Theorem 2.6.3 shows that the corresponding equation holds
for the matrices of these transformations, so Theorem 2.6.4 gives:
cos(θ + φ ) − sin(θ + φ ) cos θ − sin θ cos φ − sin φ
=
sin(θ + φ ) cos(θ + φ ) sin θ cos θ sin φ cos φ
If we perform the matrix multiplication on the right, and then compare first column entries, we
obtain
These are the two basic identities from which most of trigonometry can be derived.
2.6. Linear Transformations 113
Reflections
The line through the origin with slope m has equation y = mx, and we let
y Qm : R2 → R2 denote reflection in the line y = mx.
This transformation is described geometrically in Figure 2.6.12. In
Qm (x)
words, Qm (x) is the “mirror image” of x in the line y = mx. If m = 0 then
y = mx Q0 is reflection in the x axis, so we already know Q0 is linear. While we
could show directly that Qm is linear (with an argument like that for Rθ ),
x
we prefer to do it another way that is instructive and derives the matrix of
x
0 Qm directly without using Theorem 2.6.2.
Let θ denote the angle between the positive x axis and the line y = mx.
Figure 2.6.12
The key observation is that the transformation Qm can be accomplished in
three steps: First rotate through −θ (so our line coincides with the x axis), then reflect in the x axis, and
finally rotate back through θ . In other words:
Qm = Rθ ◦ Q0 ◦ R−θ
Since R−θ , Q0 , and Rθ are all linear, this (with Theorem 2.6.3) shows that Qm is linear and that its matrix
is the product of the matrices of Rθ , Q0 , and R−θ . If we write c = cos θ and s = sin θ for simplicity, then
the matrices of Rθ , R−θ , and Q0 are
c −s c s 1 0
, , and respectively.13
s c −s c 0 −1
13 The matrix of R−θ comes from the matrix of Rθ using the fact that, for all angles θ , cos(−θ ) = cos θ and
sin(−θ ) = − sin(θ ).
114 Matrix Algebra
1 0
Note that if m = 0, the matrix in Theorem 2.6.5 becomes , as expected. Of course this
0 −1
analysis fails for reflection in the y axis because vertical lines have no slope. Howeverit is an easy
−1 0 14
exercise to verify directly that reflection in the y axis is indeed linear with matrix .
0 1
Example 2.6.8
Let T : R2 → R2 be rotation through − π2 followed by reflection in the y axis. Show that T is a
reflection in a line through the origin and find the line.
cos(− π2 ) − sin(− π2 ) 0 1
Solution. The matrix of R− π is = and the matrix of
2 π
sin(− 2 ) π
cos(− 2 ) −1 0
−1 0
reflection in the y axis is . Hence the matrix of T is
0 1
−1 0 0 1 0 −1
= and this is reflection in the line y = −x (take m = −1 in
0 1 −1 0 −1 0
Theorem 2.6.5).
Projections
The method in the proof of Theorem 2.6.5 works more generally. Let
y Pm : R2 → R2 denote projection on the line y = mx. This transformation is
described geometrically in Figure 2.6.14.
y = mx x x x
If m = 0, then P0 = for all in R2 , so P0 is linear with
Pm (x) y 0 y
1 0
matrix . Hence the argument above for Qm goes through for Pm .
x 0 0
First observe that
x Pm = Rθ ◦ P0 ◦ R−θ
0
as before. So, Pm is linear with matrix
Figure 2.6.14
2
c −s 1 0 c s c sc
=
s c 0 0 −s c sc s2
14 Note −1 0 1 1 − m2 2m
that = lim .
0 1 m→∞ 1+m
2
2m m2 − 1
2.6. Linear Transformations 115
This gives:
Theorem 2.6.6
Let Pm : R2 → R2 be projection on the line y = mx. Then Pm is a linear transformation with matrix
1 1 m
.
1+m2 m m2
1 0
Again, if m = 0, then the matrix in Theorem 2.6.6 reduces to as expected. As the y axis has
0 0
no
slope, the analysis fails for projection on the y axis, but this transformation is indeed linear with matrix
0 0
as is easily verified directly.
0 1
Note that the formula for the matrix of Qm in Theorem 2.6.5 can be derived from the above formula
for the matrix of Pm . Using Figure 2.6.12, observe that Qm (x) = x + 2[Pm (x) − x] so Qm (x) = 2Pm (x) − x.
Substituting the matrices for Pm (x) and 1R2 (x) gives the desired formula.
Example 2.6.9
Given x in R2 , write y = Pm (x). The fact that y lies on the line y = mx means that Pm (y) = y. But
then
(Pm ◦ Pm )(x) = Pm (y) = y = Pm (x) for all x in R2 , that is, Pm ◦ Pm = Pm .
1 1 m
In particular, if we write the matrix of Pm as A = 1+m2 , then A2 = A. The reader should
m m2
verify this directly.
Exercise 2.6.1 Let T : R3 → R2 be a linear transforma- Exercise 2.6.2 Let T : R4 → R3 be a linear transforma-
tion. tion.
1 1
8 1 2
2 3 1
a. Find T 3 if T 0 = a. Find T = 3
3 −2 if T 0
7 −1 −1
2 −3 −1
−1 0
and T 1 = . 5
0 −1
3 and T = 0 .
1
1
5 3 1
3
b. Find T 6 if T 2 =
5
−13 −1 5 1
−1 5
2 1
−1 b. Find T
2 if T = 1
and T 0 = . 1
2 −3
5 −4 1
116 Matrix Algebra
−1 x −3x + 4y
2 1
1 a. T = 5
and T y 4x + 3y
0 = 0 .
1
2 x 1 x+y
b. T = 2
√
y −x + y
Exercise 2.6.3 In each case assume that the transfor- √
mation T is linear, and use Theorem 2.6.2 to obtain the x 1 x − 3y
√
c. T = 3
√
matrix A of T . y 3x + y
x 1 8x + 6y
a. T : R2 → R2 is reflection in the line y = −x. d. T = − 10
y 6x − 8y
b. T : R2 → R2 is given by T (x) = −x for each x in R2 .
Exercise 2.6.9 Express reflection in the line y = −x as
c. T : R2 → R2 is clockwise rotation through π4 . the composition of a rotation followed by reflection in
the line y = x.
d. T : R2 → R2 is counterclockwise rotation through π4 .
Exercise 2.6.10 Find the matrix of T : R3 → R3 in each
case:
Exercise 2.6.4 In each case use Theorem 2.6.2 to obtain
the matrix A of the transformation T . You may assume a. T is rotation through θ about the x axis (from the
that T is linear in each case. y axis to the z axis).
a. T : R3 → R3 is reflection in the x − z plane. b. T is rotation through θ about the y axis (from the
x axis to the z axis).
b. T : R3 → R3 is reflection in the y − z plane.
Exercise 2.6.11 Let Tθ : R2 → R2 denote reflection in
Exercise 2.6.5 Let T : Rn → Rm be a linear transforma- the line making an angle θ with the positive x axis.
tion.
cos 2θ sin 2θ
a. Show that the matrix of Tθ is
sin 2θ − cos 2θ
a. If x is in R , we say that x is in the kernel of T if
n
for all θ .
T (x) = 0. If x1 and x2 are both in the kernel of T ,
show that ax1 + bx2 is also in the kernel of T for b. Show that Tθ ◦ R2φ = Tθ −φ for all θ and φ .
all scalars a and b.
Exercise 2.6.12 In each case find a rotation or reflection
b. If y is in Rn , we say that y is in the image of T if that equals the given transformation.
y = T (x) for some x in Rn . If y1 and y2 are both
in the image of T , show that ay1 + by2 is also in a. Reflection in the y axis followed by rotation
the image of T for all scalars a and b. through π2 .
b. Rotation through π followed by reflection in the x
Exercise 2.6.6 Use Theorem 2.6.2 to find the matrix of axis.
the identity transformation 1Rn : Rn → Rn defined by
π
1Rn (x) = x for each x in Rn . c. Rotation through 2 followed by reflection in the
line y = x.
Exercise 2.6.7 In each case show that T : R2 → R2 is
not a linear transformation. d. Reflection in the x axis followed by rotation
through π2 .
x xy x 0
a. T = b. T = e. Reflection in the line y = x followed by reflection
y 0 y y2
in the x axis.
Exercise 2.6.8 In each case show that T is either reflec- f. Reflection in the x axis followed by reflection in
tion in a line or rotation through an angle, and find the the line y = x.
line or angle.
2.6. Linear Transformations 117
Exercise 2.6.13 Let R and S be matrix transformations Exercise 2.6.18 Let Q0 : R2 → R2 be reflection in the x
Rn → Rm induced by matrices A and B respectively. In axis, let Q1 : R2 → R2 be reflection in the line y = x, let
each case, show that T is a matrix transformation and Q−1 : R2 → R2 be reflection in the line y = −x, and let
describe its matrix in terms of A and B. R π : R2 → R2 be counterclockwise rotation through π2 .
2
c. Show that R π ◦ Q0 = Q1 .
2
Exercise 2.6.14 Show that the following hold for all lin- d. Show that Q0 ◦ R π = Q−1 .
ear transformations T : Rn → Rm : 2
2.7 LU-Factorization15
The solution to a system Ax = b of linear equations can be solved quickly if A can be factored as A = LU
where L and U are of a particularly nice form. In this section we show that gaussian elimination can be
used to find such factorizations.
Triangular Matrices
As for square matrices, if A = ai j is an m × n matrix, the elements a11 , a22 , a33 , . . . form the main
diagonal of A. Then A is called upper triangular if every entry below and to the left of the main diagonal
is zero. Every row-echelon matrix is upper triangular, as are the matrices
1 1 1
1 −1 0 3 0 2 1 0 5 0 −1 1
0 2 1 1 0 0 0 3 1 0
0 0
0 0 −3 0 0 0 1 0 1
0 0 0
By analogy, a matrix A is called lower triangular if its transpose is upper triangular, that is if each entry
above and to the right of the main diagonal is zero. A matrix is called triangular if it is upper or lower
triangular.
Example 2.7.1
Solve the system
x1 + 2x2 − 3x3 − x4 + 5x5 = 3
5x3 + x4 + x5 = 8
2x5 = 6
where the coefficient matrix is upper triangular.
x3 = 1 − 15 t
x1 = −9 − 2s + 52 t
The method used in Example 2.7.1 is called back substitution because later variables are substituted into earlier equations. It works because the coefficient matrix is upper triangular. Similarly, if the coefficient matrix is lower triangular the system can be solved by forward substitution where earlier variables are substituted into later equations. As observed in Section 1.2, these procedures are more numerically efficient than gaussian elimination.

15 This section is not used later and so may be omitted with no loss of continuity.
Now consider a system Ax = b where A can be factored as A = LU where L is lower triangular and U is upper triangular. Then the system Ax = b can be solved in two stages as follows:

1. First solve Ly = b for y by forward substitution.

2. Then solve Ux = y for x by back substitution.

Then x is a solution to Ax = b because Ax = LUx = Ly = b.
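To make the two stages concrete, here is a minimal Python sketch (our own illustration, not part of the text, assuming NumPy is available); the function names and the sample triangular factors are ours.

```python
import numpy as np

def forward_substitution(L, b):
    """Solve Ly = b where L is lower triangular with nonzero diagonal."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def back_substitution(U, y):
    """Solve Ux = y where U is upper triangular with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Solve Ax = b in two stages, given a factorization A = LU
# (a small invertible triangular pair chosen for illustration).
L = np.array([[2.0, 0, 0], [1, -1, 0], [-1, 2, 5]])
U = np.array([[1.0, 2, 1], [0, 1, -1], [0, 0, 1]])
b = np.array([1.0, -1, 2])
y = forward_substitution(L, b)    # stage 1: Ly = b
x = back_substitution(U, y)       # stage 2: Ux = y
print(np.allclose(L @ U @ x, b))  # True
```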
Lemma 2.7.1
Let A and B denote matrices.
1. If A and B are both lower (upper) triangular, the same is true of AB.
2. If A is n × n and lower (upper) triangular, then A is invertible if and only if every main
diagonal entry is nonzero. In this case A−1 is also lower (upper) triangular.
LU-Factorization
Let A be an m × n matrix. Then A can be carried to a row-echelon matrix U (that is, upper triangular). As
in Section 2.5, the reduction is
A → E1 A → E2 E1 A → E3 E2 E1 A → · · · → Ek Ek−1 · · · E2 E1 A = U
where E1, E2, ..., Ek are elementary matrices corresponding to the row operations used. Hence

A = (Ek Ek−1 ··· E2E1)⁻¹ U

If only two types of row operations are used—multiplying a row by a nonzero number, and adding a multiple of a row to a row below it—the reduction is called a lower reduction. In that case each Ei is lower triangular, so (Ek Ek−1 ··· E2E1)⁻¹ is lower triangular and invertible by Lemma 2.7.1. This proves:
Theorem 2.7.1
If A can be lower reduced to a row-echelon matrix U , then
A = LU
where L is lower triangular and invertible and U is upper triangular and row-echelon.
Such a factorization may not exist (Exercise 2.7.4) because A cannot always be carried to row-echelon form without using row interchanges. A procedure for dealing with this situation will be outlined later. However, if
an LU-factorization A = LU does exist, then the gaussian algorithm gives U and also leads to a procedure
for finding L. Example 2.7.2 provides an illustration. For convenience, the first nonzero column from the
left in a matrix A is called the leading column of A.
Example 2.7.2

Find an LU-factorization of A = [0 2 −6 −2 4; 0 −1 3 3 2; 0 −1 3 7 10].

Solution. The lower reduction to row-echelon form is

A = [0 2 −6 −2 4; 0 −1 3 3 2; 0 −1 3 7 10] → [0 1 −3 −1 2; 0 0 0 2 4; 0 0 0 6 12] → [0 1 −3 −1 2; 0 0 0 1 2; 0 0 0 0 0] = U

The circled columns are determined as follows: The first is the leading column of A, namely (2, −1, −1)ᵀ, and is used (by lower reduction) to create the first leading 1 and create zeros below it. This completes the work on row 1, and we repeat the procedure on the matrix consisting of the remaining rows. Thus the second circled column is the leading column of this smaller matrix, namely (2, 6)ᵀ, which we use to create the second leading 1 and the zeros below it. As the remaining row is zero here, we are finished. Then A = LU where

L = [2 0 0; −1 2 0; −1 6 1]

This matrix L is obtained from I3 by replacing the bottom of the first two columns by the circled columns in the reduction. Note that the rank of A is 2 here, and this is the number of circled columns.
The calculation in Example 2.7.2 works in general. There is no need to calculate the elementary matrices Ei, and the method is suitable for use in a computer because the circled columns can be stored in memory as they are created. The procedure can be formally stated as follows:
LU-Algorithm

Let A be an m × n matrix of rank r, and suppose that A can be lower reduced to a row-echelon matrix U. Then A = LU where the lower triangular, invertible matrix L is constructed as follows:

1. If A = 0, take L = Im and U = 0.

2. If A ≠ 0, write A1 = A and let c1 be the leading column of A1. Use c1 to create the first leading 1 and create zeros below it (using lower reduction). When this is completed, let A2 denote the matrix consisting of rows 2 to m of the matrix just created.

3. If A2 ≠ 0, let c2 be the leading column of A2 and repeat Step 2 on A2 to create the second leading 1 and the zeros below it; then let A3 denote the rows below that leading 1.

4. Continue in this way until U is reached, where all rows below the last leading 1 consist of zeros. This will happen after r steps.

Then L is obtained from Im by placing c1, c2, ..., cr at the bottom of the first r columns.
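The LU-algorithm translates directly into code. The sketch below is our own illustration (assuming NumPy; the function name is ours): it performs lower reduction, stores the "circled" columns into L as they are created, and raises an error if a row interchange would be needed, i.e. if no LU-factorization of this kind exists.

```python
import numpy as np

def lu_algorithm(A, tol=1e-12):
    """LU-factorization by lower reduction (no row interchanges).
    Returns (L, U) with L m x m lower triangular and invertible
    and U m x n row-echelon, assuming no interchanges are needed."""
    U = A.astype(float).copy()
    m = U.shape[0]
    L = np.eye(m)
    row = 0
    for col in range(U.shape[1]):
        if row == m:
            break
        pivot = U[row, col]
        if abs(pivot) < tol:
            if np.any(np.abs(U[row:, col]) > tol):
                raise ValueError("row interchange needed: no LU-factorization")
            continue  # column is zero from this row down; move right
        L[row:, row] = U[row:, col]          # store the circled column in L
        U[row, col:] = U[row, col:] / pivot  # create the leading 1
        for r in range(row + 1, m):          # create zeros below it
            U[r, col:] -= U[r, col] * U[row, col:]
        row += 1
    return L, U

A = np.array([[0, 2, -6, -2, 4],
              [0, -1, 3, 3, 2],
              [0, -1, 3, 7, 10]])
L, U = lu_algorithm(A)          # reproduces L and U of Example 2.7.2
print(np.allclose(L @ U, A))    # True
```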
Example 2.7.3

Find an LU-factorization for A = [5 −5 10 0 5; −3 3 2 2 1; −2 2 0 −1 0; 1 −1 10 2 5].

Solution. The lower reduction is

A → [1 −1 2 0 1; 0 0 8 2 4; 0 0 4 −1 2; 0 0 8 2 4]
  → [1 −1 2 0 1; 0 0 1 1/4 1/2; 0 0 0 −2 0; 0 0 0 0 0]
  → [1 −1 2 0 1; 0 0 1 1/4 1/2; 0 0 0 1 0; 0 0 0 0 0] = U

The circled columns are c1 = (5, −3, −2, 1)ᵀ, c2 = (8, 4, 8)ᵀ, and c3 = (−2, 0)ᵀ, so A = LU where

L = [5 0 0 0; −3 8 0 0; −2 4 −2 0; 1 8 0 1]
The next example deals with a case where no row of zeros is present in U (in fact, A is invertible).
Example 2.7.4

Find an LU-factorization for A = [2 4 2; 1 1 2; −1 0 2].

Solution. The lower reduction is

A → [1 2 1; 0 −1 1; 0 2 3] → [1 2 1; 0 1 −1; 0 0 5] → [1 2 1; 0 1 −1; 0 0 1] = U

The circled columns are (2, 1, −1)ᵀ, (−1, 2)ᵀ, and (5). Hence A = LU where L = [2 0 0; 1 −1 0; −1 2 5].
There are matrices (for example [0 1; 1 0]) that have no LU-factorization and so require at least one row interchange when being carried to row-echelon form via the gaussian algorithm. However, it turns out that, if all the row interchanges encountered in the algorithm are carried out first, the resulting matrix requires no interchanges and so has an LU-factorization. Here is the precise result.
Theorem 2.7.2
Suppose an m × n matrix A is carried to a row-echelon matrix U via the gaussian algorithm. Let
P1 , P2 , . . . , Ps be the elementary matrices corresponding (in order) to the row interchanges used,
and write P = Ps · · · P2 P1 . (If no interchanges are used take P = Im .) Then:
1. PA is the matrix obtained from A by doing these interchanges (in order) to A.

2. PA has an LU-factorization.
Example 2.7.5

If A = [0 0 −1 2; −1 −1 1 2; 2 1 −3 6; 0 1 −1 4], find a permutation matrix P such that PA has an LU-factorization, and then find the factorization.

Solution. Apply the gaussian algorithm to A:

A → (interchange rows 1 and 2) [−1 −1 1 2; 0 0 −1 2; 2 1 −3 6; 0 1 −1 4]
  → [1 1 −1 −2; 0 0 −1 2; 0 −1 −1 10; 0 1 −1 4]
  → (interchange rows 2 and 3) [1 1 −1 −2; 0 −1 −1 10; 0 0 −1 2; 0 1 −1 4]
  → [1 1 −1 −2; 0 1 1 −10; 0 0 −1 2; 0 0 −2 14]
  → [1 1 −1 −2; 0 1 1 −10; 0 0 1 −2; 0 0 0 10]
  → [1 1 −1 −2; 0 1 1 −10; 0 0 1 −2; 0 0 0 1]

Two row interchanges were needed: first rows 1 and 2, and then rows 2 and 3. Hence, as in Theorem 2.7.2,

P = P2P1 = [1 0 0 0; 0 0 1 0; 0 1 0 0; 0 0 0 1][0 1 0 0; 1 0 0 0; 0 0 1 0; 0 0 0 1] = [0 1 0 0; 0 0 1 0; 1 0 0 0; 0 0 0 1]

If we do these interchanges (in order) to A, the result is PA. Now apply the LU-algorithm to PA:

PA = [−1 −1 1 2; 2 1 −3 6; 0 0 −1 2; 0 1 −1 4]
   → [1 1 −1 −2; 0 −1 −1 10; 0 0 −1 2; 0 1 −1 4]
   → [1 1 −1 −2; 0 1 1 −10; 0 0 −1 2; 0 0 −2 14]
   → [1 1 −1 −2; 0 1 1 −10; 0 0 1 −2; 0 0 0 10]
   → [1 1 −1 −2; 0 1 1 −10; 0 0 1 −2; 0 0 0 1] = U

Hence, PA = LU, where L = [−1 0 0 0; 2 −1 0 0; 0 0 −1 0; 0 1 −2 10] and U = [1 1 −1 −2; 0 1 1 −10; 0 0 1 −2; 0 0 0 1].
Theorem 2.7.2 provides an important general factorization theorem for matrices. If A is any m × n
matrix, it asserts that there exists a permutation matrix P and an LU-factorization PA = LU . Moreover,
it shows that either P = I or P = Ps · · · P2 P1 , where P1 , P2 , . . . , Ps are the elementary permutation matri-
ces arising in the reduction of A to row-echelon form. Now observe that Pi⁻¹ = Pi for each i (they are elementary row interchanges). Thus, P⁻¹ = P1P2 ··· Ps, so the matrix A can be factored as
A = P−1 LU
where P−1 is a permutation matrix, L is lower triangular and invertible, and U is a row-echelon matrix.
This is called a PLU-factorization of A.
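In practice one rarely builds P by hand; library routines compute a PLU-factorization directly. Here is a sketch using SciPy (our own illustration, assuming SciPy is installed). Note the convention: scipy.linalg.lu returns factors with A = PLU, so its P plays the role of P⁻¹ in the notation above; SciPy also normalizes L to have 1s on the diagonal and chooses pivots by partial pivoting, so its factors generally differ from the ones the LU-algorithm would produce.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 0, -1, 2],
              [-1, -1, 1, 2],
              [2, 1, -3, 6],
              [0, 1, -1, 4]])   # the matrix of Example 2.7.5
P, L, U = lu(A)                  # A = P @ L @ U
print(np.allclose(A, P @ L @ U))    # True
print(np.allclose(P.T @ A, L @ U))  # P.T @ A has an LU-factorization
```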
The LU-factorization in Theorem 2.7.1 is not unique. For example,

[1 0; 3 2][1 −2 3; 0 0 0] = [1 0; 3 1][1 −2 3; 0 0 0]
However, it is necessary here that the row-echelon matrix has a row of zeros. Recall that the rank of a
matrix A is the number of nonzero rows in any row-echelon matrix U to which A can be carried by row
operations. Thus, if A is m × n, the matrix U has no row of zeros if and only if A has rank m.
Theorem 2.7.3
Let A be an m × n matrix that has an LU-factorization
A = LU
If A has rank m (that is, U has no row of zeros), then L and U are uniquely determined by A.
Proof. Suppose A = LU = MV are two such factorizations, where L and M are lower triangular and invertible and U and V are row-echelon. Then N = M⁻¹L is lower triangular and invertible (Lemma 2.7.1) and NU = V, so it suffices to prove that N = I. If N is
m × m, we use induction on m. The case m = 1 is left to the reader. If m > 1, observe first that column 1
of V is N times column 1 of U . Thus if either column is zero, so is the other (N is invertible). Hence, we
can assume (by deleting zero columns) that the (1, 1)-entry is 1 in both U and V .
Now we write N = [a 0; X N1], U = [1 Y; 0 U1], and V = [1 Z; 0 V1] in block form. Then NU = V becomes

[a aY; X XY + N1U1] = [1 Z; 0 V1]

Hence a = 1, Y = Z, X = 0, and N1U1 = V1. But N1U1 = V1 implies N1 = I by induction, whence N = I.
If A is an m × m invertible matrix, then A has rank m by Theorem 2.4.5. Hence, we get the following
important special case of Theorem 2.7.3.
Corollary 2.7.1
If an invertible matrix A has an LU-factorization A = LU , then L and U are uniquely determined by
A.
Of course, in this case U is an upper triangular matrix with 1s along the main diagonal.
Proofs of Theorems
Proof of the LU-Algorithm. If c1, c2, ..., cr are columns of lengths m, m−1, ..., m−r+1, respectively, write L(m)[c1, c2, ..., cr] for the lower triangular m × m matrix obtained from Im by placing c1, c2, ..., cr at the bottom of the first r columns of Im.
Proceed by induction on n. If A = 0 or n = 1, it is left to the reader. If n > 1, let c1 denote the leading column of A and let k1 denote the first column of the m × m identity matrix. There exist lower triangular elementary matrices E1, ..., Ek such that, in block form,

(Ek ··· E2E1)A = [0 1 X1; 0 0 A1]   where (Ek ··· E2E1)c1 = k1

If G = (Ek ··· E2E1)⁻¹, then G = L(m)[c1]. Now, by induction, let A1 = L1U1 be an LU-factorization of A1, where L1 = L(m−1)[c2, ..., cr] and U1 is row-echelon. Then block multiplication gives

G⁻¹A = [0 1 X1; 0 0 L1U1] = [1 0; 0 L1][0 1 X1; 0 0 U1]

Hence A = LU, where U = [0 1 X1; 0 0 U1] is row-echelon and

L = L(m)[c1] · [1 0; 0 L1] = L(m)[c1, c2, ..., cr]

in block form. This completes the induction.
Proof of Theorem 2.7.2. Let A ≠ 0 with leading column c1, and let P1 be a permutation matrix (either elementary or Im) such that the leading column of P1A has a nonzero entry on top; as in the LU-algorithm, that column can then be used to create the first leading 1 and the zeros below it. Then let P2 be a permutation matrix (either elementary or Im) such that

P2 · L(m)[c1]⁻¹ · P1 · A = [0 1 X1; 0 0 A′1]

and the first nonzero column c2 of A′1 has a nonzero entry on top. Thus,

L(m)[k1, c2]⁻¹ · P2 · L(m)[c1]⁻¹ · P1 · A = [0 1 X1; 0 0 1 X2; 0 0 0 A2]

Continue in this way until, after r steps, a row-echelon matrix U is reached. The permutation matrices can then be moved past the lower triangular factors: if j < k, then

Pk Lj = L′j Pk

where Lj = L(m)[k1, ..., kj−1, cj] and L′j = L(m)[k1, ..., kj−1, c′j] for some column c′j of length m − j + 1 (this is Lemma 2.7.2 below). Given that this is true, we can collect all the interchanges on the right of the product. If we write P = Pr Pr−1 ··· P2P1, this shows that PA has an LU-factorization because Lr L′r−1 ··· L′2L′1 is lower triangular and invertible. All that remains is to prove the following rather technical result.
Lemma 2.7.2

Let Pk result from interchanging row k of Im with a row below it. If j < k, let cj be a column of length m − j + 1. Then there is another column c′j of length m − j + 1 such that

Pk · L(m)[k1, ..., kj−1, cj] = L(m)[k1, ..., kj−1, c′j] · Pk
Exercise 2.7.1 Find an LU-factorization of the following matrices.

a. [2 6 −2 0 2; 3 9 −3 3 1; −1 −3 1 −3 1]

b. [2 4 2; 1 −1 3; −1 7 −7]

c. [2 6 −2 0 2; 1 5 −1 2 5; 3 7 −3 −2 5; −1 −1 1 2 3]

d. [−1 −3 1 0 −1; 1 4 1 1 1; 1 2 −3 −1 1; 0 −2 −4 −2 0]

e. [2 2 4 6 0 2; 1 −1 2 1 3 1; −2 2 −4 −1 1 6; 0 2 0 3 4 8; −2 4 −4 1 −2 6]

f. [2 2 −2 4 2; 1 −1 0 2 1; 3 1 −2 6 3; 1 3 −2 2 1]

Exercise 2.7.2 Find a permutation matrix P and an LU-factorization of PA if A is:

a. [0 0 2; 0 −1 4; 3 5 1]   b. [0 −1 2; 0 0 4; −1 2 1]

c. [0 −1 2 1 3; −1 1 3 1 4; 1 −1 −3 6 2; 2 −2 −4 1 0]

d. [−1 −2 3 0; 2 4 −6 5; 1 1 −1 3; 2 5 −10 1]

Exercise 2.7.3 In each case use the given LU-decomposition of A to solve the system Ax = b by finding y such that Ly = b, and then x such that Ux = y:

a. A = [2 0 0; 0 −1 0; 1 1 3][1 0 0 1; 0 0 1 2; 0 0 0 1];   b = [1; −1; 2]

b. A = [2 0 0; 1 3 0; −1 2 1][1 1 0 −1; 0 1 0 1; 0 0 0 0];   b = [−2; −1; 1]

c. A = [−2 0 0 0; 1 −1 0 0; −1 0 2 0; 0 1 0 2][1 −1 2 1; 0 1 1 −4; 0 0 1 1; 0 0 0 1];   b = [1; −1; 2; 0]

d. A = [2 0 0 0; 1 −1 0 0; −1 1 2 0; 3 0 1 −1][1 −1 0 1; 0 1 −2 −1; 0 0 1 1; 0 0 0 0];   b = [4; −6; 4; 5]

Exercise 2.7.4 Show that [0 1; 1 0] = LU is impossible where L is lower triangular and U is upper triangular.

Exercise 2.7.5 Show that we can accomplish any row interchange by using only row operations of other types.

Exercise 2.7.6

a. Let L and L1 be invertible lower triangular matrices, and let U and U1 be invertible upper triangular matrices. Show that LU = L1U1 if and only if there exists an invertible diagonal matrix D such that L1 = LD and U1 = D⁻¹U. [Hint: Scrutinize L⁻¹L1 = UU1⁻¹.]

b. Use part (a) to prove Theorem 2.7.3 in the case that A is invertible.

Exercise 2.7.7 Prove Lemma 2.7.1(1). [Hint: Use block multiplication and induction.]

Exercise 2.7.8 Prove Lemma 2.7.1(2). [Hint: Use block multiplication and induction.]

Exercise 2.7.9 A triangular matrix is called unit triangular if it is square and every main diagonal element is a 1.

a. If A can be carried by the gaussian algorithm to row-echelon form using no row interchanges, show that A = LU where L is unit lower triangular and U is upper triangular.

b. Show that the factorization in (a.) is unique.

Exercise 2.7.10 Let c1, c2, ..., cr be columns of lengths m, m − 1, ..., m − r + 1. If kj denotes column j of Im, show that L(m)[c1, c2, ..., cr] = L(m)[c1] L(m)[k1, c2] L(m)[k1, k2, c3] ··· L(m)[k1, k2, ..., kr−1, cr]. The notation is as in the proof of Theorem 2.7.2. [Hint: Use induction on m and block multiplication.]

Exercise 2.7.11 Prove Lemma 2.7.2. [Hint: Pk⁻¹ = Pk. Write Pk = [Ik 0; 0 P0] in block form where P0 is an (m − k) × (m − k) permutation matrix.]
2.8 An Application to Input-Output Economic Models

In 1973 Wassily Leontief was awarded the Nobel prize in economics for his work on mathematical models.17 Roughly speaking, an economic system in this model consists of several industries, each of which
produces a product and each of which uses some of the production of the other industries. The following
example is typical.
16 The applications in this section and the next are independent and may be taken in any order.
17 See W. W. Leontief, “The world economy of the year 2000,” Scientific American, Sept. 1980.
Example 2.8.1
A primitive society has three basic needs: food, shelter, and clothing. There are thus three
industries in the society—the farming, housing, and garment industries—that produce these
commodities. Each of these industries consumes a certain proportion of the total output of each
commodity according to the following table.
                         OUTPUT
                 Farming  Housing  Garment
          Farming  0.4      0.2      0.3
CONSUMPTION Housing 0.2     0.6      0.4
          Garment  0.4      0.2      0.3
Find the annual prices that each industry must charge for its income to equal its expenditures.
Solution. Let p1 , p2 , and p3 be the prices charged per year by the farming, housing, and garment
industries, respectively, for their total output. To see how these prices are determined, consider the
farming industry. It receives p1 for its production in any year. But it consumes products from all
these industries in the following amounts (from row 1 of the table): 40% of the food, 20% of the
housing, and 30% of the clothing. Hence, the expenditures of the farming industry are
0.4p1 + 0.2p2 + 0.3p3 , so
0.4p1 + 0.2p2 + 0.3p3 = p1
A similar analysis of the other two industries leads to the following system of equations.

0.4p1 + 0.2p2 + 0.3p3 = p1
0.2p1 + 0.6p2 + 0.4p3 = p2
0.4p1 + 0.2p2 + 0.3p3 = p3

Solving (for example by gaussian elimination) gives p1 = 2t, p2 = 3t, and p3 = 2t,
where t is a parameter. Thus, the pricing must be such that the total output of the farming industry has the same value as the total output of the garment industry, whereas the total value of the housing industry must be 3/2 as much.
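As a numerical cross-check of Example 2.8.1 (our own sketch, assuming NumPy is available): the equilibrium condition Ep = p says that p is an eigenvector of E corresponding to the eigenvalue 1.

```python
import numpy as np

E = np.array([[0.4, 0.2, 0.3],
              [0.2, 0.6, 0.4],
              [0.4, 0.2, 0.3]])
w, V = np.linalg.eig(E)
p = np.real(V[:, np.argmin(np.abs(w - 1))])  # eigenvector for eigenvalue 1
p = p / p[0]                                  # scale so the farming price is 1
print(p)  # approximately [1.0, 1.5, 1.0], i.e. p = t(2, 3, 2)
```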
In general, suppose an economy has n industries, each of which uses some (possibly none) of the
production of every industry. We assume first that the economy is closed (that is, no product is exported
or imported) and that all product is used. Given two industries i and j, let eij denote the proportion of the total annual output of industry j that is consumed by industry i. Then E = [eij] is called the input-output matrix for the economy. Clearly,
0 ≤ ei j ≤ 1 for all i and j (2.12)
Moreover, all the output from industry j is used by some industry (the model is closed), so

e1j + e2j + ··· + enj = 1 for each j  (2.13)

This condition asserts that each column of E sums to 1. Matrices satisfying conditions (2.12) and (2.13)
are called stochastic matrices.
As in Example 2.8.1, let pi denote the price of the total annual production of industry i. Then pi is the
annual revenue of industry i. On the other hand, industry i spends ei1 p1 + ei2 p2 + · · · + ein pn annually for
the product it uses (ei j p j is the cost for product from industry j). The closed economic system is said to
be in equilibrium if the annual expenditure equals the annual revenue for each industry—that is, if
ei1 p1 + ei2 p2 + ··· + ein pn = pi for each i = 1, 2, ..., n
If we write p = [p1; p2; ...; pn], these equations can be written as the matrix equation
Ep = p
This is called the equilibrium condition, and the solutions p are called equilibrium price structures.
The equilibrium condition can be written as
(I − E)p = 0
which is a system of homogeneous equations for p. Moreover, there is always a nontrivial solution p.
Indeed, the column sums of I − E are all 0 (because E is stochastic), so the row-echelon form of I − E has
a row of zeros. In fact, more is true:
Theorem 2.8.1
Let E be any n × n stochastic matrix. Then there is a nonzero n × 1 vector p with nonnegative entries such that Ep = p. If all the entries of E are positive, the vector p can be chosen with all
entries positive.
Theorem 2.8.1 guarantees the existence of an equilibrium price structure for any closed input-output
system of the type discussed here. The proof is beyond the scope of this book.18
18 The interested reader is referred to P. Lancaster's Theory of Matrices (New York: Academic Press, 1969) or to E. Seneta's Non-negative Matrices (New York: Wiley, 1973).
Example 2.8.2

Find the equilibrium price structures for four industries if the input-output matrix is

E = [0.6 0.2 0.1 0.1; 0.3 0.4 0.2 0; 0.1 0.3 0.5 0.2; 0 0.1 0.2 0.7]

Solution. We must solve (I − E)p = 0; gaussian elimination gives p = t(44, 39, 51, 47), where t is a parameter.
The Open Model

We now assume that there is a demand for products in the open sector of the economy, which is the part of
the economy other than the producing industries (for example, consumers). Let di denote the total value of
the demand for product i in the open sector. If pi and ei j are as before, the value of the annual demand for
product i by the producing industries themselves is ei1 p1 + ei2 p2 + ··· + ein pn, so the total annual revenue pi of industry i breaks down as follows:

pi = (ei1 p1 + ei2 p2 + ··· + ein pn) + di for each i = 1, 2, ..., n

In matrix form this is

p = Ep + d
or
(I − E)p = d (2.14)
This is a system of linear equations for p, and we ask for a solution p with every entry nonnegative. Note
that every entry of E is between 0 and 1, but the column sums of E need not equal 1 as in the closed model.
Before proceeding, it is convenient to introduce a useful notation. If A = [aij] and B = [bij] are matrices of the same size, we write A > B if aij > bij for all i and j, and we write A ≥ B if aij ≥ bij for all i and j. Thus P ≥ 0 means that every entry of P is nonnegative. Note that A ≥ 0 and B ≥ 0 implies that
AB ≥ 0.
Now, given a demand matrix d ≥ 0, we look for a production matrix p ≥ 0 satisfying equation (2.14).
This certainly exists if I − E is invertible and (I − E)−1 ≥ 0. On the other hand, the fact that d ≥ 0 means
any solution p to equation (2.14) satisfies p ≥ Ep. Hence, the following theorem is not too surprising.
Theorem 2.8.2
Let E ≥ 0 be a square matrix. Then I − E is invertible and (I − E)−1 ≥ 0 if and only if there exists
a column p > 0 such that p > E p.
Heuristic Proof.
If (I − E)−1 ≥ 0, the existence of p > 0 with p > Ep is left as Exercise 2.8.11. Conversely, suppose such
a column p exists. Observe that
(I − E)(I + E + E 2 + · · · + E k−1 ) = I − E k
holds for all k ≥ 2. If we can show that every entry of E k approaches 0 as k becomes large then, intuitively,
the infinite matrix sum
U = I + E + E2 + · · ·
exists and (I − E)U = I. Since U ≥ 0, this does it. To show that E^k approaches 0, it suffices to show that Ep < µp for some number µ with 0 < µ < 1 (then E^k p < µ^k p for all k ≥ 1 by induction). The existence of µ is left as Exercise 2.8.12.
The condition p > Ep in Theorem 2.8.2 has a simple economic interpretation. If p is a production
matrix, entry i of Ep is the total value of all product used by industry i in a year. Hence, the condition
p > Ep means that, for each i, the value of product produced by industry i exceeds the value of the product
it uses. In other words, each industry runs at a profit.
Example 2.8.3

If E = [0.6 0.2 0.3; 0.1 0.4 0.2; 0.2 0.5 0.1], show that I − E is invertible and (I − E)⁻¹ ≥ 0.

Solution. Take p = (3, 2, 2)ᵀ. Then Ep = (2.8, 1.5, 1.8)ᵀ, so p > Ep. Hence Theorem 2.8.2 applies.

If p0 = (1, 1, 1)ᵀ, the entries of Ep0 are the row sums of E. Hence p0 > Ep0 holds if the row sums of E are all less than 1. This proves the first of the following useful facts (the second is Exercise 2.8.10).
Corollary 2.8.1

Let E ≥ 0 be a square matrix. In each case, I − E is invertible and (I − E)⁻¹ ≥ 0:

1. All row sums of E are less than 1.

2. All column sums of E are less than 1.
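A quick numerical check of these facts on the matrix of Example 2.8.3 (our own sketch, assuming NumPy; the demand vector d is our choice for illustration):

```python
import numpy as np

E = np.array([[0.6, 0.2, 0.3],
              [0.1, 0.4, 0.2],
              [0.2, 0.5, 0.1]])
M = np.linalg.inv(np.eye(3) - E)
print(np.all(M >= 0))               # True: (I - E)^{-1} >= 0

d = np.array([10.0, 10.0, 10.0])    # a sample open-sector demand
p = M @ d                           # production meeting the demand
print(np.all(p >= 0))                               # True: p >= 0
print(np.allclose((np.eye(3) - E) @ p, d))          # True: (I - E)p = d
```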
Exercise 2.8.1 Find the possible equilibrium price structures when the input-output matrix is:

d. [0.5 0 0.1 0.1; 0.2 0.7 0 0.1; 0.1 0.2 0.8 0.2; 0.2 0.1 0.1 0.6]

Exercise 2.8.2 Three industries A, B, and C are such that all the output of A is used by B, all the output of B is used by C, and all the output of C is used by A. Find the possible equilibrium price structures.

Exercise 2.8.3 Find the possible equilibrium price structures for three industries where the input-output matrix is [1 0 0; 0 0 1; 0 1 0]. Discuss why there are two parameters here.

Exercise 2.8.4 Prove Theorem 2.8.1 for a 2 × 2 stochastic matrix E by first writing it in the form E = [a b; 1−a 1−b], where 0 ≤ a ≤ 1 and 0 ≤ b ≤ 1.

Exercise 2.8.5 If E is an n × n stochastic matrix and c is an n × 1 matrix, show that the sum of the entries of c equals the sum of the entries of the n × 1 matrix Ec.

Exercise 2.8.6 Let W = [1 1 1 ··· 1]. Let E and F denote n × n matrices with nonnegative entries.

a. Show that E is a stochastic matrix if and only if WE = W.

Exercise 2.8.7 Give an example of a matrix E with nonnegative entries such that:

b. I − E has an inverse but not all entries of (I − E)⁻¹ are nonnegative.

Exercise 2.8.8 If E is a 2 × 2 matrix with entries between 0 and 1, show that I − E is invertible and (I − E)⁻¹ ≥ 0 if and only if tr E < 1 + det E. Here, if E = [a b; c d], then tr E = a + d and det E = ad − bc.

Exercise 2.8.9 In each case show that I − E is invertible and (I − E)⁻¹ ≥ 0.

a. [0.6 0.5 0.1; 0.1 0.3 0.3; 0.2 0.1 0.4]   b. [0.7 0.1 0.3; 0.2 0.5 0.2; 0.1 0.1 0.4]

c. [0.6 0.2 0.1; 0.3 0.4 0.2; 0.2 0.5 0.1]   d. [0.8 0.1 0.1; 0.3 0.1 0.2; 0.3 0.3 0.2]

Exercise 2.8.10 Prove that (1) implies (2) in the Corollary to Theorem 2.8.2.

Exercise 2.8.11 If (I − E)⁻¹ ≥ 0, find p > 0 such that p > Ep.

Exercise 2.8.12 If Ep < p where E ≥ 0 and p > 0, find a number µ such that Ep < µp and 0 < µ < 1. [Hint: If Ep = (q1, ..., qn)ᵀ and p = (p1, ..., pn)ᵀ, take any number µ where max{q1/p1, ..., qn/pn} < µ < 1.]
2.9 An Application to Markov Chains
Many natural phenomena progress through various stages and can be in a variety of states at each stage.
For example, the weather in a given city progresses day by day and, on any given day, may be sunny or
rainy. Here the states are “sun” and “rain,” and the weather progresses from one state to another in daily
stages. Another example might be a football team: The stages of its evolution are the games it plays, and
the possible states are “win,” “draw,” and “loss.”
The general setup is as follows: A real (or conceptual) "system" is run, generating a sequence of outcomes. The system evolves through a series of "stages," and at any stage it can be in any one of a finite number of "states." At any given stage, the state to which it will go at the next stage may, in general, depend on the past and present history of the system—that is, on the sequence of states it has occupied to date. A Markov chain is such an evolving system wherein the state to which it will go at the next stage depends only on its present state and not on its earlier history.
Even in the case of a Markov chain, the state the system will occupy at any stage is determined only
in terms of probabilities. In other words, chance plays a role. For example, if a football team wins a
particular game, we do not know whether it will win, draw, or lose the next game. On the other hand, we
may know that the team tends to persist in winning streaks; for example, if it wins one game it may win the next game 1/2 of the time, lose 4/10 of the time, and draw 1/10 of the time. These fractions are called the probabilities of these various possibilities. Similarly, if the team loses, it may lose the next game with probability 1/2 (that is, half the time), win with probability 1/4, and draw with probability 1/4. The probabilities
of the various outcomes after a drawn game will also be known.
We shall treat probabilities informally here: The probability that a given event will occur is the long-
run proportion of the time that the event does indeed occur. Hence, all probabilities are numbers between
0 and 1. A probability of 0 means the event is impossible and never occurs; events with probability 1 are
certain to occur.
If a Markov chain is in a particular state, the probabilities that it goes to the various states at the next
stage of its evolution are called the transition probabilities for the chain, and they are assumed to be
known quantities. To motivate the general conditions that follow, consider the following simple example.
Here the system is a man, the stages are his successive lunches, and the states are the two restaurants he
chooses.
Example 2.9.1

A man always eats lunch at one of two restaurants, A and B. He never eats at A twice in a row. However, if he eats at B, he is three times as likely to eat at B next time as at A. Initially, he is equally likely to eat at either restaurant.

a. What is the probability that he eats at A on the third day after the initial one?

b. What proportion of his lunches does he eat at A?

19 The name honours Andrei Andreyevich Markov (1856–1922) who was a professor at the university in St. Petersburg, Russia.
Solution. The table of transition probabilities follows. The A column indicates that if he eats at A
on one day, he never eats there again on the next day and so is certain to go to B.
              Present Lunch
                A      B
Next   A        0     0.25
Lunch  B        1     0.75

The B column shows that, if he eats at B on one day, he will eat there on the next day 3/4 of the time and switch to A only 1/4 of the time.
The restaurant he visits on a given day is not determined. The most that we can expect is to know
the probability
thathe will visit A or B on that day.
Let sm = [s1(m); s2(m)] denote the state vector for day m. Here s1(m) denotes the probability that he eats at A on day m, and s2(m) is the probability that he eats at B on day m. It is convenient to let s0 correspond to the initial day. Because he is equally likely to eat at A or B on that initial day, s1(0) = 0.5 and s2(0) = 0.5, so s0 = [0.5; 0.5]. Now let

P = [0 0.25; 1 0.75]
denote the transition matrix. We claim that the relationship
sm+1 = Psm
holds for all integers m ≥ 0. This will be derived later; for now, we use it as follows to successively
compute s1 , s2 , s3 , . . . .
s1 = Ps0 = [0 0.25; 1 0.75][0.5; 0.5] = [0.125; 0.875]
s2 = Ps1 = [0 0.25; 1 0.75][0.125; 0.875] = [0.21875; 0.78125]
s3 = Ps2 = [0 0.25; 1 0.75][0.21875; 0.78125] = [0.1953125; 0.8046875]
Hence, the probability that his third lunch (after the initial one) is at A is approximately 0.195,
whereas the probability that it is at B is 0.805. If we carry these calculations on, the next state
vectors are (to five figures):

s4 = [0.20117; 0.79883]   s5 = [0.19971; 0.80029]
s6 = [0.20007; 0.79993]   s7 = [0.19998; 0.80002]

Moreover, as m increases the entries of sm get closer and closer to the corresponding entries of [0.2; 0.8]. Hence, in the long run, he eats 20% of his lunches at A and 80% at B.
Example 2.9.1 incorporates most of the essential features of all Markov chains. The general model is as follows: The system evolves through various stages and at each stage can be in exactly one of n distinct states. It progresses through a sequence of states as time goes on. If a Markov chain is in state j at a particular stage of its development, the probability pij that it goes to state i at the next stage is called the transition probability. The n × n matrix P = [pij] is called the transition matrix for the Markov chain. The situation is depicted graphically in the diagram.

[Diagram: from a present state j, arrows labelled p1j, p2j, ..., pnj lead to the next states 1, 2, ..., n.]

We make one important assumption about the transition matrix P = [pij]: It does not depend on which stage the process is in. This assumption means that the transition probabilities are independent of time—that is, they do not change as time goes on. It is this assumption that distinguishes Markov chains in the literature of this subject.
Example 2.9.2

Suppose the transition matrix of a three-state Markov chain is

P = [p11 p12 p13; p21 p22 p23; p31 p32 p33] = [0.3 0.1 0.6; 0.5 0.9 0.2; 0.2 0.0 0.2]

where column j lists the probabilities of leaving present state j for each next state. If, for example, the system is in state 2, then column 2 lists the probabilities of where it goes next. Thus, the probability is p12 = 0.1 that it goes from state 2 to state 1, and the probability is p22 = 0.9 that it goes from state 2 to state 2. The fact that p32 = 0 means that it is impossible for it to go from state 2 to state 3 at the next stage.
If the system is in state j at some stage of its evolution, the transition probabilities p1 j , p2 j , . . . , pn j
represent the fraction of the time that the system will move to state 1, state 2, . . . , state n, respectively, at
the next stage. We assume that it has to go to some state at each transition, so the sum of these probabilities
is 1:
p1 j + p2 j + · · · + pn j = 1 for each j
Thus, the columns of P all sum to 1 and the entries of P lie between 0 and 1. Hence P is called a stochastic
matrix.
As in Example 2.9.1, we introduce the following notation: Let si(m) denote the probability that the system is in state i at stage m of its evolution. The n × 1 matrix sm = [s1(m); s2(m); ...; sn(m)] is called the state vector for stage m; note that its entries must sum to 1 because the system is in some state at each stage.
Theorem 2.9.1
Let P be the transition matrix for an n-state Markov chain. If sm is the state vector at stage m, then
sm+1 = Psm
for each m = 0, 1, 2, . . . .
Heuristic Proof. Suppose that the Markov chain has been run N times, each time starting with the same initial state vector. Recall that pij is the proportion of the time the system goes from state j at some stage to state i at the next stage, whereas si(m) is the proportion of the time it is in state i at stage m. Hence si(m+1) N is (approximately) the number of times the system is in state i at stage m + 1. We are going to calculate this number another way. The system got to state i at stage m + 1 through some other state (say state j) at stage m. The number of times it was in state j at that stage is (approximately) sj(m) N, so the number of times it got to state i via state j is pij(sj(m) N). Summing over j gives the number of times the system is in state i (at stage m + 1). This is the number we calculated before, so

si(m+1) N = pi1 s1(m) N + pi2 s2(m) N + ··· + pin sn(m) N

Dividing by N gives si(m+1) = pi1 s1(m) + pi2 s2(m) + ··· + pin sn(m) for each i, and this can be expressed as the matrix equation sm+1 = Psm.
If the initial probability vector s0 and the transition matrix P are given, Theorem 2.9.1 gives s1 , s2 , s3 , . . . ,
one after the other, as follows:
s1 = Ps0
s2 = Ps1
s3 = Ps2
  ...
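This recursion is a one-line loop in code. The sketch below is our own illustration (assuming NumPy), reproducing the state vectors of Example 2.9.1:

```python
import numpy as np

P = np.array([[0.0, 0.25],
              [1.0, 0.75]])   # transition matrix of Example 2.9.1
s = np.array([0.5, 0.5])      # initial state vector s0
for m in range(1, 8):
    s = P @ s                 # Theorem 2.9.1: s_{m+1} = P s_m
    print(m, s)
# s3 = [0.1953125, 0.8046875]; the vectors approach [0.2, 0.8]
```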
Example 2.9.3
A wolf pack always hunts in one of three regions R1 , R2 , and R3 . Its hunting habits are as follows:
1. If it hunts in some region one day, it is as likely as not to hunt there again the next day.

2. If it hunts in R1, it never hunts in R2 the next day.

3. If it hunts in R2 or R3, it is equally likely to hunt in each of the other regions the next day.
If the pack hunts in R1 on Monday, find the probability that it hunts there on Thursday.
Solution. The stages of this process are the successive days; the states are the three regions. The
transition matrix P is determined as follows (see the table): The first habit asserts that p11 = p22 = p33 = 1/2. Now column 1 displays what happens when the pack starts in R1: It never goes to state 2, so p21 = 0 and, because the column must sum to 1, p31 = 1/2. Column 2 describes what happens if it starts in R2: p22 = 1/2 and p12 and p32 are equal (by habit 3), so p12 = p32 = 1/4 because the column sum must equal 1. Column 3 is filled in a similar way.

      R1    R2    R3
R1   1/2   1/4   1/4
R2    0    1/2   1/4
R3   1/2   1/4   1/2
Now let Monday be the initial stage. Then s0 = [1; 0; 0] because the pack hunts in R1 on that day. Then s1, s2, and s3 describe Tuesday, Wednesday, and Thursday, respectively, and we compute them using Theorem 2.9.1:

s1 = Ps0 = [1/2; 0; 1/2]   s2 = Ps1 = [3/8; 1/8; 4/8]   s3 = Ps2 = [11/32; 6/32; 15/32]

Hence, the probability that the pack hunts in Region R1 on Thursday is 11/32.
Another phenomenon that was observed in Example 2.9.1 can be expressed in general terms. The state vectors s0, s1, s2, ... were calculated in that example and were found to "approach" s = [0.2; 0.8]. This
means that the first component of sm becomes and remains very close to 0.2 as m becomes large, whereas
the second component gets close to 0.8 as m increases. When this is the case, we say that sm converges to
s. For large m, then, there is very little error in taking sm = s, so the long-term probability that the system
is in state 1 is 0.2, whereas the probability that it is in state 2 is 0.8. In Example 2.9.1, enough state vectors
were computed for the limiting vector s to be apparent. However, there is a better way to do this that works
in most cases.
Suppose P is the transition matrix of a Markov chain, and assume that the state vectors sm converge to
a limiting vector s. Then sm is very close to s for sufficiently large m, so sm+1 is also very close to s. Thus,
the equation sm+1 = Psm from Theorem 2.9.1 is closely approximated by
s = Ps
so it is not surprising that s should be a solution to this matrix equation. Moreover, it is easily solved
because it can be written as a system of homogeneous linear equations
(I − P)s = 0
A stochastic matrix P is called regular if some power P^m of P has every entry greater than 0. The matrix P = [0 0.25; 1 0.75] of Example 2.9.1 is regular (in this case, each entry of P² is positive), and the general theorem is as follows:
Theorem 2.9.2

Let P be the transition matrix of a Markov chain and assume that P is regular. Then there is a unique column matrix s satisfying the following conditions:

1. Ps = s.

2. The entries of s are positive and sum to 1.

Moreover, condition 1 can be written as

(I − P)s = 0

and so gives a homogeneous system of linear equations for s. Finally, the sequence of state vectors s0, s1, s2, ... converges to s in the sense that if m is large enough, each entry of sm is closely approximated by the corresponding entry of s. The matrix s is called the steady-state vector for the Markov chain.
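Numerically, s can be found as the eigenvector of P for the eigenvalue 1, scaled so that its entries sum to 1 (condition 2). A minimal sketch, ours and not the text's, assuming NumPy; the function name is our own:

```python
import numpy as np

def steady_state(P):
    """Steady-state vector of a regular stochastic matrix P:
    the eigenvector for eigenvalue 1, scaled so its entries sum to 1."""
    w, V = np.linalg.eig(P)
    s = np.real(V[:, np.argmin(np.abs(w - 1))])
    return s / s.sum()

P = np.array([[0.0, 0.25],
              [1.0, 0.75]])
print(steady_state(P))  # [0.2, 0.8], as found in Example 2.9.1
```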
Example 2.9.4
A man eats one of three soups—beef, chicken, and vegetable—each day. He never eats the same
soup two days in a row. If he eats beef soup on a certain day, he is equally likely to eat each of the
others the next day; if he does not eat beef soup, he is twice as likely to eat it the next day as the
alternative.
a. If he has beef soup one day, what is the probability that he has it again two days later?
b. What are the long-run probabilities that he eats each of the three soups?
Solution. The states here are B, C, and V , the three soups. The transition matrix P is given in the
table. (Recall that, for each state, the corresponding column lists the probabilities for the next
state.)
      B     C     V
B     0    2/3   2/3
C    1/2    0    1/3
V    1/2   1/3    0
If he has beef soup initially, then the initial state vector is
1
s0 = 0
0
Then two days later the state vector is s2 . If P is the transition matrix, then
0 4
1 1
s1 = Ps0 = 2 1 , s2 = Ps1 = 6 1
1 1
so he eats beef soup two days later with probability 23 . This answers (a.) and also shows that he
eats chicken and vegetable soup each with probability 16 .
20 The interested reader can find an elementary proof in J. Kemeny, H. Mirkil, J. Snell, and G. Thompson, Finite Mathematical Structures (Englewood Cliffs, N.J.: Prentice-Hall, 1959).
To find the long-run probabilities, we must find the steady-state vector s. Theorem 2.9.2 applies
because P is regular (P2 has positive entries), so s satisfies Ps = s. That is, (I − P)s = 0 where
I − P = (1/6)[6 −4 −4; −3 6 −2; −3 −2 6]

The solution is s = [4t; 3t; 3t], where t is a parameter, and we use s = [0.4; 0.3; 0.3] because the entries of
s must sum to 1. Hence, in the long run, he eats beef soup 40% of the time and eats chicken soup
and vegetable soup each 30% of the time.
Exercise 2.9.1 Which of the following stochastic matrices is regular?

a. [0 0 1/2; 1 0 1/2; 0 1 0]   b. [1/2 0 1/3; 1/4 1 1/3; 1/4 0 1/3]

a. What proportion of his time does he spend in A, in B, and in C?

b. If he hunts in A on Monday (C on Monday), what is the probability that he will hunt in B on Thursday?

Exercise 2.9.5 The prime minister says she will call an election. This gossip is passed from person to person with a probability p ≠ 0 that the information is passed incorrectly at any stage. Assume that when a person hears the gossip he or she passes it to one person who does not know. Find the long-term probability that a person will hear that there is going to be an election.

Exercise 2.9.6 John makes it to work on time one Monday out of four. On other work days his behaviour is as follows: If he is late one day, he is twice as likely to come to work on time the next day as to be late. If he is on time one day, he is as likely to be late as not the next day. Find the probability of his being late and that of his being on time Wednesdays.

Exercise 2.9.7 Suppose you have 1¢ and match coins with a friend. At each match you either win or lose 1¢ with equal probability. If you go broke or ever get 4¢, you quit. Assume your friend never quits. If the states are 0, 1, 2, 3, and 4 representing your wealth, show that the corresponding transition matrix P is not regular. Find the probability that you will go broke after 3 matches.

Exercise 2.9.8 A mouse is put into a maze of compartments, as in the diagram. Assume that he always leaves any compartment he enters and that he is equally likely to take any tunnel entry.

[Diagram: a maze of numbered compartments connected by tunnels.]

Exercise 2.9.9 If a stochastic matrix has a 1 on its main diagonal, show that it cannot be regular. Assume it is not 1 × 1.

Exercise 2.9.10 If sm is the stage-m state vector for a Markov chain, show that sm+k = P^k sm holds for all m ≥ 1 and k ≥ 1 (where P is the transition matrix).

Exercise 2.9.11 A stochastic matrix is doubly stochastic if all the row sums also equal 1. Find the steady-state vector for a doubly stochastic matrix.

Exercise 2.9.12 Consider the 2 × 2 stochastic matrix P = [1−p q; p 1−q], where 0 < p < 1 and 0 < q < 1.

a. Show that (1/(p+q))[q; p] is the steady-state vector for P.

b. Show that P^m converges to the matrix (1/(p+q))[q q; p p] by first verifying inductively that

P^m = (1/(p+q))[q q; p p] + ((1−p−q)^m/(p+q))[p −q; −p q]

for m = 1, 2, .... (It can be shown that the sequence of powers P, P², P³, ... of any regular transition matrix converges to the matrix each of whose columns equals the steady-state vector for P.)
Supplementary Exercises for Chapter 2

Exercise 2.1 Solve for the matrix X if:

a. PXQ = R;   b. XP = S;

where P = [1 0; 2 −1; 0 3], Q = [1 1 −1; 2 0 3], R = [−1 1 −4; −4 0 −6; 6 6 −6], S = [1 6; 3 1].

Exercise 2.2 Consider p(X) = X³ − 5X² + 11X − 4I.

Exercise 2.6 Let Ipq denote the n × n matrix with (p, q)-entry equal to 1 and all other entries 0. Show that:

a. In = I11 + I22 + ··· + Inn.

b. Ipq Irs = Ips if q = r, and Ipq Irs = 0 if q ≠ r.

c. If A = [aij] is n × n, then A = Σᵢ Σⱼ aij Iij (sums over i, j = 1, ..., n).

d. If A = [aij], then Ipq A Irs = aqr Ips for all p, q, r, and s.

Exercise 2.7 A matrix of the form aIn, where a is a number, is called an n × n scalar matrix.

Exercise 2.11 Let E and F be elementary matrices obtained from the identity matrix by adding multiples of row k to rows p and q. If k ≠ p and k ≠ q, show that EF = FE.

Exercise 2.12 If A is a 2 × 2 real matrix, A² = A and Aᵀ = A, show that either A is one of [0 0; 0 0], [1 0; 0 0], [0 0; 0 1], [1 0; 0 1], or A = [a b; b 1−a] where a² + b² = a, −1/2 ≤ b ≤ 1/2 and b ≠ 0.

Exercise 2.13 Show that the following are equivalent for matrices P, Q:

1. P, Q, and P + Q are all invertible and (P + Q)⁻¹ = P⁻¹ + Q⁻¹.

2. P is invertible and Q = PG where G² + G + I = 0.
3. Determinants and Diagonalization
With each square matrix we can calculate a number, called the determinant of the matrix, which tells us
whether or not the matrix is invertible. In fact, determinants can be used to give a formula for the inverse
of a matrix. They also arise in calculating certain numbers (called eigenvalues) associated with the matrix.
These eigenvalues are essential to a technique called diagonalization that is used in many applications
where it is desired to predict the future behaviour of a system. For example, we use it to predict whether a
species will become extinct.
Determinants were first studied by Leibnitz in 1696, and the term “determinant” was first used in
1801 by Gauss in his Disquisitiones Arithmeticae. Determinants are much older than matrices (which
were introduced by Cayley in 1878) and were used extensively in the eighteenth and nineteenth centuries,
primarily because of their significance in geometry (see Section 4.4). Although they are somewhat less
important today, determinants still play a role in the theory and application of matrix algebra.
3.1 The Cofactor Expansion

In Section 2.4 we defined the determinant of a 2 × 2 matrix A = [a b; c d] by det A = ad − bc, and showed (in Example 2.4.4) that A has an inverse if and only if det A ≠ 0. One objective of this chapter is to do this for any square matrix A. There is no difficulty for 1 × 1 matrices: If A = [a], we define det A = det [a] = a and note that A is invertible if and only if a ≠ 0.
If A is 3 × 3 and invertible, we look for a suitable definition of det A by trying to carry A to the identity matrix by row operations. The first column is not zero (A is invertible); suppose the (1, 1)-entry a is not zero. Then row operations give

A = [a b c; d e f; g h i] → [a b c; ad ae af; ag ah ai] → [a b c; 0 ae−bd af−cd; 0 ah−bg ai−cg] = [a b c; 0 u af−cd; 0 v ai−cg]

where u = ae − bd and v = ah − bg. Since A is invertible, one of u and v is nonzero (by Example 2.4.11); suppose that u ≠ 0. Then the reduction proceeds

A → [a b c; 0 u af−cd; 0 v ai−cg] → [a b c; 0 u af−cd; 0 uv u(ai−cg)] → [a b c; 0 u af−cd; 0 0 w]
1 Determinants are commonly written |A| = det A using vertical bars. We will use both notations.
where w = u(ai − cg) − v(af − cd) = a(aei + bfg + cdh − ceg − afh − bdi). We define

det A = aei + bfg + cdh − ceg − afh − bdi

which can be rewritten as

det A = a det [e f; h i] − b det [d f; g i] + c det [d e; g h]
This last expression can be described as follows: To compute the determinant of a 3 × 3 matrix A, multiply
each entry in row 1 by a sign times the determinant of the 2 × 2 matrix obtained by deleting the row and
column of that entry, and add the results. The signs alternate down row 1, starting with +. It is this
observation that we generalize below.
Example 3.1.1

det [2 3 7; −4 0 6; 1 5 0] = 2 det [0 6; 5 0] − 3 det [−4 6; 1 0] + 7 det [−4 0; 1 5]
                          = 2(−30) − 3(−6) + 7(−20)
                          = −182
This suggests an inductive method of defining the determinant of any square matrix in terms of de-
terminants of matrices one size smaller. The idea is to define determinants of 3 × 3 matrices in terms of
determinants of 2 × 2 matrices, then we do 4 × 4 matrices in terms of 3 × 3 matrices, and so on.
To describe this, we need some terminology. Assume that determinants of (n − 1) × (n − 1) matrices have been defined. Given the n × n matrix A, let Aij denote the (n − 1) × (n − 1) matrix obtained from A by deleting row i and column j. Then the (i, j)-cofactor cij(A) is the scalar defined by cij(A) = (−1)^(i+j) det (Aij). Here (−1)^(i+j) is called the sign of the (i, j)-position. The sign of a position is clearly 1 or −1, and the following diagram is useful for remembering it:
+ − + − ···
− + − + ···
+ − + − ···
− + − + ···
.. .. .. ..
. . . .
Note that the signs alternate along each row and column with + in the upper left corner.
Example 3.1.2

Find the cofactors of positions (1, 2), (3, 1), and (2, 3) in the following matrix.

A = [3 −1 6; 5 2 7; 8 9 4]

Solution. Here A12 = [5 7; 8 4] is the matrix that remains when row 1 and column 2 are deleted. The sign of position (1, 2) is (−1)^(1+2) = −1 (this is also the (1, 2)-entry in the sign diagram), so the (1, 2)-cofactor is

c12(A) = (−1)^(1+2) det [5 7; 8 4] = (−1)(5 · 4 − 7 · 8) = (−1)(−36) = 36

Turning to position (3, 1), we delete row 3 and column 1 to get A31 = [−1 6; 2 7]; the sign is (−1)^(3+1) = 1, so c31(A) = det [−1 6; 2 7] = −7 − 12 = −19. Similarly, c23(A) = (−1)^(2+3) det [3 −1; 8 9] = −(27 + 8) = −35. Clearly other cofactors can be found—there are nine in all, one for each position in the matrix.
With this, the determinant of an n × n matrix A = [aij] is defined by

det A = a11 c11(A) + a12 c12(A) + ··· + a1n c1n(A)

called the cofactor expansion of det A along row 1. It asserts that det A can be computed by multiplying the entries of row 1 by the corresponding cofactors, and adding the results. The astonishing thing is that det A can be computed by taking the cofactor expansion along any row or column: Simply multiply each entry of that row or column by the corresponding cofactor and add.
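The recursive definition translates directly into code. Here is a minimal Python sketch (ours, not the text's, assuming NumPy; the function name is our own) that expands along row 1 at every level. It is for illustration only: its cost grows factorially with n, which is why the row-reduction techniques developed later are used in practice.

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along row 1 (illustration only)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # A_{1j}: delete row 1 and column j+1
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)  # sign * entry * det(A_{1j})
    return total

A = np.array([[2, 3, 7], [-4, 0, 6], [1, 5, 0]])
print(det_cofactor(A))  # -182, as in Example 3.1.1
```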
Example 3.1.3

Compute the determinant of A = [3 4 5; 1 7 2; 9 8 −6].

Solution. The cofactor expansion along the first row is

det A = 3 det [7 2; 8 −6] − 4 det [1 2; 9 −6] + 5 det [1 7; 9 8] = 3(−58) − 4(−24) + 5(−55) = −353

Note that the signs alternate along the row (indeed along any row or column). Now we compute det A by expanding along the first column.

det A = 3 det [7 2; 8 −6] − det [4 5; 8 −6] + 9 det [4 5; 7 2] = 3(−58) − (−64) + 9(−27) = −353

The reader is invited to verify that det A can be computed by expanding along any other row or column.
The fact that the cofactor expansion along any row or column of a matrix A always gives the same
result (the determinant of A) is remarkable, to say the least. The choice of a particular row or column can
simplify the calculation.
2 The cofactor expansion is due to Pierre Simon de Laplace (1749–1827), who discovered it in 1772 as part of a study of
linear differential equations. Laplace is primarily remembered for his work in astronomy and applied mathematics.
Example 3.1.4

Compute det A where A = [3 0 0 0; 5 1 2 0; 2 6 0 −1; −6 3 1 0].

Solution. The first choice we must make is which row or column to use in the cofactor expansion. The expansion involves multiplying entries by cofactors, so the work is minimized when the row or column contains as many zero entries as possible. Row 1 is a best choice in this matrix (column 4 would do as well), and the expansion is

det A = 3 c11(A) + 0 + 0 + 0 = 3 det [1 2 0; 6 0 −1; 3 1 0]

This is the first stage of the calculation, and we have succeeded in expressing the determinant of the 4 × 4 matrix A in terms of the determinant of a 3 × 3 matrix. The next stage involves this 3 × 3 matrix. Again, we can use any row or column for the cofactor expansion. The third column is preferred (with two zeros), so

det A = 3 ( 0 det [6 0; 3 1] − (−1) det [1 2; 3 1] + 0 det [1 2; 6 0] )
      = 3[0 + 1(−5) + 0]
      = −15
Computing the determinant of a matrix A can be tedious. For example, if A is a 4 × 4 matrix, the
cofactor expansion along any row or column involves calculating four cofactors, each of which involves
the determinant of a 3 × 3 matrix. And if A is 5 × 5, the expansion involves five determinants of 4 × 4
matrices! There is a clear need for some techniques to cut down the work.3
The motivation for the method is the observation (see Example 3.1.4) that calculating a determinant
is simplified a great deal when a row or column consists mostly of zeros. (In fact, when a row or column
consists entirely of zeros, the determinant is zero—simply expand along that row or column.)
Recall next that one method of creating zeros in a matrix is to apply elementary row operations to it.
Hence, a natural question to ask is what effect such a row operation has on the determinant of the matrix.
It turns out that the effect is easy to determine and that elementary column operations can be used in the
same way. These observations lead to a technique for evaluating determinants that greatly reduces the labour involved.

3 If A = [a b c; d e f; g h i] we can calculate det A by considering [a b c a b; d e f d e; g h i g h] obtained from A by adjoining columns 1 and 2 on the right. Then det A = aei + bfg + cdh − ceg − afh − bdi, where the positive terms aei, bfg, and cdh are the products down and to the right starting at a, b, and c, and the negative terms ceg, afh, and bdi are the products down and to the left starting at c, a, and b. Warning: This rule does not apply to n × n matrices where n > 3 or n = 2.
Theorem 3.1.2

Let A denote an n × n matrix.

1. If A has a row or column of zeros, det A = 0.

2. If two distinct rows (or columns) of A are interchanged, the determinant of the resulting matrix is − det A.

3. If a row (or column) of A is multiplied by a constant u, the determinant of the resulting matrix is u det A.

4. If two distinct rows (or columns) of A are identical, det A = 0.

5. If a multiple of one row of A is added to a different row (or if a multiple of a column is added to a different column), the determinant of the resulting matrix is det A.
We prove property 5 for rows; the argument for columns is similar. Let B be obtained from A by adding u times row p to row q (where p ≠ q), so that row q of B has entries aqj + uapj. The cofactors of these elements in B are the same as in A (they do not involve row q): in symbols, cqj(B) = cqj(A) for each j. Hence, expanding B along row q gives

det B = (aq1 + uap1)cq1(A) + (aq2 + uap2)cq2(A) + ··· + (aqn + uapn)cqn(A)
      = [aq1cq1(A) + aq2cq2(A) + ··· + aqncqn(A)] + u[ap1cq1(A) + ap2cq2(A) + ··· + apncqn(A)]
      = det A + u det C

where C is the matrix obtained from A by replacing row q by row p (and both expansions are along row q). Because rows p and q of C are equal, det C = 0 by property 4. Hence, det B = det A, as required. As before, a similar proof holds for columns.
To illustrate Theorem 3.1.2, consider the following determinants.

det [3 −1 2; 2 5 1; 0 0 0] = 0   (because the last row consists of zeros)

det [3 −1 5; 2 8 7; 1 2 −1] = − det [5 −1 3; 7 8 2; −1 2 1]   (because two columns are interchanged)

det [8 1 2; 3 0 9; 1 2 −1] = 3 det [8 1 2; 1 0 3; 1 2 −1]   (because the second row of the matrix on the left is 3 times the second row of the matrix on the right)

det [2 1 2; 4 0 4; 1 3 1] = 0   (because two columns are identical)

det [2 5 2; −1 2 9; 3 1 1] = det [0 9 20; −1 2 9; 3 1 1]   (because twice the second row of the matrix on the left was added to the first row)
The following four examples illustrate how Theorem 3.1.2 is used to evaluate determinants.
Example 3.1.5
Evaluate det A when A = [1 −1 3; 1 0 −1; 2 1 6].

Solution. The matrix does have zero entries, so expansion along (say) the second row would involve somewhat less work. However, a column operation can be used to get a zero in position (2, 3)—namely, add column 1 to column 3. Because this does not change the value of the determinant, we obtain

det A = det [1 −1 3; 1 0 −1; 2 1 6] = det [1 −1 4; 1 0 0; 2 1 8] = − det [−1 4; 1 8] = 12

where the last determinant was obtained by expanding along row 2.
Example 3.1.6

If det [a b c; p q r; x y z] = 6, evaluate det A where A = [a+x b+y c+z; 3x 3y 3z; −p −q −r].

Solution. First take common factors 3 and −1 out of rows 2 and 3:

det A = 3(−1) det [a+x b+y c+z; x y z; p q r]

Now subtract the second row from the first and interchange the last two rows.

det A = −3 det [a b c; x y z; p q r] = 3 det [a b c; p q r; x y z] = 3 · 6 = 18
The determinant of a matrix is a sum of products of its entries. In particular, if these entries are
polynomials in x, then the determinant itself is a polynomial in x. It is often of interest to determine which
values of x make the determinant zero, so it is very useful if the determinant is given in factored form.
Theorem 3.1.2 can help.
Example 3.1.7

Find the values of x for which det A = 0, where A = [1 x x; x 1 x; x x 1].

Solution. To evaluate det A, first subtract x times row 1 from rows 2 and 3.

det A = det [1 x x; x 1 x; x x 1] = det [1 x x; 0 1−x² x−x²; 0 x−x² 1−x²] = det [1−x² x−x²; x−x² 1−x²]

At this stage we could simply evaluate the determinant (the result is 2x³ − 3x² + 1). But then we would have to factor this polynomial to find the values of x that make it zero. However, this factorization can be obtained directly by first factoring each entry in the determinant and taking a common factor of (1 − x) from each row.

det A = det [(1−x)(1+x) x(1−x); x(1−x) (1−x)(1+x)] = (1−x)² det [1+x x; x 1+x] = (1−x)²(2x + 1)

Hence, det A = 0 means x = 1 or x = −1/2.
Example 3.1.8

If a1, a2, and a3 are given show that

det [1 a1 a1²; 1 a2 a2²; 1 a3 a3²] = (a3 − a1)(a3 − a2)(a2 − a1)

Solution. Begin by subtracting row 1 from rows 2 and 3, and then expand along column 1:

det [1 a1 a1²; 1 a2 a2²; 1 a3 a3²] = det [1 a1 a1²; 0 a2−a1 a2²−a1²; 0 a3−a1 a3²−a1²] = det [a2−a1 a2²−a1²; a3−a1 a3²−a1²]

Now (a2 − a1) and (a3 − a1) are common factors in rows 1 and 2, respectively, so

det [1 a1 a1²; 1 a2 a2²; 1 a3 a3²] = (a2 − a1)(a3 − a1) det [1 a2+a1; 1 a3+a1] = (a2 − a1)(a3 − a1)(a3 − a2)
The matrix in Example 3.1.8 is called a Vandermonde matrix, and the formula for its determinant can be
generalized to the n × n case (see Theorem 3.2.7).
If A is an n × n matrix, forming uA means multiplying every row of A by u. Applying property 3 of
Theorem 3.1.2, we can take the common factor u out of each row and so obtain the following useful result.
Theorem 3.1.3
If A is an n × n matrix, then det (uA) = un det A for any number u.
The next example displays a type of matrix whose determinant is easy to compute.

Example 3.1.9

Evaluate det A if A = [a 0 0 0; u b 0 0; v w c 0; x y z d].

Solution. Expand along row 1 to get det A = a det [b 0 0; w c 0; y z d]. Now expand this along the top row to get det A = ab det [c 0; z d] = abcd, the product of the main diagonal entries.
A square matrix is called a lower triangular matrix if all entries above the main diagonal are zero
(as in Example 3.1.9). Similarly, an upper triangular matrix is one for which all entries below the main
diagonal are zero. A triangular matrix is one that is either upper or lower triangular. Theorem 3.1.4
gives an easy rule for calculating the determinant of any triangular matrix. The proof is like the solution
to Example 3.1.9.
Theorem 3.1.4
If A is a square triangular matrix, then det A is the product of the entries on the main diagonal.
Theorem 3.1.4 is useful in computer calculations because it is a routine matter to carry a matrix to trian-
gular form using row operations.
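Here is a minimal Python sketch of that routine (ours, not the text's, assuming NumPy; the function name is our own): reduce to upper triangular form, flipping the sign for each row interchange as in Theorem 3.1.2, then apply Theorem 3.1.4.

```python
import numpy as np

def det_by_reduction(A, tol=1e-12):
    """Determinant via reduction to upper triangular form.
    Adding a multiple of one row to another leaves det unchanged;
    each row interchange flips the sign; the triangular result has
    determinant equal to the product of its diagonal entries."""
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(U[k:, k]))  # partial pivoting
        if abs(U[p, k]) < tol:
            return 0.0                        # column of zeros below: det A = 0
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign
        for r in range(k + 1, n):
            U[r, k:] -= (U[r, k] / U[k, k]) * U[k, k:]
    return sign * np.prod(np.diag(U))

A = np.array([[3, 4, 5], [1, 7, 2], [9, 8, -6]])
print(det_by_reduction(A))  # -353.0, agreeing with Example 3.1.3
```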
Block matrices such as those in the next theorem arise frequently in practice, and the theorem gives an
easy method for computing their determinants. This dovetails with Example 2.4.11.
Theorem 3.1.5

Consider matrices [A X; 0 B] and [A 0; Y B] in block form, where A and B are square matrices. Then

det [A X; 0 B] = det A det B   and   det [A 0; Y B] = det A det B

Proof. Write T = [A X; 0 B] and proceed by induction on k where A is k × k. If k = 1, it is the cofactor expansion along column 1. In general let Si(T) denote the matrix obtained from T by deleting row i and column 1. Then the cofactor expansion of det T along the first column is

det T = a11 det (S1(T)) − a21 det (S2(T)) + ··· ± ak1 det (Sk(T))   (3.2)

where a11, a21, ..., ak1 are the entries in the first column of A. But Si(T) = [Si(A) Xi; 0 B] for each i = 1, 2, ..., k, so det (Si(T)) = det (Si(A)) · det B by induction. Hence, Equation 3.2 becomes

det T = {a11 det (S1(A)) − a21 det (S2(A)) + ··· ± ak1 det (Sk(A))} det B = {det A} det B
Example 3.1.10

det [2 3 1 3; 1 −2 −1 1; 0 1 0 1; 0 4 0 1] = − det [2 1 3 3; 1 −1 −2 1; 0 0 1 1; 0 0 4 1]
    = − det [2 1; 1 −1] det [1 1; 4 1] = −(−3)(−3) = −9
The next result shows that det A is a linear transformation when regarded as a function of a fixed
column of A. The proof is Exercise 3.1.21.
Theorem 3.1.6

Given columns c1, ..., cj−1, cj+1, ..., cn in Rn, define T : Rn → R by

T(x) = det [c1 ··· cj−1 x cj+1 ··· cn] for all x in Rn

Then T is a linear transformation; that is, T(x + y) = T(x) + T(y) and T(ax) = aT(x) for all x and y in Rn and all a in R.
b. det [a b c; a+b 2b c+b; 2 2 2]

Exercise 3.1.7 If det [a b c; p q r; x y z] = −1 compute:

a. det [−x −y −z; 3p+a 3q+b 3r+c; 2p 2q 2r]

b. det [−2a −2b −2c; 2p+x 2q+y 2r+z; 3x 3y 3z]

Exercise 3.1.9 In each case either prove the statement or give an example showing that it is false:

b. If det A = 0, then A has two equal rows.

c. If A is 2 × 2, then det (Aᵀ) = det A.

d. If R is the reduced row-echelon form of A, then det A = det R.

e. If A is 2 × 2, then det (7A) = 49 det A.

f. det (Aᵀ) = − det A.

Exercise 3.1.10 Compute the determinant of each matrix, using Theorem 3.1.5.

a. [1 −1 2 0 −2; 0 1 0 4 1; 1 1 5 0 0; 0 0 0 3 −1; 0 0 0 1 1]

b. [1 2 0 3 0; −1 3 1 4 0; 0 0 2 1 1; 0 0 −1 0 2; 0 0 3 0 1]

Exercise 3.1.11 If det A = 2, det B = −1, and det C = 3, find:

Exercise 3.1.13

a. Find det A if A is 3 × 3 and det (2A) = 6.

Exercise 3.1.14 Evaluate by first adding all other rows to the first row.

a. det [x−1 2 3; 2 −3 x−2; −2 x −2]

b. det [x−1 −3 1; 2 −1 x−1; −3 x+2 −2]

b. Find c if det [2 x −1; 1 y 3; −3 z 4] = ax + by + cz.

Exercise 3.1.16 Find the real numbers x and y such that det A = 0 if:

Exercise 3.1.22 Show that

det [0 0 ··· 0 a1; 0 0 ··· a2 ∗; ⋮ ⋮ ⋰ ⋮ ⋮; 0 an−1 ··· ∗ ∗; an ∗ ··· ∗ ∗] = (−1)^k a1 a2 ··· an
3.2 Determinants and Matrix Inverses

In this section, several theorems about determinants are derived. One consequence of these theorems is that a square matrix A is invertible if and only if det A ≠ 0. Moreover, determinants are used to give a formula for A⁻¹ which, in turn, yields a formula (called Cramer's rule) for the solution of any system of linear equations with an invertible coefficient matrix.

We begin with a remarkable theorem (due to Cauchy in 1812) about the determinant of a product of matrices. The proof is given at the end of this section.

Theorem 3.2.1: Product Theorem

If A and B are n × n matrices, then det (AB) = det A det B.
The complexity of matrix multiplication makes the product theorem quite unexpected. Here is an
example where it reveals an important numerical identity.
Example 3.2.1
a b c d ac − bd ad + bc
If A = and B = then AB = .
−b a −d c −(ad + bc) ac − bd
Hence det A det B = det (AB) gives the identity
Theorem 3.2.1 extends easily to det (ABC) = det A det B det C. In fact, induction gives

det (A1A2 ··· Ak) = det A1 det A2 ··· det Ak

for any square matrices A1, ..., Ak of the same size. In particular, if each Ai = A, we obtain

det (A^k) = (det A)^k for any k ≥ 1
Theorem 3.2.2

An n × n matrix A is invertible if and only if det A ≠ 0. When this is the case, det (A⁻¹) = 1/det A.

Proof. If A is invertible, then AA⁻¹ = I; so the product theorem gives

1 = det I = det (AA⁻¹) = det A det (A⁻¹)

Hence, det A ≠ 0 and also det (A⁻¹) = 1/det A.

Conversely, if det A ≠ 0, we show that A can be carried to I by elementary row operations (and invoke Theorem 2.4.5). Certainly, A can be carried to its reduced row-echelon form R, so R = Ek ··· E2E1A where the Ei are elementary matrices (Theorem 2.5.1). Hence the product theorem gives

det R = det Ek ··· det E2 det E1 det A

Since det E ≠ 0 for all elementary matrices E, this shows det R ≠ 0. In particular, R has no row of zeros, so R = I because R is square and reduced row-echelon. This is what we wanted.
Example 3.2.2

For which values of c does A = [1 0 −c; −1 3 1; 0 2c −4] have an inverse?

Solution. Compute det A by first adding c times column 1 to column 3 and then expanding along row 1.

det A = det [1 0 −c; −1 3 1; 0 2c −4] = det [1 0 0; −1 3 1−c; 0 2c −4] = det [3 1−c; 2c −4] = 2(c + 2)(c − 3)

Hence, det A = 0 if c = −2 or c = 3, and A has an inverse if c ≠ −2 and c ≠ 3.
Example 3.2.3
If a product A1 A2 · · · Ak of square matrices is invertible, show that each Ai is invertible.
Solution. We have det A1 det A2 ··· det Ak = det (A1A2 ··· Ak) by the product theorem, and det (A1A2 ··· Ak) ≠ 0 by Theorem 3.2.2 because A1A2 ··· Ak is invertible. Hence

det A1 det A2 ··· det Ak ≠ 0

so det Ai ≠ 0 for each i. This shows that each Ai is invertible, again by Theorem 3.2.2.
Theorem 3.2.3
If A is any square matrix, det (A^T) = det A.

Proof. Consider first the case of an elementary matrix E. If E is of type I or II, then E^T = E; so certainly det (E^T) = det E. If E is of type III, then E^T is also of type III; so det (E^T) = 1 = det E by Theorem 3.1.2. Hence, det (E^T) = det E for every elementary matrix E.

Now let A be any square matrix. If A is not invertible, then neither is A^T; so det (A^T) = 0 = det A by Theorem 3.2.2. On the other hand, if A is invertible, then A = Ek · · · E2E1, where the Ei are elementary matrices (Theorem 2.5.2). Hence, A^T = E_1^T E_2^T \cdots E_k^T, so the product theorem gives

det (A^T) = det (E_1^T) det (E_2^T) \cdots det (E_k^T) = det E_1 det E_2 \cdots det E_k = det E_k \cdots det E_2 det E_1 = det A

This completes the proof.
Example 3.2.4
If det A = 2 and det B = 5, calculate det (A³B⁻¹A^T B²).

Solution. Theorems 3.2.1, 3.2.2, and 3.2.3 give

det (A³B⁻¹A^T B²) = det (A³) det (B⁻¹) det (A^T) det (B²) = ( det A)³ (1/det B) det A ( det B)² = 2³ · (1/5) · 2 · 5² = 80
Example 3.2.5
A square matrix is called orthogonal if A⁻¹ = A^T. What are the possible values of det A if A is orthogonal?

Solution. If A is orthogonal, then AA^T = I. Taking determinants and using Theorems 3.2.1 and 3.2.3 gives 1 = det I = det A det (A^T) = ( det A)². Hence det A = 1 or det A = −1.

Hence Theorems 2.6.4 and 2.6.5 imply that rotation about the origin and reflection about a line through
the origin in R² have orthogonal matrices with determinants 1 and −1 respectively. In fact they are the
only such transformations of R². We have more to say about this in Section 8.2.
Adjugates
In Section 2.4 we defined the adjugate of a 2 × 2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} to be adj (A) = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}. Then
we verified that A( adj A) = ( det A)I = ( adj A)A and hence that, if det A ≠ 0, A⁻¹ = (1/det A) adj A. We are
now able to define the adjugate of an arbitrary square matrix and to show that this formula for the inverse
remains valid (when the inverse exists).
Recall that the (i, j)-cofactor cij(A) of a square matrix A is a number defined for each position (i, j)
in the matrix. If A is a square matrix, the cofactor matrix of A is defined to be the matrix [cij(A)] whose
(i, j)-entry is the (i, j)-cofactor of A. The adjugate⁴ of A, denoted adj (A), is the transpose of this cofactor matrix; in symbols,

adj (A) = [c_{ij}(A)]^T
This agrees with the earlier definition for a 2 × 2 matrix A as the reader can verify.
Example 3.2.6

Compute the adjugate of A = \begin{bmatrix} 1 & 3 & -2 \\ 0 & 1 & 5 \\ -2 & -6 & 7 \end{bmatrix} and calculate A( adj A) and ( adj A)A.

Solution. We first find the cofactor matrix:

\begin{bmatrix} c_{11}(A) & c_{12}(A) & c_{13}(A) \\ c_{21}(A) & c_{22}(A) & c_{23}(A) \\ c_{31}(A) & c_{32}(A) & c_{33}(A) \end{bmatrix}
= \begin{bmatrix} \det\begin{bmatrix} 1 & 5 \\ -6 & 7 \end{bmatrix} & -\det\begin{bmatrix} 0 & 5 \\ -2 & 7 \end{bmatrix} & \det\begin{bmatrix} 0 & 1 \\ -2 & -6 \end{bmatrix} \\ -\det\begin{bmatrix} 3 & -2 \\ -6 & 7 \end{bmatrix} & \det\begin{bmatrix} 1 & -2 \\ -2 & 7 \end{bmatrix} & -\det\begin{bmatrix} 1 & 3 \\ -2 & -6 \end{bmatrix} \\ \det\begin{bmatrix} 3 & -2 \\ 1 & 5 \end{bmatrix} & -\det\begin{bmatrix} 1 & -2 \\ 0 & 5 \end{bmatrix} & \det\begin{bmatrix} 1 & 3 \\ 0 & 1 \end{bmatrix} \end{bmatrix}
= \begin{bmatrix} 37 & -10 & 2 \\ -9 & 3 & 0 \\ 17 & -5 & 1 \end{bmatrix}

Hence the adjugate of A is the transpose of this cofactor matrix:

adj A = \begin{bmatrix} 37 & -10 & 2 \\ -9 & 3 & 0 \\ 17 & -5 & 1 \end{bmatrix}^T = \begin{bmatrix} 37 & -9 & 17 \\ -10 & 3 & -5 \\ 2 & 0 & 1 \end{bmatrix}

The reader can verify that A( adj A) = 3I, and also that ( adj A)A = 3I. Hence, analogy with the 2 × 2 case would
indicate that det A = 3; this is, in fact, the case.
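For readers who wish to experiment, the cofactor-matrix construction is a direct loop over positions. The helper functions below are our own sketch (not from the text); they reproduce Example 3.2.6 and the relation A( adj A) = ( det A)I:

```python
import numpy as np

def cofactor_matrix(A):
    """Matrix whose (i, j)-entry is the (i, j)-cofactor of A."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # Delete row i and column j, then take the signed determinant
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C

def adjugate(A):
    """The adjugate is the transpose of the cofactor matrix."""
    return cofactor_matrix(A).T

A = np.array([[1., 3., -2.],
              [0., 1., 5.],
              [-2., -6., 7.]])

print(np.round(adjugate(A)))       # [[37 -9 17], [-10 3 -5], [2 0 1]]
print(np.round(A @ adjugate(A)))   # 3I, since det A = 3
```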
The relationship A( adj A) = ( det A)I holds for any square matrix A. To see why this is so, consider the (i, j)-entry of A( adj A): it is the dot product of row i of A with the cofactors of row j of A. When i = j this is exactly the cofactor expansion of det A along row i; when i ≠ j it is the expansion of the determinant of the matrix obtained from A by replacing row j by row i, and this is zero because that matrix has two identical rows. This proves the first part of the following theorem.

Theorem 3.2.4 (Adjugate Formula)

If A is any square matrix, then A( adj A) = ( det A)I = ( adj A)A. In particular, if det A ≠ 0, the inverse of A is given by

A⁻¹ = (1/det A) adj A
4 This is also called the classical adjoint of A, but the term “adjoint” has another meaning.
It is important to note that this theorem is not an efficient way to find the inverse of the matrix A. For
example, if A were 10 × 10, the calculation of adj A would require computing 10² = 100 determinants of
9 × 9 matrices! On the other hand, the matrix inversion algorithm would find A−1 with about the same
effort as finding det A. Clearly, Theorem 3.2.4 is not a practical result: its virtue is that it gives a formula
for A−1 that is useful for theoretical purposes.
Example 3.2.7

Find the (2, 3)-entry of A⁻¹ if A = \begin{bmatrix} 2 & 1 & 3 \\ 5 & -7 & 1 \\ 3 & 0 & -6 \end{bmatrix}.

Solution. Expansion along row 3 gives det A = 3 \det\begin{bmatrix} 1 & 3 \\ -7 & 1 \end{bmatrix} + (-6) \det\begin{bmatrix} 2 & 1 \\ 5 & -7 \end{bmatrix} = 3 · 22 + (−6)(−19) = 180. Since A⁻¹ = (1/det A) adj A = (1/180)[c_{ij}(A)]^T, the (2, 3)-entry of A⁻¹ is the (3, 2)-cofactor of A divided by 180:

(A⁻¹)_{2,3} = (1/180) c_{32}(A) = (1/180) \left( -\det\begin{bmatrix} 2 & 3 \\ 5 & 1 \end{bmatrix} \right) = 13/180
Example 3.2.8
If A is n × n, n ≥ 2, show that det ( adj A) = ( det A)^{n−1}.

Solution. Write d = det A; we must show that det ( adj A) = d^{n−1}. We have A( adj A) = dI by
Theorem 3.2.4, so taking determinants gives d det ( adj A) = d^n. Hence we are done if d ≠ 0.
Assume d = 0; we must show that det ( adj A) = 0, that is, adj A is not invertible. If A ≠ 0, this
follows from A( adj A) = dI = 0; if A = 0, it follows because then adj A = 0.
Cramer’s Rule

Suppose now that Ax = b is a system of n linear equations in n variables, where A is invertible, x = [x_1, x_2, \dots, x_n]^T, and b = [b_1, b_2, \dots, b_n]^T. Then Theorem 3.2.4 gives x = A⁻¹b = (1/det A)( adj A)b; that is,

\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \frac{1}{\det A}\begin{bmatrix} c_{11}(A) & c_{21}(A) & \cdots & c_{n1}(A) \\ c_{12}(A) & c_{22}(A) & \cdots & c_{n2}(A) \\ \vdots & \vdots & & \vdots \\ c_{1n}(A) & c_{2n}(A) & \cdots & c_{nn}(A) \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}

Hence the first variable is

x_1 = \frac{1}{\det A}\left[ b_1 c_{11}(A) + b_2 c_{21}(A) + \cdots + b_n c_{n1}(A) \right]

Now the quantity b1c11(A) + b2c21(A) + · · · + bncn1(A) occurring in the formula for x1 looks like the
cofactor expansion of the determinant of a matrix. The cofactors involved are c11(A), c21(A), . . . , cn1(A),
corresponding to the first column of A. If A1 is obtained from A by replacing the first column of A by b,
then ci1(A1) = ci1(A) for each i because column 1 is deleted when computing them. Hence, expanding
det (A1) by the first column gives

det A_1 = b_1 c_{11}(A) + b_2 c_{21}(A) + \cdots + b_n c_{n1}(A) = ( det A) x_1

so x1 = det A1/det A. The same argument applies to the other variables, which proves the following theorem.

Theorem 3.2.5 (Cramer's Rule)

If A is an invertible n × n matrix, the solution to the system Ax = b of n equations in the variables x1, x2, . . . , xn is given by

x_1 = \frac{\det A_1}{\det A}, \quad x_2 = \frac{\det A_2}{\det A}, \quad \cdots, \quad x_n = \frac{\det A_n}{\det A}

where, for each k, Ak is the matrix obtained from A by replacing column k by b.
Example 3.2.9
Find x1 , given the following system of equations.
5x1 + x2 − x3 = 4
9x1 + x2 − x3 = 1
x1 − x2 + 5x3 = 2
5 Gabriel Cramer (1704–1752) was a Swiss mathematician who wrote an introductory work on algebraic curves. He popu-
larized the rule that bears his name, but the idea was known earlier.
Solution. Compute the determinants of the coefficient matrix A and the matrix A1 obtained from it
by replacing the first column by the column of constants:

det A = \det \begin{bmatrix} 5 & 1 & -1 \\ 9 & 1 & -1 \\ 1 & -1 & 5 \end{bmatrix} = -16

det A_1 = \det \begin{bmatrix} 4 & 1 & -1 \\ 1 & 1 & -1 \\ 2 & -1 & 5 \end{bmatrix} = 12

Hence, x_1 = \frac{\det A_1}{\det A} = -\frac{3}{4} by Cramer’s rule.
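Cramer's rule itself is only a few lines of code. The following NumPy sketch (ours, not from the text) replaces each column of A by b in turn, and reproduces x1 = −3/4 for this system:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule; assumes A is square with det A != 0."""
    detA = np.linalg.det(A)
    x = np.empty(len(b))
    for k in range(len(b)):
        Ak = A.copy()
        Ak[:, k] = b                      # replace column k by the constants
        x[k] = np.linalg.det(Ak) / detA
    return x

A = np.array([[5., 1., -1.],
              [9., 1., -1.],
              [1., -1., 5.]])
b = np.array([4., 1., 2.])

print(cramer(A, b))            # [-0.75  10.375  2.625], so x1 = -3/4
print(np.linalg.solve(A, b))   # same answer, computed by elimination
```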
Cramer’s rule is not an efficient way to solve linear systems or invert matrices. True, it enabled us to
calculate x1 here without computing x2 or x3 . Although this might seem an advantage, the truth of the
matter is that, for large systems of equations, the number of computations needed to find all the variables
by the gaussian algorithm is comparable to the number required to find one of the determinants involved in
Cramer’s rule. Furthermore, the algorithm works when the matrix of the system is not invertible and even
when the coefficient matrix is not square. Like the adjugate formula, then, Cramer’s rule is not a practical
numerical technique; its virtue is theoretical.
Polynomial Interpolation
Example 3.2.10
A forester wants to estimate the age (in years) of a tree by measuring the diameter of the trunk (in cm). She obtains the following data:

            Tree 1   Tree 2   Tree 3
Trunk Diameter   5       10       15
Age              3        5        6

[Graph: the data points (5, 3), (10, 5), and (15, 6), plotted as Age versus Diameter.]

Estimate the age of a tree with a trunk diameter of 12 cm.
Solution.
The forester decides to “fit” a quadratic polynomial
p(x) = r0 + r1 x + r2 x2
to the data, that is choose the coefficients r0 , r1 , and r2 so that p(5) = 3, p(10) = 5, and p(15) = 6,
and then use p(12) as the estimate. These conditions give three linear equations:
r0 + 5r1 + 25r2 = 3
r0 + 10r1 + 100r2 = 5
r0 + 15r1 + 225r2 = 6
The (unique) solution is r0 = 0, r1 = 7/10, and r2 = −1/50, so

p(x) = \frac{7}{10}x - \frac{1}{50}x^2 = \frac{1}{50}x(35 - x)

Hence the desired estimate is p(12) = \frac{1}{50}(12)(35 − 12) = 5.52, so the tree is approximately five and a half years old.
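Numerically, fitting the quadratic is just solving a 3 × 3 linear system whose coefficient matrix is a Vandermonde matrix (anticipating the discussion below). A short sketch of ours:

```python
import numpy as np

diam = np.array([5., 10., 15.])   # trunk diameters (cm)
age = np.array([3., 5., 6.])      # observed ages (years)

# Vandermonde matrix with columns 1, x, x^2
V = np.vander(diam, 3, increasing=True)
r = np.linalg.solve(V, age)
print(r)                          # [ 0.    0.7  -0.02] : p(x) = (7/10)x - (1/50)x^2

p = np.polynomial.Polynomial(r)
print(p(12.0))                    # 5.52, the estimated age
```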
As in Example 3.2.10, it often happens that two variables x and y are related but the actual functional
form y = f (x) of the relationship is unknown. Suppose that for certain values x1 , x2 , . . . , xn of x the
corresponding values y1 , y2 , . . . , yn are known (say from experimental measurements). One way to
estimate the value of y corresponding to some other value a of x is to find a polynomial⁶

p(x) = r_0 + r_1 x + r_2 x^2 + \cdots + r_{n-1} x^{n-1}

that “fits” the data, that is, p(xi) = yi holds for each i = 1, 2, . . . , n. Then the estimate for y is p(a). As we
will see, such a polynomial always exists if the xi are distinct.

The conditions that p(xi) = yi are

r_0 + r_1 x_i + r_2 x_i^2 + \cdots + r_{n-1} x_i^{n-1} = y_i \quad for each i = 1, 2, \dots, n

a system of n linear equations in the coefficients r0, r1, . . . , r_{n−1} whose coefficient matrix has ith row (1, x_i, x_i², . . . , x_i^{n−1}).
Theorem 3.2.6

Let n data pairs (x1, y1), (x2, y2), . . . , (xn, yn) be given, and assume that the xi are distinct. Then
there exists a unique polynomial

p(x) = r_0 + r_1 x + r_2 x^2 + \cdots + r_{n-1} x^{n-1}

such that p(xi) = yi for each i = 1, 2, . . . , n.

The polynomial in Theorem 3.2.6 is called the interpolating polynomial for the data.
6 A polynomial is an expression of the form a0 + a1x + a2x² + · · · + anxⁿ where the ai are numbers and x is a variable. If
an ≠ 0, the integer n is called the degree of the polynomial, and an is called the leading coefficient. See Appendix D.
The determinant of the n × n matrix whose ith row is (1, a_i, a_i², . . . , a_i^{n−1}) is called a Vandermonde determinant.⁷ There is a simple formula for this determinant. If n = 2, it equals
(a2 − a1); if n = 3, it is (a3 − a2)(a3 − a1)(a2 − a1) by Example 3.1.8. The general result is the product

\prod_{1 \le j < i \le n} (a_i - a_j)

of all factors (ai − aj) where 1 ≤ j < i ≤ n.
Theorem 3.2.7
Let a1, a2, . . . , an be numbers where n ≥ 2. Then the corresponding Vandermonde determinant is
given by

\det \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ 1 & a_2 & a_2^2 & \cdots & a_2^{n-1} \\ 1 & a_3 & a_3^2 & \cdots & a_3^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_n & a_n^2 & \cdots & a_n^{n-1} \end{bmatrix} = \prod_{1 \le j < i \le n} (a_i - a_j)
Proof. We may assume that the ai are distinct; otherwise both sides are zero. We proceed by induction on
n ≥ 2; we have it for n = 2, 3. So assume it holds for n − 1. The trick is to replace an by a variable x, and
consider the determinant

p(x) = \det \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-1} \\ 1 & a_2 & a_2^2 & \cdots & a_2^{n-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_{n-1} & a_{n-1}^2 & \cdots & a_{n-1}^{n-1} \\ 1 & x & x^2 & \cdots & x^{n-1} \end{bmatrix}

Then p(x) is a polynomial of degree at most n − 1 (expand along the last row), and p(ai) = 0 for each
i = 1, 2, . . . , n − 1 because in each case there are two identical rows in the determinant. In particular,
p(a1) = 0, so we have p(x) = (x − a1)p1(x) by the factor theorem (see Appendix D). Since a2 ≠ a1, we
obtain p1(a2) = 0, and so p1(x) = (x − a2)p2(x). Thus p(x) = (x − a1)(x − a2)p2(x). As the ai are distinct,
this process continues to obtain

p(x) = (x - a_1)(x - a_2) \cdots (x - a_{n-1})\, d \qquad (3.4)

where d is the coefficient of x^{n−1} in p(x). By the cofactor expansion of p(x) along the last row we get

d = (-1)^{n+n} \det \begin{bmatrix} 1 & a_1 & a_1^2 & \cdots & a_1^{n-2} \\ 1 & a_2 & a_2^2 & \cdots & a_2^{n-2} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & a_{n-1} & a_{n-1}^2 & \cdots & a_{n-1}^{n-2} \end{bmatrix}

Because (−1)^{n+n} = 1, the induction hypothesis shows that d is the product of all factors (ai − aj) where
1 ≤ j < i ≤ n − 1. The result now follows from Equation 3.4 by substituting an for x in p(x).
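Theorem 3.2.7 is easy to test numerically; the sketch below (ours) compares NumPy's determinant of a Vandermonde matrix against the product formula:

```python
import numpy as np
from itertools import combinations
from math import prod

a = [1.0, 2.0, 4.0, 7.0]   # any distinct numbers
n = len(a)

V = np.vander(np.array(a), n, increasing=True)  # row i is (1, a_i, ..., a_i^{n-1})
lhs = np.linalg.det(V)

# Product of (a_i - a_j) over all pairs with j < i
rhs = prod(a[i] - a[j] for j, i in combinations(range(n), 2))

print(lhs, rhs)   # both 540.0 (up to rounding)
```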
Proof of Theorem 3.2.1. If A and B are n × n matrices we must show that

det (AB) = det A det B \qquad (3.5)

Recall that if E is an elementary matrix obtained by doing one row operation to In, then doing that operation
to a matrix C (Lemma 2.5.1) results in EC. By looking at the three types of elementary matrices separately,
Theorem 3.1.2 shows that

det (EC) = det E det C for any matrix C (3.6)

Thus if E1, E2, . . . , Ek are all elementary matrices, it follows by induction that

det (Ek · · · E2E1C) = det Ek · · · det E2 det E1 det C for any matrix C (3.7)

If A is invertible, then A is a product of elementary matrices by Theorem 2.5.2, say A = E1E2 · · · Ek, so Equation 3.7 gives

det (AB) = det [(E1E2 · · · Ek)B] = det E1 det E2 · · · det Ek det B = det A det B

If A is not invertible, its reduced row-echelon form R = Ek · · · E1A has a row of zeros, so RB = Ek · · · E1(AB) also has a row of zeros. Hence det R = 0 and det (RB) = 0, and since each det Ei ≠ 0, Equation 3.7 shows that det A = 0 and det (AB) = 0. Thus both sides of Equation 3.5 are zero in this case too.
Exercise 3.2.11 Let A be n × n. Show that uA = (uI)A, and use this with Theorem 3.2.1 to deduce the result in Theorem 3.1.3: det (uA) = uⁿ det A.

Exercise 3.2.12 If A and B are n × n matrices, if AB = −BA, and if n is odd, show that either A or B has no inverse.

Exercise 3.2.13 Show that det AB = det BA holds for any two n × n matrices A and B.

Exercise 3.2.14 If A^k = 0 for some k ≥ 1, show that A is not invertible.

Exercise 3.2.15 If A⁻¹ = A^T, describe the cofactor matrix of A in terms of A.

Exercise 3.2.16 Show that no 3 × 3 matrix A exists such that A² + I = 0. Find a 2 × 2 matrix A with this property.

Exercise 3.2.17 Show that det (A + B^T) = det (A^T + B) for any n × n matrices A and B.

Exercise 3.2.18 Let A and B be invertible n × n matrices. Show that det A = det B if and only if A = UB where U is a matrix with det U = 1.

Exercise 3.2.19 For each of the matrices in Exercise 2, find the inverse for those values of c for which it exists.

Exercise 3.2.20 In each case either prove the statement or give an example showing that it is false:

a. If adj A exists, then A is invertible.
b. If A is invertible and adj A = A⁻¹, then det A = 1.
c. det (AB) = det (B^T A).
d. If det A ≠ 0 and AB = AC, then B = C.
e. If A^T = −A, then det A = −1.
f. If adj A = 0, then A = 0.
g. If A is invertible, then adj A is invertible.
h. If A has a row of zeros, so also does adj A.
i. det (A^T A) > 0 for all square matrices A.
j. det (I + A) = 1 + det A.
k. If AB is invertible, then A and B are invertible.
l. If det A = 1, then adj A = A.
m. If A is invertible and det A = d, then adj A = dA⁻¹.

Exercise 3.2.21 If A is 2 × 2 and det A = 0, show that one column of A is a scalar multiple of the other. [Hint: Definition 2.5 and Part (2) of Theorem 2.4.5.]

Exercise 3.2.22 Find a polynomial p(x) of degree 2 such that:

a. p(0) = 2, p(1) = 3, p(3) = 8
b. p(0) = 5, p(1) = 3, p(2) = 5

Exercise 3.2.23 Find a polynomial p(x) of degree 3 such that:

a. p(0) = p(1) = 1, p(−1) = 4, p(2) = −5
b. p(0) = p(1) = 1, p(−1) = 2, p(−2) = −3

Exercise 3.2.24 Given the following data pairs, find the interpolating polynomial of degree 3 and estimate the value of y corresponding to x = 1.5.

a. (0, 1), (1, 2), (2, 5), (3, 10)
b. (0, 1), (1, 1.49), (2, −0.42), (3, −11.33)
c. (0, 2), (1, 2.03), (2, −0.40), (−1, 0.89)

Exercise 3.2.25 If A = \begin{bmatrix} 1 & a & b \\ -a & 1 & c \\ -b & -c & 1 \end{bmatrix}, show that det A = 1 + a² + b² + c². Hence, find A⁻¹ for any a, b, and c.

Exercise 3.2.26

a. Show that A = \begin{bmatrix} a & p & q \\ 0 & b & r \\ 0 & 0 & c \end{bmatrix} has an inverse if and only if abc ≠ 0, and find A⁻¹ in that case.
b. Show that if an upper triangular matrix is invertible, the inverse is also upper triangular.

Exercise 3.2.27 Let A be a matrix each of whose entries are integers. Show that each of the following conditions implies the other.

1. A is invertible and A⁻¹ has integer entries.
2. det A = 1 or −1.
Exercise 3.2.28 If A⁻¹ = \begin{bmatrix} 3 & 0 & 1 \\ 0 & 2 & 3 \\ 3 & 1 & -1 \end{bmatrix}, find adj A.

Exercise 3.2.29 If A is 3 × 3 and det A = 2, find det (A⁻¹ + 4 adj A).

Exercise 3.2.30 Show that det \begin{bmatrix} 0 & A \\ B & X \end{bmatrix} = det A det B when A and B are 2 × 2. What if A and B are 3 × 3? [Hint: Block multiply by \begin{bmatrix} 0 & I \\ I & 0 \end{bmatrix}.]

Exercise 3.2.31 Let A be n × n, n ≥ 2, and assume one column of A consists of zeros. Find the possible values of rank ( adj A).

Exercise 3.2.32 If A is 3 × 3 and invertible, compute det (−A²( adj A)⁻¹).

Exercise 3.2.33 Show that adj (uA) = u^{n−1} adj A for all n × n matrices A.

Exercise 3.2.34 Let A and B denote invertible n × n matrices. Show that:

a. adj ( adj A) = ( det A)^{n−2} A (here n ≥ 2) [Hint: See Example 3.2.8.]
b. adj (A⁻¹) = ( adj A)⁻¹
c. adj (A^T) = ( adj A)^T
d. adj (AB) = ( adj B)( adj A) [Hint: Show that AB adj (AB) = AB adj B adj A.]

3.3 Diagonalization and Eigenvalues
The world is filled with examples of systems that evolve in time—the weather in a region, the economy
of a nation, the diversity of an ecosystem, etc. Describing such systems is difficult in general and various
methods have been developed in special cases. In this section we describe one such method, called diag-
onalization, which is one of the most important techniques in linear algebra. A very fertile example of
this procedure is in modelling the growth of the population of an animal species. This has attracted more
attention in recent years with the ever increasing awareness that many species are endangered. To motivate
the technique, we begin by setting up a simple model of a bird population in which we make assumptions
about survival and reproduction rates.
Example 3.3.1
Consider the evolution of the population of a species of birds. Because the numbers of males and
females are nearly equal, we count only females. We assume that each female remains a juvenile
for one year and then becomes an adult, and that only adults have offspring. We make three
assumptions about reproduction and survival rates:

1. The number of juvenile females hatched in any year is twice the number of adult females
alive the year before (we say the reproduction rate is 2).

2. Half of the adult females in any year survive to the next year (the adult survival rate is 1/2).

3. One quarter of the juvenile females in any year survive into adulthood (the juvenile survival
rate is 1/4).
If there were 100 adult females and 40 juvenile females alive initially, compute the population of
females k years later.
Solution. Let ak and jk denote, respectively, the number of adult and juvenile females after k years,
so that the total female population is the sum ak + jk. Assumption 1 shows that jk+1 = 2ak, while
assumptions 2 and 3 show that ak+1 = (1/2)ak + (1/4)jk. Hence the numbers ak and jk in successive years
are related by the following equations:

a_{k+1} = \tfrac{1}{2}a_k + \tfrac{1}{4}j_k
j_{k+1} = 2a_k

If we write v_k = \begin{bmatrix} a_k \\ j_k \end{bmatrix} and A = \begin{bmatrix} 1/2 & 1/4 \\ 2 & 0 \end{bmatrix}, these equations take the matrix form

v_{k+1} = A v_k, for each k = 0, 1, 2, . . .

Taking k = 0 gives v1 = Av0, then taking k = 1 gives v2 = Av1 = A²v0, and taking k = 2 gives
v3 = Av2 = A³v0. Continuing in this way, we get

v_k = A^k v_0, for each k = 0, 1, 2, . . .

Since v_0 = \begin{bmatrix} a_0 \\ j_0 \end{bmatrix} = \begin{bmatrix} 100 \\ 40 \end{bmatrix} is known, finding the population profile vk amounts to computing A^k
for all k ≥ 0. We will complete this calculation in Example 3.3.12 after some new techniques have
been developed.
As it happens, A^k is easy to compute if A can be factored as

A = PDP^{-1} \qquad (3.8)

where P is invertible and D is diagonal. Indeed:

Theorem 3.3.1

If A = PDP⁻¹ then A^k = PD^kP⁻¹ for each k = 1, 2, . . . .

Hence computing A^k comes down to finding an invertible matrix P as in Equation 3.8. To do
this it is necessary to first compute certain numbers (called eigenvalues) associated with the matrix A.
If A is an n × n matrix, a number λ is called an eigenvalue of A if

Ax = λx for some column x ≠ 0 in R^n

In this case, x is called an eigenvector of A corresponding to the eigenvalue λ.
Example 3.3.2
If A = \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} and x = \begin{bmatrix} 5 \\ 1 \end{bmatrix}, then Ax = 4x, so λ = 4 is an eigenvalue of A with corresponding
eigenvector x.
The matrix A in Example 3.3.2 has another eigenvalue in addition to λ = 4. To find it, we develop a
general procedure for any n × n matrix A.
By definition a number λ is an eigenvalue of the n × n matrix A if and only if Ax = λx for some column
x ≠ 0. This is equivalent to asking that the homogeneous system
(λ I − A)x = 0
of linear equations has a nontrivial solution x 6= 0. By Theorem 2.4.5 this happens if and only if the matrix
λ I − A is not invertible and this, in turn, holds if and only if the determinant of the coefficient matrix is
zero:
det (λ I − A) = 0
This last condition prompts the following definition: if A is an n × n matrix, the characteristic polynomial cA(x) of A is defined by

c_A(x) = \det (xI - A)
Note that cA (x) is indeed a polynomial in the variable x, and it has degree n when A is an n × n matrix (this
is illustrated in the examples below). The above discussion shows that a number λ is an eigenvalue of A if
and only if cA (λ ) = 0, that is if and only if λ is a root of the characteristic polynomial cA (x). We record
these observations in
Theorem 3.3.2

Let A be an n × n matrix.

1. The eigenvalues λ of A are the roots of the characteristic polynomial cA(x) of A.

2. The λ-eigenvectors x are the nonzero solutions to the homogeneous system

(λI − A)x = 0

of linear equations with λI − A as coefficient matrix.
In practice, solving the equations in part 2 of Theorem 3.3.2 is a routine application of gaussian elimina-
tion, but finding the eigenvalues can be difficult, often requiring computers (see Section 8.5). For now,
the examples and exercises will be constructed so that the roots of the characteristic polynomials are rela-
tively easy to find (usually integers). However, the reader should not be misled by this into thinking that
eigenvalues are so easily obtained for the matrices that occur in practical applications!
Example 3.3.3
Find the characteristic polynomial of the matrix A = \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} discussed in Example 3.3.2, and
then find all the eigenvalues and their eigenvectors.

Solution. Since

xI - A = \begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix} - \begin{bmatrix} 3 & 5 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} x-3 & -5 \\ -1 & x+1 \end{bmatrix}

we get

c_A(x) = \det \begin{bmatrix} x-3 & -5 \\ -1 & x+1 \end{bmatrix} = x^2 - 2x - 8 = (x - 4)(x + 2)

Hence, the roots of cA(x) are λ1 = 4 and λ2 = −2, so these are the eigenvalues of A. Note that
λ1 = 4 was the eigenvalue mentioned in Example 3.3.2, but we have found a new one: λ2 = −2.

To find the eigenvectors corresponding to λ2 = −2, observe that in this case

(λ_2 I - A)x = \begin{bmatrix} λ_2 - 3 & -5 \\ -1 & λ_2 + 1 \end{bmatrix} x = \begin{bmatrix} -5 & -5 \\ -1 & -1 \end{bmatrix} x

so the general solution to (λ2I − A)x = 0 is x = t \begin{bmatrix} -1 \\ 1 \end{bmatrix} where t is an arbitrary real number.
Hence, the eigenvectors x corresponding to λ2 are x = t \begin{bmatrix} -1 \\ 1 \end{bmatrix} where t ≠ 0 is arbitrary. Similarly,
λ1 = 4 gives rise to the eigenvectors x = t \begin{bmatrix} 5 \\ 1 \end{bmatrix}, t ≠ 0, which includes the observation in
Example 3.3.2.
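Numerical libraries compute eigenvalues and eigenvectors directly (by iterative methods such as the QR-algorithm mentioned below, not by factoring cA(x)). A quick NumPy check of Example 3.3.3, as a sketch of ours:

```python
import numpy as np

A = np.array([[3., 5.],
              [1., -1.]])

evals, evecs = np.linalg.eig(A)
print(evals)   # 4.0 and -2.0 (the order may vary)
print(evecs)   # columns are unit eigenvectors, proportional to [5, 1] and [-1, 1]

# Verify Ax = lambda x for each eigenpair
for lam, x in zip(evals, evecs.T):
    print(np.allclose(A @ x, lam * x))   # True
```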
Note that a square matrix A has many eigenvectors associated with any given eigenvalue λ . In fact
every nonzero solution x of (λ I − A)x = 0 is an eigenvector. Recall that these solutions are all linear com-
binations of certain basic solutions determined by the gaussian algorithm (see Theorem 1.3.2). Observe
that any nonzero multiple of an eigenvector is again an eigenvector,9 and such multiples are often more
convenient.10 Any set of nonzero multiples of the basic solutions of (λ I − A)x = 0 will be called a set of
basic eigenvectors corresponding to λ .
Example 3.3.4

Find the characteristic polynomial, eigenvalues, and basic eigenvectors for

A = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 2 & -1 \\ 1 & 3 & -2 \end{bmatrix}

Solution. Expanding det (xI − A) along row 1 gives cA(x) = (x − 2)(x − 1)(x + 1), so the eigenvalues are λ1 = 2, λ2 = 1, and λ3 = −1. Gaussian elimination on each system (λiI − A)x = 0 yields the basic eigenvectors

x_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 0 \\ 1 \\ 3 \end{bmatrix}

corresponding to λ1, λ2, and λ3 respectively.
Example 3.3.5
If A is a square matrix, show that A and AT have the same characteristic polynomial, and hence the
same eigenvalues.
Solution. We have

c_{A^T}(x) = \det (xI - A^T) = \det [(xI - A)^T] = \det (xI - A) = c_A(x)

by Theorem 3.2.3. Hence cAᵀ(x) and cA(x) have the same roots, and so A^T and A have the same
eigenvalues (by Theorem 3.3.2).
The eigenvalues of a matrix need not be distinct. For example, if A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, the characteristic
polynomial is (x − 1)², so the eigenvalue 1 occurs twice. Furthermore, eigenvalues are usually not computed
as the roots of the characteristic polynomial. There are iterative, numerical methods (for example the
QR-algorithm in Section 8.5) that are much more efficient for large matrices.
A-Invariance
If A is a 2 × 2 matrix, we can describe the eigenvectors of A geometrically using the following concept. A
line L through the origin in R2 is called A-invariant if Ax is in L whenever x is in L. If we think of A as a
linear transformation R2 → R2 , this asks that A carries L into itself, that is the image Ax of each vector x
in L is again in L.
Example 3.3.6
The x axis L = \left\{ \begin{bmatrix} x \\ 0 \end{bmatrix} \mid x \text{ in } \mathbb{R} \right\} is A-invariant for any matrix of the form

A = \begin{bmatrix} a & b \\ 0 & c \end{bmatrix} \quad because \quad \begin{bmatrix} a & b \\ 0 & c \end{bmatrix}\begin{bmatrix} x \\ 0 \end{bmatrix} = \begin{bmatrix} ax \\ 0 \end{bmatrix} is in L for all \begin{bmatrix} x \\ 0 \end{bmatrix} in L
Theorem 3.3.3

Let A be a 2 × 2 matrix, let x ≠ 0 be a vector in R², and let Lx be the line through the origin in R²
containing x. Then x is an eigenvector of A if and only if Lx is A-invariant.
Example 3.3.7
1. If θ is not a multiple of π, show that A = \begin{bmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{bmatrix} has no real eigenvalue.

2. If m is real, show that B = \frac{1}{1+m^2}\begin{bmatrix} 1-m^2 & 2m \\ 2m & m^2-1 \end{bmatrix} has 1 as an eigenvalue.
Solution.
1. A induces rotation about the origin through the angle θ (Theorem 2.6.4). Since θ is not a
multiple of π , this shows that no line through the origin is A-invariant. Hence A has no
eigenvector by Theorem 3.3.3, and so has no eigenvalue.
2. B induces reflection Qm in the line through the origin with slope m by Theorem 2.6.5. If x is
any nonzero point on this line then it is clear that Qm x = x, that is Qm x = 1x. Hence 1 is an
eigenvalue (with eigenvector x).
If θ = π/2 in Example 3.3.7, then A = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, so cA(x) = x² + 1. This polynomial has no root
in R, so A has no (real) eigenvalue, and hence no eigenvector. In fact its eigenvalues are the complex
numbers i and −i, with corresponding eigenvectors \begin{bmatrix} 1 \\ -i \end{bmatrix} and \begin{bmatrix} 1 \\ i \end{bmatrix}. In other words, A has eigenvalues
and eigenvectors, just not real ones.
Note that every polynomial has complex roots,11 so every matrix has complex eigenvalues. While
these eigenvalues may very well be real, this suggests that we really should be doing linear algebra over the
complex numbers. Indeed, everything we have done (gaussian elimination, matrix algebra, determinants,
etc.) works if all the scalars are complex.
11 This is called the Fundamental Theorem of Algebra and was first proved by Gauss in his doctoral dissertation.
Diagonalization
An n × n matrix D is called a diagonal matrix if all its entries off the main diagonal are zero, that is if D
has the form
D = \begin{bmatrix} λ_1 & 0 & \cdots & 0 \\ 0 & λ_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n \end{bmatrix} = \text{diag}(λ_1, λ_2, \cdots, λ_n)
where λ1 , λ2 , . . . , λn are numbers. Calculations with diagonal matrices are very easy. Indeed, if
D = diag (λ1 , λ2 , . . . , λn ) and E = diag (µ1 , µ2 , . . . , µn ) are two diagonal matrices, their product DE and
sum D + E are again diagonal, and are obtained by doing the same operations to corresponding diagonal
elements:
DE = diag (λ1 µ1 , λ2 µ2 , . . . , λn µn )
D + E = diag (λ1 + µ1 , λ2 + µ2 , . . . , λn + µn )
Because of the simplicity of these formulas, and with an eye on Theorem 3.3.1 and the discussion preced-
ing it, we make another definition: an n × n matrix A is called diagonalizable if P⁻¹AP is diagonal for
some invertible n × n matrix P; in this case, P is called a diagonalizing matrix for A.
To discover when such a matrix P exists, we let x1 , x2 , . . . , xn denote the columns of P and look
for ways to determine when such xi exist and how to compute them. To this end, write P in terms of its
columns as follows:
P = [x1 , x2 , · · · , xn ]
Observe that P−1 AP = D for some diagonal matrix D holds if and only if
AP = PD
If we write D = diag (λ1, λ2, . . . , λn), where the λi are numbers to be determined, the equation AP = PD
becomes

A \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} λ_1 & 0 & \cdots & 0 \\ 0 & λ_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n \end{bmatrix}

By the definition of matrix multiplication, each side simplifies as follows:

\begin{bmatrix} Ax_1 & Ax_2 & \cdots & Ax_n \end{bmatrix} = \begin{bmatrix} λ_1 x_1 & λ_2 x_2 & \cdots & λ_n x_n \end{bmatrix}

Comparing columns shows that AP = PD holds if and only if Axi = λixi for each i; that is, if and only if each xi is an eigenvector of A with eigenvalue λi.
In other words, P−1 AP = D holds if and only if the diagonal entries of D are eigenvalues of A and the
columns of P are corresponding eigenvectors. This proves the following fundamental result.
Theorem 3.3.4
Let A be an n × n matrix.
1. A is diagonalizable if and only if it has eigenvectors x1, x2, . . . , xn such that the matrix
P = [x1 x2 · · · xn] is invertible.
2. When this is the case, P−1 AP = diag (λ1 , λ2 , . . . , λn ) where, for each i, λi is the eigenvalue
of A corresponding to xi .
Example 3.3.8

Diagonalize the matrix A = \begin{bmatrix} 2 & 0 & 0 \\ 1 & 2 & -1 \\ 1 & 3 & -2 \end{bmatrix} in Example 3.3.4.

Solution. By Example 3.3.4, the eigenvalues of A are λ1 = 2, λ2 = 1, and λ3 = −1, with basic
eigenvectors x1 = [1, 1, 1]^T, x2 = [0, 1, 1]^T, and x3 = [0, 1, 3]^T respectively. Since the matrix
P = [x1 x2 x3] is invertible, Theorem 3.3.4 gives P⁻¹AP = diag (2, 1, −1).
In Example 3.3.8, suppose we let Q = [x2 x1 x3] be the matrix formed from the eigenvectors x1,
x2, and x3 of A, but in a different order than that used to form P. Then Q⁻¹AQ = diag (λ2, λ1, λ3) is
diagonal by Theorem 3.3.4, but the eigenvalues are in the new order. Hence we can choose the diagonalizing
matrix P so that the eigenvalues λi appear in any order we want along the main diagonal of D.
In every example above each eigenvalue has had only one basic eigenvector. Here is a diagonalizable
matrix where this is not the case.
Example 3.3.9

Diagonalize the matrix A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}

Solution. To compute the characteristic polynomial of A, first add rows 2 and 3 of xI − A to row 1,
and then subtract column 1 from columns 2 and 3:

c_A(x) = \det \begin{bmatrix} x & -1 & -1 \\ -1 & x & -1 \\ -1 & -1 & x \end{bmatrix} = \det \begin{bmatrix} x-2 & x-2 & x-2 \\ -1 & x & -1 \\ -1 & -1 & x \end{bmatrix} = \det \begin{bmatrix} x-2 & 0 & 0 \\ -1 & x+1 & 0 \\ -1 & 0 & x+1 \end{bmatrix} = (x-2)(x+1)^2

Hence the eigenvalues are λ1 = 2 and λ2 = −1, with λ2 repeated twice (we say that λ2 has
multiplicity two). However, A is diagonalizable. For λ1 = 2, the system of equations
(λ1I − A)x = 0 has general solution x = t \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}, as the reader can verify, so a basic λ1-eigenvector
is x1 = [1, 1, 1]^T.

Turning to the repeated eigenvalue λ2 = −1, we must solve (λ2I − A)x = 0. By gaussian
elimination, the general solution is x = s \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix} + t \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} where s and t are arbitrary. Hence
the gaussian algorithm produces two basic λ2-eigenvectors x2 = [−1, 1, 0]^T and y2 = [−1, 0, 1]^T. If we
take

P = \begin{bmatrix} x_1 & x_2 & y_2 \end{bmatrix} = \begin{bmatrix} 1 & -1 & -1 \\ 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}

we find that P is invertible. Hence
P⁻¹AP = diag (2, −1, −1) by Theorem 3.3.4.
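The diagonalization in Example 3.3.9 can be confirmed in a few lines; in this sketch (ours) the columns of P are the basic eigenvectors found above:

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

# Columns: x1, x2, y2 from Example 3.3.9
P = np.array([[1., -1., -1.],
              [1.,  1.,  0.],
              [1.,  0.,  1.]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))   # diag(2, -1, -1)
```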
Example 3.3.9 typifies every diagonalizable matrix. To describe the general case, we need some terminology: an eigenvalue λ of a square matrix A is said to have multiplicity m if it occurs m times as a root of the characteristic polynomial cA(x).

For example, the eigenvalue λ2 = −1 in Example 3.3.9 has multiplicity 2. In that example the gaussian
algorithm yields two basic λ2 -eigenvectors, the same number as the multiplicity. This works in general.
Theorem 3.3.5
A square matrix A is diagonalizable if and only if every eigenvalue λ of multiplicity m yields
exactly m basic eigenvectors; that is, if and only if the general solution of the system (λ I − A)x = 0
has exactly m parameters.
Theorem 3.3.6
An n × n matrix with n distinct eigenvalues is diagonalizable.
The proofs of Theorem 3.3.5 and Theorem 3.3.6 require more advanced techniques and are given in Chap-
ter 5. The following procedure summarizes the method.
Diagonalization Algorithm

To diagonalize an n × n matrix A:

Step 1. Find the distinct eigenvalues λ of A.

Step 2. Compute a set of basic eigenvectors corresponding to each of these eigenvalues λ.

Step 3. The matrix A is diagonalizable if and only if there are n basic eigenvectors in all.

Step 4. If A is diagonalizable, the n × n matrix P with these basic eigenvectors as its columns is
a diagonalizing matrix for A, that is, P is invertible and P⁻¹AP is diagonal.
The diagonalization algorithm is valid even if the eigenvalues are nonreal complex numbers. In this case
the eigenvectors will also have complex entries, but we will not pursue this here.
Example 3.3.10

Show that A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} is not diagonalizable.

Solution 1. The characteristic polynomial is cA(x) = (x − 1)², so A has only one eigenvalue λ1 = 1,
of multiplicity 2. But the system of equations (λ1I − A)x = 0 has general solution t \begin{bmatrix} 1 \\ 0 \end{bmatrix}, so there
is only one parameter, and so only one basic eigenvector \begin{bmatrix} 1 \\ 0 \end{bmatrix}. Hence A is not diagonalizable.

Solution 2. Since λ1 = 1 is the only eigenvalue, if A were diagonalizable then P⁻¹AP = diag (1, 1) = I
for some invertible matrix P. But then A = PIP⁻¹ = I, a contradiction.
Diagonalizable matrices share many properties of their eigenvalues. The following example illustrates
why.
Example 3.3.11
If λ³ = 5λ for every eigenvalue of the diagonalizable matrix A, show that A³ = 5A.
Solution. Let P⁻¹AP = D = diag (λ1, . . . , λn). Because λi³ = 5λi for each i, we obtain

D^3 = \text{diag}(λ_1^3, \dots, λ_n^3) = \text{diag}(5λ_1, \dots, 5λ_n) = 5D

Hence A³ = (PDP⁻¹)³ = PD³P⁻¹ = P(5D)P⁻¹ = 5(PDP⁻¹) = 5A using Theorem 3.3.1. This is
what we wanted.
If p(x) is any polynomial and p(λ ) = 0 for every eigenvalue of the diagonalizable matrix A, an argu-
ment similar to that in Example 3.3.11 shows that p(A) = 0. Thus Example 3.3.11 deals with the case
p(x) = x³ − 5x. In general, p(A) is called the evaluation of the polynomial p(x) at the matrix A. For
example, if p(x) = 2x³ − 3x + 5, then p(A) = 2A³ − 3A + 5I—note the use of the identity matrix.
In particular, if cA (x) denotes the characteristic polynomial of A, we certainly have cA (λ ) = 0 for each
eigenvalue λ of A (Theorem 3.3.2). Hence cA (A) = 0 for every diagonalizable matrix A. This is, in fact,
true for any square matrix, diagonalizable or not, and the general result is called the Cayley-Hamilton
theorem. It is proved in Section 8.7 and again in Section 11.1.
We began Section 3.3 with an example from ecology which models the evolution of the population of a
species of birds as time goes on. As promised, we now complete the example—Example 3.3.12 below.
The bird population was described by computing the female population profile v_k = \begin{bmatrix} a_k \\ j_k \end{bmatrix} of the
species, where ak and jk represent the number of adult and juvenile females present k years after the initial
values a0 and j0 were observed. The model assumes that these numbers are related by the following
equations:

a_{k+1} = \tfrac{1}{2}a_k + \tfrac{1}{4}j_k
j_{k+1} = 2a_k

If we write A = \begin{bmatrix} 1/2 & 1/4 \\ 2 & 0 \end{bmatrix}, the columns vk satisfy vk+1 = Avk for each k = 0, 1, 2, . . . .
Hence vk = Ak v0 for each k = 1, 2, . . . . We can now use our diagonalization techniques to determine the
population profile vk for all values of k in terms of the initial values.
Example 3.3.12

Assuming that the initial values were a0 = 100 adult females and j0 = 40 juvenile females,
compute ak and jk for k = 1, 2, . . . .

Solution. The characteristic polynomial of the matrix A = \begin{bmatrix} 1/2 & 1/4 \\ 2 & 0 \end{bmatrix} is

c_A(x) = x^2 - \tfrac{1}{2}x - \tfrac{1}{2} = (x - 1)(x + \tfrac{1}{2})

so the eigenvalues are λ1 = 1 and λ2 = −1/2, and gaussian
elimination gives corresponding basic eigenvectors \begin{bmatrix} 1/2 \\ 1 \end{bmatrix} and \begin{bmatrix} -1/4 \\ 1 \end{bmatrix}. For convenience, we can
use multiples x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} and x_2 = \begin{bmatrix} -1 \\ 4 \end{bmatrix} respectively. Hence a diagonalizing matrix is
P = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix}, and we obtain

P⁻¹AP = D where D = \begin{bmatrix} 1 & 0 \\ 0 & -1/2 \end{bmatrix}

Hence A^k = PD^kP⁻¹ gives

A^k = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & (-\tfrac{1}{2})^k \end{bmatrix} \tfrac{1}{6}\begin{bmatrix} 4 & 1 \\ -2 & 1 \end{bmatrix}
= \tfrac{1}{6}\begin{bmatrix} 4 + 2(-\tfrac{1}{2})^k & 1 - (-\tfrac{1}{2})^k \\ 8 - 8(-\tfrac{1}{2})^k & 2 + 4(-\tfrac{1}{2})^k \end{bmatrix}

Hence we obtain

\begin{bmatrix} a_k \\ j_k \end{bmatrix} = v_k = A^k v_0 = \tfrac{1}{6}\begin{bmatrix} 4 + 2(-\tfrac{1}{2})^k & 1 - (-\tfrac{1}{2})^k \\ 8 - 8(-\tfrac{1}{2})^k & 2 + 4(-\tfrac{1}{2})^k \end{bmatrix}\begin{bmatrix} 100 \\ 40 \end{bmatrix} = \tfrac{1}{6}\begin{bmatrix} 440 + 160(-\tfrac{1}{2})^k \\ 880 - 640(-\tfrac{1}{2})^k \end{bmatrix}

Equating top and bottom entries, we obtain exact formulas for ak and jk:

a_k = \tfrac{220}{3} + \tfrac{80}{3}(-\tfrac{1}{2})^k \quad and \quad j_k = \tfrac{440}{3} - \tfrac{320}{3}(-\tfrac{1}{2})^k \quad for k = 1, 2, \cdots

In practice, the exact values of ak and jk are not usually required. What is needed is a measure of
how these numbers behave for large values of k. This is easy to obtain here. Since (−1/2)^k is nearly
zero for large k, we have the following approximate values:

a_k ≈ \tfrac{220}{3} \quad and \quad j_k ≈ \tfrac{440}{3} \quad if k is large

Hence, in the long term, the female population stabilizes with approximately twice as many
juveniles as adults.
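The long-term behaviour found in Example 3.3.12 can also be seen by simply iterating vk+1 = Avk. A small sketch of ours:

```python
import numpy as np

A = np.array([[0.5, 0.25],
              [2.0, 0.0]])
v = np.array([100., 40.])   # v0 = (a0, j0)

for k in range(1, 11):
    v = A @ v
    print(k, v)
# The profile converges to about (73.33, 146.67), i.e. (220/3, 440/3),
# with the juvenile count twice the adult count.
```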
If a matrix A and a column v0 are given, the sequence of columns v0, v1, v2, . . . determined by

v_{k+1} = A v_k \quad for each k ≥ 0 \qquad (3.9)

is called a linear dynamical system. Hence the columns vk are determined by the powers A^k of the matrix A and, as we have seen, these powers
can be efficiently computed if A is diagonalizable. In fact Equation 3.9 can be used to give a nice “formula”
for the columns vk in this case.
Assume that A is diagonalizable
with eigenvalues
λ1 , λ2 , . . . , λn and corresponding basic eigenvectors
x1 , x2 , . . . , xn . If P = x1 x2 . . . xn is a diagonalizing matrix with the xi as columns, then P is
invertible and
P−1 AP = D = diag (λ1 , λ2 , · · · , λn )
by Theorem 3.3.4. Hence A = PDP⁻¹, so Equation 3.9 and Theorem 3.3.1 give

v_k = A^k v_0 = (PDP^{-1})^k v_0 = PD^kP^{-1} v_0

for each k = 1, 2, . . . . For convenience, we denote the column P⁻¹v0 arising here as follows:

b = P^{-1}v_0 = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
Then the expression for vk becomes

v_k = PD^k(P^{-1}v_0) = \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} λ_1^k & 0 & \cdots & 0 \\ 0 & λ_2^k & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & λ_n^k \end{bmatrix} \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}
= \begin{bmatrix} x_1 & x_2 & \cdots & x_n \end{bmatrix} \begin{bmatrix} b_1 λ_1^k \\ b_2 λ_2^k \\ \vdots \\ b_n λ_n^k \end{bmatrix}
= b_1 λ_1^k x_1 + b_2 λ_2^k x_2 + \cdots + b_n λ_n^k x_n \qquad (3.10)
for each k ≥ 0. This is a useful exact formula for the columns vk . Note that, in particular,
v0 = b1 x1 + b2 x2 + · · · + bn xn
However, such an exact formula for vk is often not required in practice; all that is needed is to estimate
vk for large values of k (as was done in Example 3.3.12). This can be easily done if A has a largest
eigenvalue. An eigenvalue λ of a matrix A is called a dominant eigenvalue of A if it has multiplicity 1
and
|λ| > |µ| for all eigenvalues µ ≠ λ
where |λ | denotes the absolute value of the number λ . For example, λ1 = 1 is dominant in Example 3.3.12.
Returning to the above discussion, suppose that A has a dominant eigenvalue. By choosing the order
in which the columns xi are placed in P, we may assume that λ1 is dominant among the eigenvalues
λ1 , λ2 , . . . , λn of A (see the discussion following Example 3.3.8). Now recall the exact expression for vk
in Equation 3.10 above:
vk = b1 λ1k x1 + b2 λ2k x2 + · · · + bn λnk xn
Take λ1^k out as a common factor in this equation to get

v_k = λ_1^k \left[ b_1 x_1 + b_2 \left(\tfrac{λ_2}{λ_1}\right)^k x_2 + \cdots + b_n \left(\tfrac{λ_n}{λ_1}\right)^k x_n \right]

for each k ≥ 0. Since λ1 is dominant, we have |λi| < |λ1| for each i ≥ 2, so each of the numbers (λi/λ1)^k
becomes small in absolute value as k increases. Hence vk is approximately equal to the first term λ1^k b1 x1,
and we write this as vk ≈ λ1^k b1 x1. These observations are summarized in the following theorem (together
with the above exact formula for vk ).
Theorem 3.3.7

Consider the dynamical system v0, v1, v2, . . . with matrix recurrence

v_{k+1} = A v_k \quad for k ≥ 0

where A and v0 are given. Assume that A is a diagonalizable n × n matrix with eigenvalues
λ1, λ2, . . . , λn and corresponding basic eigenvectors x1, x2, . . . , xn, and let
P = [x1 x2 · · · xn] be the diagonalizing matrix. Then an exact formula for vk is

v_k = b_1 λ_1^k x_1 + b_2 λ_2^k x_2 + \cdots + b_n λ_n^k x_n \quad for each k ≥ 0

where the coefficients bi come from b = P⁻¹v0. Moreover, if A has dominant¹² eigenvalue λ1, then vk ≈ b1λ1^k x1 for sufficiently large k.
12 Similar results can be found in other situations. If for example, eigenvalues λ1 and λ2 (possibly equal) satisfy |λ1 | = |λ2 | >
|λi | for all i > 2, then we obtain vk ≈ b1 λ1k x1 + b2 λ2k x2 for large k.
Example 3.3.13

Returning to Example 3.3.12, we see that λ1 = 1 is the dominant eigenvalue, with eigenvector
x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}. Here P = \begin{bmatrix} 1 & -1 \\ 2 & 4 \end{bmatrix} and v_0 = \begin{bmatrix} 100 \\ 40 \end{bmatrix}, so P^{-1}v_0 = \tfrac{1}{3}\begin{bmatrix} 220 \\ -80 \end{bmatrix}. Hence b1 = 220/3 in
the notation of Theorem 3.3.7, so

\begin{bmatrix} a_k \\ j_k \end{bmatrix} = v_k ≈ b_1 λ_1^k x_1 = \tfrac{220}{3} \cdot 1^k \begin{bmatrix} 1 \\ 2 \end{bmatrix}

where k is large. Hence ak ≈ 220/3 and jk ≈ 440/3 as in Example 3.3.12.
This next example uses Theorem 3.3.7 to solve a “linear recurrence.” See also Section 3.4.
Example 3.3.14

Suppose a sequence x0, x1, x2, . . . is determined by insisting that x0 and x1 are specified and that

x_{k+2} = 2x_k - x_{k+1} \quad for every k ≥ 0

Find a formula for xk in terms of x0 and x1.

Solution. Write v_k = \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix}. Using the linear recurrence xk+2 = 2xk − xk+1 repeatedly gives

v_{k+1} = \begin{bmatrix} x_{k+1} \\ x_{k+2} \end{bmatrix} = \begin{bmatrix} x_{k+1} \\ 2x_k - x_{k+1} \end{bmatrix} = A v_k \quad where \quad A = \begin{bmatrix} 0 & 1 \\ 2 & -1 \end{bmatrix}

Here cA(x) = x² + x − 2 = (x − 1)(x + 2), so the eigenvalues are λ1 = 1 and λ2 = −2, with basic eigenvectors x_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} and x_2 = \begin{bmatrix} 1 \\ -2 \end{bmatrix}. With P = [x1 x2] we get b = P^{-1}v_0 = \tfrac{1}{3}\begin{bmatrix} 2x_0 + x_1 \\ x_0 - x_1 \end{bmatrix}, so Theorem 3.3.7 gives the exact formula for vk, and taking top entries yields

x_k = \tfrac{1}{3}(2x_0 + x_1) + \tfrac{1}{3}(x_0 - x_1)(-2)^k \quad for each k ≥ 0

The reader should check this for the first few values of k.
If a dynamical system vk+1 = Avk is given, the sequence v0, v1, v2, . . . is called the trajectory of the
system starting at v0. It is instructive to obtain a graphical plot of the system by writing v_k = \begin{bmatrix} x_k \\ y_k \end{bmatrix} and
plotting the successive values as points in the plane, identifying vk with the point (xk, yk) in the plane. We
give several examples which illustrate properties of dynamical systems. For ease of calculation we assume
that the matrix A is simple, usually diagonal.
Example 3.3.15

Let A = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/3 \end{bmatrix}. Then the eigenvalues are 1/2 and 1/3, with corresponding eigenvectors
x_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and x_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. The exact formula is

v_k = b_1\left(\tfrac{1}{2}\right)^k \begin{bmatrix} 1 \\ 0 \end{bmatrix} + b_2\left(\tfrac{1}{3}\right)^k \begin{bmatrix} 0 \\ 1 \end{bmatrix}

for k = 0, 1, 2, . . . .

[Diagram: trajectories of the system in the xy-plane, approaching the origin O.]
Example 3.3.16

Let A = \begin{bmatrix} 3/2 & 0 \\ 0 & 4/3 \end{bmatrix}. Here the eigenvalues are 3/2 and 4/3, with corresponding eigenvectors
x_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and x_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} as before. The exact formula is

v_k = b_1\left(\tfrac{3}{2}\right)^k \begin{bmatrix} 1 \\ 0 \end{bmatrix} + b_2\left(\tfrac{4}{3}\right)^k \begin{bmatrix} 0 \\ 1 \end{bmatrix}

for k = 0, 1, 2, . . . .

[Diagram: trajectories of the system in the xy-plane, receding from the origin O.]
Example 3.3.17

Let A = \begin{bmatrix} 1 & -1/2 \\ -1/2 & 1 \end{bmatrix}. Now the eigenvalues are 3/2 and 1/2, with corresponding eigenvectors
x_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix} and x_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}. The exact formula is

v_k = b_1\left(\tfrac{3}{2}\right)^k \begin{bmatrix} -1 \\ 1 \end{bmatrix} + b_2\left(\tfrac{1}{2}\right)^k \begin{bmatrix} 1 \\ 1 \end{bmatrix}

for k = 0, 1, 2, . . . . In this case 3/2 is the dominant eigenvalue
so, if b1 ≠ 0, we have v_k ≈ b_1\left(\tfrac{3}{2}\right)^k \begin{bmatrix} -1 \\ 1 \end{bmatrix} for large k, and vk
is approaching the line y = −x.

However, if b1 = 0, then v_k = b_2\left(\tfrac{1}{2}\right)^k \begin{bmatrix} 1 \\ 1 \end{bmatrix} and so approaches
the origin along the line y = x. In general the trajectories appear
as in the diagram, and the origin is called a saddle point for the
dynamical system in this case.

[Diagram: saddle-point trajectories near the origin O.]
Example 3.3.18

Let A = \begin{bmatrix} 0 & 1/2 \\ -1/2 & 0 \end{bmatrix}. Now the characteristic polynomial is cA(x) = x² + 1/4, so the eigenvalues are
the complex numbers i/2 and −i/2 where i² = −1. Hence A is not diagonalizable as a real matrix.
However, the trajectories are not difficult to describe. If we start with v_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, then the
trajectory begins as

v_1 = \begin{bmatrix} 1/2 \\ -1/2 \end{bmatrix}, v_2 = \begin{bmatrix} -1/4 \\ -1/4 \end{bmatrix}, v_3 = \begin{bmatrix} -1/8 \\ 1/8 \end{bmatrix}, v_4 = \begin{bmatrix} 1/16 \\ 1/16 \end{bmatrix}, v_5 = \begin{bmatrix} 1/32 \\ -1/32 \end{bmatrix}, v_6 = \begin{bmatrix} -1/64 \\ -1/64 \end{bmatrix}, \dots

The first five of these points are plotted in the diagram. Here
each trajectory spirals in toward the origin, so the origin is an
attractor. Note that the two (complex) eigenvalues have absolute
value less than 1 here. If they had absolute value greater than
1, the trajectories would spiral out from the origin.

[Diagram: the points v0, v1, v2, . . . spiralling in toward the origin O.]
Google PageRank
Dominant eigenvalues are useful to the Google search engine for finding information on the Web. If an
information query comes in from a client, Google has a sophisticated method of establishing the “rele-
vance” of each site to that query. When the relevant sites have been determined, they are placed in order of
importance using a ranking of all sites called the PageRank. The relevant sites with the highest PageRank
are the ones presented to the client. It is the construction of the PageRank that is our interest here.
The Web contains many links from one site to another. Google interprets a link from site j to site
i as a “vote” for the importance of site i. Hence if site i has more links to it than does site j, then i is
regarded as more “important” and assigned a higher PageRank. One way to look at this is to view the sites
as vertices in a huge directed graph (see Section 2.2). Then if site j links to site i there is an edge from j
to i, and hence the (i, j)-entry is a 1 in the associated adjacency matrix (called the connectivity matrix in
this context). Thus a large number of 1s in row i of this matrix is a measure of the PageRank of site i.13
However this does not take into account the PageRank of the sites that link to i. Intuitively, the higher
the rank of these sites, the higher the rank of site i. One approach is to compute a dominant eigenvector x
for the connectivity matrix. In most cases the entries of x can be chosen to be positive with sum 1. Each
site corresponds to an entry of x, so the sum of the entries of sites linking to a given site i is a measure of
the rank of site i. In fact, Google chooses the PageRank of a site so that it is proportional to this sum.14
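A dominant eigenvector of a connectivity matrix can be approximated by repeatedly multiplying a starting vector by the matrix and rescaling (the power method of Section 8.5). The toy four-site link matrix below is invented purely for illustration, not Google's data:

```python
import numpy as np

# Toy connectivity matrix: entry (i, j) is 1 when site j links to site i
C = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 1., 0., 1.],
              [0., 0., 1., 0.]])

x = np.ones(4) / 4        # start with equal ranks
for _ in range(100):
    x = C @ x
    x = x / x.sum()       # rescale so the entries sum to 1

print(x)   # approximate dominant eigenvector; site 3 (index 2) ranks highest here
```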
d. A = \begin{bmatrix} 1 & 3 & 2 \\ -1 & 2 & 1 \\ 4 & -1 & -1 \end{bmatrix}, \quad v_0 = \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix}

Exercise 3.3.3 Show that A has λ = 0 as an eigenvalue if and only if A is not invertible.

Exercise 3.3.4 Let A denote an n × n matrix and put A1 = A − αI, α in R. Show that λ is an eigenvalue of A if and only if λ − α is an eigenvalue of A1. (Hence, the eigenvalues of A1 are just those of A “shifted” by α.) How do the eigenvectors compare?

Exercise 3.3.5 Show that the eigenvalues of \begin{bmatrix} \cos θ & -\sin θ \\ \sin θ & \cos θ \end{bmatrix} are e^{iθ} and e^{−iθ}. (See Appendix A)

Exercise 3.3.6 Find the characteristic polynomial of the n × n identity matrix I. Show that I has exactly one eigenvalue and find the eigenvectors.

Exercise 3.3.7 Given A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, show that:

a. cA(x) = x² − ( tr A)x + det A, where tr A = a + d is called the trace of A.
b. The eigenvalues are \tfrac{1}{2}\left[(a + d) ± \sqrt{(a - d)^2 + 4bc}\right].

Exercise 3.3.8 In each case, find P⁻¹AP and then compute Aⁿ.

a. A = \begin{bmatrix} 6 & -5 \\ 2 & -1 \end{bmatrix}, P = \begin{bmatrix} 1 & 5 \\ 1 & 2 \end{bmatrix}
b. A = \begin{bmatrix} -7 & -12 \\ 6 & 10 \end{bmatrix}, P = \begin{bmatrix} -3 & 4 \\ 2 & -3 \end{bmatrix}

[Hint: (PDP⁻¹)ⁿ = PDⁿP⁻¹ for each n = 1, 2, . . . .]

Exercise 3.3.9

a. If A = \begin{bmatrix} 1 & 3 \\ 0 & 2 \end{bmatrix} and B = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, verify that A and B are diagonalizable, but AB is not.
b. If D = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, find a diagonalizable matrix A such that D + A is not diagonalizable.

Exercise 3.3.10 If A is an n × n matrix, show that A is diagonalizable if and only if A^T is diagonalizable.

Exercise 3.3.11 If A is diagonalizable, show that each of the following is also diagonalizable.

a. Aⁿ, n ≥ 1
b. kA, k any scalar.
c. p(A), p(x) any polynomial (Theorem 3.3.1)
d. U⁻¹AU for any invertible matrix U.
e. kI + A for any scalar k.

Exercise 3.3.12 Give an example of two diagonalizable matrices A and B whose sum A + B is not diagonalizable.

Exercise 3.3.13 If A is diagonalizable and 1 and −1 are the only eigenvalues, show that A⁻¹ = A.

Exercise 3.3.14 If A is diagonalizable and 0 and 1 are the only eigenvalues, show that A² = A.

Exercise 3.3.15 If A is diagonalizable and λ ≥ 0 for each eigenvalue of A, show that A = B² for some matrix B.

Exercise 3.3.16 If P⁻¹AP and P⁻¹BP are both diagonal, show that AB = BA. [Hint: Diagonal matrices commute.]

Exercise 3.3.17 A square matrix A is called nilpotent if Aⁿ = 0 for some n ≥ 1. Find all nilpotent diagonalizable matrices. [Hint: Theorem 3.3.1.]

Exercise 3.3.18 Let A be any n × n matrix and r ≠ 0 a real number.

a. Show that the eigenvalues of rA are precisely the numbers rλ, where λ is an eigenvalue of A.
b. Show that c_{rA}(x) = rⁿ c_A(x/r).

Exercise 3.3.19

a. If all rows of A have the same sum s, show that s is an eigenvalue.
b. If all columns of A have the same sum s, show that s is an eigenvalue.

Exercise 3.3.20 Let A be an invertible n × n matrix.

a. Show that the eigenvalues of A are nonzero.
b. Show that the eigenvalues of A⁻¹ are precisely the numbers 1/λ, where λ is an eigenvalue of A.
c. Show that c_{A^{-1}}(x) = \frac{(-x)^n}{\det A} c_A\!\left(\tfrac{1}{x}\right).

Exercise 3.3.21 Let λ be an eigenvalue of a square matrix A.

b. Show that λ³ − 2λ + 3 is an eigenvalue of A³ − 2A + 3I.
c. Show that p(λ) is an eigenvalue of p(A) for any nonzero polynomial p(x).

Exercise 3.3.22 If A is an n × n matrix, show that c_{A²}(x²) = (−1)ⁿ c_A(x) c_A(−x).

Exercise 3.3.23 An n × n matrix A is called nilpotent if A^m = 0 for some m ≥ 1.

a. Show that every triangular matrix with zeros on the main diagonal is nilpotent.
b. If A is nilpotent, show that λ = 0 is the only eigenvalue (even complex) of A.
c. Deduce that cA(x) = xⁿ, if A is n × n and nilpotent.

Exercise 3.3.24 Let A be diagonalizable with real eigenvalues and assume that A^m = I for some m ≥ 1.

a. Show that A² = I.
b. If m is odd, show that A = I. [Hint: Theorem A.3]

Exercise 3.3.25 Let A² = I, and assume that A ≠ I and A ≠ −I.

a. Show that the only eigenvalues of A are λ = 1 and λ = −1.
d. Now prove (c) geometrically using Theorem 3.3.3.

Exercise 3.3.26 Let A = \begin{bmatrix} 2 & 3 & -3 \\ 1 & 0 & -1 \\ 1 & 1 & -2 \end{bmatrix} and B = \begin{bmatrix} 0 & 1 & 0 \\ 3 & 0 & 1 \\ 2 & 0 & 0 \end{bmatrix}. Show that cA(x) = cB(x) = (x + 1)²(x − 2), but A is diagonalizable and B is not.

Exercise 3.3.27

a. Show that the only diagonalizable matrix A that has only one eigenvalue λ is the scalar matrix A = λI.
b. Is \begin{bmatrix} 3 & -2 \\ 2 & -1 \end{bmatrix} diagonalizable?

Exercise 3.3.28 Characterize the diagonalizable n × n matrices A such that A² − 3A + 2I = 0 in terms of their eigenvalues. [Hint: Theorem 3.3.1.]

Exercise 3.3.29 Let A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix} where B and C are square matrices.

a. If B and C are diagonalizable via Q and R (that is, Q⁻¹BQ and R⁻¹CR are diagonal), show that A is diagonalizable via \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix}.
b. Use (a) to diagonalize A if B = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix} and C = \begin{bmatrix} 7 & -1 \\ -1 & 7 \end{bmatrix}.

Exercise 3.3.30 Let A = \begin{bmatrix} B & 0 \\ 0 & C \end{bmatrix} where B and C are square matrices.

a. Show that cA(x) = cB(x)cC(x).
b. If x and y are eigenvectors of B and C, respectively, show that \begin{bmatrix} x \\ 0 \end{bmatrix} and \begin{bmatrix} 0 \\ y \end{bmatrix} are eigenvectors of A, and show how every eigenvector of A arises from such eigenvectors.

Exercise 3.3.31 Referring to the model in Example 3.3.1, determine if the population stabilizes, becomes extinct, or becomes large in each case. Denote the adult and juvenile survival rates as A and J, and the reproduction rate as R.

      R     A     J
a.    2    1/2   1/2
b.    3    1/4   1/4
c.    2    1/4   1/3
d.    3    3/5   1/5

Exercise 3.3.32 In the model of Example 3.3.1, does the final outcome depend on the initial population of adult and juvenile females? Support your answer.

Exercise 3.3.33 In Example 3.3.1, keep the same reproduction rate of 2 and the same adult survival rate of 1/2, but suppose that the juvenile survival rate is ρ. Determine which values of ρ cause the population to become extinct or to become large.

Exercise 3.3.34 In Example 3.3.1, let the juvenile survival rate be 2/5 and let the reproduction rate be 2. What values of the adult survival rate α will ensure that the population stabilizes?

3.4 An Application to Linear Recurrences
It often happens that a problem can be solved by finding a sequence of numbers x0 , x1 , x2 , . . . where the
first few are known, and subsequent numbers are given in terms of earlier ones. Here is a combinatorial
example where the object is to count the number of ways to do something.
Example 3.4.1
An urban planner wants to determine the number xk of ways that a row of k parking spaces can be
filled with cars and trucks if trucks take up two spaces each. Find the first few values of xk .
Solution. Clearly, x0 = 1 and x1 = 1, while x2 = 2 since there can be two cars or one truck. We
have x3 = 3 (the 3 configurations are ccc, cT, and Tc) and x4 = 5 (cccc, ccT, cTc, Tcc, and TT). The
key to this method is to find a way to express each subsequent xk in terms of earlier values. In this
case we claim that
xk+2 = xk + xk+1 for every k ≥ 0 (3.11)
Indeed, every way to fill k + 2 spaces falls into one of two categories: Either a car is parked in the
first space (and the remaining k + 1 spaces are filled in xk+1 ways), or a truck is parked in the first
two spaces (with the other k spaces filled in xk ways). Hence, there are xk+1 + xk ways to fill the
k + 2 spaces. This is Equation 3.11.
The recurrence in Equation 3.11 determines xk for every k ≥ 2 since x0 and x1 are given. In fact,
the first few values are
x0 = 1
x1 = 1
x2 = x0 + x1 = 2
x3 = x1 + x2 = 3
x4 = x2 + x3 = 5
x5 = x3 + x4 = 8
⋮
Clearly, we can find xk for any value of k, but one wishes for a “formula” for xk as a function of k.
It turns out that such a formula can be found using diagonalization. We will return to this example
later.
Example 3.4.2
Suppose the numbers x0 , x1 , x2 , . . . are given by the linear recurrence relation
xk+2 = xk+1 + 6xk for k ≥ 0
where x0 and x1 are specified. Find a formula for xk when x0 = 1 and x1 = 3, and also when x0 = 1
and x1 = 1.
Solution. If x0 = 1 and x1 = 3, then
x2 = x1 + 6x0 = 9, x3 = x2 + 6x1 = 27, x4 = x3 + 6x2 = 81
and it is apparent that
xk = 3k for k = 0, 1, 2, 3, and 4
This formula holds for all k because it is true for k = 0 and k = 1, and it satisfies the recurrence
xk+2 = xk+1 + 6xk for each k as is readily checked.
However, if we begin instead with x0 = 1 and x1 = 1, the sequence continues
x2 = 7, x3 = 13, x4 = 55, x5 = 133, ...
In this case, the sequence is uniquely determined but no formula is apparent. Nonetheless, a simple
device transforms the recurrence into a matrix recurrence to which our diagonalization techniques
apply.
Returning to Example 3.4.1, these methods give an exact formula and a good approximation for the num-
bers xk in that problem.
Example 3.4.3
In Example 3.4.1, an urban planner wants to determine xk , the number of ways that a row of k
parking spaces can be filled with cars and trucks if trucks take up two spaces each. Find a formula
for xk and estimate it for large k.
Solution. We saw in Example 3.4.1 that the numbers xk satisfy the linear recurrence xk+2 = xk + xk+1.
If we write v_k = \begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} as before, this recurrence becomes a matrix recurrence for the vk:

v_{k+1} = \begin{bmatrix} x_{k+1} \\ x_{k+2} \end{bmatrix} = \begin{bmatrix} x_{k+1} \\ x_k + x_{k+1} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} x_k \\ x_{k+1} \end{bmatrix} = A v_k

for all k ≥ 0, where A = \begin{bmatrix} 0 & 1 \\ 1 & 1 \end{bmatrix}. Moreover, A is diagonalizable here. The characteristic
polynomial is cA(x) = x² − x − 1 with roots \tfrac{1}{2}\left(1 ± \sqrt{5}\right) by the quadratic formula, so A has
eigenvalues

λ_1 = \tfrac{1}{2}\left(1 + \sqrt{5}\right) \quad and \quad λ_2 = \tfrac{1}{2}\left(1 - \sqrt{5}\right)

Corresponding eigenvectors are x_1 = \begin{bmatrix} 1 \\ λ_1 \end{bmatrix} and x_2 = \begin{bmatrix} 1 \\ λ_2 \end{bmatrix} respectively, as the reader can verify.
As the matrix P = \begin{bmatrix} x_1 & x_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ λ_1 & λ_2 \end{bmatrix} is invertible, it is a diagonalizing matrix for A. We
compute the coefficients b1 and b2 (in Theorem 3.3.7) as follows:

\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = P^{-1}v_0 = \frac{1}{-\sqrt{5}}\begin{bmatrix} λ_2 & -1 \\ -λ_1 & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{1}{\sqrt{5}}\begin{bmatrix} λ_1 \\ -λ_2 \end{bmatrix}

using λ1 + λ2 = 1. Hence Theorem 3.3.7 gives the exact formula

x_k = b_1 λ_1^k + b_2 λ_2^k = \frac{1}{\sqrt{5}}\left[λ_1^{k+1} - λ_2^{k+1}\right] \quad for each k ≥ 0

Finally, observe that λ1 is dominant here (in fact, λ1 = 1.618 and λ2 = −0.618 to three decimal
places) so λ2^{k+1} is negligible compared with λ1^{k+1} when k is large. Thus,

x_k ≈ \frac{1}{\sqrt{5}}\,λ_1^{k+1} \quad for large k

This is a good approximation, even for as small a value as k = 12. Indeed, repeated use of the
recurrence xk+2 = xk + xk+1 gives the exact value x12 = 233, while the approximation is
x_{12} ≈ \frac{(1.618)^{13}}{\sqrt{5}} = 232.94.
The sequence x0 , x1 , x2 , . . . in Example 3.4.3 was first discussed in 1202 by Leonardo Pisano of Pisa,
also known as Fibonacci,15 and is now called the Fibonacci sequence. It is completely determined by
the conditions x0 = 1, x1 = 1 and the recurrence xk+2 = xk + xk+1 for each k ≥ 0. These numbers have
15 Fibonacci was born in Italy. As a young man he travelled to India where he encountered the “Fibonacci” sequence. He
returned to Italy and published this in his book Liber Abaci in 1202. In the book he is the first to bring the Hindu decimal
system for representing numbers to Europe.
been studied for centuries and have many interesting properties (there is even a journal, the Fibonacci
Quarterly, devoted exclusively to them). For example, biologists have discovered that the arrangement of
leaves around the stems of some plants follows a Fibonacci pattern. The formula x_k = \frac{1}{\sqrt{5}}\left[λ_1^{k+1} - λ_2^{k+1}\right]
in Example 3.4.3 is called the Binet formula. It is remarkable in that the xk are integers but λ1 and λ2 are
not. This phenomenon can occur even if the eigenvalues λi are nonreal complex numbers.
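The Binet formula is easy to test against the recurrence itself; a brief sketch of ours:

```python
import numpy as np

sqrt5 = np.sqrt(5.0)
l1 = (1 + sqrt5) / 2       # dominant eigenvalue, about 1.618
l2 = (1 - sqrt5) / 2       # about -0.618

def binet(k):
    # Exact formula from Example 3.4.3
    return (l1 ** (k + 1) - l2 ** (k + 1)) / sqrt5

# The recurrence x_{k+2} = x_k + x_{k+1} with x_0 = x_1 = 1
x = [1, 1]
for k in range(2, 13):
    x.append(x[k - 2] + x[k - 1])

print(x[12])               # 233, the exact value
print(round(binet(12)))    # 233, despite l1 and l2 being irrational
print(l1 ** 13 / sqrt5)    # about 233.0005, the dominant-term approximation
```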
We conclude with an example showing that nonlinear recurrences can be very complicated.
Example 3.4.4

Suppose a sequence x0, x1, x2, . . . satisfies the following recurrence:

x_{k+1} = \begin{cases} \tfrac{1}{2}x_k & \text{if } x_k \text{ is even} \\ 3x_k + 1 & \text{if } x_k \text{ is odd} \end{cases}

If x0 = 1, the sequence cycles: 1, 4, 2, 1, 4, 2, . . . . Starting instead with x0 = 7 produces

7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1, . . .

and it again cycles. However, it is not known whether every choice of x0 will lead eventually to 1.
It is quite possible that, for some x0 , the sequence will continue to produce different values
indefinitely, or will repeat a value and cycle without reaching 1. No one knows for sure.
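Readers can explore this recurrence with a few lines of code; the helper below (ours) follows a starting value until it reaches 1 or gives up:

```python
def collatz(x0, max_steps=1000):
    """Iterate x -> x/2 (x even) or 3x + 1 (x odd), recording the sequence."""
    seq = [x0]
    x = x0
    while x != 1 and len(seq) <= max_steps:
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        seq.append(x)
    return seq

print(collatz(7))
# [7, 22, 11, 34, 17, 52, 26, 13, 40, 20, 10, 5, 16, 8, 4, 2, 1]
```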
Exercise 3.4.1 Solve the following linear recurrences.

a. xk+2 = 3xk + 2xk+1, where x0 = 1 and x1 = 1.
b. xk+2 = 2xk − xk+1, where x0 = 1 and x1 = 2.
c. xk+2 = 2xk + xk+1, where x0 = 0 and x1 = 1.
d. xk+2 = 6xk − xk+1, where x0 = 1 and x1 = 1.

Exercise 3.4.2 Solve the following linear recurrences.

a. xk+3 = 6xk+2 − 11xk+1 + 6xk, where x0 = 1, x1 = 0, and x2 = 1.
b. xk+3 = −2xk+2 + xk+1 + 2xk, where x0 = 1, x1 = 0, and x2 = 1.

[Hint: Use v_k = \begin{bmatrix} x_k \\ x_{k+1} \\ x_{k+2} \end{bmatrix}.]

Exercise 3.4.3 In Example 3.4.1 suppose buses are also allowed to park, and let xk denote the number of ways a row of k parking spaces can be filled with cars, trucks, and buses.

b. If buses take up 4 spaces, find a recurrence for the xk and compute x10.

Exercise 3.4.4 A man must climb a flight of k steps. He always takes one or two steps at a time. Thus he can climb 3 steps in the following ways: 1, 1, 1; 1, 2; or 2, 1. Find sk, the number of ways he can climb the flight of k steps. [Hint: Fibonacci.]

Exercise 3.4.5 How many “words” of k letters can be made from the letters {a, b} if there are no adjacent a’s?

Exercise 3.4.6 How many sequences of k flips of a coin are there with no HH?

Exercise 3.4.7 Find xk, the number of ways to make a stack of k poker chips if only red, blue, and gold chips are used and no two gold chips are adjacent. [Hint: Show that xk+2 = 2xk+1 + 2xk by considering how many stacks have a red, blue, or gold chip on top.]

Exercise 3.4.8 A nuclear reactor contains α- and β-particles. In every second each α-particle splits into three β-particles, and each β-particle splits into an α-particle and two β-particles. If there is a single α-particle in the reactor at time t = 0, how many α-particles are there at t = 20 seconds? [Hint: Let xk and yk denote the number of α- and β-particles at time t = k seconds. Find xk+1 and yk+1 in terms of xk and yk.]

Exercise 3.4.9 The annual yield of wheat in a certain country has been found to equal the average of the yield in the previous two years. If the yields in 1990 and 1991 were 10 and 12 million tons respectively, find a formula for the yield k years after 1990. What is the long-term average yield?

Exercise 3.4.10 Find the general solution to the recurrence xk+1 = rxk + c where r and c are constants. [Hint: Consider the cases r = 1 and r ≠ 1 separately. If r ≠ 1, you will need the identity 1 + r + r² + · · · + r^{n−1} = \frac{1-r^n}{1-r} for n ≥ 1.]

Exercise 3.4.11 Consider the length 3 recurrence xk+3 = axk + bxk+1 + cxk+2.

a. If v_k = \begin{bmatrix} x_k \\ x_{k+1} \\ x_{k+2} \end{bmatrix} and A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ a & b & c \end{bmatrix}, show that vk+1 = Avk.
b. If λ is any eigenvalue of A, show that x = \begin{bmatrix} 1 \\ λ \\ λ^2 \end{bmatrix} is a λ-eigenvector. [Hint: Show directly that Ax = λx.]
c. Generalize (a) and (b) to a recurrence xk+4 = axk + bxk+1 + cxk+2 + dxk+3 of length 4.

Exercise 3.4.12 Consider the recurrence

xk+2 = axk+1 + bxk + c

where c may not be zero.

a. If a + b ≠ 1, show that p can be found such that, if we set yk = xk + p, then yk+2 = ayk+1 + byk. [Hence, the sequence xk can be found provided yk can be found by the methods of this section (or otherwise).]
b. Use (a) to solve xk+2 = xk+1 + 6xk + 5 where x0 = 1 and x1 = 1.

Exercise 3.4.13 Consider the recurrence

xk+2 = axk+1 + bxk + c(k) (3.12)

where c(k) is a function of k, and consider the related recurrence

xk+2 = axk+1 + bxk (3.13)

Suppose that xk = pk is a particular solution of Equation 3.12.

a. If qk is any solution of Equation 3.13, show that qk + pk is a solution of Equation 3.12.
b. Show that every solution of Equation 3.12 arises as in (a) as the sum of a solution of Equation 3.13 plus the particular solution pk of Equation 3.12.

3.5 An Application to Systems of Differential Equations
A function f of a real variable is said to be differentiable if its derivative exists and, in this case, we let f ′
denote the derivative. If f and g are differentiable functions, a system
f ′ = 3 f + 5g
g′ = − f + 2g
is called a system of first order differential equations, or a differential system for short. Solving many
practical problems often comes down to finding sets of functions that satisfy such a system (often in-
volving more than two functions). In this section we show how diagonalization can help. Of course an
acquaintance with calculus is required.
The simplest differential system is the single equation

f' = af, \quad where \ a \ is \ a \ constant \qquad (3.14)

It is easily verified that f(x) = e^{ax} is one solution; in fact, Equation 3.14 is simple enough for us to find
all solutions. Suppose that f is any solution, so that f′(x) = af(x) for all x. Consider the new function g
given by g(x) = f(x)e^{−ax}. Then the product rule of differentiation gives

g'(x) = f(x)\left[-ae^{-ax}\right] + f'(x)e^{-ax} = -af(x)e^{-ax} + [af(x)]e^{-ax} = 0
for all x. Hence the function g(x) has zero derivative and so must be a constant, say g(x) = c. Thus
c = g(x) = f (x)e−ax , that is
f (x) = ceax
In other words, every solution f (x) of Equation 3.14 is just a scalar multiple of eax . Since every such
scalar multiple is easily seen to be a solution of Equation 3.14, we have proved
Theorem 3.5.1
The set of solutions to f ′ = a f is {ceax | c any constant} = Reax .
Remarkably, this result together with diagonalization enables us to solve a wide variety of differential
systems.
Example 3.5.1
Assume that the number n(t) of bacteria in a culture at time t has the property that the rate of
change of n is proportional to n itself. If there are n0 bacteria present when t = 0, find the number
at time t.
Solution. Let k denote the proportionality constant. The rate of change of n(t) is its time-derivative
n′ (t), so the given relationship is n′ (t) = kn(t). Thus Theorem 3.5.1 shows that all solutions n are
given by n(t) = cekt , where c is a constant. In this case, the constant c is determined by the
requirement that there be n0 bacteria present when t = 0. Hence n0 = n(0) = ce^{k·0} = c, so
n(t) = n0 ekt
gives the number at time t. Of course the constant k depends on the strain of bacteria.
The condition that n(0) = n0 in Example 3.5.1 is called an initial condition or a boundary condition
and serves to select one solution from the available solutions.
Solving a variety of problems, particularly in science and engineering, comes down to solving a system
of linear differential equations. Diagonalization enters into this as follows. The general problem is to find
differentiable functions f1, f2, …, fn that satisfy a system of equations of the form

    f1′ = a11 f1 + a12 f2 + ⋯ + a1n fn
    f2′ = a21 f1 + a22 f2 + ⋯ + a2n fn
      ⋮
    fn′ = an1 f1 + an2 f2 + ⋯ + ann fn

where the aij are constants. This is called a linear system of differential equations or simply a differential system. The first step is to put it in matrix form. Write
tial system. The first step is to put it in matrix form. Write
    f = [f1; f2; …; fn],   f′ = [f1′; f2′; …; fn′],   A = [a11 a12 ⋯ a1n; a21 a22 ⋯ a2n; ⋮; an1 an2 ⋯ ann]

Then the system takes the compact matrix form

    f′ = Af
Hence, given the matrix A, the problem is to find a column f of differentiable functions that satisfies this
condition. This can be done if A is diagonalizable. Here is an example.
Example 3.5.2

Find a solution to the system

    f1′ = f1 + 3f2
    f2′ = 2f1 + 2f2

that satisfies f1(0) = 0, f2(0) = 5.

Solution. This is f′ = Af, where f = [f1; f2] and A = [1 3; 2 2]. The reader can verify that cA(x) = (x − 4)(x + 1), and that x1 = [1; 1] and x2 = [3; −2] are eigenvectors corresponding to the eigenvalues 4 and −1, respectively. Hence the diagonalization algorithm gives P⁻¹AP = [4 0; 0 −1], where P = [x1 x2] = [1 3; 1 −2]. Now consider new functions g1 and g2 given by f = Pg (equivalently, g = P⁻¹f), where g = [g1; g2]. Then

    [f1; f2] = [1 3; 1 −2][g1; g2],   that is,   f1 = g1 + 3g2 and f2 = g1 − 2g2

Substituting f = Pg into f′ = Af gives Pg′ = APg, and left multiplication by P⁻¹ yields

    g′ = P⁻¹APg

that is, g1′ = 4g1 and g2′ = −g2. By Theorem 3.5.1, g1 = ce^{4x} and g2 = de^{−x} for constants c and d. That is,

    f(x) = cx1 e^{4x} + dx2 e^{−x}

Finally, the conditions f1(0) = 0 and f2(0) = 5 give c + 3d = 0 and c − 2d = 5, so c = 3 and d = −1. Hence f1(x) = 3e^{4x} − 3e^{−x} and f2(x) = 3e^{4x} + 2e^{−x}.
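The computation above is easy to check numerically. The following short sketch uses Python with the NumPy library (an assumption of this illustration; any similar tool works) to diagonalize A and verify the solution and its boundary values:

```python
import numpy as np

# Numerical check of Example 3.5.2: f' = Af with f1(0) = 0, f2(0) = 5.
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])

# Eigenvalues and eigenvectors (columns of P), as in the diagonalization algorithm.
lam, P = np.linalg.eig(A)

# The constants come from f(0) = P @ c, so c = P^{-1} f(0).
f0 = np.array([0.0, 5.0])
c = np.linalg.solve(P, f0)

def f(x):
    # f(x) = sum_i c_i * x_i * e^{lambda_i x}
    return P @ (c * np.exp(lam * x))

print(f(0.0))  # [0. 5.], the boundary values
# Check f' = Af at a sample point with a central finite difference.
h, x = 1e-6, 0.3
print((f(x + h) - f(x - h)) / (2 * h) - A @ f(x))  # approximately [0. 0.]
```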
Theorem 3.5.2
Consider a linear system
f′ = Af
of differential equations, where A is an n × n diagonalizable matrix. Let P−1 AP be diagonal, where
P is given in terms of its columns
P = [x1 , x2 , · · · , xn ]
and {x1 , x2 , . . . , xn } are eigenvectors of A. If xi corresponds to the eigenvalue λi for each i, then
every solution f of f′ = Af has the form

    f(x) = c1 x1 e^{λ1 x} + c2 x2 e^{λ2 x} + ⋯ + cn xn e^{λn x}

where c1, c2, …, cn are arbitrary constants.
Proof. By Theorem 3.3.4, the matrix P = [x1 x2 ⋯ xn] is invertible and

    P⁻¹AP = [λ1 0 ⋯ 0; 0 λ2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ λn]

As in Example 3.5.2, write f = [f1; f2; …; fn] and define g = [g1; g2; …; gn] by g = P⁻¹f; equivalently, f = Pg. If P = [pij], this gives

    fi = pi1 g1 + pi2 g2 + ⋯ + pin gn
so f′ = Pg′. Substituting this into f′ = Af gives Pg′ = APg. But then left multiplication by P⁻¹ gives g′ = P⁻¹APg, so the original system of equations f′ = Af for f becomes much simpler in terms of g:

    [g1′; g2′; …; gn′] = [λ1 0 ⋯ 0; 0 λ2 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ λn][g1; g2; …; gn]

Hence gi′ = λi gi holds for each i, and Theorem 3.5.1 implies that the only solutions are

    gi(x) = ci e^{λi x}

where the coefficients ci are arbitrary. Then f = Pg gives the formula in the theorem. Because every solution of f′ = Af arises this way, that formula is called the general solution to the system of differential equations. In most cases the solution functions fi(x) are required to satisfy boundary conditions, often of the form fi(a) = bi, where a, b1, …, bn are prescribed numbers. These conditions determine the constants ci. The following example illustrates this and displays a situation where one eigenvalue has multiplicity greater than 1.
Example 3.5.3

Find the general solution to the system

    f1′ = 5f1 + 8f2 + 16f3
    f2′ = 4f1 + f2 + 8f3
    f3′ = −4f1 − 4f2 − 11f3

Then find a solution satisfying the boundary conditions f1(0) = f2(0) = f3(0) = 1.

Solution. The system has the form f′ = Af, where A = [5 8 16; 4 1 8; −4 −4 −11]. In this case cA(x) = (x + 3)²(x − 1) and eigenvectors corresponding to the eigenvalues −3, −3, and 1 are, respectively,

    x1 = [−1; 1; 0],   x2 = [−2; 0; 1],   x3 = [2; 1; −1]

Hence, by Theorem 3.5.2, the general solution is

    f(x) = c1 [−1; 1; 0] e^{−3x} + c2 [−2; 0; 1] e^{−3x} + c3 [2; 1; −1] e^{x},  ci constants.

The boundary conditions f1(0) = f2(0) = f3(0) = 1 amount to c1 x1 + c2 x2 + c3 x3 = [1; 1; 1], that is

    −c1 − 2c2 + 2c3 = 1,   c1 + c3 = 1,   c2 − c3 = 1

Solving gives c1 = −3, c2 = 5, and c3 = 4, so the required solution is f1(x) = −7e^{−3x} + 8e^{x}, f2(x) = −3e^{−3x} + 4e^{x}, f3(x) = 5e^{−3x} − 4e^{x}.
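Once the eigenvectors are known, the constants ci come from one linear system, which is as easy to solve by machine as by hand. A brief sketch (Python and NumPy are assumptions of this illustration):

```python
import numpy as np

# Example 3.5.3: determine c1, c2, c3 from c1*x1 + c2*x2 + c3*x3 = f(0) = (1, 1, 1).
X = np.array([[-1, -2,  2],
              [ 1,  0,  1],
              [ 0,  1, -1]], dtype=float)   # columns are the eigenvectors x1, x2, x3
c = np.linalg.solve(X, np.array([1.0, 1.0, 1.0]))
print(c)   # [-3.  5.  4.], matching c1 = -3, c2 = 5, c3 = 4
```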
Exercise 3.5.4 The population N(t) of a region at time t increases at a rate proportional to the population. If the population doubles every 5 years and is 3 million initially, find N(t).

Exercise 3.5.5 Let A be an invertible diagonalizable n × n matrix and let b be an n-column of constant functions. We can solve the system f′ = Af + b as follows:

a. If g satisfies g′ = Ag (using Theorem 3.5.2), show that f = g − A⁻¹b is a solution to f′ = Af + b.

b. Show that every solution to f′ = Af + b arises as in (a) for some solution g to g′ = Ag.

Exercise 3.5.6 Denote the second derivative of f by f′′ = (f′)′. Consider the second order differential equation

    f′′ − a1 f′ − a2 f = 0,  a1 and a2 real numbers    (3.15)

a. If f is a solution to Equation 3.15 let f1 = f and f2 = f′ − a1 f. Show that

    f1′ = a1 f1 + f2
    f2′ = a2 f1

that is

    [f1′; f2′] = [a1 1; a2 0][f1; f2]

b. Conversely, if [f1; f2] is a solution to the system in (a), show that f1 is a solution to Equation 3.15.

Exercise 3.5.7 Writing f′′′ = (f′′)′, consider the third order differential equation

    f′′′ − a1 f′′ − a2 f′ − a3 f = 0

where a1, a2, and a3 are real numbers. Let f1 = f, f2 = f′ − a1 f, and f3 = f′′ − a1 f′ − a2 f.

a. Show that [f1; f2; f3] is a solution to the system

    f1′ = a1 f1 + f2
    f2′ = a2 f1 + f3
    f3′ = a3 f1

that is

    [f1′; f2′; f3′] = [a1 1 0; a2 0 1; a3 0 0][f1; f2; f3]

b. Show further that if [f1; f2; f3] is any solution to this system, then f = f1 is a solution to the third order equation above.

Remark. A similar construction casts every linear differential equation of order n (with constant coefficients) as an n × n linear system of first order equations. However, the matrix need not be diagonalizable, so other methods have been developed.
3.6 Proof of the Cofactor Expansion Theorem

Recall that our definition of the term determinant is inductive: The determinant of any 1 × 1 matrix is
defined first; then it is used to define the determinants of 2 × 2 matrices. Then that is used for the 3 × 3
case, and so on. The case of a 1 × 1 matrix [a] poses no problem. We simply define
det [a] = a
as in Section 3.1. Given an n × n matrix A, define Ai j to be the (n − 1) × (n − 1) matrix obtained from A by
deleting row i and column j. Now assume that the determinant of any (n − 1) × (n − 1) matrix has been
defined. Then the determinant of A is defined to be
    det A = a11 det A11 − a21 det A21 + ⋯ + (−1)^{n+1} an1 det An1
          = Σ_{i=1}^{n} (−1)^{i+1} ai1 det Ai1
where summation notation has been introduced for convenience.¹⁶ Observe that, in the terminology of Section 3.1, this is just the cofactor expansion of det A along the first column, and that (−1)^{i+j} det Aij is the (i, j)-cofactor¹⁷ (previously denoted as cij(A)). To illustrate the definition, consider the 2 × 2 matrix A = [a11 a12; a21 a22]. Then the definition gives

    det [a11 a12; a21 a22] = a11 det [a22] − a21 det [a12] = a11 a22 − a21 a12
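The inductive definition translates directly into a short recursive program. The following Python sketch (illustrative only, and far less efficient than the row-reduction methods of this chapter) expands along the first column exactly as in the definition:

```python
# Recursive determinant via cofactor expansion along the first column.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for i in range(n):
        # A_{i1}: delete row i and column 1; (-1)**i matches (-1)^{i+1} for 1-based i.
        minor = [row[1:] for k, row in enumerate(A) if k != i]
        total += (-1) ** i * A[i][0] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 3, 0], [0, 1, 4]]))   # 25
```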
Lemma 3.6.1
Let A, B, and C be n × n matrices that are identical except that the pth row of A is the sum of the
pth rows of B and C. Then
det A = det B + det C
Proof. We proceed by induction on n, the cases n = 1 and n = 2 being easily checked. Consider ai1 and Ai1:

Case 1: If i ≠ p,

    ai1 = bi1 = ci1  and  det Ai1 = det Bi1 + det Ci1

by induction because Ai1, Bi1, Ci1 are identical except that one row of Ai1 is the sum of the corresponding rows of Bi1 and Ci1.

Case 2: If i = p,

    ap1 = bp1 + cp1  and  Ap1 = Bp1 = Cp1

Now write out the defining sum for det A, splitting off the pth term for special attention:

    det A = Σ_{i≠p} ai1 (−1)^{i+1} det Ai1 + (bp1 + cp1)(−1)^{p+1} det Ap1

where det Ai1 = det Bi1 + det Ci1 by induction. But the terms here involving Bi1 and bp1 add up to det B because ai1 = bi1 if i ≠ p and Ap1 = Bp1. Similarly, the terms involving Ci1 and cp1 add up to det C. Hence det A = det B + det C, as required.
¹⁶Summation notation is a convenient shorthand way to write sums of similar expressions. For example a1 + a2 + a3 + a4 = Σ_{i=1}^{4} ai, a5b5 + a6b6 + a7b7 + a8b8 = Σ_{k=5}^{8} ak bk, and 1² + 2² + 3² + 4² + 5² = Σ_{j=1}^{5} j².
¹⁷Note that we used the expansion along row 1 at the beginning of Section 3.1. The column 1 expansion definition is more convenient here.
Lemma 3.6.2

Let A = [aij] denote an n × n matrix.

1. If B = [bij] is formed from A by multiplying a row of A by a number u, then det B = u det A.
2. If A contains a row of zeros, then det A = 0.
3. If B = [bij] is formed by interchanging two rows of A, then det B = −det A.
4. If A contains two identical rows, then det A = 0.
5. If B = [bij] is formed by adding a multiple of one row of A to a different row, then det B = det A.
Proof. For later reference the defining sums for det A and det B are as follows:
    det A = Σ_{i=1}^{n} ai1 (−1)^{i+1} det Ai1    (3.16)

    det B = Σ_{i=1}^{n} bi1 (−1)^{i+1} det Bi1    (3.17)
Property 1. The proof is by induction on n, the cases n = 1 and n = 2 being easily verified. Consider
the ith term in the sum 3.17 for det B where B is the result of multiplying row p of A by u.
a. If i ≠ p, then bi1 = ai1 and det Bi1 = u det Ai1 by induction because Bi1 comes from Ai1 by multiplying a row by u.

b. If i = p, then bp1 = u ap1 and Bp1 = Ap1.
In either case, each term in Equation 3.17 is u times the corresponding term in Equation 3.16, so it is clear
that det B = u det A.
Property 2. This is clear by property 1 because the row of zeros has a common factor u = 0.
Property 3. Observe first that it suffices to prove property 3 for interchanges of adjacent rows. (Rows
p and q (q > p) can be interchanged by carrying out 2(q − p) − 1 adjacent changes, which results in an
odd number of sign changes in the determinant.) So suppose that rows p and p + 1 of A are interchanged
to obtain B. Again consider the ith term in Equation 3.17.
a. If i ≠ p and i ≠ p + 1, then bi1 = ai1 and det Bi1 = −det Ai1 by induction because Bi1 results from interchanging adjacent rows in Ai1. Hence the ith term in Equation 3.17 is the negative of the ith term in Equation 3.16.

b. If i = p or i = p + 1, then bp1 = a_{p+1,1} and Bp1 = A_{p+1,1}, while b_{p+1,1} = ap1 and B_{p+1,1} = Ap1. This means that terms p and p + 1 in Equation 3.17 are the same as these terms in Equation 3.16, except that the order is reversed and the signs are changed. Thus the sum 3.17 is the negative of the sum 3.16; that is, det B = −det A.
Property 4. If rows p and q in A are identical, let B be obtained from A by interchanging these rows.
Then B = A so det A = det B. But det B = − det A by property 3 so det A = − det A. This implies that
det A = 0.
Property 5. Suppose B results from adding u times row q of A to row p. Then Lemma 3.6.1 applies to
B to show that det B = det A + det C, where C is obtained from A by replacing row p by u times row q. It
now follows from properties 1 and 4 that det C = 0 so det B = det A, as asserted.
These facts are enough to enable us to prove Theorem 3.1.1. For convenience, it is restated here in the notation of the foregoing lemmas. The only difference between the notations is that the (i, j)-cofactor of an n × n matrix A was denoted earlier by cij(A) = (−1)^{i+j} det Aij.

Theorem 3.6.1

If A = [aij] is an n × n matrix, then det A can be computed by expanding along any row or column:

    det A = Σ_{i=1}^{n} aij (−1)^{i+j} det Aij    (expansion along column j)

    det A = Σ_{j=1}^{n} aij (−1)^{i+j} det Aij    (expansion along row i)

Here Aij denotes the matrix obtained from A by deleting row i and column j.
Proof. Lemma 3.6.2 establishes the truth of Theorem 3.1.2 for rows. With this information, the arguments
in Section 3.2 proceed exactly as written to establish that det A = det AT holds for any n × n matrix A.
Now suppose B is obtained from A by interchanging two columns. Then Bᵀ is obtained from Aᵀ by interchanging two rows so, by property 3 of Lemma 3.6.2,

    det B = det Bᵀ = −det Aᵀ = −det A

Hence the column version of property 3 holds; the column versions of the other properties in Lemma 3.6.2 follow similarly, which establishes the column expansion.
Finally, to prove the row expansion, write B = Aᵀ. Then Bij = (Aji)ᵀ and bij = aji for all i and j. Expanding det B along column j gives

    det A = det Aᵀ = det B = Σ_{i=1}^{n} bij (−1)^{i+j} det Bij
          = Σ_{i=1}^{n} aji (−1)^{j+i} det (Aji)ᵀ = Σ_{i=1}^{n} aji (−1)^{j+i} det Aji

This is the expansion of det A along row j.
Exercise 3.6.1 Prove Lemma 3.6.1 for columns.

Exercise 3.6.2 Verify that interchanging rows p and q (q > p) can be accomplished using 2(q − p) − 1 adjacent interchanges.

Exercise 3.6.3 If u is a number and A is an n × n matrix, prove that det (uA) = u^n det A by induction on n, using only the definition of det A.

Exercise 3.6.4

a. Show that (Aij)ᵀ = (Aᵀ)ji for all i, j, and all square matrices A.

b. Use (a) to prove that det Aᵀ = det A. [Hint: Induction on n where A is n × n.]

Supplementary Exercises for Chapter 3

Exercise 3.3 Show that det [0 In; Im 0] = (−1)^{nm} for all n ≥ 1 and m ≥ 1.

Exercise 3.4 Show that

    det [1 a a³; 1 b b³; 1 c c³] = (b − a)(c − a)(c − b)(a + b + c)

Exercise 3.6 Let A = [3 −4; 2 −3] and let vk = A^k v0 for each k ≥ 0.

a. Show that A has no dominant eigenvalue.

b. Find vk if v0 equals:

   i. [1; 1]
   ii. [2; 1]
   iii. [x; y] ≠ [1; 1] or [2; 1]
4. Vector Geometry
In this chapter we study the geometry of 3-dimensional space. We view a point in 3-space as an arrow from
the origin to that point. Doing so provides a “picture” of the point that is truly worth a thousand words.
We used this idea earlier, in Section 2.6, to describe rotations, reflections, and projections of the plane R2 .
We now apply the same techniques to 3-space to examine similar transformations of R3 . Moreover, the
method enables us to completely describe all lines and planes in space.
4.1 Vectors and Lines

Vectors in R³
Introduce a coordinate system in 3-dimensional space in the usual way. First choose a point O called the
origin, then choose three mutually perpendicular lines through O, called the x, y, and z axes, and establish
a number scale on each axis with zero at the origin. Given a point P in 3-space we associate three numbers
x, y, and z with P, as described in Figure 4.1.1. These numbers are called the coordinates of P, and we
denote the point as (x, y, z), or P(x, y, z) to emphasize the label P. The result is called a cartesian¹ coordinate system for 3-space, and the resulting description of 3-space is called cartesian geometry.
As in the plane, we introduce vectors by identifying each point P(x, y, z) with the vector v = [x; y; z] in R³, represented by the arrow from the origin to P as in Figure 4.1.1. Informally, we say that the point P has vector v, and that vector v has point P. In this way 3-space is identified with R³, and this identification will be made throughout this chapter, often without comment. In particular, the terms “vector” and “point” are interchangeable.² The resulting description of 3-space is called vector geometry. Note that the origin is 0 = [0; 0; 0].

[Figure 4.1.1]
¹Named after René Descartes who introduced the idea in 1637.
²Recall that we defined Rⁿ as the set of all ordered n-tuples of real numbers, and reserved the right to denote them as rows or as columns.
We are going to discuss two fundamental geometric properties of vectors in R³: length and direction. First, if v is a vector with point P, the length ‖v‖ of vector v is defined to be the distance from the origin to P, that is the length of the arrow representing v. The following properties of length will be used frequently.
Theorem 4.1.1

Let v = [x; y; z] be a vector. Then:

1. ‖v‖ = √(x² + y² + z²).³
2. v = 0 if and only if ‖v‖ = 0.
3. ‖av‖ = |a| ‖v‖ for all scalars a.⁴

³When we write √p we mean the positive square root of p.
⁴Recall that the absolute value |a| of a real number is defined by |a| = a if a ≥ 0 and |a| = −a if a < 0.
⁵Pythagoras’ theorem states that if a and b are sides of a right triangle with hypotenuse c, then a² + b² = c². A proof is given at the end of this section.
Example 4.1.1

If v = [2; −1; 3] then ‖v‖ = √(4 + 1 + 9) = √14. Similarly if v = [3; −4] in 2-space then ‖v‖ = √(9 + 16) = 5.
When we view two nonzero vectors as arrows emanating from the origin, it is clear geometrically
what we mean by saying that they have the same or opposite direction. This leads to a fundamental new
description of vectors.
Theorem 4.1.2

Let v ≠ 0 and w ≠ 0 be vectors in R³. Then v = w as matrices if and only if v and w have the same direction and the same length.⁶
Proof. If v = w, they clearly have the same direction and length. Conversely, let v and w be vectors with points P(x, y, z) and Q(x1, y1, z1) respectively. If v and w have the same length and direction then, geometrically, P and Q must be the same point (see Figure 4.1.3). Hence x = x1, y = y1, and z = z1, that is v = [x; y; z] = [x1; y1; z1] = w.

[Figure 4.1.3]
A characterization of a vector in terms of its length and direction only is called an intrinsic description
of the vector. The point to note is that such a description does not depend on the choice of coordinate
system in R3 . Such descriptions are important in applications because physical laws are often stated in
terms of vectors, and these laws cannot depend on the particular coordinate system used to describe the
situation.
Geometric Vectors
If A and B are distinct points in space, the arrow from A to B has length and direction.
[Figure 4.1.4]
⁶It is Theorem 4.1.2 that gives vectors their power in science and engineering because many physical quantities are determined by their length and direction (and are called vector quantities). For example, saying that an airplane is flying at 200 km/h does not describe where it is going; the direction must also be specified. The speed and direction comprise the velocity of the airplane, a vector quantity.
Hence the arrow from A to B determines a vector, which we denote −→AB and call the geometric vector from A to B; A is its tail and B is its tip.
Note that if v is any vector in R³ with point P then v = −→OP is itself a geometric vector where O is the origin. Referring to −→AB as a “vector” seems justified by Theorem 4.1.2 because it has a direction (from A to B) and a length ‖−→AB‖. However there appears to be a problem because two geometric vectors can have the same length and direction even if the tips and tails are different. For example −→AB and −→PQ in Figure 4.1.5 have the same length √5 and the same direction (1 unit left and 2 units up) so, by Theorem 4.1.2, they are the same vector! The best way to understand this apparent paradox is to see −→AB and −→PQ as different representations of the same⁷ underlying vector [−1; 2]. Once it is clarified, this phenomenon is a great benefit because, thanks to Theorem 4.1.2, it means that the same geometric vector can be positioned anywhere in space; what is important is the length and direction, not the location of the tip and tail. This ability to move geometric vectors about is very useful as we shall soon see.

[Figure 4.1.5: A(3, 1), B(2, 3), P(1, 0), Q(0, 2)]

⁷Fractions provide another example of quantities that can be the same but look different. For example 6/9 and 14/21 certainly appear different, but they are equal fractions—both equal 2/3 in “lowest terms”.
⁸Recall that a parallelogram is a four-sided figure whose opposite sides are parallel and of equal length.
Because a vector can be positioned with its tail at any point, the parallelogram law leads to another way to view vector addition. In Figure 4.1.7(a) the sum v + w of two vectors v and w is shown as given by the parallelogram law. If w is moved so its tail coincides with the tip of v (Figure 4.1.7(b)) then the sum v + w is seen as “first v and then w.” Similarly, moving the tail of v to the tip of w shows in Figure 4.1.7(c) that v + w is “first w and then v.” This will be referred to as the tip-to-tail rule, and it gives a graphic illustration of why v + w = w + v.

Since −→AB denotes the vector from a point A to a point B, the tip-to-tail rule takes the easily remembered form

    −→AB + −→BC = −→AC

for any points A, B, and C. The next example uses this to derive a theorem in geometry without using coordinates.

[Figure 4.1.7]
Example 4.1.2

Show that the diagonals of a parallelogram bisect each other.

Solution. Let the parallelogram have vertices A, B, C, and D, as shown; let E denote the intersection of the two diagonals; and let M denote the midpoint of diagonal AC. We must show that M = E and that this is the midpoint of diagonal BD. This is accomplished by showing that −→BM = −→MD. (Then the fact that these vectors have the same direction means that M = E, and the fact that they have the same length means that M = E is the midpoint of BD.) Now −→AM = −→MC because M is the midpoint of AC, and −→BA = −→CD because the figure is a parallelogram. Hence

    −→BM = −→BA + −→AM = −→CD + −→MC = −→MC + −→CD = −→MD

where the first and last equalities use the tip-to-tail rule of vector addition.
One reason for the importance of the tip-to-tail rule is that it means two or more vectors can be added by placing them tip-to-tail in sequence. This gives a useful “picture” of the sum of several vectors, and is illustrated for three vectors in Figure 4.1.8 where u + v + w is viewed as first u, then v, then w.

There is a simple geometrical way to visualize the (matrix) difference v − w of two vectors. If v and w are positioned so that they have a common tail A (see Figure 4.1.9), and if B and C are their respective tips, then the
tip-to-tail rule gives w + −→CB = v. Hence v − w = −→CB is the vector from the tip of w to the tip of v. Thus both v − w and v + w appear as diagonals in the parallelogram determined by v and w (see Figure 4.1.9). We record this for reference.

Theorem 4.1.3

If v and w have a common tail, then v − w is the vector from the tip of w to the tip of v.

One of the most useful applications of vector subtraction is that it gives a simple formula for the vector from one point to another, and for the distance between the points.

[Figure 4.1.9]
Theorem 4.1.4

Let P1(x1, y1, z1) and P2(x2, y2, z2) be two points. Then:

1. −→P1P2 = [x2 − x1; y2 − y1; z2 − z1].

2. The distance between P1 and P2 is √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).
Example 4.1.3

The distance between P1(2, −1, 3) and P2(1, 1, 4) is √((−1)² + 2² + 1²) = √6, and the vector from P1 to P2 is −→P1P2 = [−1; 2; 1].
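Both parts of Theorem 4.1.4 are one-liners in a numerical setting. A small sketch reproducing Example 4.1.3 (Python and NumPy are assumptions of this illustration):

```python
import numpy as np

# Theorem 4.1.4 in code: the vector from P1 to P2, and the distance between them.
P1 = np.array([2.0, -1.0, 3.0])
P2 = np.array([1.0, 1.0, 4.0])
v = P2 - P1
print(v)                   # [-1.  2.  1.]
print(np.linalg.norm(v))   # 2.449... = sqrt(6)
```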
As for the parallelogram law, the intrinsic rule for finding the length and direction of a scalar multiple of a vector in R³ follows easily from the same situation in R².

Scalar Multiple Law

If a is a real number and v ≠ 0 is a vector then:

1. The length of av is ‖av‖ = |a| ‖v‖.
2. If av ≠ 0,⁹ the direction of av is the same as that of v if a > 0, and opposite to that of v if a < 0.

Proof.

1. This is part of Theorem 4.1.1.

2. Let O denote the origin in R³, let v have point P, and choose any plane containing O and P. If we set up a coordinate system in this plane with O as origin, then v = −→OP so the result in (2) follows from the scalar multiple law in the plane (Section 2.6).
Figure 4.1.11 gives several examples of scalar multiples of a vector v.

[Figure 4.1.11: v, 2v, (1/2)v, (−1/2)v, (−2)v]

Consider a line L through the origin, let P be any point on L other than the origin O, and let p = −→OP. If t ≠ 0, then tp is a point on L because it has direction the same or opposite as that of p. Moreover t > 0 or t < 0 according as the point tp lies on the same or opposite side of the origin as P. This is illustrated in Figure 4.1.12.

[Figure 4.1.12]

A vector u is called a unit vector if ‖u‖ = 1. Then i = [1; 0; 0], j = [0; 1; 0], and k = [0; 0; 1] are unit vectors, called the coordinate vectors. We discuss them in more detail in Section 4.2.
Example 4.1.4

If v ≠ 0 show that (1/‖v‖) v is the unique unit vector in the same direction as v.

Solution. The vectors in the same direction as v are the scalar multiples av where a > 0. But ‖av‖ = |a| ‖v‖ = a‖v‖ when a > 0, so av is a unit vector if and only if a = 1/‖v‖.
The next example shows how to find the coordinates of a point on the line segment between two given
points. The technique is important and will be used again below.
⁹Since the zero vector has no direction, we deal only with the case av ≠ 0.
Example 4.1.5

Let p1 and p2 be the vectors of two points P1 and P2. If M is the point one third the way from P1 to P2, show that the vector m of M is given by

    m = (2/3)p1 + (1/3)p2

Solution. The vector −→P1M has the same direction as −→P1P2 and one third the length, so −→P1M = (1/3)−→P1P2. Hence

    m = p1 + −→P1M = p1 + (1/3)(p2 − p1) = (2/3)p1 + (1/3)p2
Note that in Example 4.1.5 m = (2/3)p1 + (1/3)p2 is a “weighted average” of p1 and p2 with more weight on p1 because m is closer to p1.

The point M halfway between points P1 and P2 is called the midpoint between these points. In the same way, the vector m of M is

    m = (1/2)p1 + (1/2)p2 = (1/2)(p1 + p2)

as the reader can verify, so m is the “average” of p1 and p2 in this case.
Example 4.1.6
Show that the midpoints of the four sides of any quadrilateral are the vertices of a parallelogram.
Here a quadrilateral is any figure with four vertices and straight sides.
Solution. Suppose that the vertices of the quadrilateral are A, B, C, and D (in that order) and that
E, F, G, and H are the midpoints of the sides as shown in the diagram. It suffices to show
−→EF = −→HG (because then sides EF and HG are parallel and of equal length).
Now the fact that E is the midpoint of AB means that −→EB = (1/2)−→AB. Similarly, −→BF = (1/2)−→BC, so

    −→EF = −→EB + −→BF = (1/2)−→AB + (1/2)−→BC = (1/2)(−→AB + −→BC) = (1/2)−→AC

A similar argument shows that −→HG = (1/2)−→AC too, so −→EF = −→HG as required.
Two nonzero vectors are called parallel if they have the same or opposite direction. Many geometrical propositions involve this notion, so the following theorem will be referred to repeatedly.
Theorem 4.1.5

Two nonzero vectors v and w are parallel if and only if one is a scalar multiple of the other.

Proof. If one of them is a scalar multiple of the other, they are parallel by the scalar multiple law. Conversely, assume that v and w are parallel and write d = ‖v‖/‖w‖ for convenience. Then v and w have the same or opposite direction. If they have the same direction we show that v = dw by showing that v and dw have the same length and direction. In fact, ‖dw‖ = |d| ‖w‖ = ‖v‖ by Theorem 4.1.1; as to the direction, dw and w have the same direction because d > 0, and this is the direction of v by assumption. Hence v = dw in this case by Theorem 4.1.2. In the other case, v and w have opposite direction and a similar argument shows that v = −dw. We leave the details to the reader.
Example 4.1.7

Given points P(2, −1, 4), Q(3, −1, 3), A(0, 2, 1), and B(1, 3, 0), determine if −→PQ and −→AB are parallel.

Solution. By Theorem 4.1.4, −→PQ = (1, 0, −1) and −→AB = (1, 1, −1). If −→PQ = t−→AB then (1, 0, −1) = (t, t, −t), so 1 = t and 0 = t, which is impossible. Hence −→PQ is not a scalar multiple of −→AB, so these vectors are not parallel by Theorem 4.1.5.
Lines in Space

These vector techniques can be used to give a very simple way of describing straight lines in space. In order to do this, we first need a way to specify the orientation of such a line, much as the slope does in the plane. A nonzero vector d is called a direction vector for a line if it is parallel to −→AB for some pair of distinct points A and B on the line. Of course it is then parallel to −→CD for any distinct points C and D on the line. In particular, any nonzero scalar multiple of d will also serve as a direction vector of the line.
We use the fact that there is exactly one line that passes through a particular point P0(x0, y0, z0) and has a given direction vector d = [a; b; c]. We want to describe this line by giving a condition on x, y, and z that the point P(x, y, z) lies on this line. Let p0 = [x0; y0; z0] and p = [x; y; z] denote the vectors of P0 and P, respectively (see Figure 4.1.13). Then

    p = p0 + −→P0P

Hence P lies on the line if and only if −→P0P is parallel to d—that is, if and only if −→P0P = td for some scalar t by Theorem 4.1.5. Thus p is the vector of a point on the line if and only if p = p0 + td for some scalar t. This discussion is summed up as follows.

[Figure 4.1.13]
Vector Equation of a Line

The line parallel to d ≠ 0 through the point with vector p0 is given by

    p = p0 + td,  t any scalar

In other words, the point P with vector p is on this line if and only if a real number t exists such that p = p0 + td.
Parametric Equations of a Line

The line through P0(x0, y0, z0) with direction vector d = [a; b; c] ≠ 0 is given by

    x = x0 + ta,  y = y0 + tb,  z = z0 + tc,  t any scalar

In other words, the point P(x, y, z) is on this line if and only if a real number t exists such that x = x0 + ta, y = y0 + tb, and z = z0 + tc.
Example 4.1.8

Find the equations of the line through the points P0(2, 0, 1) and P1(4, −1, 1).

Solution. Let d = −→P0P1 = [2; −1; 0] denote the vector from P0 to P1. Then d is parallel to the line (P0 and P1 are on the line), so d serves as a direction vector for the line. Using P0 as the point on the line leads to the parametric equations

    x = 2 + 2t,  y = −t,  z = 1,  t a parameter

Note that if P1 is used in place of P0, the equations are

    x = 4 + 2s,  y = −1 − s,  z = 1,  s a parameter

These are different from the preceding equations, but this is merely the result of a change of parameter. In fact, s = t − 1.
Example 4.1.9

Find the equations of the line through P0(3, −1, 2) parallel to the line with equations

    x = −1 + 2t,  y = 1 + t,  z = −3 + 4t

Solution. The coefficients of t give a direction vector d = [2; 1; 4] of the given line. Because the line we seek is parallel to this line, d also serves as a direction vector for the new line. It passes through P0, so the parametric equations are

    x = 3 + 2t,  y = −1 + t,  z = 2 + 4t
Example 4.1.10

Determine whether the following lines intersect and, if so, find the point of intersection.

    x = 1 − 3t        x = −1 + s
    y = 2 + 5t        y = 3 − 4s
    z = 1 + t         z = 1 − s

Solution. Suppose the point P(x, y, z) with vector p lies on both lines. Then

    p = [1 − 3t; 2 + 5t; 1 + t]  and  p = [−1 + s; 3 − 4s; 1 − s]

for some t and s, where the first (second) equation is because P lies on the first (second) line. Hence the lines intersect if and only if the three equations

    1 − 3t = −1 + s
    2 + 5t = 3 − 4s
    1 + t = 1 − s

have a solution. In this case, t = 1 and s = −1 satisfy all three equations, so the lines do intersect and the point of intersection is

    p = [1 − 3t; 2 + 5t; 1 + t] = [−2; 7; 2]

using t = 1. Of course, this point can also be found from p = [−1 + s; 3 − 4s; 1 − s] using s = −1.
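The same approach mechanizes easily: the three equations form an overdetermined linear system in t and s. In the following NumPy sketch, the least-squares call is one convenient way to solve it, and a zero residual signals that the lines really do meet:

```python
import numpy as np

# Example 4.1.10: p0 + t*d = q0 + s*e gives three equations in t and s.
p0, d = np.array([1.0, 2.0, 1.0]), np.array([-3.0, 5.0, 1.0])
q0, e = np.array([-1.0, 3.0, 1.0]), np.array([1.0, -4.0, -1.0])

# Solve [d, -e] [t, s]^T = q0 - p0 by least squares.
M = np.column_stack([d, -e])
(t, s), *_ = np.linalg.lstsq(M, q0 - p0, rcond=None)
print(t, s)          # 1.0 -1.0
print(p0 + t * d)    # [-2.  7.  2.], the intersection point
```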
Example 4.1.11

Show that the line through P0(x0, y0) with slope m has direction vector d = [1; m] and equation y − y0 = m(x − x0). This equation is called the point-slope formula.

Note that the vertical line through P0(x0, y0) has a direction vector d = [0; 1] that is not of the form [1; m] for any m. This result confirms that the notion of slope makes no sense in this case. However, the vector method gives parametric equations for the line:

    x = x0,  y = y0 + t
Pythagoras’ Theorem

The Pythagorean theorem was known earlier, but Pythagoras (c. 550 B.C.) is credited with giving the first rigorous, logical, deductive proof of the result. The proof we give depends on a basic property of similar triangles: ratios of corresponding sides are equal.

Pythagoras’ Theorem

Given a right-angled triangle with hypotenuse c and sides a and b, then a² + b² = c².

[Figure 4.1.14]

Proof. Let A, B, and C be the vertices of the triangle as in Figure 4.1.14. Draw a perpendicular line from C to the point D on the hypotenuse, and let p and q be the lengths of BD and DA respectively. Then DBC and CBA are similar triangles so p/a = a/c. This means a² = pc. In the same way, the similarity of DCA and CBA gives q/b = b/c, whence b² = qc. But then

    a² + b² = pc + qc = (p + q)c = c²
Exercise 4.1.1 Compute ‖v‖ if v equals:

a. [2; −1; 2]   b. [1; −1; 2]   c. [1; 0; −1]   d. [−1; 0; 2]   e. 2[1; −1; 2]   f. −3[1; 1; 2]

Exercise 4.1.5 Use vectors to show that the line joining the midpoints of two sides of a triangle is parallel to the third side and half as long.

Exercise 4.1.6 Let A, B, and C denote the three vertices of a triangle.

a. If E is the midpoint of side BC, show that −→AE = (1/2)(−→AB + −→AC).

b. If F is the midpoint of side AC, show that …

Exercise 4.1.8 Let p and q be the vectors of points P and Q, respectively, and let R be the point whose vector is p + q. Express the following in terms of p and q.

a. −→QP   b. −→QR   c. −→RP   d. −→RO where O is the origin

Exercise 4.1.9 In each case, find −→PQ and ‖−→PQ‖.

a. P(1, −1, 3), Q(3, 1, 0)
b. P(2, 0, 1), Q(1, −1, 6)
c. P(1, 0, 1), Q(1, 0, −3)
d. P(1, −1, 2), Q(1, −1, 2)
e. P(1, 0, −3), Q(−1, 0, 3)
f. P(3, −1, 6), Q(1, 1, 4)

Exercise 4.1.10 In each case, find a point Q such that −→PQ has (i) the same direction as v; (ii) the opposite direction to v.

a. P(−1, 2, 2), v = [1; 3; 1]
b. P(3, 0, −1), v = [2; −1; 3]

Exercise 4.1.11 Let u = [3; −1; 0], v = [4; 0; 1], and w = [−1; 1; 5]. In each case, find x such that: …

a. x = [2; −1; 6]   b. x = [1; 3; 0]

Exercise 4.1.13 Let u = [3; −1; 0], v = [4; 0; 1], and z = [1; 1; 1]. In each case, show that there are no numbers a, b, and c such that:

a. au + bv + cz = [1; 2; 1]
b. au + bv + cz = [5; 6; −1]

Exercise 4.1.14 Given P1(2, 1, −2) and P2(1, −2, 0), find the coordinates of the point P:

a. 1/5 the way from P1 to P2
b. 1/4 the way from P2 to P1

Exercise 4.1.15 Find the two points trisecting the segment between P(2, 3, 5) and Q(8, −6, 2).

Exercise 4.1.16 Let P1(x1, y1, z1) and P2(x2, y2, z2) be two points with vectors p1 and p2, respectively. If r and s are positive integers, show that the point P lying r/(r + s) the way from P1 to P2 has vector

    p = (s/(r + s))p1 + (r/(r + s))p2

Exercise 4.1.18 Let u = [2; 0; −4] and v = [2; 1; −2]. In each case find x:

a. 2u − ‖v‖v = (3/2)(u − 2x)
b. 3u + 7v = ‖u‖²(2x + v)

Exercise 4.1.19 Find all vectors u that are parallel to v = [3; −2; 1] and satisfy ‖u‖ = 3‖v‖.

Exercise 4.1.20 Let P, Q, and R be the vertices of a parallelogram with adjacent sides PQ and PR. In each case, find the other vertex S.

a. P(3, −1, −1), Q(1, −2, 0), R(1, −1, 2)
b. P(2, 0, −1), Q(−2, 4, 1), R(3, −1, 0)

Exercise 4.1.21 In each case either prove the statement or give an example showing that it is false.

a. The zero vector 0 is the only vector of length 0.
b. If ‖v − w‖ = 0, then v = w.
c. If v = −v, then v = 0.
d. If ‖v‖ = ‖w‖, then v = w.
e. If ‖v‖ = ‖w‖, then v = ±w.
f. If v = tw for some scalar t, then v and w have the same direction.
g. If v, w, and v + w are nonzero, and v and v + w are parallel, then v and w are parallel.
h. ‖−5v‖ = −5‖v‖, for all v.
i. If ‖v‖ = ‖2v‖, then v = 0.
j. ‖v + w‖ = ‖v‖ + ‖w‖, for all v and w.

Exercise 4.1.22 Find the vector and parametric equations of the following lines.

a. The line parallel to [2; −1; 0] and passing through P(1, −1, 3).
b. The line passing through P(3, −1, 4) and Q(1, 0, −1).
c. The line passing through P(3, −1, 4) and Q(3, −1, 5).
d. The line parallel to [1; 1; 1] and passing through P(1, 1, 1).
e. The line passing through P(1, 0, −3) and parallel to the line with parametric equations x = −1 + 2t, y = 2 − t, and z = 3 + 3t.
f. The line passing through P(2, −1, 1) and parallel to the line with parametric equations x = 2 − t, y = 1, and z = t.
g. The lines through P(1, 0, 1) that meet the line with vector equation p = [1; 2; 0] + t[2; −1; 2] at points at distance 3 from P0(1, 2, 0).

Exercise 4.1.23 In each case, verify that the points P and Q lie on the line.

a. x = 3 − 4t, y = 2 + t, z = 1 − t;  P(−1, 3, 0), Q(11, 0, 3)
b. x = 4 − t, y = 3, z = 1 − 2t;  P(2, 3, −3), Q(−1, 3, −9)

Exercise 4.1.24 Find the point of intersection (if any) of the following pairs of lines.

a. x = 3 + t, y = 1 − 2t, z = 3 + 3t;  x = 4 + 2s, y = 6 + 3s, z = 1 + s
b. x = 1 − t, y = 2 + 2t, z = −1 + 3t;  x = 2s, y = 1 + s, z = 3
c. [x; y; z] = [3; −1; 2] + t[1; 1; −1];  [x; y; z] = [1; 1; −2] + s[2; 0; 3]
d. [x; y; z] = [4; −1; 5] + t[1; 0; 1];  [x; y; z] = [2; −7; 12] + s[0; −2; 3]

Exercise 4.1.25 Show that if a line passes through the origin, the vectors of points on the line are all scalar multiples of some fixed nonzero vector.

Exercise 4.1.26 Show that every line parallel to the z axis has parametric equations x = x0, y = y0, z = t for some fixed numbers x0 and y0.

Exercise 4.1.27 Let d = [a; b; c] be a vector where a, b, and c are all nonzero. Show that the equations of the line through P0(x0, y0, z0) with direction vector d can be written in the form

    (x − x0)/a = (y − y0)/b = (z − z0)/c

This is called the symmetric form of the equations.

Exercise 4.1.28 A parallelogram has sides AB, BC, CD, and DA. Given A(1, −1, 2), C(2, 1, 0), and the midpoint M(1, 0, −3) of AB, find −→BD.

Exercise 4.1.29 Find all points C on the line through A(1, −1, 2) and B(2, 0, 1) such that ‖−→AC‖ = 2‖−→BC‖.

Exercise 4.1.30 Let A, B, C, D, E, and F be the vertices of a regular hexagon, taken in order. Show that −→AB + −→AC + −→AD + −→AE + −→AF = 3−→AD.

Exercise 4.1.31

a. Let P1, P2, P3, P4, P5, and P6 be six points equally spaced on a circle with centre C. Show that −→CP1 + −→CP2 + −→CP3 + −→CP4 + −→CP5 + −→CP6 = 0.
b. Show that the conclusion in part (a) holds for any even set of points evenly spaced on the circle.
c. Show that the conclusion in part (a) holds for three points.
d. Do you think it works for any finite set of points evenly spaced around the circle?

Exercise 4.1.32 Consider a quadrilateral with vertices A, B, C, and D in order (as shown in the diagram). If the diagonals AC and BD bisect each other, show that the quadrilateral is a parallelogram. (This is the converse of Example 4.1.2.) [Hint: Let E be the intersection of the diagonals. Show that −→AB = −→DC by writing −→AB = −→AE + −→EB.]

Exercise 4.1.33 Consider the parallelogram ABCD (see diagram), and let E be the midpoint of side AD. Show that BE and AC trisect each other; that is, show that the intersection point is one-third of the way from E to B and from A to C. [Hint: If F is one-third of the way from A to C, show that 2−→EF = −→FB and argue as in Example 4.1.2.]

Exercise 4.1.34 The line from a vertex of a triangle to the midpoint of the opposite side is called a median of the triangle. If the vertices of a triangle have vectors u, v, and w, show that the point on each median that is 1/3 the way from the midpoint to the vertex has vector (1/3)(u + v + w). Conclude that the point C with vector (1/3)(u + v + w) lies on all three medians. This point C is called the centroid of the triangle.

Exercise 4.1.35 Given four noncoplanar points in space, the figure with these points as vertices is called a tetrahedron. The line from a vertex through the centroid (see previous exercise) of the triangle formed by the remaining vertices is called a median of the tetrahedron. If u, v, w, and x are the vectors of the four vertices, show that the point on a median one-fourth the way from the centroid to the vertex has vector (1/4)(u + v + w + x). Conclude that the four medians are concurrent.
4.2 Projections and Planes

The dot product of two vectors v = [x1; y1; z1] and w = [x2; y2; z2] in R³ is the number¹¹

    v · w = x1x2 + y1y2 + z1z2 = vᵀw
Example 4.2.1

If v = [2; −1; 3] and w = [1; 4; −1], then v · w = 2·1 + (−1)·4 + 3·(−1) = −5.
The next theorem lists several basic properties of the dot product.
Theorem 4.2.1

Let u, v, and w denote vectors in R³ (or R²).

1. v · w is a real number.
2. v · w = w · v.
3. v · 0 = 0 = 0 · v.
4. v · v = ‖v‖².
5. (av) · w = a(v · w) = v · (aw) for all scalars a.
6. u · (v ± w) = u · v ± u · w.

¹¹Similarly, if v = [x1; y1] and w = [x2; y2] in R², then v · w = x1x2 + y1y2.
Proof. (1), (2), and (3) are easily verified, and (4) comes from Theorem 4.1.1. The rest are properties of
matrix arithmetic (because w · v = vT w), and are left to the reader.
The properties in Theorem 4.2.1 enable us to do calculations like

    u · (2v − 3w) = 2(u · v) − 3(u · w)

and such computations will be used without comment below. Here is an example.
Example 4.2.2

Verify that ‖v − 3w‖² = 1 when ‖v‖ = 2, ‖w‖ = 1, and v · w = 2.

Solution. Using Theorem 4.2.1:

    ‖v − 3w‖² = (v − 3w) · (v − 3w) = ‖v‖² − 6(v · w) + 9‖w‖² = 4 − 12 + 9 = 1
There is an intrinsic description of the dot product of two nonzero vectors in R3 . To understand it we
require the following result from trigonometry.
Law of Cosines
If a triangle has sides a, b, and c, and if θ is the interior angle opposite c then
c2 = a2 + b2 − 2ab cos θ
Observe that if two nonzero vectors v and w are positioned with a common tail, they determine a unique angle θ in the range

    0 ≤ θ ≤ π

This angle θ will be called the angle between v and w. Figure 4.2.3 illustrates when θ is acute (less than π/2) and obtuse (greater than π/2). Clearly v and w are parallel if θ is either 0 or π. Note that we do not define the angle between v and w if one of these vectors is 0.

[Figure 4.2.3]

The next result gives an easy way to compute the angle between two nonzero vectors using the dot product.
Theorem 4.2.2

Let v and w be nonzero vectors. If θ is the angle between v and w, then

    v · w = ‖v‖‖w‖ cos θ

Proof. We calculate ‖v − w‖² in two ways. First apply the law of cosines to the triangle in Figure 4.2.4 to obtain:

    ‖v − w‖² = ‖v‖² + ‖w‖² − 2‖v‖‖w‖ cos θ

On the other hand, we use Theorem 4.2.1:

    ‖v − w‖² = (v − w) · (v − w)
             = v · v − v · w − w · v + w · w
             = ‖v‖² − 2(v · w) + ‖w‖²

Comparing these we see that −2‖v‖‖w‖ cos θ = −2(v · w), and the result follows.
If v and w are nonzero vectors, Theorem 4.2.2 gives an intrinsic description of v · w because ‖v‖, ‖w‖, and the angle θ between v and w do not depend on the choice of coordinate system. Moreover, since ‖v‖ and ‖w‖ are nonzero (v and w are nonzero vectors), it gives a formula for the cosine of the angle θ:

    cos θ = (v · w)/(‖v‖‖w‖)    (4.1)
Example 4.2.3

Compute the angle between u = [−1; 1; 2] and v = [2; 1; −1].

Solution. Compute cos θ = (u · v)/(‖u‖‖v‖) = (−2 + 1 − 2)/(√6 √6) = −1/2. Now recall that cos θ and sin θ are defined so that (cos θ, sin θ) is the point on the unit circle determined by the angle θ (drawn counterclockwise, starting from the positive x axis). In the present case, we know that cos θ = −1/2 and that 0 ≤ θ ≤ π. Because cos(π/3) = 1/2, it follows that θ = 2π/3 (see the diagram).
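Equation 4.1 is also the standard way to compute angles by machine. A NumPy sketch reproducing this example (Python and NumPy are assumptions of this illustration):

```python
import numpy as np

# Equation 4.1 in code: the angle between u and v from the dot product.
u = np.array([-1.0, 1.0, 2.0])
v = np.array([2.0, 1.0, -1.0])
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_theta)             # -0.5
print(np.arccos(cos_theta))  # 2.094... = 2*pi/3 radians
```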
If v and w are nonzero, equation (4.1) shows that cos θ has the same sign as v · w, so

    v · w > 0 if and only if θ is acute (0 ≤ θ < π/2)
    v · w < 0 if and only if θ is obtuse (π/2 < θ ≤ π)
    v · w = 0 if and only if θ = π/2

In this last case, the (nonzero) vectors are perpendicular. The following terminology is used in linear algebra: two vectors v and w are said to be orthogonal if v = 0 or w = 0 or the angle between them is π/2.
Theorem 4.2.3
Two vectors v and w are orthogonal if and only if v · w = 0.
Example 4.2.4

Show that the points P(3, −1, 1), Q(4, 1, 4), and R(6, 0, 4) are the vertices of a right triangle.

Solution. Compute −→QP = [−1; −2; −3] and −→QR = [2; −1; 0]. Then −→QP · −→QR = −2 + 2 + 0 = 0, so the angle at the vertex Q is π/2 by Theorem 4.2.3. Hence the triangle is right-angled.
Example 4.2.5 demonstrates how the dot product can be used to verify geometrical theorems involving
perpendicular lines.
Example 4.2.5

A parallelogram with sides of equal length is called a rhombus. Show that the diagonals of a rhombus are perpendicular.

Solution. Let u and v denote two adjacent sides of the rhombus, so that ‖u‖ = ‖v‖. Then the diagonals are u + v and u − v, and

    (u + v) · (u − v) = ‖u‖² − ‖v‖² = 0

so the diagonals are perpendicular by Theorem 4.2.3.
Projections
In applications of vectors, it is frequently useful to write a vector as the sum of two orthogonal vectors.
Here is an example.
Example 4.2.6
Suppose a ten-kilogram block is placed on a flat surface inclined 30◦ to the horizontal as in the
diagram. Neglecting friction, how much force is required to keep the block from sliding down the
surface?
If a nonzero vector d is specified, the key idea in Example 4.2.6 is to be able to write an arbitrary vector u as a sum of two vectors,

    u = u1 + u2

where u1 is parallel to d and u2 = u − u1 is orthogonal to d. Suppose that u and d ≠ 0 emanate from a common tail Q (see Figure 4.2.5). Let P be the tip of u, and let P1 denote the foot of the perpendicular from P to the line through Q parallel to d. Then u1 = −→QP1 has the required properties:

1. u1 is parallel to d.
2. u2 = u − u1 is orthogonal to d.
3. u = u1 + u2.

[Figure 4.2.5]

The vector u1 = −→QP1 is called the projection of u on d, written

    u1 = proj_d u

In Figure 4.2.5(a) the vector u1 = proj_d u has the same direction as d; however, u1 and d have opposite directions if the angle between u and d is greater than π/2 (Figure 4.2.5(b)). Note that the projection u1 = proj_d u is zero if and only if u and d are orthogonal.
Calculating the projection of u on d ≠ 0 is remarkably easy.

Theorem 4.2.4

Let u and d ≠ 0 be vectors.

1. The projection of u on d is given by proj_d u = ((u · d)/‖d‖²) d.
2. The vector u − proj_d u is orthogonal to d.

Proof. The vector u1 = proj_d u is parallel to d and so has the form u1 = td for some scalar t. The requirement that u − u1 and d are orthogonal determines t. In fact, it means that (u − u1) · d = 0 by Theorem 4.2.3. If u1 = td is substituted here, the condition is

    0 = (u − td) · d = u · d − t‖d‖²

It follows that t = (u · d)/‖d‖², where the assumption d ≠ 0 guarantees that ‖d‖² ≠ 0.
Example 4.2.7

Find the projection of u = [2; −3; 1] on d = [1; −1; 3] and express u = u1 + u2 where u1 is parallel to d and u2 is orthogonal to d.

Solution. The projection of u on d is

    u1 = ((u · d)/‖d‖²) d = ((2 + 3 + 3)/(1² + (−1)² + 3²)) d = (8/11)[1; −1; 3]

Hence u2 = u − u1 = (1/11)[14; −25; −13], which is orthogonal to d by Theorem 4.2.4 (alternatively, note that d · u2 = 0), and u = u1 + u2 as required.
Example 4.2.8

Find the shortest distance (see diagram) from the point P(1, 3, −2) to the line through P0(2, 0, −1) with direction vector d = [1; −1; 0]. Also find the point Q that lies on the line and is closest to P.

Solution. Let u = [1; 3; −2] − [2; 0; −1] = [−1; 3; −1] denote the vector from P0 to P, and let u1 denote the projection of u on d. Thus

    u1 = ((u · d)/‖d‖²) d = ((−1 − 3 + 0)/(1² + (−1)² + 0²)) d = −2d = [−2; 2; 0]

by Theorem 4.2.4. We see geometrically that the point Q on the line is closest to P, so the distance is

    ‖−→QP‖ = ‖u − u1‖ = ‖[1; 1; −1]‖ = √3

To find the coordinates of Q, let p0 and q denote the vectors of P0 and Q, respectively. Then p0 = [2; 0; −1] and q = p0 + u1 = [0; 2; −1]. Hence Q(0, 2, −1) is the required point. It can be checked that the distance from Q to P is √3, as expected.
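Theorem 4.2.4 and Example 4.2.8 translate into a few lines of code. In the following NumPy sketch, proj is a helper name introduced here purely for illustration:

```python
import numpy as np

def proj(u, d):
    # Theorem 4.2.4: the projection of u on d != 0.
    return (u @ d) / (d @ d) * d

# Example 4.2.8: closest point on the line through P0 with direction d.
P  = np.array([1.0, 3.0, -2.0])
P0 = np.array([2.0, 0.0, -1.0])
d  = np.array([1.0, -1.0, 0.0])
u  = P - P0
u1 = proj(u, d)
print(P0 + u1)                  # [ 0.  2. -1.], the point Q
print(np.linalg.norm(u - u1))   # 1.732... = sqrt(3)
```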
Planes

It is evident geometrically that among all planes that are perpendicular to a given straight line there is exactly one containing any given point. This fact can be used to give a very simple description of a plane. To do this, it is necessary to introduce the following notion: a nonzero vector n is called a normal for a plane if it is orthogonal to every vector in the plane. For example, the coordinate vector k is a normal for the x-y plane.

Given a point P0 = P0(x0, y0, z0) and a nonzero vector n, there is a unique plane through P0 with normal n, shaded in Figure 4.2.6. A point P = P(x, y, z) lies on this plane if and only if the vector −→P0P is orthogonal to n—that is, if and only if n · −→P0P = 0. Because −→P0P = [x − x0; y − y0; z − z0] this gives the following result:

[Figure 4.2.6]

Scalar Equation of a Plane

The plane through P0(x0, y0, z0) with normal n = [a; b; c] ≠ 0 is given by

    a(x − x0) + b(y − y0) + c(z − z0) = 0

In other words, a point P(x, y, z) is on this plane if and only if x, y, and z satisfy this equation.
Example 4.2.9

Find an equation of the plane through P0(1, −1, 3) with n = [3; −1; 2] as normal.

Solution. Here the scalar equation becomes

    3(x − 1) − (y + 1) + 2(z − 3) = 0

This simplifies to 3x − y + 2z = 10.
If we write d = ax0 + by0 + cz0, the scalar equation shows that every plane with normal n = [a; b; c] has a linear equation of the form

    ax + by + cz = d    (4.2)

for some constant d. Conversely, the graph of this equation is a plane with n = [a; b; c] as a normal vector (assuming that a, b, and c are not all zero).
Example 4.2.10
Find an equation of the plane through P0 (3, −1, 2) that is parallel to the plane with equation
2x − 3y = 6.
Solution. The plane with equation 2x − 3y = 6 has normal n = [2; −3; 0]. Because the two planes
are parallel, n serves as a normal for the plane we seek, so the equation is 2x − 3y = d for some d
by Equation 4.2. Insisting that P0 (3, −1, 2) lies on the plane determines d; that is,
d = 2 · 3 − 3(−1) = 9. Hence, the equation is 2x − 3y = 9.
Consider points P0(x0, y0, z0) and P(x, y, z) with vectors p0 = [x0; y0; z0] and p = [x; y; z]. Given a nonzero vector n, the scalar equation of the plane through P0(x0, y0, z0) with normal n = [a; b; c] takes the vector form:

Vector Equation of a Plane

The plane with normal n ≠ 0 through the point with vector p0 is given by

    n · (p − p0) = 0

In other words, the point with vector p is on the plane if and only if p satisfies this condition.

Moreover, writing d = n · p0 turns this condition into n · p = d. Thus every plane with normal n has vector equation n · p = d for some number d.
Example 4.2.11

Find the shortest distance from the point P(2, 1, −3) to the plane with equation 3x − y + 4z = 1. Also find the point Q on this plane closest to P.

Solution. The plane in question has normal n = [3; −1; 4]. Choose any point P0 on the plane—say P0(0, −1, 0)—and let Q(x, y, z) be the point on the plane closest to P (see the diagram). The vector from P0 to P is u = [2; 2; −3]. Now erect n with its tail at P0. Then −→QP = u1 where u1 is the projection of u on n:

    u1 = ((n · u)/‖n‖²) n = (−8/26)[3; −1; 4] = (−4/13)[3; −1; 4]

Hence the distance is ‖−→QP‖ = ‖u1‖ = (4/13)√26. To calculate the point Q, let q = [x; y; z] and p0 = [0; −1; 0] be the vectors of Q and P0. Then

    q = p0 + u − u1 = [0; −1; 0] + [2; 2; −3] + (4/13)[3; −1; 4] = [38/13; 9/13; −23/13]

This determines Q(38/13, 9/13, −23/13) (in the diagram), and the reader can verify that the required distance is ‖−→QP‖ = (4/13)√26, as before.
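The same projection idea gives a compact numerical recipe for point-to-plane distance. A NumPy sketch reproducing this example (the variable names are illustrative):

```python
import numpy as np

# Example 4.2.11: distance from P to the plane n . p = d, and the closest point Q.
n, d = np.array([3.0, -1.0, 4.0]), 1.0
P = np.array([2.0, 1.0, -3.0])

# Signed offset along n; Q is P moved back onto the plane.
t = (n @ P - d) / (n @ n)
Q = P - t * n
print(abs(t) * np.linalg.norm(n))  # 1.569... = (4/13)*sqrt(26)
print(Q)                           # (38/13, 9/13, -23/13)
```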
If P, Q, and R are three distinct points in R³ that are not all on some line, it is clear geometrically that there is a unique plane containing all three. The vectors −→PQ and −→PR both lie in this plane, so finding a normal amounts to finding a nonzero vector orthogonal to both −→PQ and −→PR. The cross product provides a systematic way to do this. Given vectors v = [x1; y1; z1] and w = [x2; y2; z2], their cross product is the vector

    v × w = [y1z2 − z1y2; −(x1z2 − z1x2); x1y2 − y1x2]

In practice it is computed (somewhat informally) as a determinant,

    v × w = det [i x1 x2; j y1 y2; k z1 z2]

where i, j, and k are the coordinate vectors and the determinant is expanded along the first column.
Example 4.2.12

If v = [2; −1; 4] and w = [1; 3; 7], then

    v × w = det [i 2 1; j −1 3; k 4 7]
          = det [−1 3; 4 7] i − det [2 1; 4 7] j + det [2 1; −1 3] k
          = −19i − 10j + 7k = [−19; −10; 7]
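NumPy's built-in cross product can be used to confirm such computations, and to check that the result is orthogonal to both factors (a fact recorded in Theorem 4.2.5 below):

```python
import numpy as np

# Example 4.2.12 checked with numpy's built-in cross product.
v = np.array([2.0, -1.0, 4.0])
w = np.array([1.0, 3.0, 7.0])
c = np.cross(v, w)
print(c)             # [-19. -10.   7.]
print(c @ v, c @ w)  # 0.0 0.0 -- orthogonal to both v and w
```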
Observe that v × w is orthogonal to both v and w in Example 4.2.12. This holds in general as can be
verified directly by computing v · (v × w) and w · (v × w), and is recorded as the first part of the following
theorem. It will follow from a more general result which, together with the second part, will be proved in
Section 4.3 where a more detailed study of the cross product will be undertaken.
Theorem 4.2.5

Let v and w be vectors in R³.

1. v × w is a vector orthogonal to both v and w.
2. If v and w are nonzero, then v × w = 0 if and only if v and w are parallel.
It is interesting to contrast Theorem 4.2.5(2) with the assertion (in Theorem 4.2.3) that
v·w = 0 if and only if v and w are orthogonal.
Example 4.2.13

Find the equation of the plane through P(1, 3, −2), Q(1, 1, 5), and R(2, −2, 3).

Solution. The vectors −→PQ = [0; −2; 7] and −→PR = [1; −5; 5] lie in the plane, so

    −→PQ × −→PR = det [i 0 1; j −2 −5; k 7 5] = 25i + 7j + 2k = [25; 7; 2]

is a normal for the plane (being orthogonal to both −→PQ and −→PR). Hence the plane has equation

    25x + 7y + 2z = d  for some number d

Since P(1, 3, −2) lies in the plane we have 25 · 1 + 7 · 3 + 2(−2) = d. Hence d = 42 and the equation is 25x + 7y + 2z = 42. Incidentally, the same equation is obtained (verify) if −→QP and −→QR, or −→RP and −→RQ, are used as the vectors in the plane.
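The whole example can be replayed numerically: the cross product gives the normal, and one dot product gives d. A NumPy sketch:

```python
import numpy as np

# Example 4.2.13: normal n = PQ x PR, then d = n . P gives the plane n . p = d.
P = np.array([1.0, 3.0, -2.0])
Q = np.array([1.0, 1.0, 5.0])
R = np.array([2.0, -2.0, 3.0])
n = np.cross(Q - P, R - P)
print(n)              # [25.  7.  2.]
print(n @ P)          # 42.0, so the plane is 25x + 7y + 2z = 42
print(n @ Q, n @ R)   # both 42.0 -- Q and R satisfy the same equation
```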
Example 4.2.14

Find the shortest distance between the nonparallel lines

    [x; y; z] = [1; 0; −1] + t[2; 0; 1]  and  [x; y; z] = [3; 1; 0] + s[1; 1; −1]

Then find the points A and B on the lines that are closest together.

Solution. Direction vectors for the two lines are d1 = [2; 0; 1] and d2 = [1; 1; −1], so

    n = d1 × d2 = det [i 2 1; j 0 1; k 1 −1] = [−1; 3; 2]

is perpendicular to both lines. Consider the plane shaded in the diagram containing the first line with n as normal. This plane contains P1(1, 0, −1) and is parallel to the second line. Because P2(3, 1, 0) is on the second line, the distance in question is just the shortest distance between P2(3, 1, 0) and this plane. The vector u from P1 to P2 is u = −→P1P2 = [2; 1; 1] and so, as in Example 4.2.11, the distance is the length of the projection of u on n:

    distance = ‖((u · n)/‖n‖²) n‖ = |u · n|/‖n‖ = 3/√14 = (3√14)/14
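Numerically, the distance between two skew lines is again a one-line projection. A NumPy sketch reproducing this example:

```python
import numpy as np

# Example 4.2.14: distance between the lines p1 + t*d1 and p2 + s*d2 is the
# length of the projection of u = p2 - p1 on n = d1 x d2.
p1, d1 = np.array([1.0, 0.0, -1.0]), np.array([2.0, 0.0, 1.0])
p2, d2 = np.array([3.0, 1.0, 0.0]), np.array([1.0, 1.0, -1.0])
n = np.cross(d1, d2)
u = p2 - p1
print(abs(u @ n) / np.linalg.norm(n))  # 0.8017... = 3/sqrt(14)
```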
Exercise 4.2.7 Show that the triangle with vertices A(4, −7, 9), B(6, 4, 4), and C(7, 10, −6) is not a right-angled triangle.

Exercise 4.2.8 Find the three internal angles of the triangle with vertices:

a. A(3, 1, −2), B(3, 0, −1), and C(5, 2, −1)
b. A(3, 1, −2), B(5, 2, −1), and C(4, 3, −3)

Exercise 4.2.9 Show that the line through P0(3, 1, 4) and P1(2, 1, 3) is perpendicular to the line through P2(1, −1, 2) and P3(0, 5, 3).

Exercise 4.2.10 In each case, compute the projection of u on v.

a. u = [5; 7; 1], v = [2; −1; 3]
b. u = [3; −2; 1], v = [4; 1; 1]
c. u = [1; −1; 2], v = [3; −1; 1]
d. u = [3; −2; −1], v = [−6; 4; 2]

Exercise 4.2.12 Calculate the distance from the point P to the line in each case and find the point Q on the line closest to P.

a. P(3, 2, −1);  line: [x; y; z] = [2; 1; 3] + t[3; −1; −2]
b. P(1, −1, 3);  line: [x; y; z] = [1; 0; −1] + t[3; 1; 4]

Exercise 4.2.13 Compute u × v where:

a. u = [1; 2; 3], v = [1; 1; 2]
b. u = [3; −1; 0], v = [−6; 2; 0]
c. u = [3; −2; 1], v = [1; 1; −1]
d. u = [2; 0; −1], v = [1; 4; 7]

Exercise 4.2.21 Find the equation of all planes:

a. Perpendicular to the line [x; y; z] = [2; −1; 3] + t[2; 1; 3].
b. Perpendicular to the line [x; y; z] = [1; 0; −1] + t[3; 0; 2].
c. Containing the origin.
d. Containing P(3, 2, −4).
e. Containing P(1, 1, −1) and Q(0, 1, 1).
f. Containing P(2, −1, 1) and Q(1, 0, 0).
g. Containing the line [x; y; z] = [2; 1; 0] + t[1; −1; 0].
h. Containing the line [x; y; z] = [3; 0; 2] + t[1; −2; −1].

Exercise 4.2.22 If a plane contains two distinct points P1 and P2, show that it contains every point on the line through P1 and P2.

Exercise 4.2.23 Find the shortest distance between the following pairs of parallel lines.

a. [x; y; z] = [2; −1; 3] + t[1; −1; 4];  [x; y; z] = [1; 0; 1] + t[1; −1; 4]
b. [x; y; z] = [3; 0; 2] + t[3; 1; 0];  [x; y; z] = [−1; 2; 2] + t[3; 1; 0]

Exercise 4.2.24 Find the shortest distance between the following pairs of nonparallel lines and find the points on the lines that are closest together.

a. [x; y; z] = [3; 0; 1] + s[2; 1; −3];  [x; y; z] = [1; 1; −1] + t[1; 0; 1]
b. [x; y; z] = [1; −1; 0] + s[1; 1; 1];  [x; y; z] = [2; −1; 3] + t[3; 1; 0]
c. [x; y; z] = [3; 1; −1] + s[1; 1; −1];  [x; y; z] = [1; 2; 0] + t[1; 0; 2]
d. [x; y; z] = [1; 2; 3] + s[2; 0; −1];  [x; y; z] = [3; −1; 0] + t[1; 1; 0]

Exercise 4.2.25 Show that two lines in the plane with slopes m1 and m2 are perpendicular if and only if m1m2 = −1. [Hint: Example 4.1.11.]

Exercise 4.2.26

a. Show that, of the four diagonals of a cube, no pair is perpendicular.
b. Show that each diagonal is perpendicular to the face diagonals it does not meet.

Exercise 4.2.27 Given a rectangular solid with sides of lengths 1, 1, and √2, find the angle between a diagonal and one of the longest sides.

Exercise 4.2.28 Consider a rectangular solid with sides of lengths a, b, and c. Show that it has two orthogonal diagonals if and only if the sum of two of a², b², and c² equals the third.

Exercise 4.2.29 Let A, B, and C(2, −1, 1) be the vertices of a triangle where −→AB is parallel to [1; −1; 1], −→AC is parallel to [2; 0; −1], and angle C = 90°. Find the equation of the line through B and C.

Exercise 4.2.30 If the diagonals of a parallelogram have equal length, show that the parallelogram is a rectangle.

Exercise 4.2.31 Given v = [x; y; z] in component form, show that the projections of v on i, j, and k are xi, yj, and zk, respectively.

Exercise 4.2.32

a. Can u · v = −7 if ‖u‖ = 3 and ‖v‖ = 2? Defend your answer.
b. Find u · v if u = [2; −1; 2], ‖v‖ = 6, and the angle between u and v is 2π/3.

Exercise 4.2.33 Show (u + v) · (u − v) = ‖u‖² − ‖v‖² for any vectors u and v.

Exercise 4.2.34

a. Show ‖u + v‖² + ‖u − v‖² = 2(‖u‖² + ‖v‖²) for any vectors u and v.
b. What does this say about parallelograms?

Exercise 4.2.35 Show that if the diagonals of a parallelogram are perpendicular, it is necessarily a rhombus. [Hint: Example 4.2.5.]

Exercise 4.2.36 Let A and B be the end points of a diameter of a circle (see the diagram). If C is any point on the circle, show that AC and BC are perpendicular. [Hint: Express −→AC and −→BC in terms of u = −→OA and v = −→OC, where O is the centre.]

Exercise 4.2.38 Let u, v, and w be pairwise orthogonal vectors.

a. Show that ‖u + v + w‖² = ‖u‖² + ‖v‖² + ‖w‖².
b. If u, v, and w are all the same length, show that they all make the same angle with u + v + w.

Exercise 4.2.39

a. Show that n = [a; b] is orthogonal to every vector along the line ax + by + c = 0.
b. Show that the shortest distance from P0(x0, y0) to the line is |ax0 + by0 + c|/√(a² + b²). [Hint: If P1 is on the line, project u = −→P1P0 on n.]

Exercise 4.2.40 Assume u and v are nonzero vectors that are not parallel. Show that w = ‖u‖v + ‖v‖u is a nonzero vector that bisects the angle between u and v.

Exercise 4.2.41 Let α, β, and γ be the angles a vector v ≠ 0 makes with the positive x, y, and z axes, respectively. Then cos α, cos β, and cos γ are called the direction cosines of the vector v.

a. If v = [a; b; c], show that cos α = a/‖v‖, cos β = b/‖v‖, and cos γ = c/‖v‖.
b. Show that cos²α + cos²β + cos²γ = 1.

Exercise 4.2.42 Let v ≠ 0 be any nonzero vector and suppose that a vector u can be written as u = p + q, where p is parallel to v and q is orthogonal to v. Show that p must equal the projection of u on v. [Hint: Argue as in the proof of Theorem 4.2.4.]

Exercise 4.2.43 Let v ≠ 0 be a nonzero vector and let a ≠ 0 be a scalar. If u is any vector, show that the projection of u on v equals the projection of u on av.

Exercise 4.2.44

a. Show that |u · v| ≤ ‖u‖‖v‖ for any vectors u and v.
b. Show that |u · v| = ‖u‖‖v‖ if and only if u and v are parallel. [Hint: When is cos θ = ±1?]
c. Show that |x1x2 + y1y2 + z1z2| ≤ √(x1² + y1² + z1²) √(x2² + y2² + z2²) holds for all numbers x1, x2, y1, y2, z1, and z2.
d. Show that |xy + yz + zx| ≤ x² + y² + z² for all x, y, and z.
e. Show that (x + y + z)² ≤ 3(x² + y² + z²) holds for all x, y, and z.

Exercise 4.2.45 Prove that the triangle inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖ holds for all vectors u and v. [Hint: Consider the triangle with u and v as two sides.]
4.3 More on the Cross Product

Theorem 4.3.1

If u = [x0; y0; z0], v = [x1; y1; z1], and w = [x2; y2; z2], then

    u · (v × w) = det [x0 x1 x2; y0 y1 y2; z0 z1 z2]
In other words, writing [u v w] for the matrix with u, v, and w as its columns, Theorem 4.3.1 says that

    u · (v × w) = det [u v w]    (4.3)

Now it is clear that v × w is orthogonal to both v and w because the determinant of a matrix is zero if two columns are identical (take u = v or u = w in Equation 4.3).
Because of (4.3) and Theorem 4.3.1, several of the following properties of the cross product follow
from properties of determinants (they can also be verified directly).
Theorem 4.3.2

Let u, v, and w denote arbitrary vectors in R³.

1. u × v is a vector.
2. u × v is orthogonal to both u and v.
3. u × 0 = 0 = 0 × u.
4. u × u = 0.
5. u × v = −(v × u).
6. (ku) × v = k(u × v) = u × (kv) for any scalar k.
7. u × (v + w) = (u × v) + (u × w).
8. (v + w) × u = (v × u) + (w × u).
Proof. (1) is clear; (2) follows from Theorem 4.3.1; and (3) and (4) follow because the determinant of a
matrix is zero if one column is zero or if two columns are identical. If two columns are interchanged, the
determinant changes sign, and this proves (5). The proofs of (6), (7), and (8) are left as Exercise 4.3.15.
We now come to a fundamental relationship between the dot and cross products.

Theorem 4.3.3: Lagrange Identity

If u and v are any two vectors in R³, then

    ‖u × v‖² = ‖u‖²‖v‖² − (u · v)²

Proof. Given u and v, introduce a coordinate system and write u = [x1; y1; z1] and v = [x2; y2; z2] in component form. Then all the terms in the identity can be computed in terms of the components. The detailed proof is left as Exercise 4.3.14.
An expression for the magnitude of the vector u × v can be easily obtained from the Lagrange identity. If θ is the angle between u and v, substituting u · v = ‖u‖‖v‖ cos θ into the Lagrange identity gives

    ‖u × v‖² = ‖u‖²‖v‖² − ‖u‖²‖v‖² cos²θ = ‖u‖²‖v‖² sin²θ

using the fact that 1 − cos²θ = sin²θ. But sin θ is nonnegative on the range 0 ≤ θ ≤ π, so taking the positive square root of both sides gives

    ‖u × v‖ = ‖u‖‖v‖ sin θ
Theorem 4.3.4

If u and v are two nonzero vectors and θ is the angle between u and v, then:

1. ‖u × v‖ = ‖u‖‖v‖ sin θ = the area of the parallelogram determined by u and v.
2. u and v are parallel if and only if u × v = 0.

Proof of (2). By (1), u × v = 0 if and only if the area of the parallelogram is zero. By Figure 4.3.1 the area vanishes if and only if u and v have the same or opposite direction—that is, if and only if they are parallel.
Example 4.3.1

Find the area of the triangle with vertices P(2, 1, 0), Q(3, −1, 1), and R(1, 0, 1).

Solution. We have −→RP = [1; 1; −1] and −→RQ = [2; −1; 0]. The area of the triangle is half the area of the parallelogram (see the diagram), and so equals (1/2)‖−→RP × −→RQ‖. We have

    −→RP × −→RQ = det [i 1 2; j 1 −1; k −1 0] = [−1; −2; −3]

so the area of the triangle is (1/2)‖−→RP × −→RQ‖ = (1/2)√(1 + 4 + 9) = (1/2)√14.
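As a quick numerical check of this example (Python and NumPy are assumptions of this illustration):

```python
import numpy as np

# Example 4.3.1: the area of triangle PQR is half the norm of RP x RQ.
P = np.array([2.0, 1.0, 0.0])
Q = np.array([3.0, -1.0, 1.0])
R = np.array([1.0, 0.0, 1.0])
print(0.5 * np.linalg.norm(np.cross(P - R, Q - R)))  # 1.8708... = sqrt(14)/2
```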
Theorem 4.3.5
The volume of the parallelepiped determined by three vectors w, u, and v (Figure 4.3.2) is given
by |w · (u × v)|.
Example 4.3.2
Find the volume of the parallelepiped determined by the vectors

$$w = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}, \quad u = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix}, \quad v = \begin{bmatrix} -2 \\ 0 \\ 1 \end{bmatrix}$$

Solution. By Theorem 4.3.1,

$$w \cdot (u \times v) = \det \begin{bmatrix} 1 & 1 & -2 \\ 2 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} = -3$$

Hence the volume is |w · (u × v)| = |−3| = 3 by Theorem 4.3.5.
We can now give an intrinsic description of the cross product u × v. Its magnitude ‖u × v‖ = ‖u‖‖v‖ sin θ is coordinate-free. If u × v ≠ 0, its direction is very nearly determined by the fact that it is orthogonal to both u and v and so points along the line normal to the plane determined by u and v. It remains only to decide which of the two possible directions is correct.

Before this can be done, the basic issue of how coordinates are assigned must be clarified. When coordinate axes are chosen in space, the procedure is as follows: An origin is selected, two perpendicular lines (the x and y axes) are chosen through the origin, and a positive direction on each of these axes is selected quite arbitrarily. Then the line through the origin normal to this x-y plane is called the z axis, but there is a choice of which direction on this axis is the positive one. The two possibilities (a left-hand system and a right-hand system) are shown in Figure 4.3.3, and it is a standard convention that cartesian coordinates are always right-hand coordinate systems. The reason for this
terminology is that, in such a system, if the z axis is grasped in the right hand with the thumb pointing in
the positive z direction, then the fingers curl around from the positive x axis to the positive y axis (through
a right angle).
Suppose now that u and v are given and that θ is the angle between them (so 0 ≤ θ ≤ π). Then the direction of u × v is given by the right-hand rule.

Right-hand Rule
If the vector u × v is grasped in the right hand and the fingers curl around from u to v through the angle θ, the thumb points in the direction of u × v.
Exercise 4.3.5 Find the volume of the parallelepiped determined by w, u, and v when:

a. w = (2, 1, 1), v = (1, 0, 2), and u = (2, 1, −1)
b. w = (1, 0, 3), v = (2, 1, −3), and u = (1, 1, 1)

Exercise 4.3.6 Let P0 be a point with vector p0, and let ax + by + cz = d be the equation of a plane with normal n = [a, b, c]ᵀ.

a. Show that the point on the plane closest to P0 has vector p given by

$$p = p_0 + \frac{d - (p_0 \cdot n)}{\|n\|^2}\, n$$

[Hint: p = p0 + tn for some t, and p · n = d.]

b. Show that the shortest distance from P0 to the plane is |d − (p0 · n)| / ‖n‖.

c. Let P0′ denote the reflection of P0 in the plane—that is, the point on the opposite side of the plane such that the line through P0 and P0′ is perpendicular to the plane. Show that p0 + 2[(d − (p0 · n))/‖n‖²] n is the vector of P0′.

Exercise 4.3.7 Simplify (au + bv) × (cu + dv).

Exercise 4.3.8 Show that the shortest distance from a point P to the line through P0 with direction vector d is ‖→P0P × d‖ / ‖d‖.

Exercise 4.3.9 Let u and v be nonzero, nonorthogonal vectors. If θ is the angle between them, show that tan θ = ‖u × v‖ / (u · v).

Exercise 4.3.10 Show that points A, B, and C are all on one line if and only if →AB × →AC = 0.

Exercise 4.3.11 Show that points A, B, C, and D are all on one plane if and only if →AD · (→AB × →AC) = 0.

Exercise 4.3.12 Use Theorem 4.3.5 to confirm that, if u, v, and w are mutually perpendicular, the (rectangular) parallelepiped they determine has volume ‖u‖‖v‖‖w‖.

Exercise 4.3.13 Show that the volume of the parallelepiped determined by u, v, and u × v is ‖u × v‖².

Exercise 4.3.14 Complete the proof of Theorem 4.3.3.

Exercise 4.3.15 Prove the following properties in Theorem 4.3.2.

a. Property 6
b. Property 7
c. Property 8

Exercise 4.3.16

a. Show that w · (u × v) = u · (v × w) = v · (w × u) holds for all vectors w, u, and v.
b. Show that v − w and (u × v) + (v × w) + (w × u) are orthogonal.

Exercise 4.3.17 Show that u × (v × w) = (u · w)v − (u · v)w. [Hint: First do it for u = i, j, and k; then write u = xi + yj + zk and use Theorem 4.3.2.]

Exercise 4.3.18 Prove the Jacobi identity:

u × (v × w) + v × (w × u) + w × (u × v) = 0

[Hint: The preceding exercise.]

Exercise 4.3.19 Show that

$$(u \times v) \cdot (w \times z) = \det \begin{bmatrix} u \cdot w & u \cdot z \\ v \cdot w & v \cdot z \end{bmatrix}$$

[Hint: Exercises 4.3.16 and 4.3.17.]

Exercise 4.3.20 Let P, Q, R, and S be four points, not all on one plane, as in the diagram. Show that the volume of the pyramid they determine is

⅙|→PQ · (→PR × →PS)|

[Hint: The volume of a cone with base area A and height h, as in the diagram, is ⅓Ah.]
Exercise 4.3.21 Consider a triangle with vertices A, B, and C, as in the diagram. Let α, β, and γ denote the angles at A, B, and C, respectively, and let a, b, and c denote the lengths of the sides opposite A, B, and C, respectively. Write u = →AB, v = →BC, and w = →CA.

a. Show that u + v + w = 0.
b. Show that u × v = w × u = v × w. [Hint: Compute u × (u + v + w) and v × (u + v + w).]
c. Deduce the law of sines:

sin α / a = sin β / b = sin γ / c

Exercise 4.3.22 Show that the (shortest) distance between two planes n · p = d1 and n · p = d2 with n as normal is |d2 − d1| / ‖n‖.

Exercise 4.3.23 Let A and B be points other than the origin, and let a and b be their vectors. If a and b are not parallel, show that the plane through A, B, and the origin is given by

{P(x, y, z) | [x, y, z]ᵀ = sa + tb for some s and t}

Exercise 4.3.24 Let A be a 2 × 3 matrix of rank 2 with rows r1 and r2. Show that

P = {XA | X = [x y]; x, y arbitrary}

is the plane through the origin with normal r1 × r2.

Exercise 4.3.25 Given the cube with vertices P(x, y, z), where each of x, y, and z is either 0 or 2, consider the plane perpendicular to the diagonal through P(0, 0, 0) and P(2, 2, 2) and bisecting it.

a. Show that the plane meets six of the edges of the cube and bisects them.
b. Show that the six points in (a) are the vertices of a regular hexagon.
4.4 Linear Operators on R³

Recall that a transformation T : Rn → Rm is called linear if T (x + y) = T (x) + T (y) and T (ax) = aT (x)
holds for all x and y in Rn and all scalars a. In this case we showed (in Theorem 2.6.2) that there exists
an m × n matrix A such that T (x) = Ax for all x in Rn , and we say that T is the matrix transformation
induced by A.
In Section 2.6 we investigated three important linear operators on R2 : rotations about the origin, reflections
in a line through the origin, and projections on this line.
In this section we investigate the analogous operators on R³: rotations about a line through the origin,
reflections in a plane through the origin, and projections onto a plane or line through the origin in R3 . In
every case we show that the operator is linear, and we find the matrices of all the reflections and projections.
4.4. Linear Operators on R3 251
To do this we must prove that these reflections, projections, and rotations are actually linear operators
on R3 . In the case of reflections and rotations, it is convenient to examine a more general situation. A
transformation T : R³ → R³ is said to be distance preserving if the distance between T (v) and T (w) is the same as the distance between v and w for all v and w in R³; that is,

‖T (v) − T (w)‖ = ‖v − w‖ for all v and w in R³    (4.4)
Clearly reflections and rotations are distance preserving, and both carry 0 to 0, so the following theorem
shows that they are both linear.
Theorem 4.4.1
If T : R3 → R3 is distance preserving, and if T (0) = 0, then T is linear.
Proof. Since T (0) = 0, taking w = 0 in (4.4) shows that ‖T (v)‖ = ‖v‖ for all v in R³; that is, T preserves length. Also, ‖T (v) − T (w)‖² = ‖v − w‖² by (4.4). Since ‖v − w‖² = ‖v‖² − 2v · w + ‖w‖² always holds, it follows that T (v) · T (w) = v · w for all v and w. Hence (by Theorem 4.2.2) the angle between T (v) and T (w) is the same as the angle between v and w for all (nonzero) vectors v and w in R³.

With this we can show that T is linear. Given nonzero vectors v and w in R³, the vector v + w is the diagonal of the parallelogram determined by v and w. By the preceding paragraph, the effect of T is to carry this entire parallelogram to the parallelogram determined by T (v) and T (w), with diagonal T (v + w). But this diagonal is T (v) + T (w) by the parallelogram law (see Figure 4.4.1).
In other words, T (v + w) = T (v) + T (w). A similar argument shows that T (av) = aT (v) for all scalars
a, proving that T is indeed linear.
Distance-preserving linear operators are called isometries, and we return to them in Section 10.4.
so the fact that QL is linear (by Theorem 4.4.1) shows that PL is also linear.13
However, Theorem 4.2.4 gives us the matrix of PL directly. In fact, if d = [a, b, c]ᵀ ≠ 0 is a direction vector for L, and we write v = [x, y, z]ᵀ, then

$$P_L(v) = \frac{v \cdot d}{\|d\|^2}\, d = \frac{ax + by + cz}{a^2 + b^2 + c^2} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
as the reader can verify. Note that this shows directly that PL is a matrix transformation and so gives
another proof that it is linear.
Theorem 4.4.2
Let L denote the line through the origin in R³ with direction vector d = [a, b, c]ᵀ ≠ 0. Then PL and QL are both linear and

$$P_L \ \text{has matrix} \ \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{bmatrix}$$

$$Q_L \ \text{has matrix} \ \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} a^2 - b^2 - c^2 & 2ab & 2ac \\ 2ab & b^2 - a^2 - c^2 & 2bc \\ 2ac & 2bc & c^2 - a^2 - b^2 \end{bmatrix}$$
Proof. It remains to find the matrix of QL. But (4.5) implies that QL(v) = 2PL(v) − v for each v in R³, so if v = [x, y, z]ᵀ we obtain (with some matrix arithmetic):

$$Q_L(v) = \left\{ \frac{2}{a^2 + b^2 + c^2} \begin{bmatrix} a^2 & ab & ac \\ ab & b^2 & bc \\ ac & bc & c^2 \end{bmatrix} - \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \right\} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} a^2 - b^2 - c^2 & 2ab & 2ac \\ 2ab & b^2 - a^2 - c^2 & 2bc \\ 2ac & 2bc & c^2 - a^2 - b^2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

as required.
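Both matrices are easy to build and test by machine. The sketch below assumes Python with numpy (not part of the text); the direction vector d is arbitrary nonzero.

```python
# The matrices of P_L and Q_L of Theorem 4.4.2, built from a direction vector d.
import numpy as np

d = np.array([1.0, -2.0, 2.0])        # any nonzero direction vector (a, b, c)
P = np.outer(d, d) / (d @ d)          # matrix of P_L: d d^T / ||d||^2
Q = 2 * P - np.eye(3)                 # matrix of Q_L, since Q_L(v) = 2 P_L(v) - v

print(np.allclose(P @ P, P))          # True: projecting twice changes nothing
print(np.allclose(Q @ Q, np.eye(3)))  # True: reflecting twice restores every vector
print(np.allclose(P @ d, d))          # True: vectors on L are fixed by P_L
```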
13 Note that Theorem 4.4.1 does not apply to PL since it does not preserve distance.
so the fact that QM is linear (again by Theorem 4.4.1) shows that PM is also linear (see Figure 4.4.3).

Again we can obtain the matrix directly. If n is a normal for the plane M, then Figure 4.4.3 shows that

PM(v) = v − proj_n v = v − ((v · n)/‖n‖²) n for all vectors v.

If n = [a, b, c]ᵀ ≠ 0 and v = [x, y, z]ᵀ, a computation like the above gives

$$P_M(v) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} - \frac{ax + by + cz}{a^2 + b^2 + c^2} \begin{bmatrix} a \\ b \\ c \end{bmatrix} = \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$
Theorem 4.4.3
Let M denote the plane through the origin in R³ with normal n = [a, b, c]ᵀ ≠ 0. Then PM and QM are both linear and

$$P_M \ \text{has matrix} \ \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} b^2 + c^2 & -ab & -ac \\ -ab & a^2 + c^2 & -bc \\ -ac & -bc & a^2 + b^2 \end{bmatrix}$$

$$Q_M \ \text{has matrix} \ \frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} b^2 + c^2 - a^2 & -2ab & -2ac \\ -2ab & a^2 + c^2 - b^2 & -2bc \\ -2ac & -2bc & a^2 + b^2 - c^2 \end{bmatrix}$$
Proof. It remains to compute the matrix of QM . Since QM (v) = 2PM (v) − v for each v in R3 , the compu-
tation is similar to the above and is left as an exercise for the reader.
Rotations
In Section 2.6 we studied the rotation Rθ : R² → R² counterclockwise about the origin through the angle θ. Moreover, we showed in Theorem 2.6.4 that Rθ is linear and has matrix

$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$$

One extension of this is given in the following example.
Example 4.4.1
Let Rz, θ : R3 → R3 denote rotation of R3 about the z axis through an angle θ from the positive x
axis toward the positive y axis. Show that Rz, θ is linear and find its matrix.
Example 4.4.1 begs to be generalized. Given a line L through the origin in R3 , every rotation about L
through a fixed angle is clearly distance preserving, and so is a linear operator by Theorem 4.4.1. However,
giving a precise description of the matrix of this rotation is not easy and will have to wait until more
techniques are available.
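The worked solution of Example 4.4.1 is not reproduced above, so the matrix in the sketch below is an assumption: it is the standard rotation matrix about the z axis. The code assumes Python with numpy (not part of the text).

```python
# The matrix of R_{z,theta}: rotation of R^3 about the z axis through theta.
import numpy as np

def Rz(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,  -s,  0.0],
                     [s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

A = Rz(np.pi / 6)
v = np.array([1.0, 0.0, 3.0])            # a sample vector
print(A @ v)                             # the rotated vector
print(np.allclose(A.T @ A, np.eye(3)))   # True: rotations preserve distance
```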
Theorem 4.4.4
If T : R³ → R³ (or R² → R²) is a linear operator, the image of the parallelogram determined by vectors v and w is the parallelogram determined by T (v) and T (w).

This result is illustrated in Figure 4.4.7, and was used in Examples 2.2.15 and 2.2.16 to reveal the effect of expansion and shear transformations.

We now describe the effect of a linear transformation T : R³ → R³ on the parallelepiped determined by three vectors u, v, and w in R³ (see the discussion preceding Theorem 4.3.5). If T has matrix A, Theorem 4.4.4 shows that this parallelepiped is carried to the parallelepiped determined by T (u) = Au, T (v) = Av, and T (w) = Aw. In particular, we want to discover how the volume changes, and it turns out to be closely related to the determinant of the matrix A.
Theorem 4.4.5
Let vol (u, v, w) denote the volume of the parallelepiped determined by three vectors u, v, and w
in R3 , and let area (p, q) denote the area of the parallelogram determined by two vectors p and q
in R2 . Then:
1. If A is a 3 × 3 matrix, then vol (Au, Av, Aw) = | det (A)| · vol (u, v, w).
2. If A is a 2 × 2 matrix, then area (Ap, Aq) = | det (A)| · area (p, q).
Proof.

1. Let [u v w] denote the 3 × 3 matrix with columns u, v, and w. Then

Au · (Av × Aw) = det [Au Av Aw] = det (A[u v w]) = det A · det [u v w]

where we used Definition 2.9 and the product theorem for determinants. Finally (1) follows from Theorem 4.3.5 by taking absolute values.

2. Given p = [x, y]ᵀ in R², write p₁ = [x, y, 0]ᵀ in R³. By the diagram, area (p, q) = vol (p₁, q₁, k) where k is the (length 1) coordinate vector along the z axis. If A is a 2 × 2 matrix, write

$$A_1 = \begin{bmatrix} A & 0 \\ 0 & 1 \end{bmatrix}$$

in block form, and observe that (Av)₁ = A₁v₁ for all v in R² and A₁k = k. Hence part (1) of this theorem shows

area (Ap, Aq) = vol (A₁p₁, A₁q₁, A₁k) = |det A₁| · vol (p₁, q₁, k) = |det A| · area (p, q)

as required.
Define the unit square and unit cube to be the square and cube corresponding to the coordinate
vectors in R2 and R3 , respectively. Then Theorem 4.4.5 gives a geometrical meaning to the determinant
of a matrix A:
• If A is a 2 × 2 matrix, then | det (A)| is the area of the image of the unit square under multiplication
by A;
• If A is a 3 × 3 matrix, then | det (A)| is the volume of the image of the unit cube under multiplication
by A.
These results, together with the importance of areas and volumes in geometry, were among the reasons for
the initial development of determinants.
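As a quick experiment, Theorem 4.4.5(1) can be checked with random vectors and a random matrix. This sketch assumes Python with numpy (not part of the text).

```python
# Numerical check of vol(Au, Av, Aw) = |det A| * vol(u, v, w).
import numpy as np

rng = np.random.default_rng(0)
u, v, w = rng.standard_normal((3, 3))   # three random vectors in R^3
A = rng.standard_normal((3, 3))         # a random 3 x 3 matrix

vol = lambda a, b, c: abs(a @ np.cross(b, c))   # volume, by Theorem 4.3.5
lhs = vol(A @ u, A @ v, A @ w)
rhs = abs(np.linalg.det(A)) * vol(u, v, w)
print(np.isclose(lhs, rhs))             # True
```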
b. Find the rotation of v = [1, 0, 3]ᵀ about the z axis through θ = π/6.

Exercise 4.4.5 Find the matrix of the rotation in R³ about the x axis through the angle θ (from the positive y axis to the positive z axis).

Exercise 4.4.6 Find the matrix of the rotation about the y axis through the angle θ (from the positive x axis to the positive z axis).

Exercise 4.4.7 If A is 3 × 3, show that the image of the line in R³ through p0 with direction vector d is the line through Ap0 with direction vector Ad, assuming that Ad ≠ 0. What happens if Ad = 0?

Exercise 4.4.8 If A is 3 × 3 and invertible, show that the image of the plane through the origin with normal n is the plane through the origin with normal n1 = Bn where B = (A⁻¹)ᵀ. [Hint: Use the fact that v · w = vᵀw to show that n1 · (Ap) = n · p for each p in R³.]

Exercise 4.4.9 Let L be the line through the origin in R² with direction vector d = [a, b]ᵀ ≠ 0.

a. If PL denotes projection on L, show that PL has matrix

$$\frac{1}{a^2 + b^2} \begin{bmatrix} a^2 & ab \\ ab & b^2 \end{bmatrix}$$

b. If QL denotes reflection in L, show that QL has matrix

$$\frac{1}{a^2 + b^2} \begin{bmatrix} a^2 - b^2 & 2ab \\ 2ab & b^2 - a^2 \end{bmatrix}$$

Exercise 4.4.10 Let n be a nonzero vector in R³, let L be the line through the origin with direction vector n, and let M be the plane through the origin with normal n. Show that PL(v) = QL(v) + PM(v) for all v in R³. [In this case, we say that PL = QL + PM.]

Exercise 4.4.11 If M is the plane through the origin in R³ with normal n = [a, b, c]ᵀ, show that QM has matrix

$$\frac{1}{a^2 + b^2 + c^2} \begin{bmatrix} b^2 + c^2 - a^2 & -2ab & -2ac \\ -2ab & a^2 + c^2 - b^2 & -2bc \\ -2ac & -2bc & a^2 + b^2 - c^2 \end{bmatrix}$$
4.5 An Application to Computer Graphics

Computer graphics deals with images displayed on a computer screen, and so arises in a variety of appli-
cations, ranging from word processors, to Star Wars animations, to video games, to wire-frame images of
an airplane. These images consist of a number of points on the screen, together with instructions on how
to fill in areas bounded by lines and curves. Often curves are approximated by a set of short straight-line
segments, so that the curve is specified by a series of points on the screen at the end of these segments.
Matrix transformations are important here because matrix images of straight line segments are again line
segments.14 Note that a colour image requires that three images are sent, one to each of the red, green,
and blue phosphorus dots on the screen, in varying intensities.
Consider displaying the letter A. In reality, it is depicted on the screen, as in Figure 4.5.1, by specifying
the coordinates of the 11 corners and filling in the interior.
For simplicity, we will disregard the thickness of the letter, so we require only five coordinates as in
Figure 4.5.2.
14 If v0 and v1 are vectors, the vector from v0 to v1 is d = v1 − v0. So a vector v lies on the line segment between v0 and v1 if and only if v = v0 + td for some number t in the range 0 ≤ t ≤ 1. Thus the image of this segment is the set of vectors Av = Av0 + tAd with 0 ≤ t ≤ 1, that is, the image is the segment between Av0 and Av1.
If the letter is described by the five vertices shown in Figure 4.5.2, the data matrix is

$$D = \begin{bmatrix} 0 & 6 & 5 & 1 & 3 \\ 0 & 0 & 3 & 3 & 9 \end{bmatrix}$$

where the columns are the coordinates of the vertices in order (vertex 1 through vertex 5). Then if we want to transform the letter by a 2 × 2 matrix A, we left-multiply this data matrix by A (the effect is to multiply each column by A and so transform each vertex).

For example, we can slant the letter to the right by multiplying by an x-shear matrix A = [[1, 0.2], [0, 1]]—see Section 2.2. The result is the letter with data matrix

$$AD = \begin{bmatrix} 1 & 0.2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 6 & 5 & 1 & 3 \\ 0 & 0 & 3 & 3 & 9 \end{bmatrix} = \begin{bmatrix} 0 & 6 & 5.6 & 1.6 & 4.8 \\ 0 & 0 & 3 & 3 & 9 \end{bmatrix}$$

which is shown in Figure 4.5.3. If we want to make this slanted letter narrower, we can now apply an x-scale matrix B = [[0.8, 0], [0, 1]] that shrinks the x-coordinate by 0.8. The result is the composite transformation

$$BAD = \begin{bmatrix} 0.8 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0.2 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 6 & 5 & 1 & 3 \\ 0 & 0 & 3 & 3 & 9 \end{bmatrix} = \begin{bmatrix} 0 & 4.8 & 4.48 & 1.28 & 3.84 \\ 0 & 0 & 3 & 3 & 9 \end{bmatrix}$$
The idea is to represent a point v = [x, y]ᵀ as a 3 × 1 column [x, y, 1]ᵀ, called the homogeneous coordinates of v. Then translation by w = [p, q]ᵀ can be achieved by multiplying by a 3 × 3 matrix:

$$\begin{bmatrix} 1 & 0 & p \\ 0 & 1 & q \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} x + p \\ y + q \\ 1 \end{bmatrix} = \begin{bmatrix} T_w(v) \\ 1 \end{bmatrix}$$

Thus, by using homogeneous coordinates we can implement the translation Tw in the top two coordinates. On the other hand, the matrix transformation induced by A = [[a, b], [c, d]] is also given by a 3 × 3 matrix:

$$\begin{bmatrix} a & b & 0 \\ c & d & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \\ 1 \end{bmatrix} = \begin{bmatrix} Av \\ 1 \end{bmatrix}$$

So everything can be accomplished at the expense of using 3 × 3 matrices and homogeneous coordinates.
Example 4.5.1
Rotate the letter A in Figure 4.5.2 through π/6 about the point [4, 5]ᵀ.

Solution. Using homogeneous coordinates for the vertices of the letter results in a data matrix with three rows:

$$K_D = \begin{bmatrix} 0 & 6 & 5 & 1 & 3 \\ 0 & 0 & 3 & 3 & 9 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$

If we write w = [4, 5]ᵀ, the idea is to use a composite of transformations (see Figure 4.5.6): First translate the letter by −w so that the point w moves to the origin, then rotate this translated letter, and then translate it by w back to its original position. The matrix arithmetic is as follows (remember the order of composition!):

$$\begin{bmatrix} 1 & 0 & 4 \\ 0 & 1 & 5 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0.866 & -0.5 & 0 \\ 0.5 & 0.866 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & -4 \\ 0 & 1 & -5 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 6 & 5 & 1 & 3 \\ 0 & 0 & 3 & 3 & 9 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix} = \begin{bmatrix} 3.036 & 8.232 & 5.866 & 2.402 & 1.134 \\ -1.33 & 1.67 & 3.768 & 1.768 & 7.964 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}$$
This discussion merely touches the surface of computer graphics, and the reader is referred to special-
ized books on the subject. Realistic graphic rendering requires an enormous number of matrix calcula-
tions. In fact, matrix multiplication algorithms are now embedded in microchip circuits, and can perform
over 100 million matrix multiplications per second. This is particularly important in the field of three-
dimensional graphics where the homogeneous coordinates have four components and 4 × 4 matrices are
required.
Exercise 4.5.1 Consider the letter A described in Figure 4.5.2. Find the data matrix for the letter obtained by:

a. Rotating the letter through π/4 about the origin.
b. Rotating the letter through π/4 about the point [1, 2]ᵀ.

Exercise 4.5.2 Find the matrix for turning the letter A in Figure 4.5.2 upside-down in place.

Exercise 4.5.3 Find the 3 × 3 matrix for reflecting in the line y = mx + b. Use [1, m]ᵀ as direction vector for the line.

Exercise 4.5.4 Find the 3 × 3 matrix for rotating through the angle θ about the point P(a, b).

Exercise 4.5.5 Find the reflection of the point P in the line y = 1 + 2x in R² if:

a. P = P(1, 1)
b. P = P(1, 4)
c. What about P = P(1, 3)? Explain. [Hint: Example 4.5.1 and Section 4.4.]
Supplementary Exercises for Chapter 4

Exercise 4.1 Suppose that u and v are nonzero vectors. If u and v are not parallel, and au + bv = a1u + b1v, show that a = a1 and b = b1.

Exercise 4.2 Consider a triangle with vertices A, B, and C. Let E and F be the midpoints of sides AB and AC, respectively, and let the medians EC and FB meet at O. Write →EO = s→EC and →FO = t→FB, where s and t are scalars. Show that s = t = ⅓ by expressing →AO two ways in the form a→EO + b→AC, and applying Exercise 4.1. Conclude that the medians of a triangle meet at the point on each that is one-third of the way from the midpoint to the vertex (and so are concurrent).

Exercise 4.3 A river flows at 1 km/h and a swimmer moves at 2 km/h (relative to the water). At what angle must he swim to go straight across? What is his resulting speed?

Exercise 4.4 A wind is blowing from the south at 75 knots, and an airplane flies heading east at 100 knots. Find the resulting velocity of the airplane.

Exercise 4.5 An airplane pilot flies at 300 km/h in a direction 30° south of east. The wind is blowing from the south at 150 km/h.

a. Find the resulting direction and speed of the airplane.
b. Find the speed of the airplane if the wind is from the west (at 150 km/h).

Exercise 4.6 A rescue boat has a top speed of 13 knots. The captain wants to go due east as fast as possible in water with a current of 5 knots due south. Find the velocity vector v = (x, y) that she must achieve, assuming the x and y axes point east and north, respectively, and find her resulting speed.
Exercise 4.7 A boat goes 12 knots heading north. The current is 5 knots from the west. In what direction does the boat actually move and at what speed?

Exercise 4.8 Show that the distance from a point A (with vector a) to the plane with vector equation n · p = d is (1/‖n‖)|n · a − d|.

Exercise 4.9 If two distinct points lie in a plane, show that the line through these points is contained in the plane.

Exercise 4.10 The line through a vertex of a triangle, perpendicular to the opposite side, is called an altitude of the triangle. Show that the three altitudes of any triangle are concurrent. (The intersection of the altitudes is called the orthocentre of the triangle.) [Hint: If P is the intersection of two of the altitudes, show that the line through P and the remaining vertex is perpendicular to the remaining side.]
5. Vector Space Rn
In Section 2.2 we introduced the set Rn of all n-tuples (called vectors), and began our investigation of the
matrix transformations Rn → Rm given by matrix multiplication by an m × n matrix. Particular attention
was paid to the euclidean plane R2 where certain simple geometric transformations were seen to be ma-
trix transformations. Then in Section 2.6 we introduced linear transformations, showed that they are all
matrix transformations, and found the matrices of rotations and reflections in R2 . We returned to this in
Section 4.4 where we showed that projections, reflections, and rotations of R2 and R3 were all linear, and
where we related areas and volumes to determinants.
In this chapter we investigate Rn in full generality, and introduce some of the most important concepts
and methods in linear algebra. The n-tuples in Rn will continue to be denoted x, y, and so on, and will be
written as rows or columns depending on the context.
5.1 Subspaces and Spanning

Subspaces of Rn

A set¹ U of vectors in Rn is called a subspace of Rn if it satisfies the following properties:

S1. The zero vector 0 is in U.
S2. If x and y are in U, then x + y is in U.
S3. If x is in U, then ax is in U for every real number a.

We say that the subset U is closed under addition if S2 holds, and that U is closed under scalar multiplication if S3 holds.
Clearly Rn is a subspace of itself, and this chapter is about these subspaces and their properties. The
set U = {0}, consisting of only the zero vector, is also a subspace because 0 + 0 = 0 and a0 = 0 for each a
in R; it is called the zero subspace. Any subspace of Rn other than {0} or Rn is called a proper subspace.
1 We use the language of sets. Informally, a set X is a collection of objects, called the elements of the set. The fact that x is
an element of X is denoted x ∈ X. Two sets X and Y are called equal (written X = Y ) if they have the same elements. If every
element of X is in the set Y , we say that X is a subset of Y , and write X ⊆ Y . Hence X ⊆ Y and Y ⊆ X both hold if and only if
X = Y.
If n ≠ 0 is a vector in R³, consider the plane M through the origin with normal n:

M = {v in R³ | n · v = 0}

where v = [x, y, z]ᵀ and n · v denotes the dot product introduced in Section 2.2 (see the diagram).² Then M is a subspace of R³. Indeed we show that M satisfies S1, S2, and S3 as follows:

S1. 0 ∈ M because n · 0 = 0;
S2. If v ∈ M and v1 ∈ M, then n · (v + v1) = n · v + n · v1 = 0 + 0 = 0, so v + v1 ∈ M;
S3. If v ∈ M, then n · (av) = a(n · v) = a(0) = 0, so av ∈ M.
Example 5.1.1
Planes and lines through the origin in R3 are all subspaces of R3 .
Solution. We dealt with planes above. If L is a line through the origin with direction vector d, then L = {td | t ∈ R} (see the diagram). We leave it as an exercise to verify that L satisfies S1, S2, and S3.
Example 5.1.1 shows that lines through the origin in R2 are subspaces; in fact, they are the only proper
subspaces of R2 (Exercise 5.1.24). Indeed, we shall see in Example 5.2.14 that lines and planes through
the origin in R3 are the only proper subspaces of R3 . Thus the geometry of lines and planes through the
origin is captured by the subspace concept. (Note that every line or plane is just a translation of one of
these.)
Subspaces can also be used to describe important features of an m × n matrix A. The null space of A, denoted null A, and the image space of A, denoted im A, are defined by

null A = {x in Rn | Ax = 0}  and  im A = {Ax | x in Rn}

In the language of Chapter 2, null A consists of all solutions x in Rn of the homogeneous system Ax = 0, and im A is the set of all vectors y in Rm such that Ax = y has a solution x. Note that x is in null A if it
2 We are using set notation here. In general {q | p} means the set of all objects q with property p.
satisfies the condition Ax = 0, while im A consists of vectors of the form Ax for some x in Rn . These two
ways to describe subsets occur frequently.
Example 5.1.2
If A is an m × n matrix, then:
1. null A is a subspace of Rn .
2. im A is a subspace of Rm .
Solution.

1. The zero vector 0 ∈ Rn lies in null A because A0 = 0.³ If x and x1 are in null A, then x + x1 and ax are in null A because they satisfy the required condition:

A(x + x1) = Ax + Ax1 = 0 + 0 = 0  and  A(ax) = a(Ax) = a0 = 0

Hence null A satisfies S1, S2, and S3, and so is a subspace of Rn.

2. The zero vector 0 ∈ Rm lies in im A because 0 = A0. Suppose that y and y1 are in im A, say y = Ax and y1 = Ax1 where x and x1 are in Rn. Then

y + y1 = Ax + Ax1 = A(x + x1)  and  ay = a(Ax) = A(ax)

show that y + y1 and ay are both in im A (they have the required form). Hence im A is a subspace of Rm.
There are other important subspaces associated with a matrix A that clarify basic properties of A. If A
is an n × n matrix and λ is any number, let
Eλ (A) = {x ∈ Rn | Ax = λ x}
A vector x is in Eλ (A) if and only if (λ I − A)x = 0, so Example 5.1.2 gives:
Example 5.1.3
Eλ (A) = null (λ I − A) is a subspace of Rn for each n × n matrix A and number λ .
Eλ (A) is called the eigenspace of A corresponding to λ . The reason for the name is that, in the terminology
of Section 3.3, λ is an eigenvalue of A if Eλ (A) 6= {0}. In this case the nonzero vectors in Eλ (A) are called
the eigenvectors of A corresponding to λ .
The reader should not get the impression that every subset of Rn is a subspace. For example:
U1 = {[x, y]ᵀ | x ≥ 0} satisfies S1 and S2, but not S3;
3 We are using 0 to represent the zero vector in both Rm and Rn . This abuse of notation is common and causes no confusion
once everybody knows what is going on.
U2 = {[x, y]ᵀ | x² = y²} satisfies S1 and S3, but not S2;
Spanning Sets
Let v and w be two nonzero, nonparallel vectors in R3 with their tails at the origin. The plane M through
the origin containing these vectors is described in Section 4.2 by saying that n = v × w is a normal for M,
and that M consists of all vectors p such that n · p = 0.4 While this is a very useful way to look at planes,
there is another approach that is at least as useful in R3 and, more importantly, works for all subspaces of
Rn for any n ≥ 1.
The idea is as follows: Observe that, by the diagram, a vector p is in M if and only if it has the form

p = av + bw

for certain real numbers a and b (we say that p is a linear combination of v and w). Hence we can describe M as

M = {av + bw | a, b ∈ R}
and we say that {v, w} is a spanning set for M. It is this notion of a spanning set that provides a way to
describe all subspaces of Rn .
As in Section 1.3, given vectors x1, x2, ..., xk in Rn, a vector of the form

t1x1 + t2x2 + · · · + tkxk    (ti in R)

is called a linear combination of the xi, and ti is called the coefficient of xi in the linear combination. The set of all such linear combinations is called the span of the xi and is written span {x1, x2, ..., xk}. If V = span {x1, x2, ..., xk}, we say that V is spanned by the vectors x1, x2, ..., xk, and that the vectors x1, x2, ..., xk span the space V.
In particular, the above discussion shows that, if v and w are two nonzero, nonparallel vectors in R3 , then
M = span {v, w}
is the plane in R3 containing v and w. Moreover, if d is any nonzero vector in R3 (or R2 ), then
L = span {d} = {td | t ∈ R} = Rd
is the line with direction vector d. Hence lines and planes can both be described in terms of spanning sets.
Example 5.1.4
Let x = (2, −1, 2, 1) and y = (3, 4, −1, 1) in R4 . Determine whether p = (0, −11, 8, 1) or
q = (2, 3, 1, 2) are in U = span {x, y}.
Solution. The vector p is in U if and only if p = sx + ty for scalars s and t. Equating components gives equations

2s + 3t = 0,  −s + 4t = −11,  2s − t = 8,  and  s + t = 1

This linear system has solution s = 3 and t = −2, so p is in U. On the other hand, asking that q = sx + ty leads to equations

2s + 3t = 2,  −s + 4t = 3,  2s − t = 1,  and  s + t = 2

and this system has no solution (verify), so q is not in U.
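Span membership like this is a rank question, so it mechanizes easily. The sketch below assumes Python with numpy (not part of the text): p is in span {x, y} exactly when appending p to the matrix with columns x and y does not increase its rank.

```python
# Checking whether a vector lies in span{x, y}, as in Example 5.1.4.
import numpy as np

x = np.array([2.0, -1, 2, 1])
y = np.array([3.0, 4, -1, 1])

def in_span(p: np.ndarray) -> bool:
    base = np.column_stack([x, y])
    extended = np.column_stack([base, p])
    return np.linalg.matrix_rank(extended) == np.linalg.matrix_rank(base)

print(in_span(np.array([0.0, -11, 8, 1])))   # True:  p = 3x - 2y
print(in_span(np.array([2.0, 3, 1, 2])))     # False: the system is inconsistent
```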
Theorem 5.1.1
Let U = span {x1, x2, ..., xk} in Rn. Then:

1. U is a subspace of Rn containing each xi.
2. If W is a subspace of Rn and each xi lies in W, then U ⊆ W.

Proof.
1. The zero vector 0 is in U because 0 = 0x1 + 0x2 + · · · + 0xk is a linear combination of the xi . If
x = t1 x1 + t2 x2 + · · · + tk xk and y = s1 x1 + s2 x2 + · · · + sk xk are in U , then x + y and ax are in U
because
x + y = (t1 + s1 )x1 + (t2 + s2 )x2 + · · · + (tk + sk )xk , and
ax = (at1 )x1 + (at2 )x2 + · · · + (atk )xk
Finally each xi is in U (for example, x2 = 0x1 + 1x2 + · · · + 0xk ) so S1, S2, and S3 are satisfied for
U , proving (1).
2. Let x = t1 x1 + t2 x2 + · · · + tk xk where the ti are scalars and each xi ∈ W . Then each ti xi ∈ W because
W satisfies S3. But then x ∈ W because W satisfies S2 (verify). This proves (2).
Condition (2) in Theorem 5.1.1 can be expressed by saying that span {x1 , x2 , . . . , xk } is the smallest
subspace of Rn that contains each xi . This is useful for showing that two subspaces U and W are equal,
since this amounts to showing that both U ⊆ W and W ⊆ U . Here is an example of how it is used.
Example 5.1.5
If x and y are in Rn , show that span {x, y} = span {x + y, x − y}.
Solution. Since both x + y and x − y are in span {x, y}, Theorem 5.1.1 gives

span {x + y, x − y} ⊆ span {x, y}

But x = ½(x + y) + ½(x − y) and y = ½(x + y) − ½(x − y) are both in span {x + y, x − y}, so

span {x, y} ⊆ span {x + y, x − y}

again by Theorem 5.1.1. Hence span {x, y} = span {x + y, x − y}.
It turns out that many important subspaces are best described by giving a spanning set. Here are three
examples, beginning with an important spanning set for Rn itself. Column j of the n × n identity matrix In is denoted ej and called the jth coordinate vector in Rn, and the set {e1, e2, ..., en} is called the standard basis of Rn. If x = [x1, x2, ..., xn]ᵀ is any vector in Rn, then x = x1e1 + x2e2 + · · · + xnen, as the reader can verify. This proves:
Example 5.1.6
Rn = span {e1 , e2 , . . . , en } where e1 , e2 , . . . , en are the columns of In .
If A is an m × n matrix, the next two examples show that it is a routine matter to find spanning sets for null A and im A.
Example 5.1.7
Given an m × n matrix A, let x1, x2, ..., xk denote the basic solutions to the system Ax = 0 given by the gaussian algorithm. Then

null A = span {x1, x2, ..., xk}

Solution. If x ∈ null A, then Ax = 0 so Theorem 1.3.2 shows that x is a linear combination of the basic solutions; that is, null A ⊆ span {x1, x2, ..., xk}. On the other hand, if x is in span {x1, x2, ..., xk}, then x = t1x1 + t2x2 + · · · + tkxk for scalars ti, so

Ax = t1Ax1 + t2Ax2 + · · · + tkAxk = t1·0 + t2·0 + · · · + tk·0 = 0

This shows that x ∈ null A, and hence that span {x1, x2, ..., xk} ⊆ null A. Thus we have equality.
Example 5.1.8
Let c1, c2, ..., cn denote the columns of the m × n matrix A. Then

im A = span {c1, c2, ..., cn}

Solution. If y is in im A, then y = Ax for some x in Rn. Writing x = [x1, x2, ..., xn]ᵀ, Definition 2.5 gives

y = Ax = x1c1 + x2c2 + · · · + xncn, which is in span {c1, c2, ..., cn}

Hence im A ⊆ span {c1, c2, ..., cn}. The reverse inclusion holds because each cj = Aej is in im A, so im A = span {c1, c2, ..., cn}.
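The identity Ax = x1c1 + · · · + xncn at the heart of Example 5.1.8 is easy to see numerically. This sketch assumes Python with numpy (not part of the text); the matrix and vector are arbitrary samples.

```python
# A x is the combination of the columns of A with coefficients from x.
import numpy as np

A = np.array([[1.0, 2, 0],
              [3.0, -1, 4]])
x = np.array([2.0, -1.0, 3.0])

combo = sum(x[j] * A[:, j] for j in range(A.shape[1]))  # x1*c1 + x2*c2 + x3*c3
print(np.allclose(A @ x, combo))   # True: so A x lies in span of the columns
```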
We often write vectors in Rn as rows.

Exercise 5.1.1 In each case determine whether U is a subspace of R³. Support your answer.

a. U = {(1, s, t) | s and t in R}.
b. U = {(0, s, t) | s and t in R}.
c. U = {(r, s, t) | r, s, and t in R, −r + 3s + 2t = 0}.
d. U = {(r, 3s, r − 2) | r and s in R}.
e. U = {(r, 0, s) | r² + s² = 0, r and s in R}.
f. U = {(2r, −s², t) | r, s, and t in R}.

Exercise 5.1.2 In each case determine if x is in U = span {y, z}. Support your answer.

a. x = (2, −1, 0, 1), y = (1, 0, 0, 1), and z = (0, 1, 0, 1).
b. x = (1, 2, 15, 11), y = (2, −1, 0, 2), and z = (1, −1, −3, 1).
c. x = (8, 3, −13, 20), y = (2, 1, −3, 5), and z = (−1, 0, 2, −3).
d. x = (2, 5, 8, 3), y = (2, −1, 0, 5), and z = (−1, 2, 2, −3).

Exercise 5.1.3 In each case determine if the given vectors span R⁴. Support your answer.

Exercise 5.1.4 Is it possible that {(1, 2, 0), (2, 0, 3)} can span the subspace U = {(r, s, 0) | r and s in R}? Defend your answer.

Exercise 5.1.5 Give a spanning set for the zero subspace {0} of Rn.

Exercise 5.1.6 Is R² a subspace of R³? Defend your answer.

Exercise 5.1.7 If U = span {x, y, z} in Rn, show that U = span {x + tz, y, z} for every t in R.

Exercise 5.1.8 If U = span {x, y, z} in Rn, show that U = span {x + y, y + z, z + x}.

Exercise 5.1.9 If a ≠ 0 is a scalar, show that span {ax} = span {x} for every vector x in Rn.

Exercise 5.1.10 If a1, a2, ..., ak are nonzero scalars, show that span {a1x1, a2x2, ..., akxk} = span {x1, x2, ..., xk} for any vectors xi in Rn.

Exercise 5.1.11 If x ≠ 0 in Rn, determine all subspaces of span {x}.

Exercise 5.1.12 Suppose that U = span {x1, x2, ..., xk} where each xi is in Rn. If A is an m × n matrix and Axi = 0 for each i, show that Ay = 0 for every vector y in U.

Exercise 5.1.13 If A is an m × n matrix, show that, for each invertible m × m matrix U, null (A) = null (UA).

Exercise 5.1.14 If A is an m × n matrix, show that, for each invertible n × n matrix V, im (A) = im (AV).

Exercise 5.1.15 Let U be a subspace of Rn, and let x be a vector in Rn.

Exercise 5.1.16 In each case either show that the statement is true or give an example showing that it is false.

d. If x is in U and U = span {y, z}, then U = span {x, y, z}.
e. The empty set of vectors in Rn is a subspace of Rn.
f. [0, 1]ᵀ is in span {[1, 0]ᵀ, [2, 0]ᵀ}.

Exercise 5.1.17

a. If A and B are m × n matrices, show that U = {x in Rn | Ax = Bx} is a subspace of Rn.
b. What if A is m × n, B is k × n, and m ≠ k?

Exercise 5.1.18 Suppose that x1, x2, ..., xk are vectors in Rn. If y = a1x1 + a2x2 + · · · + akxk where a1 ≠ 0, show that span {x1, x2, ..., xk} = span {y, x2, ..., xk}.

Exercise 5.1.19 If U ≠ {0} is a subspace of R, show that U = R.

Exercise 5.1.20 Let U be a nonempty subset of Rn. Show that U is a subspace if and only if S2 and S3 hold.

Exercise 5.1.21 If S and T are nonempty sets of vectors in Rn, and if S ⊆ T, show that span {S} ⊆ span {T}.

Exercise 5.1.22 Let U and W be subspaces of Rn. Define their intersection U ∩ W and their sum U + W as follows:

U ∩ W = {x ∈ Rn | x belongs to both U and W}.
U + W = {x ∈ Rn | x is a sum of a vector in U and a vector in W}.
5.2 Independence and Dimension

Some spanning sets are better than others. If U = span {x1, x2, ..., xk} is a subspace of Rn, then every vector in U can be written as a linear combination of the xi in at least one way. Our interest here is in spanning sets where each vector in U has exactly one representation as a linear combination of these vectors.

Linear Independence

Suppose that a vector in U has two such representations, say

r1x1 + r2x2 + · · · + rkxk = s1x1 + s2x2 + · · · + skxk

We are looking for a condition on the set {x1, x2, ..., xk} of vectors that guarantees that this representation is unique; that is, ri = si for each i. Taking all terms to the left side gives

(r1 − s1)x1 + (r2 − s2)x2 + · · · + (rk − sk)xk = 0

so the required condition is that this equation forces all the coefficients ri − si to be zero. With this in mind, we call the set {x1, x2, ..., xk} linearly independent (or simply independent) if it satisfies the following condition:

If t1x1 + t2x2 + · · · + tkxk = 0 then t1 = t2 = · · · = tk = 0
Theorem 5.2.1
If {x1 , x2 , . . . , xk } is an independent set of vectors in Rn , then every vector in
span {x1 , x2 , . . . , xk } has a unique representation as a linear combination of the xi .
It is useful to state the definition of independence in different language. Let us say that a linear
combination vanishes if it equals the zero vector, and call a linear combination trivial if every coefficient
is zero. Then the definition of independence can be compactly stated as follows:
A set of vectors is independent if and only if the only linear combination that vanishes is the
trivial one.
Independence Test
To verify that a set {x1, x2, ..., xk} of vectors in Rn is independent, proceed as follows:

1. Set a linear combination of the vectors equal to zero: t1x1 + t2x2 + · · · + tkxk = 0.
2. Show that ti = 0 for each i (that is, the linear combination is trivial).

Of course, if some nontrivial linear combination vanishes, the vectors are not independent.
Example 5.2.1
Determine whether {(1, 0, −2, 5), (2, 1, 0, −1), (1, 1, 2, 1)} is independent in R4 .
Solution. Suppose a linear combination vanishes: r(1, 0, −2, 5) + s(2, 1, 0, −1) + t(1, 1, 2, 1) = (0, 0, 0, 0). Equating corresponding entries gives the equations

r + 2s + t = 0,  s + t = 0,  −2r + 2t = 0,  and  5r − s + t = 0

The only solution is the trivial one r = s = t = 0 (verify), so these vectors are independent by the independence test.
Example 5.2.2
Show that the standard basis {e1 , e2 , . . . , en } of Rn is independent.
Example 5.2.3
If {x, y} is independent, show that {2x + 3y, x − 5y} is also independent.
Solution. If s(2x + 3y) + t(x − 5y) = 0, collect terms to get (2s + t)x + (3s − 5t)y = 0. Since
{x, y} is independent this combination must be trivial; that is, 2s + t = 0 and 3s − 5t = 0. These
equations have only the trivial solution s = t = 0, as required.
Example 5.2.4
Show that the zero vector in Rn does not belong to any independent set.
Example 5.2.5
Given x in Rn, show that {x} is independent if and only if x ≠ 0.

Solution. A vanishing linear combination from {x} takes the form tx = 0, t in R. This implies that t = 0 because x ≠ 0.
Example 5.2.6
Show that the nonzero rows of a row-echelon matrix R are independent.
Equating second entries shows that t1 = 0, so the condition becomes t2R2 + t3R3 = 0. Now the same argument shows that t2 = 0. Finally, this gives t3R3 = 0 and we obtain t3 = 0.
A set of vectors in Rn is called linearly dependent (or simply dependent) if it is not linearly indepen-
dent, equivalently if some nontrivial linear combination vanishes.
Example 5.2.7
If v and w are nonzero vectors in R3 , show that {v, w} is dependent if and only if v and w are
parallel.
Solution. If v and w are parallel, then one is a scalar multiple of the other (Theorem 4.1.4), say
v = aw for some scalar a. Then the nontrivial linear combination v − aw = 0 vanishes, so {v, w}
is dependent.
Conversely, if {v, w} is dependent, let sv + tw = 0 be nontrivial, say s ≠ 0. Then v = −(t/s)w, so v and w are parallel (by Theorem 4.1.4). A similar argument works if t ≠ 0.
With this we can give a geometric description of what it means for a set {u, v, w} in R3 to be in-
dependent. Note that this requirement means that {v, w} is also independent (av + bw = 0 means that
0u + av + bw = 0), so M = span {v, w} is the plane containing v, w, and 0 (see the discussion preceding
Example 5.1.4). So we assume that {v, w} is independent in the following example.
Example 5.2.8
Let u, v, and w be nonzero vectors in R³ where {v, w} is independent. Show that {u, v, w} is independent if and only if u is not in the plane M = span {v, w}. This is illustrated in the diagrams.

Solution. If {u, v, w} is independent, suppose u is in the plane M = span {v, w}, say u = av + bw, where a and b are in R. Then 1u − av − bw = 0, contradicting the independence of {u, v, w}.

On the other hand, suppose that u is not in M; we must show that {u, v, w} is independent. If ru + sv + tw = 0 where r, s, and t are in R, then r = 0 since otherwise u = −(s/r)v − (t/r)w is in M. But then sv + tw = 0, so s = t = 0 by our assumption. This shows that {u, v, w} is independent, as required.
By the inverse theorem, the following conditions are equivalent for an n × n matrix A:
1. A is invertible.
2. If Ax = 0 where x is in Rn , then x = 0.
3. Ax = b has a solution x for every vector b in Rn .
While condition 1 makes no sense if A is not square, conditions 2 and 3 are meaningful for any matrix A
and, in fact, are related to independence and spanning. Indeed, if c1, c2, ..., cn are the columns of A, and if we write x = [x1, x2, ..., xn]ᵀ, then

Ax = x1c1 + x2c2 + · · · + xncn
by Definition 2.5. Hence the definitions of independence and spanning show, respectively, that condition
2 is equivalent to the independence of {c1 , c2 , . . . , cn } and condition 3 is equivalent to the requirement
that span {c1 , c2 , . . . , cn } = Rm . This discussion is summarized in the following theorem:
Theorem 5.2.2
If A is an m × n matrix, let {c1 , c2 , . . . , cn } denote the columns of A.
1. {c1 , c2 , . . . , cn } is independent in Rm if and only if Ax = 0, x in Rn , implies x = 0.
2. Rm = span {c1 , c2 , . . . , cn } if and only if Ax = b has a solution x for every vector b in Rm .
For a square matrix A, Theorem 5.2.2 characterizes the invertibility of A in terms of the spanning and
independence of its columns (see the discussion preceding Theorem 5.2.2). It is important to be able to
discuss these notions for rows. If x1 , x2 , . . . , xk are 1 × n rows, we define span {x1 , x2 , . . . , xk } to be
the set of all linear combinations of the xi (as matrices), and we say that {x1 , x2 , . . . , xk } is linearly
independent if the only vanishing linear combination is the trivial one (that is, if {xT1 , xT2 , . . . , xTk } is
independent in Rn , as the reader can verify).6
Theorem 5.2.3
The following are equivalent for an n × n matrix A:

1. A is invertible.
2. The columns of A are linearly independent.
3. The columns of A span Rn.
4. The rows of A are linearly independent.
5. The rows of A span the set of all 1 × n rows.
Example 5.2.9
Show that S = {(2, −2, 5), (−3, 1, 1), (2, 7, −4)} is independent in R³.

Solution. Consider the matrix

$$A = \begin{bmatrix} 2 & -2 & 5 \\ -3 & 1 & 1 \\ 2 & 7 & -4 \end{bmatrix}$$

with the vectors in S as its rows. A routine computation shows that det A = −117 ≠ 0, so A is invertible. Hence S is independent by Theorem 5.2.3. Note that Theorem 5.2.3 also shows that R³ = span S.
6 It is best to view columns and rows as just two different notations for ordered n-tuples. This discussion will become redundant in Chapter 6 where we define the general notion of a vector space.
Dimension
It is common geometrical language to say that R3 is 3-dimensional, that planes are 2-dimensional and
that lines are 1-dimensional. The next theorem is a basic tool for clarifying this idea of “dimension”. Its
importance is difficult to exaggerate.
Theorem 5.2.4: Fundamental Theorem
Let U be a subspace of Rn that is spanned by m vectors. If U contains an independent set of k vectors, then k ≤ m.

A set {x1, x2, ..., xm} of vectors in a subspace U of Rn is called a basis⁷ of U if it satisfies the following two conditions:

1. {x1, x2, ..., xm} is linearly independent.
2. U = span {x1, x2, ..., xm}.

Theorem 5.2.5: Invariance Theorem
If {x1, x2, ..., xm} and {y1, y2, ..., yk} are both bases of a subspace U of Rn, then m = k.

Proof. We have k ≤ m by the fundamental theorem because {x1, x2, ..., xm} spans U, and {y1, y2, ..., yk} is independent. Similarly, by interchanging x’s and y’s we get m ≤ k. Hence m = k.

The invariance theorem guarantees that there is no ambiguity in the following definition: if U is a subspace of Rn and {x1, x2, ..., xm} is any basis of U, the number, m, of vectors in the basis is called the dimension of U, and we write

dim U = m
The importance of the invariance theorem is that the dimension of U can be determined by counting the
number of vectors in any basis.8
7
The plural of “basis” is “bases”.
8 We will show in Theorem 5.2.6 that every subspace of Rn does indeed have a basis.
Let {e1 , e2 , . . . , en } denote the standard basis of Rn , that is the set of columns of the identity matrix.
Then Rn = span {e1 , e2 , . . . , en } by Example 5.1.6, and {e1 , e2 , . . . , en } is independent by Example 5.2.2.
Hence it is indeed a basis of Rn in the present terminology, and we have
Example 5.2.10
dim (Rn ) = n and {e1 , e2 , . . . , en } is a basis.
This agrees with our geometric sense that R2 is two-dimensional and R3 is three-dimensional. It also
says that R1 = R is one-dimensional, and {1} is a basis. Returning to subspaces of Rn , we define
dim {0} = 0
This amounts to saying {0} has a basis containing no vectors. This makes sense because 0 cannot belong
to any independent set (Example 5.2.4).
Example 5.2.11
Let U = {[r, s, r]ᵀ | r, s in R}. Show that U is a subspace of R³, find a basis, and calculate dim U.

Solution. Clearly,

$$\begin{bmatrix} r \\ s \\ r \end{bmatrix} = ru + sv \quad\text{where}\quad u = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \ \text{and}\ v = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}$$

It follows that U = span {u, v}, and hence that U is a subspace of R³. Moreover, if ru + sv = 0, then [r, s, r]ᵀ = [0, 0, 0]ᵀ so r = s = 0. Hence {u, v} is independent, and so a basis of U. This means dim U = 2.
Example 5.2.12
Let B = {x1 , x2 , . . . , xn } be a basis of Rn . If A is an invertible n × n matrix, then
D = {Ax1 , Ax2 , . . . , Axn } is also a basis of Rn .
While we have found bases in many subspaces of Rn , we have not yet shown that every subspace has
a basis. This is part of the next theorem, the proof of which is deferred to Section 6.4 (Theorem 6.4.1)
where it will be proved in more generality.
Theorem 5.2.6
Let U ≠ {0} be a subspace of Rn. Then:

1. U has a basis, and dim U ≤ n.
2. Any independent set in U can be enlarged (by adding vectors from the standard basis) to a basis of U.
3. Any spanning set for U can be cut down (by deleting vectors) to a basis of U.
Example 5.2.13
Find a basis of R4 containing S = {u, v} where u = (0, 1, 2, 3) and v = (2, −1, 0, 1).
Solution. By Theorem 5.2.6 we can find such a basis by adding vectors from the standard basis of
R4 to S. If we try e1 = (1, 0, 0, 0), we find easily that {e1 , u, v} is independent. Now add another
vector from the standard basis, say e2 .
Again we find that B = {e1 , e2 , u, v} is independent. Since B has 4 = dim R4 vectors, then B
must span R4 by Theorem 5.2.7 below (or simply verify it directly). Hence B is a basis of R4 .
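The enlargement procedure of Example 5.2.13 is a natural greedy loop: try each standard basis vector and keep it only if it raises the rank. This sketch assumes Python with numpy (not part of the text).

```python
# Enlarging {u, v} to a basis of R^4, as in Example 5.2.13.
import numpy as np

u = np.array([0.0, 1, 2, 3])
v = np.array([2.0, -1, 0, 1])
basis = [u, v]
for e in np.eye(4):                  # try e1, e2, e3, e4 in order
    trial = basis + [e]
    if np.linalg.matrix_rank(np.column_stack(trial)) == len(trial):
        basis = trial                # keep e only if the set stays independent
    if len(basis) == 4:
        break
print(len(basis))   # 4: a basis of R^4 containing u and v
```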
Theorem 5.2.7
Let U be a subspace of Rn where dim U = m and let B = {x1 , x2 , . . . , xm } be a set of m vectors in
U . Then B is independent if and only if B spans U .
Proof. Suppose B is independent. If B does not span U then, by Theorem 5.2.6, B can be enlarged to a
basis of U containing more than m vectors. This contradicts the invariance theorem because dim U = m,
so B spans U . Conversely, if B spans U but is not independent, then B can be cut down to a basis of U
containing fewer than m vectors, again a contradiction. So B is independent, as required.
As we saw in Example 5.2.13, Theorem 5.2.7 is a “labour-saving” result. It asserts that, given a
subspace U of dimension m and a set B of exactly m vectors in U , to prove that B is a basis of U it suffices
to show either that B spans U or that B is independent. It is not necessary to verify both properties.
Theorem 5.2.8
Let U ⊆ W be subspaces of Rn. Then:

1. dim U ≤ dim W.
2. If dim U = dim W, then U = W.

Proof. Write dim W = k, and let B be a basis of U.

1. If dim U > k, then B is an independent set in W containing more than k vectors, contradicting the fundamental theorem. So dim U ≤ k = dim W.
2. If dim U = k, then B is an independent set of k = dim W vectors in W, so B spans W by Theorem 5.2.7. Hence W = span B = U.
It follows from Theorem 5.2.8 that if U is a subspace of Rn , then dim U is one of the integers 0, 1, 2, . . . , n,
and that:
dim U = 0 if and only if U = {0},
dim U = n if and only if U = Rn
The other subspaces of Rn are called proper. The following example uses Theorem 5.2.8 to show that the
proper subspaces of R2 are the lines through the origin, while the proper subspaces of R3 are the lines and
planes through the origin.
Example 5.2.14

1. If U is a subspace of R² or R³, then dim U = 1 if and only if U is a line through the origin.
2. If U is a subspace of R³, then dim U = 2 if and only if U is a plane through the origin.

Proof.

1. Since dim U = 1, let {u} be a basis of U. Then U = span {u} = {tu | t in R}, so U is the line through the origin with direction vector u. Conversely each line L with direction vector d ≠ 0 has the form L = {td | t in R}. Hence {d} is a basis of L, so L has dimension 1.
2. If U ⊆ R3 has dimension 2, let {v, w} be a basis of U . Then v and w are not parallel (by Exam-
ple 5.2.7) so n = v × w 6= 0. Let P = {x in R3 | n · x = 0} denote the plane through the origin with
normal n. Then P is a subspace of R3 (Example 5.1.1) and both v and w lie in P (they are orthogonal
to n), so U = span {v, w} ⊆ P by Theorem 5.1.1. Hence
U ⊆ P ⊆ R3
Since dim U = 2 and dim (R3 ) = 3, it follows from Theorem 5.2.8 that dim P = 2 or 3, whence
P = U or R3 . But P 6= R3 (for example, n is not in P) and so U = P is a plane through the origin.
Conversely, if U is a plane through the origin, then dim U = 0, 1, 2, or 3 by Theorem 5.2.8. But
dim U 6= 0 or 3 because U 6= {0} and U 6= R3 , and dim U 6= 1 by (1). So dim U = 2.
Note that this proof shows that if v and w are nonzero, nonparallel vectors in R3 , then span {v, w} is the
plane with normal n = v × w. We gave a geometrical verification of this fact in Section 5.1.
d. {x + y, y + z, z + w, w + x}

Exercise 5.2.3 Find a basis and calculate the dimension of the following subspaces of R⁴.

a. span {(1, −1, 2, 0), (2, 3, 0, 3), (1, 9, −6, 6)}
b. span {(2, 1, 0, −1), (−1, 1, 1, 1), (2, 7, 4, 1)}
c. span {(−1, 2, 1, 0), (2, 0, 3, −1), (4, 4, 11, −3), (3, −2, 2, −1)}
d. span {(−2, 0, 3, 1), (1, 2, −1, 0), (−2, 8, 5, 3), (−1, 2, 2, 1)}

Exercise 5.2.4 Find a basis and calculate the dimension of the following subspaces of R⁴.

a. U = {[a, a + b, a − b, b]ᵀ | a and b in R}

Exercise 5.2.5 Suppose that {x, y, z, w} is a basis of R⁴. Show that:

a. {x + aw, y, z, w} is also a basis of R⁴ for any choice of the scalar a.
b. {x + w, y + w, z + w, w} is also a basis of R⁴.
c. {x, x + y, x + y + z, x + y + z + w} is also a basis of R⁴.

Exercise 5.2.6 Use Theorem 5.2.3 to determine if the following sets of vectors are a basis of the indicated space.

a. {(3, −1), (2, 2)} in R²
b. {(1, 1, −1), (1, −1, 1), (0, 0, 1)} in R³
c. {(−1, 1, −1), (1, −1, 2), (0, 0, 1)} in R³
d. {(5, 2, −1), (1, 0, 1), (3, −1, 0)} in R³
e. {(2, 1, −1, 3), (1, 1, 0, 2), (0, 1, 0, −3), (−1, 2, 3, 1)} in R⁴
f. {(1, 0, −2, 5), (4, 4, −3, 2), (0, 1, 0, −3), (1, 3, 3, −10)} in R⁴

Exercise 5.2.7 In each case show that the statement is true or give an example showing that it is false.

a. If {x, y} is independent, then {x, y, x + y} is independent.
b. If {x, y, z} is independent, then {y, z} is independent.
c. If {y, z} is dependent, then {x, y, z} is dependent for any x.
d. If all of x1, x2, ..., xk are nonzero, then {x1, x2, ..., xk} is independent.
e. If one of x1, x2, ..., xk is zero, then {x1, x2, ..., xk} is dependent.
f. If ax + by + cz = 0, then {x, y, z} is independent.
g. If {x, y, z} is independent, then ax + by + cz = 0 for some a, b, and c in R.
h. If {x1, x2, ..., xk} is dependent, then t1x1 + t2x2 + · · · + tkxk = 0 for some numbers ti in R not all zero.
i. If {x1, x2, ..., xk} is independent, then t1x1 + t2x2 + · · · + tkxk = 0 for some ti in R.
j. Every non-empty subset of a linearly independent set is again linearly independent.
k. Every set containing a spanning set is again a spanning set.

Exercise 5.2.8 If A is an n × n matrix, show that det A = 0 if and only if some column of A is a linear combination of the other columns.

Exercise 5.2.9 Let {x, y, z} be a linearly independent set in R⁴. Show that {x, y, z, ek} is a basis of R⁴ for some ek in the standard basis {e1, e2, e3, e4}.

Exercise 5.2.10 If {x1, x2, x3, x4, x5, x6} is an independent set of vectors, show that the subset {x2, x3, x5} is also independent.

Exercise 5.2.11 Let A be an m × n matrix, and let b1, b2, ..., bk be vectors in Rm such that the system Ax = bi has a solution xi for each i. If {b1, b2, b3, ..., bk} is independent in Rm, show that {x1, x2, x3, ..., xk} is independent in Rn.

Exercise 5.2.12 If {x1, x2, x3, ..., xk} is independent, show that {x1, x1 + x2, x1 + x2 + x3, ..., x1 + x2 + · · · + xk} is also independent.

Exercise 5.2.13 If {y, x1, x2, x3, ..., xk} is independent, show that {y + x1, y + x2, y + x3, ..., y + xk} is also independent.

Exercise 5.2.14 If {x1, x2, ..., xk} is independent in Rn, and if y is not in span {x1, x2, ..., xk}, show that {x1, x2, ..., xk, y} is independent.

Exercise 5.2.15 If A and B are matrices and the columns of AB are independent, show that the columns of B are independent.

Exercise 5.2.16 Suppose that {x, y} is a basis of R², and let A = [[a, b], [c, d]].

a. If A is invertible, show that {ax + by, cx + dy} is a basis of R².
b. If {ax + by, cx + dy} is a basis of R², show that A is invertible.

Exercise 5.2.17 Let A denote an m × n matrix.

a. Show that null A = null (UA) for every invertible m × m matrix U.
b. Show that dim (null A) = dim (null (AV)) for every invertible n × n matrix V. [Hint: If {x1, x2, ..., xk} is a basis of null A, show that {V⁻¹x1, V⁻¹x2, ..., V⁻¹xk} is a basis of null (AV).]

Exercise 5.2.18 Let A denote an m × n matrix.

a. Show that im A = im (AV) for every invertible n × n matrix V.
b. Show that dim (im A) = dim (im (UA)) for every invertible m × m matrix U. [Hint: If {y1, y2, ..., yk} is a basis of im (UA), show that {U⁻¹y1, U⁻¹y2, ..., U⁻¹yk} is a basis of im A.]

Exercise 5.2.19 Let U and W denote subspaces of Rn, and assume that U ⊆ W. If dim W = 1, show that either U = {0} or U = W.
5.3 Orthogonality
Length and orthogonality are basic concepts in geometry and, in R2 and R3 , they both can be defined
using the dot product. In this section we extend the dot product to vectors in Rn , and so endow Rn with
euclidean geometry. We then introduce the idea of an orthogonal basis—one of the most useful concepts
in linear algebra, and begin exploring some of its applications.
If x = (x1 , x2 , . . . , xn ) and y = (y1 , y2 , . . . , yn ) are two n-tuples in Rn , recall that their dot product was
defined in Section 2.2 as follows:
x · y = x1 y1 + x2 y2 + · · · + xn yn
Observe that if x and y are written as columns then x · y = xT y is a matrix product (and x · y = xyT if they
are written as rows). Here x · y is a 1 × 1 matrix, which we take to be a number.
If x = (x1, x2, ..., xn) is a vector in Rn, its length ‖x‖ is defined by ‖x‖ = √(x·x) = √(x1² + x2² + · · · + xn²). A vector x of length 1 is called a unit vector. If x ≠ 0, then ‖x‖ ≠ 0 and it follows easily that (1/‖x‖)x is a unit vector (see Theorem 5.3.1 below), a fact that we shall use later.
Example 5.3.1
If x = (1, −1, −3, 1) and y = (2, 1, 1, 0) in R⁴, then x · y = 2 − 1 − 3 + 0 = −2 and

‖x‖ = √(1 + 1 + 9 + 1) = √12 = 2√3

Hence (1/(2√3))x is a unit vector; similarly (1/√6)y is a unit vector.
These definitions agree with those in R2 and R3 , and many properties carry over to Rn :
Theorem 5.3.1
Let x, y, and z denote vectors in Rn. Then:

1. x · y = y · x.
2. x · (y + z) = x · y + x · z.
3. (ax) · y = a(x · y) = x · (ay) for all scalars a.
4. ‖x‖² = x · x.
5. ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.
6. ‖ax‖ = |a|‖x‖ for all scalars a.

Proof. (1), (2), and (3) follow from matrix arithmetic because x · y = xᵀy; (4) is clear from the definition; and (6) is a routine verification since |a| = √(a²). If x = (x1, x2, ..., xn), then ‖x‖ = √(x1² + x2² + · · · + xn²), so ‖x‖ = 0 if and only if x1² + x2² + · · · + xn² = 0. Since each xi is a real number this happens if and only if xi = 0 for each i; that is, if and only if x = 0. This proves (5).
Because of Theorem 5.3.1, computations with dot products in Rn are similar to those in R3 . In partic-
ular, the dot product
(x1 + x2 + · · · + xm ) · (y1 + y2 + · · · + yk )
equals the sum of mk terms, xi · y j , one for each choice of i and j. For example:
Example 5.3.2
Show that kx + yk2 = kxk2 + 2(x · y) + kyk2 for any x and y in Rn .
kx + yk2 = (x + y) · (x + y) = x · x + x · y + y · x + y · y
= kxk2 + 2(x · y) + kyk2
Example 5.3.3
Suppose that Rn = span {f1 , f2 , . . . , fk } for some vectors fi . If x · fi = 0 for each i where x is in Rn ,
show that x = 0.
Solution. We show x = 0 by showing that kxk = 0 and using (5) of Theorem 5.3.1. Since the fi
span Rn , write x = t1 f1 + t2 f2 + · · · + tk fk where the ti are in R. Then
kxk2 = x · x = x · (t1f1 + t2 f2 + · · · + tk fk )
= t1(x · f1 ) + t2(x · f2 ) + · · · + tk (x · fk )
= t1(0) + t2(0) + · · · + tk (0)
=0
We saw in Section 4.2 that if u and v are nonzero vectors in R³, then

(u · v)/(‖u‖‖v‖) = cos θ

where θ is the angle between u and v. Since |cos θ| ≤ 1 for any angle θ, this shows that |u · v| ≤ ‖u‖‖v‖. In this form the result holds in Rn.

Theorem 5.3.2: Cauchy Inequality⁹
If x and y are vectors in Rn, then

|x · y| ≤ ‖x‖‖y‖

Moreover, equality holds if and only if one of x and y is a multiple of the other.

Proof. The inequality holds if x = 0 or y = 0 (in fact it is equality). Otherwise, write ‖x‖ = a > 0 and ‖y‖ = b > 0 for convenience. A computation like that preceding Example 5.3.2 gives
kbx − ayk2 = 2ab(ab − x · y) and kbx + ayk2 = 2ab(ab + x · y) (5.1)
It follows that ab−x·y ≥ 0 and ab+x·y ≥ 0, and hence that −ab ≤ x·y ≤ ab. Hence |x·y| ≤ ab = kxkkyk,
proving the Cauchy inequality.
If equality holds, then |x · y| = ab, so x · y = ab or x · y = −ab. Hence Equation 5.1 shows that
bx − ay = 0 or bx + ay = 0, so one of x and y is a multiple of the other (even if a = 0 or b = 0).
The Cauchy inequality is equivalent to (x · y)2 ≤ kxk2 kyk2 . In R5 this becomes
(x1 y1 + x2 y2 + x3 y3 + x4 y4 + x5 y5 )2 ≤ (x21 + x22 + x23 + x24 + x25 )(y21 + y22 + y23 + y24 + y25 )
for all xi and yi in R.
There is an important consequence of the Cauchy inequality. Given x and y in Rn , use Example 5.3.2
and the fact that x · y ≤ kxkkyk to compute
kx + yk2 = kxk2 + 2(x · y) + kyk2 ≤ kxk2 + 2kxkkyk + kyk2 = (kx + yk)2
Taking positive square roots gives:

Corollary 5.3.1: Triangle Inequality
If x and y are vectors in Rⁿ, then ‖x + y‖ ≤ ‖x‖ + ‖y‖.

The reason for the name comes from the observation that in R³ the inequality asserts that the sum of the lengths of two sides of a triangle is not less than the length of the third side. This is illustrated in the diagram. [Diagram: vectors v and w placed head to tail, with v + w forming the third side of a triangle.]
9 Augustin Louis Cauchy (1789–1857) was born in Paris and became a professor at the École Polytechnique at the age of
26. He was one of the great mathematicians, producing more than 700 papers, and is best remembered for his work in analysis
in which he established new standards of rigour and founded the theory of functions of a complex variable. He was a devout
Catholic with a long-term interest in charitable work, and he was a royalist, following King Charles X into exile in Prague after
he was deposed in 1830. Theorem 5.3.2 first appeared in his 1812 memoir on determinants.
If x and y are two vectors in Rⁿ, the distance between x and y is defined by
d(x, y) = ‖x − y‖
The motivation again comes from R³, as is clear in the diagram. [Diagram: vectors v and w, with v − w joining their tips.] This distance function has all the intuitive properties of distance in R³, including another version of the triangle inequality.
Theorem 5.3.3
If x, y, and z are three vectors in Rⁿ we have:
1. d(x, y) ≥ 0 for all x and y.
2. d(x, y) = 0 if and only if x = y.
3. d(x, y) = d(y, x) for all x and y.
4. d(x, z) ≤ d(x, y) + d(y, z) for all x, y, and z. (Triangle inequality.)
Proof. (1) and (2) restate part (5) of Theorem 5.3.1 because d(x, y) = ‖x − y‖, and (3) follows because
‖u‖ = ‖−u‖ for every vector u in Rⁿ. To prove (4) use the Corollary to Theorem 5.3.2:
d(x, z) = ‖x − z‖ = ‖(x − y) + (y − z)‖ ≤ ‖x − y‖ + ‖y − z‖ = d(x, y) + d(y, z)

A set {x₁, x₂, ..., xₖ} of nonzero vectors in Rⁿ is called an orthogonal set if xᵢ · xⱼ = 0 whenever i ≠ j.¹⁰ If, in addition, ‖xᵢ‖ = 1 for each i, the set is called an orthonormal set.
¹⁰ The reason for insisting that orthogonal sets consist of nonzero vectors is that we will be primarily concerned with orthogonal bases.
Example 5.3.4
The standard basis {e1 , e2 , . . . , en } is an orthonormal set in Rn .
Example 5.3.5
If {x1 , x2 , . . . , xk } is orthogonal, so also is {a1 x1 , a2 x2 , . . . , ak xk } for any nonzero scalars ai .
If x ≠ 0, it follows from item (6) of Theorem 5.3.1 that (1/‖x‖)x is a unit vector; that is, it has length 1.
Example 5.3.6
If f₁ = (1, 1, 1, −1)ᵀ, f₂ = (1, 0, 1, 2)ᵀ, f₃ = (−1, 0, 1, 0)ᵀ, and f₄ = (−1, 3, −1, 1)ᵀ, then {f₁, f₂, f₃, f₄} is an orthogonal set in R⁴, as is easily verified. After normalizing, the corresponding orthonormal set is
{(1/2)f₁, (1/√6)f₂, (1/√2)f₃, (1/(2√3))f₄}

If v and w are orthogonal vectors in R³, Pythagoras' theorem asserts that ‖v + w‖² = ‖v‖² + ‖w‖², as in the diagram. In this form the result holds for any orthogonal set in Rⁿ.

Theorem 5.3.4: Pythagoras' Theorem
If {x₁, x₂, ..., xₖ} is an orthogonal set in Rⁿ, then
‖x₁ + x₂ + · · · + xₖ‖² = ‖x₁‖² + ‖x₂‖² + · · · + ‖xₖ‖²
Theorem 5.3.5
Every orthogonal set in Rn is linearly independent.
Proof. Let {x₁, x₂, ..., xₖ} be an orthogonal set in Rⁿ and suppose a linear combination vanishes, say
t₁x₁ + t₂x₂ + · · · + tₖxₖ = 0. Then
0 = x₁ · 0 = x₁ · (t₁x₁ + t₂x₂ + · · · + tₖxₖ)
  = t₁(x₁ · x₁) + t₂(x₁ · x₂) + · · · + tₖ(x₁ · xₖ)
  = t₁‖x₁‖² + t₂(0) + · · · + tₖ(0)
  = t₁‖x₁‖²
Since ‖x₁‖² ≠ 0 (the vectors in an orthogonal set are nonzero), this implies t₁ = 0. Similarly tᵢ = 0 for each i, so the set is linearly independent.

Theorem 5.3.6
Let {f₁, f₂, ..., fₘ} be an orthogonal basis of a subspace U of Rⁿ. If x is any vector in U, we have
x = ((x · f₁)/‖f₁‖²) f₁ + ((x · f₂)/‖f₂‖²) f₂ + · · · + ((x · fₘ)/‖fₘ‖²) fₘ
Proof. Since {f1 , f2 , . . . , fm } spans U , we have x = t1 f1 +t2 f2 + · · · +tm fm where the ti are scalars. To find
t1 we take the dot product of both sides with f1 :
x · f1 = (t1f1 + t2 f2 + · · · + tm fm ) · f1
= t1 (f1 · f1 ) + t2 (f2 · f1 ) + · · · + tm (fm · f1 )
= t1 kf1 k2 + t2 (0) + · · · + tm (0)
= t1 kf1 k2
Since f₁ ≠ 0, this gives t₁ = (x · f₁)/‖f₁‖². Similarly, tᵢ = (x · fᵢ)/‖fᵢ‖² for each i.
The expansion in Theorem 5.3.6 of x as a linear combination of the orthogonal basis {f₁, f₂, ..., fₘ} is
called the Fourier expansion of x, and the coefficients tᵢ = (x · fᵢ)/‖fᵢ‖² are called the Fourier coefficients. Note
that if {f₁, f₂, ..., fₘ} is actually orthonormal, then tᵢ = x · fᵢ for each i. We will have a great deal more to
say about this in Section 10.5.
Example 5.3.7
Expand x = (a, b, c, d) as a linear combination of the orthogonal basis {f1 , f2 , f3 , f4 } of R4 given
in Example 5.3.6.
Solution. We have ‖f₁‖² = 4, ‖f₂‖² = 6, ‖f₃‖² = 2, and ‖f₄‖² = 12, so the Fourier coefficients are
t₁ = (x · f₁)/‖f₁‖² = (1/4)(a + b + c − d)        t₃ = (x · f₃)/‖f₃‖² = (1/2)(−a + c)
t₂ = (x · f₂)/‖f₂‖² = (1/6)(a + c + 2d)           t₄ = (x · f₄)/‖f₄‖² = (1/12)(−a + 3b − c + d)
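Fourier coefficients are easy to compute mechanically. The following sketch is ours (assuming Python with NumPy); it expands a sample vector in the orthogonal basis of Example 5.3.6 and confirms that the expansion reproduces the vector.

import numpy as np

# The orthogonal basis {f1, f2, f3, f4} of R^4 from Example 5.3.6
f = [np.array(v, dtype=float) for v in
     [(1, 1, 1, -1), (1, 0, 1, 2), (-1, 0, 1, 0), (-1, 3, -1, 1)]]

x = np.array([1.0, 2.0, 3.0, 4.0])    # the vector (a, b, c, d) = (1, 2, 3, 4)

# Fourier coefficients t_i = (x . f_i) / ||f_i||^2 (Theorem 5.3.6)
t = [(x @ fi) / (fi @ fi) for fi in f]
print(t)                              # [0.5, 2.0, 1.0, 0.5]

x_rebuilt = sum(ti * fi for ti, fi in zip(t, f))
print(np.allclose(x, x_rebuilt))      # True: the expansion recovers x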
A natural question arises here: Does every subspace U of Rn have an orthogonal basis? The answer is
“yes”; in fact, there is a systematic procedure, called the Gram-Schmidt algorithm, for turning any basis
of U into an orthogonal one. This leads to a definition of the projection onto a subspace U that generalizes
the projection along a vector used in R2 and R3 . All this is discussed in Section 8.1.
Exercises for 5.3

We often write vectors in Rⁿ as row n-tuples.

Exercise 5.3.1 Obtain orthonormal bases of R³ by normalizing the following.

Exercise 5.3.2 In each case, show that the set of vectors is orthogonal in R⁴.

a. {(1, −1, 2, 5), (4, 1, 1, −1), (−7, 28, 5, 5)}
b. {(2, −1, 4, 5), (0, −1, 1, −1), (0, 3, 2, −1)}

Exercise 5.3.3 In each case, show that B is an orthogonal basis of R³ and use Theorem 5.3.6 to expand x = (a, b, c) as a linear combination of the basis vectors.

d. B = {(1, 1, 1), (1, −1, 0), (1, 1, −2)}

Exercise 5.3.4 In each case, write x as a linear combination of the orthogonal basis of the subspace U.

a. x = (13, −20, 15); U = span {(1, −2, 3), (−1, 1, 1)}
b. x = (14, 1, −8, 5); U = span {(2, −1, 0, 3), (2, 1, −2, −1)}

Exercise 5.3.5 In each case, find all (a, b, c, d) in R⁴ such that the given set is orthogonal.

a. {(1, 2, 1, 0), (1, −1, 1, 3), (2, −1, 0, −1), (a, b, c, d)}
b. {(1, 0, −1, 1), (2, 1, 1, −1), (1, −3, 1, 0), (a, b, c, d)}

Exercise 5.3.6 If ‖x‖ = 3, ‖y‖ = 1, and x · y = −2, compute:

a. ‖3x − 5y‖
b. ‖2x + 7y‖
c. (3x − y) · (2y − x)
d. (x − 2y) · (3x + 5y)

Exercise 5.3.7 In each case either show that the statement is true or give an example showing that it is false.

a. Every independent set in Rⁿ is orthogonal.
b. If {x, y} is an orthogonal set in Rⁿ, then {x, x + y} is also orthogonal.
c. If {x, y} and {z, w} are both orthogonal in Rⁿ, then {x, y, z, w} is also orthogonal.
d. If {x₁, x₂} and {y₁, y₂, y₃} are both orthogonal and xᵢ · yⱼ = 0 for all i and j, then {x₁, x₂, y₁, y₂, y₃} is orthogonal.
e. If {x₁, x₂, ..., xₙ} is orthogonal in Rⁿ, then Rⁿ = span {x₁, x₂, ..., xₙ}.
f. If x ≠ 0 in Rⁿ, then {x} is an orthogonal set.

Exercise 5.3.8 Let v denote a nonzero vector in Rⁿ.

a. Show that P = {x in Rⁿ | x · v = 0} is a subspace of Rⁿ.
b. Show that Rv = {tv | t in R} is a subspace of Rⁿ.
c. Describe P and Rv geometrically when n = 3.

Exercise 5.3.9 If A is an m × n matrix with orthonormal columns, show that AᵀA = Iₙ. [Hint: If c₁, c₂, ..., cₙ are the columns of A, show that column j of AᵀA has entries c₁ · cⱼ, c₂ · cⱼ, ..., cₙ · cⱼ.]

Exercise 5.3.10 Use the Cauchy inequality to show that √(xy) ≤ ½(x + y) for all x ≥ 0 and y ≥ 0. Here √(xy) and ½(x + y) are called, respectively, the geometric mean and arithmetic mean of x and y. [Hint: Use x = (√x, √y) and y = (√y, √x).]

Exercise 5.3.11 Use the Cauchy inequality to prove that:

a. (r₁ + r₂ + · · · + rₙ)² ≤ n(r₁² + r₂² + · · · + rₙ²) for all rᵢ in R and all n ≥ 1.
b. r₁r₂ + r₁r₃ + r₂r₃ ≤ r₁² + r₂² + r₃² for all r₁, r₂, and r₃ in R. [Hint: See part (a).]

Exercise 5.3.12

a. Show that x and y are orthogonal in Rⁿ if and only if ‖x + y‖ = ‖x − y‖.
b. Show that x + y and x − y are orthogonal in Rⁿ if and only if ‖x‖ = ‖y‖.

Exercise 5.3.13

a. Show that ‖x + y‖² = ‖x‖² + ‖y‖² if and only if x is orthogonal to y.
b. If x = (1, 1)ᵀ, y = (1, 0)ᵀ, and z = (−2, 3)ᵀ, show that ‖x + y + z‖² = ‖x‖² + ‖y‖² + ‖z‖² but x · y ≠ 0, x · z ≠ 0, and y · z ≠ 0.

Exercise 5.3.14

a. Show that x · y = ¼[‖x + y‖² − ‖x − y‖²] for all x, y in Rⁿ.
b. Show that ‖x‖² + ‖y‖² = ½[‖x + y‖² + ‖x − y‖²] for all x, y in Rⁿ.

Exercise 5.3.15 If A is n × n, show that every eigenvalue of AᵀA is nonnegative. [Hint: Compute ‖Ax‖² where x is an eigenvector.]

Exercise 5.3.16 If Rⁿ = span {x₁, ..., xₘ} and x · xᵢ = 0 for all i, show that x = 0. [Hint: Show ‖x‖ = 0.]

Exercise 5.3.17 If Rⁿ = span {x₁, ..., xₘ} and x · xᵢ = y · xᵢ for all i, show that x = y. [Hint: Exercise 5.3.16.]

Exercise 5.3.18 Let {e₁, ..., eₙ} be an orthogonal basis of Rⁿ. Given x and y in Rⁿ, show that
x · y = (x · e₁)(y · e₁)/‖e₁‖² + · · · + (x · eₙ)(y · eₙ)/‖eₙ‖²
5.4 Rank of a Matrix
In this section we use the concept of dimension to clarify the definition of the rank of a matrix given in
Section 1.2, and to study its properties. This requires that we deal with rows and columns in the same way.
While it has been our custom to write the n-tuples in Rn as columns, in this section we will frequently
write them as rows. Subspaces, independence, spanning, and dimension are defined for rows using matrix
operations, just as for columns. If A is an m × n matrix, we define:
1. The column space, col A, of A is the subspace of Rᵐ spanned by the columns of A.
2. The row space, row A, of A is the subspace of Rⁿ spanned by the rows of A.
Lemma 5.4.1
Let A and B denote m × n matrices.
1. If A → B by elementary row operations, then row A = row B.
2. If A → B by elementary column operations, then col A = col B.
Proof. We prove (1); the proof of (2) is analogous. It is enough to do it in the case when A → B by a single
row operation. Let R₁, R₂, ..., Rₘ denote the rows of A. The row operation A → B either interchanges
two rows, multiplies a row by a nonzero constant, or adds a multiple of a row to a different row. We leave
the first two cases to the reader. In the last case, suppose that a times row p is added to row q where p < q.
Then the rows of B are R₁, ..., Rp, ..., Rq + aRp, ..., Rₘ, and Theorem 5.1.1 shows that
span {R₁, ..., Rp, ..., Rq, ..., Rₘ} = span {R₁, ..., Rp, ..., Rq + aRp, ..., Rₘ}
That is, row A = row B.
Lemma 5.4.2
If R is a row-echelon matrix, then
1. The nonzero rows of R are a basis of row R.
2. The columns of R containing leading ones are a basis of col R.
Proof. The rows of R are independent by Example 5.2.6, and they span row R by definition. This proves
(1).
Example 5.4.1
Find a basis of U = span {(1, 1, 2, 3), (2, 4, 1, 0), (1, 5, −4, −9)}.
Solution. U is the row space of
[ 1 1  2  3 ]
[ 2 4  1  0 ]
[ 1 5 −4 −9 ]
This matrix has row-echelon form
[ 1 1  2    3 ]
[ 0 1 −3/2 −3 ]
[ 0 0  0    0 ]
so {(1, 1, 2, 3), (0, 1, −3/2, −3)} is a basis of U by Lemma 5.4.2.
Note that {(1, 1, 2, 3), (0, 2, −3, −6)} is another basis that avoids fractions.
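Such row-space computations can be checked with a computer algebra system. The sketch below is ours, assuming Python with the SymPy library; note that rref returns the reduced row-echelon form, whose nonzero rows are also a basis of U by Lemma 5.4.2.

from sympy import Matrix

A = Matrix([[1, 1, 2, 3],
            [2, 4, 1, 0],
            [1, 5, -4, -9]])

R, pivot_cols = A.rref()   # reduced row-echelon form and pivot columns
print(R)                   # the nonzero rows of R form a basis of row A
print(A.rank())            # 2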
Lemmas 5.4.1 and 5.4.2 are enough to prove the following fundamental theorem.

Theorem 5.4.1: Rank Theorem
Let A denote any m × n matrix of rank r, and suppose that A is carried to a row-echelon matrix R by row operations. Then:
1. The r nonzero rows of R are a basis of row A, so dim (row A) = r.
2. If the leading 1s lie in columns j₁, j₂, ..., jᵣ of R, then columns j₁, j₂, ..., jᵣ of A are a basis of col A, so dim (col A) = r.

Proof. We have row A = row R by Lemma 5.4.1, so (1) follows from Lemma 5.4.2. Moreover, R = UA
for some invertible matrix U by Theorem 2.5.1. Now write A = [c₁ c₂ ··· cₙ] where c₁, c₂, ..., cₙ
are the columns of A. Then
R = UA = U[c₁ c₂ ··· cₙ] = [Uc₁ Uc₂ ··· Ucₙ]
Thus, in the notation of (2), the set B = {Ucⱼ₁, Ucⱼ₂, ..., Ucⱼᵣ} is a basis of col R by Lemma 5.4.2. So, to
prove (2) and the fact that dim (col A) = r, it is enough to show that D = {cⱼ₁, cⱼ₂, ..., cⱼᵣ} is a basis of
col A. First, D is linearly independent because U is invertible (verify), so we show that, for each j, column
cⱼ is a linear combination of the cⱼᵢ. But Ucⱼ is column j of R, and so is a linear combination of the Ucⱼᵢ,
say Ucⱼ = a₁Ucⱼ₁ + a₂Ucⱼ₂ + · · · + aᵣUcⱼᵣ where each aᵢ is a real number.
Since U is invertible, it follows that cⱼ = a₁cⱼ₁ + a₂cⱼ₂ + · · · + aᵣcⱼᵣ and the proof is complete.
Example 5.4.2
Compute the rank of A = [ 1 2 2 −1 ]
                        [ 3 6 5  0 ]
                        [ 1 2 1  2 ]
and find bases for row A and col A.
Solution. The reduction of A to row-echelon form is
[ 1 2 2 −1 ]      [ 1 2 2 −1 ]
[ 3 6 5  0 ]  →   [ 0 0 1 −3 ]
[ 1 2 1  2 ]      [ 0 0 0  0 ]
so rank A = 2 and {(1, 2, 2, −1), (0, 0, 1, −3)} is a basis of row A. Since the leading 1s lie in columns 1 and 3, Theorem 5.4.1 shows that columns 1 and 3 of A, namely {(1, 3, 1)ᵀ, (2, 5, 1)ᵀ}, are a basis of col A.
Theorem 5.4.1 has several important consequences. The first, Corollary 5.4.1 below, follows because
the rows of A are independent (respectively span row A) if and only if their transposes are independent
(respectively span col A).
Corollary 5.4.1
If A is any matrix, then rank A = rank (AT ).
If A is an m × n matrix, we have col A ⊆ Rm and row A ⊆ Rn . Hence Theorem 5.2.8 shows that
dim ( col A) ≤ dim (Rm ) = m and dim ( row A) ≤ dim (Rn ) = n. Thus Theorem 5.4.1 gives:
Corollary 5.4.2
If A is an m × n matrix, then rank A ≤ m and rank A ≤ n.
Corollary 5.4.3
rank A = rank (UA) = rank (AV ) whenever U and V are invertible.
Proof. Lemma 5.4.1 gives rank A = rank (UA). Using this and Corollary 5.4.1 we get
rank (AV) = rank [(AV)ᵀ] = rank (VᵀAᵀ) = rank (Aᵀ) = rank A
Lemma 5.4.3
Let A, U, and V be matrices of sizes m × n, p × m, and n × q respectively.
1. col (AV) ⊆ col A, with equality if VV′ = Iₙ for some matrix V′.
2. row (UA) ⊆ row A, with equality if U′U = Iₘ for some matrix U′.
Proof. For (1), write V = [v₁, v₂, ..., vq] where vⱼ is column j of V. Then we have
AV = [Av₁, Av₂, ..., Avq], and each Avⱼ is in col A by Definition 2.4. It follows that col (AV) ⊆ col A.
If VV′ = Iₙ, we obtain col A = col [(AV)V′] ⊆ col (AV) in the same way. This proves (1).
As to (2), we have col [(UA)ᵀ] = col (AᵀUᵀ) ⊆ col (Aᵀ) by (1), from which row (UA) ⊆ row A. If
U′U = Iₘ, this is equality as in the proof of (1).
Corollary 5.4.4
If A is m × n and B is n × m, then rank AB ≤ rank A and rank AB ≤ rank B.
Proof. By Lemma 5.4.3, col (AB) ⊆ col A and row (AB) ⊆ row B, so Theorem 5.4.1 applies.
In Section 5.1 we discussed two other subspaces associated with an m × n matrix A: the null space
null (A) and the image space im (A), defined by
null (A) = {x in Rⁿ | Ax = 0}  and  im (A) = {Ax | x in Rⁿ}
Using rank, there are simple ways to find bases of these spaces. If A has rank r, we have im (A) = col (A)
by Example 5.1.8, so dim [ im (A)] = dim [ col (A)] = r. Hence Theorem 5.4.1 provides a method of finding
a basis of im (A). This is recorded as part (2) of the following theorem.
Theorem 5.4.2
Let A denote an m × n matrix of rank r. Then
1. The n − r basic solutions to the system Ax = 0 provided by the gaussian algorithm are a
basis of null (A), so dim [ null (A)] = n − r.
2. Theorem 5.4.1 provides a basis of im (A) = col (A), and dim [ im (A)] = r.
Proof. It remains to prove (1). We already know (Theorem 2.2.1) that null (A) is spanned by the n − r
basic solutions of Ax = 0. Hence using Theorem 5.2.7, it suffices to show that dim [ null (A)] = n − r. So
let {x1 , . . . , xk } be a basis of null (A), and extend it to a basis {x1 , . . . , xk , xk+1 , . . . , xn } of Rn (by
Theorem 5.2.6). It is enough to show that {Axk+1 , . . . , Axn } is a basis of im (A); then n − k = r by the
above and so k = n − r as required.
Spanning. Choose Ax in im (A), x in Rn , and write x = a1 x1 +· · ·+ak xk +ak+1 xk+1 +· · ·+an xn where
the ai are in R. Then Ax = ak+1 Axk+1 + · · · + an Axn because {x1 , . . . , xk } ⊆ null (A).
Independence. Let tk+1 Axk+1 + · · · + tn Axn = 0, ti in R. Then tk+1 xk+1 + · · · + tnxn is in null A, so
tk+1 xk+1 + · · · + tnxn = t1 x1 + · · · + tk xk for some t1 , . . . , tk in R. But then the independence of the xi
shows that ti = 0 for every i.
Example 5.4.3
If A = [ 1 −2 1 1 ]
       [ −1 2 0 1 ]
       [ 2 −4 1 0 ]
find bases of null (A) and im (A), and so find their dimensions.
Solution. The reduced row-echelon form of A is
[ 1 −2 0 −1 ]
[ 0  0 1  2 ]
[ 0  0 0  0 ]
so rank A = r = 2 and the gaussian algorithm gives the basic solutions
x₁ = (2, 1, 0, 0)ᵀ  and  x₂ = (1, 0, −2, 1)ᵀ
of the system Ax = 0. By Theorem 5.4.1, columns 1 and 3 of A are a basis {(1, −1, 2)ᵀ, (1, 0, 1)ᵀ} of im (A) = col (A), so dim [im (A)] = 2. However Theorem 5.4.2 asserts that {x₁, x₂} is a basis of null (A). (In fact it is easy to verify
directly that {x₁, x₂} is independent in this case.) In particular, dim [null (A)] = 2 = n − r, as
Theorem 5.4.2 asserts.
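The bases in Example 5.4.3 can be checked the same way. This sketch is ours (assuming Python with SymPy); nullspace returns the basic solutions of Ax = 0, and columnspace returns the pivot columns of A, a basis of im (A) by Theorem 5.4.1.

from sympy import Matrix

A = Matrix([[1, -2, 1, 1],
            [-1, 2, 0, 1],
            [2, -4, 1, 0]])

print(A.rank())          # 2, so dim[null(A)] = 4 - 2 = 2
print(A.nullspace())     # two basic solutions: a basis of null(A)
print(A.columnspace())   # columns 1 and 3 of A: a basis of im(A)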
Let A be an m×n matrix. Corollary 5.4.2 of Theorem 5.4.1 asserts that rank A ≤ m and rank A ≤ n, and
it is natural to ask when these extreme cases arise. If c1 , c2 , . . . , cn are the columns of A, Theorem 5.2.2
shows that {c1 , c2 , . . . , cn } spans Rm if and only if the system Ax = b is consistent for every b in Rm , and
that {c1 , c2 , . . . , cn } is independent if and only if Ax = 0, x in Rn , implies x = 0. The next two useful
theorems improve on both these results, and relate them to when the rank of A is n or m.
Theorem 5.4.3
The following are equivalent for an m × n matrix A:
1. rank A = n.
2. The rows of A span Rⁿ.
3. The columns of A are linearly independent in Rᵐ.
4. The n × n matrix AᵀA is invertible.
5. CA = Iₙ for some n × m matrix C.
6. If Ax = 0, x in Rⁿ, then x = 0.
Proof. (1) ⇒ (2). We have row A ⊆ Rⁿ, and dim (row A) = n by (1), so row A = Rⁿ by Theorem 5.2.8.
This is (2).
(2) ⇒ (3). By (2), row A = Rⁿ, so rank A = n. This means dim (col A) = n. Since the n columns of
A span col A, they are independent by Theorem 5.2.7.
(3) ⇒ (4). If (AᵀA)x = 0, x in Rⁿ, we show that x = 0 (Theorem 2.4.5). We have
‖Ax‖² = (Ax)ᵀAx = xᵀAᵀAx = xᵀ0 = 0
so Ax = 0. Since the columns of A are independent by (3), this implies x = 0. Hence AᵀA is invertible.
(4) ⇒ (5). Given (4), take C = (AᵀA)⁻¹Aᵀ; then CA = (AᵀA)⁻¹(AᵀA) = Iₙ.
(5) ⇒ (6). If Ax = 0, then x = Iₙx = (CA)x = C(Ax) = C0 = 0.
(6) ⇒ (1). Condition (6) means that the columns of A are independent, so rank A = dim (col A) = n.
Theorem 5.4.4
The following are equivalent for an m × n matrix A:
1. rank A = m.
2. The columns of A span Rᵐ.
3. The rows of A are linearly independent in Rⁿ.
4. The m × m matrix AAᵀ is invertible.
5. AC = Iₘ for some n × m matrix C.
6. The system Ax = b is consistent for every b in Rᵐ.
Example 5.4.4
Show that [ 3          x + y + z    ]
          [ x + y + z  x² + y² + z² ]
is invertible if x, y, and z are not all equal.
Solution. The given matrix has the form AᵀA where
A = [ 1 x ]
    [ 1 y ]
    [ 1 z ]
has independent columns because x, y, and z are not all equal (verify). Hence Theorem 5.4.3 applies.
Theorem 5.4.3 and Theorem 5.4.4 relate several important properties of an m × n matrix A to the
invertibility of the square, symmetric matrices AT A and AAT . In fact, even if the columns of A are not
independent or do not span Rm , the matrices AT A and AAT are both symmetric and, as such, have real
eigenvalues as we shall see. We return to this in Chapter 7.
Exercises for 5.4

Exercise 5.4.1 In each case find bases for the row and column spaces of A and determine the rank of A.

a. A = [ 2 −4 6  8 ]
       [ 2 −1 3  2 ]
       [ 4 −5 9 10 ]
       [ 0 −1 1  2 ]

b. A = [ 2 −1 1 ]
       [ −2 1 1 ]
       [ 4 −2 3 ]
       [ −6 3 0 ]

c. A = [ 1 −1  5  −2  2 ]
       [ 2 −2 −2   5  1 ]
       [ 0  0 −12  9 −3 ]
       [ −1 1  7  −7  1 ]

d. A = [ 1  2 −1  3 ]
       [ −3 −6 3 −2 ]

Exercise 5.4.2 In each case find a basis of the subspace U.

a. U = span {(1, −1, 0, 3), (2, 1, 5, 1), (4, −2, 5, 7)}
b. U = span {(1, −1, 2, 5, 1), (3, 1, 4, 2, 7), (1, 1, 0, 0, 0), (5, 1, 6, 7, 8)}
c. U = span {(1, 1, 0, 0)ᵀ, (0, 0, 1, 1)ᵀ, (1, 0, 1, 0)ᵀ, (0, 1, 0, 1)ᵀ}

Exercise 5.4.3

e. Can the null space of a 3 × 6 matrix have dimension 2? Explain.
f. Suppose that A is 5 × 4 and null (A) = Rx for some column x ≠ 0. Can dim (im A) = 2?

Exercise 5.4.4 If A is m × n show that
col (A) = {Ax | x in Rⁿ}

Exercise 5.4.5 If A is m × n and B is n × m, show that AB = 0 if and only if col B ⊆ null A.

Exercise 5.4.6 Show that the rank does not change when an elementary row or column operation is performed on a matrix.

Exercise 5.4.7 In each case find a basis of the null space of A. Then compute rank A and verify (1) of Theorem 5.4.2.

a. A = [ 3  1 1 ]
       [ 2  0 1 ]
       [ 4  2 1 ]
       [ 1 −1 1 ]

b. A = [ 3 5  5  2  0 ]
       [ 1 0  2  2  1 ]
       [ 1 1  1 −2 −2 ]
       [ −2 0 −4 −4 −2 ]

Exercise 5.4.10 Let A be an n × n matrix.

a. Show that A² = 0 if and only if col A ⊆ null A.
b. Conclude that if A² = 0, then rank A ≤ n/2.
c. Find a matrix A for which col A = null A.

Exercise 5.4.11 Let B be m × n and let AB be k × n. If rank B = rank (AB), show that null B = null (AB). [Hint: Theorem 5.4.1.]

Exercise 5.4.12 Give a careful argument why rank (Aᵀ) = rank A.

Exercise 5.4.13 Let A be an m × n matrix with columns c₁, c₂, ..., cₙ. If rank A = n, show that {Aᵀc₁, Aᵀc₂, ..., Aᵀcₙ} is a basis of Rⁿ.

Exercise 5.4.14 If A is m × n and b is m × 1, show that b lies in the column space of A if and only if rank [A b] = rank A.

Exercise 5.4.15

a. Show that Ax = b has a solution if and only if rank A = rank [A b]. [Hint: Exercises 5.4.12 and 5.4.14.]
b. If Ax = b has no solution, show that rank [A b] = 1 + rank A.

Exercise 5.4.16 Let X be a k × m matrix. If I is the m × m identity matrix, show that I + XᵀX is invertible. [Hint: I + XᵀX = AᵀA where A = [I; X] in block form, with I stacked above X.]

Exercise 5.4.17 If A is m × n of rank r, show that A can be factored as A = PQ where P is m × r with r independent columns, and Q is r × n with r independent rows. [Hint: Let UAV = [Iᵣ 0; 0 0] by Theorem 2.5.3, and write U⁻¹ = [U₁ U₂; U₃ U₄] and V⁻¹ = [V₁ V₂; V₃ V₄] in block form, where U₁ and V₁ are r × r.]

Exercise 5.4.18

a. Show that if A and B have independent columns, so does AB.
b. Show that if A and B have independent rows, so does AB.

Exercise 5.4.19 A matrix obtained from A by deleting rows and columns is called a submatrix of A. If A has an invertible k × k submatrix, show that rank A ≥ k. [Hint: Show that row and column operations carry A → [Iₖ P; 0 Q] in block form.] Remark: It can be shown that rank A is the largest integer r such that A has an invertible r × r submatrix.

5.5 Similarity and Diagonalization
In Section 3.3 we studied diagonalization of a square matrix A, and found important applications (for
example to linear dynamical systems). We can now utilize the concepts of subspace, basis, and dimension
to clarify the diagonalization process, reveal some new results, and prove some theorems which could not
be demonstrated in Section 3.3.
Before proceeding, we introduce a notion that simplifies the discussion of diagonalization, and is used
throughout the book.
Similar Matrices

If A and B are n × n matrices, we say that A and B are similar, and write A ∼ B, if B = P⁻¹AP for some invertible matrix P.
Note that A ∼ B if and only if B = QAQ⁻¹ where Q is invertible (write P⁻¹ = Q). The language of
similarity is used throughout linear algebra. For example, a matrix A is diagonalizable if and only if it is
similar to a diagonal matrix.
If A ∼ B, then necessarily B ∼ A. To see why, suppose that B = P−1 AP. Then A = PBP−1 = Q−1 BQ
where Q = P−1 is invertible. This proves the second of the following properties of similarity (the others
are left as an exercise):
1. A ∼ A for all square matrices A.
2. If A ∼ B, then B ∼ A. (5.2)
3. If A ∼ B and B ∼ C, then A ∼ C.
These properties are often expressed by saying that the similarity relation ∼ is an equivalence relation on
the set of n × n matrices. Here is an example showing how these properties are used.
Example 5.5.1
If A is similar to B and either A or B is diagonalizable, show that the other is also diagonalizable.
Solution. Suppose A is diagonalizable, say A ∼ D where D is diagonal. Since B ∼ A by (2) of (5.2), we have B ∼ A and A ∼ D, so B ∼ D by (3) of (5.2). Hence B is diagonalizable too. An analogous argument works if we assume instead that B is diagonalizable.

Similarity is compatible with the usual matrix operations: if A ∼ B, then Aᵀ ∼ Bᵀ, A⁻¹ ∼ B⁻¹ (when the inverses exist), and Aᵏ ∼ Bᵏ for each k ≥ 1. The proofs are routine matrix computations using Theorem 3.3.1. Thus, for example, if A is diagonalizable, so also are Aᵀ, A⁻¹ (if it exists), and Aᵏ (for each k ≥ 1). Indeed, if A ∼ D where D is a diagonal
matrix, we obtain Aᵀ ∼ Dᵀ, A⁻¹ ∼ D⁻¹, and Aᵏ ∼ Dᵏ, and each of the matrices Dᵀ, D⁻¹, and Dᵏ is
diagonal.
We pause to introduce a simple matrix function that will be referred to later. The trace tr A of an n × n matrix A is defined to be the sum of the entries on the main diagonal of A. In other words:
If A = [aᵢⱼ], then tr A = a₁₁ + a₂₂ + · · · + aₙₙ.
It is evident that tr (A + B) = tr A + tr B and that tr (cA) = c tr A holds for all n × n matrices A and B and
all scalars c. The following fact is more surprising.
Lemma 5.5.1
Let A and B be n × n matrices. Then tr (AB) = tr (BA).
Proof. Write A = [aᵢⱼ] and B = [bᵢⱼ]. For each i, the (i, i)-entry dᵢ of the matrix AB is given as follows:
dᵢ = aᵢ₁b₁ᵢ + aᵢ₂b₂ᵢ + · · · + aᵢₙbₙᵢ = Σⱼ aᵢⱼbⱼᵢ. Hence
tr (AB) = d₁ + d₂ + · · · + dₙ = Σᵢ (Σⱼ aᵢⱼbⱼᵢ)
Similarly we have tr (BA) = Σᵢ (Σⱼ bᵢⱼaⱼᵢ). Since these two double sums are the same, Lemma 5.5.1 is
proved.
As the name indicates, similar matrices share many properties, some of which are collected in the next
theorem for reference.
Theorem 5.5.1
If A and B are similar n × n matrices, then A and B have the same determinant, rank, trace,
characteristic polynomial, and eigenvalues.
Example 5.5.2
Sharing the five properties in Theorem 5.5.1 does not guarantee that two matrices are similar. The matrices
A = [ 1 1 ]        and        I = [ 1 0 ]
    [ 0 1 ]                       [ 0 1 ]
have the same determinant, rank, trace, characteristic polynomial, and eigenvalues, but they are not similar because P⁻¹IP = I for any invertible matrix P.
Diagonalization Revisited
Recall that a square matrix A is diagonalizable if there exists an invertible matrix P such that P⁻¹AP = D
is a diagonal matrix, that is if A is similar to a diagonal matrix D. Unfortunately, not all matrices are
diagonalizable; for example, the matrix
[ 1 1 ]
[ 0 1 ]
is not (see Example 3.3.10). Determining whether A is diagonalizable is
closely related to the eigenvalues and eigenvectors of A. Recall that a number λ is called an eigenvalue of
A if Ax = λ x for some nonzero column x in Rn , and any such nonzero vector x is called an eigenvector of
A corresponding to λ (or simply a λ -eigenvector of A). The eigenvalues and eigenvectors of A are closely
related to the characteristic polynomial cA (x) of A, defined by
cA (x) = det (xI − A)
If A is n ×n this is a polynomial of degree n, and its relationship to the eigenvalues is given in the following
theorem (a repeat of Theorem 3.3.2).
Theorem 5.5.2
Let A be an n × n matrix.
1. The eigenvalues λ of A are the roots of the characteristic polynomial cA(x) of A.
2. The λ-eigenvectors x of A are the nonzero solutions to the homogeneous system
(λI − A)x = 0
of linear equations with λI − A as coefficient matrix.
Example 5.5.3
Show that the eigenvalues of a triangular matrix are the main diagonal entries.
Solution. Assume that A is triangular. Then the matrix xI − A is also triangular and has diagonal
entries (x − a₁₁), (x − a₂₂), ..., (x − aₙₙ) where A = [aᵢⱼ]. Hence Theorem 3.1.4 gives
cA(x) = (x − a₁₁)(x − a₂₂) ··· (x − aₙₙ)
and the result follows because the eigenvalues are the roots of cA(x).
Theorem 3.3.4 asserts (in part) that an n × n matrix A is diagonalizable if and only if it has n eigenvectors x₁, ..., xₙ such that the matrix P = [x₁ ··· xₙ] with the xᵢ as columns is invertible. This is
equivalent to requiring that {x₁, ..., xₙ} is a basis of Rⁿ consisting of eigenvectors of A. Hence we can
restate Theorem 3.3.4 as follows:
Theorem 5.5.3
Let A be an n × n matrix.
1. A is diagonalizable if and only if Rⁿ has a basis {x₁, x₂, ..., xₙ} consisting of eigenvectors of A.
2. When this is the case, the matrix P = [x₁ x₂ ··· xₙ] is invertible and P⁻¹AP = diag (λ₁, λ₂, ..., λₙ), where, for each i, λᵢ is the eigenvalue of A corresponding to xᵢ.
The next result is a basic tool for determining when a matrix is diagonalizable. It reveals an important
connection between eigenvalues and linear independence: Eigenvectors corresponding to distinct eigen-
values are necessarily linearly independent.
Theorem 5.5.4
Let x1 , x2 , . . . , xk be eigenvectors corresponding to distinct eigenvalues λ1 , λ2 , . . . , λk of an n × n
matrix A. Then {x1 , x2 , . . . , xk } is a linearly independent set.
Proof. We use induction on k. If k = 1, then {x₁} is linearly independent because x₁ ≠ 0. In general,
suppose the theorem is true for some k ≥ 1. Given eigenvectors x₁, x₂, ..., xₖ₊₁, suppose a linear
combination vanishes:
t₁x₁ + t₂x₂ + · · · + tₖ₊₁xₖ₊₁ = 0        (5.3)
We must show that each tᵢ = 0. Left multiply (5.3) by A and use the fact that Axᵢ = λᵢxᵢ to get
t₁λ₁x₁ + t₂λ₂x₂ + · · · + tₖ₊₁λₖ₊₁xₖ₊₁ = 0        (5.4)
If we multiply (5.3) by λ₁ and subtract the result from (5.4), the first terms cancel and we obtain
t₂(λ₂ − λ₁)x₂ + t₃(λ₃ − λ₁)x₃ + · · · + tₖ₊₁(λₖ₊₁ − λ₁)xₖ₊₁ = 0
Since x₂, x₃, ..., xₖ₊₁ correspond to distinct eigenvalues λ₂, λ₃, ..., λₖ₊₁, the set {x₂, x₃, ..., xₖ₊₁} is
independent by the induction hypothesis. Hence,
t₂(λ₂ − λ₁) = 0,  t₃(λ₃ − λ₁) = 0,  ...,  tₖ₊₁(λₖ₊₁ − λ₁) = 0
and so t₂ = t₃ = · · · = tₖ₊₁ = 0 because the λᵢ are distinct. Hence (5.3) becomes t₁x₁ = 0, which implies
that t₁ = 0 because x₁ ≠ 0. This is what we wanted.
Theorem 5.5.4 will be applied several times; we begin by using it to give a useful condition for when
a matrix is diagonalizable.
Theorem 5.5.5
If A is an n × n matrix with n distinct eigenvalues, then A is diagonalizable.
Proof. Choose one eigenvector for each of the n distinct eigenvalues. Then these eigenvectors are inde-
pendent by Theorem 5.5.4, and so are a basis of Rn by Theorem 5.2.7. Now use Theorem 5.5.3.
Example 5.5.4
Show that A = [ 1  0 0 ]
              [ 1  2 3 ]
              [ −1 1 0 ]
is diagonalizable.
Solution. A routine computation shows that cA(x) = (x − 1)(x − 3)(x + 1), so A has distinct
eigenvalues 1, 3, and −1. Hence Theorem 5.5.5 applies.
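A quick numerical check of Example 5.5.4, using our own sketch (Python with NumPy, not part of the text): the columns of the matrix P returned by eig are eigenvectors, so P⁻¹AP should come out diagonal.

import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 3.0],
              [-1.0, 1.0, 0.0]])

evals, P = np.linalg.eig(A)      # eigenvalues 1, 3, -1 (in some order)
D = np.linalg.inv(P) @ A @ P     # P^{-1} A P
print(np.round(evals, 6))
print(np.round(D, 6))            # diagonal, with the eigenvalues on the diagonal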
However, a matrix can have multiple eigenvalues as we saw in Section 3.3. To deal with this situation,
we prove an important lemma which formalizes a technique that is basic to diagonalization, and which
will be used three times below.
Lemma 5.5.2
Let {x₁, x₂, ..., xₖ} be a linearly independent set of eigenvectors of an n × n matrix A, extend it to
a basis {x₁, x₂, ..., xₖ, ..., xₙ} of Rⁿ, and let
P = [x₁ x₂ ··· xₙ]
be the (invertible) n × n matrix with the xᵢ as its columns. If λ₁, λ₂, ..., λₖ are the (not necessarily
distinct) eigenvalues of A corresponding to x₁, x₂, ..., xₖ respectively, then P⁻¹AP has block form
P⁻¹AP = [ diag (λ₁, λ₂, ..., λₖ)  B ]
        [ 0                      A₁ ]
for some matrices B and A₁.
Proof. If {e₁, e₂, ..., eₙ} is the standard basis of Rⁿ, then
[e₁ e₂ ··· eₙ] = Iₙ = P⁻¹P = P⁻¹[x₁ x₂ ··· xₙ] = [P⁻¹x₁ P⁻¹x₂ ··· P⁻¹xₙ]
Comparing columns, we have P⁻¹xᵢ = eᵢ for each 1 ≤ i ≤ n. On the other hand, observe that
P⁻¹AP = P⁻¹A[x₁ x₂ ··· xₙ] = [(P⁻¹A)x₁ (P⁻¹A)x₂ ··· (P⁻¹A)xₙ]
Hence, if i ≤ k, column i of P⁻¹AP is
(P⁻¹A)xᵢ = P⁻¹(λᵢxᵢ) = λᵢ(P⁻¹xᵢ) = λᵢeᵢ
This describes the first k columns of P⁻¹AP, and Lemma 5.5.2 follows.
Note that Lemma 5.5.2 (with k = n) shows that an n × n matrix A is diagonalizable if Rn has a basis of
eigenvectors of A, as in (1) of Theorem 5.5.3.
If λ is an eigenvalue of an n × n matrix A, define the eigenspace of A corresponding to λ by
Eλ(A) = {x in Rⁿ | Ax = λx}
This is a subspace of Rⁿ and the eigenvectors corresponding to λ are just the nonzero vectors in Eλ(A). In
fact Eλ(A) is the null space of the matrix (λI − A):
Eλ(A) = {x | (λI − A)x = 0} = null (λI − A)
Hence, by Theorem 5.4.2, the basic solutions of the homogeneous system (λI − A)x = 0 given by the
gaussian algorithm form a basis for Eλ(A). In particular,
dim [Eλ(A)] = n − rank (λI − A)        (5.5)
Now recall (Definition 3.7) that the multiplicity11 of an eigenvalue λ of A is the number of times λ occurs
as a root of the characteristic polynomial cA (x) of A. In other words, the multiplicity of λ is the largest
integer m ≥ 1 such that
cA (x) = (x − λ )m g(x)
for some polynomial g(x). Because of (5.5), the assertion (without proof) in Theorem 3.3.5 can be stated
as follows: A square matrix is diagonalizable if and only if the multiplicity of each eigenvalue λ equals
dim [Eλ (A)]. We are going to prove this, and the proof requires the following result which is valid for any
square matrix, diagonalizable or not.
Lemma 5.5.3
Let λ be an eigenvalue of multiplicity m of a square matrix A. Then dim [Eλ (A)] ≤ m.
Proof. Write dim [Eλ (A)] = d. It suffices to show that cA (x) = (x − λ )d g(x) for some polynomial g(x),
because m is the highest power of (x − λ ) that divides cA (x). To this end, let {x1 , x2 , . . . , xd } be a basis
of Eλ (A). Then Lemma 5.5.2 shows that an invertible n × n matrix P exists such that
P⁻¹AP = [ λId  B ]
        [ 0   A₁ ]
in block form, where Id denotes the d × d identity matrix. Now write A′ = P−1 AP and observe that
cA′ (x) = cA (x) by Theorem 5.5.1. But Theorem 3.1.5 gives
cA(x) = cA′(x) = det (xIn − A′) = det [ (x − λ)Id  −B          ]
                                      [ 0          xIn−d − A₁ ]
      = det [(x − λ)Id] det [xIn−d − A₁]
      = (x − λ)ᵈ g(x)
where g(x) = det (xIn−d − A₁). This is what we wanted.
It is natural to ask when equality holds in Lemma 5.5.3 for each eigenvalue λ. It
turns out that this characterizes the diagonalizable n × n matrices A for which cA(x) factors completely
over R. By this we mean that cA(x) = (x − λ₁)(x − λ₂) ··· (x − λₙ), where the λᵢ are real numbers (not
necessarily distinct); in other words, every eigenvalue of A is real. This need not happen (consider
A = [ 0 −1 ]
    [ 1  0 ]
), and we investigate the general case below.
Theorem 5.5.6
The following are equivalent for a square matrix A for which cA (x) factors completely.
1. A is diagonalizable.
2. dim [Eλ (A)] equals the multiplicity of λ for every eigenvalue λ of the matrix A.
Example 5.5.5
If A = [ 5  8  16 ]        and B = [ 2  1  1 ]
       [ 4  1   8 ]                [ 2  1 −2 ]
       [ −4 −4 −11 ]               [ −1 0 −2 ]
show that A is diagonalizable but B is not.
Solution. A routine computation shows that cA(x) = (x + 3)²(x − 1), so the eigenvalues of A are
λ₁ = −3 (of multiplicity 2) and λ₂ = 1 (of multiplicity 1). The corresponding eigenspaces are
Eλ₁(A) = span {x₁, x₂} and Eλ₂(A) = span {x₃} where
x₁ = (−1, 1, 0)ᵀ,  x₂ = (−2, 0, 1)ᵀ,  x₃ = (−2, −1, 1)ᵀ
as the reader can verify. Since {x₁, x₂} is independent, we have dim (Eλ₁(A)) = 2 which is the
multiplicity of λ₁. Similarly, dim (Eλ₂(A)) = 1 equals the multiplicity of λ₂. Hence A is
diagonalizable by Theorem 5.5.6, and a diagonalizing matrix is P = [x₁ x₂ x₃].
Turning to B, cB(x) = (x + 1)²(x − 3) so the eigenvalues are λ₁ = −1 and λ₂ = 3. The
corresponding eigenspaces are Eλ₁(B) = span {y₁} and Eλ₂(B) = span {y₂} where
y₁ = (−1, 2, 1)ᵀ,  y₂ = (5, 6, −1)ᵀ
Here dim (Eλ₁(B)) = 1 is smaller than the multiplicity of λ₁, so the matrix B is not diagonalizable,
again by Theorem 5.5.6. The fact that dim (Eλ₁(B)) = 1 means that there is no possibility of
finding three linearly independent eigenvectors.
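The comparison of multiplicities and eigenspace dimensions in Example 5.5.5 can also be verified by machine. The sketch below is ours, assuming Python with SymPy; eigenvects reports, for each eigenvalue, its multiplicity and a basis of its eigenspace.

from sympy import Matrix

B = Matrix([[2, 1, 1],
            [2, 1, -2],
            [-1, 0, -2]])

for lam, mult, basis in B.eigenvects():
    # lam = eigenvalue, mult = multiplicity, len(basis) = dim of eigenspace
    print(lam, mult, len(basis))   # -1 has multiplicity 2 but a 1-dim eigenspace

print(B.is_diagonalizable())       # False, as Theorem 5.5.6 predicts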
Complex Eigenvalues
All the matrices we have considered have had real eigenvalues. But this need not be the case: The matrix
A = [ 0 −1 ]
    [ 1  0 ]
has characteristic polynomial cA(x) = x² + 1, which has no real roots. Nonetheless, this
matrix is diagonalizable; the only difference is that we must use a larger set of scalars, the complex
numbers. The basic properties of these numbers are outlined in Appendix A.
Indeed, nearly everything we have done for real matrices can be done for complex matrices. The
methods are the same; the only difference is that the arithmetic is carried out with complex numbers rather
than real ones. For example, the gaussian algorithm works in exactly the same way to solve systems of
linear equations with complex coefficients, matrix multiplication is defined the same way, and the matrix
inversion algorithm works in the same way.
But the complex numbers are better than the real numbers in one respect: While there are polynomials
like x2 + 1 with real coefficients that have no real root, this problem does not arise with the complex
numbers: Every nonconstant polynomial with complex coefficients has a complex root, and hence factors
completely as a product of linear factors. This fact is known as the fundamental theorem of algebra.12
Example 5.5.6
Diagonalize the matrix A = [ 0 −1 ]
                           [ 1  0 ]
Symmetric Matrices13
On the other hand, many of the applications of linear algebra involve a real matrix A and, while A will
have complex eigenvalues by the fundamental theorem of algebra, it is always of interest to know when
the eigenvalues are, in fact, real. While this can happen in a variety of ways, it turns out to hold whenever
A is symmetric. This important theorem will be used extensively later. Surprisingly, the theory of complex
eigenvalues can be used to prove this useful result about real eigenvalues.
Let z̄ denote the conjugate of a complex number z. If A is a complex matrix, the conjugate matrix Ā
is defined to be the matrix obtained from A by conjugating every entry. Thus, if A = [zᵢⱼ], then Ā = [z̄ᵢⱼ].
For example,
if A = [ −i + 2  5      ]        then Ā = [ i + 2  5      ]
       [ i       3 + 4i ]                 [ −i     3 − 4i ]
Recall that the conjugate of a sum is the sum of the conjugates, and the conjugate of a product is the product of the conjugates: for all complex numbers z and w, z + w has conjugate z̄ + w̄ and zw has conjugate z̄ w̄. It follows that if A and B
are two complex matrices and λ is a complex scalar, then A + B has conjugate Ā + B̄, AB has conjugate Ā B̄, and λA has conjugate λ̄ Ā. These facts are used in the proof of the following theorem.
Theorem 5.5.7
Let A be a symmetric real matrix. If λ is any complex eigenvalue of A, then λ is real.14
Proof. Observe that Ā = A because A is real. If λ is an eigenvalue of A, we show that λ is real by showing
that λ̄ = λ. Let x be a (possibly complex) eigenvector corresponding to λ, so that x ≠ 0 and Ax = λx.
Define c = xᵀx̄.
If we write x = (z₁, z₂, ..., zₙ)ᵀ where the zᵢ are complex numbers, we have
c = xᵀx̄ = z₁z̄₁ + z₂z̄₂ + · · · + zₙz̄ₙ = |z₁|² + |z₂|² + · · · + |zₙ|²
Thus c is a real number, and c > 0 because at least one of the zᵢ ≠ 0 (as x ≠ 0). We show that λ̄ = λ by
verifying that λc = λ̄c. We have
λc = λ(xᵀx̄) = (λx)ᵀx̄ = (Ax)ᵀx̄ = xᵀAᵀx̄
At this point we use the hypothesis that A is symmetric and real. This means Aᵀ = A = Ā, so we continue
the calculation:
13 This discussion uses complex conjugation and absolute value. These topics are discussed in Appendix A.
14 This theorem was first proved in 1829 by the great French mathematician Augustin Louis Cauchy (1789–1857).
λc = xᵀAᵀx̄ = xᵀ(Ā x̄)
Now Ā x̄ is the conjugate of Ax = λx, and the conjugate of λx is λ̄ x̄. Hence
λc = xᵀ(λ̄ x̄) = λ̄(xᵀx̄) = λ̄c
as required.
The technique in the proof of Theorem 5.5.7 will be used again when we return to complex linear algebra
in Section 8.7.
Example 5.5.7
Verify Theorem 5.5.7 for every real, symmetric 2 × 2 matrix A.
Solution. If A = [ a b ]
                 [ b c ]
we have cA(x) = x² − (a + c)x + (ac − b²), so the eigenvalues are given by
λ = (1/2)[(a + c) ± √((a + c)² − 4(ac − b²))]
But here
(a + c)² − 4(ac − b²) = (a − c)² + 4b² ≥ 0
for any choice of a, b, and c. Hence, the eigenvalues are real numbers.
Exercises for 5.5

Exercise 5.5.4 In each case, decide whether the matrix A is diagonalizable. If so, find P such that P⁻¹AP is diagonal.

a. [ 1 0 0 ]
   [ 1 2 1 ]
   [ 0 0 1 ]

b. [ 3  0 6 ]
   [ 0 −3 0 ]
   [ 5  0 2 ]

c. [ 3 1  6 ]
   [ 2 1  0 ]
   [ −1 0 −3 ]

d. [ 4 0 0 ]
   [ 0 2 2 ]
   [ 2 3 1 ]

Exercise 5.5.5 If A is invertible, show that AB is similar to BA for all B.

Exercise 5.5.6 Show that the only matrix similar to a scalar matrix A = rI, r in R, is A itself.

Exercise 5.5.7 Let λ be an eigenvalue of A with corresponding eigenvector x. If B = P⁻¹AP is similar to A, show that P⁻¹x is an eigenvector of B corresponding to λ.

Exercise 5.5.8 If A ∼ B and A has any of the following properties, show that B has the same property.

a. Idempotent, that is A² = A.
b. Nilpotent, that is Aᵏ = 0 for some k ≥ 1.
c. Invertible.

Exercise 5.5.10 Let A be a diagonalizable n × n matrix with eigenvalues λ₁, λ₂, ..., λₙ (including multiplicities). Show that:

a. det A = λ₁λ₂ ··· λₙ
b. tr A = λ₁ + λ₂ + · · · + λₙ

Exercise 5.5.11 Given a polynomial p(x) = r₀ + r₁x + · · · + rₙxⁿ and a square matrix A, the matrix p(A) = r₀I + r₁A + · · · + rₙAⁿ is called the evaluation of p(x) at A. Let B = P⁻¹AP. Show that p(B) = P⁻¹p(A)P for all polynomials p(x).

Exercise 5.5.12 Let P be an invertible n × n matrix. If A is any n × n matrix, write TP(A) = P⁻¹AP. Verify that:

a. TP(I) = I
b. TP(AB) = TP(A)TP(B)
c. TP(A + B) = TP(A) + TP(B)
d. TP(rA) = rTP(A)
e. TP(Aᵏ) = [TP(A)]ᵏ for k ≥ 1
f. If A is invertible, TP(A⁻¹) = [TP(A)]⁻¹.
g. If Q is invertible, TQ[TP(A)] = TPQ(A).

Exercise 5.5.13

a. Show that two diagonalizable matrices are similar if and only if they have the same eigenvalues with the same multiplicities.

Exercise 5.5.17 Let A = [ 0 a b ]        and B = [ c a b ]
                        [ a 0 c ]                [ a b c ]
                        [ b c 0 ]                [ b c a ]

Exercise 5.5.18 Assume the 2 × 2 matrix A is similar to an upper triangular matrix. If tr A = 0 = tr A², show that A² = 0.

Exercise 5.5.19 Suppose that the sequence x₀, x₁, x₂, ... satisfies
xₙ₊ₖ = r₀xₙ + r₁xₙ₊₁ + · · · + rₖ₋₁xₙ₊ₖ₋₁
for all n ≥ 0. Define
A = [ 0  1  0  ···  0    ]        Vₙ = (xₙ, xₙ₊₁, ..., xₙ₊ₖ₋₁)ᵀ
    [ 0  0  1  ···  0    ]
    [ ⋮  ⋮  ⋮       ⋮    ]
    [ 0  0  0  ···  1    ]
    [ r₀ r₁ r₂ ··· rₖ₋₁ ]

d. A is diagonalizable if and only if the eigenvalues of A are distinct. [Hint: See part (c) and Theorem 5.5.4.]

5.6 Best Approximation and Least Squares
Often an exact solution to a problem in applied mathematics is difficult to obtain. However, it is usually
just as useful to find arbitrarily close approximations to a solution. In particular, finding “linear approx-
imations” is a potent technique in applied mathematics. One basic case is the situation where a system
of linear equations has no solution, and it is desirable to find a “best approximation” to a solution to the
system. In this section best approximations are defined and a method for finding them is described. The
result is then applied to “least squares” approximation of data.
Suppose A is an m × n matrix and b is a column in Rm , and consider the system
Ax = b
of m linear equations in n variables. This need not have a solution. However, given any column z ∈ Rⁿ,
the distance ‖b − Az‖ is a measure of how far Az is from b. Hence it is natural to ask whether there is a
column z in Rⁿ that is as close as possible to a solution in the sense that
‖b − Az‖
is as small as possible; such a z is called a best approximation to a solution of the system. The vectors Az form the subspace
U = {Ax | x lies in Rⁿ}
of Rᵐ, so we are seeking the vector Az in U closest to b. For such a z the vector b − Az turns out to be orthogonal to every vector Ax in U; that is, (Ax) · (b − Az) = 0 for all x in Rⁿ. Writing the dot product as a matrix product gives xᵀ[Aᵀ(b − Az)] = 0 for all x in Rⁿ.
In other words, the vector Aᵀ(b − Az) in Rⁿ is orthogonal to every vector in Rⁿ and so must be zero (being
orthogonal to itself). Hence z satisfies
(AᵀA)z = Aᵀb
These conditions form a system of linear equations for z, called the normal equations for z.
Note that this system can have more than one solution (see Exercise 5.6.5). However, the n × n matrix AT A
is invertible if (and only if) the columns of A are linearly independent (Theorem 5.4.3); so, in this case,
z is uniquely determined and is given explicitly by z = (AT A)−1 AT b. However, the most efficient way to
find z is to apply gaussian elimination to the normal equations.
This discussion is summarized in the following theorem.
Theorem 5.6.1: Best Approximation Theorem
Let A be an m × n matrix, let b be any column in Rᵐ, and consider the system
Ax = b
of m equations in n variables.
1. Any solution z to the normal equations
(AᵀA)z = Aᵀb
is a best approximation to a solution to Ax = b.
2. If the columns of A are linearly independent, then AᵀA is invertible and z is given uniquely
by z = (AᵀA)⁻¹Aᵀb.
We note in passing that if A is n × n and invertible, then
z = (AᵀA)⁻¹Aᵀb = A⁻¹(Aᵀ)⁻¹Aᵀb = A⁻¹b
is the solution to the system of equations, and ‖b − Az‖ = 0. Hence if A has independent columns, then
(AᵀA)⁻¹Aᵀ is playing the role of the inverse of the nonsquare matrix A. The matrix Aᵀ(AAᵀ)⁻¹ plays a
similar role when the rows of A are linearly independent. These are both special cases of the generalized
inverse of a matrix A (see Exercise 5.6.14). However, we shall not pursue this topic here.
Example 5.6.1
The system of linear equations
3x − y = 4
x + 2y = 0
2x + y = 1
has no solution. Find the vector z = (x₀, y₀)ᵀ that best approximates a solution.
Solution. In this case,
A = [ 3 −1 ]
    [ 1  2 ]
    [ 2  1 ]
and b = (4, 0, 1)ᵀ, so AᵀA = [14 1; 1 6] and Aᵀb = (14, −3)ᵀ. Applying gaussian elimination to the
normal equations (AᵀA)z = Aᵀb gives
z = (1/83)(87, −56)ᵀ
Thus x₀ = 87/83 and y₀ = −56/83. With these values of x₀ and y₀, the left sides of the equations are,
approximately,
3x₀ − y₀ = 317/83 = 3.82
x₀ + 2y₀ = −25/83 = −0.30
2x₀ + y₀ = 118/83 = 1.42
This is as close as possible to a solution.
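Best approximations are routine to compute numerically. The following sketch is ours (assuming Python with NumPy); it solves the normal equations of Theorem 5.6.1 for the system in Example 5.6.1 and compares the answer with NumPy's built-in least squares solver.

import numpy as np

A = np.array([[3.0, -1.0],
              [1.0, 2.0],
              [2.0, 1.0]])
b = np.array([4.0, 0.0, 1.0])

# Normal equations (A^T A) z = A^T b (Theorem 5.6.1)
z = np.linalg.solve(A.T @ A, A.T @ b)
print(z)                       # [1.048..., -0.674...] = [87/83, -56/83]

z2, *_ = np.linalg.lstsq(A, b, rcond=None)   # same problem, solved directly
print(np.allclose(z, z2))      # True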
Example 5.6.2
The average number g of goals per game scored by a hockey player seems to be related linearly to
two factors: the number x1 of years of experience and the number x2 of goals in the preceding 10
games. The data in the table below were collected on four players. Find the linear function
g = z₀ + z₁x₁ + z₂x₂ that best fits these data.
g x1 x2
0.8 5 3
0.8 3 4
0.6 1 5
0.4 2 1
Solution. Write b = (0.8, 0.8, 0.6, 0.4)ᵀ, z = (z₀, z₁, z₂)ᵀ, and
A = [ 1 5 3 ]
    [ 1 3 4 ]
    [ 1 1 5 ]
    [ 1 2 1 ]
where the rows of A correspond to the four players. We must find the best approximation z to a
solution of Az = b. The columns of A are independent (verify), so Theorem 5.6.1 gives
z = (AᵀA)⁻¹Aᵀb, where
(AᵀA)⁻¹ = (1/42) [ 119 −17 −19 ]        Aᵀb = [ 2.6 ]
                 [ −17   5   1 ]              [ 7.8 ]
                 [ −19   1   5 ]              [ 9.0 ]
Hence z = (0.14, 0.09, 0.08)ᵀ to two decimal places.
Hence the best-fitting function is g = 0.14 + 0.09x1 + 0.08x2 . The amount of computation would
have been reduced if the normal equations had been constructed and then solved by gaussian
elimination.
In many scientific investigations, data are collected that relate two variables. For example, if x is the
number of dollars spent on advertising by a manufacturer and y is the value of sales in the region in
question, the manufacturer could generate data by spending x1 , x2 , . . . , xn dollars at different times and
measuring the corresponding sales values y1 , y2 , . . . , yn .
Suppose it is known that a linear relationship exists between the variables x and y—in other words, that y = a + bx for some constants a and
b. If the data are plotted, the points (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ) may
appear to lie on a straight line and estimating a and b requires finding
the “best-fitting” line through these data points. For example, if five data
points occur as shown in the diagram, line 1 is clearly a better fit than line
2. [Diagram: five data points (x₁, y₁), ..., (x₅, y₅) with two candidate lines, line 1 and line 2.]
In general, the problem is to find the values of the constants a and b
such that the line y = a + bx best approximates the data in question. Note
that an exact fit would be obtained if a and b were such that yᵢ = a + bxᵢ
were true for each data point (xᵢ, yᵢ). But this is too much to expect. Experimental errors in measurement are bound to occur, so the choice of a and b should be made in such a
way that the errors between the observed values yᵢ and the corresponding fitted values a + bxᵢ are in some
sense minimized. Least squares approximation is a way to do this.
The first thing we must do is explain exactly what we mean by the best fit of a line y = a + bx to an
observed set of data points (x1 , y1 ), (x2 , y2 ), . . . , (xn , yn ). For convenience, write the linear function
r0 + r1 x as
f (x) = r0 + r1 x
so that the fitted points (on the line) have coordinates (x1 , f (x1 )), . . . , (xn , f (xn )).
The second diagram is a sketch of what the line y = f(x) might look like. [Diagram: the observed points (xᵢ, yᵢ), the line y = f(x) through the fitted points (xᵢ, f(xᵢ)), and the errors dᵢ between them.] For each i the observed data point (xᵢ, yᵢ) and the fitted point
(xᵢ, f(xᵢ)) need not be the same, and the distance dᵢ between them measures how far the line misses the observed point. For this reason dᵢ is often
called the error at xᵢ, and a natural measure of how close the line y = f(x) is to the data points is the sum
d₁ + d₂ + · · · + dₙ of all these errors. However, it turns out to be better to use the sum of squares
S = d₁² + d₂² + · · · + dₙ²
as the measure of error, and the line y = f(x) is to be chosen so as to make this sum as small
as possible. This line is said to be the least squares approximating line for the data points
(x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ).
The square of the error dᵢ is given by dᵢ² = [yᵢ − f(xᵢ)]² for each i, so the quantity S to be minimized is
the sum:
S = [y₁ − f(x₁)]² + [y₂ − f(x₂)]² + · · · + [yₙ − f(xₙ)]²
Note that all the numbers xi and yi are given here; what is required is that the function f be chosen in such
a way as to minimize S. Because f (x) = r0 + r1 x, this amounts to choosing r0 and r1 to minimize S. This
problem can be solved using Theorem 5.6.1. The following notation is convenient.
x = (x₁, x₂, ..., xₙ)ᵀ   y = (y₁, y₂, ..., yₙ)ᵀ   and   f(x) = (f(x₁), f(x₂), ..., f(xₙ))ᵀ = (r₀ + r₁x₁, r₀ + r₁x₂, ..., r₀ + r₁xₙ)ᵀ
Then the problem takes the following form: Choose r₀ and r₁ such that
S = [y₁ − f(x₁)]² + [y₂ − f(x₂)]² + · · · + [yₙ − f(xₙ)]² = ‖y − f(x)‖²
is as small as possible. Now write
M = [ 1 x₁ ]        r = [ r₀ ]
    [ 1 x₂ ]            [ r₁ ]
    [ ⋮ ⋮  ]
    [ 1 xₙ ]
Then Mr = f(x), so we are looking for a column r = (r₀, r₁)ᵀ such that ‖y − Mr‖² is as small as possible.
In other words, we are looking for a best approximation z to the system Mr = y. Hence Theorem 5.6.1
applies directly, and we have
Theorem 5.6.2
Suppose that n data points (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ) are given, where at least two of
x₁, x₂, ..., xₙ are distinct. Put
y = (y₁, y₂, ..., yₙ)ᵀ        M = [ 1 x₁ ]
                                  [ 1 x₂ ]
                                  [ ⋮ ⋮  ]
                                  [ 1 xₙ ]
Then the least squares approximating line for these data points has equation
y = z₀ + z₁x
where z = (z₀, z₁)ᵀ is found by gaussian elimination from the normal equations
(MᵀM)z = Mᵀy
The condition that at least two of x₁, x₂, ..., xₙ are distinct ensures that MᵀM is an invertible
matrix, so z is unique:
z = (MᵀM)⁻¹Mᵀy
Example 5.6.3
Let data points (x1 , y1 ), (x2 , y2 ), . . . , (x5 , y5 ) be given as in the accompanying table. Find the
least squares approximating line for these data.
x y
1 1
3 2
4 3
6 4
7 5
Solution. Here MᵀM = [5 21; 21 111] and
Mᵀy = (y₁ + y₂ + · · · + y₅, x₁y₁ + x₂y₂ + · · · + x₅y₅)ᵀ = (15, 78)ᵀ
so the normal equations (MᵀM)z = Mᵀy for z = (z₀, z₁)ᵀ become
[ 5 21  ] [ z₀ ]   =   [ 15 ]
[ 21 111 ] [ z₁ ]      [ 78 ]
The solution (using gaussian elimination) is z = (0.24, 0.66)ᵀ to two decimal places, so the
least squares approximating line for these data is y = 0.24 + 0.66x. Note that MᵀM is indeed
invertible here (the determinant is 114), and the exact solution is
z = (MᵀM)⁻¹Mᵀy = (1/114)[111 −21; −21 5](15, 78)ᵀ = (1/114)(27, 75)ᵀ = (1/38)(9, 25)ᵀ
Suppose now that, rather than a straight line, we want to find a polynomial
y = f (x) = r0 + r1 x + r2 x2 + · · · + rm xm
of degree m that best approximates the data pairs (x1 , y1 ), (x2 , y2 ), . . . , (xn , yn ). As before, write
x = (x₁, x₂, ..., xₙ)ᵀ   y = (y₁, y₂, ..., yₙ)ᵀ   and   f(x) = (f(x₁), f(x₂), ..., f(xₙ))ᵀ
For each xi we have two values of the variable y, the observed value yi , and the computed value f (xi ). The
problem is to choose f (x)—that is, choose r0 , r1 , . . . , rm —such that the f (xi ) are as close as possible to
the yi . Again we define “as close as possible” by the least squares condition: We choose the ri such that
ky − f (x)k2 = [y1 − f (x1 )]2 + [y2 − f (x2 )]2 + · · · + [yn − f (xn )]2
is as small as possible.
If we write
M = [ 1 x₁ x₁² ··· x₁ᵐ ]        r = [ r₀ ]
    [ 1 x₂ x₂² ··· x₂ᵐ ]            [ r₁ ]
    [ ⋮ ⋮  ⋮       ⋮   ]            [ ⋮ ]
    [ 1 xₙ xₙ² ··· xₙᵐ ]            [ rₘ ]
we see that f (x) = Mr. Hence we want to find r such that ky − Mrk2 is as small as possible; that is, we
want a best approximation z to the system Mr = y. Theorem 5.6.1 gives the first part of Theorem 5.6.3.
Theorem 5.6.3
Let n data pairs (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ) be given, and write
y = (y₁, y₂, ..., yₙ)ᵀ        z = (z₀, z₁, ..., zₘ)ᵀ
M = [ 1 x₁ x₁² ··· x₁ᵐ ]
    [ 1 x₂ x₂² ··· x₂ᵐ ]
    [ ⋮ ⋮  ⋮       ⋮   ]
    [ 1 xₙ xₙ² ··· xₙᵐ ]
1. If z is any solution to the normal equations
(MᵀM)z = Mᵀy
then y = z₀ + z₁x + z₂x² + · · · + zₘxᵐ is a least squares approximating polynomial of degree m for the given data pairs.
2. If at least m + 1 of the numbers x₁, x₂, ..., xₙ are distinct (so n ≥ m + 1), then MᵀM is invertible and z is uniquely determined by
z = (MᵀM)⁻¹Mᵀy
Proof. It remains to prove (2), and for that we show that the columns of M are linearly independent
(Theorem 5.4.3). Suppose a linear combination of the columns vanishes:
r₀(1, 1, ..., 1)ᵀ + r₁(x₁, x₂, ..., xₙ)ᵀ + · · · + rₘ(x₁ᵐ, x₂ᵐ, ..., xₙᵐ)ᵀ = (0, 0, ..., 0)ᵀ
If we write q(x) = r₀ + r₁x + · · · + rₘxᵐ, equating entries shows that q(xᵢ) = 0 for each i = 1, 2, ..., n.
Hence q(x) is a polynomial of degree at most m with at least m + 1 distinct roots, so q(x) must be the zero polynomial (see Appendix D or Theorem 6.5.4). Thus r₀ = r₁ = · · · = rₘ = 0 as required.
Example 5.6.4
Find the least squares approximating quadratic y = z₀ + z₁x + z₂x² for the following data points.
x  −3  −1  0  1  3
y   3   1  1  2  4
Solution. This is an instance of Theorem 5.6.3 with m = 2. Here y = (3, 1, 1, 2, 4)ᵀ and
M = [ 1 −3 9 ]
    [ 1 −1 1 ]
    [ 1  0 0 ]
    [ 1  1 1 ]
    [ 1  3 9 ]
Hence,
MᵀM = [ 5  0  20  ]        Mᵀy = [ 11 ]
      [ 0  20  0  ]              [ 4  ]
      [ 20 0  164 ]              [ 66 ]
The normal equations for z are
[ 5  0  20  ]
[ 0  20  0  ] z = (11, 4, 66)ᵀ        whence z = (1.15, 0.20, 0.26)ᵀ
[ 20 0  164 ]
This means that the least squares approximating quadratic for these data is
y = 1.15 + 0.20x + 0.26x².
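The quadratic in Example 5.6.4 can be recomputed in a few lines. This sketch is ours (assuming Python with NumPy); np.vander builds the matrix M of Theorem 5.6.3 with columns 1, x, x².

import numpy as np

xs = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
ys = np.array([3.0, 1.0, 1.0, 2.0, 4.0])

M = np.vander(xs, 3, increasing=True)    # rows [1, x_i, x_i^2]
z = np.linalg.solve(M.T @ M, M.T @ ys)   # normal equations
print(np.round(z, 2))                    # [1.15 0.2 0.26]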
Other Functions
There is an extension of Theorem 5.6.3 that should be mentioned. Given data pairs (x1 , y1 ), (x2 , y2 ),
. . . , (xn , yn ), that theorem shows how to find a polynomial
f (x) = r0 + r1 x + · · · + rm xm
such that ky − f (x)k2 is as small as possible, where x and f (x) are as before. Choosing the appropriate
polynomial f (x) amounts to choosing the coefficients r0 , r1 , . . . , rm , and Theorem 5.6.3 gives a formula
for the optimal choices. Here f (x) is a linear combination of the functions 1, x, x2 , . . . , xm where the ri
are the coefficients, and this suggests applying the method to other functions. If f0 (x), f1 (x), . . . , fm (x)
are given functions, write
f (x) = r0 f0 (x) + r1 f1 (x) + · · · + rm fm (x)
where the ri are real numbers. Then the more general question is whether r0 , r1 , . . . , rm can be found such
that ‖y − f(x)‖² is as small as possible, where
f(x) = (f(x₁), f(x₂), ..., f(xₙ))ᵀ
Such a function f (x) is called a least squares best approximation for these data pairs of the form
r0 f0 (x) + r1 f1 (x) + · · · + rm fm (x), ri in R. The proof of Theorem 5.6.3 goes through to prove
Theorem 5.6.4
Let n data pairs (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ) be given, and suppose that m + 1 functions
f₀(x), f₁(x), ..., fₘ(x) are specified. Write
y = (y₁, y₂, ..., yₙ)ᵀ        z = (z₀, z₁, ..., zₘ)ᵀ
M = [ f₀(x₁) f₁(x₁) ··· fₘ(x₁) ]
    [ f₀(x₂) f₁(x₂) ··· fₘ(x₂) ]
    [ ⋮      ⋮           ⋮     ]
    [ f₀(xₙ) f₁(xₙ) ··· fₘ(xₙ) ]
1. If z is any solution to the normal equations
(MᵀM)z = Mᵀy
then taking rᵢ = zᵢ for each i gives a least squares best approximation of the form r₀f₀(x) + r₁f₁(x) + · · · + rₘfₘ(x).
2. If MᵀM is invertible (that is, if rank (M) = m + 1), then z is uniquely determined; in fact,
z = (MᵀM)⁻¹(Mᵀy).
Clearly Theorem 5.6.4 contains Theorem 5.6.3 as a special case, but there is no simple test in gen-
eral for whether M T M is invertible. Conditions for this to hold depend on the choice of the functions
f0 (x), f1 (x), . . . , fm (x).
Example 5.6.5
Given the data pairs (−1, 0), (0, 1), and (1, 4), find the least squares approximating function of
the form r₀x + r₁2ˣ.
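Theorem 5.6.4 reduces Example 5.6.5 to the same normal-equations computation, with the columns of M now built from the given functions f₀(x) = x and f₁(x) = 2ˣ. The sketch below is ours, assuming Python with NumPy.

import numpy as np

xs = np.array([-1.0, 0.0, 1.0])
ys = np.array([0.0, 1.0, 4.0])

# Basis functions f0(x) = x and f1(x) = 2^x from Example 5.6.5
fs = [lambda x: x, lambda x: 2.0 ** x]

M = np.column_stack([f(xs) for f in fs])   # columns f0(x_i), f1(x_i)
z = np.linalg.solve(M.T @ M, M.T @ ys)     # normal equations (Theorem 5.6.4)
print(z)                                   # [10/11, 16/11] = [0.909..., 1.454...]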
Exercises for 5.6

Exercise 5.6.1 Find the best approximation to a solution of each of the following systems of equations.

a. x + y − z = 5
   2x − y + 6z = 1
   3x + 2y − z = 6
   −x + 4y + z = 0

b. 3x + y + z = 6
   2x + 3y − z = 1
   2x − y + z = 0
   3x − 3y + 3z = 8

Exercise 5.6.2 Find the least squares approximating line y = z₀ + z₁x for each of the following sets of data points.

a. (1, 1), (3, 2), (4, 3), (6, 4)
b. (2, 4), (4, 3), (7, 2), (8, 1)
c. (−1, −1), (0, 1), (1, 2), (2, 4), (3, 6)
d. (−2, 3), (−1, 1), (0, 0), (1, −2), (2, −4)

Exercise 5.6.3 Find the least squares approximating quadratic y = z₀ + z₁x + z₂x² for each of the following sets of data points.

a. (0, 1), (2, 2), (3, 3), (4, 5)
b. (−2, 1), (0, 0), (3, 2), (4, 3)

Exercise 5.6.14 If A is an m × n matrix, it can be proved that there exists a unique n × m matrix A# satisfying the following four conditions: AA#A = A; A#AA# = A#; AA# and A#A are symmetric. The matrix A# is called the generalized inverse of A, or the Moore-Penrose inverse.

a. If A is square and invertible, show that A# = A⁻¹.
b. If rank A = m, show that A# = Aᵀ(AAᵀ)⁻¹.
c. If rank A = n, show that A# = (AᵀA)⁻¹Aᵀ.

5.7 An Application to Correlation and Variance
Suppose the heights h1 , h2 , . . . , hn of n men are measured. Such a data set is called a sample of the heights
of all the men in the population under study, and various questions are often asked about such a sample:
What is the average height in the sample? How much variation is there in the sample heights, and how can
it be measured? What can be inferred from the sample about the heights of all men in the population? How
do these heights compare to heights of men in neighbouring countries? Does the prevalence of smoking
affect the height of a man?
The analysis of samples, and of inferences that can be drawn from them, is a subject called mathemat-
ical statistics, and an extensive body of information has been developed to answer many such questions.
In this section we will describe a few ways that linear algebra can be used.
It is convenient to represent a sample {x₁, x₂, ..., xₙ} as a sample vector¹⁵ x = [x₁ x₂ ··· xₙ]
in Rⁿ. This being done, the dot product in Rⁿ provides a convenient tool to study the sample and describe
some of the statistical concepts related to it. The most widely known statistic for describing a data set is
the sample mean x̄ defined by¹⁶
x̄ = (1/n)(x₁ + x₂ + · · · + xₙ) = (1/n) Σᵢ₌₁ⁿ xᵢ
The mean x̄ is “typical” of the sample values xᵢ, but may not itself be one of them. The number xᵢ − x̄ is
called the deviation of xᵢ from the mean x̄. The deviation is positive if xᵢ > x̄ and it is negative if xᵢ < x̄.
Moreover, the sum of these deviations is zero:
Σᵢ₌₁ⁿ (xᵢ − x̄) = (Σᵢ₌₁ⁿ xᵢ) − nx̄ = nx̄ − nx̄ = 0        (5.6)
If each deviation xᵢ − x̄ is taken as a new data value, the resulting sample
xc = [x₁ − x̄  x₂ − x̄  ···  xₙ − x̄]
is called the centred sample; by (5.6) its mean is zero. In the diagram, a sample x is plotted on a number line,
and the centred sample xc = [−3 −2 −1 2 4] is also plotted. Thus, the effect of centring is to shift
the data by an amount x̄ (to the left if x̄ is positive) so that the mean moves to 0.
Another question that arises about samples is how much variability there is in the sample
x = [x₁ x₂ ··· xₙ]
that is, how widely are the data “spread out” around the sample mean x̄. A natural measure of variability
would be the sum of the deviations of the xᵢ about the mean, but this sum is zero by (5.6); these deviations
cancel out. To avoid this cancellation, statisticians use the squares (xᵢ − x̄)² of the deviations as a measure
of variability. More precisely, they compute a statistic called the sample variance s²ₓ defined¹⁷ as follows:
s²ₓ = (1/(n−1))[(x₁ − x̄)² + (x₂ − x̄)² + · · · + (xₙ − x̄)²] = (1/(n−1)) Σᵢ₌₁ⁿ (xᵢ − x̄)²
The sample variance will be large if there are many xᵢ at a large distance from the mean x̄, and it will
be small if all the xᵢ are tightly clustered about the mean. The variance is clearly nonnegative (hence the
notation s²ₓ), and the square root sₓ of the variance is called the sample standard deviation.
The sample mean and variance can be conveniently described using the dot product. Let
1 = [1 1 ··· 1]
denote the row with every entry equal to 1. If x = [x₁ x₂ ··· xₙ], then x · 1 = x₁ + x₂ + · · · + xₙ, so
the sample mean is given by the formula
x̄ = (1/n)(x · 1)
Moreover, remembering that x̄ is a scalar, we have x̄1 = [x̄ x̄ ··· x̄], so the centred sample vector xc
is given by
xc = x − x̄1 = [x₁ − x̄  x₂ − x̄  ···  xₙ − x̄]
Thus we obtain a formula for the sample variance:
s²ₓ = (1/(n−1)) ‖xc‖² = (1/(n−1)) ‖x − x̄1‖²
Linear algebra is also useful for comparing two different samples. To illustrate how, consider two exam-
ples.
The following table represents the number of sick days at work per
year and the yearly number of visits to a physician for 10 individuals.
Individual     1  2  3  4  5  6   7  8  9  10
Doctor visits  2  6  8  1  5  10  3  9  7  4
Sick days      2  4  8  3  5  9   4  7  7  2
The data are plotted in the scatter diagram [scatter diagram: sick days versus doctor visits], where it is evident that,
roughly speaking, the more visits to the doctor the more sick days. This is
an example of a positive correlation between sick days and doctor visits.
¹⁷ Since there are n sample values, it seems more natural to divide by n here, rather than by n − 1. The reason for using n − 1
is that then the sample variance s²ₓ provides a better estimate of the variance of the entire population from which the sample
was drawn.
Now consider the following table representing the daily doses of vitamin C and the number of sick days.
Individual  1  2  3  4  5  6  7  8  9  10
Vitamin C   1  5  7  0  4  9  2  8  6  3
Sick days   5  2  2  6  2  1  4  3  2  5
The scatter diagram is plotted as shown [scatter diagram: sick days versus vitamin C doses] and it appears that the more vitamin C taken, the fewer sick days. In this case there is a negative correlation between daily vitamin C and sick days.
In both these situations, we have paired samples, that is observations of two variables are made for ten
individuals: doctor visits and sick days in the first case; daily vitamin C and sick days in the second case.
The scatter diagrams point to a relationship between these variables, and there is a way to use the sample
to compute a number, called the correlation coefficient, that measures the degree to which the variables
are associated.
To motivate the definition of the correlation coefficient, suppose two paired samples
x = [x₁ x₂ ··· xₙ] and y = [y₁ y₂ ··· yₙ] are given and consider the centred samples
xc = [x₁ − x̄  x₂ − x̄  ···  xₙ − x̄]  and  yc = [y₁ − ȳ  y₂ − ȳ  ···  yₙ − ȳ]
If xₖ is large among the xᵢ’s, then the deviation xₖ − x̄ will be positive; and xₖ − x̄ will be negative if xₖ
is small among the xᵢ’s. The situation is similar for y, and the following table displays the sign of the
quantity (xᵢ − x̄)(yᵢ − ȳ) in all four cases:
            xᵢ large   xᵢ small
yᵢ large    positive   negative
yᵢ small    negative   positive
Intuitively, if x and y are positively correlated, then two things happen:
1. Large values of the xᵢ tend to be associated with large values of the yᵢ, and
2. Small values of the xᵢ tend to be associated with small values of the yᵢ.
It follows from the table that, if x and y are positively correlated, then the dot product
xc · yc = Σᵢ₌₁ⁿ (xᵢ − x̄)(yᵢ − ȳ)
is positive. Similarly xc · yc is negative if x and y are negatively correlated. With this in mind, the sample
correlation coefficient¹⁸ r is defined by
r = r(x, y) = (xc · yc)/(‖xc‖ ‖yc‖)
18 The idea of using a single number to measure the degree of relationship between different variables was pioneered by
Francis Galton (1822–1911). He was studying the degree to which characteristics of an offspring relate to those of its parents.
The idea was refined by Karl Pearson (1857–1936) and r is often referred to as the Pearson correlation coefficient.
Bearing the situation in R3 in mind, r is the cosine of the “angle” between the vectors xc and yc , and so
we would expect it to lie between −1 and 1. Moreover, we would expect r to be near 1 (or −1) if these
vectors were pointing in the same (opposite) direction, that is the “angle” is near zero (or π ).
This is confirmed by Theorem 5.7.1 below, and it is also borne out in the examples above. If we
compute the correlation between sick days and visits to the physician (in the first scatter diagram above)
the result is r = 0.90 as expected. On the other hand, the correlation between daily vitamin C doses and
sick days (second scatter diagram) is r = −0.84.
However, a word of caution is in order here. We cannot conclude from the second example that taking
more vitamin C will reduce the number of sick days at work. The (negative) correlation may arise because
of some third factor that is related to both variables. For example, it may be that less healthy people
are inclined to take more vitamin C. Correlation does not imply causation. Similarly, the correlation
between sick days and visits to the doctor does not mean that having many sick days causes more visits to
the doctor. A correlation between two variables may point to the existence of other underlying factors, but
it does not necessarily mean that there is a causality relationship between the variables.
Our discussion of the dot product in Rn provides the basic properties of the correlation coefficient:
Theorem 5.7.1
Let x = [x1 x2 · · · xn] and y = [y1 y2 · · · yn] be (nonzero) paired samples, and let
r = r(x, y) denote the correlation coefficient. Then:
1. −1 ≤ r ≤ 1.
2. r = 1 if and only if there exist a and b > 0 such that yi = a + bxi for each i.
3. r = −1 if and only if there exist a and b < 0 such that yi = a + bxi for each i.
Proof. The Cauchy inequality (Theorem 5.3.2) proves (1), and also shows that r = ±1 if and only if one of xc and yc is a scalar multiple of the other. This in turn holds if and only if yc = bxc for some b ≠ 0, and it is easy to verify that r = 1 when b > 0 and r = −1 when b < 0.
Finally, yc = bxc means yi − ȳ = b(xi − x̄) for each i; that is, yi = a + bxi where a = ȳ − bx̄. Conversely, if yi = a + bxi, then ȳ = a + bx̄ (verify), so yi − ȳ = (a + bxi) − (a + bx̄) = b(xi − x̄) for each i. In other words, yc = bxc. This completes the proof.
Properties (2) and (3) in Theorem 5.7.1 show that r(x, y) = 1 means that there is a linear relation
with positive slope between the paired data (so large x values are paired with large y values). Similarly,
r(x, y) = −1 means that there is a linear relation with negative slope between the paired data (so large x values are paired with small y values). This is borne out in the two scatter diagrams above.
We conclude by using the dot product to derive some useful formulas for computing variances and correlation coefficients. Given samples x = [x1 x2 · · · xn] and y = [y1 y2 · · · yn], the key observation is the following formula:
xc · yc = x · y − nx̄ȳ     (5.7)
Indeed, remembering that x̄ and ȳ are scalars:
xc · yc = (x − x̄1) · (y − ȳ1)    where 1 = [1 1 · · · 1] denotes the row of n ones
        = x · y − x · (ȳ1) − (x̄1) · y + (x̄1) · (ȳ1)
        = x · y − ȳ(x · 1) − x̄(1 · y) + x̄ȳ(1 · 1)
        = x · y − ȳ(nx̄) − x̄(nȳ) + x̄ȳ(n)
        = x · y − nx̄ȳ
Taking y = x in (5.7) gives a formula for the variance sx² = (1/(n−1))‖xc‖² of x.
Variance Formula
If x is a sample vector, then sx² = (1/(n−1)) (‖x‖² − nx̄²).
We also get a convenient formula for the correlation coefficient, r = r(x, y) = (xc · yc)/(‖xc‖ ‖yc‖). Moreover, (5.7) and the fact that sx² = (1/(n−1))‖xc‖² give:
Correlation Formula
If x and y are sample vectors, then
r = r(x, y) = (x · y − nx̄ȳ) / ((n − 1)sx sy)
Finally, we give a method that simplifies the computations of variances and correlations.
Data Scaling
Let x = [x1 x2 · · · xn] and y = [y1 y2 · · · yn] be sample vectors. Given constants a, b, c, and d, consider new samples z = [z1 z2 · · · zn] and w = [w1 w2 · · · wn] where zi = a + bxi for each i and wi = c + dyi for each i. Then:
a. z̄ = a + bx̄ and w̄ = c + dȳ
b. sz² = b²sx² and sw² = d²sy²
c. r(z, w) = r(x, y) if b and d have the same sign
The verification is left as an exercise. For example, if x = [101 98 103 99 100 97], subtracting 100 yields z = [1 −2 3 −1 0 −3]. A routine calculation shows that z̄ = −1/3 and sz² = 14/3, so x̄ = 100 − 1/3 = 99.67 and sx² = sz² = 14/3 = 4.67.
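These formulas are easy to verify numerically. The following Python sketch (ours, not part of the text; it uses only the standard library) computes the mean, variance, and correlation for the vitamin C data above, checking the shortcut formula (5.7) along the way.

    from math import sqrt

    x = [1, 5, 7, 0, 4, 9, 2, 8, 6, 3]   # daily vitamin C doses
    y = [5, 2, 2, 6, 2, 1, 4, 3, 2, 5]   # sick days
    n = len(x)

    xbar, ybar = sum(x) / n, sum(y) / n
    xc = [xi - xbar for xi in x]          # centred samples
    yc = [yi - ybar for yi in y]

    sx2 = sum(v * v for v in xc) / (n - 1)        # variance: ||xc||^2 / (n - 1)
    dot = sum(a * b for a, b in zip(xc, yc))      # xc . yc
    r = dot / (sqrt(sum(a * a for a in xc)) * sqrt(sum(b * b for b in yc)))

    # shortcut (5.7): xc . yc = x . y - n * xbar * ybar
    assert abs(dot - (sum(a * b for a, b in zip(x, y)) - n * xbar * ybar)) < 1e-9

    print(xbar, sx2, r)   # r is about -0.85: a strong negative correlation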
Exercise 5.7.1 The following table gives IQ scores for 10 fathers and their eldest sons. Calculate the means, the
variances, and the correlation coefficient r. (The data scaling formula is useful.)
1 2 3 4 5 6 7 8 9 10
Father’s IQ 140 131 120 115 110 106 100 95 91 86
Son’s IQ 130 138 110 99 109 120 105 99 100 94
Exercise 5.7.2 The following table gives the number of years of education and the annual income (in thousands)
of 10 individuals. Find the means, the variances, and the correlation coefficient. (Again the data scaling formula is
useful.)
Individual              1  2  3  4  5  6  7  8  9  10
Years of education     12 16 13 18 19 12 18 19 12 14
Yearly income (1000's) 31 48 35 28 55 40 39 60 32 35
Exercise 5.7.3 If x is a sample vector, and xc is the centred sample, show that the mean of xc is x̄c = 0 and the standard deviation of xc is sx.
Exercise 5.7.4 Prove the data scaling formulas given above: (a), (b), and (c).
Exercise 5.1 In each case either show that the statement is true or give an example showing that it is false. Throughout, x, y, z, x1, x2, . . . , xn denote vectors in Rn.

a. If U is a subspace of Rn and x + y is in U, then x and y are both in U.
b. If U is a subspace of Rn and rx is in U, then x is in U.
c. If U is a nonempty set and sx + ty is in U for any s and t whenever x and y are in U, then U is a subspace.
d. If U is a subspace of Rn and x is in U, then −x is in U.
e. If {x, y} is independent, then {x, y, x + y} is independent.
f. If {x, y, z} is independent, then {x, y} is independent.
g. If {x, y} is not independent, then {x, y, z} is not independent.
h. If all of x1, x2, . . . , xn are nonzero, then {x1, x2, . . . , xn} is independent.
i. If one of x1, x2, . . . , xn is zero, then {x1, x2, . . . , xn} is not independent.
j. If ax + by + cz = 0 where a, b, and c are in R, then {x, y, z} is independent.
k. If {x, y, z} is independent, then ax + by + cz = 0 for some a, b, and c in R.
l. If {x1, x2, . . . , xn} is not independent, then t1x1 + t2x2 + · · · + tnxn = 0 for ti in R not all zero.
m. If {x1, x2, . . . , xn} is independent, then t1x1 + t2x2 + · · · + tnxn = 0 for some ti in R.
n. Every set of four non-zero vectors in R4 is a basis.
o. No basis of R3 can contain a vector with a component 0.
r. Every nonempty subset of a basis of R3 is again a basis of R3.
6 Vector Spaces

Many mathematical entities have the property that they can be added and multiplied by a number. Numbers
themselves have this property, as do m × n matrices: The sum of two such matrices is again m × n as is any
scalar multiple of such a matrix. Polynomials are another familiar example, as are the geometric vectors
in Chapter 4. It turns out that there are many other types of mathematical objects that can be added and
multiplied by a scalar, and the general study of such systems is introduced in this chapter. Remarkably,
much of what we could say in Chapter 5 about the dimension of subspaces in Rn can be formulated in this
generality.
6.1 Examples and Basic Properties
A vector space consists of a nonempty set V of objects, called vectors, that can be added and multiplied by numbers, called scalars,1 in such a way that the following ten axioms are satisfied for all u, v, and w in V and all scalars a and b:

A1. The sum v + w is in V.
A2. v + w = w + v.
A3. u + (v + w) = (u + v) + w.
A4. An element 0 exists in V such that v + 0 = v for every v in V.
A5. For each v in V, an element −v exists in V such that v + (−v) = 0.
S1. The product av is in V.
S2. a(v + w) = av + aw.
S3. (a + b)v = av + bv.
S4. a(bv) = (ab)v.
S5. 1v = v.

The content of axioms A1 and S1 is described by saying that V is closed under vector addition and scalar multiplication. The element 0 in axiom A4 is called the zero vector, and the vector −v in axiom A5 is called the negative of v.
The rules of matrix arithmetic, when applied to Rn , give
Example 6.1.1
Rn is a vector space using matrix addition and scalar multiplication.2
It is important to realize that, in a general vector space, the vectors need not be n-tuples as in Rn . They
can be any kind of objects at all as long as the addition and scalar multiplication are defined and the axioms
are satisfied. The following examples illustrate the diversity of the concept.
The space Rn consists of special types of matrices. More generally, let Mmn denote the set of all m × n
matrices with real entries. Then Theorem 2.1.1 gives:
1 The scalars will usually be real numbers, but they could be complex numbers, or elements of an algebraic system called a
field. Another example is the field Q of rational numbers. We will look briefly at finite fields in Section 8.8.
2 We will usually write the vectors in Rn as n-tuples. However, if it is convenient, we will sometimes denote them as rows
or columns.
Example 6.1.2
The set Mmn of all m × n matrices is a vector space using matrix addition and scalar multiplication.
The zero element in this vector space is the zero matrix of size m × n, and the vector space negative
of a matrix (required by axiom A5) is the usual matrix negative discussed in Section 2.1. Note that
Mmn is just Rmn in different notation.
In Chapter 5 we identified many important subspaces of Rn such as im A and null A for a matrix A. These
are all vector spaces.
Example 6.1.3
Show that every subspace of Rn is a vector space in its own right using the addition and scalar
multiplication of Rn .
Solution. Axioms A1 and S1 are two of the defining conditions for a subspace U of Rn (see
Section 5.1). The other eight axioms for a vector space are inherited from Rn . For example, if x
and y are in U and a is a scalar, then a(x + y) = ax + ay because x and y are in Rn . This shows that
axiom S2 holds for U ; similarly, the other axioms also hold for U .
Example 6.1.4
Let V denote the set of all ordered pairs (x, y) and define addition in V as in R2 . However, define a
new scalar multiplication in V by
a(x, y) = (ay, ax)
Determine if V is a vector space with these operations.
Solution. Axioms A1 to A5 are valid for V because they hold for matrices. Also a(x, y) = (ay, ax)
is again in V , so axiom S1 holds. To verify axiom S2, let v = (x, y) and w = (x1 , y1 ) be typical
elements in V and compute
a(v + w) = a(x + x1, y + y1) = (a(y + y1), a(x + x1)) = (ay + ay1, ax + ax1)
av + aw = (ay, ax) + (ay1, ax1) = (ay + ay1, ax + ax1)
Because these are equal, axiom S2 holds. Similarly, the reader can verify that axiom S3 holds.
However, axiom S4 fails because
a(b(x, y)) = a(by, bx) = (abx, aby)
need not equal (ab)(x, y) = (aby, abx). Hence, V is not a vector space. (In fact, axiom S5 also fails.)
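The failure of axiom S4 here is easy to see numerically. A minimal Python sketch (ours, for illustration only) implements the scalar multiplication of this example and compares a(bv) with (ab)v:

    def smul(a, v):
        # the "new" scalar multiplication of Example 6.1.4: a(x, y) = (ay, ax)
        x, y = v
        return (a * y, a * x)

    v = (1, 2)
    a, b = 2, 3
    print(smul(a, smul(b, v)))   # a(bv) = (6, 12)
    print(smul(a * b, v))        # (ab)v = (12, 6) -- not equal, so S4 fails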
Sets of polynomials provide another important source of examples of vector spaces, so we review some
basic facts. A polynomial in an indeterminate x is an expression
p(x) = a0 + a1 x + a2 x2 + · · · + an xn
where a0 , a1 , a2 , . . . , an are real numbers called the coefficients of the polynomial. If all the coefficients
are zero, the polynomial is called the zero polynomial and is denoted simply as 0. If p(x) ≠ 0, the
highest power of x with a nonzero coefficient is called the degree of p(x) denoted as deg p(x). The
coefficient itself is called the leading coefficient of p(x). Hence deg (3 + 5x) = 1, deg (1 + x + x2 ) = 2,
and deg (4) = 0. (The degree of the zero polynomial is not defined.)
Let P denote the set of all polynomials and suppose that
p(x) = a0 + a1 x + a2 x2 + · · ·
q(x) = b0 + b1 x + b2 x2 + · · ·
are two polynomials in P (possibly of different degrees). Then p(x) and q(x) are called equal [written
p(x) = q(x)] if and only if all the corresponding coefficients are equal—that is, a0 = b0 , a1 = b1 , a2 = b2 ,
and so on. In particular, a0 + a1 x + a2 x2 + · · · = 0 means that a0 = 0, a1 = 0, a2 = 0, . . . , and this is the
reason for calling x an indeterminate. The set P has an addition and scalar multiplication defined on it as
follows: if p(x) and q(x) are as before and a is a real number,
p(x) + q(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2 + · · ·
ap(x) = aa0 + (aa1 )x + (aa2 )x2 + · · ·
Evidently, these are again polynomials, so P is closed under these operations, called pointwise addition
and scalar multiplication. The other vector space axioms are easily verified, and we have
Example 6.1.5
The set P of all polynomials is a vector space with the foregoing addition and scalar multiplication.
The zero vector is the zero polynomial, and the negative of a polynomial
p(x) = a0 + a1 x + a2 x2 + . . . is the polynomial −p(x) = −a0 − a1 x − a2 x2 − . . . obtained by
negating all the coefficients.
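These operations are easy to model on coefficient lists. The sketch below (the helper names are ours, for illustration) stores a polynomial a0 + a1x + a2x² + · · · as the list [a0, a1, a2, . . .]:

    from itertools import zip_longest

    def poly_add(p, q):
        # add corresponding coefficients, padding the shorter list with zeros
        return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

    def poly_scale(a, p):
        # multiply every coefficient by the scalar a
        return [a * c for c in p]

    p = [3, 5]            # 3 + 5x
    q = [1, 1, 1]         # 1 + x + x^2
    print(poly_add(p, q))     # [4, 6, 1], i.e. 4 + 6x + x^2
    print(poly_scale(2, p))   # [6, 10], i.e. 6 + 10x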
Example 6.1.6
Given n ≥ 1, let Pn denote the set of all polynomials of degree at most n, together with the zero
polynomial. That is
Pn = {a0 + a1 x + a2 x2 + · · · + an xn | a0 , a1 , a2 , . . . , an in R}.
Then Pn is a vector space. Indeed, sums and scalar multiples of polynomials in Pn are again in Pn ,
and the other vector space axioms are inherited from P. In particular, the zero vector and the
negative of a polynomial in Pn are the same as those in P.
If a and b are real numbers and a < b, the interval [a, b] is defined to be the set of all real numbers
x such that a ≤ x ≤ b. A (real-valued) function f on [a, b] is a rule that associates to every number x in
[a, b] a real number denoted f (x). The rule is frequently specified by giving a formula for f (x) in terms of
x. For example, f (x) = 2x , f (x) = sin x, and f (x) = x2 + 1 are familiar functions. In fact, every polynomial
p(x) can be regarded as the formula for a function p.
[Graph: y = x² = f(x), y = −x = g(x), and their sum y = (f + g)(x) = x² − x.]

The set of all functions on [a, b] is denoted F[a, b]. Two functions f and g in F[a, b] are equal if f(x) = g(x) for every x in [a, b], and we describe this by saying that f and g have the same action. Note that two polynomials are equal in P (defined prior to Example 6.1.5) if and only if they are equal as functions.

If f and g are two functions in F[a, b], and if r is a real number, define the sum f + g and the scalar product rf by
( f + g)(x) = f (x) + g(x) for each x in [a, b]
(r f )(x) = r f (x) for each x in [a, b]
In other words, the action of f + g upon x is to associate x with the number f (x) + g(x), and r f
associates x with r f (x). The sum of f (x) = x2 and g(x) = −x is shown in the diagram. These operations
on F[a, b] are called pointwise addition and scalar multiplication of functions and they are the usual
operations familiar from elementary algebra and calculus.
Example 6.1.7
The set F[a, b] of all functions on the interval [a, b] is a vector space using pointwise addition and
scalar multiplication. The zero function (in axiom A4), denoted 0, is the constant function defined
by
0(x) = 0 for each x in [a, b]
The negative of a function f is denoted −f and has action defined by
(−f)(x) = −f(x) for each x in [a, b]
Axioms A1 and S1 are clearly satisfied because, if f and g are functions on [a, b], then f + g and
r f are again such functions. The verification of the remaining axioms is left as Exercise 6.1.14.
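In programming terms, the pointwise operations build a new function whose value at each x combines the values of the old ones. A minimal sketch (ours, for illustration):

    def f_add(f, g):
        # (f + g)(x) = f(x) + g(x)
        return lambda x: f(x) + g(x)

    def f_scale(r, f):
        # (r f)(x) = r f(x)
        return lambda x: r * f(x)

    f = lambda x: x ** 2
    g = lambda x: -x
    h = f_add(f, g)      # h(x) = x^2 - x, the sum shown in the diagram above
    print(h(1.0))        # 0.0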
Other examples of vector spaces will appear later, but these are sufficiently varied to indicate the scope
of the concept and to illustrate the properties of vector spaces to be discussed. With such a variety of
examples, it may come as a surprise that a well-developed theory of vector spaces exists. That is, many
properties can be shown to hold for all vector spaces and hence hold in every example. Such properties
are called theorems and can be deduced from the axioms. Here is an important example.
Theorem 6.1.1: Cancellation
If u, v, and w are vectors in a vector space V and v + u = v + w, then u = w.

Proof. We are given v + u = v + w. If these were numbers instead of vectors, we would simply subtract v from both sides of the equation to obtain u = w. This can be accomplished with vectors by adding −v to both sides of the equation. The steps (using only the axioms) are as follows:
v + u = v + w
−v + (v + u) = −v + (v + w)
(−v + v) + u = (−v + v) + w    (axiom A3)
0 + u = 0 + w    (axiom A5)
u = w    (axiom A4)
This is the desired conclusion.
Given vectors u and v in a vector space V, we define their difference by
u − v = u + (−v)
We shall say that this vector is the result of having subtracted v from u and, as in arithmetic, this operation
has the property given in Theorem 6.1.2.
Theorem 6.1.2
If u and v are vectors in a vector space V, the equation
x + v = u
has one and only one solution x in V, given by
x = u − v
Proof. The difference x = u − v is indeed a solution to the equation because (using several axioms)
x + v = (u − v) + v = [u + (−v)] + v = u + (−v + v) = u + 0 = u
To see that this is the only solution, suppose x1 is another solution so that x1 + v = u. Then x + v = x1 + v
(they both equal u), so x = x1 by cancellation.
Similarly, cancellation shows that there is only one zero vector in any vector space and only one
negative of each vector (Exercises 6.1.10 and 6.1.11). Hence we speak of the zero vector and the negative
of a vector.
The next theorem derives some basic properties of scalar multiplication that hold in every vector space,
and will be used extensively.
Theorem 6.1.3
Let v denote a vector in a vector space V and let a denote a real number.
1. 0v = 0.
2. a0 = 0.
3. If av = 0, then either a = 0 or v = 0.
3 Observe that none of the scalar multiplication axioms are needed here.
4. (−1)v = −v.
5. (−a)v = −(av) = a(−v).
Proof.
1. Observe that 0v + 0v = (0 + 0)v = 0v = 0v + 0 where the first equality is by axiom S3. It follows that 0v = 0 by cancellation.
2. The proof is left as Exercise 6.1.12(a).
3. If av = 0 and a ≠ 0, then v = 1v = (a⁻¹a)v = a⁻¹(av) = a⁻¹0 = 0 using (2) and axioms S4 and S5.
4. We have (−1)v + v = (−1)v + 1v = (−1 + 1)v = 0v = 0 using (1) and axioms S5 and S3. Hence (−1)v + v = −v + v (because both are equal to 0), so (−1)v = −v by cancellation.
5. The proof is left as Exercise 6.1.12.4
The properties in Theorem 6.1.3 are familiar for matrices; the point here is that they hold in every vector
space. It is hard to exaggerate the importance of this observation.
Axiom A3 ensures that the sum u + (v + w) = (u + v) + w is the same however it is formed, and we
write it simply as u + v + w. Similarly, there are different ways to form any sum v1 + v2 + · · · + vn , and
Axiom A3 guarantees that they are all equal. Moreover, Axiom A2 shows that the order in which the
vectors are written does not matter (for example: u + v + w + z = z + u + w + v).
Similarly, axioms S2 and S3 extend. For example,
a(u + v + w) = au + av + aw
for all a, u, v, and w. Similarly (a + b + c)v = av + bv + cv holds for all values of a, b, c, and v (verify).
More generally,
a(v1 + v2 + · · · + vn) = av1 + av2 + · · · + avn
(a1 + a2 + · · · + an)v = a1v + a2v + · · · + anv
hold for all n ≥ 1, all numbers a, a1, . . . , an, and all vectors v, v1, . . . , vn. The verifications are by induc-
tion and are left to the reader (Exercise 6.1.13). These facts—together with the axioms, Theorem 6.1.3,
and the definition of subtraction—enable us to simplify expressions involving sums of scalar multiples of
vectors by collecting like terms, expanding, and taking out common factors. This has been discussed for
the vector space of matrices in Section 2.1 (and for geometric vectors in Section 4.1); the manipulations
in an arbitrary vector space are carried out in the same way. Here is an illustration.
Example 6.1.8
If u, v, and w are vectors in a vector space V , simplify the expression
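Since only sums and scalar multiples occur in such manipulations, they can be checked by machine once u, v, w are represented by coordinate triples (1, 0, 0), (0, 1, 0), (0, 0, 1). The following Python sketch (ours, illustrative only) simplifies the combination from Exercise 6.1.7(a) this way:

    def add(p, q):
        # componentwise vector addition
        return tuple(a + b for a, b in zip(p, q))

    def scale(a, p):
        # scalar multiplication, component by component
        return tuple(a * c for c in p)

    u, v, w = (1, 0, 0), (0, 1, 0), (0, 0, 1)

    # 3[2(u - 2v - w) + 3(w - v)] - 7(u - 3v - w)
    inner = add(scale(2, add(u, add(scale(-2, v), scale(-1, w)))),
                scale(3, add(w, scale(-1, v))))
    expr = add(scale(3, inner), scale(-7, add(u, add(scale(-3, v), scale(-1, w)))))
    print(expr)   # (-1, 0, 10), i.e. the expression collapses to -u + 10w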
Example 6.1.9
A set {0} with one element becomes a vector space if we define
0 + 0 = 0 and a0 = 0 for all scalars a
The resulting space is called the zero vector space and is denoted {0}.
The vector space axioms are easily verified for {0}. In any vector space V , Theorem 6.1.3 shows that the
zero subspace (consisting of the zero vector of V alone) is a copy of the zero vector space.
Exercise 6.1.1 Let V denote the set of ordered triples (x, y, z) and define addition in V as in R3. For each of the following definitions of scalar multiplication, decide whether V is a vector space.

a. a(x, y, z) = (ax, y, az)
b. a(x, y, z) = (ax, 0, az)
c. a(x, y, z) = (0, 0, 0)

Exercise 6.1.2 Are the following sets vector spaces with the indicated operations? If not, why not?

a. The set V of nonnegative real numbers; ordinary addition and scalar multiplication.
b. The set V of all polynomials of degree ≥ 3, together with 0; operations of P.
c. The set of all polynomials of degree ≤ 3; operations of P.
e. The set V of all 2 × 2 matrices of the form [a b; 0 c] (rows separated by a semicolon); operations of M22.
f. The set V of 2 × 2 matrices with equal column sums; operations of M22.
g. The set V of 2 × 2 matrices with zero determinant; usual matrix operations.
h. The set V of real numbers; usual operations.
i. The set V of complex numbers; usual addition and multiplication by a real number.
j. The set V of all ordered pairs (x, y) with the addition of R2, but using scalar multiplication a(x, y) = (ax, −ay).
k. The set V of all ordered pairs (x, y) with the addition of R2, but using scalar multiplication a(x, y) = (x, y) for all a in R.
l. The set V of all functions f : R → R with pointwise addition, but scalar multiplication defined by (af)(x) = f(ax).
m. The set V of all 2 × 2 matrices whose entries sum to 0; operations of M22.
n. The set V of all 2 × 2 matrices with the addition of M22 but scalar multiplication ∗ defined by a ∗ X = aX^T.

Exercise 6.1.3 Let V be the set of positive real numbers with vector addition being ordinary multiplication, and scalar multiplication being a · v = v^a. Show that V is a vector space.

Exercise 6.1.4 If V is the set of ordered pairs (x, y) of real numbers, show that it is a vector space with addition (x, y) + (x1, y1) = (x + x1, y + y1 + 1) and scalar multiplication a(x, y) = (ax, ay + a − 1). What is the zero vector in V?

Exercise 6.1.5 Find x and y (in terms of u and v) such that:

a. 2x + y = u and 5x + 3y = v
b. 3x − 2y = u and 4x − 5y = v

Exercise 6.1.6 In each case show that the condition au + bv + cw = 0 in V implies that a = b = c = 0.

a. V = R4; u = (2, 1, 0, 2), v = (1, 1, −1, 0), w = (0, 1, 2, 1)
b. V = M22; u = [1 0; 0 1], v = [0 1; 1 0], w = [1 1; 1 −1]
c. V = P; u = x³ + x, v = x² + 1, w = x³ − x² + x + 1
d. V = F[0, π]; u = sin x, v = cos x, w = 1 (the constant function)

Exercise 6.1.7 Simplify each of the following.

a. 3[2(u − 2v − w) + 3(w − v)] − 7(u − 3v − w)
b. 4(3u − v + w) − 2[(3u − 2v) − 3(v − w)] + 6(w − u − v)

Exercise 6.1.8 Show that x = v is the only solution to the equation x + x = 2v in a vector space V. Cite all axioms used.

Exercise 6.1.9 Show that −0 = 0 in any vector space. Cite all axioms used.

Exercise 6.1.10 Show that the zero vector 0 is uniquely determined by the property in axiom A4.

Exercise 6.1.11 Given a vector v, show that its negative −v is uniquely determined by the property in axiom A5.

Exercise 6.1.12

a. Prove (2) of Theorem 6.1.3. [Hint: Axiom S2.]
b. Prove that (−a)v = −(av) in Theorem 6.1.3 by first computing (−a)v + av. Then do it using (4) of Theorem 6.1.3 and axiom S4.
c. Prove that a(−v) = −(av) in Theorem 6.1.3 in two ways, as in part (b).

Exercise 6.1.13 Let v, v1, . . . , vn denote vectors in a vector space V and let a, a1, . . . , an denote numbers. Use induction on n to prove each of the following.

a. a(v1 + v2 + · · · + vn) = av1 + av2 + · · · + avn
b. (a1 + a2 + · · · + an)v = a1v + a2v + · · · + anv
Exercise 6.1.14 Verify axioms A2–A5 and S2–S5 for the space F[a, b] of functions on [a, b] (Example 6.1.7).

Exercise 6.1.15 Prove each of the following for vectors u and v and scalars a and b.

a. If av = 0, then a = 0 or v = 0.
b. If av = bv and v ≠ 0, then a = b.
c. If av = aw and a ≠ 0, then v = w.

Exercise 6.1.18 If A is an m × n matrix and X is in V^n, define AX in V^m by matrix multiplication. More precisely, if A = [aij] and X is the column with entries v1, . . . , vn, let AX be the column with entries u1, . . . , um, where ui = ai1v1 + ai2v2 + · · · + ainvn for each i. Prove that:
6.2 Subspaces and Spanning Sets

If V is a vector space, a nonempty subset U of V is called a subspace of V if U is itself a vector space using the addition and scalar multiplication of V. Subspaces of Rn (as defined in Section 5.1) are subspaces in the present sense by Example 6.1.3. Moreover, the defining properties for a subspace of Rn actually characterize subspaces in general.
Theorem 6.2.1: Subspace Test
A subset U of a vector space V is a subspace of V (using the operations of V) if and only if it satisfies the following three conditions:
1. 0 lies in U, where 0 is the zero vector of V.
2. If u1 and u2 are in U, then u1 + u2 is also in U.
3. If u is in U, then au is also in U for each scalar a.

Proof. If U is a subspace of V, then (2) and (3) hold by axioms A1 and S1 respectively, applied to the vector space U. Since U is nonempty (it is a vector space), choose u in U. Then (1) holds because 0 = 0u is in U by (3) and Theorem 6.1.3.
Conversely, if (1), (2), and (3) hold, then axioms A1 and S1 hold because of (2) and (3), and axioms
A2, A3, S2, S3, S4, and S5 hold in U because they hold in V . Axiom A4 holds because the zero vector 0
of V is actually in U by (1), and so serves as the zero of U . Finally, given u in U , then its negative −u in V
is again in U by (3) because −u = (−1)u (again using Theorem 6.1.3). Hence −u serves as the negative
of u in U .
Note that the proof of Theorem 6.2.1 shows that if U is a subspace of V , then U and V share the same zero
vector, and that the negative of a vector in the space U is the same as its negative in V .
Example 6.2.1
If V is any vector space, show that {0} and V are subspaces of V .
Solution. U = V clearly satisfies the conditions of the subspace test. As to U = {0}, it satisfies the
conditions because 0 + 0 = 0 and a0 = 0 for all a in R.
Example 6.2.2
Let v be a vector in a vector space V . Show that the set
Rv = {av | a in R}
of all scalar multiples of v is a subspace of V.
Solution. Because 0 = 0v, it is clear that 0 lies in Rv. Given two vectors av and a1 v in Rv, their
sum av + a1 v = (a + a1 )v is also a scalar multiple of v and so lies in Rv. Hence Rv is closed under
addition. Finally, given av, r(av) = (ra)v lies in Rv for all r ∈ R, so Rv is closed under scalar
multiplication. Hence the subspace test applies.
In particular, given d ≠ 0 in R3, Rd is the line through the origin with direction vector d.
The space Rv in Example 6.2.2 is described by giving the form of each vector in Rv. The next example
describes a subset U of the space Mnn by giving a condition that each matrix of U must satisfy.
Example 6.2.3
Let A be a fixed matrix in Mnn . Show that U = {X in Mnn | AX = X A} is a subspace of Mnn .
Solution. If 0 is the n × n zero matrix, then A0 = 0A, so 0 satisfies the condition for membership in
U. Next suppose that X and X1 lie in U so that AX = XA and AX1 = X1A. Then
A(X + X1) = AX + AX1 = XA + X1A = (X + X1)A
A(aX) = a(AX) = a(XA) = (aX)A    for any scalar a
so X + X1 and aX lie in U. Hence U is a subspace of Mnn by the subspace test.
Suppose p(x) is a polynomial and a is a number. Then the number p(a) obtained by replacing x by a
in the expression for p(x) is called the evaluation of p(x) at a. For example, if p(x) = 5 − 6x + 2x2 , then
the evaluation of p(x) at a = 2 is p(2) = 5 − 12 + 8 = 1. If p(a) = 0, the number a is called a root of p(x).
Example 6.2.4
Consider the set U of all polynomials in P that have 3 as a root:
U = {p(x) ∈ P | p(3) = 0}
Show that U is a subspace of P.
Solution. Clearly, the zero polynomial lies in U . Now let p(x) and q(x) lie in U so p(3) = 0 and
q(3) = 0. We have (p + q)(x) = p(x) + q(x) for all x, so (p + q)(3) = p(3) + q(3) = 0 + 0 = 0, and
U is closed under addition. The verification that U is closed under scalar multiplication is similar.
Recall from Example 6.1.6 that Pn consists of all polynomials of the form
a0 + a1x + a2x² + · · · + anxⁿ
where a0 , a1 , a2 , . . . , an are real numbers, and so is closed under the addition and scalar multiplication in
P. Moreover, the zero polynomial is included in Pn . Thus the subspace test gives Example 6.2.5.
Example 6.2.5
Pn is a subspace of P for each n ≥ 0.
The next example involves the notion of the derivative f ′ of a function f . (If the reader is not fa-
miliar with calculus, this example may be omitted.) A function f defined on the interval [a, b] is called
differentiable if the derivative f ′ (r) exists at every r in [a, b].
Example 6.2.6
Show that the subset D[a, b] of all differentiable functions on [a, b] is a subspace of the vector
space F[a, b] of all functions on [a, b].
Solution. The derivative of any constant function is the constant function 0; in particular, 0 itself is
differentiable and so lies in D[a, b]. If f and g both lie in D[a, b] (so that f ′ and g′ exist), then it is
a theorem of calculus that f + g and r f are both differentiable for any r ∈ R. In fact,
( f + g)′ = f ′ + g′ and (r f )′ = r f ′ , so both lie in D[a, b]. This shows that D[a, b] is a subspace of
F[a, b].
A vector v is called a linear combination of vectors v1, v2, . . . , vn in a vector space V if it can be expressed in the form
v = a1v1 + a2v2 + · · · + anvn
where a1, a2, . . . , an are scalars, called the coefficients of v1, v2, . . . , vn. The set of all linear combinations of these vectors is called their span, and is denoted by
span {v1, v2, . . . , vn} = {a1v1 + a2v2 + · · · + anvn | ai in R}
If it happens that V = span {v1 , v2 , . . . , vn }, these vectors are called a spanning set for V . For example,
the span of two vectors v and w is the set
span {v, w} = {sv + tw | s and t in R}
of all sums of scalar multiples of these vectors.
Example 6.2.7
Consider the vectors p1 = 1 + x + 4x2 and p2 = 1 + 5x + x2 in P2 . Determine whether p1 and p2 lie
in span {1 + 2x − x2 , 3 + 5x + 2x2 }.
Solution. For p1, write p1 = s(1 + 2x − x²) + t(3 + 5x + 2x²) and equate coefficients of powers of x to get the equations 1 = s + 3t, 1 = 2s + 5t, and 4 = −s + 2t. The solution is s = −2 and t = 1, so p1 lies in span {1 + 2x − x², 3 + 5x + 2x²}.
Again equating coefficients of powers of x for p2 gives equations 1 = s + 3t, 5 = 2s + 5t, and 1 = −s + 2t. But in this case there is no solution, so p2 is not in span {1 + 2x − x², 3 + 5x + 2x²}.
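Whether a given polynomial lies in a span is just a linear system in the coefficients, so the test is easy to mechanize. The following sketch (ours; it assumes the NumPy library is available) redoes Example 6.2.7 by least squares: the system is consistent exactly when the candidate belongs to the span.

    import numpy as np

    # columns: coefficient vectors (constant, x, x^2) of 1 + 2x - x^2 and 3 + 5x + 2x^2
    A = np.array([[1.0, 3.0],
                  [2.0, 5.0],
                  [-1.0, 2.0]])

    for name, b in [("p1", np.array([1.0, 1.0, 4.0])),
                    ("p2", np.array([1.0, 5.0, 1.0]))]:
        coeffs, _res, _rank, _sv = np.linalg.lstsq(A, b, rcond=None)
        in_span = np.allclose(A @ coeffs, b)
        print(name, "in span:", in_span, coeffs.round(4))
    # p1 in span: True  [-2.  1.]    p2 in span: False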
We saw in Example 5.1.6 that Rm = span {e1 , e2 , . . . , em } where the vectors e1 , e2 , . . . , em are the
columns of the m × m identity matrix. Of course Rm = Mm1 is the set of all m × 1 matrices, and there is
an analogous spanning set for each space Mmn . For example, each 2 × 2 matrix has the form
[a b; c d] = a[1 0; 0 0] + b[0 1; 0 0] + c[0 0; 1 0] + d[0 0; 0 1]
so
M22 = span {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
Similarly, we obtain
Example 6.2.8
Mmn is the span of the set of all m × n matrices with exactly one entry equal to 1, and all other
entries zero.
The fact that every polynomial in Pn has the form a0 + a1 x + a2 x2 + · · · + an xn where each ai is in R
shows that
Example 6.2.9
Pn = span {1, x, x2 , . . . , xn }.
In Example 6.2.2 we saw that span {v} = {av | a in R} = Rv is a subspace for any vector v in a vector
space V . More generally, the span of any set of vectors is a subspace. In fact, the proof of Theorem 5.1.1
goes through to prove:
Theorem 6.2.2
Let U = span {v1, v2, . . . , vn} in a vector space V. Then:
1. U is a subspace of V containing each of v1, v2, . . . , vn.
2. U is the “smallest” subspace containing these vectors in the sense that any subspace that contains each of v1, v2, . . . , vn must contain U.
Here is how condition 2 in Theorem 6.2.2 is used. Given vectors v1, . . . , vn in a vector space V and a subspace U ⊆ V, then:
span {v1, . . . , vn} ⊆ U ⇔ each vi ∈ U
Example 6.2.10
Show that P3 = span {x2 + x3 , x, 2x2 + 1, 3}.
Solution. Write U = span {x² + x³, x, 2x² + 1, 3}. Then U ⊆ P3, and we use the fact that P3 = span {1, x, x², x³} to show that P3 ⊆ U. In fact, x and 1 = (1/3) · 3 clearly lie in U. But then, successively,
x² = (1/2)[(2x² + 1) − 1] and x³ = (x² + x³) − x²
also lie in U. Hence P3 ⊆ U by Theorem 6.2.2.
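Spanning arguments like this one can be double-checked mechanically: list the coefficient columns of the proposed spanning set and verify full rank. A sketch (ours, assuming NumPy):

    import numpy as np

    # columns: x^2 + x^3, x, 2x^2 + 1, 3 (coefficients of 1, x, x^2, x^3)
    M = np.array([[0, 0, 1, 3],
                  [0, 1, 0, 0],
                  [1, 0, 2, 0],
                  [1, 0, 0, 0]], dtype=float)

    print(np.linalg.matrix_rank(M))   # 4, so the four polynomials span P3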
Example 6.2.11
Let u and v be two vectors in a vector space V. Show that
span {u, v} = span {u + 2v, u − v}
Solution. We have span {u + 2v, u − v} ⊆ span {u, v} by Theorem 6.2.2 because both u + 2v and
u − v lie in span {u, v}. On the other hand, u = (1/3)(u + 2v) + (2/3)(u − v) and v = (1/3)(u + 2v) − (1/3)(u − v) lie in span {u + 2v, u − v}, so span {u, v} ⊆ span {u + 2v, u − v}, again by Theorem 6.2.2.
Exercise 6.2.1 Which of the following are subspaces of P3? Support your answer.

a. U = {f(x) | f(x) ∈ P3, f(2) = 1}
b. U = {xg(x) | g(x) ∈ P2}
c. U = {xg(x) | g(x) ∈ P3}
d. U = {xg(x) + (1 − x)h(x) | g(x) and h(x) ∈ P2}
e. U = the set of all polynomials in P3 with constant term 0
f. U = {f(x) | f(x) ∈ P3, deg f(x) = 3}

Exercise 6.2.2 Which of the following are subspaces of M22? Support your answer.

a. U = {[a b; 0 c] | a, b, and c in R}
b. U = {[a b; c d] | a + b = c + d; a, b, c, d in R}
c. U = {A | A ∈ M22, A = A^T}
d. U = {A | A ∈ M22, AB = 0}, B a fixed 2 × 2 matrix
e. U = {A | A ∈ M22, A² = A}
f. U = {A | A ∈ M22, A is not invertible}
g. U = {A | A ∈ M22, BAC = CAB}, B and C fixed 2 × 2 matrices

Exercise 6.2.3 Which of the following are subspaces of F[0, 1]? Support your answer.

Exercise 6.2.8 Which of the following functions lie in span {cos² x, sin² x}? (Work in F[0, π].)

a. cos 2x
b. 1
c. x²
d. 1 + x²
Exercise 6.2.20 If Pn = span {p1(x), p2(x), . . . , pk(x)} and a is in R, show that pi(a) ≠ 0 for some i.

Exercise 6.2.21 Let U be a subspace of a vector space V.

a. If au is in U where a ≠ 0, show that u is in U.

Exercise 6.2.22 Let U be a nonempty subset of a vector space V. Show that U is a subspace of V if and only if u1 + au2 lies in U for all u1 and u2 in U and all a in R.

Exercise 6.2.23 Let U = {p(x) in P | p(3) = 0} be the set in Example 6.2.4. Use the factor theorem (see Section 6.5) to show that U consists of multiples of x − 3; that is, show that U = {(x − 3)q(x) | q(x) ∈ P}. Use this to show that U is a subspace of P.

Exercise 6.2.24 Let A1, A2, . . . , Am denote n × n matrices. If 0 ≠ y ∈ Rn and A1y = A2y = · · · = Amy = 0, show that {A1, A2, . . . , Am} cannot span Mnn.

Exercise 6.2.25 Let {v1, v2, . . . , vn} and {u1, u2, . . . , un} be sets of vectors in a vector space, and let X denote the column with entries v1, . . . , vn and Y the column with entries u1, . . . , un.

a. Show that span {v1, . . . , vn} ⊆ span {u1, . . . , un} if and only if AY = X for some n × n matrix A.
b. If X = AY where A is invertible, show that span {v1, . . . , vn} = span {u1, . . . , un}.

Exercise 6.2.26 If U and W are subspaces of a vector space V, let U ∪ W = {v | v is in U or v is in W}. Show that U ∪ W is a subspace if and only if U ⊆ W or W ⊆ U.

Exercise 6.2.27 Show that P cannot be spanned by a finite set of polynomials.
6.3 Linear Independence and Dimension

A set of vectors {v1, v2, . . . , vn} in a vector space V is called linearly independent (or simply independent) if it satisfies the following condition:
If s1v1 + s2v2 + · · · + snvn = 0, then s1 = s2 = · · · = sn = 0.
A set of vectors that is not linearly independent is said to be linearly dependent (or simply
dependent).
The trivial linear combination of the vectors v1, v2, . . . , vn is the one with every coefficient zero:
0v1 + 0v2 + · · · + 0vn
This is obviously one way of expressing 0 as a linear combination of the vectors v1, v2, . . . , vn, and they
are linearly independent when it is the only way.
Example 6.3.1
Show that {1 + x, 3x + x², 2 + x − x²} is independent in P2.

Solution. Suppose a linear combination of these polynomials vanishes:
s1(1 + x) + s2(3x + x²) + s3(2 + x − x²) = 0
Equating the coefficients of 1, x, and x² gives
s1 + 2s3 = 0
s1 + 3s2 + s3 = 0
s2 − s3 = 0
The only solution is s1 = s2 = s3 = 0, so the set is independent.
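Independence questions in Pn likewise reduce to the rank of a coefficient matrix. A sketch (ours, assuming NumPy) confirming Example 6.3.1:

    import numpy as np

    # columns: 1 + x, 3x + x^2, 2 + x - x^2 (coefficients of 1, x, x^2)
    M = np.array([[1, 0, 2],
                  [1, 3, 1],
                  [0, 1, -1]], dtype=float)

    # full column rank means the homogeneous system has only the zero solution
    print(np.linalg.matrix_rank(M) == 3)   # True: s1 = s2 = s3 = 0 is forced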
Example 6.3.2
Show that {sin x, cos x} is independent in the vector space F[0, 2π ] of functions defined on the
interval [0, 2π ].
Solution. Suppose a linear combination of these functions vanishes:
s1(sin x) + s2(cos x) = 0
This must hold for all values of x in [0, 2π] (by the definition of equality in F[0, 2π]). Taking x = 0 yields s2 = 0 (because sin 0 = 0 and cos 0 = 1). Similarly, s1 = 0 follows from taking x = π/2 (because sin(π/2) = 1 and cos(π/2) = 0).
Example 6.3.3
Suppose that {u, v} is an independent set in a vector space V . Show that {u + 2v, u − 3v} is also
independent.
Solution. Suppose a linear combination vanishes: s(u + 2v) + t(u − 3v) = 0. Collecting terms gives (s + t)u + (2s − 3t)v = 0. Because {u, v} is independent, this yields linear equations s + t = 0 and 2s − 3t = 0. The only solution is s = t = 0, so {u + 2v, u − 3v} is independent.
Example 6.3.4
Show that any set of polynomials of distinct degrees is independent.
Solution. Let the polynomials be p1, p2, . . . , pm, ordered so that their degrees satisfy deg(p1) = d1 > deg(p2) = d2 > · · · > deg(pm) = dm, and suppose that
t1p1 + t2p2 + · · · + tmpm = 0
where each ti is in R. Let ax^d1 be the term in p1 of highest degree, where a ≠ 0. Since d1 > d2 > · · · > dm, it follows that t1ax^d1 is the only term of degree d1 in the linear combination t1p1 + t2p2 + · · · + tmpm = 0. This means that t1ax^d1 = 0, whence t1a = 0, hence t1 = 0 (because a ≠ 0). But then t2p2 + · · · + tmpm = 0, so we can repeat the argument to show that t2 = 0. Continuing, we obtain ti = 0 for each i, as desired.
Example 6.3.5
Suppose that A is an n × n matrix such that A^k = 0 but A^(k−1) ≠ 0. Show that
B = {I, A, A², . . . , A^(k−1)} is independent in Mnn.

Solution. Suppose a linear combination of these matrices vanishes: r0I + r1A + r2A² + · · · + rk−1A^(k−1) = 0. Multiply this relation by A^(k−1). Since A^k = 0, all the higher powers are zero, so this becomes r0A^(k−1) = 0. But A^(k−1) ≠ 0, so r0 = 0, and we have r1A + r2A² + · · · + rk−1A^(k−1) = 0. Now multiply by A^(k−2) to conclude that r1 = 0. Continuing, we obtain ri = 0 for each i, so B is independent.
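A concrete instance is quick to check by machine. The sketch below (ours, assuming NumPy) takes a 3 × 3 matrix with A³ = 0 but A² ≠ 0, flattens I, A, A² into rows, and computes the rank:

    import numpy as np

    A = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [0, 0, 0]], dtype=float)   # A^3 = 0 but A^2 != 0

    powers = [np.eye(3), A, A @ A]
    S = np.vstack([P.reshape(1, -1) for P in powers])   # each matrix as a row of length 9
    print(np.linalg.matrix_rank(S))   # 3: {I, A, A^2} is independent in M33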
The next example collects several useful properties of independence for reference.
Example 6.3.6
Let V denote a vector space.
1. If v ≠ 0 in V, then {v} is an independent set.
2. No independent set of vectors in V can contain the zero vector 0.

Solution.
1. Suppose sv = 0 with s in R. If s ≠ 0, then v = s⁻¹(sv) = s⁻¹0 = 0, contrary to assumption. So s = 0, and {v} is independent.
2. If 0 belongs to the set, then 1 · 0 = 0 is a nontrivial vanishing linear combination (coefficient 1 on the vector 0, coefficient 0 elsewhere), so the set is not independent.
A set of vectors is independent if 0 can be written as a linear combination of the vectors in only one way (the trivial way). The following theorem shows that, in fact, every linear combination of independent vectors has uniquely determined coefficients, and so extends Theorem 5.2.1.
Theorem 6.3.1
Let {v1, v2, . . . , vn} be a linearly independent set of vectors in a vector space V. If a vector v has two (ostensibly different) representations
v = s1v1 + s2v2 + · · · + snvn
v = t1v1 + t2v2 + · · · + tnvn
as linear combinations of these vectors, then s1 = t1, s2 = t2, . . . , sn = tn. In other words, the coefficients are uniquely determined by v.
The following theorem extends (and proves) Theorem 5.2.4, and is one of the most useful results in linear algebra.

Theorem 6.3.2: Fundamental Theorem
Suppose a vector space V can be spanned by n vectors. If any set of m vectors in V is linearly independent, then m ≤ n.
Proof. Let V = span {v1 , v2 , . . . , vn }, and suppose that {u1 , u2 , . . . , um } is an independent set in V .
Then u1 = a1v1 + a2v2 + · · · + anvn where each ai is in R. As u1 ≠ 0 (Example 6.3.6), not all of the ai are zero, say a1 ≠ 0 (after relabelling the vi). Then V = span {u1, v2, v3, . . . , vn} as the reader can verify.
Hence, write u2 = b1 u1 + c2 v2 + c3 v3 + · · · + cn vn . Then some ci 6= 0 because {u1 , u2 } is independent;
so, as before, V = span {u1 , u2 , v3 , . . . , vn }, again after possible relabelling of the vi . If m > n, this
procedure continues until all the vectors vi are replaced by the vectors u1 , u2 , . . . , un . In particular,
V = span {u1 , u2 , . . . , un }. But then un+1 is a linear combination of u1 , u2 , . . . , un contrary to the
independence of the ui . Hence, the assumption m > n cannot be valid, so m ≤ n and the theorem is proved.
A set {e1, e2, . . . , en} of vectors in a vector space V is called a basis of V if it satisfies the following two conditions:
1. {e1, e2, . . . , en} is linearly independent
2. V = span {e1, e2, . . . , en}
Thus if a set of vectors {e1 , e2 , . . . , en } is a basis, then every vector in V can be written as a linear
combination of these vectors in a unique way (Theorem 6.3.1). But even more is true: Any two (finite)
bases of V contain the same number of vectors.
Theorem 6.3.3
If {e1, e2, . . . , en} and {f1, f2, . . . , fm} are two bases of a vector space V, then m = n.

Proof. Because V = span {e1, e2, . . . , en} and {f1, f2, . . . , fm} is independent, it follows from Theorem 6.3.2 that m ≤ n. Similarly n ≤ m, so n = m, as asserted.
Theorem 6.3.3 guarantees that no matter which basis of V is chosen it contains the same number of
vectors as any other basis. Hence there is no ambiguity about the following definition.
If {e1, e2, . . . , en} is a basis of the nonzero vector space V, the number n of vectors in the basis is called the dimension of V, and we write
dim V = n
The zero vector space {0} is defined to have dimension 0:
dim {0} = 0
In our discussion to this point we have always assumed that a basis is nonempty and hence that the di-
mension of the space is at least 1. However, the zero space {0} has no basis (by Example 6.3.6) so our
insistence that dim {0} = 0 amounts to saying that the empty set of vectors is a basis of {0}. Thus the
statement that “the dimension of a vector space is the number of vectors in any basis” holds even for the
zero space.
We saw in Example 5.2.9 that dim (Rn ) = n and, if e j denotes column j of In , that {e1 , e2 , . . . , en } is
a basis (called the standard basis). In Example 6.3.7 below, similar considerations apply to the space Mmn
of all m × n matrices; the verifications are left to the reader.
Example 6.3.7
The space Mmn has dimension mn, and one basis consists of all m × n matrices with exactly one
entry equal to 1 and all other entries equal to 0. We call this the standard basis of Mmn .
Example 6.3.8
Show that dim Pn = n + 1 and that {1, x, x2 , . . . , xn } is a basis, called the standard basis of Pn .
Example 6.3.9
If v ≠ 0 is any nonzero vector in a vector space V, show that span {v} = Rv has dimension 1.
Solution. {v} clearly spans Rv, and it is linearly independent by Example 6.3.6. Hence {v} is a
basis of Rv, and so dim Rv = 1.
Example 6.3.10
Let A = [1 1; 0 0] and consider the subspace
U = {X in M22 | AX = XA}
Show that dim U = 2 and find a basis of U.

Solution. Write X = [x y; z w]. Then AX = [x+z y+w; 0 0] and XA = [x x; z z], so AX = XA holds if and only if z = 0 and x = y + w. Thus X = [y+w y; 0 w] = yA + wI, so U = span {I, A}. Since A is not a scalar multiple of I, {I, A} is independent and hence is a basis of U, so dim U = 2.
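The condition AX = XA is linear in the entries of X, so dim U can also be found numerically. Using the standard identity vec(AX − XA) = (I ⊗ A − Aᵀ ⊗ I)vec(X) for column-stacking vec, dim U is the nullity of that 4 × 4 matrix. A sketch (ours, assuming NumPy):

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [0.0, 0.0]])
    I = np.eye(2)

    # vec(AX - XA) = (I kron A - A^T kron I) vec(X)
    L = np.kron(I, A) - np.kron(A.T, I)
    dim_U = 4 - np.linalg.matrix_rank(L)
    print(dim_U)   # 2, matching dim U = 2 with basis {I, A}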
Example 6.3.11
Show that the set V of all symmetric 2 × 2 matrices is a vector space, and find the dimension of V .
Solution. A matrix A is symmetric if AT = A. If A and B lie in V , then
(A + B)T = AT + BT = A + B and (kA)T = kAT = kA
using Theorem 2.1.2. Hence A + B and kA are also symmetric. As the 2 × 2 zero matrix is also in
V, this shows that V is a vector space (being a subspace of M22). Now a matrix A is symmetric when entries directly across the main diagonal are equal, so each 2 × 2 symmetric matrix has the form
[a c; c b] = a[1 0; 0 0] + b[0 0; 0 1] + c[0 1; 1 0]
Hence the set B = {[1 0; 0 0], [0 0; 0 1], [0 1; 1 0]} spans V, and the reader can verify that B is
linearly independent. Thus B is a basis of V , so dim V = 3.
It is frequently convenient to alter a basis by multiplying each basis vector by a nonzero scalar. The
next example shows that this always produces another basis. The proof is left as Exercise 6.3.22.
Example 6.3.12
Let B = {v1 , v2 , . . . , vn } be nonzero vectors in a vector space V . Given nonzero scalars
a1 , a2 , . . . , an , write D = {a1 v1 , a2 v2 , . . . , an vn }. If B is independent or spans V , the same is true
of D. In particular, if B is a basis of V , so also is D.
Exercise 6.3.1 Show that each of the following sets is independent in the space indicated.

a. {1 + x, 1 − x, x + x²} in P2
b. {x², x + 1, 1 − x − x²} in P2
c. {[1 1; 0 0], [1 0; 1 0], [0 0; 1 −1], [0 1; 0 1]} in M22
d. {[1 1; 1 0], [0 1; 1 1], [1 0; 1 1], [1 1; 0 1]} in M22

Exercise 6.3.2 Which of the following subsets of V are independent?

a. V = P2; {x² + 1, x + 1, x}
d. V = M22; {[−1 0; 0 −1], [1 −1; −1 1], [1 1; 1 1], [0 −1; −1 0]}
e. V = F[1, 2]; {1/x, 1/x², 1/x³}
f. V = F[0, 1]; {1/(x² + x − 6), 1/(x² − 5x + 6), 1/(x² − 9)}

Exercise 6.3.3 Which of the following are independent in F[0, 2π]?

a. {sin² x, cos² x}
b. {1, sin² x, cos² x}
c. {x, sin² x, cos² x}

Exercise 6.3.4 Find all values of a such that the following are independent in R3.
Exercise 6.3.5 Show that the following are bases of the space V indicated.

a. {(1, 1, 0), (1, 0, 1), (0, 1, 1)}; V = R3
b. {(−1, 1, 1), (1, −1, 1), (1, 1, −1)}; V = R3
c. {[1 0; 0 1], [0 1; 1 0], [1 1; 0 1], [1 0; 0 0]}; V = M22
d. {1 + x, x + x², x² + x³, x³}; V = P3

Exercise 6.3.6 Exhibit a basis and calculate the dimension of each of the following subspaces of P2.

a. {a(1 + x) + b(x + x²) | a and b in R}
b. {a + b(x + x²) | a and b in R}
c. {p(x) | p(1) = 0}

Exercise 6.3.10

a. Let V denote the set of all 2 × 2 matrices with equal column sums. Show that V is a subspace of M22, and compute dim V.
b. Repeat part (a) for 3 × 3 matrices.
c. Repeat part (a) for n × n matrices.

Exercise 6.3.11

a. Let V = {(x² + x + 1)p(x) | p(x) in P2}. Show that V is a subspace of P4 and find dim V. [Hint: If f(x)g(x) = 0 in P, then f(x) = 0 or g(x) = 0.]
b. Repeat with V = {(x² − x)p(x) | p(x) in P3}, a subset of P5.
c. Generalize.

Exercise 6.3.12 In each case, either prove the assertion or give an example showing that it is false.
o. If dim V = n, then no set of fewer than n vectors can span V.

Exercise 6.3.13 Let A ≠ 0 and B ≠ 0 be n × n matrices, and assume that A is symmetric and B is skew-symmetric (that is, B^T = −B). Show that {A, B} is independent.

Exercise 6.3.14 Show that every set of vectors containing a dependent set is again dependent.

Exercise 6.3.15 Show that every nonempty subset of an independent set of vectors is again independent.

Exercise 6.3.16 Let f and g be functions on [a, b], and assume that f(a) = 1 = g(b) and f(b) = 0 = g(a). Show that {f, g} is independent in F[a, b].

Exercise 6.3.17 Let {A1, A2, . . . , Ak} be independent in Mmn, and suppose that U and V are invertible matrices of size m × m and n × n, respectively. Show that {UA1V, UA2V, . . . , UAkV} is independent.

Exercise 6.3.18 Show that {v, w} is independent if and only if neither v nor w is a scalar multiple of the other.

Exercise 6.3.19 Assume that {u, v} is independent in a vector space V. Write u′ = au + bv and v′ = cu + dv, where a, b, c, and d are numbers. Show that {u′, v′} is independent if and only if the matrix [a c; b d] is invertible. [Hint: Theorem 2.4.5.]

Exercise 6.3.20 If {v1, v2, . . . , vk} is independent and w is not in span {v1, v2, . . . , vk}, show that:

a. {w, v1, v2, . . . , vk} is independent.
b. {v1 + w, v2 + w, . . . , vk + w} is independent.

Exercise 6.3.21 If {v1, v2, . . . , vk} is independent, show that {v1, v1 + v2, . . . , v1 + v2 + · · · + vk} is also independent.

Exercise 6.3.22 Prove Example 6.3.12.

Exercise 6.3.23 Let {u, v, w, z} be independent. Which of the following are dependent?

a. {u − v, v − w, w − u}

Exercise 6.3.24 Let U and W be subspaces of V with bases {u1, u2, u3} and {w1, w2} respectively. If U and W have only the zero vector in common, show that {u1, u2, u3, w1, w2} is independent.

Exercise 6.3.25 Let {p, q} be independent polynomials. Show that {p, q, pq} is independent if and only if deg p ≥ 1 and deg q ≥ 1.

Exercise 6.3.26 If z is a complex number, show that {z, z²} is independent if and only if z is not real.

Exercise 6.3.27 Let B = {A1, A2, . . . , An} ⊆ Mmn, and write B′ = {A1^T, A2^T, . . . , An^T} ⊆ Mnm. Show that:

a. B is independent if and only if B′ is independent.
b. B spans Mmn if and only if B′ spans Mnm.

Exercise 6.3.28 If V = F[a, b] as in Example 6.1.7, show that the set of constant functions is a subspace of dimension 1 (f is constant if there is a number c such that f(x) = c for all x).

Exercise 6.3.29

a. If U is an invertible n × n matrix and {A1, A2, . . . , Amn} is a basis of Mmn, show that {A1U, A2U, . . . , AmnU} is also a basis.
b. Show that part (a) fails if U is not invertible. [Hint: Theorem 2.4.5.]

Exercise 6.3.30 Show that {(a, b), (a1, b1)} is a basis of R2 if and only if {a + bx, a1 + b1x} is a basis of P1.

Exercise 6.3.31 Find the dimension of the subspace span {1, sin² θ, cos 2θ} of F[0, 2π].

Exercise 6.3.32 Show that F[0, 1] is not finite dimensional.

Exercise 6.3.33 If U and W are subspaces of V, define their intersection U ∩ W as follows:
U ∩ W = {v | v is in both U and W}

a. Show that U ∩ W is a subspace contained in U and W.
b. Show that U ∩ W = {0} if and only if {u, w} is independent for any nonzero vectors u in U and w in W.
c. If B and D are bases of U and W, and if U ∩ W = {0}, show that B ∪ D = {v | v is in B or D} is independent.

Exercise 6.3.34 If U and W are vector spaces, let V = {(u, w) | u in U and w in W}.

a. Show that V is a vector space if (u, w) + (u1, w1) = (u + u1, w + w1) and a(u, w) = (au, aw).
b. If dim U = m and dim W = n, show that dim V = m + n.
c. If V1, . . . , Vm are vector spaces, let V = V1 × · · · × Vm = {(v1, . . . , vm) | vi ∈ Vi for each i} denote the space of m-tuples from the Vi with componentwise operations (see Exercise 6.1.17). If dim Vi = ni for each i, show that dim V = n1 + · · · + nm.

Exercise 6.3.35 Let Dn denote the set of all functions f from the set {1, 2, . . . , n} to R.

a. Show that Dn is a vector space with pointwise addition and scalar multiplication.
b. Show that {S1, S2, . . . , Sn} is a basis of Dn where, for each k = 1, 2, . . . , n, the function Sk is defined by Sk(k) = 1, whereas Sk(j) = 0 if j ≠ k.

Exercise 6.3.36 A polynomial p(x) is called even if p(−x) = p(x) and odd if p(−x) = −p(x). Let En and On denote the sets of even and odd polynomials in Pn.

a. Show that En is a subspace of Pn and find dim En.
b. Show that On is a subspace of Pn and find dim On.

Exercise 6.3.37 Let {v1, . . . , vn} be independent in a vector space V, and let A be an n × n matrix. Define u1, . . . , un by requiring that the column with entries u1, . . . , un equal A times the column with entries v1, . . . , vn (see Exercise 6.1.18). Show that {u1, . . . , un} is independent if and only if A is invertible.
6.4 Finite Dimensional Spaces

Up to this point, we have had no guarantee that an arbitrary vector space has a basis—and hence no
guarantee that one can speak at all of the dimension of V . However, Theorem 6.4.1 will show that any
space that is spanned by a finite set of vectors has a (finite) basis: The proof requires the following basic
lemma, of interest in itself, that gives a way to enlarge a given independent set of vectors.
Lemma 6.4.1: Independent Lemma
Let {v1, v2, . . . , vk} be an independent set of vectors in a vector space V. If u is in V but u ∉ span {v1, v2, . . . , vk},5 then {u, v1, v2, . . . , vk} is also independent.

Proof. Let tu + t1v1 + t2v2 + · · · + tkvk = 0; we must show that all the coefficients are zero. First, t = 0 because, otherwise, u = −(t1/t)v1 − (t2/t)v2 − · · · − (tk/t)vk is in span {v1, v2, . . . , vk}, contrary to our assumption.
5 If X is a set, we write a ∈ X to indicate that a is an element of the set X. If a is not an element of X, we write a ∉ X.
Hence t = 0. But then t1 v1 + t2 v2 + · · · + tk vk = 0 so the rest of the ti are zero by the independence of
{v1 , v2 , . . . , vk }. This is what we wanted.
Note that the converse of Lemma 6.4.1 is also true: if {u, v1, v2, . . . , vk} is independent, then u is not in span {v1, v2, . . . , vk}.

As an illustration, suppose that {v1, v2} is independent in R3. Then v1 and v2 are not parallel, so span {v1, v2} is a plane through the origin (shaded in the diagram). By Lemma 6.4.1, u is not in this plane if and only if {u, v1, v2} is independent.

[Diagram: the plane span {v1, v2} through the origin in R3, with a vector u pointing out of the plane.]
A vector space V is said to be finite dimensional if it is spanned by finitely many vectors; otherwise V is called infinite dimensional. Thus the zero vector space {0} is finite dimensional because {0} is a spanning set.
Lemma 6.4.2
Let V be a finite dimensional vector space. If U is any subspace of V , then any independent subset
of U can be enlarged to a finite basis of U .
Theorem 6.4.1
Let V be a finite dimensional vector space spanned by m vectors.
1. V has a finite basis and dim V ≤ m.
2. Every independent set of vectors in V can be enlarged to a basis of V by adding vectors from any fixed basis of V.
3. If U is a subspace of V, then
a. U is finite dimensional and dim U ≤ dim V.
b. Any basis of U can be enlarged to a basis of V.
Proof.
1. If V = {0}, then V has an empty basis and dim V = 0 ≤ m. Otherwise, let v 6= 0 be a vector in V .
Then {v} is independent, so (1) follows from Lemma 6.4.2 with U = V .
2. We refine the proof of Lemma 6.4.2. Fix a basis B of V and let I be an independent subset of V .
If span I = V then I is already a basis of V. If span I ≠ V, then B is not contained in span I (because B spans V). Hence choose b1 ∈ B such that b1 ∉ span I. Then the set I ∪ {b1} is independent by Lemma 6.4.1. If span (I ∪ {b1}) = V we are done; otherwise a similar argument shows that I ∪ {b1, b2} is independent for some b2 ∈ B. Continue this process. As in the proof of Lemma 6.4.2,
a basis of V will be reached eventually.
3. a. This is clear if U = {0}. Otherwise, let u 6= 0 in U . Then {u} can be enlarged to a finite basis
B of U by Lemma 6.4.2, proving that U is finite dimensional. But B is independent in V , so
dim U ≤ dim V by the fundamental theorem.
b. This is clear if U = {0} because V has a basis; otherwise, it follows from (2).
Theorem 6.4.1 shows that a vector space V is finite dimensional if and only if it has a finite basis (possibly
empty), and that every subspace of a finite dimensional space is again finite dimensional.
Example 6.4.1
Enlarge the independent set D = {[1 1; 1 0], [0 1; 1 1], [1 0; 1 1]} to a basis of M22.

Solution. The standard basis of M22 is {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}, so including one of these in D will produce a basis by Theorem 6.4.1. In fact including any of these matrices in D produces an independent set (verify), and hence a basis by Theorem 6.4.4. Of course these vectors are not the only possibilities; for example, including [1 1; 0 1] works as well.
Example 6.4.2
Find a basis of P3 containing the independent set {1 + x, 1 + x2 }.
Solution. The standard basis of P3 is {1, x, x2 , x3 }, so including two of these vectors will do. If
we use 1 and x3 , the result is {1, 1 + x, 1 + x2 , x3 }. This is independent because the polynomials
have distinct degrees (Example 6.3.4), and so is a basis by Theorem 6.4.1. Of course, including
{1, x} or {1, x2 } would not work!
Example 6.4.3
Show that the space P of all polynomials is infinite dimensional.

Solution. For each n ≥ 1, Pn is a subspace of P with dim Pn = n + 1 (Example 6.3.8). If P were finite dimensional, part (3) of Theorem 6.4.1 would give n + 1 = dim Pn ≤ dim P for every n ≥ 1, which is impossible.
The next example illustrates how (2) of Theorem 6.4.1 can be used.
Example 6.4.4
If c1 , c2 , . . . , ck are independent columns in Rn , show that they are the first k columns in some
invertible n × n matrix.
Theorem 6.4.2
Let U and W be subspaces of the finite dimensional space V.
1. If U ⊆ W, then dim U ≤ dim W.
2. If U ⊆ W and dim U = dim W, then U = W.
Proof. Since W is finite dimensional, (1) follows by taking V = W in part (3) of Theorem 6.4.1. Now
assume dim U = dim W = n, and let B be a basis of U. Then B is an independent set in W. If U ≠ W, then span B ≠ W, so B can be extended to an independent set of n + 1 vectors in W by Lemma 6.4.1.
This contradicts the fundamental theorem (Theorem 6.3.2) because W is spanned by dim W = n vectors.
Hence U = W , proving (2).
Theorem 6.4.2 is very useful. This was illustrated in Example 5.2.13 for R2 and R3 ; here is another
example.
Example 6.4.5
If a is a number, let W denote the subspace of all polynomials in Pn that have a as a root:
W = {p(x) | p(x) ∈ Pn and p(a) = 0}
Show that {(x − a), (x − a)2 , . . . , (x − a)n } is a basis of W .
Solution. Observe first that (x − a), (x − a)2 , . . . , (x − a)n are members of W , and that they are
independent because they have distinct degrees (Example 6.3.4). Write
U = span {(x − a), (x − a)2 , . . . , (x − a)n }
Then we have U ⊆ W ⊆ Pn , dim U = n, and dim Pn = n + 1. Hence n ≤ dim W ≤ n + 1 by
Theorem 6.4.2. Since dim W is an integer, we must have dim W = n or dim W = n + 1. But then W = U or W = Pn, again by Theorem 6.4.2. Because W ≠ Pn (the constant polynomial 1 is not in W), it follows that W = U, as required.
A set of vectors is called dependent if it is not independent, that is if some nontrivial linear combina-
tion vanishes. The next result is a convenient test for dependence.

Lemma 6.4.3: Dependent Lemma
A set {v1, v2, . . . , vk} of vectors in a vector space V is dependent if and only if some vector in the set is a linear combination of the others.

Proof. If, say, v2 is a linear combination of the others, v2 = s1v1 + s3v3 + · · · + skvk, then
s1v1 + (−1)v2 + s3v3 + · · · + skvk = 0
is a nontrivial linear combination that vanishes, so the set is dependent. Conversely, if the set is dependent, some vanishing linear combination t1v1 + t2v2 + · · · + tkvk = 0 has a coefficient ti ≠ 0; solving for vi expresses it as a linear combination of the others.
Theorem 6.4.3
Let V be a finite dimensional vector space. Any spanning set for V can be cut down (by deleting
vectors) to a basis of V .
Proof. Since V is finite dimensional, it has a finite spanning set S. Among all spanning sets contained in S,
choose S0 containing the smallest number of vectors. It suffices to show that S0 is independent (then S0 is a
basis, proving the theorem). Suppose, on the contrary, that S0 is not independent. Then, by Lemma 6.4.3,
some vector u ∈ S0 is a linear combination of the set S1 = S0 \ {u} of vectors in S0 other than u. It follows
that span S0 = span S1 , that is, V = span S1 . But S1 has fewer elements than S0 so this contradicts the
choice of S0 . Hence S0 is independent after all.
Note that, with Theorem 6.4.1, Theorem 6.4.3 completes the promised proof of Theorem 5.2.6 for the case
V = Rn .
Example 6.4.6
Find a basis of P3 in the spanning set S = {1, x + x2 , 2x − 3x2 , 1 + 3x − 2x2 , x3 }.
Solution. Since dim P3 = 4, we must eliminate one polynomial from S. It cannot be x3 because
the span of the rest of S is contained in P2 . But eliminating 1 + 3x − 2x2 does leave a basis (verify).
Note that 1 + 3x − 2x2 is the sum of the first three polynomials in S.
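The cutting-down procedure can be carried out greedily by machine: scan the spanning vectors in order and keep one exactly when it raises the rank of those kept so far. A sketch (ours, assuming NumPy), applied to the set S above with coefficients listed in the order 1, x, x², x³:

    import numpy as np

    S = {
        "1":             [1, 0, 0, 0],
        "x + x^2":       [0, 1, 1, 0],
        "2x - 3x^2":     [0, 2, -3, 0],
        "1 + 3x - 2x^2": [1, 3, -2, 0],
        "x^3":           [0, 0, 0, 1],
    }

    kept, rows = [], []
    for name, coeffs in S.items():
        candidate = rows + [coeffs]
        if np.linalg.matrix_rank(np.array(candidate, dtype=float)) > len(rows):
            kept.append(name)       # this vector enlarges the span, so keep it
            rows = candidate

    print(kept)   # ['1', 'x + x^2', '2x - 3x^2', 'x^3']: 1 + 3x - 2x^2 is dropped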
Theorem 6.4.4
Let V be a vector space with dim V = n, and suppose S is a set of exactly n vectors in V . Then S is
independent if and only if S spans V .
Proof. Assume first that S is independent. By Theorem 6.4.1, S is contained in a basis B of V . Hence
|S| = n = |B| so, since S ⊆ B, it follows that S = B. In particular S spans V .
Conversely, assume that S spans V , so S contains a basis B by Theorem 6.4.3. Again |S| = n = |B| so,
since S ⊇ B, it follows that S = B. Hence S is independent.
One of independence or spanning is often easier to establish than the other when showing that a set of
vectors is a basis. For example if V = Rn it is easy to check whether a subset S of Rn is orthogonal (hence
independent) but checking spanning can be tedious. Here are three more examples.
Example 6.4.7
Consider the set S = {p0 (x), p1 (x), . . . , pn (x)} of polynomials in Pn . If deg pk (x) = k for each k,
show that S is a basis of Pn .
Solution. The set S is independent—the degrees are distinct—see Example 6.3.4. Hence S is a
basis of Pn by Theorem 6.4.4 because dim Pn = n + 1.
Example 6.4.8
Let V denote the space of all symmetric 2 × 2 matrices. Find a basis of V consisting of invertible
matrices.
Solution. We know that dim V = 3 (Example 6.3.11), so what is needed is a set of three invertible, symmetric matrices that (using Theorem 6.4.4) is either independent or spans V. The set {[1 0; 0 1], [1 0; 0 −1], [0 1; 1 0]} is independent (verify) and so is a basis of the required type.
Example 6.4.9
Let A be any n × n matrix. Show that there exist n² + 1 scalars a0, a1, a2, . . . , an², not all zero, such that
a0I + a1A + a2A² + · · · + an²A^(n²) = 0
where I denotes the n × n identity matrix.
Solution. The space Mnn of all n × n matrices has dimension n² by Example 6.3.7. Hence the n² + 1 matrices I, A, A², . . . , A^(n²) cannot be independent by Theorem 6.4.4, so a nontrivial linear combination vanishes. This is the desired conclusion.
The result in Example 6.4.9 can be written as f(A) = 0 where f(x) = a0 + a1x + a2x² + · · · + an²x^(n²). In other words, A satisfies a nonzero polynomial f(x) of degree at most n². In fact we know that A satisfies
a nonzero polynomial of degree n (this is the Cayley-Hamilton theorem—see Theorem 8.7.10), but the brevity of the solution in Example 6.4.9 is an indication of the power of these methods.
If U and W are subspaces of a vector space V , there are two related subspaces that are of interest, their
sum U +W and their intersection U ∩W , defined by
U +W = {u + w | u ∈ U and w ∈ W }
U ∩W = {v ∈ V | v ∈ U and v ∈ W }
It is routine to verify that these are indeed subspaces of V , that U ∩W is contained in both U and W , and
that U +W contains both U and W . We conclude this section with a useful fact about the dimensions of
these spaces. The proof is a good illustration of how the theorems in this section are used.
Theorem 6.4.5
Suppose that U and W are finite dimensional subspaces of a vector space V . Then U +W is finite
dimensional and
dim (U +W ) = dim U + dim W − dim (U ∩W ).
Proof. Since U ∩W ⊆ U , it has a finite basis, say {x1 , . . . , xd }. Extend it to a basis {x1 , . . . , xd , u1 , . . . , um }
of U by Theorem 6.4.1. Similarly extend {x1 , . . . , xd } to a basis {x1 , . . . , xd , w1 , . . . , w p } of W . Then
U +W = span {x1 , . . . , xd , u1 , . . . , um , w1 , . . . , w p }
as the reader can verify, so U +W is finite dimensional. For the rest, it suffices to show that
{x1 , . . . , xd , u1 , . . . , um , w1 , . . . , w p } is independent (verify). Suppose that
r1x1 + · · · + rdxd + s1u1 + · · · + smum + t1w1 + · · · + tpwp = 0    (6.1)
Then the vector
r1x1 + · · · + rdxd + s1u1 + · · · + smum = −(t1w1 + · · · + tpwp)
is in U (left side) and also in W (right side), and so is in U ∩ W. Hence t1w1 + · · · + tpwp is a linear combination of {x1, . . . , xd}, so t1 = · · · = tp = 0, because {x1, . . . , xd, w1, . . . , wp} is independent.
Similarly, s1 = · · · = sm = 0, so (6.1) becomes r1 x1 + · · · + rd xd = 0. It follows that r1 = · · · = rd = 0, as
required.
Theorem 6.4.5 is particularly interesting if U ∩ W = {0}. Then there are no vectors xi in the above
proof, and the argument shows that if {u1 , . . . , um } and {w1 , . . . , w p } are bases of U and W respectively,
then {u1 , . . . , um , w1 , . . . , w p } is a basis of U + W . In this case U +W is said to be a direct sum (written
U ⊕W ); we return to this in Chapter 9.
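For subspaces of Rⁿ given as column spans, every quantity in Theorem 6.4.5 is a matrix rank, so the formula can be tested directly. A sketch (ours, assuming NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    U = rng.standard_normal((5, 3))   # columns span a subspace U of R^5
    W = rng.standard_normal((5, 2))   # columns span a subspace W of R^5

    dim_U = np.linalg.matrix_rank(U)
    dim_W = np.linalg.matrix_rank(W)
    dim_sum = np.linalg.matrix_rank(np.hstack([U, W]))   # dim(U + W)

    # Theorem 6.4.5 rearranged: dim(U ∩ W) = dim U + dim W - dim(U + W)
    print(dim_U + dim_W - dim_sum)   # 0 here: these random subspaces meet only in {0}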
Exercise 6.4.1 In each case, find a basis for V that includes the vector v.

a. V = R3, v = (1, −1, 1)
b. V = R3, v = (0, 1, 1)
c. V = M22, v = [1 1; 1 1]
d. V = P2, v = x² − x + 1

Exercise 6.4.2 In each case, find a basis for V among the given vectors.

a. V = R3, {(1, 1, −1), (2, 0, 1), (−1, 1, −2), (1, 2, 1)}
b. V = P2, {x² + 3, x + 2, x² − 2x − 1, x² + x}

Exercise 6.4.3 In each case, find a basis of V containing v and w.

a. V = R4, v = (1, −1, 1, −1), w = (0, 1, 0, 1)
b. V = R4, v = (0, 0, 1, 1), w = (1, 1, 1, 1)
c. V = M22, v = [1 0; 0 1], w = [0 1; 1 0]
d. V = P3, v = x² + 1, w = x² + x

Exercise 6.4.4

a. If z is not a real number, show that {z, z²} is a basis of the real vector space C of all complex numbers.
b. If z is neither real nor pure imaginary, show that {z, z̄} is a basis of C.

Exercise 6.4.5 In each case use Theorem 6.4.4 to decide if S is a basis of V.

a. V = M22; S = {[1 1; 1 1], [0 1; 1 1], [0 0; 1 1], [0 0; 0 1]}
b. V = P3; S = {2x², 1 + x, 3, 1 + x + x² + x³}

Exercise 6.4.6

a. Find a basis of M22 consisting of matrices with the property that A² = A.
b. Find a basis of P3 consisting of polynomials whose coefficients sum to 4. What if they sum to 0?

Exercise 6.4.7 If {u, v, w} is a basis of V, determine which of the following are bases.

a. {u + v, u + w, v + w}
b. {2u + v + 3w, 3u + v − w, u − 4w}
c. {u, u + v + w}
d. {u, u + w, u − w, v + w}

Exercise 6.4.8

a. Can two vectors span R3? Can they be linearly independent? Explain.
b. Can four vectors span R3? Can they be linearly independent? Explain.

Exercise 6.4.9 Show that any nonzero vector in a finite dimensional vector space is part of a basis.

Exercise 6.4.10 If A is a square matrix, show that det A = 0 if and only if some row is a linear combination of the others.

Exercise 6.4.11 Let D, I, and X denote finite, nonempty sets of vectors in a vector space V. Assume that D is dependent and I is independent. In each case answer yes or no, and defend your answer.

a. If X ⊇ D, must X be dependent?
b. If X ⊆ D, must X be dependent?
c. If X ⊇ I, must X be independent?
d. If X ⊆ I, must X be independent?
Exercise 6.4.12 If U and W are subspaces of V and dim U = 2, show that either U ⊆ W or dim(U ∩ W) ≤ 1.

Exercise 6.4.13 Let A be a nonzero 2 × 2 matrix and write U = {X in M22 | XA = AX}. Show that dim U ≥ 2. [Hint: I and A are in U.]

Exercise 6.4.14 If U ⊆ R2 is a subspace, show that U = {0}, U = R2, or U is a line through the origin.

Exercise 6.4.15 Given v1, v2, v3, . . . , vk, and v, let U = span{v1, v2, . . . , vk} and W = span{v1, v2, . . . , vk, v}. Show that either dim W = dim U or dim W = 1 + dim U.

Exercise 6.4.16 Suppose U is a subspace of P1, U ≠ {0}, and U ≠ P1. Show that either U = R or U = R(a + x) for some a in R.

Exercise 6.4.17 Let U be a subspace of V and assume dim V = 4 and dim U = 2. Does every basis of V result from adding (two) vectors to some basis of U? Defend your answer.

Exercise 6.4.18 Let U and W be subspaces of a vector space V.

Exercise 6.4.22

a. Let p(x) and q(x) lie in P1 and suppose that p(1) ≠ 0, q(2) ≠ 0, and p(2) = 0 = q(1). Show that {p(x), q(x)} is a basis of P1. [Hint: If rp(x) + sq(x) = 0, evaluate at x = 1, x = 2.]

b. Let B = {p0(x), p1(x), . . . , pn(x)} be a set of polynomials in Pn. Assume that there exist numbers a0, a1, . . . , an such that pi(ai) ≠ 0 for each i but pi(aj) = 0 if i is different from j. Show that B is a basis of Pn.

Exercise 6.4.23 Let V be the set of all infinite sequences (a0, a1, a2, . . . ) of real numbers. Define addition and scalar multiplication by

(a0, a1, . . . ) + (b0, b1, . . . ) = (a0 + b0, a1 + b1, . . . )

and

r(a0, a1, . . . ) = (ra0, ra1, . . . )

a. Show that V is a vector space.

b. Show that V is not finite dimensional.

c. [For those with some calculus.] Show that the set of convergent sequences (that is, lim_{n→∞} an exists) is a subspace, also of infinite dimension.

Exercise 6.4.26 If A and B are m × n matrices, show that rank(A + B) ≤ rank A + rank B. [Hint: If U and V are the column spaces of A and B, respectively, show that the column space of A + B is contained in U + V and that dim(U + V) ≤ dim U + dim V. (See Theorem 6.4.5.)]
6.5 An Application to Polynomials
The vector space of all polynomials of degree at most n is denoted Pn , and it was established in Section 6.3
that Pn has dimension n + 1; in fact, {1, x, x2 , . . . , xn } is a basis. More generally, any n + 1 polynomials
of distinct degrees form a basis, by Theorem 6.4.4 (they are independent by Example 6.3.4). This proves
Theorem 6.5.1
Let p0 (x), p1 (x), p2 (x), . . . , pn (x) be polynomials in Pn of degrees 0, 1, 2, . . . , n, respectively.
Then {p0 (x), . . . , pn (x)} is a basis of Pn .
An immediate consequence is that {1, (x − a), (x − a)2 , . . . , (x − a)n } is a basis of Pn for any number
a. Hence we have the following:
Corollary 6.5.1
If a is any number, every polynomial f (x) of degree at most n has an expansion in powers of
(x − a):
f (x) = a0 + a1 (x − a) + a2 (x − a)2 + · · · + an (x − a)n (6.2)
Taking x = a in this expansion gives f(a) = a0 + a1(a − a) + · · · + an(a − a)n = a0
Hence a0 = f (a), and equation (6.2) can be written f (x) = f (a) + (x − a)g(x), where g(x) is a polynomial
of degree n − 1 (this assumes that n ≥ 1). If it happens that f (a) = 0, then it is clear that f (x) has the form
f (x) = (x − a)g(x). Conversely, every such polynomial certainly satisfies f (a) = 0, and we obtain:
Corollary 6.5.2
Let f(x) be a polynomial of degree n ≥ 1 and let a be any number. Then:
1. (Remainder Theorem) f(x) = (x − a)g(x) + f(a) for some polynomial g(x) of degree n − 1.
2. (Factor Theorem) f(x) = (x − a)g(x) for some polynomial g(x) if and only if f(a) = 0.
The polynomial g(x) can be computed easily by using “long division” to divide f (x) by (x − a)—see
Appendix D.
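In a computer algebra system the division takes one line; here is a sketch assuming SymPy, using f(x) = 5x3 + 10x + 2 and a = 1 purely as an illustration.

from sympy import symbols, div

x = symbols('x')
f = 5*x**3 + 10*x + 2
a = 1
g, r = div(f, x - a, x)    # f = (x - a)g + r with deg r < 1
print(g)                   # 5*x**2 + 5*x + 15
print(r, f.subs(x, a))     # 17 17 -- the remainder equals f(a)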
All the coefficients in the expansion (6.2) of f(x) in powers of (x − a) can be determined in terms of the derivatives of f(x).6 These will be familiar to students of calculus. Let f(n)(x) denote the nth derivative of f(x), and write f(0)(x) = f(x). Then Taylor's theorem asserts that, if f(x) is a polynomial of degree at most n,
f(x) = f(a) + f(1)(a)(x − a) + (f(2)(a)/2!)(x − a)2 + · · · + (f(n)(a)/n!)(x − a)n

6 The discussion of Taylor's theorem can be omitted with no loss of continuity.
Example 6.5.1
Expand f (x) = 5x3 + 10x + 2 as a polynomial in powers of x − 1.
Solution. Here f(1) = 17, and the derivatives are f(1)(x) = 15x2 + 10, f(2)(x) = 30x, and f(3)(x) = 30. Hence the
Taylor expansion is
f(x) = f(1) + f(1)(1)(x − 1) + (f(2)(1)/2!)(x − 1)2 + (f(3)(1)/3!)(x − 1)3
     = 17 + 25(x − 1) + 15(x − 1)2 + 5(x − 1)3
Taylor’s theorem is useful in that it provides a formula for the coefficients in the expansion. It is dealt
with in calculus texts and will not be pursued here.
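The expansion found in Example 6.5.1 is easy to double-check symbolically; a sketch assuming SymPy:

from sympy import symbols, expand

x = symbols('x')
f = 5*x**3 + 10*x + 2
taylor = 17 + 25*(x - 1) + 15*(x - 1)**2 + 5*(x - 1)**3
print(expand(taylor - f))   # 0, so the two polynomials agree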
Theorem 6.5.1 produces bases of Pn consisting of polynomials of distinct degrees. A different criterion
is involved in the next theorem.
Theorem 6.5.2
Let f0 (x), f1 (x), . . . , fn (x) be nonzero polynomials in Pn . Assume that numbers a0 , a1 , . . . , an
exist such that
fi(ai) ≠ 0 for each i
fi(aj) = 0 if i ≠ j
Then
1. { f0 (x), . . . , fn (x)} is a basis of Pn .
2. If f(x) is any polynomial in Pn, its expansion as a linear combination of these basis vectors is
f(x) = (f(a0)/f0(a0))f0(x) + (f(a1)/f1(a1))f1(x) + · · · + (f(an)/fn(an))fn(x)
Proof.
1. It suffices (by Theorem 6.4.4) to show that { f0 (x), . . . , fn (x)} is linearly independent (because
dim Pn = n + 1). Suppose that
r0f0(x) + r1f1(x) + · · · + rnfn(x) = 0
Because fi(a0) = 0 for all i > 0, taking x = a0 gives r0f0(a0) = 0. But then r0 = 0 because f0(a0) ≠ 0.
The proof that ri = 0 for i > 0 is analogous.
2. By (1), f (x) = r0 f0 (x) + · · · + rn fn (x) for some numbers ri . Once again, evaluating at a0 gives
f (a0 ) = r0 f0 (a0 ), so r0 = f (a0 )/ f0 (a0 ). Similarly, ri = f (ai )/ fi (ai ) for each i.
Example 6.5.2
Show that {x2 − x, x2 − 2x, x2 − 3x + 2} is a basis of P2.
Solution. Write f0(x) = x2 − x = x(x − 1), f1(x) = x2 − 2x = x(x − 2), and f2(x) = x2 − 3x + 2 = (x − 1)(x − 2), and take a0 = 2, a1 = 1, and a2 = 0. Then fi(ai) ≠ 0 for each i while fi(aj) = 0 whenever i ≠ j, so Theorem 6.5.2 applies.
We investigate one natural choice of the polynomials fi (x) in Theorem 6.5.2. To illustrate, let a0 , a1 ,
and a2 be distinct numbers and write
f0(x) = (x − a1)(x − a2) / [(a0 − a1)(a0 − a2)],  f1(x) = (x − a0)(x − a2) / [(a1 − a0)(a1 − a2)],  f2(x) = (x − a0)(x − a1) / [(a2 − a0)(a2 − a1)]
Then f0(a0) = f1(a1) = f2(a2) = 1, and fi(aj) = 0 for i ≠ j. Hence Theorem 6.5.2 applies, and because
fi (ai ) = 1 for each i, the formula for expanding any polynomial is simplified.
In fact, this can be generalized with no extra effort. If a0 , a1 , . . . , an are distinct numbers, define the
Lagrange polynomials δ0 (x), δ1 (x), . . . , δn (x) relative to these numbers as follows:
δk(x) = ∏i≠k (x − ai) / ∏i≠k (ak − ai),  k = 0, 1, 2, . . . , n
Here the numerator is the product of all the terms (x − a0 ), (x − a1 ), . . . , (x − an ) with (x − ak ) omitted,
and a similar remark applies to the denominator. If n = 2, these are just the polynomials in the preceding
paragraph. For another example, if n = 3, the polynomial δ1 (x) takes the form
δ1(x) = (x − a0)(x − a2)(x − a3) / [(a1 − a0)(a1 − a2)(a1 − a3)]
In the general case, it is clear that δi(ai) = 1 for each i and that δi(aj) = 0 if i ≠ j. Hence Theorem 6.5.2
specializes as Theorem 6.5.3.
Theorem 6.5.3
Let a0, a1, . . . , an be distinct numbers. Then the corresponding set {δ0(x), δ1(x), . . . , δn(x)}
of Lagrange polynomials is a basis of Pn, and any polynomial f(x) in Pn has the following unique
expansion as a linear combination of these polynomials.
f (x) = f (a0 )δ0 (x) + f (a1 )δ1 (x) + · · · + f (an )δn (x)
Example 6.5.3
Find the Lagrange interpolation expansion for f (x) = x2 − 2x + 1 relative to a0 = −1, a1 = 0, and
a2 = 1.
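One way to carry out (and check) such an expansion is to build the Lagrange polynomials directly from their definition. A sketch assuming SymPy; the helper delta below is mine, not part of the text.

from sympy import symbols, expand

x = symbols('x')
nodes = [-1, 0, 1]
f = x**2 - 2*x + 1

def delta(k):
    # Lagrange polynomial delta_k relative to the nodes
    p = 1
    for i, a in enumerate(nodes):
        if i != k:
            p = p * (x - a) / (nodes[k] - a)
    return p

expansion = sum(f.subs(x, nodes[k]) * delta(k) for k in range(len(nodes)))
print(expand(expansion))   # x**2 - 2*x + 1, as Theorem 6.5.3 guarantees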
The Lagrange interpolation expansion gives an easy proof of the following important fact.
Theorem 6.5.4
Let f (x) be a polynomial in Pn , and let a0 , a1 , . . . , an denote distinct numbers. If f (ai ) = 0 for all
i, then f (x) is the zero polynomial (that is, all coefficients are zero).
Proof. All the coefficients in the Lagrange expansion of f (x) are zero.
Exercise 6.5.1 If polynomials f(x) and g(x) satisfy f(a) = g(a), show that f(x) − g(x) = (x − a)h(x) for some polynomial h(x).

Exercises 6.5.2, 6.5.3, 6.5.4, and 6.5.5 require polynomial differentiation.

Exercise 6.5.2 Expand each of the following as a polynomial in powers of x − 1.

a. f(x) = x3 − 2x2 + x − 1

b. f(x) = x3 + x + 1

c. f(x) = x4

where f(k)(x) denotes the kth derivative of f(x).

Exercise 6.5.6 Use Theorem 6.5.2 to show that the following are bases of P2.

a. {x2 − 2x, x2 + 2x, x2 − 4}

b. {x2 − 3x + 2, x2 − 4x + 3, x2 − 5x + 6}

Exercise 6.5.7 Find the Lagrange interpolation expansion of f(x) relative to a0 = 1, a1 = 2, and a2 = 3 if:

a. f(x) = x2 + 1

b. f(x) = x2 + x + 1

Exercise 6.5.8 Let a0, a1, . . . , an be distinct numbers. If f(x) and g(x) in Pn satisfy f(ai) = g(ai) for all i, show that f(x) = g(x). [Hint: See Theorem 6.5.4.]

Exercise 6.5.9 Let a0, a1, . . . , an be distinct numbers. If f(x) ∈ Pn+1 satisfies f(ai) = 0 for each i = 0, 1, . . . , n, show that f(x) = r(x − a0)(x − a1) · · · (x − an) for some r in R. [Hint: r is the coefficient of xn+1 in f(x). Consider f(x) − r(x − a0) · · · (x − an) and use Theorem 6.5.4.]

b. Show that dim Un = n − 1. [Hint: If p(x)q(x) = 0 in P, then either p(x) = 0, or q(x) = 0.]

c. Show that {(x − a)n−1(x − b), (x − a)n−2(x − b)2, . . . , (x − a)2(x − b)n−2, (x − a)(x − b)n−1} is a basis of Un. [Hint: Exercise 6.5.10.]
Theorem 6.6.1
The set of solutions of the first-order differential equation f ′ + a f = 0 is a one-dimensional vector
space and {e−ax } is a basis.
There is a far-reaching generalization of Theorem 6.6.1 that will be proved in Theorem 7.4.1.
Theorem 6.6.2
The set of solutions to the nth order equation (6.3) has dimension n.
Remark
Every differential equation of order n can be converted into a system of n linear first-order equations (see
Exercises 3.5.6 and 3.5.7). In the case that the matrix of this system is diagonalizable, this approach
provides a proof of Theorem 6.6.2. But if the matrix is not diagonalizable, Theorem 7.4.1 is required.
Theorem 6.6.1 suggests that we look for solutions to (6.3) of the form eλ x for some number λ . This is
a good idea. If we write f (x) = eλ x , it is easy to verify that f (k) (x) = λ k eλ x for each k ≥ 0, so substituting
f in (6.3) gives
(λn + an−1λn−1 + an−2λn−2 + · · · + a2λ2 + a1λ + a0)eλx = 0
Since eλx ≠ 0 for all x, this shows that eλx is a solution of (6.3) if and only if λ is a root of the characteristic
polynomial c(x), defined to be
c(x) = xn + an−1 xn−1 + an−2 xn−2 + · · · + a2 x2 + a1 x + a0
Theorem 6.6.3
If λ is real, the function eλ x is a solution of (6.3) if and only if λ is a root of the characteristic
polynomial c(x).
Example 6.6.1
Find a basis of the space U of solutions of f ′′′ − 2f ′′ − f ′ + 2f = 0.
Solution. The characteristic polynomial is x3 − 2x2 − x + 2 = (x − 1)(x + 1)(x − 2), with roots
λ1 = 1, λ2 = −1, and λ3 = 2. Hence ex , e−x , and e2x are all in U . Moreover they are independent
(by Lemma 6.6.1 below) so, since dim (U ) = 3 by Theorem 6.6.2, {ex , e−x , e2x } is a basis of U .
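The roots can also be found numerically; a one-line sketch assuming NumPy:

import numpy as np

# Coefficients of the characteristic polynomial x^3 - 2x^2 - x + 2
print(np.roots([1, -2, -1, 2]))   # the roots 2, -1 and 1 (in some order)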
Lemma 6.6.1
If λ1 , λ2 , . . . , λk are distinct, then {eλ1 x , eλ2 x , . . . , eλk x } is linearly independent.
Proof. If r1eλ1x + r2eλ2x + · · · + rkeλkx = 0 for all x, then dividing by eλ1x gives r1 + r2e(λ2−λ1)x + · · · + rke(λk−λ1)x = 0; that is,
r2e(λ2−λ1)x + · · · + rke(λk−λ1)x is a constant. Since the λi are distinct, this forces r2 = · · · = rk = 0 (differentiate and argue by induction on k), whence
r1 = 0 also. This is what we wanted.
Theorem 6.6.4
Let U denote the space of solutions to the second-order equation
f ′′ + a f ′ + b f = 0
where a and b are real constants. Assume that the characteristic polynomial x2 + ax + b has two
real roots λ and µ. Then:
1. If λ ≠ µ, then {eλx, eµx} is a basis of U.
2. If λ = µ, then {eλx, xeλx} is a basis of U.
Proof. Since dim (U ) = 2 by Theorem 6.6.2, (1) follows by Lemma 6.6.1, and (2) follows because the set
{eλ x , xeλ x } is independent (Exercise 6.6.3).
Example 6.6.2
Find the solution of f ′′ + 4 f ′ + 4 f = 0 that satisfies the boundary conditions f (0) = 1,
f (1) = −1.
One other question remains: What happens if the roots of the characteristic polynomial are not real?
To answer this, we must first state precisely what eλ x means when λ is not real. If q is a real number,
define
eiq = cos q + i sin q
where i2 = −1. Then the relationship eiq eiq1 = ei(q+q1 ) holds for all real q and q1 , as is easily verified. If
λ = p + iq, where p and q are real numbers, we define
eλ = ep eiq = ep(cos q + i sin q)
With this definition, the following properties can be verified:
1. eλ eµ = eλ+µ
2. eλ = 1 if and only if λ = 0
3. (eλx)′ = λeλx
For convenience, write f(x) = eλx = epx cos(qx) + i epx sin(qx), and denote the real and imaginary parts of f(x) as u(x) = epx cos(qx) and v(x) = epx sin(qx).
Then the fact that f(x) satisfies the differential equation gives
0 = f ′′ + af ′ + bf = (u′′ + au′ + bu) + i(v′′ + av′ + bv)
Equating real and imaginary parts shows that u(x) and v(x) are both solutions to the differential equation.
This proves part of Theorem 6.6.5.
Theorem 6.6.5
Let U denote the space of solutions of the second-order differential equation
f ′′ + a f ′ + b f = 0
where a and b are real. Suppose λ is a nonreal root of the characteristic polynomial x2 + ax + b. If
λ = p + iq, where p and q are real, then
{e px cos(qx), e px sin(qx)}
is a basis of U .
Proof. The foregoing discussion shows that these functions lie in U . Because dim U = 2 by Theo-
rem 6.6.2, it suffices to show that they are linearly independent. But if
re px cos(qx) + se px sin(qx) = 0
for all x, then r cos(qx) + s sin(qx) = 0 for all x (because epx ≠ 0). Taking x = 0 gives r = 0, and taking
x = π/(2q) gives s = 0 (q ≠ 0 because λ is not real). This is what we wanted.
Example 6.6.3
Find the solution f(x) to f ′′ − 2f ′ + 2f = 0 that satisfies f(0) = 2 and f(π/2) = 0.
Theorem 6.6.6
If q ≠ 0 is a real number, the space of solutions to the differential equation f ′′ + q2 f = 0 has basis
{cos(qx), sin(qx)}.
Proof. The characteristic polynomial x2 + q2 has roots qi and −qi, so Theorem 6.6.5 applies with p = 0.
In many situations, the displacement s(t) of some object at time t turns out to have an oscillating form
s(t) = c sin(at) + d cos(at). These are called simple harmonic motions. An example follows.
Example 6.6.4
A weight is attached to an extension spring (see diagram). If it is pulled
from the equilibrium position and released, it is observed to oscillate up
and down. Let d(t) denote the distance of the weight below the equilibrium
position t seconds later. It is known (Hooke’s law) that the acceleration
d ′′ (t) of the weight is proportional to the displacement d(t) and in the opposite
direction. That is,
d ′′ (t) = −kd(t)
where k > 0 is called the spring constant. Find d(t) if the maximum extension
is 10 cm below the equilibrium position and find the period of the oscillation
(time taken for the weight to make a full oscillation).
Solution. It follows from Theorem 6.6.6 (with q2 = k) that
d(t) = r sin(√k t) + s cos(√k t)
where r and s are constants. The condition d(0) = 0 gives s = 0, so d(t) = r sin(√k t). Now the
maximum value of the function sin x is 1 (when x = π/2), so r = 10 (when t = π/(2√k)). Hence
d(t) = 10 sin(√k t)
Finally, the weight goes through a full oscillation as √k t increases from 0 to 2π. The time taken is
t = 2π/√k, the period of the oscillation.
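For a hypothetical spring constant, say k = 4 (a value made up for illustration), the period is computed directly; a sketch assuming NumPy:

import numpy as np

k = 4.0                           # hypothetical spring constant
period = 2 * np.pi / np.sqrt(k)   # period of d(t) = 10 sin(sqrt(k) t)
print(period)                     # 3.14159... seconds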
j. f ′′ + 4f ′ + 5f = 0; f(0) = 0, f(π/2) = 1

a. Find the mass t hours later.

Exercise 6.6.8 Consider a spring, as in Example 6.6.4. If the period of the oscillation is 30 seconds, find the spring constant k.

Exercise 6.6.9 As a pendulum swings (see the diagram), let t measure the time since it was vertical. The angle θ = θ(t) from the vertical can be shown to satisfy the equation θ ′′ + kθ = 0, provided that θ is small. If the maximal angle is θ = 0.05 radians, find θ(t) in terms of k. If the period is 0.5 seconds, find k. [Assume that θ = 0 when t = 0.]
Supplementary Exercises for Chapter 6

Exercise 6.1 (Requires calculus) Let V denote the space of all functions f : R → R for which the derivatives f ′ and f ′′ exist. Show that f1, f2, and f3 in V are linearly independent provided that their wronskian w(x) is nonzero for some x, where

w(x) = det [f1(x) f2(x) f3(x); f1′(x) f2′(x) f3′(x); f1′′(x) f2′′(x) f3′′(x)]

Exercise 6.2 Let {v1, v2, . . . , vn} be a basis of Rn (written as columns), and let A be an n × n matrix.

Exercise 6.3 If A is an m × n matrix, show that A has rank m if and only if col A contains every column of Im.

Exercise 6.4 Show that null A = null(ATA) for any real matrix A.

Exercise 6.5 Let A be an m × n matrix of rank r. Show that dim(null A) = n − r (Theorem 5.4.3) as follows: choose a basis {x1, . . . , xk} of null A and extend it to a basis {x1, . . . , xk, z1, . . . , zm} of Rn. Show that {Az1, . . . , Azm} is a basis of col A.
Axiom T1 is just the requirement that T preserves vector addition. It asserts that the result T (v + v1 )
of adding v and v1 first and then applying T is the same as applying T first to get T (v) and T (v1 ) and
then adding. Similarly, axiom T2 means that T preserves scalar multiplication. Note that, even though the
additions in axiom T1 are both denoted by the same symbol +, the addition on the left forming v + v1 is
carried out in V , whereas the addition T (v) + T (v1 ) is done in W . Similarly, the scalar multiplications rv
and rT (v) in axiom T2 refer to the spaces V and W , respectively.
We have already seen many examples of linear transformations T : Rn → Rm . In fact, writing vectors
in Rn as columns, Theorem 2.6.2 shows that, for each such T , there is an m × n matrix A such that
T(x) = Ax for every x in Rn. Moreover, the matrix A is given by A = [T(e1) T(e2) · · · T(en)]
where {e1 , e2 , . . . , en } is the standard basis of Rn . We denote this transformation by TA : Rn → Rm ,
defined by
TA (x) = Ax for all x in Rn
Example 7.1.1 lists three important linear transformations that will be referred to later. The verification
of axioms T1 and T2 is left to the reader.
Example 7.1.1
If V and W are vector spaces, the following are linear transformations:
1. The identity operator 1V : V → V defined by 1V(v) = v for all v in V.
2. The zero transformation 0 : V → W defined by 0(v) = 0 for all v in V.
3. The scalar operator a : V → V defined by a(v) = av for all v in V (where a is any scalar).
The symbol 0 will be used to denote the zero transformation from V to W for any spaces V and W . It
was also used earlier to denote the zero function [a, b] → R.
The next example gives two important transformations of matrices. Recall that the trace tr A of an
n × n matrix A is the sum of the entries on the main diagonal.
Example 7.1.2
Show that the transposition map and the trace map are linear transformations. More precisely,
R : Mmn → Mnm where R(A) = AT for all A in Mmn
S : Mnn → R where S(A) = tr A for all A in Mnn
Solution. Axioms T1 and T2 for transposition are (A + B)T = AT + BT and (rA)T = r(AT),
respectively (using Theorem 2.1.2). The verifications for the trace are left to the reader.
Example 7.1.3
If a is a scalar, define Ea : Pn → R by Ea (p) = p(a) for each polynomial p in Pn . Show that Ea is a
linear transformation (called evaluation at a).
Solution. If p and q are polynomials and r is in R, we use the fact that the sum p + q and scalar
product rp are defined as for functions:
(p + q)(x) = p(x) + q(x) and (rp)(x) = rp(x) for all x
Hence Ea(p + q) = (p + q)(a) = p(a) + q(a) = Ea(p) + Ea(q), and Ea(rp) = (rp)(a) = rp(a) = rEa(p), so Ea is linear.
Example 7.1.4
Show that the differentiation and integration operations on Pn are linear transformations. More
precisely,
D : Pn → Pn−1 where D[p(x)] = p′(x) for all p(x) in Pn
I : Pn → Pn+1 where I[p(x)] = ∫₀ˣ p(t) dt for all p(x) in Pn
Solution. These restate the following fundamental properties of differentiation and integration:
[p(x) + q(x)]′ = p′(x) + q′(x) and [rp(x)]′ = rp′(x)
∫₀ˣ [p(t) + q(t)] dt = ∫₀ˣ p(t) dt + ∫₀ˣ q(t) dt and ∫₀ˣ rp(t) dt = r ∫₀ˣ p(t) dt
The next theorem collects three useful properties of all linear transformations. They can be described
by saying that, in addition to preserving addition and scalar multiplication (these are the axioms), linear
transformations preserve the zero vector, negatives, and linear combinations.
Theorem 7.1.1
Let T : V → W be a linear transformation.
1. T(0) = 0.
2. T(−v) = −T(v) for all v in V.
3. T(r1v1 + r2v2 + · · · + rkvk) = r1T(v1) + r2T(v2) + · · · + rkT(vk) for all vi in V and all ri in R.
Proof.
The ability to use the last part of Theorem 7.1.1 effectively is vital to obtaining the benefits of linear
transformations. Example 7.1.5 and Theorem 7.1.2 provide illustrations.
Example 7.1.5
Let T : V → W be a linear transformation. If T (v − 3v1 ) = w and T (2v − v1 ) = w1 , find T (v) and
T (v1 ) in terms of w and w1 .
Solution. The given conditions read
T(v) − 3T(v1) = w
2T(v) − T(v1) = w1
by Theorem 7.1.1. Subtracting twice the first from the second gives T(v1) = (1/5)(w1 − 2w). Then
substitution in the first equation gives T(v) = (1/5)(3w1 − w).
The full effect of property (3) in Theorem 7.1.1 is this: If T : V → W is a linear transformation and
T (v1 ), T (v2 ), . . . , T (vn ) are known, then T (v) can be computed for every vector v in span {v1 , v2 , . . . , vn }.
In particular, if {v1 , v2 , . . . , vn } spans V , then T (v) is determined for all v in V by the choice of
T (v1 ), T (v2 ), . . . , T (vn ). The next theorem states this somewhat differently. As for functions in gen-
eral, two linear transformations T : V → W and S : V → W are called equal (written T = S) if they have
the same action; that is, if T (v) = S(v) for all v in V .
Theorem 7.1.2
Let T : V → W and S : V → W be two linear transformations. Suppose that
V = span {v1, v2, . . . , vn}. If T(vi) = S(vi) for each i, then T = S.

Proof. Given v in V, write v = a1v1 + a2v2 + · · · + anvn, where each ai is in R. Then
T (v) = T (a1 v1 + a2 v2 + · · · + an vn )
= a1 T (v1 ) + a2 T (v2 ) + · · · + an T (vn )
= a1 S(v1 ) + a2 S(v2 ) + · · · + an S(vn )
= S(a1 v1 + a2 v2 + · · · + an vn )
= S(v)
Hence T(v) = S(v) for every v in V; that is, T = S.
Example 7.1.6
Let V = span {v1 , . . . , vn }. Let T : V → W be a linear transformation. If T (v1 ) = · · · = T (vn ) = 0,
show that T = 0, the zero transformation from V to W .
Solution. The zero transformation 0 : V → W is defined by 0(v) = 0 for all v in V (Example 7.1.1),
so T (vi ) = 0(vi ) holds for each i. Hence T = 0 by Theorem 7.1.2.
Theorem 7.1.2 can be expressed as follows: If we know what a linear transformation T : V → W does
to each vector in a spanning set for V , then we know what T does to every vector in V . If the spanning set
is a basis, we can say much more.
Theorem 7.1.3
Let V and W be vector spaces and let {b1 , b2 , . . . , bn } be a basis of V . Given any vectors
w1 , w2 , . . . , wn in W (they need not be distinct), there exists a unique linear transformation
T : V → W satisfying T (bi ) = wi for each i = 1, 2, . . . , n. In fact, the action of T is as follows:
Given v = v1 b1 + v2 b2 + · · · + vn bn in V , vi in R, then
T (v) = T (v1 b1 + v2 b2 + · · · + vn bn ) = v1 w1 + v2 w2 + · · · + vn wn .
Proof. If a transformation T does exist with T (bi ) = wi for each i, and if S is any other such transformation,
then T (bi ) = wi = S(bi ) holds for each i, so S = T by Theorem 7.1.2. Hence T is unique if it exists, and
it remains to show that there really is such a linear transformation. Given v in V , we must specify T (v) in
W . Because {b1 , . . . , bn } is a basis of V , we have v = v1 b1 + · · · + vn bn , where v1 , . . . , vn are uniquely
determined by v (this is Theorem 6.3.1). Hence we may define T : V → W by
T (v) = T (v1 b1 + v2 b2 + · · · + vn bn ) = v1 w1 + v2 w2 + · · · + vn wn
for all v = v1 b1 + · · · + vn bn in V . This satisfies T (bi ) = wi for each i; the verification that T is linear is
left to the reader.
This theorem shows that linear transformations can be defined almost at will: Simply specify where
the basis vectors go, and the rest of the action is dictated by the linearity. Moreover, Theorem 7.1.2 shows
that deciding whether two linear transformations are equal comes down to determining whether they have
the same effect on the basis vectors. So, given a basis {b1 , . . . , bn } of a vector space V , there is a different
linear transformation V → W for every ordered selection w1 , w2 , . . . , wn of vectors in W (not necessarily
distinct).
Example 7.1.7
Find a linear transformation T : P2 → M22 such that
T(1 + x) = [1 0; 0 0], T(x + x2) = [0 1; 1 0], and T(1 + x2) = [0 0; 0 1].
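Since {1 + x, x + x2, 1 + x2} is a basis of P2, Theorem 7.1.3 guarantees that such a T exists and is unique. A sketch (assuming NumPy) of one way to compute T(1), T(x), and T(x2), by expressing the standard basis of P2 in terms of the given basis:

import numpy as np

# Columns: coordinates of 1+x, x+x^2, 1+x^2 with respect to {1, x, x^2}
M = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1]], dtype=float)
B = [np.array([[1., 0.], [0., 0.]]),   # T(1 + x)
     np.array([[0., 1.], [1., 0.]]),   # T(x + x^2)
     np.array([[0., 0.], [0., 1.]])]   # T(1 + x^2)
# If p has coordinates c in the given basis, then T(p) = c[0]B[0] + c[1]B[1] + c[2]B[2]
for i, name in enumerate(['T(1)', 'T(x)', 'T(x^2)']):
    c = np.linalg.solve(M, np.eye(3)[:, i])
    print(name, '=')
    print(c[0]*B[0] + c[1]*B[1] + c[2]*B[2])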
Exercise 7.1.3 In each case, assume that T is a linear transformation.

a. If T : V → R and T(v1) = 1, T(v2) = −1, find T(3v1 − 5v2).

b. If T : V → R and T(v1) = 2, T(v2) = −3, find T(3v1 + 2v2).

Exercise 7.1.6 If T : V → W is a linear transformation, show that T(v − v1) = T(v) − T(v1) for all v and v1 in V.

Exercise 7.1.7 Let {e1, e2} be the standard basis of R2. Is it possible to have a linear transformation T such that T(e1) lies in R while T(e2) lies in R2? Explain your answer.
Exercise 7.1.8 Let {v1, . . . , vn} be a basis of V and let T : V → V be a linear transformation.

a. If T(vi) = vi for each i, show that T = 1V.

b. If T(vi) = −vi for each i, show that T = −1 is the scalar operator (see Example 7.1.1).

Exercise 7.1.9 If A is an m × n matrix, let Ck(A) denote column k of A. Show that Ck : Mmn → Rm is a linear transformation for each k = 1, . . . , n.

Exercise 7.1.10 Let {e1, . . . , en} be a basis of Rn. Given k, 1 ≤ k ≤ n, define Pk : Rn → Rn by Pk(r1e1 + · · · + rnen) = rkek. Show that Pk is a linear transformation for each k.

Exercise 7.1.11 Let S : V → W and T : V → W be linear transformations. Given a in R, define functions (S + T) : V → W and (aT) : V → W by (S + T)(v) = S(v) + T(v) and (aT)(v) = aT(v) for all v in V. Show that S + T and aT are linear transformations.

Exercise 7.1.12 Describe all linear transformations T : R → V.

Exercise 7.1.13 Let V and W be vector spaces, let V be finite dimensional, and let v ≠ 0 in V. Given any w in W, show that there exists a linear transformation T : V → W with T(v) = w. [Hint: Theorem 6.4.1 and Theorem 7.1.3.]

Exercise 7.1.14 Given y in Rn, define Sy : Rn → R by Sy(x) = x · y for all x in Rn (where · is the dot product introduced in Section 5.3).

a. Show that Sy : Rn → R is a linear transformation for any y in Rn.

b. Show that every linear transformation T : Rn → R arises in this way; that is, T = Sy for some y in Rn. [Hint: If {e1, . . . , en} is the standard basis of Rn, write Sy(ei) = yi for each i. Use Theorem 7.1.1.]

Exercise 7.1.15 Let T : V → W be a linear transformation.

a. If U is a subspace of V, show that T(U) = {T(u) | u in U} is a subspace of W (called the image of U under T).

b. If P is a subspace of W, show that {v in V | T(v) in P} is a subspace of V (called the preimage of P under T).

Exercise 7.1.16 Show that differentiation is the only linear transformation Pn → Pn that satisfies T(xk) = kxk−1 for each k = 0, 1, 2, . . . , n.

Exercise 7.1.17 Let T : V → W be a linear transformation and let v1, . . . , vn denote vectors in V.

a. If {T(v1), . . . , T(vn)} is linearly independent, show that {v1, . . . , vn} is also independent.

b. Find T : R2 → R2 for which the converse of part (a) is false.

Exercise 7.1.18 Suppose T : V → V is a linear operator with the property that T[T(v)] = v for all v in V. (For example, transposition in Mnn or conjugation in C.) If v ≠ 0 in V, show that {v, T(v)} is linearly independent if and only if T(v) ≠ v and T(v) ≠ −v.

Exercise 7.1.19 If a and b are real numbers, define Ta,b : C → C by Ta,b(r + si) = ra + sbi for all r + si in C.

a. Show that Ta,b is linear and that Ta,b(z̄) is the conjugate of Ta,b(z) for all z in C (here z̄ denotes the conjugate of z).

b. If T : C → C is linear and T(z̄) is the conjugate of T(z) for all z in C, show that T = Ta,b for some real a and b.

Exercise 7.1.20 Show that the following conditions are equivalent for a linear transformation T : M22 → M22.

1. tr[T(A)] = tr A for all A in M22.

2. T [r11 r12; r21 r22] = r11B11 + r12B12 + r21B21 + r22B22 for matrices Bij such that tr B11 = 1 = tr B22 and tr B12 = 0 = tr B21.

Exercise 7.1.21 Given a in R, consider the evaluation map Ea : Pn → R defined in Example 7.1.3.

a. Show that Ea is a linear transformation satisfying the additional condition that Ea(xk) = [Ea(x)]k holds for all k = 0, 1, 2, . . . . [Note: x0 = 1.]

b. If T : Pn → R is a linear transformation satisfying T(xk) = [T(x)]k for all k = 0, 1, 2, . . . , show that T = Ea for some a in R.
Exercise 7.1.22 If T : Mnn → R is any linear transformation satisfying T(AB) = T(BA) for all A and B in Mnn, show that there exists a number k such that T(A) = k tr A for all A. (See Lemma 5.5.1.) [Hint: Let Eij denote the n × n matrix with 1 in the (i, j) position and zeros elsewhere. Show that EikElj = 0 if k ≠ l and EikElj = Eij if k = l. Use this to show that T(Eij) = 0 if i ≠ j and T(E11) = T(E22) = · · · = T(Enn). Put k = T(E11) and use the fact that {Eij | 1 ≤ i, j ≤ n} is a basis of Mnn.]

Exercise 7.1.23 Let T : C → C be a linear transformation of the real vector space C and assume that T(a) = a for every real number a. Show that the following are equivalent:

a. T(zw) = T(z)T(w) for all z and w in C.

b. Either T = 1C or T(z) = z̄ for each z in C (where z̄ denotes the conjugate).
7.2 Kernel and Image of a Linear Transformation

This section is devoted to two important subspaces associated with a linear transformation T : V → W: the kernel of T, written ker T, and the image of T, written im T. They are defined by
ker T = {v in V | T (v) = 0}
im T = {T (v) | v in V } = T (V )
Theorem 7.2.1
Let T : V → W be a linear transformation.
1. ker T is a subspace of V .
2. im T is a subspace of W .
Proof. The fact that T(0) = 0 shows that ker T and im T contain the zero vector of V and W respectively.
1. If v and v1 lie in ker T, then T(v) = 0 = T(v1), so
T(v + v1) = T(v) + T(v1) = 0 + 0 = 0
T(rv) = rT(v) = r0 = 0 for all r in R
Hence v + v1 and rv lie in ker T (they satisfy the required condition), so ker T is a subspace of V
by the subspace test (Theorem 6.2.1).
2. If w and w1 lie in im T, write w = T(v) and w1 = T(v1) where v and v1 are in V. Then
w + w1 = T(v) + T(v1) = T(v + v1)
rw = rT(v) = T(rv) for all r in R
Hence w + w1 and rw both lie in im T (they have the required form), so im T is a subspace of W.
The rank of a matrix A was defined earlier to be the dimension of col A, the column space of A. The two
usages of the word rank are consistent in the following sense. Recall the definition of TA in Example 7.2.1.
Example 7.2.2
Given an m × n matrix A, show that im TA = col A, so rank TA = rank A.
Solution. Write A = [c1 c2 · · · cn] in terms of its columns. Then
im TA = {Ax | x in Rn } = {x1 c1 + · · · + xn cn | xi in R}
using Definition 2.5. Hence im TA is the column space of A; the rest follows.
Often, a useful way to study a subspace of a vector space is to exhibit it as the kernel or image of a
linear transformation. Here is an example.
Example 7.2.3
Define a transformation P : Mnn → Mnn by P(A) = A − AT for all A in Mnn. Show that P is linear
and that:
a. ker P consists of all symmetric matrices.
b. im P consists of all skew-symmetric matrices.
Solution. The verification that P is linear is left to the reader. To prove part (a), note that a matrix
A lies in ker P just when 0 = P(A) = A − AT , and this occurs if and only if A = AT —that is, A is
symmetric. Turning to part (b), the space im P consists of all matrices P(A), A in Mnn . Every such
matrix is skew-symmetric because
P(A)T = (A − AT )T = AT − A = −P(A)
On the other hand, if S is skew-symmetric (that is, ST = −S), then S lies in im P. In fact,
P((1/2)S) = (1/2)S − ((1/2)S)T = (1/2)(S − ST) = (1/2)(S + S) = S
1. T is said to be onto if im T = W.
2. T is said to be one-to-one if T(v) = T(v1) always implies v = v1.
A vector w in W is said to be hit by T if w = T (v) for some v in V . Then T is onto if every vector in W
is hit at least once, and T is one-to-one if no element of W gets hit twice. Clearly the onto transformations
T are those for which im T = W is as large a subspace of W as possible. By contrast, Theorem 7.2.2
shows that the one-to-one transformations T are the ones with ker T as small a subspace of V as possible.
Theorem 7.2.2
If T : V → W is a linear transformation, then T is one-to-one if and only if ker T = {0}.
Proof. If T is one-to-one, let v be any vector in ker T . Then T (v) = 0, so T (v) = T (0). Hence v = 0
because T is one-to-one. Hence ker T = {0}.
Conversely, assume that ker T = {0} and let T (v) = T (v1 ) with v and v1 in V . Then
T (v − v1 ) = T (v) − T (v1 ) = 0, so v − v1 lies in ker T = {0}. This means that v − v1 = 0, so v = v1 ,
proving that T is one-to-one.
Example 7.2.4
The identity transformation 1V : V → V is both one-to-one and onto for any vector space V .
Example 7.2.5
Consider the linear transformations
S : R3 → R2 given by S(x, y, z) = (x + y, x − y)
T : R2 → R3 given by T (x, y) = (x + y, x − y, x)
Show that T is one-to-one but not onto, whereas S is onto but not one-to-one.
Solution. The verification that they are linear is omitted. T is one-to-one because if T(x, y) = T(x1, y1), then comparing third components gives x = x1, and then comparing first components gives y = y1.
However, it is not onto. For example (0, 0, 1) does not lie in im T because if
(0, 0, 1) = (x + y, x − y, x) for some x and y, then x + y = 0 = x − y and x = 1, an impossibility.
Turning to S, it is not one-to-one by Theorem 7.2.2 because (0, 0, 1) lies in ker S. But every
element (s, t) in R2 lies in im S because (s, t) = (x + y, x − y) = S(x, y, z) for some x, y, and z (in
fact, x = (1/2)(s + t), y = (1/2)(s − t), and z = 0). Hence S is onto.
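Theorem 7.2.3 below recasts such facts as rank conditions on the matrices of S and T; a quick numerical check (a sketch assuming NumPy):

import numpy as np

T = np.array([[1, 1], [1, -1], [1, 0]])   # matrix of T : R^2 -> R^3
S = np.array([[1, 1, 0], [1, -1, 0]])     # matrix of S : R^3 -> R^2
print(np.linalg.matrix_rank(T))   # 2 = n, so T is one-to-one but not onto (rank < m = 3)
print(np.linalg.matrix_rank(S))   # 2 = m, so S is onto but not one-to-one (rank < n = 3)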
Example 7.2.6
Let U be an invertible m × m matrix and define T : Mmn → Mmn by T(X) = UX for all X in Mmn. Show that T is both one-to-one and onto.
Solution. The verification that T is linear is left to the reader. To see that T is one-to-one, let
T (X ) = 0. Then U X = 0, so left-multiplication by U −1 gives X = 0. Hence ker T = {0}, so T is
one-to-one. Finally, if Y is any member of Mmn , then U −1Y lies in Mmn too, and
T (U −1Y ) = U (U −1Y ) = Y . This shows that T is onto.
The linear transformations Rn → Rm all have the form TA for some m × n matrix A (Theorem 2.6.2).
The next theorem gives conditions under which they are onto or one-to-one. Note the connection with
Theorem 5.4.3 and Theorem 5.4.4.
Theorem 7.2.3
Let A be an m × n matrix, and let TA : Rn → Rm be the linear transformation induced by A, that is
TA(x) = Ax for all columns x in Rn. Then:
1. TA is onto if and only if rank A = m.
2. TA is one-to-one if and only if rank A = n.
Proof.
1. We have that im TA is the column space of A (see Example 7.2.2), so TA is onto if and only if the
column space of A is Rm . Because the rank of A is the dimension of the column space, this holds if
and only if rank A = m.
2. ker TA = {x in Rn | Ax = 0}, so (using Theorem 7.2.2) TA is one-to-one if and only if Ax = 0 implies
x = 0. This is equivalent to rank A = n by Theorem 5.4.3.
Let A denote an m × n matrix of rank r and let TA : Rn → Rm denote the corresponding matrix transfor-
mation given by TA (x) = Ax for all columns x in Rn . It follows from Example 7.2.1 and Example 7.2.2
that im TA = col A, so dim ( im TA ) = dim ( col A) = r. On the other hand Theorem 5.4.2 shows that
dim(ker TA) = dim(null A) = n − r. Combining these we see that
dim(im TA) + dim(ker TA) = n
The following result, the dimension theorem, shows that this equation holds for any linear transformation; here rank T = dim(im T) and nullity T = dim(ker T).
Theorem 7.2.4: Dimension Theorem
Let T : V → W be any linear transformation and assume that ker T and im T are both finite
dimensional. Then V is also finite dimensional and
dim V = dim(ker T) + dim(im T)
Proof. Every vector in im T = T (V ) has the form T (v) for some v in V . Hence let {T (e1 ), T (e2 ), . . . , T (er )}
be a basis of im T , where the ei lie in V . Let {f1 , f2 , . . . , fk } be any basis of ker T . Then dim ( im T ) = r
and dim(ker T) = k, so it suffices to show that B = {e1, . . . , er, f1, . . . , fk} is a basis of V.
B spans V: if v lies in V, then T(v) lies in im T, so T(v) = t1T(e1) + t2T(e2) + · · · + trT(er) for some ti in R.
This implies that v − t1e1 − t2e2 − · · · − trer lies in ker T and so is a linear combination of f1, . . . , fk.
Hence v is a linear combination of the vectors in B.
B is independent: suppose that
t1e1 + · · · + trer + s1f1 + · · · + skfk = 0    (7.1)
Applying T gives t1T (e1 ) +· · ·+tr T (er ) = 0 (because T (fi ) = 0 for each i). Hence the independence
of {T (e1 ), . . . , T (er )} yields t1 = · · · = tr = 0. But then (7.1) becomes
s1f1 + · · · + skfk = 0
so s1 = · · · = sk = 0 by the independence of {f1, . . . , fk}. Hence B is independent, as required.
Note that the vector space V is not assumed to be finite dimensional in Theorem 7.2.4. In fact, verify-
ing that ker T and im T are both finite dimensional is often an important way to prove that V is finite
dimensional.
Note further that r + k = n in the proof so, after relabelling, we end up with a basis
B = {e1 , e2 , . . . , er , er+1 , . . . , en }
of V with the property that {er+1 , . . . , en } is a basis of ker T and {T (e1 ), . . . , T (er )} is a basis of im T .
In fact, if V is known in advance to be finite dimensional, then any basis {er+1 , . . . , en } of ker T can be
extended to a basis {e1 , e2 , . . . , er , er+1 , . . . , en } of V by Theorem 6.4.1. Moreover, it turns out that, no
matter how this is done, the vectors {T (e1 ), . . . , T (er )} will be a basis of im T . This result is useful, and
we record it for reference. The proof is much like that of Theorem 7.2.4 and is left as Exercise 7.2.26.
Theorem 7.2.5
Let T : V → W be a linear transformation, and let {e1 , . . . , er , er+1 , . . . , en } be a basis of V such
that {er+1 , . . . , en } is a basis of ker T . Then {T (e1 ), . . . , T (er )} is a basis of im T , and hence
r = rank T .
The dimension theorem is one of the most useful results in all of linear algebra. It shows that if
either dim ( ker T ) or dim ( im T ) can be found, then the other is automatically known. In many cases it is
easier to compute one than the other, so the theorem is a real asset. The rest of this section is devoted to
illustrations of this fact. The next example uses the dimension theorem to give a different proof of the first
part of Theorem 5.4.2.
Example 7.2.7
Let A be an m × n matrix of rank r. Show that the space null A of all solutions of the system
Ax = 0 of m homogeneous equations in n variables has dimension n − r.
Solution. The space in question is just ker TA , where TA : Rn → Rm is defined by TA (x) = Ax for
all columns x in Rn . But dim ( im TA ) = rank TA = rank A = r by Example 7.2.2, so
dim ( ker TA ) = n − r by the dimension theorem.
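This is easy to test numerically; a sketch assuming NumPy, with a 3 × 4 matrix chosen only for illustration (so n = 4 here):

import numpy as np

A = np.array([[1., 2., -1., 1.],
              [3., 1., 0., 2.],
              [1., -3., 2., 0.]])
rank = np.linalg.matrix_rank(A)             # dim(im T_A)
s = np.linalg.svd(A, compute_uv=False)
nullity = A.shape[1] - np.sum(s > 1e-10)    # dim(null A), counting zero singular values
print(rank, nullity, rank + nullity == A.shape[1])   # 2 2 True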
Example 7.2.8
If T : V → W is a linear transformation where V is finite dimensional, then
dim(ker T) ≤ dim V and dim(im T) ≤ dim V
Indeed, dim V = dim(ker T) + dim(im T) by Theorem 7.2.4. Of course, the first inequality also
follows because ker T is a subspace of V .
Example 7.2.9
Let D : Pn → Pn−1 be the differentiation map defined by D [p(x)] = p′ (x). Compute ker D and
hence conclude that D is onto.
Solution. Because p′ (x) = 0 means p(x) is constant, we have dim ( ker D) = 1. Since
dim Pn = n + 1, the dimension theorem gives
dim(im D) = (n + 1) − dim(ker D) = n = dim Pn−1
It follows that im D = Pn−1; that is, D is onto.
Of course it is not difficult to verify directly that each polynomial q(x) in Pn−1 is the derivative of some
polynomial in Pn (simply integrate q(x)!), so the dimension theorem is not needed in this case. However,
in some situations it is difficult to see directly that a linear transformation is onto, and the method used in
Example 7.2.9 may be by far the easiest way to prove it. Here is another illustration.
Example 7.2.10
Given a in R, the evaluation map Ea : Pn → R is given by Ea [p(x)] = p(a). Show that Ea is linear
and onto, and hence conclude that {(x − a), (x − a)2 , . . . , (x − a)n } is a basis of ker Ea , the
subspace of all polynomials p(x) for which p(a) = 0.
Solution. Ea is linear by Example 7.1.3; the verification that it is onto is left to the reader. Hence
dim ( im Ea ) = dim (R) = 1, so dim ( ker Ea ) = (n + 1) − 1 = n by the dimension theorem. Now
each of the n polynomials (x − a), (x − a)2 , . . . , (x − a)n clearly lies in ker Ea , and they are
linearly independent (they have distinct degrees). Hence they are a basis because dim ( ker Ea ) = n.
Example 7.2.11
If A is any m × n matrix, show that rank A = rank AT A = rank AAT .
Solution. It suffices to show that rank A = rank AT A (the rest follows by replacing A with AT ).
Write B = AT A, and consider the associated matrix transformations
TA : Rn → Rm and TB : Rn → Rn
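A quick numerical check of this identity (a sketch assuming NumPy; the product of random factors simply manufactures a matrix of known rank 3):

import numpy as np

np.random.seed(1)
A = np.random.rand(5, 3) @ np.random.rand(3, 7)   # a 5 x 7 matrix of rank 3
print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(A.T @ A),
      np.linalg.matrix_rank(A @ A.T))             # all three are 3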
Exercise 7.2.1 For each matrix A, find a basis for the kernel and image of TA, and find the rank and nullity of TA.

a. [1 2 −1 1; 3 1 0 2; 1 −3 2 0]

b. [2 1 −1 3; 1 0 3 1; 1 1 −4 2]

c. [1 2 −1; 3 1 2; 4 −1 5; 0 2 −2]

d. [2 1 0; 1 −1 3; 1 2 −3; 0 3 −6]

Exercise 7.2.2 In each case, (i) find a basis of ker T, and (ii) find a basis of im T. You may assume that T is linear.

a. T : P2 → R2; T(a + bx + cx2) = (a, b)

b. T : P2 → R2; T(p(x)) = (p(0), p(1))

c. T : R3 → R3; T(x, y, z) = (x + y, x + y, 0)

d. T : R3 → R4; T(x, y, z) = (x, x, y, y)

e. T : M22 → M22; T [a b; c d] = [a+b b+c; c+d d+a]

f. T : M22 → R; T [a b; c d] = a + d

g. T : Pn → R; T(r0 + r1x + · · · + rnxn) = rn

h. T : Rn → R; T(r1, r2, . . . , rn) = r1 + r2 + · · · + rn

i. T : M22 → M22; T(X) = XA − AX, where A = [0 1; 1 0]

j. T : M22 → M22; T(X) = XA, where A = [1 1; 0 0]

Exercise 7.2.3 Let P : V → R and Q : V → R be linear transformations, where V is a vector space. Define T : V → R2 by T(v) = (P(v), Q(v)).

a. Show that T is a linear transformation.

b. Show that ker T = ker P ∩ ker Q, the set of vectors in both ker P and ker Q.

Exercise 7.2.4 In each case, find a basis B = {e1, . . . , er, er+1, . . . , en} of V such that {er+1, . . . , en} is a basis of ker T, and verify Theorem 7.2.5.

a. T : R3 → R4; T(x, y, z) = (x − y + 2z, x + y − z, 2x + z, 2y − 3z)

b. T : R3 → R4; T(x, y, z) = (x + y + z, 2x − y + 3z, z − 3y, 3x + 4z)
Exercise 7.2.5 Show that every matrix X in Mnn has the form X = AT − 2A for some matrix A in Mnn. [Hint: The dimension theorem.]

Exercise 7.2.6 In each case either prove the statement or give an example in which it is false. Throughout, let T : V → W be a linear transformation where V and W are finite dimensional.

c. If dim V = 5 and dim W = 4, then ker T ≠ {0}.

d. If ker T = V, then W = {0}.

e. If W = {0}, then ker T = V.

f. If W = V, and im T ⊆ ker T, then T = 0.

g. If {e1, e2, e3} is a basis of V and T(e1) = 0 = T(e2), then dim(im T) ≤ 1.

h. If dim(ker T) ≤ dim W, then dim W ≥ (1/2) dim V.

i. If T is one-to-one, then dim V ≤ dim W.

j. If dim V ≤ dim W, then T is one-to-one.

k. If T is onto, then dim V ≥ dim W.

l. If dim V ≥ dim W, then T is onto.

m. If {T(v1), . . . , T(vk)} is independent, then {v1, . . . , vk} is independent.

n. If {v1, . . . , vk} spans V, then {T(v1), . . . , T(vk)} spans W.

Exercise 7.2.7 Show that linear independence is preserved by one-to-one transformations and that spanning sets are preserved by onto transformations. More precisely, if T : V → W is a linear transformation, show that:

a. If T is one-to-one and {v1, . . . , vn} is independent in V, then {T(v1), . . . , T(vn)} is independent in W.

b. If T is onto and V = span{v1, . . . , vn}, then W = span{T(v1), . . . , T(vn)}.

Exercise 7.2.8 Given {v1, . . . , vn} in a vector space V, define T : Rn → V by T(r1, . . . , rn) = r1v1 + · · · + rnvn. Show that T is linear, and that:

a. T is one-to-one if and only if {v1, . . . , vn} is independent.

b. T is onto if and only if V = span{v1, . . . , vn}.

Exercise 7.2.10 Let T : Mnn → R denote the trace map: T(A) = tr A for all A in Mnn. Show that dim(ker T) = n2 − 1.

Exercise 7.2.11 Show that the following are equivalent for a linear transformation T : V → W.

1. ker T = V   2. im T = {0}   3. T = 0

Exercise 7.2.12 Let A and B be m × n and k × n matrices, respectively. Assume that Ax = 0 implies Bx = 0 for every n-column x. Show that rank A ≥ rank B. [Hint: Theorem 7.2.4.]

Exercise 7.2.13 Let A be an m × n matrix of rank r. Thinking of Rm as rows, define V = {x in Rm | xA = 0}. Show that dim V = m − r.

Exercise 7.2.14 Consider V = {[a b; c d] | a + c = b + d}.

a. Consider S : M22 → R with S [a b; c d] = a + c − b − d. Show that S is linear and onto and that V is a subspace of M22. Compute dim V.

b. Consider T : V → R with T [a b; c d] = a + c. Show that T is linear and onto, and use this information to compute dim(ker T).

Exercise 7.2.15 Define T : Pn → R by T[p(x)] = the sum of all the coefficients of p(x).

a. Use the dimension theorem to show that dim(ker T) = n.
Exercise 7.2.20 Let U and V denote the spaces of symmetric and skew-symmetric n × n matrices. Show that dim U + dim V = n2.

Exercise 7.2.21 Assume that B in Mnn satisfies B^k = 0 for some k ≥ 1. Show that every matrix in Mnn has the form BA − A for some A in Mnn. [Hint: Show that T : Mnn → Mnn is linear and one-to-one where T(A) = BA − A for each A.]

Exercise 7.2.22 Fix a column y ≠ 0 in Rn and let U = {A in Mnn | Ay = 0}. Show that dim U = n(n − 1).

Exercise 7.2.23 If B in Mmn has rank r, let U = {A in Mnn | BA = 0} and W = {BA | A in Mnn}. Show that dim U = n(n − r) and dim W = nr. [Hint: Show that U consists of all matrices A whose columns are in the null space of B. Use Example 7.2.7.]

Exercise 7.2.24 Let T : V → V be a linear transformation where dim V = n. If ker T ∩ im T = {0}, show that every vector v in V can be written v = u + w for some u in ker T and w in im T. [Hint: Choose bases B ⊆ ker T and D ⊆ im T, and use Exercise 6.3.33.]

Exercise 7.2.29 Let U be a subspace of a finite dimensional vector space V.

a. Show that U = ker T for some linear operator T : V → V.

b. Show that U = im S for some linear operator S : V → V. [Hint: Theorem 6.4.1 and Theorem 7.1.3.]

Exercise 7.2.30 Let V and W be finite dimensional vector spaces.

a. Show that dim W ≤ dim V if and only if there exists an onto linear transformation T : V → W. [Hint: Theorem 6.4.1 and Theorem 7.1.3.]

b. Show that dim W ≥ dim V if and only if there exists a one-to-one linear transformation T : V → W. [Hint: Theorem 6.4.1 and Theorem 7.1.3.]

Exercise 7.2.31 Let A and B be n × n matrices, and assume that AXB = 0, X ∈ Mnn, implies X = 0. Show that A and B are both invertible. [Hint: Dimension Theorem.]
7.3 Isomorphisms and Composition

Often two vector spaces can consist of quite different types of vectors but, on closer examination, turn out
to be the same underlying space displayed in different symbols. For example, consider the spaces
R2 = {(a, b) | a, b ∈ R} and P1 = {a + bx | a, b ∈ R}
Clearly these are the same vector space expressed in different notation: if we change each (a, b) in R2 to
a + bx, then R2 becomes P1 , complete with addition and scalar multiplication. This can be expressed by
noting that the map (a, b) 7→ a + bx is a linear transformation R2 → P1 that is both one-to-one and onto.
In this form, we can describe the general situation: a linear transformation T : V → W is called an isomorphism if it is both one-to-one and onto, and V and W are called isomorphic (written V ≅ W) when such an isomorphism exists.
Example 7.3.1
The identity transformation 1V : V → V is an isomorphism for any vector space V .
Example 7.3.2
If T : Mmn → Mnm is defined by T(A) = AT for all A in Mmn, then T is an isomorphism (verify).
Hence Mmn ≅ Mnm.
Example 7.3.3
Isomorphic spaces can “look” quite different. For example, M22 ≅ P3 because the map
T : M22 → P3 given by T [a b; c d] = a + bx + cx2 + dx3 is an isomorphism (verify).
The word isomorphism comes from two Greek roots: iso, meaning “same,” and morphos, meaning
“form.” An isomorphism T : V → W induces a pairing
v ↔ T (v)
between vectors v in V and vectors T (v) in W that preserves vector addition and scalar multiplication.
Hence, as far as their vector space properties are concerned, the spaces V and W are identical except
for notation. Because addition and scalar multiplication in either space are completely determined by the
same operations in the other space, all vector space properties of either space are completely determined
by those of the other.
One of the most important examples of isomorphic spaces was considered in Chapter 4. Let A denote
the set of all “arrows” with tail at the origin in space, and make A into a vector space using the paral-
lelogram law and the scalar multiple law (see Section 4.1). Then define a transformation T : R3 → A by
taking
T([x y z]T) = the arrow v from the origin to the point P(x, y, z)
In Section 4.1 matrix addition and scalar multiplication were shown to correspond to the parallelogram
law and the scalar multiplication law for these arrows, so the map T is a linear transformation. Moreover T is an isomorphism: it is one-to-one by Theorem 4.1.2, and it is onto because, given an arrow v in A with tip P(x, y, z), we have T([x y z]T) = v. This justifies the identification v = [x y z]T in Chapter 4 of the geometric
arrows with the algebraic matrices. This identification is very useful. The arrows give a “picture” of the
matrices and so bring geometric intuition into R3 ; the matrices are useful for detailed calculations and so
bring analytic precision into geometry. This is one of the best examples of the power of an isomorphism
to shed light on both spaces being considered.
The following theorem gives a very useful characterization of isomorphisms: They are the linear
transformations that preserve bases.
Theorem 7.3.1
If V and W are finite dimensional spaces, the following conditions are equivalent for a linear
transformation T : V → W .
1. T is an isomorphism.
2. If {e1, e2, . . . , en} is any basis of V, then {T(e1), T(e2), . . . , T(en)} is a basis of W.
3. There exists a basis {e1, e2, . . . , en} of V such that {T(e1), T(e2), . . . , T(en)} is a basis of W.
Proof. (1) ⇒ (2). Let {e1 , . . . , en } be a basis of V . If t1 T (e1 ) + · · · + tn T (en ) = 0 with ti in R, then
T (t1e1 + · · · + tn en ) = 0, so t1e1 + · · · + tn en = 0 (because ker T = {0}). But then each ti = 0 by the
independence of the ei , so {T (e1 ), . . . , T (en )} is independent. To show that it spans W , choose w in
W . Because T is onto, w = T (v) for some v in V , so write v = t1e1 + · · · + tn en . Hence we obtain
w = T (v) = t1 T (e1 ) + · · · + tn T (en ), proving that {T (e1 ), . . . , T (en )} spans W .
(2) ⇒ (3). This is because V has a basis.
(3) ⇒ (1). If T(v) = 0, write v = v1e1 + · · · + vnen where each vi is in R. Then
0 = T(v) = v1T(e1) + · · · + vnT(en)
so v1 = · · · = vn = 0 by (3). Hence v = 0, so ker T = {0} and T is one-to-one. To show that T is onto, let
w be any vector in W. By (3) there exist w1, . . . , wn in R such that
w = w1T(e1) + · · · + wnT(en) = T(w1e1 + · · · + wnen)
Thus T is onto.
Theorem 7.3.1 dovetails nicely with Theorem 7.1.3 as follows. Let V and W be vector spaces of
dimension n, and suppose that {e1 , e2 , . . . , en } and {f1 , f2 , . . . , fn } are bases of V and W , respectively.
Theorem 7.1.3 asserts that there exists a linear transformation T : V → W such that
T (r1 e1 + · · · + rn en ) = r1 f1 + · · · + rn fn
so isomorphisms between spaces of equal dimension can be easily defined as soon as bases are known. In
particular, this shows that if two vector spaces V and W have the same dimension then they are isomorphic,
that is V ≅ W. This is half of the following theorem.
Theorem 7.3.2
If V and W are finite dimensional vector spaces, then V ≅ W if and only if dim V = dim W.
Corollary 7.3.1
Let U , V , and W denote vector spaces. Then:
1. V ≅ V for every vector space V.
2. If V ≅ W then W ≅ V.
3. If U ≅ V and V ≅ W, then U ≅ W.
The proof is left to the reader. By virtue of these properties, the relation ≅ is called an equivalence relation
on the class of finite dimensional vector spaces. Since dim(Rn) = n it follows that
on the class of finite dimensional vector spaces. Since dim (Rn ) = n it follows that
Corollary 7.3.2
If V is a vector space and dim V = n, then V is isomorphic to Rn .
If V is a vector space of dimension n, note that there are important explicit isomorphisms V → Rn .
Fix a basis B = {b1 , b2 , . . . , bn } of V and write {e1 , e2 , . . . , en } for the standard basis of Rn . By
Theorem 7.1.3 there is a unique linear transformation CB : V → Rn given by
CB(v1b1 + v2b2 + · · · + vnbn) = v1e1 + v2e2 + · · · + vnen = [v1 v2 · · · vn]T
where each vi is in R. Moreover, CB (bi ) = ei for each i so CB is an isomorphism by Theorem 7.3.1, called
the coordinate isomorphism corresponding to the basis B. These isomorphisms will play a central role
in Chapter 9.
The conclusion in the above corollary can be phrased as follows: As far as vector space properties
are concerned, every n-dimensional vector space V is essentially the same as Rn ; they are the “same”
vector space except for a change of symbols. This appears to make the process of abstraction seem less
important—just study Rn and be done with it! But consider the different “feel” of the spaces P8 and M33
even though they are both the “same” as R9 : For example, vectors in P8 can have roots, while vectors in
M33 can be multiplied. So the merit in the abstraction process lies in identifying common properties of
the vector spaces in the various examples. This is important even for finite dimensional spaces. However,
the payoff from abstraction is much greater in the infinite dimensional case, particularly for spaces of
functions.
Example 7.3.4
Let V denote the space of all 2 × 2 symmetric matrices. Find an isomorphism T : P2 → V such that
T (1) = I, where I is the 2 × 2 identity matrix.
Solution. {1, x, x2} is a basis of P2, and we want a basis of V containing I. The set
{[1 0; 0 1], [0 1; 1 0], [0 0; 0 1]} is independent in V, so it is a basis because dim V = 3 (by
Example 6.3.11). Hence define T : P2 → V by taking T(1) = [1 0; 0 1], T(x) = [0 1; 1 0],
T(x2) = [0 0; 0 1], and extending linearly as in Theorem 7.1.3. Then T is an isomorphism by
Theorem 7.3.1, and its action is given by
T(a + bx + cx2) = aT(1) + bT(x) + cT(x2) = [a b; b a+c]
The dimension theorem (Theorem 7.2.4) gives the following useful fact about isomorphisms.
Theorem 7.3.3
If V and W have the same dimension n, a linear transformation T : V → W is an isomorphism if it
is either one-to-one or onto.
Proof. The dimension theorem asserts that dim ( ker T ) + dim ( im T ) = n, so dim ( ker T ) = 0 if and only
if dim ( im T ) = n. Thus T is one-to-one if and only if T is onto, and the result follows.
Composition
Suppose that T : V → W and S : W → U are linear transformations. Since the codomain of T equals the domain of S, it is possible (as in Section 2.3) to define a new function V → U by first applying T and then S.
Example 7.3.5
Define S : M22 → M22 and T : M22 → M22 by S [a b; c d] = [c d; a b] and T(A) = AT for
A ∈ M22. Describe the action of ST and TS, and show that ST ≠ TS.
Solution. ST [a b; c d] = S [a c; b d] = [b d; a c], whereas
TS [a b; c d] = T [c d; a b] = [c a; d b].
It is clear that ST [a b; c d] need not equal TS [a b; c d], so TS ≠ ST.
The next theorem collects some basic properties of the composition operation.
1 In Section 2.3 we denoted the composite as S ◦ T . However, it is more convenient to use the simpler notation ST .
2 Actually, all that is required is U ⊆ V.
7.3. Isomorphisms and Composition 397
Theorem 7.3.4³
Let T : V → W, S : W → U, and R : U → Z be linear transformations. Then:
1. The composite ST is again a linear transformation.
2. T1V = T and 1WT = T.
3. (RS)T = R(ST).
Proof. The proofs of (1) and (2) are left as Exercise 7.3.25. To prove (3), observe that, for all v in V :
{(RS)T }(v) = (RS) [T (v)] = R{S [T (v)]} = R{(ST )(v)} = {R(ST )}(v)
Up to this point, composition seems to have no connection with isomorphisms. In fact, the two notions
are closely related.
Theorem 7.3.5
Let V and W be finite dimensional vector spaces. The following conditions are equivalent for a
linear transformation T : V → W .
1. T is an isomorphism.

2. There exists a linear transformation S : W → V such that ST = 1V and TS = 1W.

Proof. (1) ⇒ (2). Let B = {e1, . . . , en} be a basis of V. Then D = {T(e1), . . . , T(en)} is a basis of W by
Theorem 7.3.1, so by Theorem 7.1.3 there is a linear transformation S : W → V with
S[T(ei)] = ei for each i    (7.2)
Since ei = 1V(ei), this gives ST = 1V by Theorem 7.1.2. But applying T gives T[S[T(ei)]] = T(ei) for
each i, so T S = 1W (again by Theorem 7.1.2, using the basis D of W ).
(2) ⇒ (1). If T (v) = T (v1 ), then S [T (v)] = S [T (v1 )]. Because ST = 1V by (2), this reads v = v1 ; that
is, T is one-to-one. Given w in W , the fact that T S = 1W means that w = T [S(w)], so T is onto.
3 Theorem 7.3.4 can be expressed by saying that vector spaces and linear transformations are an example of a category. In
general a category consists of certain objects and, for any two objects X and Y , a set mor (X, Y ). The elements α of mor (X, Y )
are called morphisms from X to Y and are written α : X → Y . It is assumed that identity morphisms and composition are defined
in such a way that Theorem 7.3.4 holds. Hence, in the category of vector spaces the objects are the vector spaces themselves and
the morphisms are the linear transformations. Another example is the category of metric spaces, in which the objects are sets
equipped with a distance function (called a metric), and the morphisms are continuous functions (with respect to the metric).
The category of sets and functions is a very basic example.
Finally, S is uniquely determined by the condition ST = 1V because this condition implies (7.2). S
is an isomorphism because it carries the basis D to B. As to the last assertion, given w in W , write
w = r1 T (e1 ) + · · · + rn T (en ). Then w = T (v), where v = r1 e1 + · · · + rn en . Then S(w) = v by (7.2).
The transformation S in condition (2) is called the inverse of T and is written T−1; thus T−1T = 1V and TT−1 = 1W. In other words, each of T and T−1 reverses the action of the other. In particular, equation (7.2) in the proof
of Theorem 7.3.5 shows how to define T −1 using the image of a basis under the isomorphism T . Here is
an example.
Example 7.3.6
Define T : P1 → P1 by T (a + bx) = (a − b) + ax. Show that T has an inverse, and find the action of
T −1 .
Solution. The transformation T is linear (verify). Because T (1) = 1 + x and T (x) = −1, T carries
the basis B = {1, x} to the basis D = {1 + x, −1}. Hence T is an isomorphism, and T −1 carries D
back to B, that is,
T −1 (1 + x) = 1 and T −1 (−1) = x
Because a + bx = b(1 + x) + (b − a)(−1), we obtain

T −1 (a + bx) = bT −1 (1 + x) + (b − a)T −1 (−1) = b + (b − a)x
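The same inverse can be found by matrix methods, which is often convenient. The following sketch (an illustration only, assuming numpy) inverts the matrix of T with respect to the basis B = {1, x}.

    import numpy as np

    # Matrix of T in the basis B = {1, x}; the columns are the
    # B-coordinates of T(1) = 1 + x and T(x) = -1
    M = np.array([[1.0, -1.0],
                  [1.0,  0.0]])

    Minv = np.linalg.inv(M)   # matrix of T^{-1} in the basis B
    print(Minv)               # [[ 0.  1.] [-1.  1.]]
    # Reading off the columns: T^{-1}(1) = -x and T^{-1}(x) = 1 + x,
    # which agrees with T^{-1}(a + bx) = b + (b - a)x found above.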
Example 7.3.7
If B = {b1 , b2 , . . . , bn } is a basis of a vector space V , the coordinate transformation CB : V → Rn
is an isomorphism defined by
CB (v1 b1 + v2 b2 + · · · + vn bn ) = (v1 , v2 , . . . , vn )T
Condition (2) in Theorem 7.3.5 characterizes the inverse of a linear transformation T : V → W as the
(unique) transformation S : W → V that satisfies ST = 1V and T S = 1W . This often determines the inverse.
Example 7.3.8
Define T : R3 → R3 by T (x, y, z) = (z, x, y). Show that T ³ = 1R3 , and hence find T −1 .

Solution. We have T ²(x, y, z) = T [T (x, y, z)] = T (z, x, y) = (y, z, x), and hence T ³(x, y, z) = T (y, z, x) = (x, y, z). Since this holds for all (x, y, z), it shows that T ³ = 1R3 , so T (T ²) = 1R3 = (T ²)T . Thus T −1 = T ² by (2) of Theorem 7.3.5.
Example 7.3.9
Define T : Pn → Rn+1 by T (p) = (p(0), p(1), . . . , p(n)) for all p in Pn . Show that T −1 exists.
Solution. The verification that T is linear is left to the reader. If T (p) = 0, then p(k) = 0 for
k = 0, 1, . . . , n, so p has n + 1 distinct roots. Because p has degree at most n, this implies that
p = 0 is the zero polynomial (Theorem 6.5.4) and hence that T is one-to-one. But
dim Pn = n + 1 = dim Rn+1 , so this means that T is also onto and hence is an isomorphism. Thus
T −1 exists by Theorem 7.3.5. Note that we have not given a description of the action of T −1 ; we have merely shown that such a description exists. To give it explicitly requires some ingenuity; one method involves the Lagrange interpolation expansion (Theorem 6.5.3).
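Numerically, inverting T amounts to polynomial interpolation: given the values (p(0), p(1), . . . , p(n)), the coefficients of p solve a Vandermonde system. A minimal sketch (illustration only; assumes numpy, and the sample values are arbitrary):

    import numpy as np

    n = 3
    nodes = np.arange(n + 1, dtype=float)      # evaluation points 0, 1, ..., n

    # Row i of V is (1, i, i^2, ..., i^n)
    V = np.vander(nodes, N=n + 1, increasing=True)

    values = np.array([1.0, 2.0, 5.0, 10.0])   # sample vector (p(0), ..., p(n))
    coeffs = np.linalg.solve(V, values)        # coefficients a0, a1, ..., an of p

    print(coeffs)    # [1. 0. 1. 0.], i.e. p(x) = 1 + x^2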
Exercise 7.3.1 Verify that each of the following is an isomorphism (Theorem 7.3.3 is useful).

h. T : Mmn → Mnm ; T (A) = Aᵀ

Exercise 7.3.4 In each case, compute the action of ST and T S, and show that ST ≠ T S.

Exercise 7.3.5 In each case, show that the linear transformation T satisfies T ² = T .

Exercise 7.3.6 Determine whether each of the following transformations T has an inverse and, if so, determine the action of T −1 .

a. T : R3 → R3 ; T (x, y, z) = (x + y, y + z, z + x)

b. T : R4 → R4 ; T (x, y, z, t) = (x + y, y + z, z + t, t + x)

c. T : M22 → M22 ; T [a b; c d] = [a − c, b − d; 2a − c, 2b − d]

d. T : M22 → M22 ; T [a b; c d] = [a + 2c, b + 2d; 3c − a, 3d − b]

Exercise 7.3.7 In each case, show that T is self-inverse, that is: T −1 = T .

Exercise 7.3.8 In each case, show that T ⁶ = 1R4 and so determine T −1 .

a. T : R4 → R4 ; T (x, y, z, w) = (−x, z, w, y)

b. T : R4 → R4 ; T (x, y, z, w) = (−y, x − y, z, −w)

Let V →T W →S U be linear transformations.

a. If S and T are both one-to-one, show that ST is one-to-one.

b. If S and T are both onto, show that ST is onto.

Exercise 7.3.11 Let T : V → W be a linear transformation.

a. If T is one-to-one and T R = T R1 for transformations R and R1 : U → V , show that R = R1 .

b. If T is onto and ST = S1 T for transformations S and S1 : W → U , show that S = S1 .

Exercise 7.3.12 Consider the linear transformations V →T W →R U .

Exercise 7.3.13 Let V →T W →S U be linear transformations.

a. If ST is one-to-one, show that T is one-to-one and that dim V ≤ dim U .

b. If ST is onto, show that S is onto and that dim W ≤ dim U .

Exercise 7.3.14 Let T : V → V be a linear transformation. Show that T ² = 1V if and only if T is invertible and T = T −1 .

Exercise 7.3.15 Let N be a nilpotent n × n matrix (that is, N ᵏ = 0 for some k). Show that T : Mnm → Mnm is an isomorphism if T (X ) = X − NX . [Hint: If X is in ker T , show that X = NX = N ²X = · · · . Then use Theorem 7.3.3.]

Exercise 7.3.16 Let T : V → W be a linear transformation, and let {e1 , . . . , er , er+1 , . . . , en } be any basis of V such that {er+1 , . . . , en } is a basis of ker T . Show that im T ≅ span {e1 , . . . , er }. [Hint: See Theorem 7.2.5.]

Exercise 7.3.22 Let A and B be matrices of size p × m and n × q. Assume that mn = pq. Define R : Mmn → Mpq by R(X ) = AX B.

a. Show that Mmn ≅ Mpq by comparing dimensions.

b. Show that R is a linear transformation.

c. Show that if R is an isomorphism, then m = p and n = q. [Hint: Show that T : Mmn → Mpn given by T (X ) = AX and S : Mmn → Mmq given by S(X ) = X B are both one-to-one, and use the dimension theorem.]

Exercise 7.3.23 Let T : V → V be a linear transformation such that T ² = 0 is the zero transformation.

a. If V ≠ {0}, show that T cannot be invertible.

b. If R : V → V is defined by R(v) = v + T (v) for all v in V , show that R is linear and invertible.

Exercise 7.3.24 Let V consist of all sequences [x0 , x1 , x2 , . . . ) of numbers, and define vector operations ...

b. Show that T is onto if and only if there exists a linear transformation S : W → V with T S = 1W . [Hint: Let {e1 , . . . , er , . . . , en } be a basis of V such that {er+1 , . . . , en } is a basis of ker T . Use Theorem 7.2.5, Theorem 7.1.2 and Theorem 7.1.3.]

Exercise 7.3.28 Let S and T be linear transformations V → W , where dim V = n and dim W = m.

a. Show that ker S = ker T if and only if T = RS for some isomorphism R : W → W . [Hint: Let {e1 , . . . , er , . . . , en } be a basis of V such that {er+1 , . . . , en } is a basis of ker S = ker T . Use Theorem 7.2.5 to extend {S(e1 ), . . . , S(er )} and {T (e1 ), . . . , T (er )} to bases of W .]

Exercise 7.3.29 If T : V → V is a linear transformation where dim V = n, show that T ST = T for some isomorphism S : V → V . [Hint: Let {e1 , . . . , er , er+1 , . . . , en } be as in Theorem 7.2.5. Extend {T (e1 ), . . . , T (er )} to a basis of V , and use Theorem 7.3.1, Theorem 7.1.2 and Theorem 7.1.3.]

Exercise 7.3.30 Let A and B denote m × n matrices. In each case show that (1) and (2) are equivalent.

a. (1) A and B have the same null space. (2) B = PA for some invertible m × m matrix P.

b. (1) A and B have the same range. (2) B = AQ for some invertible n × n matrix Q.

[Hint: Use Exercise 7.3.28.]
7.4 A Theorem about Differential Equations

Differential equations are instrumental in solving a variety of problems throughout science, social science, and engineering. In this brief section, we will see that the set of solutions of a linear differential equation (with constant coefficients) is a vector space, and we will calculate its dimension. The proof is pure linear algebra, although the applications are primarily in analysis. However, a key result (Lemma 7.4.3 below) can be applied much more widely.
We denote the derivative of a function f : R → R by f ′ , and f will be called differentiable if it can
be differentiated any number of times. If f is a differentiable function, the nth derivative f (n) of f is the
result of differentiating n times. Thus f (0) = f , f (1) = f ′ , f (2) = f (1)′ , . . . , and in general f (n+1) = f (n)′
for each n ≥ 0. For small values of n these are often written as f , f ′ , f ′′ , f ′′′ , . . . .
If a, b, and c are numbers, the differential equations

f ″ − a f ′ − b f = 0 and f ‴ − a f ″ − b f ′ − c f = 0

are of second and third order, respectively. In general, an equation

f (n) − an−1 f (n−1) − an−2 f (n−2) − · · · − a1 f (1) − a0 f (0) = 0,  ai in R    (7.3)

is called a differential equation of order n. We want to describe all solutions of this equation. Of course a knowledge of calculus is required.
The set F of all functions R → R is a vector space with operations as described in Example 6.1.7. If f
and g are differentiable, we have ( f + g)′ = f ′ + g′ and (a f )′ = a f ′ for all a in R. With this it is a routine
matter to verify that the following set is a subspace of F:
Dn = { f : R → R | f is differentiable and is a solution to (7.3)}
Our sole objective in this section is to prove
Theorem 7.4.1
The space Dn has dimension n.
As will be clear later, the proof of Theorem 7.4.1 requires that we enlarge Dn somewhat and allow our
differentiable functions to take values in the set C of complex numbers. To do this, we must clarify what
it means for a function f : R → C to be differentiable. For each real number x write f (x) in terms of its
real and imaginary parts fr (x) and fi (x):
f (x) = fr (x) + i fi (x)
This produces new functions fr : R → R and fi : R → R, called the real and imaginary parts of f ,
respectively. We say that f is differentiable if both fr and fi are differentiable (as real functions), and we
define the derivative f ′ of f by
f ′ = fr′ + i fi′ (7.4)
We refer to this frequently in what follows.4
With this, write D∞ for the set of all differentiable complex valued functions f : R → C . This is a
complex vector space using pointwise addition (see Example 6.1.7), and the following scalar multiplica-
tion: For any w in C and f in D∞ , we define w f : R → C by (w f )(x) = w f (x) for all x in R. We will be
working in D∞ for the rest of this section. In particular, consider the following complex subspace of D∞ :
D∗n = { f : R → C | f is a solution to (7.3)}
Clearly, Dn ⊆ D∗n , and our interest in D∗n comes from
Lemma 7.4.1
If dim C (D∗n ) = n, then dim R (Dn ) = n.
Proof. Observe first that if dim C (D∗n ) = n, then dim R (D∗n ) = 2n. [In fact, if {g1 , . . . , gn } is a C-basis of D∗n , then {g1 , . . . , gn , ig1 , . . . , ign } is an R-basis of D∗n .] Now observe that the set Dn × Dn of all ordered
pairs ( f , g) with f and g in Dn is a real vector space with componentwise operations. Define
θ : D∗n → Dn × Dn given by θ ( f ) = ( fr , fi ) for f in D∗n
⁴ Write |w| for the absolute value of any complex number w. As for functions R → R, we say that limt→0 f (t) = w if, for all ε > 0 there exists δ > 0 such that | f (t) − w| < ε whenever |t| < δ . (Note that t represents a real number here.) In particular, given a real number x, we define the derivative f ′ of a function f : R → C by f ′ (x) = limt→0 (1/t)[ f (x + t) − f (x)], and we say that f is differentiable if f ′ (x) exists for all x in R. Then we can prove that f is differentiable if and only if both fr and fi are differentiable, and that f ′ = fr′ + i fi′ in this case.
One verifies that θ is onto and one-to-one, and it is R-linear because f → fr and f → fi are both R-linear.
Hence D∗n ≅ Dn × Dn as R-spaces. Since dim R (D∗n ) is finite, it follows that dim R (Dn ) is finite, and we
have
2 dim R (Dn ) = dim R (Dn × Dn ) = dim R (D∗n ) = 2n
Hence dim R (Dn ) = n, as required.
It follows that to prove Theorem 7.4.1 it suffices to show that dim C (D∗n ) = n.
There is one function that arises frequently in any discussion of differential equations. Given a complex
number w = a + ib (where a and b are real), we have ew = ea (cos b + i sin b). The law of exponents,
ew ev = ew+v for all w, v in C is easily verified using the formulas for sin(b + b1 ) and cos(b + b1 ). If x is a
variable and w = a + ib is a complex number, define the exponential function e^{wx} by

e^{wx} = e^{ax}(cos bx + i sin bx)

Hence e^{wx} is differentiable because its real and imaginary parts are differentiable for all x. Moreover, the following can be proved using (7.4):

(e^{wx})′ = w e^{wx}
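This derivative formula is easy to test numerically, since f ′(x) is an ordinary limit of difference quotients of the complex-valued function e^{wx}. A small sketch (illustration only; assumes numpy):

    import numpy as np

    w = 2.0 - 3.0j                    # a sample complex number a + ib
    x, h = 0.7, 1e-6

    f = lambda t: np.exp(w * t)       # e^{wt} via numpy's complex exponential
    print((f(x + h) - f(x)) / h)      # difference quotient approximating (e^{wx})'
    print(w * f(x))                   # w e^{wx}: the two agree to several digits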
In addition, (7.4) gives the product rule for differentiation:

( f g)′ = f ′ g + f g′ for all f and g in D∞
Lemma 7.4.2
Given f in D∞ and w in C, there exists g in D∞ such that g′ − wg = f .
Proof. Define p(x) = f (x)e−wx . Then p is differentiable, whence pr and pi are both differentiable, hence
continuous, and so both have antiderivatives, say pr = q′r and pi = q′i . Then the function q = qr + iqi is in
D∞ , and q′ = p by (7.4). Finally define g(x) = q(x)e^{wx} . Then the product rule gives

g′ − wg = (q′ + wq)e^{wx} − wq e^{wx} = q′ e^{wx} = p(x)e^{wx} = ( f (x)e^{−wx})e^{wx} = f (x)

as required.
Lemma 7.4.3: Kernel Lemma
Let V be a vector space, and let S and T be linear transformations V → V . If S is onto and both ker (S) and ker (T ) are finite dimensional, then ker (T S) is also finite dimensional and dim [ ker (T S)] = dim [ ker (T )] + dim [ ker (S)].

Proof. Let {u1 , u2 , . . . , um } be a basis of ker (T ) and let {v1 , v2 , . . . , vn } be a basis of ker (S). Since S
is onto, let ui = S(wi ) for some wi in V . It suffices to show that
B = {w1 , w2 , . . . , wm , v1 , v2 , . . . , vn }
is a basis of ker (T S). Note B ⊆ ker (T S) because T S(wi ) = T (ui ) = 0 for each i and T S(v j ) = T (0) = 0
for each j.
Spanning. If v is in ker (T S), then S(v) is in ker (T ), say S(v) = ∑ ri ui = ∑ ri S (wi ) = S (∑ ri wi ). It follows
that v − ∑ ri wi is in ker (S) = span {v1 , v2 , . . . , vn }, proving that v is in span (B).
Independence. Let ∑ ri wi + ∑ t j v j = 0. Applying S, and noting that S(v j ) = 0 for each j, yields
0 = ∑ ri S(wi ) = ∑ ri ui . Hence ri = 0 for each i, and so ∑ t j v j = 0. This implies that each t j = 0, and so
proves the independence of B.
Proof of Theorem 7.4.1. By Lemma 7.4.1, it suffices to prove that dim C (D∗n ) = n. This holds for n = 1 because the proof of Theorem 3.5.1 goes through to show that D∗1 = Ce^{a0 x} . Hence we proceed by induction on n. With an eye on equation (7.3), consider the polynomial

p(t) = tⁿ − an−1 tⁿ⁻¹ − · · · − a1 t − a0

By the fundamental theorem of algebra,⁵ let w be a complex root of p(t), so that p(t) = q(t)(t − w) for some complex polynomial q(t) of degree n − 1. It follows that p(D) = q(D)(D − w1D∞ ). Moreover D − w1D∞ is onto by Lemma 7.4.2, dim C [ ker (D − w1D∞ )] = 1 by the case n = 1 above, and dim C ( ker [q(D)]) = n − 1 by induction. Hence Lemma 7.4.3 shows that ker [p(D)] is also finite dimensional and

dim C ( ker [p(D)]) = dim C ( ker [q(D)]) + dim C [ ker (D − w1D∞ )] = (n − 1) + 1 = n

Since D∗n = ker [p(D)], this completes the induction, and so proves Theorem 7.4.1.
7.5 More on Linear Recurrences

In Section 3.4 we used diagonalization to study linear recurrences, and gave several examples. We now apply the theory of vector spaces and linear transformations to study the problem in more generality. Consider the linear recurrence

xn+2 = xn+1 + xn for n ≥ 0

If the initial values x0 and x1 are prescribed, this gives a sequence of numbers. For example, if x0 = 1 and x1 = 1 the sequence continues

2, 3, 5, 8, 13, 21, 34, . . .
as the reader can verify. Clearly, the entire sequence is uniquely determined by the recurrence and the two
initial values. In this section we define a vector space structure on the set of all sequences, and study the
subspace of those sequences that satisfy a particular recurrence.
Sequences will be considered entities in their own right, so it is useful to have a special notation for
them. Let
[xn ) denote the sequence x0 , x1 , x2 , . . . , xn , . . .
Example 7.5.1
Sequences of the form [c) for a fixed number c will be referred to as constant sequences, and those of the
form [λ n ), λ some number, are power sequences.
Two sequences are regarded as equal when they are identical:
[xn ) = [yn ) means xn = yn for all n = 0, 1, 2, . . .
Addition and scalar multiplication of sequences are defined by
[xn ) + [yn ) = [xn + yn )
r[xn ) = [rxn )
These operations are analogous to the addition and scalar multiplication in Rn , and it is easy to check that
the vector-space axioms are satisfied. The zero vector is the constant sequence [0), and the negative of a
sequence [xn ) is given by −[xn ) = [−xn ).
Now suppose k real numbers r0 , r1 , . . . , rk−1 are given, and consider the linear recurrence relation
determined by these numbers.
xn+k = r0 xn + r1 xn+1 + · · · + rk−1 xn+k−1 (7.5)
When r0 ≠ 0, we say this recurrence has length k.⁷ For example, the relation xn+2 = 2xn + xn+1 is of
length 2.
A sequence [xn ) is said to satisfy the relation (7.5) if (7.5) holds for all n ≥ 0. Let V denote the set of
all sequences that satisfy the relation. In symbols,
V = {[xn ) | xn+k = r0 xn + r1 xn+1 + · · · + rk−1 xn+k−1 holds for all n ≥ 0}

It is easy to see that the constant sequence [0) lies in V and that V is closed under addition and scalar multiplication of sequences. Hence V is a vector space (being a subspace of the space of all sequences).
The following important observation about V is needed (it was used implicitly earlier): If the first k terms
of two sequences agree, then the sequences are identical. More formally,
⁷ We shall usually assume that r0 ≠ 0; otherwise, we are essentially dealing with a recurrence of shorter length than k.
Lemma 7.5.1
Let [xn ) and [yn ) denote two sequences in V . Then

[xn ) = [yn ) if and only if x0 = y0 , x1 = y1 , . . . , xk−1 = yk−1

Hence a sequence in V is completely determined by its first k terms. In fact, given v = (v0 , v1 , . . . , vk−1 ) in Rk , define T (v) to be the unique sequence in V whose first k terms are v0 , v1 , . . . , vk−1 .

Theorem 7.5.1
Given real numbers r0 , r1 , . . . , rk−1 , let V denote the vector space of all sequences satisfying the linear recurrence relation (7.5) determined by r0 , r1 , . . . , rk−1 . Then the function

T : Rk → V

defined above is an isomorphism. In particular:

1. dim V = k.

2. If {v1 , . . . , vk } is any basis of Rk , then {T (v1 ), . . . , T (vk )} is a basis of V .
Proof. (1) and (2) will follow from Theorem 7.3.1 and Theorem 7.3.2 as soon as we show that T is an
isomorphism. Given v and w in Rk , write v = (v0 , v1 , . . . , vk−1 ) and w = (w0 , w1 , . . . , wk−1 ). The first
k terms of T (v) and T (w) are v0 , v1 , . . . , vk−1 and w0 , w1 , . . . , wk−1 , respectively, so the first k terms of
T (v) + T (w) are v0 + w0 , v1 + w1 , . . . , vk−1 + wk−1 . Because these terms agree with the first k terms of
T (v + w), Lemma 7.5.1 implies that T (v + w) = T (v) + T (w). The proof that T (rv) = rT (v) is similar, so
T is linear.
Now let [xn ) be any sequence in V , and let v = (x0 , x1 , . . . , xk−1 ). Then the first k terms of [xn ) and
T (v) agree, so T (v) = [xn ). Hence T is onto. Finally, if T (v) = [0) is the zero sequence, then the first k
terms of T (v) are all zero (all terms of T (v) are zero!) so v = 0. This means that ker T = {0}, so T is
one-to-one.
Example 7.5.2
Show that the sequences [1), [n), and [(−1)n) are a basis of the space V of all solutions of the
recurrence
xn+3 = −xn + xn+1 + xn+2
Then find the solution satisfying x0 = 1, x1 = 2, x2 = 5.
Solution. The verifications that these sequences satisfy the recurrence (and hence lie in V ) are left
to the reader. They are a basis because [1) = T (1, 1, 1), [n) = T (0, 1, 2), and
[(−1)n ) = T (1, −1, 1); and {(1, 1, 1), (0, 1, 2), (1, −1, 1)} is a basis of R3 . Hence the
sequence [xn ) in V satisfying x0 = 1, x1 = 2, x2 = 5 is a linear combination of this basis: [xn ) = t1 [1) + t2 [n) + t3 [(−1)ⁿ ); that is, xn = t1 + t2 n + t3 (−1)ⁿ . Taking n = 0, 1, 2 gives

1 = x0 = t1 + 0 + t3
2 = x1 = t1 + t2 − t3
5 = x2 = t1 + 2t2 + t3

Solving these equations gives t1 = t3 = 1/2 and t2 = 2, so xn = (1/2)[1 + 4n + (−1)ⁿ ].
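The closed form can be checked against the recurrence directly. The short sketch below (plain Python, illustration only) generates the sequence both ways and compares:

    # Recurrence x_{n+3} = -x_n + x_{n+1} + x_{n+2} with x0, x1, x2 = 1, 2, 5
    xs = [1, 2, 5]
    for n in range(20):
        xs.append(-xs[n] + xs[n + 1] + xs[n + 2])

    # Closed form from Example 7.5.2: x_n = (1 + 4n + (-1)^n)/2
    closed = [(1 + 4 * n + (-1) ** n) / 2 for n in range(len(xs))]

    print(all(x == c for x, c in zip(xs, closed)))   # True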
This technique clearly works for any linear recurrence of length k: Simply take your favourite basis
{v1 , . . . , vk } of Rk —perhaps the standard basis—and compute T (v1 ), . . . , T (vk ). This is a basis of V all
right, but the nth term of T (vi ) is not usually given as an explicit function of n. (The basis in Example 7.5.2
was carefully chosen so that the nth terms of the three sequences were 1, n, and (−1)n , respectively, each
a simple function of n.)
However, it turns out that an explicit basis of V can be given in the general situation. Given the
recurrence (7.5) again:
xn+k = r0 xn + r1 xn+1 + · · · + rk−1 xn+k−1
the idea is to look for numbers λ such that the power sequence [λ n ) satisfies (7.5). This happens if and
only if
λ n+k = r0 λ n + r1 λ n+1 + · · · + rk−1 λ n+k−1
holds for all n ≥ 0. This is true just when the case n = 0 holds; that is,
λ k = r0 + r1 λ + · · · + rk−1 λ k−1
The polynomial
p(x) = xk − rk−1 xk−1 − · · · − r1 x − r0
is called the polynomial associated with the linear recurrence (7.5). Thus every root λ of p(x) provides a
sequence [λⁿ ) satisfying (7.5). If there are k distinct roots, the power sequences provide a basis. Incidentally, if λ = 0, the sequence [λⁿ ) is 1, 0, 0, . . . ; that is, we accept the convention that 0⁰ = 1.
Theorem 7.5.2
Let r0 , r1 , . . . , rk−1 be real numbers; let V denote the vector space of all sequences satisfying the linear recurrence relation (7.5) determined by r0 , r1 , . . . , rk−1 ; and let

p(x) = xᵏ − rk−1 xᵏ⁻¹ − · · · − r1 x − r0

denote the polynomial associated with the recurrence relation. Then

1. [λⁿ ) lies in V if and only if λ is a root of p(x).

2. If λ1 , λ2 , . . . , λk are distinct real roots of p(x), then {[λ1ⁿ ), [λ2ⁿ ), . . . , [λkⁿ )} is a basis of V .

Proof. It remains to prove (2). But [λiⁿ ) = T (vi ) where vi = (1, λi , λi² , . . . , λi^{k−1} ), so (2) follows by Theorem 7.5.1, provided that {v1 , v2 , . . . , vk } is a basis of Rk . This is true provided that the matrix with the vi as its rows

[1 λ1 λ1² · · · λ1^{k−1} ; 1 λ2 λ2² · · · λ2^{k−1} ; . . . ; 1 λk λk² · · · λk^{k−1}]

is invertible. But this is a Vandermonde matrix and so is invertible if the λi are distinct (Theorem 3.2.7). This proves (2).
Example 7.5.3
Find the solution of xn+2 = 2xn + xn+1 that satisfies x0 = a, x1 = b.
Solution. The associated polynomial is p(x) = x2 − x − 2 = (x − 2)(x + 1). The roots are λ1 = 2
and λ2 = −1, so the sequences [2n ) and [(−1)n) are a basis for the space of solutions by
Theorem 7.5.2. Hence every solution [xn ) is a linear combination [xn ) = t1 [2ⁿ ) + t2 [(−1)ⁿ ); that is, xn = t1 2ⁿ + t2 (−1)ⁿ . The requirements x0 = a and x1 = b give

t1 + t2 = a
2t1 − t2 = b

Hence t1 = (1/3)(a + b) and t2 = (1/3)(2a − b), so xn = (1/3)[(a + b)2ⁿ + (2a − b)(−1)ⁿ ].
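The computation in this example automates readily: the roots of the associated polynomial give the power sequences, and the initial conditions determine the coefficients. A minimal sketch for this recurrence (illustration only; assumes numpy, and a, b are sample values):

    import numpy as np

    a, b = 1.0, 4.0                   # sample initial values x0 = a, x1 = b
    roots = np.roots([1, -1, -2])     # roots of p(x) = x^2 - x - 2, namely 2 and -1

    # Solve t1 + t2 = a and t1*r1 + t2*r2 = b for the coefficients
    M = np.vander(roots, N=2, increasing=True).T
    t = np.linalg.solve(M, np.array([a, b]))

    xn = lambda n: t[0] * roots[0] ** n + t[1] * roots[1] ** n
    # Verify the recurrence x_{n+2} = 2 x_n + x_{n+1}
    print(all(abs(xn(n + 2) - (2 * xn(n) + xn(n + 1))) < 1e-9 for n in range(10)))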
If p(x) is the polynomial associated with a linear recurrence relation of length k, and if p(x) has k distinct
roots λ1 , λ2 , . . . , λk , then p(x) factors completely:
p(x) = (x − λ1 )(x − λ2 ) · · · (x − λk )
Each root λi provides a sequence [λin ) satisfying the recurrence, and they are a basis of V by Theorem 7.5.2.
In this case, each λi has multiplicity 1 as a root of p(x). In general, a root λ has multiplicity m if p(x) = (x − λ )ᵐ q(x), where q(λ ) ≠ 0. In this case, there are fewer than k distinct roots and so fewer than k sequences [λⁿ ) satisfying the recurrence. However, we can still obtain a basis because, if λ has multiplicity m (and λ ≠ 0), it provides m linearly independent sequences that satisfy the recurrence. To
prove this, it is convenient to give another way to describe the space V of all sequences satisfying a given
linear recurrence relation.
Let S denote the vector space of all sequences and define a function S : S → S by

S[xn ) = [xn+1 ) = x1 , x2 , x3 , . . .

S is clearly a linear transformation and is called the shift operator on S. Note that powers of S shift the sequence further: S²[xn ) = S[xn+1 ) = [xn+2 ). In general, Sᵏ [xn ) = [xn+k ) for each k ≥ 0. Hence the linear recurrence relation (7.5) can be written

Sᵏ [xn ) = r0 [xn ) + r1 S[xn ) + · · · + rk−1 Sᵏ⁻¹ [xn )    (7.6)
Now let p(x) = xk −rk−1 xk−1 −· · ·−r1 x−r0 denote the polynomial associated with the recurrence relation.
The set L[S, S] of all linear transformations from S to itself is a vector space (verify8 ) that is closed under
composition. In particular,
p(S) = Sk − rk−1 Sk−1 − · · · − r1 S − r0
is a linear transformation called the evaluation of p at S. The point is that condition (7.6) can be written
as
p(S)[xn ) = [0)
In other words, the space V of all sequences satisfying the recurrence relation is just ker [p(S)]. This is the
first assertion in the following theorem.
Theorem 7.5.3
Let r0 , r1 , . . . , rk−1 be real numbers, and let V denote the space of all sequences satisfying the linear recurrence relation determined by r0 , r1 , . . . , rk−1 . Let

p(x) = xᵏ − rk−1 xᵏ⁻¹ − · · · − r1 x − r0

denote the corresponding polynomial. Then:

1. V = ker [p(S)].

2. If λ is a root of p(x) of multiplicity m and λ ≠ 0, then the sequences [λⁿ ), [nλⁿ ), . . . , [n^{m−1} λⁿ ) all lie in V and are linearly independent.
This theorem combines with Theorem 7.5.2 to give a basis for V when p(x) has k real roots (not neces-
sarily distinct) none of which is zero. This last requirement means r0 ≠ 0, a condition that is unimportant
in practice (see Remark 1 below).
Theorem 7.5.4
Let r0 , r1 , . . . , rk−1 be real numbers with r0 ≠ 0; let V denote the space of all sequences satisfying the linear recurrence relation of length k determined by r0 , . . . , rk−1 ; and assume that the polynomial

p(x) = xᵏ − rk−1 xᵏ⁻¹ − · · · − r1 x − r0

factors completely as

p(x) = (x − λ1 )^{m1} (x − λ2 )^{m2} · · · (x − λp )^{mp}

where λ1 , λ2 , . . . , λp are distinct real numbers and each mi ≥ 1. Then λi ≠ 0 for each i, and the sequences

[λiⁿ ), [nλiⁿ ), . . . , [n^{mi−1} λiⁿ ) for i = 1, 2, . . . , p

together form a basis of V .
Proof. There are m1 + m2 + · · · + mp = k sequences in all so, because dim V = k, it suffices to show that they are linearly independent. The assumption that r0 ≠ 0 implies that 0 is not a root of p(x). Hence each λi ≠ 0, so {[λiⁿ ), [nλiⁿ ), . . . , [n^{mi−1} λiⁿ )} is linearly independent by Theorem 7.5.3. The proof that the whole set of sequences is linearly independent is omitted.
Example 7.5.4
Find a basis for the space V of all sequences [xn ) satisfying

xn+3 = −9xn − 3xn+1 + 5xn+2

Solution. The associated polynomial is p(x) = x³ − 5x² + 3x + 9 = (x − 3)²(x + 1). Hence 3 is a double root, so [3ⁿ ) and [n3ⁿ ) both lie in V by Theorem 7.5.3 (the reader should verify this). Similarly, λ = −1 is a root of multiplicity 1, so [(−1)ⁿ ) lies in V . Hence {[3ⁿ ), [n3ⁿ ), [(−1)ⁿ )} is a basis by Theorem 7.5.4.
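That these three sequences really satisfy the recurrence is easy to confirm by direct computation; the recurrence used below is the one reconstructed above from the roots of p(x). A short sketch (plain Python, illustration only):

    # Candidate basis sequences for x_{n+3} = -9 x_n - 3 x_{n+1} + 5 x_{n+2}
    seqs = [
        lambda n: 3 ** n,         # [3^n)
        lambda n: n * 3 ** n,     # [n 3^n), from the double root 3
        lambda n: (-1) ** n,      # [(-1)^n)
    ]

    ok = all(
        s(n + 3) == -9 * s(n) - 3 * s(n + 1) + 5 * s(n + 2)
        for s in seqs
        for n in range(15)
    )
    print(ok)   # True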
Remark 1
If r0 = 0 [so p(x) has 0 as a root], the recurrence reduces to one of shorter length. For example, consider

xn+4 = 0xn + 0xn+1 + 3xn+2 + 2xn+3

If we set yn = xn+2 , this recurrence becomes yn+2 = 3yn + 2yn+1 , which has solutions [3ⁿ ) and [(−1)ⁿ ). These give the following solutions to the original recurrence:

0, 0, 1, 3, 3², . . .
0, 0, 1, −1, (−1)², . . .
Remark 2
Theorem 7.5.4 completely describes the space V of sequences that satisfy a linear recurrence relation for
which the associated polynomial p(x) has all real roots. However, in many cases of interest, p(x) has
complex roots that are not real. If p(µ ) = 0, where µ is complex, then p(µ̄ ) = 0 too (µ̄ the conjugate of µ ), and the main observation is that [µⁿ + µ̄ⁿ ) and [i(µⁿ − µ̄ⁿ )) are real solutions. Analogs of the preceding theorems
can then be proved.
Exercise 7.5.1 Find a basis for the space V of sequences [xn ) satisfying the following recurrences, and use it to find the sequence satisfying x0 = 1, x1 = 2, x2 = 1.

a. xn+3 = −2xn + xn+1 + 2xn+2

b. xn+3 = −6xn + 7xn+1

c. xn+3 = −36xn + 7xn+2

Exercise 7.5.2 In each case, find a basis for the space V of all sequences [xn ) satisfying the recurrence, and use it to find xn if x0 = 1, x1 = −1, and x2 = 1.

a. xn+3 = xn + xn+1 − xn+2

b. xn+3 = −2xn + 3xn+1

c. xn+3 = −4xn + 3xn+2

d. xn+3 = xn − 3xn+1 + 3xn+2

e. xn+3 = 8xn − 12xn+1 + 6xn+2

Exercise 7.5.3 Find a basis for the space V of sequences [xn ) satisfying each of the following recurrences.

a. xn+2 = −a²xn + 2axn+1 , a ≠ 0

b. xn+2 = −abxn + (a + b)xn+1 , (a ≠ b)

Exercise 7.5.4 In each case, find a basis of V .

a. V = {[xn ) | xn+4 = 2xn+2 − xn+3 , for n ≥ 0}

b. V = {[xn ) | xn+4 = −xn+2 + 2xn+3 , for n ≥ 0}

Exercise 7.5.5 Suppose that [xn ) satisfies a linear recurrence relation of length k. If {e0 = (1, 0, . . . , 0), e1 = (0, 1, . . . , 0), . . . , ek−1 = (0, 0, . . . , 1)} is the standard basis of Rk , show that

xn = x0 T (e0 ) + x1 T (e1 ) + · · · + xk−1 T (ek−1 )

holds for all n ≥ k. (Here T is as in Theorem 7.5.1.)

Exercise 7.5.6 Show that the shift operator S is onto but not one-to-one. Find ker S.

Exercise 7.5.7 Find a basis for the space V of all sequences [xn ) satisfying xn+2 = −xn .
8. Orthogonality
In Section 5.3 we introduced the dot product in Rn and extended the basic geometric notions of length and
distance. A set {f1 , f2 , . . . , fm } of nonzero vectors in Rn was called an orthogonal set if fi · f j = 0 for all i ≠ j, and it was proved that every orthogonal set is independent. In particular, it was observed that
the expansion of a vector as a linear combination of orthogonal basis vectors is easy to obtain because
formulas exist for the coefficients. Hence the orthogonal bases are the “nice” bases, and much of this
chapter is devoted to extending results about bases to orthogonal bases. This leads to some very powerful
methods and theorems. Our first task is to show that every subspace of Rn has an orthogonal basis.
8.1 Orthogonal Complements and Projections

If {v1 , . . . , vm } is linearly independent in a general vector space, and if vm+1 is not in span {v1 , . . . , vm }, then {v1 , . . . , vm , vm+1 } is independent (Lemma 6.4.1). Here is the analog for orthogonal sets in Rn.

Lemma 8.1.1: Orthogonal Lemma
Let {f1 , f2 , . . . , fm } be an orthogonal set in Rn. Given x in Rn, write

fm+1 = x − (x · f1 / ‖f1‖²) f1 − (x · f2 / ‖f2‖²) f2 − · · · − (x · fm / ‖fm‖²) fm

Then:

1. fm+1 · fk = 0 for k = 1, 2, . . . , m.

2. If x is not in span {f1 , . . . , fm }, then fm+1 ≠ 0 and {f1 , . . . , fm , fm+1 } is an orthogonal set.

Proof. For convenience, write ti = (x · fi )/‖fi‖² for each i. Given 1 ≤ k ≤ m:

fm+1 · fk = (x − t1 f1 − · · · − tk fk − · · · − tm fm ) · fk
= x · fk − t1 (f1 · fk ) − · · · − tk (fk · fk ) − · · · − tm (fm · fk )
= x · fk − tk ‖fk‖²
= 0

This proves (1), and (2) follows because fm+1 ≠ 0 if x is not in span {f1 , . . . , fm }.
The orthogonal lemma has three important consequences for Rn . The first is an extension for orthog-
onal sets of the fundamental fact that any independent set is part of a basis (Theorem 6.4.1).
Theorem 8.1.1
Let U be a subspace of Rn.

1. Every orthogonal subset {f1 , . . . , fm } in U is a subset of an orthogonal basis of U .

2. U has an orthogonal basis.
Proof.
1. If span {f1 , . . . , fm } = U , it is already a basis. Otherwise, there exists x in U outside span {f1 , . . . , fm }.
If fm+1 is as given in the orthogonal lemma, then fm+1 is in U and {f1 , . . . , fm , fm+1 } is orthogonal.
If span {f1 , . . . , fm , fm+1 } = U , we are done. Otherwise, the process continues to create larger and
larger orthogonal subsets of U . They are all independent by Theorem 5.3.5, so we have a basis when
we reach a subset containing dim U vectors.
2. If U = {0}, the empty basis is orthogonal. Otherwise, if f ≠ 0 is in U , then {f} is orthogonal, so (2) follows from (1).
We can improve upon (2) of Theorem 8.1.1. In fact, the second consequence of the orthogonal lemma
is a procedure by which any basis {x1 , . . . , xm } of a subspace U of Rn can be systematically modified to
yield an orthogonal basis {f1 , . . . , fm } of U . The fi are constructed one at a time from the xi .
To start the process, take f1 = x1 . Then x2 is not in span {f1 } because {x1 , x2 } is independent, so take

f2 = x2 − (x2 · f1 / ‖f1‖²) f1

Thus {f1 , f2 } is orthogonal by Lemma 8.1.1. Moreover, span {f1 , f2 } = span {x1 , x2 } (verify), so x3 is not in span {f1 , f2 }. Hence {f1 , f2 , f3 } is orthogonal where

f3 = x3 − (x3 · f1 / ‖f1‖²) f1 − (x3 · f2 / ‖f2‖²) f2

Again, span {f1 , f2 , f3 } = span {x1 , x2 , x3 }, so x4 is not in span {f1 , f2 , f3 } and the process continues.
At the mth iteration we construct an orthogonal set {f1 , . . . , fm } such that span {f1 , . . . , fm } = span {x1 , . . . , xm } = U . Hence {f1 , f2 , . . . , fm } is the desired orthogonal basis of U . The procedure can be summarized as follows.

Theorem 8.1.2: Gram-Schmidt Orthogonalization Algorithm¹
If {x1 , x2 , . . . , xm } is any basis of a subspace U of Rn, construct f1 , f2 , . . . , fm in U successively as follows:

f1 = x1
f2 = x2 − (x2 · f1 / ‖f1‖²) f1
f3 = x3 − (x3 · f1 / ‖f1‖²) f1 − (x3 · f2 / ‖f2‖²) f2
. . .
fk = xk − (xk · f1 / ‖f1‖²) f1 − (xk · f2 / ‖f2‖²) f2 − · · · − (xk · fk−1 / ‖fk−1‖²) fk−1

for each k = 2, 3, . . . , m. Then

1. {f1 , f2 , . . . , fm } is an orthogonal basis of U .

2. span {f1 , . . . , fk } = span {x1 , . . . , xk } for each k = 1, 2, . . . , m.

The process (for k = 3) is depicted in the diagrams. Of course, the algorithm converts any basis of Rn itself into an orthogonal basis.
Example 8.1.1
Find an orthogonal basis of the row space of A = [1 1 −1 −1; 3 2 0 1; 1 0 1 0].

Solution. Let x1 , x2 , x3 denote the rows of A and observe that {x1 , x2 , x3 } is linearly independent. Take f1 = x1 . The algorithm gives

f2 = x2 − (x2 · f1 / ‖f1‖²) f1 = (3, 2, 0, 1) − (4/4)(1, 1, −1, −1) = (2, 1, 1, 2)
f3 = x3 − (x3 · f1 / ‖f1‖²) f1 − (x3 · f2 / ‖f2‖²) f2 = x3 − (0/4) f1 − (3/10) f2 = (1/10)(4, −3, 7, −6)

Hence {(1, 1, −1, −1), (2, 1, 1, 2), (1/10)(4, −3, 7, −6)} is the orthogonal basis provided by the algorithm. In hand calculations it may be convenient to eliminate fractions (see the Remark below), so {(1, 1, −1, −1), (2, 1, 1, 2), (4, −3, 7, −6)} is also an orthogonal basis for row A.
¹ Erhard Schmidt (1876–1959) was a German mathematician who studied under the great David Hilbert and later developed the theory of Hilbert spaces. He first described the present algorithm in 1907. Jørgen Pedersen Gram (1850–1916) was a Danish actuary.
Remark
Observe that the vector (x · fi / ‖fi‖²) fi is unchanged if a nonzero scalar multiple of fi is used in place of fi . Hence, if a newly constructed fi is multiplied by a nonzero scalar at some stage of the Gram-Schmidt algorithm, the subsequent fs will be unchanged. This is useful in actual calculations.
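In floating-point arithmetic the algorithm can be sketched as follows (an illustration only, assuming numpy; it is not part of the text). Applied to the rows of the matrix in Example 8.1.1 it reproduces the basis found there, up to the scaling discussed in the Remark.

    import numpy as np

    def gram_schmidt(rows):
        # Return an orthogonal basis: each f_k is x_k minus its
        # projections on the earlier f_i
        fs = []
        for x in rows:
            f = x.astype(float)
            for g in fs:
                f = f - (x @ g) / (g @ g) * g    # subtract (x . g / ||g||^2) g
            fs.append(f)
        return fs

    A = np.array([[1, 1, -1, -1],
                  [3, 2, 0, 1],
                  [1, 0, 1, 0]])

    for f in gram_schmidt(A):
        print(f)
    # [ 1.  1. -1. -1.]   [2. 1. 1. 2.]   [ 0.4 -0.3  0.7 -0.6]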
Projections
Suppose a point x and a plane U through the origin in R3 are given, and we want to find the point p in the plane that is closest to x. Our geometric intuition assures us that such a point p exists. In fact, p must be chosen in such a way that x − p is perpendicular to the plane. Now we make two observations: first, the plane U is a subspace of R3 (because U contains the origin); and second, the condition that x − p is perpendicular to the plane U means that x − p is orthogonal to every vector in U . In these terms the whole discussion makes sense in Rn. Furthermore, the orthogonal lemma provides exactly what is needed to find p in this more general setting.

If U is a subspace of Rn, the orthogonal complement U ⊥ of U (pronounced “U -perp”) is defined by

U ⊥ = {x in Rn | x · y = 0 for all y in U }
The following lemma collects some useful properties of the orthogonal complement; the proof of (1)
and (2) is left as Exercise 8.1.6.
Lemma 8.1.2
Let U be a subspace of Rn.

1. U ⊥ is a subspace of Rn.

2. {0}⊥ = Rn and (Rn)⊥ = {0}.

3. If U = span {x1 , x2 , . . . , xk }, then U ⊥ = {x in Rn | x · xi = 0 for i = 1, 2, . . . , k}.
Proof.
3. Let U = span {x1 , x2 , . . . , xk }; we must show that U ⊥ = {x | x · xi = 0 for each i}. If x is in U ⊥
then x · xi = 0 for all i because each xi is in U . Conversely, suppose that x · xi = 0 for all i; we must
show that x is in U ⊥ , that is, x · y = 0 for each y in U . Write y = r1 x1 + r2 x2 + · · · + rk xk , where each
ri is in R. Then, using Theorem 5.3.1,
x · y = r1 (x · x1 ) + r2 (x · x2 ) + · · · + rk (x · xk ) = r1 0 + r2 0 + · · · + rk 0 = 0
as required.
Example 8.1.2
Find U ⊥ if U = span {(1, −1, 2, 0), (1, 0, −2, 3)} in R4 .

Solution. By Lemma 8.1.2, x = (x, y, z, w) is in U ⊥ if and only if it is orthogonal to both (1, −1, 2, 0) and (1, 0, −2, 3); that is,

x − y + 2z = 0
x − 2z + 3w = 0

Gaussian elimination gives U ⊥ = span {(2, 4, 1, 0), (−3, −3, 0, 1)}.
Now suppose that U is a subspace of Rn with orthogonal basis {f1 , f2 , . . . , fm }, and that x is a vector in Rn. Define

p = (x · f1 / ‖f1‖²) f1 + (x · f2 / ‖f2‖²) f2 + · · · + (x · fm / ‖fm‖²) fm    (8.1)

Then p ∈ U and (by the orthogonal lemma) x − p ∈ U ⊥, so it looks like we have a generalization of Theorem 4.2.4.
However there is a potential problem: the formula (8.1) for p must be shown to be independent of the
choice of the orthogonal basis {f1 , f2 , . . . , fm }. To verify this, suppose that {f′1 , f′2 , . . . , f′m } is another
orthogonal basis of U , and write

p′ = (x · f′1 / ‖f′1‖²) f′1 + (x · f′2 / ‖f′2‖²) f′2 + · · · + (x · f′m / ‖f′m‖²) f′m
As before, p′ ∈ U and x − p′ ∈ U ⊥ , and we must show that p′ = p. To see this, write the vector p − p′ as
follows:
p − p′ = (x − p′ ) − (x − p)
This vector is in U (because p and p′ are in U ) and it is in U ⊥ (because x − p′ and x − p are in U ⊥), and
so it must be zero (it is orthogonal to itself!). This means p′ = p as desired.
Hence, the vector p in equation (8.1) depends only on x and the subspace U , and not on the choice
of orthogonal basis {f1 , . . . , fm } of U used to compute it. Thus, we are entitled to make the following
definition:
Let U be a subspace of Rn with orthogonal basis {f1 , f2 , . . . , fm }. If x is in Rn, the vector

proj U x = (x · f1 / ‖f1‖²) f1 + (x · f2 / ‖f2‖²) f2 + · · · + (x · fm / ‖fm‖²) fm

is called the orthogonal projection of x on U . For the zero subspace U = {0}, we define

proj {0} x = 0

Theorem 8.1.3: Projection Theorem
If U is a subspace of Rn and x is in Rn, write p = proj U x. Then:

1. p is in U and x − p is in U ⊥.

2. p is the vector in U closest to x in the sense that ‖x − p‖ < ‖x − y‖ for all y in U , y ≠ p.

Proof.
Example 8.1.3
Let U = span {x1 , x2 } in R4 where x1 = (1, 1, 0, 1) and x2 = (0, 1, 1, 2). If x = (3, −1, 0, 2),
find the vector in U closest to x and express x as the sum of a vector in U and a vector orthogonal
to U .
Solution. {x1 , x2 } is independent but not orthogonal. The Gram-Schmidt process gives an orthogonal basis {f1 , f2 } of U where f1 = x1 = (1, 1, 0, 1) and

f2 = x2 − (x2 · f1 / ‖f1‖²) f1 = x2 − (3/3) f1 = (−1, 0, 1, 1)

Hence

p = proj U x = (x · f1 / ‖f1‖²) f1 + (x · f2 / ‖f2‖²) f2 = (4/3)(1, 1, 0, 1) − (1/3)(−1, 0, 1, 1) = (1/3)(5, 4, −1, 3)

Thus, p is the vector in U closest to x, and x − p = (1/3)(4, −7, 1, 3) is orthogonal to every vector in U . (This can be verified by checking that it is orthogonal to the generators x1 and x2 of U .) The required decomposition of x is thus

x = p + (x − p) = (1/3)(5, 4, −1, 3) + (1/3)(4, −7, 1, 3)
Example 8.1.4
Find the point in the plane with equation 2x + y − z = 0 that is closest to the point (2, −1, −3).
Solution. We write R3 as rows. The plane is the subspace U whose points (x, y, z) satisfy
z = 2x + y. Hence

U = {(x, y, 2x + y) | x, y in R} = span {(0, 1, 1), (1, 0, 2)}

The Gram-Schmidt process produces an orthogonal basis {f1 , f2 } of U where f1 = (0, 1, 1) and f2 = (1, 0, 2) − (2/2)(0, 1, 1) = (1, −1, 1). Hence, the vector in U closest to x = (2, −1, −3) is

proj U x = (x · f1 / ‖f1‖²) f1 + (x · f2 / ‖f2‖²) f2 = −2f1 + 0f2 = (0, −2, −2)

Thus, the point in U closest to (2, −1, −3) is (0, −2, −2).
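Both examples follow the same pattern: find an orthogonal basis of U , then add up the projections on the basis vectors. A minimal sketch of that computation for Example 8.1.4 (illustration only; assumes numpy):

    import numpy as np

    def proj(orth_basis, x):
        # Orthogonal projection of x on U, given an ORTHOGONAL basis of U
        return sum((x @ f) / (f @ f) * f for f in orth_basis)

    f1 = np.array([0.0, 1.0, 1.0])
    f2 = np.array([1.0, -1.0, 1.0])
    x = np.array([2.0, -1.0, -3.0])

    p = proj([f1, f2], x)
    print(p)                      # [ 0. -2. -2.]
    print((x - p) @ f1, (x - p) @ f2)   # both 0: x - p is orthogonal to U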
The next theorem shows that projection on a subspace of Rn is actually a linear operator Rn → Rn .
Theorem 8.1.4
Let U be a fixed subspace of Rn. If we define T : Rn → Rn by T (x) = proj U x for all x in Rn, then:
1. T is a linear operator.
2. im T = U and ker T = U ⊥.
3. dim U + dim U ⊥ = n.
Proof. If U = {0}, then U ⊥ = Rn , and so T (x) = proj {0} x = 0 for all x. Thus T = 0 is the zero (linear)
operator, so (1), (2), and (3) hold. Hence assume that U ≠ {0}.
2. We have im T ⊆ U by (8.2) because each fi is in U . But if x is in U , then x = T (x) by (8.2) and the
expansion theorem applied to the space U . This shows that U ⊆ im T , so im T = U .
Now suppose that x is in U ⊥. Then x · fi = 0 for each i (again because each fi is in U ) so x is in
ker T by (8.2). Hence U ⊥ ⊆ ker T . On the other hand, Theorem 8.1.3 shows that x − T (x) is in U ⊥
for all x in Rn , and it follows that ker T ⊆ U ⊥. Hence ker T = U ⊥, proving (2).
3. This follows from (1), (2), and the dimension theorem (Theorem 7.2.4).
Exercise 8.1.1 In each case, use the Gram-Schmidt algorithm to convert the given basis B of V into an orthogonal basis.

a. V = R2 , B = {(1, −1), (2, 1)}

b. V = R2 , B = {(2, 1), (1, 2)}

c. V = R3 , B = {(1, −1, 1), (1, 0, 1), (1, 1, 2)}

Exercise 8.1.2 In each case, write x as the sum of a vector in U and a vector in U ⊥ .

a. x = (1, 5, 7), U = span {(1, −2, 3), (−1, 1, 1)}

b. x = (2, 1, 6), U = span {(3, −1, 2), (2, 0, −3)}

c. x = (3, 1, 5, 9), U = span {(1, 0, 1, 1), (0, 1, −1, 1), (−2, 0, 1, 1)}

d. x = (2, 0, 1, 6), U = span {(1, 1, 1, 1), (1, 1, −1, −1), (1, −1, 1, −1)}

Exercise 8.1.3 Let x = (1, −2, 1, 6) in R4 , and let U = span {(2, 1, 3, −4), (1, 2, 0, 1)}.

b. Show that {(1, 0, 2, −3), (4, 7, 1, 2)} is another orthogonal basis of U .

c. Use the basis in part (b) to compute proj U x.

Exercise 8.1.4 In each case, use the Gram-Schmidt algorithm to find an orthogonal basis of the subspace U , and find the vector in U closest to x.

b. U = span {(1, −1, 0), (−1, 0, 1)}, x = (2, 1, 0)

c. U = span {(1, 0, 1, 0), (1, 1, 1, 0), (1, 1, 0, 0)}, x = (2, 0, −1, 3)

d. U = span {(1, −1, 0, 1), (1, 1, 0, 0), (1, 1, 0, 1)}, x = (2, 0, 3, 1)

Exercise 8.1.5 Let U = span {v1 , v2 , . . . , vk }, vi in Rn, and let A be the k × n matrix with the vi as rows.

Exercise 8.1.6

a. Prove part 1 of Lemma 8.1.2.

a. Show that U ⊥ = Rn if and only if U = {0}.

b. Show that U ⊥ = {0} if and only if U = Rn.

Exercise 8.1.10 If U is a subspace of Rn, show that proj U x = x for all x in U .

Exercise 8.1.11 If U is a subspace of Rn, show that x = proj U x + proj U⊥ x for all x in Rn.

Exercise 8.1.12 If {f1 , . . . , fn } is an orthogonal basis of Rn and U = span {f1 , . . . , fm }, show that U ⊥ = span {fm+1 , . . . , fn }.

Exercise 8.1.13 If U is a subspace of Rn, show that U ⊥⊥ = U . [Hint: Show that U ⊆ U ⊥⊥ , then use Theorem 8.1.4 (3) twice.]

Exercise 8.1.14 If U is a subspace of Rn, show how to find an n × n matrix A such that U = {x | Ax = 0}. [Hint: Exercise 8.1.13.]

Exercise 8.1.15 Write Rn as rows. If A is an n × n matrix, write its null space as null A = {x in Rn | Axᵀ = 0}. Show that:

a. null A = ( row A)⊥ ;

b. null Aᵀ = ( col A)⊥ .

i. E ² = E = E ᵀ (E is a projection matrix).

ii. (x − xE) · (yE) = 0 for all x and y in Rn.

c. If EF = 0 = FE and E and F are projection matrices, show that E + F is also a projection matrix.

d. If A is m × n and AAᵀ is invertible, show that E = Aᵀ(AAᵀ)⁻¹A is a projection matrix.

Exercise 8.1.18 Let A be an n × n matrix of rank r. Show that there is an invertible n × n matrix U such that UA is a row-echelon matrix with the property that the first r rows are orthogonal. [Hint: Let R be the row-echelon form of A, and use the Gram-Schmidt process on the nonzero rows of R from the bottom up. Use Lemma 2.4.1.]

Exercise 8.1.19 Let A be an (n−1) × n matrix with rows x1 , x2 , . . . , xn−1 and let Ai denote the (n−1) × (n−1) matrix obtained from A by deleting column i. Define the vector y in Rn by

y = ( det A1 , − det A2 , det A3 , . . . , (−1)ⁿ⁺¹ det An )

Show that:

a. xi · y = 0 for all i = 1, 2, . . . , n − 1. [Hint: Write Bi for the n × n matrix obtained by adjoining xi as a new top row of A, and show that det Bi = 0.]

b. ... the solutions of Axᵀ = 0 are given by ty, t a parameter.
8.2 Orthogonal Diagonalization
Recall (Theorem 5.5.3) that an n × n matrix A is diagonalizable if and only if it has n linearly independent
eigenvectors. Moreover, the matrix P with these eigenvectors as columns is a diagonalizing matrix for A,
that is
P−1 AP is diagonal.
As we have seen, the really nice bases of Rn are the orthogonal ones, so a natural question is: which n × n
matrices have an orthogonal basis of eigenvectors? These turn out to be precisely the symmetric matrices,
and this is the main result of this section.
Before proceeding, recall that an orthogonal set of vectors is called orthonormal if ‖v‖ = 1 for each vector v in the set, and that any orthogonal set {v1 , v2 , . . . , vk } can be “normalized”, that is converted into an orthonormal set {(1/‖v1‖)v1 , (1/‖v2‖)v2 , . . . , (1/‖vk‖)vk }. In particular, if a matrix A has n orthogonal eigenvectors, they can (by normalizing) be taken to be orthonormal. The corresponding diagonalizing matrix P has orthonormal columns, and such matrices are very easy to invert.
Theorem 8.2.1
The following conditions are equivalent for an n × n matrix P.

1. P is invertible and P⁻¹ = Pᵀ.

2. The rows of P are orthonormal.

3. The columns of P are orthonormal.
Proof. First recall that condition (1) is equivalent to PPT = I by Corollary 2.4.1 of Theorem 2.4.5. Let
x1 , x2 , . . . , xn denote the rows of P. Then xTj is the jth column of PT , so the (i, j)-entry of PPT is xi · x j .
Thus PPᵀ = I means that xi · x j = 0 if i ≠ j and xi · x j = 1 if i = j. Hence condition (1) is equivalent to (2). The proof of the equivalence of (1) and (3) is similar.

An n × n matrix P is called an orthogonal matrix² if it satisfies one (and hence all) of the conditions in Theorem 8.2.1.

Example 8.2.1
The rotation matrix [cos θ −sin θ ; sin θ cos θ ] is orthogonal for any angle θ .
These orthogonal matrices have the virtue that they are easy to invert—simply take the transpose. But
they have many other important properties as well. If T : Rn → Rn is a linear operator, we will prove
2 In view of (2) and (3) of Theorem 8.2.1, orthonormal matrix might be a better name. But orthogonal matrix is standard.
(Theorem 10.4.3) that T is distance preserving if and only if its matrix is orthogonal. In particular, the
matrices of rotations and reflections about the origin in R2 and R3 are all orthogonal (see Example 8.2.1).
It is not enough that the rows of a matrix A are merely orthogonal for A to be an orthogonal matrix.
Here is an example.
Example 8.2.2
2 1 1
The matrix −1 1 1 has orthogonal rows but the columns are not orthogonal. However, if
0 −1 1
√2 √1 √1
6 6 6
1 1
the rows are normalized, the resulting matrix −1
√ √ √ is orthogonal (so the columns are
3 3 3
0 −1
√ √1
2 2
now orthonormal as the reader can verify).
Example 8.2.3
If P and Q are orthogonal matrices, then PQ is also orthogonal, as is P−1 = PT .
Theorem 8.2.2: Principal Axes Theorem
The following conditions are equivalent for an n × n matrix A.

1. A has an orthonormal set of n eigenvectors.

2. A is orthogonally diagonalizable; that is, an orthogonal matrix P exists such that P⁻¹AP = PᵀAP is diagonal.

3. A is symmetric.

Proof. (1) ⇔ (2). Given (1), let x1 , x2 , . . . , xn be orthonormal eigenvectors of A. Then the matrix P = [x1 x2 · · · xn ] with these vectors as columns is orthogonal, and P⁻¹AP is diagonal by Theorem 3.3.4. This proves (2). Conversely, given (2) let P⁻¹AP be diagonal where P is orthogonal. If x1 , x2 , . . . , xn are the columns of P then {x1 , x2 , . . . , xn } is an orthonormal basis of Rn that consists of eigenvectors of A by Theorem 3.3.4. This proves (1).
(2) ⇒ (3). If PᵀAP = D is diagonal, where P⁻¹ = Pᵀ, then A = PDPᵀ. But Dᵀ = D, so this gives Aᵀ = Pᵀᵀ Dᵀ Pᵀ = PDPᵀ = A.

(3) ⇒ (2). If A is an n × n symmetric matrix, we proceed by induction on n. If n = 1, A is already diagonal. If n > 1, assume that (3) ⇒ (2) for (n − 1) × (n − 1) symmetric matrices. By Theorem 5.5.7 let λ1 be a (real) eigenvalue of A, and let Ax1 = λ1 x1 , where ‖x1‖ = 1. Use the Gram-Schmidt algorithm to find an orthonormal basis {x1 , x2 , . . . , xn } for Rn. Let P1 = [x1 x2 · · · xn ], so P1 is an orthogonal matrix and

P1ᵀAP1 = [λ1 B; 0 A1]

in block form by Lemma 5.5.2. But P1ᵀAP1 is symmetric (A is), so it follows that B = 0 and A1 is symmetric. Then, by induction, there exists an (n − 1) × (n − 1) orthogonal matrix Q such that QᵀA1 Q = D1 is diagonal. Observe that P2 = [1 0; 0 Q] is orthogonal, and compute:

(P1 P2 )ᵀA(P1 P2 ) = P2ᵀ(P1ᵀAP1 )P2 = [1 0; 0 Qᵀ][λ1 0; 0 A1][1 0; 0 Q] = [λ1 0; 0 D1]

is diagonal. Because P1 P2 is orthogonal, this proves (2).
A set of orthonormal eigenvectors of a symmetric matrix A is called a set of principal axes for A. The
name comes from geometry, and this is discussed in Section 8.9. Because the eigenvalues of a (real)
symmetric matrix are real, Theorem 8.2.2 is also called the real spectral theorem, and the set of distinct
eigenvalues is called the spectrum of the matrix. In full generality, the spectral theorem is a similar result
for matrices with complex entries (Theorem 8.7.8).
Example 8.2.4
Find an orthogonal matrix P such that P⁻¹AP is diagonal, where A = [1 0 −1; 0 1 2; −1 2 5].

Solution. The characteristic polynomial of A is cA (x) = x(x − 1)(x − 6), so the eigenvalues are λ = 0, 1, and 6, with corresponding eigenvectors x1 = (1, −2, 1)ᵀ, x2 = (2, 1, 0)ᵀ, and x3 = (−1, 2, 5)ᵀ respectively. Moreover, by what appears to be remarkably good luck, these eigenvectors are orthogonal. We have ‖x1‖² = 6, ‖x2‖² = 5, and ‖x3‖² = 30, so

P = [(1/√6)x1 (1/√5)x2 (1/√30)x3 ] = (1/√30) [√5 2√6 −1; −2√5 √6 2; √5 0 5]

is an orthogonal matrix with P⁻¹AP = PᵀAP = diag (0, 1, 6).
Actually, the fact that the eigenvectors in Example 8.2.4 are orthogonal is no coincidence. Theo-
rem 5.5.4 guarantees they are linearly independent (they correspond to distinct eigenvalues); the fact that
the matrix is symmetric implies that they are orthogonal. To prove this we need the following useful fact
about symmetric matrices.
Theorem 8.2.3
If A is an n × n symmetric matrix, then

(Ax) · y = x · (Ay) for all columns x and y in Rn
Theorem 8.2.4
If A is a symmetric matrix, then eigenvectors of A corresponding to distinct eigenvalues are
orthogonal.
Proof. Let Ax = λ x and Ay = µ y, where λ and µ are distinct eigenvalues. Using Theorem 8.2.3, we compute

λ (x · y) = (λ x) · y = (Ax) · y = x · (Ay) = x · (µ y) = µ (x · y)

Hence (λ − µ )(x · y) = 0, and so x · y = 0 because λ ≠ µ .
Now the procedure for diagonalizing a symmetric n × n matrix is clear. Find the distinct eigenvalues
(all real by Theorem 5.5.7) and find orthonormal bases for each eigenspace (the Gram-Schmidt algorithm
may be needed). Then the set of all these basis vectors is orthonormal (by Theorem 8.2.4) and contains n
vectors. Here is an example.
Example 8.2.5
Orthogonally diagonalize the symmetric matrix A = [8 −2 2; −2 5 4; 2 4 5].

Solution. The characteristic polynomial is

cA (x) = det (xI − A) = det [x−8 2 −2; 2 x−5 −4; −2 −4 x−5] = x(x − 9)²

Hence the distinct eigenvalues are 0 and 9 of multiplicities 1 and 2, respectively, so dim (E0 ) = 1 and dim (E9 ) = 2 by Theorem 5.5.6 (A is diagonalizable, being symmetric). Gaussian elimination gives

E0 (A) = span {x1 }, x1 = (1, 2, −2)ᵀ, and E9 (A) = span {(−2, 1, 0)ᵀ, (2, 0, 1)ᵀ}

The eigenvectors in E9 are both orthogonal to x1 as Theorem 8.2.4 guarantees, but not to each other. However, the Gram-Schmidt process yields an orthogonal basis {x2 , x3 } of E9 (A) where x2 = (−2, 1, 0)ᵀ and x3 = (2, 4, 5)ᵀ. Normalizing gives the orthogonal matrix P = [(1/3)x1 (1/√5)x2 (1/√45)x3 ], and PᵀAP = diag (0, 9, 9).
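Numerically, this entire procedure is packaged in standard routines: numpy's eigh returns the eigenvalues of a symmetric matrix together with orthonormal eigenvectors. A sketch checking Example 8.2.5 (illustration only):

    import numpy as np

    A = np.array([[8.0, -2.0, 2.0],
                  [-2.0, 5.0, 4.0],
                  [2.0, 4.0, 5.0]])

    evals, P = np.linalg.eigh(A)    # columns of P are orthonormal eigenvectors
    print(evals)                                       # approximately [0. 9. 9.]
    print(np.allclose(P.T @ P, np.eye(3)))             # True: P is orthogonal
    print(np.allclose(P.T @ A @ P, np.diag(evals)))    # True: P^T A P is diagonal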
Example 8.2.6
Find principal axes for the quadratic form q = x1² − 4x1 x2 + x2² .

Solution. In order to utilize diagonalization, we first express q in matrix form. Observe that

q = [x1 x2] [1 −4; 0 1] [x1; x2]

The matrix here is not symmetric, but we can remedy that by writing

q = x1² − 2x1 x2 − 2x2 x1 + x2²

Then we have

q = [x1 x2] [1 −2; −2 1] [x1; x2] = xᵀAx

where x = [x1; x2] and A = [1 −2; −2 1] is symmetric. The eigenvalues of A are λ1 = 3 and λ2 = −1, with corresponding (orthogonal) eigenvectors x1 = [1; −1] and x2 = [1; 1]. Since ‖x1‖ = ‖x2‖ = √2,

P = (1/√2) [1 1; −1 1] is orthogonal and PᵀAP = D = [3 0; 0 −1]

Now define new variables y = [y1; y2] by y = Pᵀx, equivalently x = Py (since P⁻¹ = Pᵀ). Hence y1 = (1/√2)(x1 − x2 ) and y2 = (1/√2)(x1 + x2 ), and

q = xᵀAx = yᵀ(PᵀAP)y = yᵀDy = 3y1² − y2²
Observe that the quadratic form q in Example 8.2.6 can be diagonalized in other ways. For example, completing the square gives q = z1² − z2² , where z1 = x1 − 2x2 and z2 = √3 x2 . We examine this more carefully in Section 8.9.
If we are willing to replace “diagonal” by “upper triangular” in the principal axes theorem, we can
weaken the requirement that A is symmetric to insisting only that A has real eigenvalues.
Theorem 8.2.5: Triangulation Theorem
If A is an n × n matrix with n real eigenvalues, an orthogonal matrix P exists such that PᵀAP is upper triangular.

Proof. We modify the proof of Theorem 8.2.2. If Ax1 = λ1 x1 where ‖x1‖ = 1, let {x1 , x2 , . . . , xn } be an orthonormal basis of Rn, and let P1 = [x1 x2 · · · xn ]. Then P1 is orthogonal and P1ᵀAP1 = [λ1 B; 0 A1] in block form. By induction, let QᵀA1 Q = T1 be upper triangular where Q is of size (n − 1) × (n − 1) and orthogonal. Then P2 = [1 0; 0 Q] is orthogonal, so P = P1 P2 is also orthogonal and PᵀAP = [λ1 BQ; 0 T1] is upper triangular.
The proof of Theorem 8.2.5 gives no way to construct the matrix P. However, an algorithm will be given in
Section 11.1 where an improved version of Theorem 8.2.5 is presented. In a different direction, a version
of Theorem 8.2.5 holds for an arbitrary matrix with complex entries (Schur’s theorem in Section 8.7).
As for a diagonal matrix, the eigenvalues of an upper triangular matrix are displayed along the main
diagonal. Because A and PT AP have the same determinant and trace whenever P is orthogonal, Theo-
rem 8.2.5 gives:
Corollary 8.2.1
If A is an n × n matrix with real eigenvalues λ1 , λ2 , . . . , λn (possibly not all distinct), then
det A = λ1 λ2 . . . λn and tr A = λ1 + λ2 + · · · + λn .
This corollary remains true even if the eigenvalues are not real (using Schur’s theorem).
Exercise 8.2.1 Normalize the rows to make each of the following matrices orthogonal.

e. A = [cos θ −sin θ 0; sin θ cos θ 0; 0 0 2]

f. A = [2 1 −1; 1 −1 1; 0 1 1]

g. A = [−1 2 2; 2 −1 2; 2 2 −1]

h. A = [2 6 −3; 3 2 6; −6 3 2]

Exercise 8.2.2 If P is a triangular orthogonal matrix, show that P is diagonal and that all diagonal entries are 1 or −1.

Exercise 8.2.3 If P is orthogonal, show that kP is orthogonal if and only if k = 1 or k = −1.

Exercise 8.2.4 If the first two rows of an orthogonal matrix are (1/3, 2/3, 2/3) and (2/3, 1/3, −2/3), find all possible third rows.

Exercise 8.2.5 For each matrix A, find an orthogonal matrix P such that P⁻¹AP is diagonal.

a. A = [0 1; 1 0]

b. A = [1 −1; −1 1]

c. A = [3 0 0; 0 2 2; 0 2 5]

d. A = [3 0 7; 0 5 0; 7 0 3]

e. A = [1 1 0; 1 1 0; 0 0 2]

f. A = [5 −2 −4; −2 8 −2; −4 −2 5]

g. A = [5 3 0 0; 3 5 0 0; 0 0 7 1; 0 0 1 7]

h. A = [3 5 −1 1; 5 3 1 −1; −1 1 3 5; 1 −1 5 3]

Exercise 8.2.6 ... k = √(a² + c²), and find an orthogonal matrix P such that P⁻¹AP is diagonal.

Exercise 8.2.7 Consider A = [0 0 a; 0 b 0; a 0 0]. Show that cA (x) = (x − b)(x − a)(x + a) and find an orthogonal matrix P such that P⁻¹AP is diagonal.

Exercise 8.2.8 Given A = [b a; a b], show that cA (x) = (x − a − b)(x + a − b) and find an orthogonal matrix P such that P⁻¹AP is diagonal.

Exercise 8.2.9 Consider A = [b 0 a; 0 b 0; a 0 b]. Show that cA (x) = (x − b)(x − b − a)(x − b + a) and find an orthogonal matrix P such that P⁻¹AP is diagonal.

Exercise 8.2.10 In each case find new variables y1 and y2 that diagonalize the quadratic form q.

a. q = x1² + 6x1 x2 + x2²

b. q = x1² + 4x1 x2 − 2x2²

Exercise 8.2.11 Show that the following are equivalent for a symmetric matrix A.

a. A is orthogonal.

b. A² = I.

c. All eigenvalues of A are ±1.

[Hint: For (b) if and only if (c), use Theorem 8.2.2.]

Exercise 8.2.12 We call matrices A and B orthogonally similar (and write A ∼° B) if B = PᵀAP for an orthogonal matrix P.

a. Show that A ∼° A for all A; A ∼° B ⇒ B ∼° A; and A ∼° B and B ∼° C ⇒ A ∼° C.

b. Show that the following are equivalent for two symmetric matrices A and B.

i. A and B are similar.

ii. A and B are orthogonally similar.

iii. A and B have the same eigenvalues.

b. Show that A² and B² are orthogonally similar.

c. Show that, if A is symmetric, so is B.

Exercise 8.2.14 If A is symmetric, show that every eigenvalue of A is nonnegative if and only if A = B² for some symmetric matrix B.

Exercise 8.2.15 Prove the converse of Theorem 8.2.3: If (Ax) · y = x · (Ay) for all n-columns x and y, then A is symmetric.

Exercise 8.2.16 Show that every eigenvalue of A is zero if and only if A is nilpotent (Aᵏ = 0 for some k ≥ 1).

Exercise 8.2.17 If A has real eigenvalues, show that A = B + C where B is symmetric and C is nilpotent. [Hint: Theorem 8.2.5.]

Exercise 8.2.18 Let P be an orthogonal matrix.

a. Show that det P = 1 or det P = −1.

b. Give 2 × 2 examples of P such that det P = 1 and det P = −1.

c. If det P = −1, show that I + P has no inverse. [Hint: Pᵀ(I + P) = (I + P)ᵀ.]

d. If P is n × n and det P ≠ (−1)ⁿ, show that I − P has no inverse. [Hint: Pᵀ(I − P) = −(I − P)ᵀ.]

Exercise 8.2.19 We call a square matrix E a projection matrix if E ² = E = E ᵀ. (See Exercise 8.1.17.)

a. If E is a projection matrix, show that P = I − 2E is orthogonal and symmetric.

Exercise 8.2.20 A matrix that we obtain from the identity matrix by writing its rows in a different order is called a permutation matrix. Show that every permutation matrix is orthogonal.

Exercise 8.2.21 If the rows r1 , . . . , rn of the n × n matrix A = [ai j ] are orthogonal, show that the (i, j)-entry of A⁻¹ is a ji /‖r j‖² .

Exercise 8.2.22

a. Let A be an m × n matrix. Show that the following are equivalent.

i. A has orthogonal rows.

ii. A can be factored as A = DP, where D is invertible and diagonal and P has orthonormal rows.

iii. AAᵀ is an invertible, diagonal matrix.

b. Show that an n × n matrix A has orthogonal rows if and only if A can be factored as A = DP, where P is orthogonal and D is diagonal and invertible.

Exercise 8.2.23 Let A be a skew-symmetric matrix; that is, Aᵀ = −A. Assume that A is an n × n matrix.

a. Show that I + A is invertible. [Hint: By Theorem 2.4.5, it suffices to show that (I + A)x = 0, x in Rn, implies x = 0. Compute x · x = xᵀx, and use the fact that Ax = −x and A²x = x.]

b. Show that P = (I − A)(I + A)⁻¹ is orthogonal.

c. Show that every orthogonal matrix P such that I + P is invertible arises as in part (b) from some skew-symmetric matrix A. [Hint: Solve P = (I − A)(I + A)⁻¹ for A.]

Exercise 8.2.24 Show that the following are equivalent for an n × n matrix P.

d. (Px) · (Py) = x · y for all columns x and y in Rn. [Hints: For (c) ⇒ (d), see Exercise 5.3.14(a). For (d) ⇒ (a), show that column i of P equals Pei , where ei is column i of the identity matrix.]
8.3 Positive Definite Matrices
All the eigenvalues of any symmetric matrix are real; this section is about the case in which the eigenvalues
are positive. These matrices, which arise whenever optimization (maximum and minimum) problems are
encountered, have countless applications throughout science and engineering. They also arise in statistics
(for example, in factor analysis used in the social sciences) and in geometry (see Section 8.9). We will
encounter them again in Chapter 10 when describing all inner products in Rn .
Because these matrices are symmetric, the principal axes theorem plays a central role in the theory. A square matrix A is called positive definite if A is symmetric and every eigenvalue λ of A is positive, that is λ > 0.
Theorem 8.3.1
If A is positive definite, then it is invertible and det A > 0.
Proof. If A is n × n and the eigenvalues are λ1 , λ2 , . . . , λn , then det A = λ1 λ2 · · · λn > 0 by the principal
axes theorem (or the corollary to Theorem 8.2.5).
If x is a column in Rn and A is any real n × n matrix, we view the 1 × 1 matrix xT Ax as a real number.
With this convention, we have the following characterization of positive definite matrices.
Theorem 8.3.2
A symmetric matrix A is positive definite if and only if xᵀAx > 0 for every column x ≠ 0 in Rn.
Proof. A is symmetric so, by the principal axes theorem, let PᵀAP = D = diag (λ1 , λ2 , . . . , λn ) where P⁻¹ = Pᵀ and the λi are the eigenvalues of A. Given a column x in Rn, write y = Pᵀx = (y1 , y2 , . . . , yn )ᵀ. Then

xᵀAx = xᵀ(PDPᵀ)x = yᵀDy = λ1 y1² + λ2 y2² + · · · + λn yn²    (8.3)

If A is positive definite and x ≠ 0, then xᵀAx > 0 by (8.3) because some y j ≠ 0 and every λi > 0. Conversely, if xᵀAx > 0 whenever x ≠ 0, let x = Pe j ≠ 0 where e j is column j of In . Then y = e j , so (8.3) reads λ j = xᵀAx > 0.
Note that Theorem 8.3.2 shows that the positive definite matrices are exactly the symmetric matrices A for
which the quadratic form q = xT Ax takes only positive values.
Example 8.3.1
If U is any invertible n × n matrix, show that A = U ᵀU is positive definite.

Solution. A is symmetric because Aᵀ = U ᵀ(U ᵀ)ᵀ = U ᵀU = A. If x ≠ 0 in Rn, then

xᵀAx = xᵀU ᵀU x = (U x)ᵀ(U x) = ‖U x‖² > 0

because U x ≠ 0 (U is invertible). Hence A is positive definite by Theorem 8.3.2.
It is remarkable that the converse to Example 8.3.1 is also true. In fact every positive definite matrix
A can be factored as A = U T U where U is an upper triangular matrix with positive elements on the main
diagonal. However, before verifying this, we introduce another concept that is central to any discussion of
positive definite matrices.
If A is any n × n matrix, let (r) A denote the r × r submatrix in the upper left corner of A; that is, (r) A is
the matrix obtained from A by deleting the last n − r rows and columns. The matrices (1) A, (2) A, (3) A, . . . ,
(n) A = A are called the principal submatrices of A.
Example 8.3.2
If A = [10 5 2; 5 3 2; 2 2 3] then (1)A = [10], (2)A = [10 5; 5 3], and (3)A = A.
Lemma 8.3.1
If A is positive definite, so is each principal submatrix (r) A for r = 1, 2, . . . , n.
Proof. Write A = [(r)A P; Q R] in block form. If y ≠ 0 in Rr, write x = [y; 0] in Rn. Then x ≠ 0, so the fact that A is positive definite gives

0 < xᵀAx = [yᵀ 0] [(r)A P; Q R] [y; 0] = yᵀ((r)A)y

This shows that (r)A is positive definite by Theorem 8.3.2.⁵
⁵ A similar argument shows that, if B is any matrix obtained from a positive definite matrix A by deleting certain rows and deleting the same columns, then B is also positive definite.
Theorem 8.3.3
The following conditions are equivalent for a symmetric n × n matrix A:

1. A is positive definite.

2. det ((r)A) > 0 for each r = 1, 2, . . . , n.

3. A = U ᵀU where U is an upper triangular matrix with positive entries on the main diagonal.
Furthermore, the factorization in (3) is unique (called the Cholesky factorization6 of A).
Proof. First, (3) ⇒ (1) by Example 8.3.1, and (1) ⇒ (2) by Lemma 8.3.1 and Theorem 8.3.1.
(2) ⇒ (3). Assume (2) and proceed by induction on n. If n = 1, then A = [a] where a > 0 by (2), so take U = [√a]. If n > 1, write B = (n−1)A. Then B is symmetric and satisfies (2) so, by induction, we have B = U ᵀU as in (3) where U is of size (n − 1) × (n − 1). Then, as A is symmetric, it has block form A = [B p; pᵀ b] where p is a column in Rn−1 and b is in R. If we write x = (U ᵀ)⁻¹p and c = b − xᵀx, block multiplication gives

A = [U ᵀU p; pᵀ b] = [U ᵀ 0; xᵀ 1][U x; 0 c]

as the reader can verify. Taking determinants and applying Theorem 3.1.5 gives det A = det (U ᵀ) det U · c = c( det U )². Hence c > 0 because det A > 0 by (2), so the above factorization can be written

A = [U ᵀ 0; xᵀ √c][U x; 0 √c]

Since U has positive diagonal entries, this proves (3).
As to the uniqueness, suppose that A = U T U = U1T U1 are two Cholesky factorizations. Now write
D = UU1−1 = (U T )−1U1T . Then D is upper triangular, because D = UU1−1, and lower triangular, because
D = (U T )−1U1T , and so it is a diagonal matrix. Thus U = DU1 and U1 = DU , so it suffices to show that
D = I. But eliminating U1 gives U = D2U , so D2 = I because U is invertible. Since the diagonal entries
of D are positive (this is true of U and U1 ), it follows that D = I.
The remarkable thing is that the matrix U in the Cholesky factorization is easy to obtain from A using row operations. The key is that Step 1 of the following algorithm is possible for any positive definite matrix A. A proof of the algorithm is given following Example 8.3.3.

Algorithm for the Cholesky Factorization
If A is a positive definite matrix, the Cholesky factorization A = U ᵀU can be obtained as follows:

Step 1. Carry A to an upper triangular matrix U1 with positive diagonal entries using row operations, each of which adds a multiple of a row to a lower row.

Step 2. Obtain U from U1 by dividing each row of U1 by the square root of the diagonal entry in that row.
Example 8.3.3
Find the Cholesky factorization of A = \begin{bmatrix} 10 & 5 & 2 \\ 5 & 3 & 2 \\ 2 & 2 & 3 \end{bmatrix}.

Solution. The matrix A is positive definite by Theorem 8.3.3 because det ^{(1)}A = 10 > 0, det ^{(2)}A = 5 > 0, and det ^{(3)}A = det A = 3 > 0. Hence Step 1 of the algorithm is carried out as follows:

A = \begin{bmatrix} 10 & 5 & 2 \\ 5 & 3 & 2 \\ 2 & 2 & 3 \end{bmatrix} → \begin{bmatrix} 10 & 5 & 2 \\ 0 & \frac{1}{2} & 1 \\ 0 & 1 & \frac{13}{5} \end{bmatrix} → \begin{bmatrix} 10 & 5 & 2 \\ 0 & \frac{1}{2} & 1 \\ 0 & 0 & \frac{3}{5} \end{bmatrix} = U_1

Now carry out Step 2 on U_1 to obtain

U = \begin{bmatrix} \sqrt{10} & \frac{5}{\sqrt{10}} & \frac{2}{\sqrt{10}} \\ 0 & \frac{1}{\sqrt{2}} & \sqrt{2} \\ 0 & 0 & \sqrt{\frac{3}{5}} \end{bmatrix}

The reader can verify that U^T U = A.
Proof of the Cholesky Algorithm. If A is positive definite, let A = U^T U be the Cholesky factorization, and let D = diag(d_1, . . . , d_n) be the common diagonal of U and U^T. Then U^T D^{−1} is lower triangular with ones on the diagonal (call such matrices LT-1). Hence L = (U^T D^{−1})^{−1} is also LT-1, and so I_n → L by a sequence of row operations each of which adds a multiple of a row to a lower row (verify; modify columns right to left). But then A → LA by the same sequence of row operations (see the discussion preceding Theorem 2.5.1). Since LA = [D(U^T)^{−1}][U^T U] = DU is upper triangular with positive entries on the diagonal, this shows that Step 1 of the algorithm is possible.

Turning to Step 2, let A → U_1 as in Step 1 so that U_1 = L_1 A where L_1 is LT-1. Since A is symmetric, we get

L_1 U_1^T = L_1 (L_1 A)^T = L_1 A^T L_1^T = L_1 A L_1^T = U_1 L_1^T   (8.4)

Let D_1 = diag(e_1, . . . , e_n) denote the diagonal of U_1. Then (8.4) gives L_1 (U_1^T D_1^{−1}) = U_1 L_1^T D_1^{−1}. This is both upper triangular (right side) and LT-1 (left side), and so must equal I_n. In particular, U_1^T D_1^{−1} = L_1^{−1}. Now let D_2 = diag(√e_1, . . . , √e_n), so that D_2^2 = D_1. If we write U = D_2^{−1} U_1 we have

U^T U = (U_1^T D_2^{−1})(D_2^{−1} U_1) = U_1^T (D_2^2)^{−1} U_1 = (U_1^T D_1^{−1}) U_1 = (L_1^{−1}) U_1 = A

This proves the algorithm because U = D_2^{−1} U_1 is upper triangular with positive diagonal entries.
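The two steps of the algorithm are easy to carry out by machine. Here is a short sketch in Python with NumPy (our illustration; the function name cholesky_U is ours, not the text's). It reproduces the factor U of Example 8.3.3 and agrees with numpy.linalg.cholesky, which returns the lower triangular factor L = U^T:

import numpy as np

def cholesky_U(A):
    # Step 1: row operations, each adding a multiple of a row to a LOWER row,
    # carry A to an upper triangular matrix with positive diagonal entries.
    U = np.array(A, dtype=float)
    n = U.shape[0]
    for j in range(n):
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]
    # Step 2: divide each row by the square root of its diagonal entry.
    for i in range(n):
        U[i] /= np.sqrt(U[i, i])
    return U

A = np.array([[10, 5, 2], [5, 3, 2], [2, 2, 3]], dtype=float)
U = cholesky_U(A)
print(np.allclose(U.T @ U, A))                  # True: A = U^T U
print(np.allclose(U, np.linalg.cholesky(A).T))  # True, by uniqueness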
Exercise 8.3.1 Find the Cholesky decomposition of each of the following matrices.

a. \begin{bmatrix} 4 & 3 \\ 3 & 5 \end{bmatrix}   b. \begin{bmatrix} 2 & −1 \\ −1 & 1 \end{bmatrix}   c. \begin{bmatrix} 12 & 4 & 3 \\ 4 & 2 & −1 \\ 3 & −1 & 7 \end{bmatrix}   d. \begin{bmatrix} 20 & 4 & 5 \\ 4 & 2 & 3 \\ 5 & 3 & 5 \end{bmatrix}

Exercise 8.3.2

a. If A is positive definite, show that A^k is positive definite for all k ≥ 1.

b. Prove the converse to (a) when k is odd.

c. Find a symmetric matrix A such that A^2 is positive definite but A is not.

Exercise 8.3.3 Let A = \begin{bmatrix} 1 & a \\ a & b \end{bmatrix}. If a^2 < b, show that A is positive definite and find the Cholesky factorization.

Exercise 8.3.4 If A and B are positive definite and r > 0, show that A + B and rA are both positive definite.

Exercise 8.3.5 If A and B are positive definite, show that \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix} is positive definite.

Exercise 8.3.6 If A is an n × n positive definite matrix and U is an n × m matrix of rank m, show that U^T AU is positive definite.

Exercise 8.3.7 If A is positive definite, show that each diagonal entry is positive.

Exercise 8.3.8 Let A_0 be formed from A by deleting rows 2 and 4 and deleting columns 2 and 4. If A is positive definite, show that A_0 is positive definite.

Exercise 8.3.9 If A is positive definite, show that A = CC^T where C has orthogonal columns.

Exercise 8.3.10 If A is positive definite, show that A = C^2 where C is positive definite.

Exercise 8.3.11 Let A be a positive definite matrix. If a is a real number, show that aA is positive definite if and only if a > 0.

Exercise 8.3.12

a. Suppose an invertible matrix A can be factored in M_nn as A = LDU where L is lower triangular with 1s on the diagonal, U is upper triangular with 1s on the diagonal, and D is diagonal with positive diagonal entries. Show that the factorization is unique: If A = L_1 D_1 U_1 is another such factorization, show that L_1 = L, D_1 = D, and U_1 = U.

b. Show that a matrix A is positive definite if and only if A is symmetric and admits a factorization A = LDU as in (a).

Exercise 8.3.13 Let A be positive definite and write d_r = det ^{(r)}A for each r = 1, 2, . . . , n. If U is the upper triangular matrix obtained in Step 1 of the algorithm, show that the diagonal elements u_{11}, u_{22}, . . . , u_{nn} of U are given by u_{11} = d_1, u_{jj} = d_j / d_{j−1} if j > 1. [Hint: If LA = U where L is lower triangular with 1s on the diagonal, use block multiplication to show that det ^{(r)}A = det ^{(r)}U for each r.]
8.4 QR-Factorization7
One of the main virtues of orthogonal matrices is that they can be easily inverted—the transpose is the
inverse. This fact, combined with the factorization theorem in this section, provides a useful way to
simplify many matrix calculations (for example, in least squares approximation).
7 This section is not used elsewhere in the book
The importance of the factorization lies in the fact that there are computer algorithms that accomplish it
with good control over round-off error, making it particularly useful in matrix calculations. The factoriza-
tion is a matrix version of the Gram-Schmidt process.
Suppose A = c1 c2 · · · cn is an m × n matrix with linearly independent columns c1 , c2 , . . . , cn .
The Gram-Schmidt algorithm can be applied to these columns to provide orthogonal columns f1 , f2 , . . . , fn
where f1 = c1 and
f_k = c_k − \frac{c_k · f_1}{\|f_1\|^2} f_1 − \frac{c_k · f_2}{\|f_2\|^2} f_2 − · · · − \frac{c_k · f_{k−1}}{\|f_{k−1}\|^2} f_{k−1}

for each k = 2, 3, . . . , n. Now write q_k = \frac{1}{\|f_k\|} f_k for each k. Then q_1, q_2, . . . , q_n are orthonormal columns,
and the above equation becomes
‖f_k‖ q_k = c_k − (c_k · q_1)q_1 − (c_k · q_2)q_2 − · · · − (c_k · q_{k−1})q_{k−1}
Using these equations, express each ck as a linear combination of the qi :
c_1 = ‖f_1‖ q_1
c_2 = (c_2 · q_1)q_1 + ‖f_2‖ q_2
c_3 = (c_3 · q_1)q_1 + (c_3 · q_2)q_2 + ‖f_3‖ q_3
⋮
c_n = (c_n · q_1)q_1 + (c_n · q_2)q_2 + (c_n · q_3)q_3 + · · · + ‖f_n‖ q_n
These equations have a matrix form that gives the required factorization:
A = \begin{bmatrix} c_1 & c_2 & c_3 & \cdots & c_n \end{bmatrix} = \begin{bmatrix} q_1 & q_2 & q_3 & \cdots & q_n \end{bmatrix} \begin{bmatrix} ‖f_1‖ & c_2 · q_1 & c_3 · q_1 & \cdots & c_n · q_1 \\ 0 & ‖f_2‖ & c_3 · q_2 & \cdots & c_n · q_2 \\ 0 & 0 & ‖f_3‖ & \cdots & c_n · q_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & ‖f_n‖ \end{bmatrix}   (8.5)
Here the first factor Q = \begin{bmatrix} q_1 & q_2 & q_3 & \cdots & q_n \end{bmatrix} has orthonormal columns, and the second factor is an n × n upper triangular matrix R with positive diagonal entries (and so is invertible). We record this in the following theorem.

Theorem 8.4.1: QR-Factorization
Every m × n matrix A with linearly independent columns has a QR-factorization A = QR where Q has orthonormal columns and R is an invertible upper triangular matrix with positive diagonal entries.
The matrices Q and R in Theorem 8.4.1 are uniquely determined by A; we return to this below.
Example 8.4.1
Find the QR-factorization of A = \begin{bmatrix} 1 & 1 & 0 \\ −1 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}.

Solution. Denote the columns of A as c_1, c_2, and c_3, and observe that {c_1, c_2, c_3} is independent. If we apply the Gram-Schmidt algorithm to these columns, the result is:

f_1 = c_1 = \begin{bmatrix} 1 \\ −1 \\ 0 \\ 0 \end{bmatrix}, f_2 = c_2 − \frac{1}{2} f_1 = \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \\ 1 \\ 0 \end{bmatrix}, and f_3 = c_3 + \frac{1}{2} f_1 − f_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}.

Write q_j = \frac{1}{\|f_j\|} f_j for each j, so {q_1, q_2, q_3} is orthonormal. Then equation (8.5) preceding Theorem 8.4.1 gives A = QR where

Q = \begin{bmatrix} q_1 & q_2 & q_3 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & 0 \\ \frac{−1}{\sqrt{2}} & \frac{1}{\sqrt{6}} & 0 \\ 0 & \frac{2}{\sqrt{6}} & 0 \\ 0 & 0 & 1 \end{bmatrix} = \frac{1}{\sqrt{6}} \begin{bmatrix} \sqrt{3} & 1 & 0 \\ −\sqrt{3} & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \sqrt{6} \end{bmatrix}

R = \begin{bmatrix} ‖f_1‖ & c_2 · q_1 & c_3 · q_1 \\ 0 & ‖f_2‖ & c_3 · q_2 \\ 0 & 0 & ‖f_3‖ \end{bmatrix} = \begin{bmatrix} \sqrt{2} & \frac{1}{\sqrt{2}} & \frac{−1}{\sqrt{2}} \\ 0 & \frac{\sqrt{3}}{\sqrt{2}} & \frac{\sqrt{3}}{\sqrt{2}} \\ 0 & 0 & 1 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 2 & 1 & −1 \\ 0 & \sqrt{3} & \sqrt{3} \\ 0 & 0 & \sqrt{2} \end{bmatrix}
If a matrix A has independent rows and we apply QR-factorization to AT , the result is:
Corollary 8.4.1
If A has independent rows, then A factors uniquely as A = LP where P has orthonormal rows and L
is an invertible lower triangular matrix with positive main diagonal entries.
Theorem 8.4.2
Every square, invertible matrix A has factorizations A = QR and A = LP where Q and P are
orthogonal, R is upper triangular with positive diagonal entries, and L is lower triangular with
positive diagonal entries.
Remark
In Section 5.6 we saw how to find a best approximation z to a solution of a (possibly inconsistent) system Ax = b of linear equations: take z to be any solution of the “normal” equations (A^T A)z = A^T b. If A has independent columns this z is unique (A^T A is invertible by Theorem 5.4.3), so it is often desirable to compute (A^T A)^{−1}. This is particularly useful in least squares approximation (Section 5.6), and it is greatly simplified if we have a QR-factorization of A (this is one of the main reasons for the importance of Theorem 8.4.1). For if A = QR is such a factorization, then Q^T Q = I_n because Q has orthonormal columns (verify), so we obtain

A^T A = R^T Q^T QR = R^T R

Hence computing (A^T A)^{−1} amounts to finding R^{−1}, and this is a routine matter because R is upper triangular. Thus the difficulty in computing (A^T A)^{−1} lies in obtaining the QR-factorization of A.
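To make the remark concrete, here is a brief computational sketch (ours; it assumes Python with NumPy). numpy.linalg.qr returns a QR-factorization—though, unlike the text, NumPy does not insist that R have positive diagonal entries, so its factors may differ from ours by signs (compare Exercise 8.4.4)—and the normal equations (A^T A)z = A^T b collapse to the triangular system Rz = Q^T b:

import numpy as np

A = np.array([[ 1.0, 1.0, 0.0],
              [-1.0, 0.0, 1.0],
              [ 0.0, 1.0, 1.0],
              [ 0.0, 0.0, 1.0]])     # the matrix of Example 8.4.1
b = np.array([1.0, 2.0, 0.0, 1.0])   # an arbitrary right-hand side

Q, R = np.linalg.qr(A)               # "reduced" factorization: Q is 4 x 3, R is 3 x 3
z = np.linalg.solve(R, Q.T @ b)      # best approximation to a solution of Ax = b
print(np.allclose(z, np.linalg.lstsq(A, b, rcond=None)[0]))  # True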
Theorem 8.4.3
Let A be an m × n matrix with independent columns. If A = QR and A = Q1 R1 are
QR-factorizations of A, then Q1 = Q and R1 = R.
Proof. Write Q = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix} and Q_1 = \begin{bmatrix} d_1 & d_2 & \cdots & d_n \end{bmatrix} in terms of their columns, and observe first that Q^T Q = I_n = Q_1^T Q_1 because Q and Q_1 have orthonormal columns. Hence it suffices to show that Q_1 = Q (then R_1 = Q_1^T A = Q^T A = R). Since Q_1^T Q_1 = I_n, the equation QR = Q_1 R_1 gives Q_1^T Q = R_1 R^{−1}; for convenience we write this matrix as

Q_1^T Q = R_1 R^{−1} = [t_{ij}]

This matrix is upper triangular with positive diagonal elements (since this is true for R and R_1), so t_{ii} > 0 for each i and t_{ij} = 0 if i > j. On the other hand, the (i, j)-entry of Q_1^T Q is d_i^T c_j = d_i · c_j, so we have d_i · c_j = t_{ij} for all i and j. But each c_j is in span{d_1, d_2, . . . , d_n} because Q = Q_1(R_1 R^{−1}). Hence the expansion theorem gives

c_1 = t_{11} d_1
c_2 = t_{12} d_1 + t_{22} d_2
c_3 = t_{13} d_1 + t_{23} d_2 + t_{33} d_3
c_4 = t_{14} d_1 + t_{24} d_2 + t_{34} d_3 + t_{44} d_4
⋮

The first of these equations gives 1 = ‖c_1‖ = ‖t_{11} d_1‖ = |t_{11}| ‖d_1‖ = t_{11}, whence c_1 = d_1. But then we have t_{12} = d_1 · c_2 = c_1 · c_2 = 0, so the second equation becomes c_2 = t_{22} d_2. Now a similar argument gives c_2 = d_2, and then t_{13} = 0 and t_{23} = 0 follow in the same way. Hence c_3 = t_{33} d_3 and c_3 = d_3. Continue in this way to get c_i = d_i for all i. This means that Q_1 = Q, which is what we wanted.
Exercise 8.4.1 In each case find the QR-factorization of A.

a. A = \begin{bmatrix} 1 & −1 \\ −1 & 0 \end{bmatrix}   b. A = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}   c. A = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}   d. A = \begin{bmatrix} 1 & 1 & 0 \\ −1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & −1 & 0 \end{bmatrix}

Exercise 8.4.2 Let A and B denote matrices.

a. If A and B have independent columns, show that AB has independent columns. [Hint: Theorem 5.4.3.]

b. Show that A has a QR-factorization if and only if A has independent columns.

c. If AB has a QR-factorization, show that the same is true of B but not necessarily A. [Hint: Consider AA^T where A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}.]

Exercise 8.4.3 If R is upper triangular and invertible, show that there exists a diagonal matrix D with diagonal entries ±1 such that R_1 = DR is invertible, upper triangular, and has positive diagonal entries.

Exercise 8.4.4 If A has independent columns, let A = QR where Q has orthonormal columns and R is invertible and upper triangular. [Some authors call this a QR-factorization of A.] Show that there is a diagonal matrix D with diagonal entries ±1 such that A = (QD)(DR) is the QR-factorization of A. [Hint: Preceding exercise.]
8.5 Computing Eigenvalues

In practice, the problem of finding eigenvalues of a matrix is virtually never solved by finding the roots
of the characteristic polynomial. This is difficult for large matrices and iterative methods are much better.
Two such methods are described briefly in this section.
In Chapter 3 our initial rationale for diagonalizing matrices was to be able to compute the powers of a
square matrix, and the eigenvalues were needed to do this. In this section, we are interested in efficiently
computing eigenvalues, and it may come as no surprise that the first method we discuss uses the powers
of a matrix.
Recall that an eigenvalue λ of an n × n matrix A is called a dominant eigenvalue if λ has multiplicity 1, and

|λ| > |µ| for all eigenvalues µ ≠ λ
Any corresponding eigenvector is called a dominant eigenvector of A. When such an eigenvalue exists, one technique for finding a dominant eigenvector is as follows: Let x_0 in R^n be a first approximation to a dominant eigenvector, and compute successive approximations x_1, x_2, . . . as follows:

x_1 = Ax_0, x_2 = Ax_1, x_3 = Ax_2, . . .

In general, we define

x_{k+1} = Ax_k for each k ≥ 0

If the first estimate x_0 is good enough, these vectors x_k will approximate a dominant eigenvector (see below). This technique is called the power method (because x_k = A^k x_0 for each k ≥ 1). Observe that if z is any eigenvector corresponding to λ, then

\frac{z · (Az)}{\|z\|^2} = \frac{z · (λz)}{\|z\|^2} = λ
Because the vectors x_1, x_2, . . . , x_n, . . . approximate dominant eigenvectors, this suggests that we define the Rayleigh quotients as follows:

r_k = \frac{x_k · x_{k+1}}{\|x_k\|^2} for k ≥ 1

Then the numbers r_k approximate the dominant eigenvalue λ.
Example 8.5.1
Use the power method to approximate a dominant eigenvector and eigenvalue of A = \begin{bmatrix} 1 & 1 \\ 2 & 0 \end{bmatrix}.

Solution. The eigenvalues of A are 2 and −1, with eigenvectors \begin{bmatrix} 1 \\ 1 \end{bmatrix} and \begin{bmatrix} 1 \\ −2 \end{bmatrix}. Take x_0 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} as the first approximation and compute x_1, x_2, . . . , successively, from x_1 = Ax_0, x_2 = Ax_1, . . . . The result is

x_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}, x_2 = \begin{bmatrix} 3 \\ 2 \end{bmatrix}, x_3 = \begin{bmatrix} 5 \\ 6 \end{bmatrix}, x_4 = \begin{bmatrix} 11 \\ 10 \end{bmatrix}, x_5 = \begin{bmatrix} 21 \\ 22 \end{bmatrix}, . . .

These vectors are approaching scalar multiples of the dominant eigenvector \begin{bmatrix} 1 \\ 1 \end{bmatrix}. Moreover, the Rayleigh quotients are

r_1 = \frac{7}{5}, r_2 = \frac{27}{13}, r_3 = \frac{115}{61}, r_4 = \frac{451}{221}, . . .

and these are approaching the dominant eigenvalue 2.
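The computation in Example 8.5.1 is easily automated; the following sketch (ours, in Python with NumPy) iterates x_{k+1} = Ax_k and prints the Rayleigh quotients, which approach 2:

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 0.0]])
x = np.array([1.0, 0.0])            # x_0 from Example 8.5.1

for k in range(8):
    x_next = A @ x                  # x_{k+1} = A x_k
    r = (x @ x_next) / (x @ x)      # Rayleigh quotient x_k . x_{k+1} / ||x_k||^2
    print(x_next, r)
    x = x_next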
To see why the power method works, let λ1 , λ2 , . . . , λm be eigenvalues of A with λ1 dominant and
let y1 , y2 , . . . , ym be corresponding eigenvectors. What is required is that the first approximation x0 be a
linear combination of these eigenvectors:
x0 = a1 y1 + a2 y2 + · · · + am ym with a1 6= 0
Hence

\frac{1}{λ_1^k} x_k = a_1 y_1 + a_2 \left(\frac{λ_2}{λ_1}\right)^k y_2 + · · · + a_m \left(\frac{λ_m}{λ_1}\right)^k y_m

The right side approaches a_1 y_1 as k increases because λ_1 is dominant \left(\left|\frac{λ_i}{λ_1}\right| < 1 for each i > 1\right). Because a_1 ≠ 0, this means that x_k approximates the dominant eigenvector a_1 λ_1^k y_1.
The power method requires that the first approximation x_0 be a linear combination of eigenvectors. (In Example 8.5.1 the eigenvectors form a basis of R^2.) But even in this case the method fails if a_1 = 0, where a_1 is the coefficient of the dominant eigenvector (try x_0 = \begin{bmatrix} −1 \\ 2 \end{bmatrix} in Example 8.5.1). In general, the rate of convergence is quite slow if any of the ratios \left|\frac{λ_i}{λ_1}\right| is near 1. Also, because the method requires repeated multiplications by A, it is not recommended unless these multiplications are easy to carry out (for example, if most of the entries of A are zero).
QR-Algorithm
A much better method for approximating the eigenvalues of an invertible matrix A depends on the factor-
ization (using the Gram-Schmidt algorithm) of A in the form
A = QR
where Q is orthogonal and R is invertible and upper triangular (see Theorem 8.4.2). The QR-algorithm uses this repeatedly to create a sequence of matrices A_1 = A, A_2, A_3, . . . , as follows: A_1 = A is factored as A_1 = Q_1 R_1 and we define A_2 = R_1 Q_1; then A_2 is factored as A_2 = Q_2 R_2 and we define A_3 = R_2 Q_2; and so on. In general, A_k is factored as A_k = Q_k R_k and we define A_{k+1} = R_k Q_k. Then A_{k+1} is similar to A_k [in fact, A_{k+1} = R_k Q_k = (Q_k^{−1} A_k)Q_k], and hence each A_k has the same eigenvalues as A. If the eigenvalues of A are real and have distinct absolute values, the remarkable thing is that the sequence of matrices A_1, A_2, A_3, . . . converges to an upper triangular matrix with these eigenvalues on the main diagonal. [See below for the case of complex eigenvalues.]
Example 8.5.2
If A = \begin{bmatrix} 1 & 1 \\ 2 & 0 \end{bmatrix} as in Example 8.5.1, use the QR-algorithm to approximate the eigenvalues.

Solution. The successive matrices are computed as follows:

A_1 = A = Q_1 R_1 where Q_1 = \frac{1}{\sqrt{5}} \begin{bmatrix} 1 & 2 \\ 2 & −1 \end{bmatrix} and R_1 = \frac{1}{\sqrt{5}} \begin{bmatrix} 5 & 1 \\ 0 & 2 \end{bmatrix}

A_2 = R_1 Q_1 = \frac{1}{5} \begin{bmatrix} 7 & 9 \\ 4 & −2 \end{bmatrix} = Q_2 R_2 where Q_2 = \frac{1}{\sqrt{65}} \begin{bmatrix} 7 & 4 \\ 4 & −7 \end{bmatrix} and R_2 = \frac{1}{\sqrt{65}} \begin{bmatrix} 13 & 11 \\ 0 & 10 \end{bmatrix}

A_3 = R_2 Q_2 = \frac{1}{13} \begin{bmatrix} 27 & −5 \\ 8 & −14 \end{bmatrix} = \begin{bmatrix} 2.08 & −0.38 \\ 0.62 & −1.08 \end{bmatrix}

This is converging to \begin{bmatrix} 2 & ∗ \\ 0 & −1 \end{bmatrix} and so is approximating the eigenvalues 2 and −1 on the main diagonal.
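The iteration is equally simple to program. A sketch (ours, in Python with NumPy; numpy.linalg.qr supplies each factorization A_k = Q_k R_k):

import numpy as np

A = np.array([[1.0, 1.0],
              [2.0, 0.0]])

Ak = A.copy()
for k in range(10):
    Q, R = np.linalg.qr(Ak)   # factor A_k = Q_k R_k ...
    Ak = R @ Q                # ... and define A_{k+1} = R_k Q_k
print(np.round(Ak, 4))        # nearly upper triangular, diagonal close to 2 and -1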
It is beyond the scope of this book to pursue a detailed discussion of these methods. The reader is referred to J. H. Wilkinson, The Algebraic Eigenvalue Problem (Oxford, England: Oxford University Press, 1965) or G. W. Stewart, Introduction to Matrix Computations (New York: Academic Press, 1973).
We conclude with some remarks on the QR-algorithm.
Shifting. Convergence is accelerated if, at stage k of the algorithm, a number s_k is chosen and A_k − s_k I is factored in the form Q_k R_k rather than A_k itself. Then

Q_k^{−1} A_k Q_k = Q_k^{−1} (Q_k R_k + s_k I) Q_k = R_k Q_k + s_k I

so we take A_{k+1} = R_k Q_k + s_k I. If the shifts s_k are carefully chosen, convergence can be greatly improved.
Preliminary Preparation. A matrix such as

\begin{bmatrix} ∗ & ∗ & ∗ & ∗ & ∗ \\ ∗ & ∗ & ∗ & ∗ & ∗ \\ 0 & ∗ & ∗ & ∗ & ∗ \\ 0 & 0 & ∗ & ∗ & ∗ \\ 0 & 0 & 0 & ∗ & ∗ \end{bmatrix}
is said to be in upper Hessenberg form, and the QR-factorizations of such matrices are greatly simplified.
Given an n × n matrix A, a series of orthogonal matrices H_1, H_2, . . . , H_m (called Householder matrices) can be easily constructed such that

B = H_m^T · · · H_1^T A H_1 · · · H_m

is in upper Hessenberg form. Then the QR-algorithm can be efficiently applied to B and, because B is similar to A, it produces the eigenvalues of A.
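Standard libraries provide this preparation directly. For instance (a sketch assuming SciPy is available), scipy.linalg.hessenberg returns B together with the orthogonal similarity:

import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

B, Q = hessenberg(A, calc_q=True)    # B = Q^T A Q is upper Hessenberg
print(np.allclose(Q @ B @ Q.T, A))   # True
print(np.round(B, 2))                # zeros below the first subdiagonal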
Complex Eigenvalues. If some of the eigenvalues of a real matrix A are not real, the QR-algorithm con-
verges to a block upper triangular matrix where the diagonal blocks are either 1 × 1 (the real eigenvalues)
or 2 × 2 (each providing a pair of conjugate complex eigenvalues of A).
Exercise 8.5.1 In each case, find the exact eigenvalues and determine corresponding eigenvectors. Then start with x_0 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} and compute x_4 and r_3 using the power method.

a. A = \begin{bmatrix} 2 & −4 \\ −3 & 3 \end{bmatrix}   b. A = \begin{bmatrix} 5 & 2 \\ −3 & −2 \end{bmatrix}   c. A = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}   d. A = \begin{bmatrix} 3 & 1 \\ 1 & 0 \end{bmatrix}

Exercise 8.5.2 In each case, find the exact eigenvalues and then approximate them using the QR-algorithm.

a. A = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}   b. A = \begin{bmatrix} 3 & 1 \\ 1 & 0 \end{bmatrix}

Exercise 8.5.4 If A is symmetric, show that each matrix A_k in the QR-algorithm is also symmetric. Deduce that they converge to a diagonal matrix.

Exercise 8.5.5 Apply the QR-algorithm to A = \begin{bmatrix} 2 & −3 \\ 1 & −2 \end{bmatrix}. Explain.

Exercise 8.5.6 Given a matrix A, let A_k, Q_k, and R_k, k ≥ 1, be the matrices constructed in the QR-algorithm. Show that A^k = (Q_1 Q_2 · · · Q_k)(R_k · · · R_2 R_1) for each k ≥ 1 and hence that this is a QR-factorization of A^k. [Hint: Show that Q_k R_k = R_{k−1} Q_{k−1} for each k ≥ 2, and use this equality to compute (Q_1 Q_2 · · · Q_k)(R_k · · · R_2 R_1) “from the centre out.” Use the fact that (AB)^{n+1} = A(BA)^n B for any square matrices A and B.]
8.6 The Singular Value Decomposition

When working with a square matrix A it is clearly useful to be able to “diagonalize” A, that is to find a factorization A = Q^{−1}DQ where Q is invertible and D is diagonal. Unfortunately such a factorization may not exist for A. However, even if A is not square, gaussian elimination provides a factorization of the form A = PDQ where P and Q are invertible and D is diagonal—the Smith Normal form (Theorem 2.5.3). However, if A is real we can choose P and Q to be orthogonal real matrices and D to be real. Such a factorization is called a singular value decomposition (SVD) for A, one of the most useful tools in applied linear algebra. In this section we show how to explicitly compute an SVD for any real matrix A, and illustrate some of its many applications.
We need a fact about two subspaces associated with an m × n matrix A:

im A = {Ax | x in R^n} and col A = span{a | a is a column of A}

Then im A is called the image of A (so named because of the linear transformation R^n → R^m with x ↦ Ax); and col A is called the column space of A (Definition 5.10). Surprisingly, these spaces are equal:
Lemma 8.6.1
For any m × n matrix A, im A = col A.
Proof. Let A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} in terms of its columns. Let x ∈ im A, say x = Ay, y in R^n. If y = \begin{bmatrix} y_1 & y_2 & \cdots & y_n \end{bmatrix}^T, then Ay = y_1 a_1 + y_2 a_2 + · · · + y_n a_n ∈ col A by Definition 2.5. This shows that im A ⊆ col A. For the other inclusion, each a_k = Ae_k where e_k is column k of I_n, so each a_k ∈ im A.
We know a lot about any real symmetric matrix: Its eigenvalues are real (Theorem 5.5.7), and it is orthog-
onally diagonalizable by the Principal Axes Theorem (Theorem 8.2.2). So for any real matrix A (square
or not), the fact that both AT A and AAT are real and symmetric suggests that we can learn a lot about A by
studying them. This section shows just how true this is.
The following Lemma reveals some similarities between AT A and AAT which simplify the statement
and the proof of the SVD we are constructing.
Lemma 8.6.2
Let A be a real m × n matrix. Then:

1. The eigenvalues of A^T A and AA^T are real and nonnegative.

2. A^T A and AA^T have the same set of positive eigenvalues.

Proof.
1. The eigenvalues of the symmetric matrix A^T A are real by Theorem 5.5.7. If (A^T A)q = λq where q ≠ 0, then

λ ‖q‖^2 = q^T (A^T A)q = (Aq)^T (Aq) = ‖Aq‖^2 ≥ 0

so λ ≥ 0. Then (1.) follows for A^T A, and the case AA^T follows by replacing A by A^T.

2. Write N(B) for the set of positive eigenvalues of a matrix B. We must show that N(A^T A) = N(AA^T). If λ ∈ N(A^T A) with eigenvector 0 ≠ q ∈ R^n, then Aq ∈ R^m and

(AA^T)(Aq) = A[(A^T A)q] = A(λq) = λ(Aq)

Moreover Aq ≠ 0 because ‖Aq‖^2 = λ ‖q‖^2 > 0. Hence λ ∈ N(AA^T), so N(A^T A) ⊆ N(AA^T); the other inclusion follows by replacing A with A^T.
To analyze an m × n matrix A we have two symmetric matrices to work with: AT A and AAT . In view
of Lemma 8.6.2, we choose AT A (sometimes called the Gram matrix of A), and derive a series of facts
which we will need. This narrative is a bit long, but trust that it will be worth the effort. We parse it out in
several steps:
1. The n × n matrix A^T A is real and symmetric so, by the Principal Axes Theorem 8.2.2, let {q_1, q_2, . . . , q_n} ⊆ R^n be an orthonormal basis of eigenvectors of A^T A, with corresponding eigenvalues λ_1, λ_2, . . . , λ_n. By Lemma 8.6.2(1), λ_i is real for each i and λ_i ≥ 0. By re-ordering the q_i we may (and do) assume that

λ_1 ≥ λ_2 ≥ · · · ≥ λ_r > 0 and λ_i = 0 if i > r^8   (i)
2. Even though the λ_i are the eigenvalues of A^T A, the number r in (i) turns out to be rank A. To understand why, consider the vectors Aq_i ∈ im A. For all i, j:

(Aq_i) · (Aq_j) = (Aq_i)^T (Aq_j) = q_i^T (A^T A)q_j = q_i^T (λ_j q_j) = λ_j (q_i · q_j)   (ii)

Taking i = j gives ‖Aq_i‖^2 = λ_i for each i (iii), so Aq_i = 0 if and only if i > r (iv). Moreover, (ii) shows that {Aq_1, Aq_2, . . . , Aq_r} is an orthogonal, hence linearly independent, set (v).

With this write U = span{Aq_1, Aq_2, . . . , Aq_r} ⊆ im A; we claim that U = im A, that is im A ⊆ U. For this we must show that Ax ∈ U for each x ∈ R^n. Since {q_1, . . . , q_r, . . . , q_n} is a basis of R^n (it is orthonormal), we can write x = t_1 q_1 + · · · + t_r q_r + · · · + t_n q_n where each t_j ∈ R. Then, using (iv) we obtain

Ax = t_1 Aq_1 + · · · + t_r Aq_r + · · · + t_n Aq_n = t_1 Aq_1 + · · · + t_r Aq_r ∈ U

Hence {Aq_1, . . . , Aq_r} is an orthogonal basis of im A. But col A = im A by Lemma 8.6.1, and rank A = dim(col A) by Theorem 5.4.1, so

rank A = dim(col A) = dim(im A) = r   (vi)
Definition 8.7
The real numbers σ_i = √λ_i = ‖Aq_i‖ (by (iii)), for i = 1, 2, . . . , n, are called the singular values of the matrix A.
With (vi) this makes the following definitions depend only upon A.
^8 Of course they could all be positive (r = n) or all zero (so A^T A = 0, and hence A = 0 by Exercise 5.3.9).
Definition 8.8
Let A be a real, m × n matrix of rank r, with positive singular values σ1 ≥ σ2 ≥ · · · ≥ σr > 0 and
σi = 0 if i > r. Define:
D_A = diag(σ_1, . . . , σ_r) and Σ_A = \begin{bmatrix} D_A & 0 \\ 0 & 0 \end{bmatrix}_{m×n}
Here ΣA is in block form and is called the singular matrix of A.
The singular values σi and the matrices DA and ΣA will be referred to frequently below.
4. Returning to our narrative, normalize the vectors Aq_1, Aq_2, . . . , Aq_r, by defining

p_i = \frac{1}{\|Aq_i\|} Aq_i ∈ R^m for each i = 1, 2, . . . , r   (viii)

By (ii) the p_i are orthonormal; if r < m, extend {p_1, . . . , p_r} to an orthonormal basis {p_1, . . . , p_m} of R^m and write P = \begin{bmatrix} p_1 & \cdots & p_m \end{bmatrix}, an orthogonal m × m matrix. Since σ_i p_i = ‖Aq_i‖ p_i = Aq_i for each i ≤ r, we compute:

PΣ_A = \begin{bmatrix} p_1 & \cdots & p_r & p_{r+1} & \cdots & p_m \end{bmatrix} \begin{bmatrix} σ_1 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & & \vdots \\ 0 & \cdots & σ_r & 0 & \cdots & 0 \\ 0 & \cdots & 0 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & 0 & \cdots & 0 \end{bmatrix} = \begin{bmatrix} σ_1 p_1 & \cdots & σ_r p_r & 0 & \cdots & 0 \end{bmatrix} = AQ   (xii)
Theorem 8.6.1
Let A be a real m × n matrix, and let σ_1 ≥ σ_2 ≥ · · · ≥ σ_r > 0 be the positive singular values of A. Then r is the rank of A and we have the factorization

A = P Σ_A Q^T where P and Q are orthogonal matrices
The factorization A = PΣA QT in Theorem 8.6.1, where P and Q are orthogonal matrices, is called a
Singular Value Decomposition (SVD) of A. This decomposition is not unique. For example if r < m then
the vectors pr+1 , . . . , pm can be any extension of {p1 , . . ., pr } to an orthonormal basis of Rm , and each
will lead to a different matrix P in the decomposition. For a more dramatic example, if A = In then ΣA = In ,
and A = PΣ_A P^T is an SVD of A for any orthogonal n × n matrix P.
Example 8.6.1
Find a singular value decomposition for A = \begin{bmatrix} 1 & 0 & 1 \\ −1 & 1 & 0 \end{bmatrix}.

Solution. We have A^T A = \begin{bmatrix} 2 & −1 & 1 \\ −1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}, so the characteristic polynomial is

c_{A^T A}(x) = det \begin{bmatrix} x−2 & 1 & −1 \\ 1 & x−1 & 0 \\ −1 & 0 & x−1 \end{bmatrix} = (x − 3)(x − 1)x

Hence the eigenvalues are λ_1 = 3, λ_2 = 1, and λ_3 = 0, with orthonormal eigenvectors

q_1 = \frac{1}{\sqrt{6}} \begin{bmatrix} 2 \\ −1 \\ 1 \end{bmatrix}, q_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, q_3 = \frac{1}{\sqrt{3}} \begin{bmatrix} −1 \\ −1 \\ 1 \end{bmatrix}

Thus A has rank 2, the positive singular values are σ_1 = √3 and σ_2 = 1, and Σ_A = \begin{bmatrix} \sqrt{3} & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}. Next, p_1 = \frac{1}{σ_1} Aq_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ −1 \end{bmatrix} and p_2 = Aq_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}. In this case, {p_1, p_2} is already a basis of R^2 (so the Gram-Schmidt algorithm is not needed), and we have the 2 × 2 orthogonal matrix

P = \begin{bmatrix} p_1 & p_2 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ −1 & 1 \end{bmatrix}

Finally (by Theorem 8.6.1) the singular value decomposition for A is

A = PΣ_A Q^T = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ −1 & 1 \end{bmatrix} \begin{bmatrix} \sqrt{3} & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \frac{1}{\sqrt{6}} \begin{bmatrix} 2 & −1 & 1 \\ 0 & \sqrt{3} & \sqrt{3} \\ −\sqrt{2} & −\sqrt{2} & \sqrt{2} \end{bmatrix}
Of course this can be confirmed by direct matrix multiplication.
Thus, computing an SVD for a real matrix A is a routine matter, and we now describe a systematic
procedure for doing so.
SVD Algorithm
Given a real m × n matrix A, find an SVD A = PΣA QT as follows:
1. Use the Diagonalization Algorithm (see page 181) to find the (real and non-negative) eigenvalues λ_1, λ_2, . . . , λ_n of A^T A with corresponding (orthonormal) eigenvectors q_1, q_2, . . . , q_n. Reorder the q_i (if necessary) to ensure that the nonzero eigenvalues are λ_1 ≥ λ_2 ≥ · · · ≥ λ_r > 0 and λ_i = 0 if i > r.

2. The singular values are σ_i = √λ_i for each i; form D_A = diag(σ_1, . . . , σ_r) and the m × n matrix Σ_A = \begin{bmatrix} D_A & 0 \\ 0 & 0 \end{bmatrix}.

3. Compute p_i = \frac{1}{σ_i} Aq_i for i = 1, . . . , r and, if r < m, extend {p_1, . . . , p_r} to an orthonormal basis {p_1, . . . , p_m} of R^m.

4. Set P = \begin{bmatrix} p_1 & \cdots & p_m \end{bmatrix} and Q = \begin{bmatrix} q_1 & \cdots & q_n \end{bmatrix}; then A = PΣ_A Q^T is an SVD for A.
In practice the singular values σ_i, the matrices P and Q, and even the rank of an m × n matrix are not calculated this way. There are sophisticated numerical algorithms for calculating them to a high degree of accuracy. The reader is referred to books on numerical linear algebra.
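For instance, numpy.linalg.svd is one such implementation; the following sketch (ours, assuming Python with NumPy) checks it against Example 8.6.1:

import numpy as np

A = np.array([[ 1.0, 0.0, 1.0],
              [-1.0, 1.0, 0.0]])      # the matrix of Example 8.6.1

U, s, Vt = np.linalg.svd(A)           # singular values s = [sqrt(3), 1]
print(s)

Sigma = np.zeros(A.shape)             # assemble the 2 x 3 matrix Sigma_A
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A)) # True: A = P Sigma_A Q^T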
So the main virtue of Theorem 8.6.1 is that it provides a way of constructing an SVD for every real
matrix A. In particular it shows that every real matrix A has a singular value decomposition9 in the
following, more general, sense:
Definition 8.9
A Singular Value Decomposition (SVD) of an m × n matrix A of rank r is a factorization A = UΣV^T where U and V are orthogonal and Σ = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix}_{m×n} in block form, where D = diag(d_1, d_2, . . . , d_r) with each d_i > 0, and r ≤ m and r ≤ n.
Note that for any SVD A = U ΣV T we immediately obtain some information about A:
Lemma 8.6.3
If A = UΣV^T is any SVD for A as in Definition 8.9, then:

1. r = rank A.

2. The numbers d_1, d_2, . . . , d_r are the singular values of A in some order.

Proof. We have A^T A = (UΣV^T)^T (UΣV^T) = V(Σ^T Σ)V^T, so Σ^T Σ and A^T A are similar n × n matrices (Definition 5.11). Hence r = rank A by Corollary 5.4.3, proving (1.). Furthermore, Σ^T Σ and A^T A have the same eigenvalues by Theorem 5.5.1; that is (using (1.)):

{d_1^2, d_2^2, . . . , d_r^2} = {σ_1^2, σ_2^2, . . . , σ_r^2}

as multisets, proving (2.).
We note in passing that more is true. Let A be m × n of rank r, and let A = U ΣV T be any SVD for A.
Using the proof of Lemma 8.6.3 we have d_i = σ_{iτ} for some permutation τ of {1, 2, . . . , r}. In fact, it can
be shown that there exist orthogonal matrices U1 and V1 obtained from U and V by τ -permuting columns
and rows respectively, such that A = U1 ΣAV1T is an SVD of A.
9 In fact every complex matrix has an SVD [J.T. Scheick, Linear Algebra with Applications, McGraw-Hill, 1997]
The Fundamental Subspaces

It turns out that any singular value decomposition contains a great deal of information about an m ×
n matrix A and the subspaces associated with A. For example, in addition to Lemma 8.6.3, the set
{p1 , p2 , . . . , pr } of vectors constructed in the proof of Theorem 8.6.1 is an orthonormal basis of col A
(by (v) and (viii) in the proof). There are more such examples, which is the thrust of this subsection.
In particular, there are four subspaces associated to a real m × n matrix A that have come to be called
fundamental:
Definition 8.10
The fundamental subspaces of an m × n matrix A are:

row A = span{x | x is a row of A}
col A = span{x | x is a column of A}
null A = {x ∈ R^n | Ax = 0}
null A^T = {x ∈ R^m | A^T x = 0}
If A = UΣV^T is any SVD for the real m × n matrix A, the columns of U and V provide orthonormal bases for each of these fundamental subspaces. We are going to prove this, but first we need three properties related to the orthogonal complement U^⊥ of a subspace U of R^n, where (Definition 8.1):

U^⊥ = {x ∈ R^n | u · x = 0 for all u ∈ U}
The orthogonal complement plays an important role in the Projection Theorem (Theorem 8.1.3), and we
return to it in Section 10.2. For now we need:
Lemma 8.6.4
If A is any matrix then:

1. null A = (row A)^⊥ and null A^T = (col A)^⊥.

2. If U is any subspace of R^n, then U^{⊥⊥} = U.

3. Let {f_1, . . . , f_m} be an orthonormal basis of R^m. If U = span{f_1, . . . , f_k}, then

U^⊥ = span{f_{k+1}, . . . , f_m}
Proof.
1. If x is in R^n, then Ax = 0 if and only if r · x = 0 for every row r of A, that is, if and only if x is orthogonal to every row of A. Hence null A = (row A)^⊥. Now replace A by A^T to get null A^T = (row A^T)^⊥ = (col A)^⊥, which is the other identity in (1).

3. We have span{f_{k+1}, . . . , f_m} ⊆ U^⊥ because {f_1, . . . , f_m} is orthogonal. For the other inclusion, let x ∈ U^⊥ so f_i · x = 0 for i = 1, 2, . . . , k. By the Expansion Theorem 5.3.6:

x = (f_1 · x)f_1 + · · · + (f_m · x)f_m = (f_{k+1} · x)f_{k+1} + · · · + (f_m · x)f_m

so x ∈ span{f_{k+1}, . . . , f_m}.
With this we can see how any SVD for a matrix A provides orthonormal bases for each of the four
fundamental subspaces of A.
Theorem 8.6.2
Let A be an m × n real matrix, let A = UΣV^T be any SVD for A where U and V are orthogonal of size m × m and n × n respectively, and let

Σ = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix}_{m×n} where D = diag(λ_1, λ_2, . . . , λ_r), with each λ_i > 0

Write U = \begin{bmatrix} u_1 & \cdots & u_r & \cdots & u_m \end{bmatrix} and V = \begin{bmatrix} v_1 & \cdots & v_r & \cdots & v_n \end{bmatrix}, so {u_1, . . . , u_r, . . . , u_m} and {v_1, . . . , v_r, . . . , v_n} are orthonormal bases of R^m and R^n respectively. Then

1. r = rank A, and the singular values of A are λ_1, λ_2, . . . , λ_r.

2. a. {u_1, . . . , u_r} is an orthonormal basis of col A.
   b. {u_{r+1}, . . . , u_m} is an orthonormal basis of null A^T.
   c. {v_{r+1}, . . . , v_n} is an orthonormal basis of null A.
Proof. Item (1.) is Lemma 8.6.3.

a. Since AV = UΣ, comparing columns gives Av_j = λ_j u_j for 1 ≤ j ≤ r and Av_j = 0 for j > r. Hence each u_j = \frac{1}{λ_j} Av_j lies in col A when j ≤ r, and every vector Ax is a linear combination of Av_1, . . . , Av_n and so lies in span{u_1, . . . , u_r}. Thus {u_1, . . . , u_r} is an orthonormal basis of col A, proving (a.).

b. We have (col A)^⊥ = (span{u_1, . . . , u_r})^⊥ = span{u_{r+1}, . . . , u_m} by Lemma 8.6.4(3). This proves (b.) because (col A)^⊥ = null A^T by Lemma 8.6.4(1).

c. We have dim(null A) + dim(im A) = n by the Dimension Theorem 7.2.4, applied to T : R^n → R^m where T(x) = Ax. Since also im A = col A by Lemma 8.6.1, we obtain

dim(null A) = n − dim(col A) = n − r

So to prove (c.) it is enough to show that v_j ∈ null A whenever j > r, for then the orthonormal set {v_{r+1}, . . . , v_n} has n − r vectors and so is a basis of null A. But if j > r, comparing columns in AV = UΣ as in (a.) gives Av_j = 0, as required.
Example 8.6.2
Consider the homogeneous linear system
Ax = 0 of m equations in n variables
Then the set of all solutions is null A. Hence if A = U ΣV T is any SVD for A then (in the notation
of Theorem 8.6.2) {vr+1 , . . . , vn } is an orthonormal basis of the set of solutions for the system. As
such they are a set of basic solutions for the system, the most basic notion in Chapter 1.
The Polar Decomposition of a Real Square Matrix

If A is real and n × n, the factorization in the title is related to the polar decomposition of A. Unlike the SVD, in this case the decomposition is uniquely determined by A.
Recall (Section 8.3) that a symmetric matrix A is called positive definite if and only if x^T Ax > 0 for every column x ≠ 0 in R^n. Before proceeding, we must explore the following weaker notion:
Definition 8.11
A real n × n matrix G is called positive^{10} if it is symmetric and

x^T Gx ≥ 0 for all x ∈ R^n

Clearly every positive definite matrix is positive, but the converse fails. Indeed, A = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} is positive because, if x = \begin{bmatrix} a & b \end{bmatrix}^T in R^2, then x^T Ax = (a + b)^2 ≥ 0. But y^T Ay = 0 if y = \begin{bmatrix} 1 & −1 \end{bmatrix}^T, so A is not positive definite.
Lemma 8.6.5
Let G denote an n × n positive matrix.
Proof.
Definition 8.12
If A is a real n × n matrix, a factorization

A = GQ where G is positive and Q is orthogonal

is called a polar decomposition (or polar form) for A.

Any SVD for a real square matrix A yields a polar form for A.
Theorem 8.6.3
Every square real matrix has a polar form.
Proof. Let A = UΣV^T be an SVD for A with Σ as in Definition 8.9 and m = n. Since U^T U = I_n here we have

A = UΣV^T = (UΣ)(U^T U)V^T = (UΣU^T)(UV^T)

So if we write G = UΣU^T and Q = UV^T, then Q is orthogonal, and it remains to show that G is positive. But this follows from Lemma 8.6.5.
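The proof is itself an algorithm; here is a direct transcription (ours, in Python with NumPy; the function name polar_form is hypothetical):

import numpy as np

def polar_form(A):
    # Following the proof of Theorem 8.6.3: G = U Sigma U^T and Q = U V^T.
    U, s, Vt = np.linalg.svd(A)
    G = U @ np.diag(s) @ U.T      # symmetric with x^T G x >= 0 (positive)
    Q = U @ Vt                    # orthogonal
    return G, Q

A = np.array([[1.0, 1.0], [2.0, 0.0]])
G, Q = polar_form(A)
print(np.allclose(G @ Q, A))             # True: A = GQ
print(np.allclose(Q @ Q.T, np.eye(2)))   # True: Q is orthogonal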
The SVD for a square matrix A is not unique (In = PIn PT for any orthogonal matrix P). But given the
proof of Theorem 8.6.3 it is surprising that the polar decomposition is unique.11 We omit the proof.
The name “polar form” is reminiscent of the same form for complex numbers (see Appendix A). This is no coincidence. To see why, we represent the complex numbers as real 2 × 2 matrices. Write M_2(R) for the set of all real 2 × 2 matrices, and define

σ : C → M_2(R) by σ(a + bi) = \begin{bmatrix} a & −b \\ b & a \end{bmatrix} for all a + bi in C
One verifies that σ preserves addition and multiplication in the sense that

σ(z + w) = σ(z) + σ(w) and σ(zw) = σ(z)σ(w)

for all complex numbers z and w. Since σ is one-to-one we may identify each complex number a + bi with the matrix σ(a + bi), that is we write

a + bi = \begin{bmatrix} a & −b \\ b & a \end{bmatrix} for all a + bi in C
Thus 0 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, 1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I_2, i = \begin{bmatrix} 0 & −1 \\ 1 & 0 \end{bmatrix}, and r = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} if r is real.
If z = a + bi is nonzero then the absolute value r = |z| = √(a^2 + b^2) ≠ 0. If θ is the angle of z in standard position, then cos θ = a/r and sin θ = b/r. Observe:

\begin{bmatrix} a & −b \\ b & a \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} \begin{bmatrix} a/r & −b/r \\ b/r & a/r \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} \begin{bmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{bmatrix} = GQ   (xiii)
where G = \begin{bmatrix} r & 0 \\ 0 & r \end{bmatrix} is positive and Q = \begin{bmatrix} \cos θ & −\sin θ \\ \sin θ & \cos θ \end{bmatrix} is orthogonal. But in C we have G = r and Q = cos θ + i sin θ, so (xiii) reads z = r(cos θ + i sin θ) = re^{iθ}, which is the classical polar form for the complex number a + bi. This is why (xiii) is called the polar form of the matrix \begin{bmatrix} a & −b \\ b & a \end{bmatrix}; Definition 8.12 simply adopts the terminology for n × n matrices.
11 See J.T. Scheick, Linear Algebra with Applications, McGraw-Hill, 1997, page 379.
The Pseudoinverse of a Matrix

It is impossible for a non-square matrix A to have an inverse (see the footnote to Definition 2.11). Nonetheless, one candidate for an “inverse” of A is an n × m matrix B such that

ABA = A and BAB = B

Such a matrix B is called a middle inverse for A. If A is invertible then A^{−1} is the unique middle inverse for A, but a middle inverse is not unique in general, even for square matrices. For example, if A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} then B = \begin{bmatrix} 1 & 0 & 0 \\ b & 0 & 0 \end{bmatrix} is a middle inverse for A for any b.
If ABA = A and BAB = B it is easy to see that AB and BA are both idempotent matrices. In 1955 Roger
Penrose observed that the middle inverse is unique if both AB and BA are symmetric. We omit the proof.
Definition 8.13
Let A be a real m × n matrix. The pseudoinverse of A is the unique n × m matrix A^+ such that A and A^+ satisfy P1 and P2, that is:

P1. AA^+A = A and A^+AA^+ = A^+.
P2. Both AA^+ and A^+A are symmetric.
If A is invertible then A+ = A−1 as expected. In general, the symmetry in conditions P1 and P2 shows
that A is the pseudoinverse of A+ , that is A++ = A.
^{12} R. Penrose, A generalized inverse for matrices, Proceedings of the Cambridge Philosophical Society 51 (1955), 406-413. In fact Penrose proved this for any complex matrix, where AB and BA are both required to be hermitian (see Definition 8.18 in the following section).

^{13} Penrose called the matrix A^+ the generalized inverse of A, but the term pseudoinverse is now commonly used. The matrix A^+ is also called the Moore-Penrose inverse after E.H. Moore who had the idea in 1935 as part of a larger work on “General Analysis”. Penrose independently re-discovered it 20 years later.
Theorem 8.6.5
Let A be an m × n matrix.

1. If rank A = m, then AA^T is invertible and A^+ = A^T(AA^T)^{−1}.

2. If rank A = n, then A^T A is invertible and A^+ = (A^T A)^{−1}A^T.

Proof. Here AA^T (respectively A^T A) is invertible by Theorem 5.4.4 (respectively Theorem 5.4.3). The rest is a routine verification.
In general, given an m × n matrix A, the pseudoinverse A^+ can be computed from any SVD for A. To see how, we need some notation. Let A = UΣV^T be an SVD for A (as in Definition 8.9) where U and V are orthogonal and Σ = \begin{bmatrix} D & 0 \\ 0 & 0 \end{bmatrix}_{m×n} in block form, where D = diag(d_1, d_2, . . . , d_r) with each d_i > 0. Hence D is invertible, so we make:

Definition 8.14
Σ′ = \begin{bmatrix} D^{−1} & 0 \\ 0 & 0 \end{bmatrix}_{n×m}
Lemma 8.6.6
• ΣΣ′Σ = Σ
• Σ′ΣΣ′ = Σ′
• ΣΣ′ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}_{m×m}
• Σ′Σ = \begin{bmatrix} I_r & 0 \\ 0 & 0 \end{bmatrix}_{n×n}
Now let B = VΣ′U^T. Then

ABA = (UΣV^T)(VΣ′U^T)(UΣV^T) = U(ΣΣ′Σ)V^T = UΣV^T = A

by Lemma 8.6.6. Similarly BAB = B. Moreover AB = U(ΣΣ′)U^T and BA = V(Σ′Σ)V^T are both symmetric again by Lemma 8.6.6. This proves
Theorem 8.6.6
Let A be real and m × n, and let A = UΣV^T be any SVD for A as in Definition 8.9. Then A^+ = VΣ′U^T.
Of course we can always use the SVD constructed in Theorem 8.6.1 to find the pseudoinverse. If A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}, we observed above that B = \begin{bmatrix} 1 & 0 & 0 \\ b & 0 & 0 \end{bmatrix} is a middle inverse for A for any b. Furthermore, if b ≠ 0, AB is symmetric but BA is not, so B ≠ A^+.
Example 8.6.3
Find A^+ if A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix}.

Solution. A^T A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} with eigenvalues λ_1 = 1 and λ_2 = 0 and corresponding eigenvectors q_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and q_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}. Hence Q = \begin{bmatrix} q_1 & q_2 \end{bmatrix} = I_2. Also A has rank 1 with singular values σ_1 = 1 and σ_2 = 0, so Σ_A = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix} = A and Σ′_A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = A^T in this case.

Since Aq_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} and Aq_2 = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, we have p_1 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, which extends to an orthonormal basis {p_1, p_2, p_3} of R^3 where (say) p_2 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} and p_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. Hence P = \begin{bmatrix} p_1 & p_2 & p_3 \end{bmatrix} = I_3, so the SVD for A is A = PΣ_A Q^T. Finally, the pseudoinverse of A is

A^+ = QΣ′_A P^T = Σ′_A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}

Note that A^+ = A^T in this case.
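Numerically the pseudoinverse is computed exactly as in Theorem 8.6.6, from an SVD; NumPy exposes this as numpy.linalg.pinv. A sketch (ours) revisiting this example and the middle inverse B above:

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 0.0]])                  # the matrix of Example 8.6.3

print(np.linalg.pinv(A))                    # equals A^T here, as shown above

B = np.array([[1.0, 0.0, 0.0],
              [5.0, 0.0, 0.0]])             # the middle inverse with b = 5
print(np.allclose(A @ B @ A, A))            # True
print(np.allclose(B @ A @ B, B))            # True
print(np.allclose((B @ A).T, B @ A))        # False: BA is not symmetric, so B != A+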
The following Lemma collects some properties of the pseudoinverse that mimic those of the inverse.
The verifications are left as exercises.
Lemma 8.6.7
Let A be an m × n matrix of rank r.
1. A++ = A.
3. (AT )+ = (A+ )T .
Exercise 8.6.2 For any matrix A show that

Σ_{A^T} = (Σ_A)^T

Exercise 8.6.3 If A is m × n with all singular values positive, what is rank A?

Exercise 8.6.4 If A has singular values σ_1, . . . , σ_r, what are the singular values of:

Exercise 8.6.11 If A = UΣV^T is an SVD for A, find an SVD for A^T.

Exercise 8.6.12 Let A be a real, m × n matrix with positive singular values σ_1, σ_2, . . . , σ_r, and write

s(x) = (x − σ_1^2)(x − σ_2^2) · · · (x − σ_r^2)

a. Show that c_{A^T A}(x) = s(x)x^{n−r} and c_{AA^T}(x) = s(x)x^{m−r}.
8.7 Complex Matrices

If A is an n × n matrix, the characteristic polynomial c_A(x) is a polynomial of degree n and the eigenvalues
of A are just the roots of cA (x). In most of our examples these roots have been real numbers (in fact,
the examples have been carefully chosen so this will be the case!); but it need not happen, even when
the characteristic polynomial has real coefficients. For example, if A = \begin{bmatrix} 0 & 1 \\ −1 & 0 \end{bmatrix} then c_A(x) = x^2 + 1 has roots i and −i, where i is a complex number satisfying i^2 = −1. Therefore, we have to deal with the possibility that the eigenvalues of a (real) square matrix might be complex numbers.
In fact, nearly everything in this book would remain true if the phrase real number were replaced by
complex number wherever it occurs. Then we would deal with matrices with complex entries, systems
of linear equations with complex coefficients (and complex solutions), determinants of complex matrices,
and vector spaces with scalar multiplication by any complex number allowed. Moreover, the proofs of
most theorems about (the real version of) these concepts extend easily to the complex case. It is not our
intention here to give a full treatment of complex linear algebra. However, we will carry the theory far
enough to give another proof that the eigenvalues of a real symmetric matrix A are real (Theorem 5.5.7)
and to prove the spectral theorem, an extension of the principal axes theorem (Theorem 8.2.2).
The set of complex numbers is denoted C . We will use only the most basic properties of these numbers
(mainly conjugation and absolute values), and the reader can find this material in Appendix A.
If n ≥ 1, we denote the set of all n-tuples of complex numbers by C^n. As with R^n, these n-tuples will be written either as row or column matrices and will be referred to as vectors. We define vector operations on C^n as follows:

(z_1, z_2, . . . , z_n) + (w_1, w_2, . . . , w_n) = (z_1 + w_1, z_2 + w_2, . . . , z_n + w_n)
u(z_1, z_2, . . . , z_n) = (uz_1, uz_2, . . . , uz_n) for u in C
With these definitions, Cn satisfies the axioms for a vector space (with complex scalars) given in Chapter 6.
Thus we can speak of spanning sets for Cn , of linearly independent subsets, and of bases. In all cases,
the definitions are identical to the real case, except that the scalars are allowed to be complex numbers. In
particular, the standard basis of Rn remains a basis of Cn , called the standard basis of Cn .
A matrix A = [a_{ij}] is called a complex matrix if every entry a_{ij} is a complex number. The notion of conjugation for complex numbers extends to matrices as follows: Define the conjugate of A = [a_{ij}] to be the matrix

\overline{A} = [\overline{a_{ij}}]

obtained from A by conjugating every entry. Then (using Appendix A)

\overline{A + B} = \overline{A} + \overline{B} and \overline{AB} = \overline{A}\,\overline{B}

The dot product on R^n extends to C^n as follows: the standard inner product of z = (z_1, z_2, . . . , z_n) and w = (w_1, w_2, . . . , w_n) in C^n is defined by

⟨z, w⟩ = z_1\overline{w_1} + z_2\overline{w_2} + · · · + z_n\overline{w_n}

Clearly, if z and w actually lie in R^n, then ⟨z, w⟩ = z · w is the usual dot product.
Example 8.7.1
If z = (2, 1 − i, 2i, 3 − i) and w = (1 − i, −1, −i, 3 + 2i), then

⟨z, w⟩ = 2(1 + i) + (1 − i)(−1) + (2i)(i) + (3 − i)(3 − 2i) = 6 − 6i
Note that ⟨z, w⟩ is a complex number in general. However, if w = z = (z_1, z_2, . . . , z_n), the definition gives ⟨z, z⟩ = |z_1|^2 + · · · + |z_n|^2, which is a nonnegative real number, equal to 0 if and only if z = 0. This explains the conjugation in the definition of ⟨z, w⟩, and it gives (4) of the following theorem.
Theorem 8.7.1
Let z, z_1, w, and w_1 denote vectors in C^n, and let λ denote a complex number. Then:

1. ⟨z + z_1, w⟩ = ⟨z, w⟩ + ⟨z_1, w⟩ and ⟨z, w + w_1⟩ = ⟨z, w⟩ + ⟨z, w_1⟩.

2. ⟨λz, w⟩ = λ⟨z, w⟩ and ⟨z, λw⟩ = \overline{λ}⟨z, w⟩.

3. ⟨z, w⟩ = \overline{⟨w, z⟩}.

4. ⟨z, z⟩ is a nonnegative real number, and ⟨z, z⟩ = 0 if and only if z = 0.

Proof. We leave (1) and (2) to the reader (Exercise 8.7.10), and (4) has already been proved. To prove (3), write z = (z_1, z_2, . . . , z_n) and w = (w_1, w_2, . . . , w_n). Then

\overline{⟨w, z⟩} = \overline{(w_1\overline{z_1} + · · · + w_n\overline{z_n})} = \overline{w_1}z_1 + · · · + \overline{w_n}z_n = z_1\overline{w_1} + · · · + z_n\overline{w_n} = ⟨z, w⟩
As in R^n, the norm of z in C^n is defined by ‖z‖ = √⟨z, z⟩ = √(|z_1|^2 + |z_2|^2 + · · · + |z_n|^2). The only properties of the norm function we will need are the following (the proofs are left to the reader):

Theorem 8.7.2
If z is any vector in C^n, then

1. ‖z‖ ≥ 0, and ‖z‖ = 0 if and only if z = 0.

2. ‖λz‖ = |λ| ‖z‖ for any complex number λ.

A vector u in C^n is called a unit vector if ‖u‖ = 1. Property (2) in Theorem 8.7.2 then shows that if z ≠ 0 is any nonzero vector in C^n, then u = \frac{1}{\|z\|} z is a unit vector.
Example 8.7.2
In C4 , find a unit vector u that is a positive real multiple of z = (1 − i, i, 2, 3 + 4i).
Solution. ‖z‖ = √(2 + 1 + 4 + 25) = √32 = 4√2, so take u = \frac{1}{4\sqrt{2}} z.
Transposition of complex matrices is defined just as in the real case, and the following notion is fundamental: the conjugate transpose A^H of a complex matrix A is defined by

A^H = (\overline{A})^T = \overline{(A^T)}
Example 8.7.3
\begin{bmatrix} 3 & 1−i & 2+i \\ 2i & 5+2i & −i \end{bmatrix}^H = \begin{bmatrix} 3 & −2i \\ 1+i & 5−2i \\ 2−i & i \end{bmatrix}
^{14} Other notations for A^H are A^∗ and A^†.
The following properties of AH follow easily from the rules for transposition of real matrices and
extend these rules to complex matrices. Note the conjugate in property (3).
Theorem 8.7.3
Let A and B denote complex matrices, and let λ be a complex number.
1. (AH )H = A.
2. (A + B)H = AH + BH .
3. (λ A)H = λ AH .
4. (AB)H = BH AH .
If A is a real symmetric matrix, it is clear that A^H = A. The complex matrices that satisfy this condition turn out to be the most natural generalization of the real symmetric matrices:

Definition 8.18 Hermitian Matrices
A square complex matrix A is called hermitian^{15} if A^H = A, equivalently if \overline{A} = A^T.

Hermitian matrices are easy to recognize because the entries on the main diagonal must be real, and the “reflection” of each nondiagonal entry in the main diagonal must be the conjugate of that entry.
Example 8.7.4
\begin{bmatrix} 3 & i & 2+i \\ −i & −2 & −7 \\ 2−i & −7 & 1 \end{bmatrix} is hermitian, whereas \begin{bmatrix} 1 & i \\ i & −2 \end{bmatrix} and \begin{bmatrix} 1 & i \\ −i & i \end{bmatrix} are not.
The following Theorem extends Theorem 8.2.3, and gives a very useful characterization of hermitian
matrices in terms of the standard inner product in Cn .
Theorem 8.7.4
An n × n complex matrix A is hermitian if and only if
hAz, wi = hz, Awi
for all n-tuples z and w in Cn .
15 The name hermitian honours Charles Hermite (1822–1901), a French mathematician who worked primarily in analysis and
is remembered as the first to show that the number e from calculus is transcendental—that is, e is not a root of any polynomial
with integer coefficients.
Theorem 8.7.5
Let A denote a hermitian matrix.

1. The eigenvalues of A are real.

2. Eigenvectors of A corresponding to distinct eigenvalues are orthogonal.

Proof. Let λ and µ be eigenvalues of A with (nonzero) eigenvectors z and w. Then Az = λz and Aw = µw, so Theorem 8.7.4 gives

λ⟨z, w⟩ = ⟨λz, w⟩ = ⟨Az, w⟩ = ⟨z, Aw⟩ = ⟨z, µw⟩ = \overline{µ}⟨z, w⟩   (8.6)

If µ = λ and w = z, this becomes λ⟨z, z⟩ = \overline{λ}⟨z, z⟩. Because ⟨z, z⟩ = ‖z‖^2 ≠ 0, this implies λ = \overline{λ}. Thus λ is real, proving (1). Similarly, µ is real, so equation (8.6) gives λ⟨z, w⟩ = µ⟨z, w⟩. If λ ≠ µ, this implies ⟨z, w⟩ = 0, proving (2).
The principal axes theorem (Theorem 8.2.2) asserts that every real symmetric matrix A is orthogonally
diagonalizable—that is PT AP is diagonal where P is an orthogonal matrix (P−1 = PT ). The next theorem
identifies the complex analogs of these orthogonal real matrices.
Theorem 8.7.6
The following are equivalent for an n × n complex matrix A.

1. A is invertible and A^{−1} = A^H.

2. The columns of A are an orthonormal set in C^n.

3. The rows of A are an orthonormal set in C^n.

Such a matrix A is called a unitary matrix.

Proof. If A = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix} is a complex matrix with jth column c_j, then A^H A = [⟨c_j, c_i⟩], as in Theorem 8.2.1. Now (1) ⇔ (2) follows, and (1) ⇔ (3) is proved in the same way.
Example 8.7.5
The matrix A = \begin{bmatrix} 1+i & 1 \\ 1−i & i \end{bmatrix} has orthogonal columns, but the rows are not orthogonal. Normalizing the columns gives the unitary matrix \frac{1}{2} \begin{bmatrix} 1+i & \sqrt{2} \\ 1−i & \sqrt{2}\,i \end{bmatrix}.
Given a real symmetric matrix A, the diagonalization algorithm in Section 3.3 leads to a procedure for
finding an orthogonal matrix P such that PT AP is diagonal (see Example 8.2.4). The following example
illustrates Theorem 8.7.5 and shows that the technique works for complex matrices.
Example 8.7.6
Consider the hermitian matrix A = \begin{bmatrix} 3 & 2+i \\ 2−i & 7 \end{bmatrix}. Find the eigenvalues of A, find two orthonormal eigenvectors, and so find a unitary matrix U such that U^H AU is diagonal.

Solution. The characteristic polynomial is

c_A(x) = det(xI − A) = (x − 3)(x − 7) − (2 + i)(2 − i) = x^2 − 10x + 16 = (x − 2)(x − 8)

Hence the eigenvalues are 2 and 8 (both real as expected), and corresponding eigenvectors are \begin{bmatrix} 2+i \\ −1 \end{bmatrix} and \begin{bmatrix} 1 \\ 2−i \end{bmatrix} (orthogonal as expected). Each has length √6 so, as in the (real) diagonalization algorithm, let

U = \frac{1}{\sqrt{6}} \begin{bmatrix} 2+i & 1 \\ −1 & 2−i \end{bmatrix}

be the unitary matrix with the normalized eigenvectors as columns. Then U^H AU = \begin{bmatrix} 2 & 0 \\ 0 & 8 \end{bmatrix} is diagonal.
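Numerical libraries diagonalize hermitian matrices the same way; numpy.linalg.eigh accepts complex hermitian input and returns real eigenvalues with a unitary matrix of eigenvectors. A sketch (ours, assuming Python with NumPy) for Example 8.7.6:

import numpy as np

A = np.array([[3, 2 + 1j],
              [2 - 1j, 7]])                 # the hermitian matrix above

w, U = np.linalg.eigh(A)                    # real eigenvalues, unitary U
print(w)                                    # [2. 8.]
print(np.allclose(U.conj().T @ A @ U, np.diag(w)))  # True: U^H A U is diagonal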
Unitary Diagonalization

Theorem 8.7.7: Schur’s Theorem
If A is any n × n complex matrix, there exists a unitary matrix U such that

U^H AU = T

is upper triangular. Moreover, the entries on the main diagonal of T are the eigenvalues λ_1, λ_2, . . . , λ_n of A (including multiplicities).
Proof. We use induction on n. If n = 1, A is already upper triangular. If n > 1, assume the theorem is valid for (n − 1) × (n − 1) complex matrices. Let λ_1 be an eigenvalue of A, and let y_1 be an eigenvector with ‖y_1‖ = 1. Then y_1 is part of a basis of C^n (by the analog of Theorem 6.4.1), so the (complex analog of the) Gram-Schmidt process provides y_2, . . . , y_n such that {y_1, y_2, . . . , y_n} is an orthonormal basis of C^n. If U_1 = \begin{bmatrix} y_1 & y_2 & \cdots & y_n \end{bmatrix} is the matrix with these vectors as its columns, then (see Lemma 5.4.3)

U_1^H A U_1 = \begin{bmatrix} λ_1 & X_1 \\ 0 & A_1 \end{bmatrix}

in block form, where A_1 is (n − 1) × (n − 1). By induction, there is a unitary (n − 1) × (n − 1) matrix W_1 such that W_1^H A_1 W_1 = T_1 is upper triangular. Then U_2 = \begin{bmatrix} 1 & 0 \\ 0 & W_1 \end{bmatrix} is a unitary n × n matrix. Hence U = U_1 U_2 is unitary (using Theorem 8.7.6), and

U^H AU = U_2^H (U_1^H A U_1) U_2 = \begin{bmatrix} 1 & 0 \\ 0 & W_1^H \end{bmatrix} \begin{bmatrix} λ_1 & X_1 \\ 0 & A_1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & W_1 \end{bmatrix} = \begin{bmatrix} λ_1 & X_1 W_1 \\ 0 & T_1 \end{bmatrix}

is upper triangular. Finally, A and U^H AU = T have the same eigenvalues by (the complex version of) Theorem 5.5.1, and they are the diagonal entries of T because T is upper triangular.
The fact that similar matrices have the same traces and determinants gives the following consequence
of Schur’s theorem.
Corollary 8.7.1
Let A be an n × n complex matrix, and let λ1 , λ2 , . . . , λn denote the eigenvalues of A, including
multiplicities. Then
det A = λ1 λ2 · · · λn and tr A = λ1 + λ2 + · · · + λn
Schur’s theorem asserts that every complex matrix can be “unitarily triangularized.” However, we cannot substitute “unitarily diagonalized” here. In fact, if A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, there is no invertible complex matrix U at all such that U^{−1}AU is diagonal. However, the situation is much better for hermitian matrices.

Theorem 8.7.8: Spectral Theorem
If A is hermitian, there is a unitary matrix U such that U^H AU is diagonal.

Proof. By Schur’s theorem, let U^H AU = T be upper triangular where U is unitary. Since A is hermitian,
this gives
T H = (U H AU )H = U H AH U HH = U H AU = T
This means that T is both upper and lower triangular. Hence T is actually diagonal.
The principal axes theorem asserts that a real matrix A is symmetric if and only if it is orthogonally
diagonalizable (that is, PT AP is diagonal for some real orthogonal matrix P). Theorem 8.7.8 is the complex
analog of half of this result. However, the converse is false for complex matrices: There exist unitarily
diagonalizable matrices that are not hermitian.
Example 8.7.7
Show that the non-hermitian matrix A = \begin{bmatrix} 0 & 1 \\ −1 & 0 \end{bmatrix} is unitarily diagonalizable.

Solution. The characteristic polynomial is c_A(x) = x^2 + 1, so the eigenvalues are i and −i, with corresponding eigenvectors \begin{bmatrix} 1 \\ i \end{bmatrix} and \begin{bmatrix} 1 \\ −i \end{bmatrix}. Each has length √2, so U = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ i & −i \end{bmatrix} is a unitary matrix such that U^H AU = \begin{bmatrix} i & 0 \\ 0 & −i \end{bmatrix} is diagonal.
There is a very simple way to characterize those complex matrices that are unitarily diagonalizable. To this end, an n × n complex matrix N is called normal if NN^H = N^H N. It is clear that every hermitian or unitary matrix is normal, as is the matrix \begin{bmatrix} 0 & 1 \\ −1 & 0 \end{bmatrix} in Example 8.7.7. In fact we have the following result.
Theorem 8.7.9
An n × n complex matrix A is unitarily diagonalizable if and only if A is normal.
Proof. Assume first that U H AU = D, where U is unitary and D is diagonal. Then DDH = DH D as is
easily verified. Because DDH = U H (AAH )U and DH D = U H (AH A)U , it follows by cancellation that
AAH = AH A.
Conversely, assume A is normal—that is, AA^H = A^H A. By Schur’s theorem, let U^H AU = T, where T is upper triangular and U is unitary. Then T is normal too:

TT^H = U^H (AA^H)U = U^H (A^H A)U = T^H T

Hence it remains to show that a normal, upper triangular matrix T must be diagonal. Comparing the (1, 1)-entries of TT^H and T^H T gives |t_{11}|^2 + |t_{12}|^2 + · · · + |t_{1n}|^2 = |t_{11}|^2, so t_{12} = · · · = t_{1n} = 0. Repeating this argument on the remaining rows shows that T is diagonal, as required.
Theorem 8.7.10: Cayley-Hamilton Theorem
If A is any n × n complex matrix, then c_A(A) = 0; that is, A is a root of its characteristic polynomial.

Proof. If p(x) is any polynomial with complex coefficients, then p(P^{−1}AP) = P^{−1}p(A)P for any invertible complex matrix P. Hence, by Schur’s theorem, we may assume that A is upper triangular. Then the eigenvalues λ_1, λ_2, . . . , λ_n of A appear along the main diagonal, so

c_A(x) = (x − λ_1)(x − λ_2)(x − λ_3) · · · (x − λ_n)

Thus

c_A(A) = (A − λ_1 I)(A − λ_2 I)(A − λ_3 I) · · · (A − λ_n I)

Note that each matrix A − λ_i I is upper triangular. Now observe:

1. A − λ_1 I has zero first column because column 1 of A is (λ_1, 0, 0, . . . , 0)^T.
2. Then (A − λ_1 I)(A − λ_2 I) has the first two columns zero because the second column of (A − λ_2 I) is (b, 0, 0, . . . , 0)^T for some constant b.

3. Next (A − λ_1 I)(A − λ_2 I)(A − λ_3 I) has the first three columns zero because column 3 of (A − λ_3 I) is (c, d, 0, . . . , 0)^T for some constants c and d.

Continuing in this way we see that (A − λ_1 I)(A − λ_2 I)(A − λ_3 I) · · · (A − λ_n I) has all n columns zero; that is, c_A(A) = 0.
Exercise 8.7.1 In each case, compute the norm of the complex vector.

a. (1, 1 − i, −2, i)

b. (1 − i, 1 + i, 1, −1)

c. (2 + i, 1 − i, 2, 0, −i)

d. (−2, −i, 1 + i, 1 − i, 2i)

Exercise 8.7.2 In each case, determine whether the two vectors are orthogonal.

a. (4, −3i, 2 + i), (i, 2, 2 − 4i)

b. (i, −i, 2 + i), (i, i, 2 − i)

c. (1, 1, i, i), (1, i, −i, 1)

d. (4 + 4i, 2 + i, 2i), (−1 + i, 2, 3 − 2i)

Exercise 8.7.3 A subset U of C^n is called a complex subspace of C^n if it contains 0 and if, given v and w in U, both v + w and zv lie in U (z any complex number). In each case, determine whether U is a complex subspace of C^3.

a. U = {(w, w, 0) | w in C}

b. U = {(w, 2w, a) | w in C, a in R}

c. U = R^3

d. U = {(v + w, v − 2w, v) | v, w in C}

Exercise 8.7.4 In each case, find a basis over C, and determine the dimension of the complex subspace U of C^3 (see the previous exercise).

a. U = {(w, v + w, v − iw) | v, w in C}

b. U = {(iv + w, 0, 2v − w) | v, w in C}

c. U = {(u, v, w) | iu − 3v + (1 − i)w = 0; u, v, w in C}

d. U = {(u, v, w) | 2u + (1 + i)v − iw = 0; u, v, w in C}

Exercise 8.7.5 In each case, determine whether the given matrix is hermitian, unitary, or normal.

a. \begin{bmatrix} 1 & −i \\ i & i \end{bmatrix}   b. \begin{bmatrix} 2 & 3 \\ −3 & 2 \end{bmatrix}   c. \begin{bmatrix} 1 & i \\ −i & 2 \end{bmatrix}   d. \begin{bmatrix} 1 & −i \\ i & −1 \end{bmatrix}

e. \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & −1 \\ 1 & 1 \end{bmatrix}   f. \begin{bmatrix} 1 & 1+i \\ 1+i & i \end{bmatrix}   g. \begin{bmatrix} 1+i & 1 \\ −i & −1+i \end{bmatrix}   h. \frac{1}{\sqrt{2}\,|z|} \begin{bmatrix} z & z \\ \overline{z} & −\overline{z} \end{bmatrix}, z ≠ 0

Exercise 8.7.15

b. Show that the diagonal entries of any hermitian matrix are real.

Exercise 8.7.16

a. If Z is an invertible complex matrix, show that Z^H is invertible and that (Z^H)^{−1} = (Z^{−1})^H.

b. Show that the inverse of a unitary matrix is again unitary.

c. If U is unitary, show that U^H is unitary.

Exercise 8.7.17 Let Z be an m × n matrix such that Z^H Z = I_n (for example, Z is a unit column in C^n).

a. Show that V = ZZ^H is hermitian and satisfies V^2 = V.

b. Show that U = I − 2ZZ^H is both unitary and hermitian (so U^{−1} = U^H = U).

Exercise 8.7.18

a. If N is normal, show that zN is also normal for all complex numbers z.

b. Show that (a) fails if normal is replaced by hermitian.

Exercise 8.7.19 Show that a real 2 × 2 normal matrix is either symmetric or has the form \begin{bmatrix} a & b \\ −b & a \end{bmatrix}.

Exercise 8.7.20 If A is hermitian, show that all the coefficients of c_A(x) are real numbers.

Exercise 8.7.21

a. If A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, show that U^{−1}AU is not diagonal for any invertible complex matrix U.

b. If A = \begin{bmatrix} 0 & 1 \\ −1 & 0 \end{bmatrix}, show that U^{−1}AU is not upper triangular for any real invertible matrix U.

Exercise 8.7.22 If A is any n × n matrix, show that U^H AU is lower triangular for some unitary matrix U.

Exercise 8.7.23 If A is a 3 × 3 matrix, show that A^2 = 0 if and only if there exists a unitary matrix U such that U^H AU has the form \begin{bmatrix} 0 & 0 & u \\ 0 & 0 & v \\ 0 & 0 & 0 \end{bmatrix} or the form \begin{bmatrix} 0 & u & v \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

Exercise 8.7.24 If A^2 = A, show that rank A = tr A. [Hint: Use Schur’s theorem.]

Exercise 8.7.25 Let A be any n × n complex matrix with eigenvalues λ_1, . . . , λ_n. Show that A = P + N where N^n = 0 and P = UDU^H where U is unitary and D = diag(λ_1, . . . , λ_n). [Hint: Schur’s theorem.]
8.8 An Application to Linear Codes over Finite Fields

For centuries mankind has been using codes to transmit messages. In many cases, for example transmit-
ting financial, medical, or military information, the message is disguised in such a way that it cannot be
understood by an intruder who intercepts it, but can be easily “decoded” by the intended receiver. This
subject is called cryptography and, while intriguing, is not our focus here. Instead, we investigate methods
for detecting and correcting errors in the transmission of the message.
The stunning photos of the planet Saturn sent by the space probe are a very good example of how
successful these methods can be. These messages are subject to “noise” such as solar interference which
causes errors in the message. The signal is received on Earth with errors that must be detected and cor-
rected before the high-quality pictures can be printed. This is done using error-correcting codes. To see
how, we first discuss a system of adding and multiplying integers while ignoring multiples of a fixed
integer.
Modular Arithmetic
We work in the set Z = {0, ±1, ±2, ±3, . . . } of integers, that is the set of whole numbers. Everyone is
familiar with the process of “long division” from arithmetic. For example, we can divide an integer a by 5
and leave a remainder “modulo 5” in the set {0, 1, 2, 3, 4}. As an illustration
19 = 3 · 5 + 4
so the remainder of 19 modulo 5 is 4. Similarly, the remainder of 137 modulo 5 is 2 because we have
137 = 27 · 5 + 2. This works even for negative integers: For example,
−17 = (−4) · 5 + 3

so the remainder of −17 modulo 5 is 3. In general, given an integer n ≥ 2, every integer a can be written uniquely in the form

a = qn + r where q and r are integers and 0 ≤ r ≤ n − 1

Here q is called the quotient of a modulo n, and r is called the remainder of a modulo n. We refer to n
as the modulus. Thus, if n = 6, the fact that 134 = 22 · 6 + 2 means that 134 has quotient 22 and remainder
2 modulo 6.
Our interest here is in the set of all possible remainders modulo n. This set is denoted
Zn = {0, 1, 2, 3, . . . , n − 1}
and is called the set of integers modulo n. Thus every integer is uniquely represented in Zn by its remain-
der modulo n.
We are going to show how to do arithmetic in Zn by adding and multiplying modulo n. That is, we
add or multiply two numbers in Zn by calculating the usual sum or product in Z and taking the remainder
modulo n. It is proved in books on abstract algebra that the usual laws of arithmetic hold in Zn for any
modulus n ≥ 2. This seems remarkable until we remember that these laws are true for ordinary addition
and multiplication and all we are doing is reducing modulo n.
To illustrate, consider the case n = 6, so that Z6 = {0, 1, 2, 3, 4, 5}. Then 2 + 5 = 1 in Z6 because 7
leaves a remainder of 1 when divided by 6. Similarly, 2 · 5 = 4 in Z6 , while 3 + 5 = 2, and 3 + 3 = 0. In
this way we can fill in the addition and multiplication tables for Z6 ; the result is:
Tables for Z6
+ 0 1 2 3 4 5 × 0 1 2 3 4 5
0 0 1 2 3 4 5 0 0 0 0 0 0 0
1 1 2 3 4 5 0 1 0 1 2 3 4 5
2 2 3 4 5 0 1 2 0 2 4 0 2 4
3 3 4 5 0 1 2 3 0 3 0 3 0 3
4 4 5 0 1 2 3 4 0 4 2 0 4 2
5 5 0 1 2 3 4 5 0 5 4 3 2 1
Calculations in Z6 are carried out much as in Z . As an illustration, consider the familiar “distributive law”
a(b + c) = ab + ac from ordinary arithmetic. This holds for all a, b, and c in Z6 ; we verify a particular
case:
3(5 + 4) = 3 · 5 + 3 · 4 in Z6
In fact, the left side is 3(5 + 4) = 3 · 3 = 3, and the right side is (3 · 5) + (3 · 4) = 3 + 0 = 3 too. Hence
doing arithmetic in Z6 is familiar. However, there are differences. For example, 3 · 4 = 0 in Z6 , in contrast
to the fact that a · b = 0 in Z can only happen when either a = 0 or b = 0. Similarly, 3^2 = 3 in Z_6, unlike Z.
Note that we will make statements like −30 = 19 in Z7 ; it means that −30 and 19 leave the same
remainder 5 when divided by 7, and so are equal in Z_7 because they both equal 5. In general, if n ≥ 2 is any modulus, the operative fact is that

a = b in Z_n if and only if a − b is a multiple of n

In this case we say that a and b are equal modulo n, and write a = b (mod n).
Arithmetic in Zn is, in a sense, simpler than that for the integers. For example, consider negatives.
Given the element 8 in Z17 , what is −8? The answer lies in the observation that 8 + 9 = 0 in Z17 , so
−8 = 9 (and −9 = 8). In the same way, finding negatives is not difficult in Zn for any modulus n.
Finite Fields
In our study of linear algebra so far the scalars have been real (possibly complex) numbers. The set R
of real numbers has the property that it is closed under addition and multiplication, that the usual laws of
arithmetic hold, and that every nonzero real number has an inverse in R. Such a system is called a field.
Hence the real numbers R form a field, as does the set C of complex numbers. Another example is the set
Q of all rational numbers (fractions); however the set Z of integers is not a field—for example, 2 has no
inverse in the set Z because 2 · x = 1 has no solution x in Z .
Our motivation for isolating the concept of a field is that nearly everything we have done remains valid
if the scalars are restricted to some field: The gaussian algorithm can be used to solve systems of linear
equations with coefficients in the field; a square matrix with entries from the field is invertible if and only
if its determinant is nonzero; the matrix inversion algorithm works in the same way; and so on. The reason
is that the field has all the properties used in the proofs of these results for the field R, so all the theorems
remain valid.
It turns out that there are finite fields—that is, finite sets that satisfy the usual laws of arithmetic and in
which every nonzero element a has an inverse, that is an element b in the field such that ab = 1. If n ≥ 2 is
an integer, the modular system Zn certainly satisfies the basic laws of arithmetic, but it need not be a field.
For example we have 2 · 3 = 0 in Z6 so 3 has no inverse in Z6 (if 3a = 1 then 2 = 2 · 1 = 2(3a) = 0a = 0
in Z6 , a contradiction). The problem is that 6 = 2 · 3 can be properly factored in Z.
An integer p ≥ 2 is called a prime if p cannot be factored as p = ab where a and b are positive integers
and neither a nor b equals 1. Thus the first few primes are 2, 3, 5, 7, 11, 13, 17, . . . . If n ≥ 2 is not a
prime and n = ab where 2 ≤ a, b ≤ n − 1, then ab = 0 in Zn and it follows (as above in the case n = 6)
that b cannot have an inverse in Zn , and hence that Zn is not a field. In other words, if Zn is a field, then n
must be a prime. Surprisingly, the converse is true:
Theorem 8.8.1
If p is a prime, then Z p is a field using addition and multiplication modulo p.
The proof can be found in books on abstract algebra.18 If p is a prime, the field Z p is called the field of
integers modulo p.
For example, consider the case n = 5. Then Z5 = {0, 1, 2, 3, 4} and the addition and multiplication
tables are:
+ | 0 1 2 3 4        × | 0 1 2 3 4
0 | 0 1 2 3 4        0 | 0 0 0 0 0
1 | 1 2 3 4 0        1 | 0 1 2 3 4
2 | 2 3 4 0 1        2 | 0 2 4 1 3
3 | 3 4 0 1 2        3 | 0 3 1 4 2
4 | 4 0 1 2 3        4 | 0 4 3 2 1
Hence 1 and 4 are self-inverse in Z5 , and 2 and 3 are inverses of each other, so Z5 is indeed a field. Here
is another important example.
Example 8.8.1
If p = 2, then Z2 = {0, 1} is a field with addition and multiplication modulo 2 given by the tables
+ | 0 1        × | 0 1
0 | 0 1        0 | 0 0
1 | 1 0        1 | 0 1
While it is routine to find negatives of elements of Zp, it is a bit more difficult to find inverses in Zp. For example, how does one find 14⁻¹ in Z17? Since we want 14⁻¹ · 14 = 1 in Z17, we are looking for an integer a with the property that a · 14 = 1 modulo 17. Of course we can try all possibilities in Z17 (there are
only 17 of them!), and the result is a = 11 (verify). However this method is of little use for large primes
p, and it is a comfort to know that there is a systematic procedure (called the euclidean algorithm) for
finding inverses in Z p for any prime p. Furthermore, this algorithm is easy to program for a computer. To
illustrate the method, let us once again find the inverse of 14 in Z17 .
Example 8.8.2
Find the inverse of 14 in Z17 .
Solution. First divide 17 by 14, and then divide 14 by the remainder 3:

17 = 1 · 14 + 3
14 = 4 · 3 + 2
18 See, for example, W. Keith Nicholson, Introduction to Abstract Algebra, 4th ed., (New York: Wiley, 2012).
and then divide (the previous divisor) 3 by the new remainder 2 to get
3 = 1·2+1
It is a theorem of number theory that, because 17 is a prime, this procedure will always lead to a
remainder of 1. At this point we eliminate remainders in these equations from the bottom up:
1 = 3−1·2 since 3 = 1 · 2 + 1
= 3 − 1 · (14 − 4 · 3) = 5 · 3 − 1 · 14 since 2 = 14 − 4 · 3
= 5 · (17 − 1 · 14) − 1 · 14 = 5 · 17 − 6 · 14    since 3 = 17 − 1 · 14

Hence (−6) · 14 = 1 − 5 · 17, so reading this equation in Z17 gives 14⁻¹ = −6 = 11 in Z17.
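For readers who wish to experiment, the procedure above is easy to program. The following Python sketch (the function name inv_mod_p is ours, not the text's) implements the extended form of the euclidean algorithm:

    # Sketch: inverses in Z_p by the extended euclidean algorithm.
    def inv_mod_p(a, p):
        # Maintain the invariant old_x * a = old_r (mod p) throughout the loop.
        old_r, r = a % p, p
        old_x, x = 1, 0
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_x, x = x, old_x - q * x
        if old_r != 1:
            raise ValueError("a has no inverse modulo p")
        return old_x % p

    print(inv_mod_p(14, 17))  # 11, as in the example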
As mentioned above, nearly everything we have done with matrices over the field of real numbers can
be done in the same way for matrices with entries from Z p . We illustrate this with one example. Again
the reader is referred to books on abstract algebra.
Example 8.8.3
Determine if the matrix A = [1 4; 6 5] over Z7 is invertible and, if so, find its inverse.

Solution. The determinant is det A = 1 · 5 − 4 · 6 = 5 − 24 = −19 = 2 in Z7, which is nonzero, so A is invertible. Since 2⁻¹ = 4 in Z7 (because 2 · 4 = 8 = 1), the usual 2 × 2 inversion formula gives

A⁻¹ = 2⁻¹ [5 −4; −6 1] = 4 [5 3; 1 1] = [6 5; 4 4] in Z7

as is readily verified.
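If a computer is at hand, small matrices over Zp can be inverted the same way. Here is a minimal Python sketch for the 2 × 2 case using the adjugate formula (the helper name inv2_mod_p is ours):

    # Sketch: A^{-1} = (det A)^{-1} [d -b; -c a] over Z_p.
    def inv2_mod_p(a, b, c, d, p):
        det = (a * d - b * c) % p
        det_inv = pow(det, -1, p)  # modular inverse; raises ValueError if det = 0 in Z_p
        return [[(d * det_inv) % p, (-b * det_inv) % p],
                [(-c * det_inv) % p, (a * det_inv) % p]]

    print(inv2_mod_p(1, 4, 6, 5, 7))  # [[6, 5], [4, 4]], as found above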
While we shall not use them, there are finite fields other than Zp for the various primes p. Surprisingly, for every prime p and every integer n ≥ 1, there exists a field with exactly pⁿ elements, and this field is unique.19 It is called the Galois field of order pⁿ, and is denoted GF(pⁿ).
19 See, for example, W. K. Nicholson, Introduction to Abstract Algebra, 4th ed., (New York: Wiley, 2012).
Coding theory is concerned with the transmission of information over a channel that is affected by noise.
The noise causes errors, so the aim of the theory is to find ways to detect such errors and correct at least
some of them. General coding theory originated with the work of Claude Shannon (1916–2001) who
showed that information can be transmitted at near optimal rates with arbitrarily small chance of error.
Let F denote a finite field and, if n ≥ 1, let Fⁿ denote the set of all n-tuples of entries from F, with the usual componentwise addition and scalar multiplication. In this context, the rows in Fⁿ are
called words (or n-words) and, as the name implies, will be written as [a b c d] = abcd. The individual
components of a word are called its digits. A nonempty subset C of F n is called a code (or an n-code),
and the elements in C are called code words. If F = Z2 , these are called binary codes.
If a code word w is transmitted and an error occurs, the resulting word v is decoded as the code word
“closest” to v in F n . To make sense of what “closest” means, we need a distance function on F n analogous
to that in Rn (see Theorem 5.3.3). The usual definition in Rn does not work in this situation. For example,
if w = 1111 in (Z2)⁴ then the square of the distance of w from 0 is 1² + 1² + 1² + 1² = 4 = 0 in Z2, even though w ≠ 0.
However there is a satisfactory notion of distance in F n due to Richard Hamming (1915–1998). Given
a word w = a1 a2 · · · an in F n , we first define the Hamming weight wt(w) to be the number of nonzero
digits in w:
wt(w) = wt(a1a2···an) = |{i | ai ≠ 0}|
Clearly, 0 ≤ wt(w) ≤ n for every word w in Fⁿ. Given another word v = b1b2···bn in Fⁿ, the Hamming distance d(v, w) between v and w is defined by

d(v, w) = wt(v − w)
In other words, d(v, w) is the number of places at which the digits of v and w differ. The next result
justifies using the term distance for this function d.
Theorem 8.8.2
Let u, v, and w denote words in F n . Then:
1. d(v, w) ≥ 0.

2. d(v, w) = 0 if and only if v = w.

3. d(v, w) = d(w, v).

4. d(v, w) ≤ d(v, u) + d(u, w).
Proof. (1) and (3) are clear, and (2) follows because wt(v) = 0 if and only if v = 0. To prove (4), write x = v − u and y = u − w. Then (4) reads wt(x + y) ≤ wt(x) + wt(y). If x = a1a2···an and y = b1b2···bn, this follows because ai + bi ≠ 0 implies that either ai ≠ 0 or bi ≠ 0.
Given a word w in F n and a real number r > 0, define the ball Br (w) of radius r (or simply the r-ball)
about w as follows:
Br (w) = {x ∈ F n | d(w, x) ≤ r}
Using this we can describe one of the most useful decoding methods.
Nearest Neighbour Decoding

Let C be an n-code. If a word w is received, decode w as the code word in C closest to w (if there is a tie, choose arbitrarily).
Using this method, we can describe how to construct a code C that can detect (or correct) t errors.
Suppose a code word c is transmitted and a word w is received with s errors where 1 ≤ s ≤ t. Then s is
the number of places at which the c- and w-digits differ, that is, s = d(c, w). Hence Bt (c) consists of all
possible received words where at most t errors have occurred.
Assume first that C has the property that no code word lies in the t-ball of another code word. Because w is in Bt(c) and w ≠ c, this means that w is not a code word and the error has been detected. If we strengthen the assumption on C to require that the t-balls about code words are pairwise disjoint, then w belongs to a unique ball (the one about c), and so w will be correctly decoded as c.
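The weight, the distance, and nearest neighbour decoding are all straightforward to compute. The Python sketch below uses a made-up binary 5-code purely for illustration; none of it comes from the text:

    # Sketch: Hamming weight, Hamming distance, nearest neighbour decoding.
    def wt(w):
        return sum(1 for digit in w if digit != "0")

    def dist(v, w):
        return sum(1 for a, b in zip(v, w) if a != b)

    def nearest(received, code):
        # decode as the code word closest to the received word
        return min(code, key=lambda c: dist(received, c))

    C = ["00000", "11111"]        # a hypothetical binary repetition code
    print(nearest("10110", C))    # "11111": two errors are corrected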
To describe when this happens, let C be an n-code. The minimum distance d of C is defined to be the
smallest distance between two distinct code words in C; that is,
d = min {d(v, w) | v and w in C; v 6= w}
Theorem 8.8.3
Let C be an n-code with minimum distance d . Assume that nearest neighbour decoding is used.
Then:
1. If t < d, then C can detect t errors.

2. If 2t < d, then C can correct t errors.
Proof.
1. Let c be a code word in C. If w ∈ Bt (c), then d(w, c) ≤ t < d by hypothesis. Thus the t-ball Bt (c)
contains no other code word, so C can detect t errors by the preceding discussion.
2. If 2t < d, it suffices (again by the preceding discussion) to show that the t-balls about distinct code words are pairwise disjoint. But if c ≠ c′ are code words in C and w is in Bt(c′) ∩ Bt(c), then Theorem 8.8.2 gives

d(c, c′) ≤ d(c, w) + d(w, c′) ≤ t + t = 2t < d

by hypothesis, contradicting the minimality of d.
20 We say that C detects (corrects) t errors if C can detect (or correct) t or fewer errors.
Example 8.8.4
If F = Z3 = {0, 1, 2}, the 6-code {111111, 111222, 222111} has minimum distance 3 and so can
detect 2 errors and correct 1 error.
Let c be any word in Fⁿ. A word w satisfies d(w, c) = r if and only if w and c differ in exactly r digits. If |F| = q, there are exactly (n choose r)(q − 1)^r such words, where (n choose r) denotes the binomial coefficient. Indeed, choose the r places where they differ in (n choose r) ways, and then fill those places in w in (q − 1)^r ways. It follows that the number of words in the t-ball about c is

|Bt(c)| = (n choose 0) + (n choose 1)(q − 1) + (n choose 2)(q − 1)² + ··· + (n choose t)(q − 1)^t
This leads to a useful bound on the size of error-correcting codes.
Theorem 8.8.4: Hamming Bound

Let C be an n-code over a field F with |F| = q. If C can correct t errors, then

|C| ≤ qⁿ / [(n choose 0) + (n choose 1)(q − 1) + ··· + (n choose t)(q − 1)^t]
Proof. Write k = (n choose 0) + (n choose 1)(q − 1) + ··· + (n choose t)(q − 1)^t. The t-balls centred at distinct code words each contain k words, and
there are |C| of them. Moreover they are pairwise disjoint because the code corrects t errors (see the
discussion preceding Theorem 8.8.3). Hence they contain k · |C| distinct words, and so k · |C| ≤ |F n | = qn ,
proving the theorem.
A code is called perfect if there is equality in the Hamming bound; equivalently, if every word in Fⁿ lies in exactly one t-ball about a code word. For example, if F = Z2, n = 3, and t = 1, then q = 2 and (3 choose 0) + (3 choose 1) = 4, so the Hamming bound is 2³/4 = 2. The 3-code C = {000, 111} has minimum distance 3 and so can correct 1 error by Theorem 8.8.3. Hence C is perfect.
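The counting in the Hamming bound is easily checked by machine; this Python fragment (ours, not the text's) reproduces the computation just made:

    # Sketch: the Hamming bound q^n / sum of C(n,i)(q-1)^i.
    from math import comb

    def ball_size(n, t, q=2):
        return sum(comb(n, i) * (q - 1) ** i for i in range(t + 1))

    n, t, q = 3, 1, 2
    print(q ** n / ball_size(n, t, q))  # 2.0: met exactly by C = {000, 111}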
Linear Codes
Up to this point we have been regarding any nonempty subset of the F-vector space F n as a code. However
many important codes are actually subspaces. A subspace C ⊆ F n of dimension k ≥ 1 over F is called an
(n, k)-linear code, or simply an (n, k)-code. We do not regard the zero subspace (that is, k = 0) as a code.
Example 8.8.5
If F = Z2 and n ≥ 2, the n-parity-check code is constructed as follows: An extra digit is added to
each word in F n−1 to make the number of 1s in the resulting word even (we say such words have
even parity). The resulting (n, n − 1)-code is linear because the sum of two words of even parity
again has even parity.
Many of the properties of general codes take a simpler form for linear codes. The following result gives
a much easier way to find the minimal distance of a linear code, and sharpens the results in Theorem 8.8.3.
Theorem 8.8.5
Let C be an (n, k)-code with minimum distance d over a finite field F , and use nearest neighbour
decoding.
1. d is the smallest of the weights wt(w) of the nonzero code words w in C.

2. C can detect t ≥ 1 errors if and only if t < d.

3. C can correct t ≥ 1 errors if and only if 2t < d.
Proof.
1. If v ≠ w are code words, then v − w is a nonzero code word because C is linear, and d(v, w) = wt(v − w). Conversely, every nonzero code word w satisfies wt(w) = d(w, 0) with both w and 0 in C. Hence the minimum distance d equals the minimum weight of a nonzero code word.
2. Assume that C can detect t errors. Given w ≠ 0 in C, the t-ball Bt(w) about w contains no other code word (see the discussion preceding Theorem 8.8.3). In particular, it does not contain the code word 0, so t < d(w, 0) = wt(w). Hence t < d by (1). The converse is part of Theorem 8.8.3.
Example 8.8.6
If F = Z2, then

C = {0000000, 0101010, 1010101, 1110000, 1111111, 1011010, 0100101, 0001111}

is a (7, 3)-code; in fact C = span{0101010, 1010101, 1110000}. The minimum distance for C is 3, the minimum weight of a nonzero word in C.
Matrix Generators
Given a linear n-code C over a finite field F, the way encoding works in practice is as follows. A message
stream is blocked off into segments of length k ≤ n called messages. Each message u in F k is encoded as a
code word, the code word is transmitted, the receiver decodes the received word as the nearest code word,
and then re-creates the original message. A fast and convenient method is needed to encode the incoming
messages, to decode the received word after transmission (with or without error), and finally to retrieve
messages from code words. All this can be achieved for any linear code using matrix multiplication.
Let G denote a k × n matrix over a finite field F, and encode each message u in F k as the word uG in
F n using matrix multiplication (thinking of words as rows). This amounts to saying that the set of code
words is the subspace C = {uG | u in F k } of F n . This subspace need not have dimension k for every
k × n matrix G. But, if {e1, e2, ..., ek} is the standard basis of Fᵏ, then eiG is row i of G for each i, and
{e1 G, e2 G, . . . , ek G} spans C. Hence dim C = k if and only if the rows of G are independent in F n , and
these matrices turn out to be exactly the ones we need. For reference, we state their main properties in
Lemma 8.8.1 below (see Theorem 5.4.4).
Lemma 8.8.1
The following are equivalent for a k × n matrix G over a finite field F :
1. rank G = k.
2. The rows of G are linearly independent in Fⁿ.

3. GK = Ik for some n × k matrix K over F.
Note that Theorem 5.4.4 asserts that, over the real field R, the properties in Lemma 8.8.1 hold if and only if GGᵀ is invertible. But this need not be true in general. For example, if F = Z2 and

G = [1 0 1 0; 0 1 0 1]

then GGᵀ = 0. The reason is that the dot product w · w can be zero for w in Fⁿ even if w ≠ 0. However, even though GGᵀ is not invertible, we do have GK = I2 for some 4 × 2 matrix K over F as Lemma 8.8.1 asserts (in fact, K = [1 0 0 0; 0 1 0 0]ᵀ is one such matrix).
Let C ⊆ Fⁿ be an (n, k)-code over a finite field F. If {w1, ..., wk} is a basis of C, let G be the k × n matrix whose rows are w1, w2, ..., wk. Let {e1, ..., ek} be the standard basis of Fᵏ regarded as rows. Then wi = eiG for each i, so C = span{w1, ..., wk} = span{e1G, ..., ekG}. It follows (verify) that

C = {uG | u in Fᵏ}

Because of this, the k × n matrix G is called a generator of the code C, and G has rank k by Lemma 8.8.1 because its rows wi are independent.
In fact, every linear code C in Fⁿ has a generator of a simple, convenient form. If G is a generator matrix for C, let R be the reduced row-echelon form of G. We claim that C is also generated by R. Since G → R by row operations, Theorem 2.5.1 shows that these same row operations [G Ik] → [R W], performed on [G Ik], produce an invertible k × k matrix W such that R = WG. Then C = {uR | u in Fᵏ}. [In fact, if u is in Fᵏ, then uG = u1R where u1 = uW⁻¹ is in Fᵏ, and uR = u2G where u2 = uW is in Fᵏ.] Thus R is a generator of C, so we may assume that G is in reduced row-echelon form.
In that case, G has no row of zeros (since rank G = k) and so contains all the columns of Ik. Hence a series of column interchanges will carry G to the block form G″ = [Ik A] for some k × (n − k) matrix A. Hence the code C″ = {uG″ | u in Fᵏ} is essentially the same as C; the code words in C″ are obtained from those in C by a series of column interchanges. Hence if C is a linear (n, k)-code, we may (and shall) assume that the generator matrix G has the form

G = [Ik A]  for some k × (n − k) matrix A
Such a matrix is called a standard generator, or a systematic generator, for the code C. In this case,
if u is a message word in F k , the first k digits of the encoded word uG are just the first k digits of u, so
retrieval of u from uG is very simple indeed. The last n − k digits of uG are called parity digits.
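Encoding with a standard generator G = [Ik A] amounts to appending the parity digits uA to the message u. Here is a small Python sketch over Z2 (the block A below is a made-up illustration, not one from the text):

    # Sketch: encode u as uG = [u | uA] over Z_2 for G = [I_k A].
    A = [[1, 1, 0],
         [0, 1, 1]]                       # a hypothetical k x (n-k) block, k = 2

    def encode(u, A):
        parity = [sum(u[i] * A[i][j] for i in range(len(u))) % 2
                  for j in range(len(A[0]))]
        return u + parity

    print(encode([1, 0], A))   # [1, 0, 1, 1, 0]
    print(encode([1, 1], A))   # [1, 1, 1, 0, 1]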
Parity-Check Matrices
Theorem 8.8.6

Let F be a finite field, let G be a k × n matrix of rank k, let H be an (n − k) × n matrix of rank n − k, and let C = {uG | u in Fᵏ} and D = {vH | v in F^(n−k)} be the codes they generate. Then the following conditions are equivalent:

1. GHᵀ = 0.

2. HGᵀ = 0.

3. C = {w in Fⁿ | wHᵀ = 0}.

4. D = {w in Fⁿ | wGᵀ = 0}.
Proof. First, (1) ⇔ (2) holds because HGᵀ and GHᵀ are transposes of each other.
(1) ⇒ (3) Consider the linear transformation T : Fⁿ → F^(n−k) defined by T(w) = wHᵀ for all w in Fⁿ. To prove (3) we must show that C = ker T. We have C ⊆ ker T by (1) because T(uG) = uGHᵀ = 0 for all u in Fᵏ. Since dim C = rank G = k, it is enough (by Theorem 6.4.2) to show dim(ker T) = k. However the dimension theorem (Theorem 7.2.4) shows that dim(ker T) = n − dim(im T), so it is enough to show that dim(im T) = n − k. But if R1, ..., Rn are the rows of Hᵀ, then block multiplication gives
wHᵀ = w1R1 + w2R2 + ··· + wnRn  for each w = w1w2···wn in Fⁿ

so im T is the row space of Hᵀ. Hence dim(im T) = rank Hᵀ = rank H = n − k, proving (3). The proofs of the remaining implications are analogous.

In view of condition (3), such a matrix H is called a parity-check matrix for the code C. If G = [Ik A] is a standard generator for C, where A is k × (n − k), then the (n − k) × n matrix

H = [−Aᵀ In−k]
is a parity-check matrix for C. Indeed, rank H = n − k because the rows of H are independent (due to the
presence of In−k ), and
GHᵀ = [Ik A][−A; In−k] = −A + A = 0
by block multiplication. Hence H is a parity-check matrix for C and we have C = {w in Fⁿ | wHᵀ = 0}. Since wHᵀ and Hwᵀ are transposes of each other, this shows that C can be characterized as follows:

C = {w in Fⁿ | Hwᵀ = 0}
by Theorem 8.8.6.
This is useful in decoding. The reason is that decoding is done as follows: If a code word c is transmitted and v is received, then z = v − c is called the error. Since Hcᵀ = 0, we have Hzᵀ = Hvᵀ and this word

s = Hzᵀ = Hvᵀ

is called the syndrome. The receiver knows v and s = Hvᵀ, and wants to recover c. Since c = v − z, it is enough to find z. But the possibilities for z are the solutions of the linear system

Hzᵀ = s
where s is known. Now recall that Theorem 2.2.3 shows that these solutions have the form z = x + z0, where z0 is any fixed solution and x is any solution of the homogeneous system Hxᵀ = 0, that is (by the characterization above), x is any word in C. In other words, the errors z are the elements of the set

C + z0 = {c + z0 | c in C}

The set C + z0 is called a coset of C. Let |F| = q. Since |C + z0| = |C| = qᵏ, the search for z is reduced from qⁿ possibilities in Fⁿ to qᵏ possibilities in C + z0. This is called syndrome decoding, and various
methods for improving efficiency and accuracy have been devised. The reader is referred to books on
coding for more details.21
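As a small illustration (again in Python, continuing the hypothetical (5, 2)-code sketched earlier), the syndrome of a received word is a single matrix-vector product over Z2:

    # Sketch: syndrome s = H v^T (mod 2) for the parity-check matrix H = [-A^T I_{n-k}].
    H = [[1, 0, 1, 0, 0],
         [1, 1, 0, 1, 0],
         [0, 1, 0, 0, 1]]

    def syndrome(H, v):
        return [sum(h * x for h, x in zip(row, v)) % 2 for row in H]

    print(syndrome(H, [1, 0, 1, 1, 0]))  # [0, 0, 0]: a code word
    print(syndrome(H, [1, 0, 0, 1, 0]))  # [1, 0, 0]: an error is detected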
Orthogonal Codes
Let F be a finite field. Given two words v = a1 a2 · · · an and w = b1 b2 · · · bn in F n , the dot product v · w is
defined (as in Rn ) by
v · w = a1 b1 + a2 b2 + · · · + an bn
Note that v · w is an element of F, and it can be computed as a matrix product: v · w = vwᵀ.
If C ⊆ F n is an (n, k)-code, the orthogonal complement C⊥ is defined as in Rn :
C⊥ = {v in F n | v · c = 0 for all c in C}
This is easily seen to be a subspace of Fⁿ, and it turns out to be an (n, n − k)-code. This follows when F = R because we showed (in the projection theorem) that n = dim U⊥ + dim U for any subspace U of Rⁿ. However the proofs break down for a finite field F because the dot product in Fⁿ has the property that w · w = 0 can happen even if w ≠ 0. Nonetheless, the result remains valid.
Theorem 8.8.7

Let C be an (n, k)-code over a finite field F, let G = [Ik A] be a standard generator for C where A is k × (n − k), and write H = [−Aᵀ In−k] for the parity-check matrix. Then:

1. H is a generator of C⊥.

2. (C⊥)⊥ = C.
Proof. As in Theorem 8.8.6, let D = {vH | v in F^(n−k)} denote the code generated by H. Observe first that, for all w in Fⁿ and all u in Fᵏ, we have

w · (uG) = w(uG)ᵀ = wGᵀuᵀ = (wGᵀ) · u

Since C = {uG | u in Fᵏ}, this shows that w is in C⊥ if and only if (wGᵀ) · u = 0 for all u in Fᵏ; if and only if22 wGᵀ = 0; if and only if w is in D (by Theorem 8.8.6). Thus C⊥ = D, and a similar argument shows that D⊥ = C.
For example, let F = Z2, k = 4, and n = 7, and take

G = [I4 A] = [1 0 0 0 1 1 1; 0 1 0 0 1 1 0; 0 0 1 0 1 0 1; 0 0 0 1 0 1 1]
H = [Aᵀ I3] = [1 1 1 0 1 0 0; 1 1 0 1 0 1 0; 1 0 1 1 0 0 1]

(here −A = A because F = Z2).
The code C = {uG | u in F⁴} generated by G has dimension k = 4, and is called the Hamming (7, 4)-code.
The vectors in C are listed in the first table below. The dual code generated by H has dimension n − k = 3
and is listed in the second table.
C:

u       uG
0000    0000000
0001    0001011
0010    0010101
0011    0011110
0100    0100110
0101    0101101
0110    0110011
0111    0111000
1000    1000111
1001    1001100
1010    1010010
1011    1011001
1100    1100001
1101    1101010
1110    1110100
1111    1111111

C⊥:

v      vH
000    0000000
001    1011001
010    1101010
011    0110011
100    1110100
101    0101101
110    0011110
111    1000111
Clearly each nonzero code word in C has weight at least 3, so C has minimum distance d = 3. Hence C
can detect two errors and correct one error by Theorem 8.8.5. The dual code has minimum distance 4 and
so can detect 3 errors and correct 1 error.
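Both minimum distances can be confirmed by brute force, since the codes are small. A Python sketch (ours) for the code C:

    # Sketch: generate the Hamming (7,4)-code from G and find its minimum weight.
    from itertools import product

    G = [[1, 0, 0, 0, 1, 1, 1],
         [0, 1, 0, 0, 1, 1, 0],
         [0, 0, 1, 0, 1, 0, 1],
         [0, 0, 0, 1, 0, 1, 1]]

    def word(u):
        return tuple(sum(ui * gi for ui, gi in zip(u, col)) % 2
                     for col in zip(*G))

    words = [word(u) for u in product([0, 1], repeat=4)]
    print(min(sum(w) for w in words if any(w)))  # 3 = minimum distance of C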
Exercise 8.8.1 Find all a in Z10 such that:

a. a² = a.
b. a has an inverse (and find the inverse).
c. aᵏ = 0 for some k ≥ 1.
d. a = 2ᵏ for some k ≥ 1.
e. a = b² for some b in Z10.

Exercise 8.8.2

a. Show that if 3a = 0 in Z10, then necessarily a = 0 in Z10.
b. Show that 2a = 0 in Z10 if and only if a = 0 or a = 5.

Exercise 8.8.3 Find the inverse of:

a. 8 in Z13;  b. 11 in Z19.

Exercise 8.8.4 If ab = 0 in a field F, show that either a = 0 or b = 0.

Exercise 8.8.5 Show that the entries of the last column of the multiplication table of Zn are 0, n − 1, n − 2, ..., 2, 1 in that order.

Exercise 8.8.6 In each case show that the matrix A is invertible over the given field, and find A⁻¹.

a. A = [1 4; 2 1] over Z5.
b. A = [5 6; 4 3] over Z7.

Exercise 8.8.7 Consider the linear system 3x + y + 4z = 3, 4x + 3y + z = 1. In each case solve the system by reducing the augmented matrix to reduced row-echelon form over the given field:

Exercise 8.8.8 Let K be a vector space over Z2 with basis {1, t}, so K = {a + bt | a, b in Z2}. It is known that K becomes a field of four elements if we define t² = 1 + t. Write down the multiplication table of K.

Exercise 8.8.9 Let K be a vector space over Z3 with basis {1, t}, so K = {a + bt | a, b in Z3}. It is known that K becomes a field of nine elements if we define t² = −1 in Z3. In each case find the inverse of the element x of K:

a. x = 1 + 2t  b. x = 1 + t

Exercise 8.8.10 How many errors can be detected or corrected by each of the following binary linear codes?

a. C = {0000000, 0011110, 0100111, 0111001, 1001011, 1010101, 1101100, 1110010}
b. C = {0000000000, 0010011111, 0101100111, 0111111000, 1001110001, 1011101110, 1100010110, 1110001001}

Exercise 8.8.11

a. If a binary linear (n, 2)-code corrects one error, show that n ≥ 5. [Hint: Hamming bound.]
b. Find a (5, 2)-code that corrects one error.

Exercise 8.8.12

a. If a binary linear (n, 3)-code corrects two errors, show that n ≥ 9. [Hint: Hamming bound.]
b. If G = [1 0 0 1 1 1 1 0 0 0; 0 1 0 1 1 0 0 1 1 0; 0 0 1 1 0 1 0 1 1 1], show that the binary (10, 3)-code generated by G corrects two errors. [It can be shown that no binary (9, 3)-code corrects two errors.]

Exercise 8.8.13

b. Find a binary linear (5, 2)-code that can correct one error.

Exercise 8.8.15 Let c be a word in Fⁿ. Show that Bt(c) = c + Bt(0), where we write c + Bt(0) = {c + v | v in Bt(0)}.
8.9 An Application to Quadratic Forms

An expression like x1² + x2² + x3² − 2x1x3 + x2x3 is called a quadratic form in the variables x1, x2, and x3.
In this section we show that new variables y1 , y2 , and y3 can always be found so that the quadratic form,
when expressed in terms of the new variables, has no cross terms y1 y2 , y1 y3 , or y2 y3 . Moreover, we do this
for forms involving any finite number of variables using orthogonal diagonalization. This has far-reaching
applications; quadratic forms arise in such diverse areas as statistics, physics, the theory of functions of
several variables, number theory, and geometry.
Example 8.9.1

Write q = x1² + 3x3² + 2x1x2 − x1x3 in the form q(x) = xᵀAx, where A is a symmetric 3 × 3 matrix.

Solution. The diagonal entries of A are the coefficients 1, 0, and 3 of x1², x2², and x3², and each cross-term coefficient is split in half between the two corresponding symmetric entries:

A = [1 1 −1/2; 1 0 0; −1/2 0 3]
We shall assume from now on that all quadratic forms are given by
q(x) = xT Ax
where A is symmetric. Given such a form, the problem is to find new variables y1 , y2 , . . . , yn , related to
x1 , x2 , . . . , xn , with the property that when q is expressed in terms of y1 , y2 , . . . , yn , there are no cross
terms. If we write
y = (y1 , y2 , . . . , yn )T
this amounts to asking that q = yT Dy where D is diagonal. It turns out that this can always be accomplished
and, not surprisingly, that D is the matrix obtained when the symmetric matrix A is orthogonally diagonal-
ized. In fact, as Theorem 8.2.2 shows, a matrix P can be found that is orthogonal (that is, P⁻¹ = Pᵀ) and diagonalizes A:

PᵀAP = D = diag(λ1, λ2, ..., λn)
The diagonal entries λ1 , λ2 , . . . , λn are the (not necessarily distinct) eigenvalues of A, repeated according
to their multiplicities in cA (x), and the columns of P are corresponding (orthonormal) eigenvectors of A.
As A is symmetric, the λi are real by Theorem 5.5.7.
Now define new variables y by the equations
x = Py, equivalently y = Pᵀx

Substitution gives

q = xᵀAx = (Py)ᵀA(Py) = yᵀ(PᵀAP)y = yᵀDy = λ1y1² + λ2y2² + ··· + λnyn²

so q has no cross terms in the new variables y1, y2, ..., yn; this is the content of Theorem 8.9.1.
Let q = xT Ax be a quadratic form where A is a symmetric matrix and let λ1 , . . . , λn be the (real) eigen-
values of A repeated according to their multiplicities. A corresponding set {f1 , . . . , fn } of orthonormal
eigenvectors for A is called a set of principal axes for the quadratic form q. (The reason for the name
will become clear later.) The orthogonal matrix P in Theorem 8.9.1 is given as P = [f1 f2 ··· fn], so the variables x and y are related by

x = Py = y1f1 + y2f2 + ··· + ynfn
Thus the new variables yi are the coefficients when x is expanded in terms of the orthonormal basis
{f1 , . . . , fn } of Rn . In particular, the coefficients yi are given by yi = x · fi by the expansion theorem
(Theorem 5.3.6). Hence q itself is easily computed from the eigenvalues λi and the principal axes fi :
q = q(x) = λ1(x · f1)² + ··· + λn(x · fn)²
Example 8.9.2
Find new variables y1, y2, y3, and y4 such that

q = 3(x1² + x2² + x3² + x4²) + 2x1x2 − 10x1x3 + 10x1x4 + 10x2x3 − 10x2x4 + 2x3x4

has diagonal form.

Solution. Here q = xᵀAx where A = [3 1 −5 5; 1 3 5 −5; −5 5 3 1; 5 −5 1 3]. The eigenvalues are λ1 = 12, λ2 = −8, λ3 = λ4 = 4, with corresponding orthonormal eigenvectors f1 = ½(1, −1, −1, 1)ᵀ, f2 = ½(1, −1, 1, −1)ᵀ, f3 = ½(1, 1, 1, 1)ᵀ, and f4 = ½(1, 1, −1, −1)ᵀ.
The matrix

P = [f1 f2 f3 f4] = ½ [1 1 1 1; −1 −1 1 1; −1 1 1 −1; 1 −1 1 −1]
is thus orthogonal, and P⁻¹AP = PᵀAP is diagonal. Hence the new variables y and the old variables x are related by y = Pᵀx and x = Py. Explicitly,

y1 = ½(x1 − x2 − x3 + x4)    x1 = ½(y1 + y2 + y3 + y4)
y2 = ½(x1 − x2 + x3 − x4)    x2 = ½(−y1 − y2 + y3 + y4)
y3 = ½(x1 + x2 + x3 + x4)    x3 = ½(−y1 + y2 + y3 − y4)
y4 = ½(x1 + x2 − x3 − x4)    x4 = ½(y1 − y2 + y3 − y4)

In these variables, q = 12y1² − 8y2² + 4y3² + 4y4².
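The eigenvalues and the absence of cross terms can also be verified numerically. A short numpy sketch (ours, not part of the text):

    # Sketch: diagonalize the quadratic form of Example 8.9.2 numerically.
    import numpy as np

    A = np.array([[ 3.0,  1.0, -5.0,  5.0],
                  [ 1.0,  3.0,  5.0, -5.0],
                  [-5.0,  5.0,  3.0,  1.0],
                  [ 5.0, -5.0,  1.0,  3.0]])
    lam, P = np.linalg.eigh(A)          # eigh is designed for symmetric matrices
    print(lam)                          # [-8.  4.  4. 12.]
    print(np.round(P.T @ A @ P, 10))    # diagonal: no cross terms in y = P^T x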
It is instructive to look at the case of quadratic forms in two variables x1 and x2 . Then the principal
axes can always be found by rotating the x1 and x2 axes counterclockwise about the origin through an
angle θ. This rotation is a linear transformation Rθ : R² → R², and it is shown in Theorem 2.6.4 that Rθ has matrix P = [cos θ −sin θ; sin θ cos θ]. If {e1, e2} denotes the standard basis of R², the rotation produces a new basis {f1, f2} given by

f1 = Rθ(e1) = (cos θ, sin θ)ᵀ  and  f2 = Rθ(e2) = (−sin θ, cos θ)ᵀ    (8.7)
[Figure: the point p plotted in both the original axes (coordinates x1, x2) and the rotated axes (coordinates y1, y2).]

Given a point p = (x1, x2)ᵀ = x1e1 + x2e2 in the original system, let y1 and y2 be the coordinates of p in the new system (see the diagram). That is,

(x1, x2)ᵀ = p = y1f1 + y2f2 = [cos θ −sin θ; sin θ cos θ](y1, y2)ᵀ    (8.8)

Writing x = (x1, x2)ᵀ and y = (y1, y2)ᵀ, this reads x = Py so, since P is orthogonal, this is the change of variables formula for the rotation as in Theorem 8.9.1.
If r ≠ 0 ≠ s, the graph of the equation rx1² + sx2² = 1 is called an ellipse if rs > 0 and a hyperbola if rs < 0. More generally, given a quadratic form

q = ax1² + bx1x2 + cx2²

the graph of the equation q = 1 is called a conic. We can now completely describe this graph. There are two special cases which we leave to the reader. So we assume that a ≠ 0 and c ≠ 0. In this case, the description depends on the quantity b² − 4ac, called the discriminant of the quadratic form q. So we also assume that b² − 4ac ≠ 0. But then the next theorem asserts that there exists a rotation of the plane about the origin which transforms the equation ax1² + bx1x2 + cx2² = 1 into either an ellipse or a hyperbola, and the theorem also provides a simple way to decide which conic it is.
Theorem 8.9.2
Consider the quadratic form q = ax1² + bx1x2 + cx2² where a, c, and b² − 4ac are all nonzero.
1. There is a counterclockwise rotation of the coordinate axes about the origin such that, in the
new coordinate system, q has no cross term.
2. The graph of the equation q = 1 is an ellipse if b² < 4ac and a hyperbola if b² > 4ac.
Proof. If b = 0, q already has no cross term and (1) and (2) are clear. So assume b ≠ 0. The matrix A = [a b/2; b/2 c] of q has characteristic polynomial cA(x) = x² − (a + c)x − ¼(b² − 4ac). If we write d = √(b² + (a − c)²) for convenience, then the quadratic formula gives the eigenvalues

λ1 = ½[a + c − d] and λ2 = ½[a + c + d]

with corresponding principal axes

f1 = (1/√(b² + (a − c − d)²)) (a − c − d, b)ᵀ  and  f2 = (1/√(b² + (a − c − d)²)) (−b, a − c − d)ᵀ

as the reader can verify. These agree with equation (8.7) above if θ is an angle such that

cos θ = (a − c − d)/√(b² + (a − c − d)²)  and  sin θ = b/√(b² + (a − c − d)²)

Then P = [f1 f2] = [cos θ −sin θ; sin θ cos θ] diagonalizes A and equation (8.8) becomes the formula x = Py in Theorem 8.9.1. This proves (1).
Finally, A is similar to [λ1 0; 0 λ2], so λ1λ2 = det A = ¼(4ac − b²). Hence the graph of λ1y1² + λ2y2² = 1 is an ellipse if b² < 4ac and a hyperbola if b² > 4ac. This proves (2).
Example 8.9.3
Consider the equation x² + xy + y² = 1. Find a rotation so that the equation has no cross term.

Solution. Here a = c = 1 and b = 1, so d = √(b² + (a − c)²) = 1, and the eigenvalues are λ1 = ½(a + c − d) = ½ and λ2 = ½(a + c + d) = 3/2. The formulas in the proof of Theorem 8.9.2 give cos θ = −1/√2 and sin θ = 1/√2, so θ = 3π/4. After rotating through this angle the equation becomes ½y1² + (3/2)y2² = 1, an ellipse.
The determinant of any orthogonal matrix P is either 1 or −1 (because PPᵀ = I). The orthogonal matrices [cos θ −sin θ; sin θ cos θ] arising from rotations all have determinant 1. More generally, given any quadratic form q = xᵀAx, the orthogonal matrix P such that PᵀAP is diagonal can always be chosen so that det P = 1 by interchanging two eigenvalues (and hence the corresponding columns of P). It is shown
in Theorem 10.4.4 that orthogonal 2 × 2 matrices with determinant 1 correspond to rotations. Similarly,
it can be shown that orthogonal 3 × 3 matrices with determinant 1 correspond to rotations about a line
through the origin. This extends Theorem 8.9.2: Every quadratic form in two or three variables can be
diagonalized by a rotation of the coordinate system.
Congruence
Theorem 8.9.3
If q(x) = xT Ax is a quadratic form given by a symmetric matrix A, then A is uniquely determined
by q.
Proof. Let q(x) = xᵀBx for all x where Bᵀ = B. If C = A − B, then Cᵀ = C and xᵀCx = 0 for all x. We must show that C = 0. Given y in Rⁿ,

0 = (x + y)ᵀC(x + y) = xᵀCx + xᵀCy + yᵀCx + yᵀCy = xᵀCy + yᵀCx

But yᵀCx = (xᵀCy)ᵀ = xᵀCy (it is 1 × 1). Hence xᵀCy = 0 for all x and y in Rⁿ. If ej is column j of In, then the (i, j)-entry of C is eiᵀCej = 0. Thus C = 0.
Hence we can speak of the symmetric matrix of a quadratic form.
On the other hand, a quadratic form q in variables xi can be written in several ways as a linear combination of squares of new variables, even if the new variables are required to be linear combinations of the xi. For example, if q = 2x1² − 4x1x2 + x2² then

q = 2(x1 − x2)² − x2²  and  q = (x2 − 2x1)² − 2x1²
The question arises: How are these changes of variables related, and what properties do they share? To
investigate this, we need a new concept.
Let a quadratic form q = q(x) = xT Ax be given in terms of variables x = (x1 , x2 , . . . , xn )T . If the new
variables y = (y1 , y2 , . . . , yn )T are to be linear combinations of the xi , then y = Ax for some n × n matrix
A. Moreover, since we want to be able to solve for the xi in terms of the yi , we ask that the matrix A be
invertible. Hence suppose U is an invertible matrix and that the new variables y are given by
y = U⁻¹x, equivalently x = Uy

Then q = xᵀAx = (Uy)ᵀA(Uy) = yᵀ(UᵀAU)y. That is, q has matrix UᵀAU with respect to the new variables y. Hence, to study changes of variables in quadratic forms, we study the following relationship on matrices: Two n × n matrices A and B are called congruent, written A ∼ᶜ B, if B = UᵀAU for some invertible matrix U. Here are some properties of congruence:

1. A ∼ᶜ A for all A.

2. If A ∼ᶜ B, then B ∼ᶜ A.
3. If A ∼ᶜ B and B ∼ᶜ C, then A ∼ᶜ C.

4. If A ∼ᶜ B, then A is symmetric if and only if B is symmetric.

5. If A ∼ᶜ B, then rank A = rank B.
Example 8.9.4

The symmetric matrices A = [1 0; 0 1] and B = [1 0; 0 −1] have the same rank but are not congruent. Indeed, if A ∼ᶜ B, an invertible matrix U exists such that B = UᵀAU = UᵀU. But then −1 = det B = (det U)², a contradiction.
The key distinction between A and B in Example 8.9.4 is that A has two positive eigenvalues (counting
multiplicities) whereas B has only one.
If k ≤ r ≤ n, let Dn(k, r) denote the n × n diagonal matrix whose main diagonal consists of k ones, followed by r − k minus ones, followed by n − r zeros. Then we seek new variables z such that

q = z1² + ··· + zk² − zk+1² − ··· − zr²

that is, such that q has matrix Dn(k, r). To achieve this, let A be the symmetric matrix of q with eigenvalues λ1, ..., λn, let k be the number of positive eigenvalues (the index) and r = rank A, and let P0 be an orthogonal matrix whose columns are orthonormal eigenvectors of A arranged so that the positive eigenvalues come first, then the negative ones, then the zeros; thus P0ᵀAP0 = D is diagonal with the eigenvalues in this order. If D0 is the diagonal matrix whose ith diagonal entry is 1/√|λi| when λi ≠ 0 and 1 when λi = 0, then D0ᵀDD0 = Dn(k, r), so if new variables z are given by x = (P0D0)z, we obtain

q = xᵀAx = zᵀ(D0ᵀP0ᵀAP0D0)z = zᵀDn(k, r)z = z1² + ··· + zk² − zk+1² − ··· − zr²

as required. Note that the change-of-variables matrix P0D0 from z to x has orthogonal columns (in fact, scalar multiples of the columns of P0).
Example 8.9.5

Completely diagonalize the quadratic form q in Example 8.9.2 and find the index and rank.

Solution. In the notation of Example 8.9.2, the eigenvalues of the matrix A of q are 12, −8, 4, 4; so the index is 3 and the rank is 4. Moreover, the corresponding orthogonal eigenvectors are f1, f2, f3 (see Example 8.9.2), and f4. Hence P0 = [f1 f3 f4 f2] is orthogonal and

P0ᵀAP0 = diag(12, 4, 4, −8)

Taking D0 = diag(1/√12, 1/2, 1/2, 1/√8) and x = (P0D0)z gives q = z1² + z2² + z3² − z4².
Theorem 8.9.5
Let A and B be symmetric n × n matrices, and let 0 ≤ k ≤ r ≤ n.
1. A has index k and rank r if and only if A ∼ᶜ Dn(k, r).

2. A ∼ᶜ B if and only if they have the same rank and index.
Proof.
1. If A has index k and rank r, take U = P0D0 where P0 and D0 are as described prior to Example 8.9.5. Then UᵀAU = Dn(k, r). The converse is true because Dn(k, r) has index k and rank r (using Theorem 8.9.4).
2. If A and B both have index k and rank r, then A ∼ᶜ Dn(k, r) ∼ᶜ B by (1). The converse was given earlier.
496 Orthogonality
Hence the subspace W2 = span{fk′+1, ..., fr} satisfies q(x) < 0 for all x ≠ 0 in W2. Note dim W2 = r − k′. It follows that W1 and W2 have only the zero vector in common. Hence, if B1 and B2 are bases of W1 and W2, respectively, then (Exercise 6.3.33) B1 ∪ B2 is an independent set of (n − r + k) + (r − k′) = n + k − k′ vectors in Rⁿ. This implies that k ≤ k′, and a similar argument shows k′ ≤ k.
Exercise 8.9.1 In each case, find a symmetric matrix A such that q = xᵀBx takes the form q = xᵀAx.

a. B = [1 1; 0 1]   b. B = [1 1; −1 2]
c. B = [1 0 1; 1 1 0; 0 1 1]   d. B = [1 2 −1; 4 1 0; 5 −2 3]

Exercise 8.9.2 In each case, find a change of variables that will diagonalize the quadratic form q. Determine the index and rank of q.

d. q = 7x1² + x2² + x3² + 8x1x2 + 8x1x3 − 16x2x3
e. q = 2(x1² + x2² + x3² − x1x2 + x1x3 − x2x3)
f. q = 5x1² + 8x2² + 5x3² − 4(x1x2 + 2x1x3 + x2x3)
g. q = x1² − x3² − 4x1x2 + 4x2x3
h. q = x1² + x3² − 2x1x2 + 2x2x3

Exercise 8.9.3 For each of the following, write the equation in terms of new variables so that it is in standard position, and identify the curve.

Exercise 8.9.4 Consider the equation ax² + bxy + cy² = d, where b ≠ 0. Introduce new variables x1 and y1 by rotating the axes counterclockwise through an angle θ. Show that the resulting equation has no x1y1-term if θ is given by

cos 2θ = (a − c)/√(b² + (a − c)²)

Exercise 8.9.5

a. Show that such an equation can be put in the form

λ1y1² + ··· + λry_r² + k1y1 + ··· + knyn = c

b. Put x1² + 3x2² + 3x3² + 4x1x2 − 4x1x3 + 5x1 − 6x3 = 7 in this form and find variables y1, y2, y3 as in (a).

Exercise 8.9.8 Given a symmetric matrix A, define qA(x) = xᵀAx. Show that B ∼ᶜ A if and only if B is symmetric and there is an invertible matrix U such that qB(x) = qA(Ux) for all x. [Hint: Theorem 8.9.3.]

Exercise 8.9.9 Let q(x) = xᵀAx be a quadratic form where A = Aᵀ.

a. If β is a bilinear form, show that an n × n matrix A exists such that β(x, y) = xᵀAy for all x, y.
b. Show that A is uniquely determined by β.
8.10 An Application to Constrained Optimization

Example 8.10.1
A politician proposes to spend x1 dollars annually on health care and x2 dollars annually on education. She is constrained in her spending by various budget pressures, and one model of this is that the expenditures x1 and x2 should satisfy a constraint like

5x1² + 3x2² ≤ 15

Since xi ≥ 0 for each i, the feasible region is the shaded area shown in the diagram. [Figure: the region 5x1² + 3x2² ≤ 15 in the first quadrant, together with the level curves x1x2 = c for c = 1 and c = 2.] Any choice of feasible point (x1, x2) in this region will satisfy the budget constraints. However, these choices have different effects on voters, and the politician wants to choose x = (x1, x2) to maximize some measure q = q(x1, x2) of voter satisfaction. Thus the assumption is that, for any value of c, all points on the graph of q(x1, x2) = c have the same appeal to voters. Hence the goal is to find the largest value of c for which the graph of q(x1, x2) = c contains a feasible point.

The choice of the function q depends upon many factors; we will show how to solve the problem for any quadratic form q (even with more than two variables). In the diagram the function q is given by

q(x1, x2) = x1x2

and the graphs of q(x1, x2) = c are shown for c = 1 and c = 2. As c increases the graph of q(x1, x2) = c moves up and to the right. From this it is clear that there will be a solution for some value of c between 1 and 2 (in fact the largest value is c = ½√15 = 1.94 to two decimal places).
The constraint 5x1² + 3x2² ≤ 15 in Example 8.10.1 can be put in a standard form. If we divide through by 15, it becomes (x1/√3)² + (x2/√5)² ≤ 1. This suggests that we introduce new variables y = (y1, y2) where y1 = x1/√3 and y2 = x2/√5. Then the constraint becomes ‖y‖² ≤ 1, equivalently ‖y‖ ≤ 1. In terms of these new variables, the objective function is q = √15 y1y2, and we want to maximize this subject to ‖y‖ ≤ 1. When this is done, the maximizing values of x1 and x2 are obtained from x1 = √3 y1 and x2 = √5 y2.
Hence, for constraints like that in Example 8.10.1, there is no real loss in generality in assuming that
the constraint takes the form kxk ≤ 1. In this case the principal axes theorem solves the problem. Recall
that a vector in Rn of length 1 is called a unit vector.
Theorem 8.10.1
Consider the quadratic form q = q(x) = xT Ax where A is an n × n symmetric matrix, and let λ1
and λn denote the largest and smallest eigenvalues of A, respectively. Then:
1. The maximum value of q(x) subject to ‖x‖ ≤ 1 is λ1, attained when x = f1 is a unit eigenvector corresponding to λ1.

2. The minimum value of q(x) subject to ‖x‖ ≤ 1 is λn, attained when x = fn is a unit eigenvector corresponding to λn.
Proof. Since A is symmetric, let the (real) eigenvalues λi of A be ordered as to size as follows:
λ1 ≥ λ2 ≥ · · · ≥ λn
By the principal axes theorem, let P be an orthogonal matrix such that PᵀAP = D = diag(λ1, λ2, ..., λn). Define y = Pᵀx, equivalently x = Py, and note ‖y‖ = ‖x‖ because ‖y‖² = yᵀy = xᵀ(PPᵀ)x = xᵀx = ‖x‖².
If we write y = (y1, y2, ..., yn)ᵀ, then

q(x) = yᵀDy = λ1y1² + λ2y2² + ··· + λnyn² ≤ λ1(y1² + ··· + yn²) = λ1‖y‖² ≤ λ1

because ‖y‖ = ‖x‖ ≤ 1. This shows that q(x) cannot exceed λ1 when ‖x‖ ≤ 1. To see that this maximum is actually achieved, let f1 be a unit eigenvector corresponding to λ1. Then

q(f1) = f1ᵀAf1 = f1ᵀ(λ1f1) = λ1‖f1‖² = λ1

Hence λ1 is the maximum value of q(x) when ‖x‖ ≤ 1, proving (1). The proof of (2) is analogous.
The set of all vectors x in Rⁿ such that ‖x‖ ≤ 1 is called the unit ball. If n = 2, it is often called the unit disk and consists of the unit circle and its interior; if n = 3, it is the unit sphere and its interior. It is worth noting that the maximum value of a quadratic form q(x) as x ranges throughout the unit ball is (by Theorem 8.10.1) actually attained for a unit vector x on the boundary of the unit ball.
Theorem 8.10.1 is important for applications involving vibrations in areas as diverse as aerodynamics and particle physics, and the maximum and minimum values in the theorem are often found using advanced calculus to optimize the quadratic form on the unit ball. The algebraic approach using the principal axes theorem gives a geometrical interpretation of the optimal values because they are eigenvalues.
Example 8.10.2
Maximize and minimize the form q(x) = 3x1² + 14x1x2 + 3x2² subject to ‖x‖ ≤ 1.

Solution. The matrix of q is A = [3 7; 7 3], with eigenvalues λ1 = 10 and λ2 = −4, and corresponding unit eigenvectors f1 = (1/√2)(1, 1) and f2 = (1/√2)(1, −1). Hence, among all unit vectors x in R², q(x) takes its maximal value 10 at x = f1, and the minimum value of q(x) is −4 when x = f2.
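Numerically, Theorem 8.10.1 says the extreme values of q on the unit ball are the extreme eigenvalues, attained at the corresponding unit eigenvectors. A numpy check for this example (illustrative only):

    # Sketch: extreme values of q(x) = x^T A x on the unit ball.
    import numpy as np

    A = np.array([[3.0, 7.0],
                  [7.0, 3.0]])
    lam, F = np.linalg.eigh(A)            # eigenvalues ascending; columns are eigenvectors
    f_min, f_max = F[:, 0], F[:, -1]
    print(lam[-1], f_max @ A @ f_max)     # 10.0 10.0: the maximum, at f1
    print(lam[0],  f_min @ A @ f_min)     # -4.0 -4.0: the minimum, at f2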
As noted above, the objective function in a constrained optimization problem need not be a quadratic
form. We conclude with an example where the objective function is linear, and the feasible region is
determined by linear constraints.
Example 8.10.3
A manufacturer makes x1 units of product 1, and x2 units
x2 of product 2, at a profit of $70 and $50 per unit respectively,
1200x1 + 1300x2 = 8700
and wants to choose x1 and x2 to maximize the total profit
p(x1 , x2 ) = 70x1 + 50x2 . However x1 and x2 are not arbitrary; for
p=
p = 430
p = 500
The feasible region in the plane satisfying these constraints (and x1 ≥ 0, x2 ≥ 0) is shaded in the
diagram. If the profit equation 70x1 + 50x2 = p is plotted for various values of p, the resulting
lines are parallel, with p increasing with distance from the origin. Hence the best choice occurs for
the line 70x1 + 50x2 = 430 that touches the shaded region at the point (4, 3). So the profit p has a
maximum of p = 430 for x1 = 4 units and x2 = 3 units.
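Small linear programs like this one can be checked with scipy. In the sketch below, the boundary line 1200x1 + 1300x2 = 8700 is taken from the diagram, but the second constraint 2x1 + x2 ≤ 11 is our own assumption, chosen only so that the optimum (4, 3) quoted in the example is reproduced; the text's actual constraint list is not shown here.

    # Hedged sketch: maximize 70x1 + 50x2 by minimizing its negative.
    from scipy.optimize import linprog

    res = linprog(c=[-70, -50],
                  A_ub=[[1200, 1300],    # from the diagram
                        [2, 1]],         # hypothetical second constraint
                  b_ub=[8700, 11],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)               # approximately [4. 3.] 430.0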
Example 8.10.3 is a simple case of the general linear programming problem23 which arises in eco-
nomic, management, network, and scheduling applications. Here the objective function is a linear com-
bination q = a1 x1 + a2 x2 + · · · + an xn of the variables, and the feasible region consists of the vectors
x = (x1 , x2 , . . . , xn )T in Rn which satisfy a set of linear inequalities of the form b1 x1 +b2 x2 +· · ·+bn xn ≤ b.
There is a good method (an extension of the gaussian algorithm) called the simplex algorithm for finding
the maximum and minimum values of q when x ranges over such a feasible set. As Example 8.10.3 sug-
gests, the optimal values turn out to be vertices of the feasible set. In particular, they are on the boundary
of the feasible region, as is the case in Theorem 8.10.1.
8.11 An Application to Statistical Principal Component Analysis

Linear algebra is important in multivariate analysis in statistics, and we conclude with a very short look at one application of diagonalization in this area. A main feature of probability and statistics is the idea of a random variable X, that is, a real-valued function which takes its values according to a probability law (called its distribution). Random variables occur in a wide variety of contexts; examples include the
law (called its distribution). Random variables occur in a wide variety of contexts; examples include the
number of meteors falling per square kilometre in a given region, the price of a share of a stock, or the
duration of a long distance telephone call from a certain city.
The values of a random variable X are distributed about a central number µ , called the mean of X .
The mean can be calculated from the distribution as the expectation E(X ) = µ of the random variable X .
23 More information is available in “Linear Programming and Extensions” by N. Wu and R. Coppins, McGraw-Hill, 1981.
Functions of a random variable are again random variables. In particular, (X − µ)² is a random variable, and the variance of the random variable X, denoted var(X), is defined to be the number

var(X) = E[(X − µ)²]

Similarly, if Y is a random variable with mean ν, the covariance of X and Y is defined by

cov(X, Y) = E[(X − µ)(Y − ν)]

Clearly, cov(X, X) = var(X). If cov(X, Y) = 0 then X and Y have little relationship to each other and
are said to be uncorrelated.25
Multivariate statistical analysis deals with a family X1 , X2 , . . . , Xn of random variables with means
µi = E(Xi ) and variances σi2 = var (Xi ) for each i. Let σi j = cov (Xi , X j ) denote the covariance of Xi and
X j . Then the covariance matrix of the random variables X1 , X2 , . . . , Xn is defined to be the n × n matrix
Σ = [σi j ]
whose (i, j)-entry is σi j . The matrix Σ is clearly symmetric; in fact it can be shown that Σ is positive
semidefinite in the sense that λ ≥ 0 for every eigenvalue λ of Σ. (In reality, Σ is positive definite in most
cases of interest.) So suppose that the eigenvalues of Σ are λ1 ≥ λ2 ≥ · · · ≥ λn ≥ 0. The principal axes
theorem (Theorem 8.2.2) shows that an orthogonal matrix P exists such that
PT ΣP = diag (λ1 , λ2 , . . . , λn )
If we write X = (X1 , X2, . . . , Xn ), the procedure for diagonalizing a quadratic form gives new variables
Y = (Y1 , Y2 , . . . , Yn ) defined by
Y = PT X
These new random variables Y1 , Y2 , . . . , Yn are called the principal components of the original random
variables Xi, and are linear combinations of the Xi. Furthermore, it can be shown that

var(Yi) = λi for each i, and cov(Yi, Yj) = 0 whenever i ≠ j
The sum of the variances of a set of random variables is called the total variance of the variables, and
determining the source of this total variance is one of the benefits of principal component analysis. The
fact that the matrices Σ and diag(λ1, λ2, ..., λn) are similar means that they have the same trace, that is,

var(X1) + var(X2) + ··· + var(Xn) = λ1 + λ2 + ··· + λn = var(Y1) + var(Y2) + ··· + var(Yn)
This means that the principal components Yi have the same total variance as the original random variables
Xi . Moreover, the fact that λ1 ≥ λ2 ≥ · · · ≥ λn ≥ 0 means that most of this variance resides in the first few
Yi . In practice, statisticians find that studying these first few Yi (and ignoring the rest) gives an accurate
analysis of the total system variability. This results in substantial data reduction since often only a few Yi
suffice for all practical purposes. Furthermore, these Yi are easily obtained as linear combinations of the
Xi . Finally, the analysis of the principal components often reveals relationships among the Xi that were not
previously suspected, and so results in interpretations that would not otherwise have been made.
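In practice the whole computation is a covariance matrix followed by an eigendecomposition. A numpy sketch on random data (entirely illustrative):

    # Sketch of principal component analysis via the covariance matrix.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3)) @ np.array([[2.0, 0.0, 0.0],
                                              [1.0, 1.0, 0.0],
                                              [0.0, 0.5, 0.2]])
    Sigma = np.cov(X, rowvar=False)                # sample covariance matrix
    lam, P = np.linalg.eigh(Sigma)
    lam, P = lam[::-1], P[:, ::-1]                 # order so lambda_1 >= lambda_2 >= ...
    Y = X @ P                                      # rows of Y hold the principal components
    print(lam)                                     # variances of the Y_i
    print(np.isclose(lam.sum(), np.trace(Sigma)))  # True: total variance preserved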
9. Change of Basis
If A is an m × n matrix, the corresponding matrix transformation TA : Rⁿ → Rᵐ is defined by

TA(x) = Ax  for all columns x in Rⁿ
It was shown in Theorem 2.6.2 that every linear transformation T : Rn → Rm is a matrix transformation;
that is, T = TA for some m × n matrix A. Furthermore, the matrix A is uniquely determined by T . In fact,
A is given in terms of its columns by

A = [T(e1) T(e2) ··· T(en)]
9.1 The Matrix of a Linear Transformation

Let T : V → W be a linear transformation where dim V = n and dim W = m. The aim in this section is to
describe the action of T as multiplication by an m × n matrix A. The idea is to convert a vector v in V into
a column in Rn , multiply that column by A to get a column in Rm , and convert this column back to get
T (v) in W .
Converting vectors to columns is a simple matter, but one small change is needed. Up to now the order
of the vectors in a basis has been of no importance. However, in this section, we shall speak of an ordered
basis {b1 , b2 , . . . , bn }, which is just a basis where the order in which the vectors are listed is taken into
account. Hence {b2 , b1 , b3 } is a different ordered basis from {b1 , b2 , b3 }.
If B = {b1 , b2 , . . . , bn } is an ordered basis in a vector space V , and if
v = v1 b1 + v2 b2 + · · · + vn bn , vi ∈ R
is a vector in V , then the (uniquely determined) numbers v1 , v2 , . . . , vn are called the coordinates of v
with respect to the basis B.
In this case, the column CB(v) = (v1, v2, ..., vn)ᵀ in Rⁿ is called the coordinate vector of v with respect to B, and the resulting map CB : V → Rⁿ is called the coordinate transformation.
The reason for writing CB (v) as a column instead of a row will become clear later. Note that CB (bi ) = ei
is column i of In .
Example 9.1.1
The coordinate vector for v = (2, 1, 3) with respect to the ordered basis B = {(1, 1, 0), (1, 0, 1), (0, 1, 1)} of R³ is CB(v) = (0, 2, 1)ᵀ because

v = (2, 1, 3) = 0(1, 1, 0) + 2(1, 0, 1) + 1(0, 1, 1)
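Computing CB(v) is just solving a linear system whose coefficient columns are the basis vectors. A numpy sketch for this example (ours):

    # Sketch: CB(v) solves [b1 b2 b3] * c = v.
    import numpy as np

    B = np.array([[1, 1, 0],
                  [1, 0, 1],
                  [0, 1, 1]], dtype=float).T   # columns are the basis vectors
    v = np.array([2.0, 1.0, 3.0])
    print(np.linalg.solve(B, v))               # [0. 2. 1.] = CB(v)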
Theorem 9.1.1
If V has dimension n and B = {b1 , b2 , . . . , bn } is any ordered basis of V , the coordinate
transformation CB : V → Rⁿ is an isomorphism. In fact, CB⁻¹ : Rⁿ → V is given by

CB⁻¹(v1, v2, ..., vn)ᵀ = v1b1 + v2b2 + ··· + vnbn  for all (v1, v2, ..., vn)ᵀ in Rⁿ
Proof. The verification that CB is linear is Exercise 9.1.13. If T : Rn → V is the map denoted CB−1 in
the theorem, one verifies (Exercise 9.1.13) that TCB = 1V and CB T = 1Rn . Note that CB (b j ) is column
j of the identity matrix, so CB carries the basis B to the standard basis of Rn , proving again that it is an
isomorphism (Theorem 7.3.1).
Now let T : V → W be any linear transformation where dim V = n and dim W = m, and let B = {b1, b2, ..., bn} and D be ordered bases of V and W, respectively. Then CB : V → Rⁿ and CD : W → Rᵐ are isomorphisms, and we have the situation shown in the diagram, where A is an m × n matrix (to be determined). [Diagram: T : V → W across the top, CB and CD down the sides, TA : Rⁿ → Rᵐ across the bottom.] In fact, the composite CDTCB⁻¹ : Rⁿ → Rᵐ is a linear transformation, so Theorem 2.6.2 shows that a unique m × n matrix A exists such that

CDTCB⁻¹ = TA,  equivalently  CDT = TACB
This requirement completely determines A. Indeed, the fact that CB (b j ) is column j of the identity matrix
gives
column j of A = ACB (b j ) = CD [T (b j )]
for all j. Hence, in terms of its columns,

A = [CD[T(b1)]  CD[T(b2)]  ···  CD[T(bn)]]

This matrix A is called the matrix of T corresponding to the ordered bases B and D, and is written MDB(T).
Theorem 9.1.2
Let T : V → W be a linear transformation where dim V = n and dim W = m, and let
B = {b1 , . . . , bn } and D be ordered bases of V and W , respectively. Then the matrix MDB (T ) just
given is the unique m × n matrix A that satisfies
CD T = TACB
The fact that T = CD⁻¹TACB means that the action of T on a vector v in V can be performed by first taking coordinates (that is, applying CB to v), then multiplying by A (applying TA), and finally converting the resulting m-tuple back to a vector in W (applying CD⁻¹).
Example 9.1.2
Define T : P2 → R2 by T (a + bx + cx2 ) = (a + c, b − a − c) for all polynomials a + bx + cx2 . If
B = {b1 , b2 , b3 } and D = {d1 , d2 } where
The next example shows how to determine the action of a transformation from its matrix.
Example 9.1.3
Suppose T : M22(R) → R³ is linear with matrix

MDB(T) = [1 −1 0 0; 0 1 −1 0; 0 0 1 −1]

where B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} and D = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Compute T(v) where v = [a b; c d].

Solution. The idea is to compute CD[T(v)] first, and then obtain T(v). We have

CD[T(v)] = MDB(T)CB(v) = [1 −1 0 0; 0 1 −1 0; 0 0 1 −1](a, b, c, d)ᵀ = (a − b, b − c, c − d)ᵀ

Hence T(v) = (a − b, b − c, c − d).
Example 9.1.4
Let A be an m × n matrix, and let TA : Rⁿ → Rᵐ be the matrix transformation induced by A; that is, TA(x) = Ax for all columns x in Rⁿ. If B and D are the standard bases of Rⁿ and Rᵐ respectively, find MDB(TA).
Solution. Write B = {e1 , . . . , en }. Because D is the standard basis of Rm , it is easy to verify that
CD (y) = y for all columns y in Rm . Hence
MDB(TA) = [TA(e1) TA(e2) ··· TA(en)] = [Ae1 Ae2 ··· Aen] = A
Example 9.1.5
Let V and W have ordered bases B and D, respectively. Let dim V = n.
1. The identity transformation 1V : V → V has matrix MBB(1V) = In.

2. The zero transformation 0 : V → W has matrix MDB(0) = 0.
The first result in Example 9.1.5 is false if the two bases of V are not equal. In fact, if B is the standard
basis of Rn , then the basis D of Rn can be chosen so that MDB (1Rn ) turns out to be any invertible matrix
we wish (Exercise 9.1.14).
The next two theorems show that composition of linear transformations is compatible with multiplica-
tion of the corresponding matrices.
Theorem 9.1.3
Let V →(T) W →(S) U be linear transformations and let B, D, and E be finite ordered bases of V, W, and U, respectively. Then

MEB(ST) = MED(S) · MDB(T)

Proof. For every v in V, Theorem 9.1.2 gives

MED(S)MDB(T)CB(v) = MED(S)CD[T(v)] = CE[ST(v)] = MEB(ST)CB(v)
If B = {e1 , . . . , en }, then CB (e j ) is column j of In . Hence taking v = e j shows that MED (S)MDB(T ) and
MEB (ST ) have equal jth columns. The theorem follows.
Theorem 9.1.4
Let T : V → W be a linear transformation, where dim V = dim W = n. The following are
equivalent.
1. T is an isomorphism.
2. MDB(T) is invertible for all ordered bases B of V and D of W.

3. MDB(T) is invertible for some pair of ordered bases B and D.

In this case, [MDB(T)]⁻¹ = MBD(T⁻¹).
Proof. (1) ⇒ (2). We have V →(T) W →(T⁻¹) V, so Theorem 9.1.3 and Example 9.1.5 give

MBD(T⁻¹)MDB(T) = MBB(T⁻¹T) = MBB(1V) = In

Similarly, MDB(T)MBD(T⁻¹) = In, proving (2) (and the last statement in the theorem).
(2) ⇒ (3). This is clear.
(3) ⇒ (1). Suppose that MDB(T) is invertible for some bases B and D and, for convenience, write A = MDB(T). Then we have CDT = TACB by Theorem 9.1.2, so

T = (CD)⁻¹TACB

by Theorem 9.1.1 where (CD)⁻¹ and CB are isomorphisms. Hence (1) follows if we can demonstrate that TA : Rⁿ → Rⁿ is also an isomorphism. But A is invertible by (3) and one verifies that TATA⁻¹ = 1Rⁿ = TA⁻¹TA. So TA is indeed invertible (and (TA)⁻¹ = TA⁻¹).
In Section 7.2 we defined the rank of a linear transformation T : V → W by rank T = dim ( im T ).
Moreover, if A is any m × n matrix and TA : Rn → Rm is the matrix transformation, we showed that
rank (TA ) = rank A. So it may not be surprising that rank T equals the rank of any matrix of T .
Theorem 9.1.5
Let T : V → W be a linear transformation where dim V = n and dim W = m. If B and D are any
ordered bases of V and W , then rank T = rank [MDB (T )].
Proof. Write A = MDB (T ) for convenience. The column space of A is U = {Ax | x in Rn }. This means
rank A = dim U and so, because rank T = dim ( im T ), it suffices to find an isomorphism S : im T → U .
Now every vector in im T has the form T (v), v in V . By Theorem 9.1.2, CD [T (v)] = ACB (v) lies in U . So
define S : im T → U by
S[T (v)] = CD [T (v)] for all vectors T (v) ∈ im T
The fact that CD is linear and one-to-one implies immediately that S is linear and one-to-one. To see that
S is onto, let Ax be any member of U , x in Rn . Then x = CB (v) for some v in V because CB is onto. Hence
Ax = ACB (v) = CD [T (v)] = S[T (v)], so S is onto. This means that S is an isomorphism.
Example 9.1.6
Define T : P2 → R3 by T (a + bx + cx2 ) = (a − 2b, 3c − 2a, 3c − 4b) for a, b, c ∈ R. Compute
rank T .
Solution. Since rank T = rank[MDB(T)] for any bases B ⊆ P2 and D ⊆ R³, we choose the most convenient ones: B = {1, x, x²} and D = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}. Then MDB(T) = [CD[T(1)] CD[T(x)] CD[T(x²)]] = A where

A = [1 −2 0; −2 0 3; 0 −4 3]

Since A → [1 −2 0; 0 −4 3; 0 −4 3] → [1 −2 0; 0 1 −3/4; 0 0 0]

has rank 2, we conclude that rank T = rank A = 2.
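The same computation is easy to automate: build MDB(T) column by column as CD[T(bj)] and take the rank. A numpy sketch for this example (ours):

    # Sketch: the matrix of T(a + bx + cx^2) = (a - 2b, 3c - 2a, 3c - 4b).
    import numpy as np

    def T(a, b, c):
        return np.array([a - 2*b, 3*c - 2*a, 3*c - 4*b], dtype=float)

    basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]       # B = {1, x, x^2}
    A = np.column_stack([T(*b) for b in basis])
    print(A)
    print(np.linalg.matrix_rank(A))                 # 2 = rank T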
We conclude with an example showing that the matrix of a linear transformation can be made very
simple by a careful choice of the two bases.
Example 9.1.7
Let T : V → W be a linear transformation where dim V = n and dim W = m. Choose an ordered
basis B = {b1 , . . . , br , br+1 , . . . , bn } of V in which {br+1 , . . . , bn } is a basis of ker T , possibly
empty. Then {T (b1 ), . . . , T (br )} is a basis of im T by Theorem 7.2.5, so extend it to an ordered
basis D = {T (b1 ), . . . , T (br ), fr+1 , . . . , fm } of W . Because T (br+1 ) = · · · = T (bn ) = 0, we have
MDB(T) = [CD[T(b1)] ··· CD[T(br)] CD[T(br+1)] ··· CD[T(bn)]] = [Ir 0; 0 0] in block form
Exercise 9.1.1 In each case, find the coordinate vector CB(v) of v with respect to the ordered basis B of V.

b. V = P2, v = ax² + bx + c, B = {x², x + 1, x + 2}
c. V = R³, v = (1, −1, 2), B = {(1, −1, 0), (1, 1, 1), (0, 1, 1)}
d. V = R³, v = (a, b, c), B = {(1, −1, 2), (1, 1, −1), (0, 0, 1)}

Exercise 9.1.2 Suppose T : P2 → R² is a linear transformation. If B = {1, x, x²} and D = {(1, 1), (0, 1)}, find the action of T given:

a. MDB(T) = [1 2 −1; −1 0 1]
b. MDB(T) = [2 1 3; −1 0 −2]

Exercise 9.1.3 In each case, find the matrix of the linear transformation T : V → W corresponding to the bases B and D of V and W, respectively.

a. T : M22 → R, T(A) = tr A; B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}, D = {1}
b. T : M22 → M22, T(A) = Aᵀ; B = D = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}
c. T : P2 → P3, T[p(x)] = xp(x); B = {1, x, x²} and D = {1, x, x², x³}
d. T : P2 → P2, T[p(x)] = p(x + 1); B = D = {1, x, x²}

Exercise 9.1.4 In each case, find the matrix of T : V → W corresponding to the bases B and D, respectively, and use it to compute CD[T(v)], and hence T(v).

a. T : R³ → R⁴, T(x, y, z) = (x + z, 2z, y − z, x + 2y); B and D standard; v = (1, −1, 3)
b. T : R² → R⁴, T(x, y) = (2x − y, 3x + 2y, 4y, x); B = {(1, 1), (1, 0)}, D standard; v = (a, b)
c. B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}; v = [a b; c d]

Exercise 9.1.5 In each case, verify Theorem 9.1.3. Use the standard basis in Rⁿ and {1, x, x²} in P2.

a. R³ →(T) R² →(S) R⁴; T(a, b, c) = (a + b, b − c), S(a, b) = (a, b − 2a, 3b, a + b)
b. R³ →(T) R⁴ →(S) R²; T(a, b, c) = (a + b, c + b, a + c, b − a), S(a, b, c, d) = (a + b, c − d)
c. P2 →(T) R³ →(S) P2; T(a + bx + cx²) = (a, b − c, c − a), S(a, b, c) = b + cx + (a − c)x²
d. R³ →(T) P2 →(S) R²; T(a, b, c) = (a − b) + (c − a)x + bx², S(a + bx + cx²) = (a − b, c)

Exercise 9.1.6 Verify Theorem 9.1.3 for M22 →(T) M22 →(S) P2 where T(A) = Aᵀ and S[a b; c d] = b + (a + d)x + cx². Use the bases B = D = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} and E = {1, x, x²}.

Exercise 9.1.7 In each case, find T⁻¹ and verify that [MDB(T)]⁻¹ = MBD(T⁻¹).

B = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]}, D = standard

Exercise 9.1.17 Show that T is an isomorphism by finding MBB(T) when B = {1, x, x², ..., xⁿ}.

Exercise 9.1.18 If k is any number, define Tk : M22 → M22 by Tk(A) = A + kAᵀ.

Exercise

a. Show that X⁰ is a subspace of L(V, W).
b. If X ⊆ X1, show that X1⁰ ⊆ X⁰.
c. If U and U1 are subspaces of V, show that (U + U1)⁰ = U⁰ ∩ U1⁰.

Exercise Show that MDB : L(V, W) → Mmn is an isomorphism of vector spaces. [Hint: Let B = {b1, ..., bn} and D = {d1, ..., dm}. Given A = [aij] in Mmn, show that A = MDB(T) where T : V → W is defined by T(bj) = a1jd1 + a2jd2 + ··· + amjdm for each j.]

Exercise 9.1.23 Define R : Mmn → L(Rⁿ, Rᵐ) by R(A) = TA for each m × n matrix A, where TA : Rⁿ → Rᵐ is given by TA(x) = Ax for all x in Rⁿ. Show that R is an isomorphism.

Exercise 9.1.24 Let V be any vector space (we do not assume it is finite dimensional). Given v in V, define Sv : R → V by Sv(r) = rv for all r in R.

b. Show that the map R : V → L(R, V) given by R(v) = Sv is an isomorphism. [Hint: To show that R is onto, if T lies in L(R, V), show that T = Sv where v = T(1).]

Exercise 9.1.27 If V is a vector space, the space V* = L(V, R) is called the dual of V. Given a basis B = {b1, b2, ..., bn} of V, let Ei : V → R for each i = 1, 2, ..., n be the linear transformation satisfying

Ei(bj) = 0 if i ≠ j, and Ei(bj) = 1 if i = j

(each Ei exists by Theorem 7.1.3). Prove the following:

b. v = E1(v)b1 + E2(v)b2 + ··· + En(v)bn for all v in V
c. T = T(b1)E1 + T(b2)E2 + ··· + T(bn)En for all T in V*
9.2 Operators and Similarity

While the study of linear transformations from one vector space to another is important, the central problem of linear algebra is to understand the structure of a linear transformation T : V → V from a space V to itself. Such transformations are called linear operators. If T : V → V is a linear operator where dim(V) = n, it is possible to choose bases B and D of V such that the matrix MDB(T) has a very simple form: MDB(T) = [Ir 0; 0 0] where r = rank T (see Example 9.1.7). Consequently, only the rank of T
is revealed by determining the simplest matrices MDB (T ) of T where the bases B and D can be chosen
arbitrarily. But if we insist that B = D and look for bases B such that MBB (T ) is as simple as possible, we
learn a great deal about the operator T . We begin this task in this section.
Theorem 9.2.1
Let T : V → V be an operator where dim V = n, and let B be an ordered basis of V .
Taking D = B in Section 9.1 gives the matrix MBB(T) of T with respect to B; we write it simply as MB(T).
For a fixed operator T on a vector space V , we are going to study how the matrix MB (T ) changes when
the basis B changes. This turns out to be closely related to how the coordinates CB (v) change for a vector
v in V. If B and D are two ordered bases of V, and if we take T = 1V in Theorem 9.1.2, we obtain

CD(v) = MDB(1V)CB(v)  for all v in V

The matrix PD←B = MDB(1V) is called the change matrix from the basis B to the basis D.
Theorem 9.2.2
Let B = {b1 , b2 , . . . , bn } and D denote ordered bases of a vector space V . Then the change matrix
PD←B is given in terms of its columns by
PD←B = CD (b1 ) CD (b2 ) · · · CD (bn ) (9.1)
1. PB←B = In
Proof. The formula 9.2 is derived above, and 9.1 is immediate from the definition of PD←B and the formula
for MDB (T ) in Theorem 9.1.2.
Example 9.2.1
In P2 find PD←B if B = {1, x, x2 } and D = {1, (1 − x), (1 − x)2 }. Then use this to express
p = p(x) = a + bx + cx2 as a polynomial in powers of (1 − x).
Solution. We have 1 = 1, x = 1 − (1 − x), and x^2 = 1 − 2(1 − x) + (1 − x)^2. Hence
PD←B = [CD(1) CD(x) CD(x^2)] = [1 1 1; 0 −1 −2; 0 0 1]
We have CB(p) = [a; b; c], so
CD(p) = PD←B CB(p) = [1 1 1; 0 −1 −2; 0 0 1][a; b; c] = [a + b + c; −b − 2c; c]
Hence p(x) = (a + b + c) − (b + 2c)(1 − x) + c(1 − x)^2.
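The arithmetic here is easy to check by machine. The following NumPy sketch (ours, not the text's) rebuilds PD←B from its columns and applies it to CB(p) for a sample polynomial:

    import numpy as np

    # Columns of P_{D<-B}: coordinates of 1, x, x^2 in D = {1, 1-x, (1-x)^2},
    # using 1 = 1, x = 1 - (1-x), x^2 = 1 - 2(1-x) + (1-x)^2.
    P = np.array([[1,  1,  1],
                  [0, -1, -2],
                  [0,  0,  1]])

    a, b, c = 2, 3, 5                 # sample p(x) = 2 + 3x + 5x^2
    CB = np.array([a, b, c])
    CD = P @ CB                       # coordinates of p in powers of (1 - x)
    print(CD)                         # [10 -13 5] = [a+b+c, -(b+2c), c]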
Now let B = {b1 , b2 , . . . , bn } and B0 be two ordered bases of a vector space V . An operator T : V → V
has different matrices MB [T ] and MB0 [T ] with respect to B and B0 . We can now determine how these
matrices are related. Theorem 9.2.2 asserts that CB0(v) = PCB(v) for all v in V, where P = PB0←B is the change matrix. Hence
MB0(T)PCB(v) = MB0(T)CB0(v) = CB0[T(v)] = PCB[T(v)] = PMB(T)CB(v)
This holds for all v in V. Because CB(bj) is the jth column of the identity matrix, taking v = bj for each j compares the two sides column by column, and it follows that
PMB(T) = MB0(T)P
Moreover P is invertible (in fact, P−1 = PB←B0 by Theorem 9.2.2), so this gives
MB (T ) = P−1 MB0 (T )P
This asserts that MB0 (T ) and MB (T ) are similar matrices, and proves Theorem 9.2.3.
1 This also follows from Taylor’s theorem (Corollary 6.5.3 of Theorem 6.5.1 with a = 1).
Example 9.2.2
Let T : R3 → R3 be defined by T (a, b, c) = (2a − b, b + c, c − 3a). If B0 denotes the standard
basis of R3 and B = {(1, 1, 0), (1, 0, 1), (0, 1, 0)}, find an invertible matrix P such that
P−1 MB0 (T )P = MB (T ).
Solution. We have
MB0(T) = [CB0(2, 0, −3) CB0(−1, 1, 0) CB0(0, 1, 1)] = [2 −1 0; 0 1 1; −3 0 1]
MB(T) = [CB(1, 1, −3) CB(2, 1, −2) CB(−1, 1, 0)] = [4 4 −1; −3 −2 0; −3 −3 2]
P = PB0←B = [CB0(1, 1, 0) CB0(1, 0, 1) CB0(0, 1, 0)] = [1 1 0; 1 0 1; 0 1 0]
The reader can verify that P−1 MB0 (T )P = MB (T ); equivalently that MB0 (T )P = PMB (T ).
A square matrix is diagonalizable if and only if it is similar to a diagonal matrix. Theorem 9.2.3 comes
into this as follows: Suppose an n × n matrix A = MB0 (T ) is the matrix of some operator T : V → V with
respect to an ordered basis B0 . If another ordered basis B of V can be found such that MB (T ) = D is
diagonal, then Theorem 9.2.3 shows how to find an invertible P such that P−1 AP = D. In other words, the
“algebraic” problem of finding P such that P−1 AP is diagonal comes down to the “geometric” problem
of finding a basis B such that MB (T ) is diagonal. This shift of emphasis is one of the most important
techniques in linear algebra.
Each n × n matrix A can be easily realized as the matrix of an operator. In fact, (Example 9.1.4),
ME (TA ) = A
where TA : Rn → Rn is the matrix operator given by TA (x) = Ax, and E is the standard basis of Rn . The
first part of the next theorem gives the converse of Theorem 9.2.3: Any pair of similar matrices can be
realized as the matrices of the same linear operator with respect to different bases. This is part 1 of the
following theorem.
Theorem 9.2.4
Let A be an n × n matrix and let E be the standard basis of Rn .
1. Let A′ be similar to A, say A′ = P−1 AP, and let B be the ordered basis of Rn consisting of the
columns of P in order. Then TA : Rn → Rn is linear and
ME (TA ) = A and MB (TA ) = A′
2. If B is any ordered basis of Rn , let P be the (invertible) matrix whose columns are the vectors
in B in order. Then
MB (TA ) = P−1 AP
Proof.
1. We have ME(TA) = A by Example 9.1.4. Write P = [b1 ··· bn] in terms of its columns, so B = {b1, ..., bn} is a basis of Rn. Since E is the standard basis,
PE←B = [CE(b1) ··· CE(bn)] = [b1 ··· bn] = P
Example 9.2.3
Given A = [10 6; −18 −11], P = [2 −1; −3 2], and D = [1 0; 0 −2], verify that P^(-1)AP = D and use this fact to find a basis B of R2 such that MB(TA) = D.
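The verification asked for is easy to sketch numerically. The following NumPy fragment (an illustration, not the text's solution) confirms P^(-1)AP = D and that the columns of P are eigenvectors of A, so they form the required basis B:

    import numpy as np

    A = np.array([[10, 6], [-18, -11]])
    P = np.array([[2, -1], [-3, 2]])
    D = np.array([[1, 0], [0, -2]])

    assert np.allclose(np.linalg.inv(P) @ A @ P, D)     # P^-1 A P = D

    # By Theorem 9.2.4, the columns of P form a basis B with M_B(T_A) = D;
    # equivalently each column is an eigenvector of A.
    for j, lam in enumerate(np.diag(D)):
        assert np.allclose(A @ P[:, j], lam * P[:, j])
    print("B = {(2, -3), (-1, 2)} and M_B(T_A) = D")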
Let A be an n × n matrix. As in Example 9.2.3, Theorem 9.2.4 provides a new way to find an invertible
matrix P such that P−1 AP is diagonal. The idea is to find a basis B = {b1 , b2 , . . . , bn } of Rn such that
MB(TA) = D is diagonal and take P = [b1 b2 ··· bn] to be the matrix with the bj as columns. Then,
by Theorem 9.2.4,
P−1 AP = MB (TA ) = D
As mentioned above, this converts the algebraic problem of diagonalizing A into the geometric problem of
finding the basis B. This new point of view is very powerful and will be explored in the next two sections.
Theorem 9.2.4 enables facts about matrices to be deduced from the corresponding properties of oper-
ators. Here is an example.
Example 9.2.4
Solution.
1. Let B = {b1 , . . . , br , br+1 , . . . , bn } be a basis of V chosen so that
ker T = span {br+1 , . . . , bn }. Then {T (b1 ), . . . , T (br )} is independent (Theorem 7.2.5), so
complete it to a basis {T (b1 ), . . . , T (br ), fr+1 , . . . , fn } of V .
518 Change of Basis
as required.
The reader will appreciate the power of these methods by trying to find U directly in part 2 of Example 9.2.4, even if A is 2 × 2.
A property of n × n matrices is called a similarity invariant if, whenever a given n × n matrix A has the property, every matrix similar to A also has the property. Theorem 5.5.1 shows that rank, determinant, trace, and characteristic polynomial are all similarity invariants.
To illustrate how such similarity invariants are related to linear operators, consider the case of rank.
If T : V → V is a linear operator, the matrices of T with respect to various bases of V all have the same
rank (being similar), so it is natural to regard the common rank of all these matrices as a property of T
itself and not of the particular matrix used to describe T . Hence the rank of T could be defined to be the
rank of A, where A is any matrix of T . This would be unambiguous because rank is a similarity invariant.
Of course, this is unnecessary in the case of rank because rank T was defined earlier to be the dimension
of im T , and this was proved to equal the rank of every matrix representing T (Theorem 9.1.5). This
definition of rank T is said to be intrinsic because it makes no reference to the matrices representing T .
However, the technique serves to identify an intrinsic property of T with every similarity invariant, and
some of these properties are not so easily defined directly.
In particular, if T : V → V is a linear operator on a finite dimensional space V , define the determinant
of T (denoted det T ) by
det T = det MB (T ), B any basis of V
This is independent of the choice of basis B because, if D is any other basis of V , the matrices MB (T ) and
MD (T ) are similar and so have the same determinant. In the same way, the trace of T (denoted tr T ) can
be defined by
tr T = tr MB (T ), B any basis of V
This is unambiguous for the same reason.
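Basis independence is easy to test numerically; the short NumPy sketch below (ours, not the text's) replaces the matrix A of an operator by P^(-1)AP for a random invertible P and compares determinants and traces:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(-3, 4, size=(3, 3)).astype(float)    # matrix of T in one basis

    # A change of basis replaces A by P^-1 A P (Theorem 9.2.3).
    P = rng.integers(-3, 4, size=(3, 3)).astype(float)
    while abs(np.linalg.det(P)) < 1e-8:                    # insist P is invertible
        P = rng.integers(-3, 4, size=(3, 3)).astype(float)
    B = np.linalg.inv(P) @ A @ P                           # matrix of T in the new basis

    print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True: det T is well defined
    print(np.isclose(np.trace(A), np.trace(B)))            # True: tr T is well defined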
Theorems about matrices can often be translated to theorems about linear operators. Here is an exam-
ple.
Example 9.2.5
Let S and T denote linear operators on the finite dimensional space V . Show that
Recall next that the characteristic polynomial of a matrix is another similarity invariant: If A and A′ are
similar matrices, then cA (x) = cA′ (x) (Theorem 5.5.1). As discussed above, the discovery of a similarity
invariant means the discovery of a property of linear operators. In this case, if T : V → V is a linear
operator on the finite dimensional space V, define the characteristic polynomial of T by
cT(x) = det[xI − MB(T)], B any basis of V
In other words, the characteristic polynomial of an operator T is the characteristic polynomial of any matrix representing T. This is unambiguous because any two such matrices are similar by Theorem 9.2.3.
Example 9.2.6
Compute the characteristic polynomial cT (x) of the operator T : P2 → P2 given by
T (a + bx + cx2 ) = (b + c) + (a + c)x + (a + b)x2.
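Assuming SymPy is available, one way to carry out this computation is to form MB(T) in the basis B = {1, x, x^2} and take its characteristic polynomial; the sketch below is ours, not the book's solution:

    import sympy as sp

    x = sp.symbols('x')
    # M_B(T) for T(a+bx+cx^2) = (b+c) + (a+c)x + (a+b)x^2 in B = {1, x, x^2}.
    M = sp.Matrix([[0, 1, 1],
                   [1, 0, 1],
                   [1, 1, 0]])
    print(sp.factor(M.charpoly(x).as_expr()))    # (x - 2)*(x + 1)**2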
In Section 4.4 we computed the matrix of various projections, reflections, and rotations in R3 . How-
ever, the methods available then were not adequate to find the matrix of a rotation about a line through the
origin. We conclude this section with an example of how Theorem 9.2.3 can be used to compute such a
matrix.
Example 9.2.7
Let L be the line in R3 through the origin with (unit) direction vector d = (1/3)[2 1 2]^T.
Compute the matrix of the rotation about L through an angle θ measured counterclockwise when viewed in the direction of d.
using the expansion theorem (Theorem 5.3.6). Since P−1 = PT (P is orthogonal), the matrix of R
with respect to E is
As a check one verifies that this is the identity matrix when θ = 0, as it should.
Note that in Example 9.2.7 not much motivation was given for the choices of the (orthonormal) vectors f and g in the basis B0, which is the key to the solution. However, if we begin with any basis containing d, the Gram-Schmidt algorithm will produce an orthogonal basis containing d, and the other two vectors will automatically lie in L⊥ = K.
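This recipe is straightforward to implement. The sketch below (our illustration, with an arbitrary sample angle) builds an orthonormal basis {d, f, g} by Gram-Schmidt plus a cross product, writes the rotation in that basis, and changes back to the standard basis:

    import numpy as np

    theta = 0.7                                  # sample angle
    d = np.array([2.0, 1.0, 2.0]) / 3.0          # unit direction vector of L

    # Gram-Schmidt one vector against d, then a cross product, to get an
    # orthonormal basis {d, f, g}; right-handedness matches the
    # counterclockwise convention.
    t = np.array([1.0, 0.0, 0.0])
    f = t - (t @ d) * d
    f = f / np.linalg.norm(f)
    g = np.cross(d, f)

    P = np.column_stack([d, f, g])               # orthogonal, so P^-1 = P^T
    R0 = np.array([[1, 0,              0             ],
                   [0, np.cos(theta), -np.sin(theta)],
                   [0, np.sin(theta),  np.cos(theta)]])
    R = P @ R0 @ P.T                             # matrix of the rotation w.r.t. E
    assert np.allclose(R @ d, d)                 # the axis L is fixed
    assert np.allclose(R @ R.T, np.eye(3))       # R is orthogonal
    print(R)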
Exercise 9.2.1 In each case find PD←B, where B and D are ordered bases of V. Then verify that CD(v) = PD←B CB(v).

a. V = R2, B = {(0, −1), (2, 1)}, D = {(0, 1), (1, 1)}, v = (3, −5)

b. V = P2, B = {x, 1 + x, x^2}, D = {2, x + 3, x^2 − 1}, v = 1 + x + x^2

c. V = M22, B = {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1], [0 0; 1 0]}, D = {[1 1; 0 0], [1 0; 1 0], [1 0; 0 1], [0 1; 1 0]}, v = [3 −1; 1 4]

Exercise 9.2.2 In R3 find PD←B, where B = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} and D = {(1, 0, 1), (1, 0, −1), (0, 1, 0)}. If v = (a, b, c), show that
CD(v) = (1/2)[a + c; a − c; 2b] and CB(v) = [a − b; b − c; c]
and verify that CD(v) = PD←B CB(v).

Exercise 9.2.3 In P3 find PD←B if B = {1, x, x^2, x^3} and D = {1, (1 − x), (1 − x)^2, (1 − x)^3}. Then express p = a + bx + cx^2 + dx^3 as a polynomial in powers of (1 − x).

Exercise 9.2.4 In each case verify that PD←B is the inverse of PB←D and that PE←D PD←B = PE←B, where B, D, and E are ordered bases of V.

a. V = R3, B = {(1, 1, 1), (1, −2, 1), (1, 0, −1)}, D = standard basis, E = {(1, 1, 1), (1, −1, 0), (−1, 0, 1)}

b. V = P2, B = {1, x, x^2}, D = {1 + x + x^2, 1 − x, −1 + x^2}, E = {x^2, x, 1}

Exercise 9.2.5 Use property (2) of Theorem 9.2.2, with D the standard basis of Rn, to find the inverse of:

a. A = [1 1 0; 1 0 1; 0 1 1]

b. A = [1 2 1; 2 3 0; −1 0 2]

Exercise 9.2.6 Find PD←B if B = {b1, b2, b3, b4} and D = {b2, b3, b1, b4}. Change matrices arising when the bases differ only in the order of the vectors are called permutation matrices.

Exercise 9.2.7 In each case, find P = PB0←B and verify that P^(-1) MB0(T) P = MB(T) for the given operator T.

a. T : R3 → R3, T(a, b, c) = (2a − b, b + c, c − 3a); B0 = {(1, 1, 0), (1, 0, 1), (0, 1, 0)} and B is the standard basis.

b. T : P2 → P2, T(a + bx + cx^2) = (a + b) + (b + c)x + (c + a)x^2; B0 = {1, x, x^2} and B = {1 − x^2, 1 + x, 2x + x^2}

c. T : M22 → M22, T[a b; c d] = [a + d, b + c; a + c, b + d]; B0 = {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} and B = {[1 1; 0 0], [0 0; 1 1], [1 0; 0 1], [0 1; 1 1]}

Exercise 9.2.8 In each case, verify that P^(-1)AP = D and find a basis B of R2 such that MB(TA) = D.

a. A = [11 −6; 12 −6], P = [2 3; 3 4], D = [2 0; 0 3]

b. A = [29 −12; 70 −29], P = [3 2; 7 5], D = [1 0; 0 −1]

Exercise 9.2.9 In each case, compute the characteristic polynomial cT(x).

a. T : R2 → R2, T(a, b) = (a − b, 2b − a)

b. T : R2 → R2, T(a, b) = (3a + 5b, 2a + 3b)

c. T : P2 → P2, T(a + bx + cx^2) = (a − 2c) + (2a + b + c)x + (c − a)x^2

d. T : P2 → P2, T(a + bx + cx^2) = (a + b − 2c) + (a − 2b + c)x + (b − 2a)x^2

Exercise 9.2.15 Let B = {b1, b2, ..., bn} be any ordered basis of Rn, written as columns. If Q = [b1 b2 ··· bn] is the matrix with the bi as columns, show that QCB(v) = v for all v in Rn.

Exercise 9.2.18 Find the standard matrix of the rotation R about the line through the origin with direction vector d = [2 3 6]^T. [Hint: Consider f = [6 2 −3]^T and g = [3 −6 2]^T.]
9.3 Invariant Subspaces and Direct Sums

A fundamental question in linear algebra is the following: If T : V → V is a linear operator, how can a basis B of V be chosen so the matrix MB(T) is as simple as possible? A basic technique for answering such questions will be explained in this section. If U is a subspace of V, write its image under T as
T(U) = {T(u) | u in U}
Example 9.3.1
Let T : V → V be any linear operator. Then:
1. {0} and V are T-invariant subspaces.
2. ker T and im T are T-invariant subspaces.
3. If U and W are T-invariant subspaces, then so are U ∩ W, U + W, and T(U).

Solution. Item 1 is clear, and the rest is left as Exercises 9.3.1 and 9.3.2.
Example 9.3.2
Define T : R3 → R3 by T(a, b, c) = (3a + 2b, b − c, 4a + 2b − c). Then U = {(a, b, a) | a, b in R} is T-invariant because
T(a, b, a) = (3a + 2b, b − a, 3a + 2b)
is in U for all a and b (the first and last entries are equal).
Example 9.3.3
Let T : V → V be a linear operator, and suppose that U = span {u1 , u2 , . . . , uk } is a subspace of
V . Show that U is T -invariant if and only if T (ui ) lies in U for each i = 1, 2, . . . , k.
Solution. If v lies in U, write v = r1u1 + r2u2 + ··· + rkuk with the ri in R. Then
T(v) = r1T(u1) + r2T(u2) + ··· + rkT(uk)
and this lies in U if each T(ui) lies in U. This shows that U is T-invariant if each T(ui) lies in U; the converse is clear.
Example 9.3.4
Define T : R2 → R2 by T (a, b) = (b, −a). Show that R2 contains no T -invariant subspace except
0 and R2 .
If U is T-invariant, then T restricts to an operator
T : U → U
This is the reason for the importance of T-invariant subspaces and is the first step toward finding a basis that simplifies the matrix of T.
Theorem 9.3.1
Let T : V → V be a linear operator where V has dimension n and suppose that U is any T -invariant
subspace of V . Let B1 = {b1 , . . . , bk } be any basis of U and extend it to a basis
B = {b1, ..., bk, bk+1, ..., bn} of V in any way. Then MB(T) has the block triangular form
MB(T) = [MB1(T) Y; 0 Z]
Proof. The matrix of (the restriction) T : U → U with respect to the basis B1 is the k × k matrix
MB1 (T ) = CB1 [T (b1 )] CB1 [T (b2 )] · · · CB1 [T (bk )]
Now compare the first column CB1 [T (b1 )] here with the first column CB [T (b1 )] of MB (T ). The fact that
T(b1) lies in U (because U is T-invariant) means that T(b1) has the form
T(b1) = t1b1 + t2b2 + ··· + tkbk
for some t1, ..., tk in R. Consequently,
CB1[T(b1)] = [t1; t2; ...; tk] in R^k, whereas CB[T(b1)] = [t1; t2; ...; tk; 0; ...; 0] in R^n
This shows that the matrices MB(T) and [MB1(T) Y; 0 Z] have identical first columns.
Similar statements apply to columns 2, 3, . . . , k, and this proves the theorem.
The block upper triangular form for the matrix MB (T ) in Theorem 9.3.1 is very useful because the
determinant of such a matrix equals the product of the determinants of each of the diagonal blocks. This
is recorded in Theorem 9.3.2 for reference, together with an important application to characteristic poly-
nomials.
Theorem 9.3.2
Let A be a block upper triangular matrix, say
A = [A11 A12 A13 ··· A1n; 0 A22 A23 ··· A2n; 0 0 A33 ··· A3n; ··· ; 0 0 0 ··· Ann]
where the diagonal blocks A11, A22, ..., Ann are square. Then:
1. det A = (det A11)(det A22)(det A33) ··· (det Ann)
2. cA(x) = cA11(x) cA22(x) cA33(x) ··· cAnn(x)
Proof. If n = 2, (1) is Theorem 3.1.5; the general case (by induction on n) is left to the reader. Then (2)
follows from (1) because
xI − A = [xI − A11, −A12, −A13, ···, −A1n; 0, xI − A22, −A23, ···, −A2n; 0, 0, xI − A33, ···, −A3n; ··· ; 0, 0, 0, ···, xI − Ann]
where, in each diagonal block, the symbol I stands for the identity matrix of the appropriate size.
Example 9.3.5
Consider the linear operator T : P2 → P2 given by
Show that U = span {x, 1 + 2x2 } is T -invariant, use it to find a block upper triangular matrix for T ,
and use that to compute cT (x).
Solution. U is T -invariant by Example 9.3.3 because U = span {x, 1 + 2x2 } and both T (x) and
T (1 + 2x2 ) lie in U :
Eigenvalues
The following theorem reveals the connection between the eigenspaces of an operator T and those of the
matrices representing T .
Theorem 9.3.3
Let T : V → V be a linear operator where dim V = n, let B denote any ordered basis of V , and let
CB : V → Rn denote the coordinate isomorphism. Then:
1. The eigenvalues λ of T are precisely the eigenvalues of the matrix MB (T ) and thus are the
roots of the characteristic polynomial cT (x).
2. In this case the eigenspaces Eλ (T ) and Eλ [MB (T )] are isomorphic via the restriction
CB : Eλ (T ) → Eλ [MB (T )].
Proof. Write A = MB (T ) for convenience. If T (v) = λ v, then λ CB (v) = CB [T (v)] = ACB (v) because CB
is linear. Hence CB (v) lies in Eλ (A), so we do have a function CB : Eλ (T ) → Eλ (A). It is clearly linear
and one-to-one; we claim it is onto. If x is in Eλ (A), write x = CB (v) for some v in V (CB is onto). This v
actually lies in Eλ (T ). To see why, observe that
CB [T (v)] = ACB (v) = Ax = λ x = λ CB (v) = CB (λ v)
Hence T (v) = λ v because CB is one-to-one, and this proves (2). As to (1), we have already shown that
eigenvalues of T are eigenvalues of A. The converse follows, as in the foregoing proof that CB is onto.
Theorem 9.3.3 shows how to pass back and forth between the eigenvectors of an operator T and the
eigenvectors of any matrix MB (T ) of T :
v lies in Eλ (T ) if and only if CB (v) lies in Eλ [MB (T )]
Example 9.3.6
Find the eigenvalues and eigenspaces for T : P2 → P2 given by
T (a + bx + cx2 ) = (2a + b + c) + (2a + b − 2c)x − (a + 2c)x2
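In practice one passes to MB(T) exactly as Theorem 9.3.3 suggests. Assuming SymPy, the following sketch (ours, not the book's solution) finds the eigenvalues of MB(T) in the basis B = {1, x, x^2} and converts each eigenvector back to a polynomial via CB:

    import sympy as sp

    x = sp.symbols('x')
    # M_B(T) in B = {1, x, x^2}: columns are C_B[T(1)], C_B[T(x)], C_B[T(x^2)].
    M = sp.Matrix([[ 2, 1,  1],
                   [ 2, 1, -2],
                   [-1, 0, -2]])
    print(sp.factor(M.charpoly(x).as_expr()))    # (x - 3)*(x + 1)**2

    # Each eigenvector of M is C_B(p) for an eigenvector p of T (Theorem 9.3.3).
    for lam, mult, vecs in M.eigenvects():
        for v in vecs:
            a, b, c = v
            print(lam, sp.expand(a + b*x + c*x**2))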
Theorem 9.3.4
Each eigenspace of a linear operator T : V → V is a T -invariant subspace of V .
Proof. If v lies in the eigenspace Eλ (T ), then T (v) = λ v, so T [T (v)] = T (λ v) = λ T (v). This shows that
T (v) lies in Eλ (T ) too.
Direct Sums
Sometimes vectors in a space V can be written naturally as a sum of vectors in two subspaces. For example,
in the space Mnn of all n × n matrices, we have the subspaces
U = {A in Mnn | A^T = A} and W = {Q in Mnn | Q^T = −Q}
where a matrix Q is called skew-symmetric if Q^T = −Q. Then every matrix A in Mnn can be written as the sum of a matrix in U and a matrix in W; indeed,
A = (1/2)(A + A^T) + (1/2)(A − A^T)
where the first summand is symmetric and the second is skew-symmetric.
In general, if U and W are subspaces of a vector space V, recall their sum and intersection:
U + W = {u + w | u in U and w in W}
U ∩ W = {v | v lies in both U and W}
These are subspaces of V, the sum containing both U and W and the intersection contained in both U and W. It turns out that the most interesting pairs U and W are those for which U ∩ W is as small as possible and U + W is as large as possible. If
U ∩ W = {0} and U + W = V
then V is said to be the direct sum of U and W, written V = U ⊕ W.
Example 9.3.7
In the space R5 , consider the subspaces U = {(a, b, c, 0, 0) | a, b, and c in R} and
W = {(0, 0, 0, d, e) | d and e in R}. Show that R5 = U ⊕W .
Example 9.3.8
If U is a subspace of Rn , show that Rn = U ⊕U ⊥ .
Solution. The equation Rn = U +U ⊥ holds because, given x in Rn , the vector projU x lies in U
and x − projU x lies in U ⊥. To see that U ∩U ⊥ = {0}, observe that any vector in U ∩U ⊥ is
orthogonal to itself and hence must be zero.
Example 9.3.9
Let {e1 , e2 , . . . , en } be a basis of a vector space V , and partition it into two parts: {e1 , . . . , ek } and
{ek+1 , . . . , en }. If U = span {e1 , . . . , ek } and W = span {ek+1 , . . . , en }, show that V = U ⊕W .
Theorem 9.3.5
Let U and W be subspaces of a finite dimensional vector space V . The following three conditions
are equivalent:
1. V = U ⊕W .
2. Each vector v in V can be written uniquely in the form
v = u+w u in U , w in W
Theorem 9.3.6
If a finite dimensional vector space V is the direct sum V = U ⊕ W of subspaces U and W, then
dim V = dim U + dim W
These direct sum decompositions of V play an important role in any discussion of invariant subspaces.
If T : V → V is a linear operator and if U1 is a T -invariant subspace, the block upper triangular matrix
MB(T) = [MB1(T) Y; 0 Z]    (9.3)
in Theorem 9.3.1 is achieved by choosing any basis B1 = {b1 , . . . , bk } of U1 and completing it to a basis
B = {b1 , . . . , bk , bk+1 , . . . , bn } of V in any way at all. The fact that U1 is T -invariant ensures that the
first k columns of MB (T ) have the form in (9.3) (that is, the last n − k entries are zero), and the question
arises whether the additional basis vectors bk+1 , . . . , bn can be chosen such that
U2 = span {bk+1 , . . . , bn }
is also T -invariant. In other words, does each T -invariant subspace of V have a T -invariant complement?
Unfortunately the answer in general is no (see Example 9.3.11 below); but when it is possible, the matrix
MB (T ) simplifies further. The assumption that the complement U2 = span {bk+1 , . . . , bn } is T -invariant
too means that Y = 0 in equation 9.3 above, and that Z = MB2 (T ) is the matrix of the restriction of T to
U2 (where B2 = {bk+1 , . . . , bn }). The verification is the same as in the proof of Theorem 9.3.1.
Theorem 9.3.7
Let T : V → V be a linear operator where V has dimension n. Suppose V = U1 ⊕U2 where both U1
and U2 are T -invariant. If B1 = {b1 , . . . , bk } and B2 = {bk+1 , . . . , bn } are bases of U1 and U2
respectively, then
B = {b1 , . . . , bk , bk+1 , . . . , bn }
is a basis of V , and MB (T ) has the block diagonal form
MB(T) = [MB1(T) 0; 0 MB2(T)]
where MB1 (T ) and MB2 (T ) are the matrices of the restrictions of T to U1 and to U2 respectively.
Then T has a matrix in block diagonal form as in Theorem 9.3.7, and the study of T is reduced to
studying its restrictions to the lower-dimensional spaces U1 and U2 . If these can be determined, so can T .
Here is an example in which the action of T on the invariant subspaces U1 and U2 is very simple indeed.
The result for operators is used to derive the corresponding similarity theorem for matrices.
Example 9.3.10
Let T : V → V be a linear operator satisfying T^2 = 1V (such operators are called involutions).
Define
U1 = {v | T(v) = v} and U2 = {v | T(v) = −v}
a. Show that V = U1 ⊕ U2.
b. If dim V = n, show that V has a basis B such that MB(T) = [Ik 0; 0 −In−k] for some k.
c. If A is an n × n matrix such that A^2 = I, show that A is similar to [Ik 0; 0 −In−k] for some k.
Solution.
a. The verification that U1 and U2 are subspaces of V is left to the reader. If v lies in U1 ∩ U2, then v = T(v) = −v, and it follows that v = 0. Hence U1 ∩ U2 = {0}. Given v in V, write
v = (1/2)[v + T(v)] + (1/2)[v − T(v)]
Since T[v + T(v)] = T(v) + v and T[v − T(v)] = T(v) − v = −[v − T(v)], the first summand lies in U1 and the second lies in U2, so V = U1 + U2. Hence V = U1 ⊕ U2.
b. U1 and U2 are easily shown to be T -invariant, so the result follows from Theorem 9.3.7 if
bases B1 = {b1 , . . . , bk } and B2 = {bk+1 , . . . , bn } of U1 and U2 can be found such that
MB1 (T ) = Ik and MB2 (T ) = −In−k . But this is true for any choice of B1 and B2 :
MB1(T) = [CB1[T(b1)] CB1[T(b2)] ··· CB1[T(bk)]]
= [CB1(b1) CB1(b2) ··· CB1(bk)]
= Ik
A similar argument shows that MB2 (T ) = −In−k , so part (b) follows with
B = {b1 , b2 , . . . , bn }.
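For the concrete involution T(A) = A^T on M22 (see Exercise 9.3.22), the decomposition V = U1 ⊕ U2 is the familiar symmetric/skew-symmetric splitting; here is a small NumPy check (ours, not the text's):

    import numpy as np

    # T(A) = A^T on M22 satisfies T^2 = 1, and every A splits as a sum of a
    # symmetric part (in U1, fixed by T) and a skew-symmetric part (in U2).
    A = np.array([[1.0, 2.0],
                  [5.0, -3.0]])
    S = (A + A.T) / 2                 # lies in U1: S^T = S
    K = (A - A.T) / 2                 # lies in U2: K^T = -K
    assert np.allclose(S + K, A)
    assert np.allclose(S.T, S) and np.allclose(K.T, -K)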
But Theorem 9.2.4 shows that MB (TA ) = P−1 AP for some invertible matrix P, and this
proves part (c).
Note that the passage from the result for operators to the analogous result for matrices is routine and can
be carried out in any situation, as in the verification of part (c) of Example 9.3.10. The key is the analysis
of the operators. In this case, the involutions are just the operators satisfying T 2 = 1V , and the simplicity
of this condition means that the invariant subspaces U1 and U2 are easy to find.
Unfortunately, not every linear operator T : V → V is reducible. In fact, the linear operator in Exam-
ple 9.3.4 has no invariant subspaces except 0 and V . On the other hand, one might expect that this is the
only type of nonreducible operator; that is, if the operator has an invariant subspace that is not 0 or V , then
some invariant complement must exist. The next example shows that even this is not valid.
Example 9.3.11
Consider the operator T : R2 → R2 given by T[a; b] = [a + b; b]. Show that U1 = R[1; 0] is T-invariant but that U1 has no T-invariant complement in R2.

Solution. Because U1 = span{[1; 0]} and T[1; 0] = [1; 0], it follows (by Example 9.3.3) that U1 is T-invariant. Now assume, if possible, that U1 has a T-invariant complement U2 in R2. Then U1 ⊕ U2 = R2 and T(U2) ⊆ U2. Theorem 9.3.6 gives
This is as far as we take the theory here, but in Chapter 11 the techniques introduced in this section will
be refined to show that every matrix is similar to a very nice matrix indeed—its Jordan canonical form.
Exercise 9.3.1 If T : V → V is any linear operator, show that ker T and im T are T-invariant subspaces.

Exercise 9.3.2 Let T be a linear operator on V. If U and W are T-invariant, show that

a. U ∩ W and U + W are also T-invariant.

b. T(U) is T-invariant.

Exercise 9.3.3 Let S and T be linear operators on V and assume that ST = TS.

Exercise 9.3.8 In each case, show that U is T-invariant, use it to find a block upper triangular matrix for T, and use that to compute cT(x).

a. T : P2 → P2, T(a + bx + cx^2) = (−a + 2b + c) + (a + 3b + c)x + (a + 4b)x^2, U = span{1, x + x^2}

b. T : P2 → P2, T(a + bx + cx^2) = (5a − 2b + c) + (5a − b + c)x + (a + 2c)x^2, U = span{1 − 2x^2, x + x^2}

Pn = U ⊕ W. (See Exercise 6.3.36.) [Hint: f(x) + f(−x) is even.]

Exercise 9.3.14 Let E be an n × n matrix with E^2 = E. Show that Mnn = U ⊕ W, where U = {A | AE = A} and W = {B | BE = 0}. [Hint: XE lies in U for every matrix X.]

Exercise 9.3.15 Let U and W be subspaces of V. Show that U ∩ W = {0} if and only if {u, w} is independent for all u ≠ 0 in U and all w ≠ 0 in W.

Exercise 9.3.16 Let V -T→ W -S→ V be linear transformations, and assume that dim V and dim W are finite.

a. If ST = 1V, show that W = im T ⊕ ker S. [Hint: Given w in W, show that w − TS(w) lies in ker S.]

b. Illustrate with R2 -T→ R3 -S→ R2 where T(x, y) = (x, y, 0) and S(x, y, z) = (x, y).

Exercise 9.3.20 Let T : V → V be a linear operator where dim V = n. If U is a T-invariant subspace of V, let T1 : U → U denote the restriction of T to U (so T1(u) = T(u) for all u in U). Show that cT(x) = cT1(x) · q(x) for some polynomial q(x). [Hint: Theorem 9.3.1.]

Exercise 9.3.21 Let T : V → V be a linear operator where dim V = n. Show that V has a basis of eigenvectors if and only if V has a basis B such that MB(T) is diagonal.

Exercise 9.3.22 In each case, show that T^2 = 1 and find (as in Example 9.3.10) an ordered basis B such that MB(T) has the given block form.

a. T : M22 → M22 where T(A) = A^T, MB(T) = [I3 0; 0 −1]

b. T : P3 → P3 where T[p(x)] = p(−x), MB(T) = [I2 0; 0 −I2]

b. If dim V = n, find a basis B of V such that MB(T) = [Ir 0; 0 0], where r = rank T.

c. If A is an n × n matrix such that A^2 = A, show that A is similar to [Ir 0; 0 0], where r = rank A. [Hint: Example 9.3.10.]

Exercise 9.3.25 In each case, show that T^2 = T and find (as in the preceding exercise) an ordered basis B such that MB(T) has the form given (0k is the k × k zero matrix).

a. T : P2 → P2 where T(a + bx + cx^2) = (a − b + c)(1 + x + x^2), MB(T) = [1 0; 0 02]

b. T : R3 → R3 where T(a, b, c) = (a + 2b, 0, 4b + c), MB(T) = [I2 0; 0 0]

c. T : M22 → M22 where T[a b; c d] = [−5 −15; 2 6][a b; c d], MB(T) = [I2 0; 0 02]

Exercise 9.3.26 Let T : V → V be an operator satisfying T^2 = cT, c ≠ 0.

a. Show that V = U ⊕ ker T, where U = {u | T(u) = cu}. [Hint: Compute T(v − (1/c)T(v)).]

b. If dim V = n, show that V has a basis B such that MB(T) = [cIr 0; 0 0], where r = rank T.

c. If A is any n × n matrix of rank r such that A^2 = cA, c ≠ 0, show that A is similar to [cIr 0; 0 0].

Exercise 9.3.27 Let T : V → V be an operator such that T^2 = c^2 1V, c ≠ 0.

a. Show that V = U1 ⊕ U2, where U1 = {v | T(v) = cv} and U2 = {v | T(v) = −cv}. [Hint: v = (1/2c){[T(v) + cv] − [T(v) − cv]}.]

b. If dim V = n, show that V has a basis B such that MB(T) = [cIk 0; 0 −cIn−k] for some k.

c. If A is an n × n matrix such that A^2 = c^2 I, c ≠ 0, show that A is similar to [cIk 0; 0 −cIn−k] for some k.

Exercise 9.3.28 If P is a fixed n × n matrix, define T : Mnn → Mnn by T(A) = PA. Let Uj denote the subspace of Mnn consisting of all matrices with all columns zero except possibly column j.

a. Show that each Uj is T-invariant.

b. Show that Mnn has a basis B such that MB(T) is block diagonal with each block on the diagonal equal to P.

Exercise 9.3.29 Let V be a vector space. If f : V → R is a linear transformation and z is a vector in V, define Tf,z : V → V by Tf,z(v) = f(v)z for all v in V. Assume that f ≠ 0 and z ≠ 0.

a. Show that Tf,z is a linear operator of rank 1.

b. If f ≠ 0, show that Tf,z is an idempotent if and only if f(z) = 1. (Recall that T : V → V is called an idempotent if T^2 = T.)

c. Show that every idempotent T : V → V of rank 1 has the form T = Tf,z for some f : V → R and some z in V with f(z) = 1. [Hint: Write im T = Rz and show that T(z) = z. Then use Exercise 9.3.23.]

Exercise 9.3.30 Let U be a fixed n × n matrix, and consider the operator T : Mnn → Mnn given by T(A) = UA.

a. Show that λ is an eigenvalue of T if and only if it is an eigenvalue of U.

b. If λ is an eigenvalue of T, show that Eλ(T) consists of all matrices whose columns lie in Eλ(U):
Eλ(T) = {[P1 P2 ··· Pn] | Pi in Eλ(U) for each i}

c. Show that if dim[Eλ(U)] = d, then dim[Eλ(T)] = nd. [Hint: If B = {x1, ..., xd} is a basis of Eλ(U), consider the set of all matrices with one column from B and the other columns zero.]

Exercise 9.3.31 Let T : V → V be a linear operator where V is finite dimensional. If U ⊆ V is a subspace, let Ū = {u0 + T(u1) + T^2(u2) + ··· + T^k(uk) | ui in U, k ≥ 0}. Show that Ū is the smallest T-invariant subspace containing U (that is, it is T-invariant, contains U, and is contained in every such subspace).

Exercise 9.3.32 Let U1, ..., Um be subspaces of V and assume that V = U1 + ··· + Um; that is, every v in V can be written (in at least one way) in the form v = u1 + ··· + um, ui in Ui. Show that the following conditions are equivalent.

i. If u1 + ··· + um = 0, ui in Ui, then ui = 0 for each i.

ii. If u1 + ··· + um = u′1 + ··· + u′m, ui and u′i in Ui, then ui = u′i for each i.

iii. Ui ∩ (U1 + ··· + Ui−1 + Ui+1 + ··· + Um) = {0} for each i = 1, 2, ..., m.

iv. Ui ∩ (Ui+1 + ··· + Um) = {0} for each i = 1, 2, ..., m − 1.

When these conditions are satisfied, we say that V is the direct sum of the subspaces Ui, and write V = U1 ⊕ U2 ⊕ ··· ⊕ Um.

Exercise 9.3.33

a. Let B be a basis of V and let B = B1 ∪ B2 ∪ ··· ∪ Bm where the Bi are pairwise disjoint, nonempty subsets of B. If Ui = span Bi for each i, show that V = U1 ⊕ U2 ⊕ ··· ⊕ Um (preceding exercise).

b. Conversely, if V = U1 ⊕ ··· ⊕ Um and Bi is a basis of Ui for each i, show that B = B1 ∪ ··· ∪ Bm is a basis of V as in (a).

Exercise 9.3.34 Let T : V → V be an operator where T^3 = 0. If u ∈ V and U = span{u, T(u), T^2(u)}, show that U is T-invariant and has dimension 3.
10. Inner Product Spaces
The dot product was introduced in Rn to provide a natural generalization of the geometrical notions of
length and orthogonality that were so important in Chapter 4. The plan in this chapter is to define an inner
product on an arbitrary real vector space V (of which the dot product is an example in Rn ) and use it to
introduce these concepts in V . While this causes some repetition of arguments in Chapter 8, it is well
worth the effort because of the much wider scope of the results when stated in full generality.
10.1 Inner Products and Norms

An inner product ⟨ , ⟩ on a real vector space V assigns a real number ⟨v, w⟩ to every pair v, w of vectors in V in such a way that the following axioms are satisfied:

P1. ⟨v, w⟩ is a real number.
P2. ⟨v, w⟩ = ⟨w, v⟩ for all v and w in V.
P3. ⟨v + w, u⟩ = ⟨v, u⟩ + ⟨w, u⟩ for all u, v, and w in V.
P4. ⟨rv, w⟩ = r⟨v, w⟩ for all v and w in V and all r in R.
P5. ⟨v, v⟩ > 0 for all v ≠ 0 in V.

A real vector space V with an inner product ⟨ , ⟩ will be called an inner product space. Note that every subspace of an inner product space is again an inner product space using the same inner product.^1
Example 10.1.1
Rn is an inner product space with the dot product as inner product:
⟨x, y⟩ = x · y for all columns x and y in Rn
See Theorem 5.3.1. This is also called the euclidean inner product, and Rn, equipped with the dot product, is called euclidean n-space.
^1 If we regard Cn as a vector space over the field C of complex numbers, then the "standard inner product" on Cn defined in Section 8.7 does not satisfy Axiom P4 (see Theorem 8.7.1(3)).
Example 10.1.2
If A and B are m × n matrices, define ⟨A, B⟩ = tr(AB^T) where tr(X) is the trace of the square matrix X. Show that ⟨ , ⟩ is an inner product in Mmn.

Solution. P1 is clear. Since tr(P) = tr(P^T) for every square matrix P, we have P2:
⟨A, B⟩ = tr(AB^T) = tr[(AB^T)^T] = tr(BA^T) = ⟨B, A⟩
Next, P3 and P4 follow because trace is a linear transformation Mmn → R (Exercise 10.1.19). Turning to P5, let r1, r2, ..., rm denote the rows of the matrix A. Then the (i, j)-entry of AA^T is ri · rj, so
⟨A, A⟩ = tr(AA^T) = r1 · r1 + r2 · r2 + ··· + rm · rm
But rj · rj is the sum of the squares of the entries of rj, so this shows that ⟨A, A⟩ is the sum of the squares of all nm entries of A. Axiom P5 follows.
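The last observation is easy to confirm numerically; assuming NumPy, the sketch below (ours) checks that tr(AB^T) is the sum of the entrywise products, so ⟨A, A⟩ is the sum of the squared entries:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((2, 3))
    B = rng.standard_normal((2, 3))

    inner = np.trace(A @ B.T)                  # <A, B> = tr(A B^T)
    assert np.isclose(inner, np.sum(A * B))    # = sum of entrywise products
    assert np.isclose(np.trace(A @ A.T), np.sum(A**2))   # axiom P5: sum of squares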
Example 10.1.3^2
Let C[a, b] denote the vector space of continuous functions from [a, b] to R, a subspace of F[a, b]. Show that
⟨f, g⟩ = ∫_a^b f(x)g(x) dx
defines an inner product on C[a, b].
and it follows that the number ⟨0, v⟩ must be zero. This observation is recorded for reference in the following theorem, along with several other properties of inner products. The other proofs are left as Exercise 10.1.20.
^2 This example (and others later that refer to it) can be omitted with no loss of continuity by students with no calculus background.
Theorem 10.1.1
Let ⟨ , ⟩ be an inner product on a space V; let v, u, and w denote vectors in V; and let r denote a real number.
3. ⟨v, 0⟩ = 0 = ⟨0, v⟩

Axioms P3 and P4 show that
⟨rv + sw, u⟩ = r⟨v, u⟩ + s⟨w, u⟩
for all r and s in R. Moreover, there is nothing special about the fact that there are two terms in the linear combination or that it is in the first component:
⟨r1v1 + r2v2 + ··· + rnvn, w⟩ = r1⟨v1, w⟩ + r2⟨v2, w⟩ + ··· + rn⟨vn, w⟩
and
⟨v, s1w1 + s2w2 + ··· + smwm⟩ = s1⟨v, w1⟩ + s2⟨v, w2⟩ + ··· + sm⟨v, wm⟩
hold for all ri and si in R and all v, w, vi, and wj in V. These results are described by saying that inner products "preserve" linear combinations. For example,
⟨2u − v, 3u + 2v⟩ = ⟨2u, 3u⟩ + ⟨2u, 2v⟩ + ⟨−v, 3u⟩ + ⟨−v, 2v⟩
= 6⟨u, u⟩ + 4⟨u, v⟩ − 3⟨v, u⟩ − 2⟨v, v⟩
= 6⟨u, u⟩ + ⟨u, v⟩ − 2⟨v, v⟩
and this condition characterizes the positive definite matrices (Theorem 8.3.2). This proves the first assertion in the next theorem.

Theorem 10.1.2
If A is any n × n positive definite matrix, then
⟨x, y⟩ = x^T A y for all columns x and y in Rn
defines an inner product on Rn, and every inner product on Rn arises in this way.
Proof. Given an inner product ⟨ , ⟩ on Rn, let {e1, e2, ..., en} be the standard basis of Rn. If x = Σ_{i=1}^n xi ei and y = Σ_{j=1}^n yj ej are two vectors in Rn, compute ⟨x, y⟩ by adding the inner product of each term xi ei to each term yj ej. The result is a double sum:
⟨x, y⟩ = Σ_{i=1}^n Σ_{j=1}^n ⟨xi ei, yj ej⟩ = Σ_{i=1}^n Σ_{j=1}^n xi ⟨ei, ej⟩ yj
Hence ⟨x, y⟩ = x^T A y, where A is the n × n matrix whose (i, j)-entry is ⟨ei, ej⟩. The fact that
⟨ei, ej⟩ = ⟨ej, ei⟩
shows that A is symmetric, and A is positive definite because ⟨x, x⟩ = x^T A x > 0 for all x ≠ 0 by axiom P5.
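Assuming NumPy, here is a small sketch (ours, not the text's) of Theorem 10.1.2 in action: a symmetric matrix is certified positive definite through its eigenvalues and then used as an inner product.

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [-1.0, 1.0]])                 # symmetric
    assert np.all(np.linalg.eigvalsh(A) > 0)    # hence positive definite

    def inner(u, v):
        # <u, v> = u^T A v, the inner product determined by A
        return u @ A @ v

    u = np.array([3.0, -2.0])
    v = np.array([1.0, 4.0])
    print(np.isclose(inner(u, v), inner(v, u)))   # True: axiom P2 (A symmetric)
    print(inner(u, u) > 0)                        # True: axiom P5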
Remark
If we refer to the inner product space Rn without specifying the inner product, we mean that the dot
product is to be used.
Example 10.1.4
Let the inner product ⟨ , ⟩ be defined on R2 by
⟨[v1; v2], [w1; w2]⟩ = 2v1w1 − v1w2 − v2w1 + v2w2
Find a symmetric matrix A such that ⟨v, w⟩ = v^T A w.

Solution. The (i, j)-entry of the matrix A is the coefficient of vi wj in the expression, so A = [2 −1; −1 1]. Incidentally, if x = [x; y], then
⟨x, x⟩ = 2x^2 − 2xy + y^2 = x^2 + (x − y)^2 > 0 whenever x ≠ 0
illustrating axiom P5.
As in Rn, the norm of a vector v in an inner product space is ‖v‖ = √⟨v, v⟩, and the distance between vectors v and w is defined by
d(v, w) = ‖v − w‖
Example 10.1.5
The norm of a continuous function f = f(x) in C[a, b] (with the inner product from Example 10.1.3) is given by
‖f‖ = √(∫_a^b f(x)^2 dx)
Hence ‖f‖^2 is the area beneath the graph of y = f(x)^2 between x = a and x = b (shaded in the diagram).
Example 10.1.6
Show that ⟨u + v, u − v⟩ = ‖u‖^2 − ‖v‖^2 in any inner product space.

A vector v in an inner product space V is called a unit vector if ‖v‖ = 1. The set of all unit vectors in V is called the unit ball in V. For example, if V = R2 (with the dot product) and v = (x, y), then
‖v‖^2 = x^2 + y^2
Hence the unit ball in R2 is the unit circle x^2 + y^2 = 1 with centre at the origin and radius 1. However, the shape of the unit ball varies with the choice of inner product.
Example 10.1.7
Let a > 0 and b > 0. If v = (x, y) and w = (x1, y1), define an inner product on R2 by
⟨v, w⟩ = (xx1)/a^2 + (yy1)/b^2
The reader can verify (Exercise 10.1.5) that this is indeed an inner product. In this case
‖v‖^2 = 1 if and only if x^2/a^2 + y^2/b^2 = 1
so the unit ball is the ellipse through the points (a, 0), (0, b), (−a, 0), and (0, −b) (shown in the diagram).
Example 10.1.7 graphically illustrates the fact that norms and distances in an inner product space V vary
with the choice of inner product in V .
Theorem 10.1.3
If v ≠ 0 is any vector in an inner product space V, then (1/‖v‖)v is the unique unit vector that is a positive multiple of v.
The next theorem reveals an important and useful fact about the relationship between norms and inner products, extending the Cauchy inequality for Rn (Theorem 5.3.2).

Theorem 10.1.4: Cauchy-Schwarz Inequality^4
If v and w are two vectors in an inner product space V, then
|⟨v, w⟩| ≤ ‖v‖‖w‖
Moreover, equality occurs if and only if one of v and w is a scalar multiple of the other.

Proof. Write ‖v‖ = a and ‖w‖ = b; if a = 0 or b = 0 the inequality is clear, so assume a ≠ 0 and b ≠ 0. A direct computation gives
0 ≤ ‖bv ∓ aw‖^2 = 2ab(ab ∓ ⟨v, w⟩)    (10.1)
It follows that ab − ⟨v, w⟩ ≥ 0 and ab + ⟨v, w⟩ ≥ 0, and hence that −ab ≤ ⟨v, w⟩ ≤ ab. But then |⟨v, w⟩| ≤ ab = ‖v‖‖w‖, as desired.
4 Hermann Amandus Schwarz (1843–1921) was a German mathematician at the University of Berlin. He had strong geo-
metric intuition, which he applied with great ingenuity to particular problems. A version of the inequality appeared in 1885.
Conversely, if |⟨v, w⟩| = ‖v‖‖w‖ = ab, then ⟨v, w⟩ = ±ab. Hence (10.1) shows that bv − aw = 0 or bv + aw = 0. It follows that one of v and w is a scalar multiple of the other, even if a = 0 or b = 0.
Example 10.1.8
If f and g are continuous functions on the interval [a, b], then (see Example 10.1.3)
(∫_a^b f(x)g(x) dx)^2 ≤ (∫_a^b f(x)^2 dx)(∫_a^b g(x)^2 dx)
Another famous inequality, the so-called triangle inequality, also comes from the Cauchy-Schwarz
inequality. It is included in the following list of basic properties of the norm of a vector.
Theorem 10.1.5
If V is an inner product space, the norm ‖ · ‖ has the following properties.
1. ‖v‖ ≥ 0 for every vector v in V.
2. ‖v‖ = 0 if and only if v = 0.
3. ‖rv‖ = |r|‖v‖ for every v in V and r in R.
4. ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v and w in V (the triangle inequality).

Proof. Because ‖v‖ = √⟨v, v⟩, properties (1) and (2) follow immediately from (3) and (4) of Theorem 10.1.1. As to (3), compute
‖rv‖^2 = ⟨rv, rv⟩ = r^2⟨v, v⟩ = r^2‖v‖^2
Hence (3) follows by taking positive square roots. Finally, the fact that ⟨v, w⟩ ≤ ‖v‖‖w‖ by the Cauchy-Schwarz inequality gives
‖v + w‖^2 = ‖v‖^2 + 2⟨v, w⟩ + ‖w‖^2 ≤ ‖v‖^2 + 2‖v‖‖w‖ + ‖w‖^2 = (‖v‖ + ‖w‖)^2
so (4) also follows by taking positive square roots.

The familiar triangle inequality for absolute values, |r + s| ≤ |r| + |s|,
is a special case of (4) where V = R = R1 and the dot product ⟨r, s⟩ = rs is used.
In many calculations in an inner product space, it is required to show that some vector v is zero. This
is often accomplished most easily by showing that its norm kvk is zero. Here is an example.
Example 10.1.9
Let {v1, ..., vn} be a spanning set for an inner product space V. If v in V satisfies ⟨v, vi⟩ = 0 for each i = 1, 2, ..., n, show that v = 0.

Solution. Write v = t1v1 + ··· + tnvn, where the ti are in R. Then
‖v‖^2 = ⟨v, v⟩ = t1⟨v, v1⟩ + ··· + tn⟨v, vn⟩ = 0
so v = 0 by (2) of Theorem 10.1.5.
The norm properties in Theorem 10.1.5 translate to the following properties of distance familiar from
geometry. The proof is Exercise 10.1.21.
Theorem 10.1.6
Let V be an inner product space, and let u, v, and w denote vectors in V.
1. d(v, w) ≥ 0, and d(v, w) = 0 if and only if v = w.
2. d(v, w) = d(w, v).
3. d(v, w) ≤ d(v, u) + d(u, w).
Exercise 10.1.1 In each case, determine which of axioms P1–P5 fail to hold.

a. V = R2, ⟨(x1, y1), (x2, y2)⟩ = x1y1x2y2

b. V = R3, ⟨(x1, x2, x3), (y1, y2, y3)⟩ = x1y1 − x2y2 + x3y3

c. V = C, ⟨z, w⟩ = z w̄, where w̄ denotes the complex conjugate of w

d. V = P3, ⟨p(x), q(x)⟩ = p(1)q(1)

e. V = M22, ⟨A, B⟩ = det(AB)

f. V = F[0, 1], ⟨f, g⟩ = f(1)g(0) + f(0)g(1)

Exercise 10.1.2 Let V be an inner product space. If U ⊆ V is a subspace, show that U is an inner product space using the same inner product.

Exercise 10.1.3 In each case, find a scalar multiple of v that is a unit vector.

a. v = f in C[0, 1] where f(x) = x^2; ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx

b. v = f in C[−π, π] where f(x) = cos x; ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx

c. v = [1; 3] in R2 where ⟨v, w⟩ = v^T [1 1; 1 2] w

d. v = [3; −1] in R2, ⟨v, w⟩ = v^T [1 −1; −1 2] w

Exercise 10.1.4 In each case, find the distance between u and v.

a. u = (3, −1, 2, 0), v = (1, 1, 1, 3); ⟨u, v⟩ = u · v

b. u = (1, 2, −1, 2), v = (2, 1, −1, 3); ⟨u, v⟩ = u · v

c. u = f, v = g in C[0, 1] where f(x) = x^2 and g(x) = 1 − x; ⟨f, g⟩ = ∫_0^1 f(x)g(x) dx

d. u = f, v = g in C[−π, π] where f(x) = 1 and g(x) = cos x; ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx

Exercise 10.1.5 Let a1, a2, ..., an be positive numbers. Given v = (v1, v2, ..., vn) and w = (w1, w2, ..., wn), define ⟨v, w⟩ = a1v1w1 + ··· + anvnwn. Show that this is an inner product on Rn.

Exercise 10.1.6 If {b1, ..., bn} is a basis of V and if v = v1b1 + ··· + vnbn and w = w1b1 + ··· + wnbn are vectors in V, define
⟨v, w⟩ = v1w1 + ··· + vnwn
Show that this is an inner product on V.

Exercise 10.1.7 If p = p(x) and q = q(x) are polynomials in Pn, define
⟨p, q⟩ = p(0)q(0) + p(1)q(1) + ··· + p(n)q(n)
Show that this is an inner product on Pn. [Hint for P5: Theorem 6.5.4 or Appendix D.]

Exercise 10.1.8 Let Dn denote the space of all functions from the set {1, 2, 3, ..., n} to R with pointwise addition and scalar multiplication (see Exercise 6.3.35). Show that ⟨ , ⟩ is an inner product on Dn if ⟨f, g⟩ = f(1)g(1) + f(2)g(2) + ··· + f(n)g(n).

Exercise 10.1.9 Let re(z) denote the real part of the complex number z. Show that ⟨ , ⟩ is an inner product on C if ⟨z, w⟩ = re(z w̄).

Exercise 10.1.10 If T : V → V is an isomorphism of the inner product space V, show that
⟨v, w⟩1 = ⟨T(v), T(w)⟩
defines a new inner product ⟨ , ⟩1 on V.

Exercise 10.1.11 Show that every inner product ⟨ , ⟩ on Rn has the form ⟨x, y⟩ = (Ux) · (Uy) for some upper triangular matrix U with positive diagonal entries. [Hint: Theorem 8.3.3.]

Exercise 10.1.12 In each case, show that ⟨v, w⟩ = v^T A w defines an inner product on R2 and hence show that A is positive definite.

a. A = [2 1; 1 1]

b. A = [5 −3; −3 2]

c. A = [3 2; 2 3]

d. A = [3 4; 4 6]

Exercise 10.1.13 In each case, find a symmetric matrix A such that ⟨v, w⟩ = v^T A w.

a. ⟨[v1; v2], [w1; w2]⟩ = v1w1 + 2v1w2 + 2v2w1 + 5v2w2

b. ⟨[v1; v2], [w1; w2]⟩ = v1w1 − v1w2 − v2w1 + 2v2w2

c. ⟨[v1; v2; v3], [w1; w2; w3]⟩ = 2v1w1 + v2w2 + v3w3 − v1w2 − v2w1 + v2w3 + v3w2

d. ⟨[v1; v2; v3], [w1; w2; w3]⟩ = v1w1 + 2v2w2 + 5v3w3 − 2v1w3 − 2v3w1

Exercise 10.1.14 If A is symmetric and x^T A x = 0 for all columns x in Rn, show that A = 0. [Hint: Consider ⟨x + y, x + y⟩ where ⟨x, y⟩ = x^T A y.]

Exercise 10.1.15 Show that the sum of two inner products on V is again an inner product.

Exercise 10.1.16 Let ‖u‖ = 1, ‖v‖ = 2, ‖w‖ = √3, ⟨u, v⟩ = −1, ⟨u, w⟩ = 0 and ⟨v, w⟩ = 3. Compute:

a. ⟨v + w, 2u − v⟩

b. ⟨u − 2v − w, 3w − v⟩

Exercise 10.1.17 Given the data in Exercise 10.1.16, show that u + v = w.

Exercise 10.1.18 Show that no vectors exist such that ‖u‖ = 1, ‖v‖ = 2, and ⟨u, v⟩ = −3.

Exercise 10.1.19 Complete Example 10.1.2.

Exercise 10.1.20 Prove Theorem 10.1.1.

Exercise 10.1.21 Prove Theorem 10.1.6.

Exercise 10.1.22 Let u and v be vectors in an inner product space V.

a. Expand ⟨2u − 7v, 3u + 5v⟩.

b. Expand ⟨3u − 4v, 5u + v⟩.

c. Show that ‖u + v‖^2 = ‖u‖^2 + 2⟨u, v⟩ + ‖v‖^2.

d. Show that ‖u − v‖^2 = ‖u‖^2 − 2⟨u, v⟩ + ‖v‖^2.

Exercise 10.1.23 Show that
‖v‖^2 + ‖w‖^2 = (1/2){‖v + w‖^2 + ‖v − w‖^2}
for any v and w in an inner product space.

Exercise 10.1.24 Let ⟨ , ⟩ be an inner product on a vector space V. Show that the corresponding distance function is translation invariant. That is, show that d(v, w) = d(v + u, w + u) for all v, w, and u in V.

Exercise 10.1.25

a. Show that ⟨u, v⟩ = (1/4)[‖u + v‖^2 − ‖u − v‖^2] for all u, v in an inner product space V.

b. If ⟨ , ⟩ and ⟨ , ⟩′ are two inner products on V that have equal associated norm functions, show that ⟨u, v⟩ = ⟨u, v⟩′ holds for all u and v.

Exercise 10.1.26 Let v denote a vector in an inner product space V.

Exercise 10.1.29 Use the Cauchy-Schwarz inequality in an inner product space to show that:

a. If ‖u‖ ≤ 1, then ⟨u, v⟩^2 ≤ ‖v‖^2 for all v in V.

b. (x cos θ + y sin θ)^2 ≤ x^2 + y^2 for all real x, y, and θ.

c. ‖r1v1 + ··· + rnvn‖^2 ≤ [r1‖v1‖ + ··· + rn‖vn‖]^2 for all vectors vi, and all ri > 0 in R.

Exercise 10.1.30 If A is a 2 × n matrix, let u and v denote the rows of A.

a. Show that AA^T = [‖u‖^2, u · v; u · v, ‖v‖^2].

b. Show that det(AA^T) ≥ 0.

Exercise 10.1.31

a. If v and w are nonzero vectors in an inner product space V, show that −1 ≤ ⟨v, w⟩/(‖v‖‖w‖) ≤ 1, and hence that a unique angle θ exists such that
⟨v, w⟩/(‖v‖‖w‖) = cos θ and 0 ≤ θ ≤ π
This angle θ is called the angle between v and w.

b. Find the angle between v = (1, 2, −1, 1, 3) and w = (2, 1, 0, 2, 0) in R5 with the dot product.
10.2 Orthogonal Sets of Vectors

The idea that two lines can be perpendicular is fundamental in geometry, and this section is devoted to introducing this notion into a general inner product space V. To motivate the definition, recall that two nonzero geometric vectors x and y in Rn are perpendicular (or orthogonal) if and only if x · y = 0. In general, two vectors v and w in an inner product space V are said to be orthogonal if
⟨v, w⟩ = 0
A set {f1, f2, ..., fn} of vectors is called an orthogonal set of vectors if
1. Each fi ≠ 0.
2. ⟨fi, fj⟩ = 0 for all i ≠ j.
If, in addition, ‖fi‖ = 1 for each i, the set {f1, f2, ..., fn} is called an orthonormal set.
Example 10.2.1
{sin x, cos x} is orthogonal in C[−π, π] because
∫_{−π}^{π} sin x cos x dx = [−(1/4) cos 2x]_{−π}^{π} = 0
The first result about orthogonal sets extends Pythagoras’ theorem in Rn (Theorem 5.3.4) and the same
proof works.
Theorem 10.2.2
Let {f1, f2, ..., fn} be an orthogonal set of vectors. Then
‖f1 + f2 + ··· + fn‖^2 = ‖f1‖^2 + ‖f2‖^2 + ··· + ‖fn‖^2
As before, the process of passing from an orthogonal set to an orthonormal one is called normalizing the
orthogonal set. The proof of Theorem 5.3.5 goes through to give
Theorem 10.2.3
Every orthogonal set of vectors is linearly independent.
Example 10.2.2
Show that {[2; −1; 0], [0; 1; 1], [0; −1; 2]} is an orthogonal basis of R3 with inner product
⟨v, w⟩ = v^T A w, where A = [1 1 0; 1 2 0; 0 0 1]

Solution. We have
⟨[2; −1; 0], [0; 1; 1]⟩ = [2 −1 0][1 1 0; 1 2 0; 0 0 1][0; 1; 1] = [1 0 0][0; 1; 1] = 0
and the reader can verify that the other pairs are orthogonal too. Hence the set is orthogonal, so it is linearly independent by Theorem 10.2.3. Because dim R3 = 3, it is a basis.
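The remaining pairs can be checked by machine; here is a NumPy sketch (ours):

    import numpy as np

    A = np.array([[1, 1, 0],
                  [1, 2, 0],
                  [0, 0, 1]])
    B = [np.array([2, -1, 0]),
         np.array([0, 1, 1]),
         np.array([0, -1, 2])]

    # <v, w> = v^T A w vanishes for every pair of distinct basis vectors.
    for i in range(3):
        for j in range(i + 1, 3):
            print(i, j, B[i] @ A @ B[j])    # prints 0 three times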
Example 10.2.3
If a0, a1, ..., an are distinct numbers and p(x) and q(x) are in Pn, define
⟨p(x), q(x)⟩ = p(a0)q(a0) + p(a1)q(a1) + ··· + p(an)q(an)
This is an inner product on Pn. (Axioms P1–P4 are routinely verified, and P5 holds because 0 is the only polynomial of degree n with n + 1 distinct roots. See Theorem 6.5.4 or Appendix D.)
Recall the Lagrange polynomials δ0(x), δ1(x), ..., δn(x) relative to the numbers a0, a1, ..., an from Section 6.5; the numerator of δk(x) is the product of the factors
(x − a0), (x − a1), (x − a2), ..., (x − an)
except that the kth factor is omitted. Then {δ0(x), δ1(x), ..., δn(x)} is orthonormal with respect to ⟨ , ⟩ because δk(ai) = 0 if i ≠ k and δk(ak) = 1. These facts also show that ⟨p(x), δk(x)⟩ = p(ak), so the expansion theorem gives
p(x) = p(a0)δ0(x) + p(a1)δ1(x) + ··· + p(an)δn(x)
for each p(x) in Pn. This is the Lagrange interpolation expansion of p(x), Theorem 6.5.3, which is important in numerical integration.
The proof of this result (and the next) is the same as for the dot product in Rn (Lemma 8.1.1 and
Theorem 8.1.2).
f1 = v1
f2 = v2 − (⟨v2, f1⟩/‖f1‖^2) f1
f3 = v3 − (⟨v3, f1⟩/‖f1‖^2) f1 − (⟨v3, f2⟩/‖f2‖^2) f2
⋮
fk = vk − (⟨vk, f1⟩/‖f1‖^2) f1 − (⟨vk, f2⟩/‖f2‖^2) f2 − ··· − (⟨vk, fk−1⟩/‖fk−1‖^2) fk−1
The purpose of the Gram-Schmidt algorithm is to convert a basis of an inner product space into an or-
thogonal basis. In particular, it shows that every finite dimensional inner product space has an orthogonal
basis.
Example 10.2.4
Consider V = P3 with the inner product ⟨p, q⟩ = ∫_{−1}^{1} p(x)q(x) dx. If the Gram-Schmidt algorithm is applied to the basis {1, x, x^2, x^3}, show that the result is the orthogonal basis
{1, x, x^2 − 1/3, x^3 − (3/5)x}

Solution. Take f1 = 1. Then
f2 = x − (⟨x, f1⟩/‖f1‖^2) f1 = x − (0/2)·1 = x
f3 = x^2 − (⟨x^2, f1⟩/‖f1‖^2) f1 − (⟨x^2, f2⟩/‖f2‖^2) f2 = x^2 − ((2/3)/2)·1 − 0·x = x^2 − 1/3 = (1/3)(3x^2 − 1)
and a similar computation gives f4 = x^3 − (3/5)x = (1/5)(5x^3 − 3x).
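The same computation can be scripted. Assuming SymPy, the sketch below (ours) runs the Gram-Schmidt recursion with the integral inner product and reproduces the basis above:

    import sympy as sp

    x = sp.symbols('x')

    def inner(p, q):
        # <p, q> = integral of p(x)q(x) over [-1, 1]
        return sp.integrate(p * q, (x, -1, 1))

    fs = []
    for v in [1, x, x**2, x**3]:
        f = v - sum(inner(v, fj) / inner(fj, fj) * fj for fj in fs)
        fs.append(sp.expand(f))
    print(fs)    # [1, x, x**2 - 1/3, x**3 - 3*x/5]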
The polynomials in Example 10.2.4 are such that the leading coefficient is 1 in each case. In other contexts
(the study of differential equations, for example) it is customary to take multiples p(x) of these polynomials
such that p(1) = 1. The resulting orthogonal basis of P3 is
{1, x, 13 (3x2 − 1), 15 (5x3 − 3x)}
and these are the first four Legendre polynomials, so called to honour the French mathematician A. M.
Legendre (1752–1833). They are important in the study of differential equations.
If V is an inner product space of dimension n, let E = {f1, f2, ..., fn} be an orthonormal basis of V (by Theorem 10.2.5). If v = v1f1 + v2f2 + ··· + vnfn and w = w1f1 + w2f2 + ··· + wnfn are two vectors in V, we have CE(v) = [v1 v2 ··· vn]^T and CE(w) = [w1 w2 ··· wn]^T. Hence, since E is orthonormal,
⟨v, w⟩ = v1w1 + v2w2 + ··· + vnwn = CE(v) · CE(w)
This shows that the coordinate isomorphism CE : V → Rn preserves inner products, and so proves

Corollary 10.2.1
If V is any n-dimensional inner product space, then V is isomorphic to Rn as inner product spaces. More precisely, if E is any orthonormal basis of V, the coordinate isomorphism CE : V → Rn satisfies ⟨v, w⟩ = CE(v) · CE(w) for all v and w in V.
The orthogonal complement of a subspace U of Rn was defined (in Chapter 8) to be the set of all vectors
in Rn that are orthogonal to every vector in U . This notion has a natural extension in an arbitrary inner
product space. Let U be a subspace of an inner product space V . As in Rn , the orthogonal complement
U ⊥ of U in V is defined by
U⊥ = {v | v ∈ V, ⟨v, u⟩ = 0 for all u ∈ U}
Theorem 10.2.6
Let U be a finite dimensional subspace of an inner product space V.
1. U⊥ is a subspace of V and V = U ⊕ U⊥.
2. If dim V = n, then dim U + dim U⊥ = n.
3. If dim V = n, then U⊥⊥ = U.
Theorem 10.2.7
Let U be a finite dimensional subspace of an inner product space V, and let v be a vector in V.
1. projU v lies in U.
2. v − projU v lies in U⊥.
3. If {f1, f2, ..., fn} is any orthogonal basis of U, then
projU v = (⟨v, f1⟩/‖f1‖^2) f1 + (⟨v, f2⟩/‖f2‖^2) f2 + ··· + (⟨v, fn⟩/‖fn‖^2) fn

Proof. Only (3) remains to be proved. But since {f1, f2, ..., fn} is an orthogonal basis of U and since projU v is in U, the result follows from the expansion theorem (Theorem 10.2.4) applied to the finite dimensional space U.

Note that there is no requirement in Theorem 10.2.7 that V is finite dimensional.
Example 10.2.5
Let U be a subspace of the finite dimensional inner product space V . Show that
projU ⊥ v = v − projU v for all v ∈ V .
The vectors v, projU v, and v − projU v in Theorem 10.2.7 can be visualized geometrically as in the diagram (where U is shaded and dim U = 2). This suggests that projU v is the vector in U closest to v. This is, in fact, the case.
Theorem 10.2.8
Let U be a finite dimensional subspace of an inner product space V, and let v be a vector in V. Then projU v is the vector in U closest to v; that is,
‖v − projU v‖ < ‖v − u‖
for every u in U with u ≠ projU v.
Example 10.2.6
Consider the space C[−1, 1] of real-valued continuous functions on the interval [−1, 1] with inner product ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx. Find the polynomial p = p(x) of degree at most 2 that best approximates the absolute-value function f given by f(x) = |x|.
Solution. Here we want the vector p in the subspace U = P2 of C[−1, 1] that is closest to f. In Example 10.2.4 the Gram-Schmidt algorithm was applied to give an orthogonal basis {f1 = 1, f2 = x, f3 = 3x^2 − 1} of P2 (where, for convenience, we have changed f3 by a numerical factor). Hence the required polynomial is
p = proj_{P2} f
= (⟨f, f1⟩/‖f1‖^2) f1 + (⟨f, f2⟩/‖f2‖^2) f2 + (⟨f, f3⟩/‖f3‖^2) f3
= (1/2) f1 + 0 f2 + ((1/2)/(8/5)) f3
= (3/16)(5x^2 + 1)
The graphs of p(x) and f(x) are given in the diagram.
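Assuming SymPy, the projection can be computed directly from Theorem 10.2.7; the sketch below (ours) reproduces p = (3/16)(5x^2 + 1):

    import sympy as sp

    x = sp.symbols('x', real=True)

    def inner(p, q):
        return sp.integrate(p * q, (x, -1, 1))

    f = sp.Abs(x)
    F = [1, x, 3*x**2 - 1]                       # orthogonal basis of P2
    p = sum(inner(f, fi) / inner(fi, fi) * fi for fi in F)
    print(sp.expand(p))                          # 15*x**2/16 + 3/16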
If polynomials of degree at most n are allowed in Example 10.2.6, the polynomial in Pn is proj Pn f ,
and it is calculated in the same way. Because the subspaces Pn get larger as n increases, it turns out that the
approximating polynomials proj Pn f get closer and closer to f . In fact, solving many practical problems
comes down to approximating some interesting vector v (often a function) in an infinite dimensional inner
product space V by vectors in finite dimensional subspaces (which can be computed). If U1 ⊆ U2 are finite
dimensional subspaces of V , then
‖v − proj_{U2} v‖ ≤ ‖v − proj_{U1} v‖
by Theorem 10.2.8 (because proj_{U1} v lies in U1 and hence in U2). Thus proj_{U2} v is a better approximation to v than proj_{U1} v. Hence a general method in approximation theory might be described as follows: Given v, use it to construct a sequence of finite dimensional subspaces
U1 ⊆ U2 ⊆ U3 ⊆ ···
of V in such a way that ‖v − proj_{Uk} v‖ approaches zero as k increases. Then proj_{Uk} v is a suitable approximation to v if k is large enough. For more information, the interested reader may wish to consult Interpolation and Approximation by Philip J. Davis (New York: Blaisdell, 1963).
Use the dot product in Rn unless otherwise instructed.

Exercise 10.2.1 In each case, verify that B is an orthogonal basis of V with the given inner product and use the expansion theorem to express v as a linear combination of the basis vectors.

a. v = [a; b], B = {[1; −1], [1; 0]}, V = R2, ⟨v, w⟩ = v^T A w where A = [2 2; 2 5]

b. v = [a; b; c], B = {[1; 1; 1], [−1; 0; 1], [1; −6; 1]}, V = R3, ⟨v, w⟩ = v^T A w where A = [2 0 1; 0 1 0; 1 0 2]

c. v = a + bx + cx^2, B = {1, x, 2 − 3x^2}, V = P2, ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(−1)q(−1)

d. v = [a b; c d], B = {[1 0; 0 1], [1 0; 0 −1], [0 1; 1 0], [0 1; −1 0]}, V = M22, ⟨X, Y⟩ = tr(XY^T)

Exercise 10.2.2 Let R3 have the inner product ⟨(x, y, z), (x′, y′, z′)⟩ = 2xx′ + yy′ + 3zz′. In each case, use the Gram-Schmidt algorithm to transform B into an orthogonal basis.

a. B = {(1, 1, 0), (1, 0, 1), (0, 1, 1)}

b. B = {(1, 1, 1), (1, −1, 1), (1, 1, 0)}

Exercise 10.2.3 Let M22 have the inner product ⟨X, Y⟩ = tr(XY^T). In each case, use the Gram-Schmidt algorithm to transform B into an orthogonal basis.

a. B = {[1 1; 0 0], [1 0; 1 0], [0 1; 0 1], [1 0; 0 1]}

b. B = {[1 1; 0 1], [1 0; 1 1], [1 0; 0 1], [1 0; 0 0]}

Exercise 10.2.4 In each case, use the Gram-Schmidt process to convert the basis B = {1, x, x^2} into an orthogonal basis of P2.

a. ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2)

b. ⟨p, q⟩ = ∫_0^2 p(x)q(x) dx

Exercise 10.2.5 Show that {1, x − 1/2, x^2 − x + 1/6} is an orthogonal basis of P2 with the inner product
⟨p, q⟩ = ∫_0^1 p(x)q(x) dx
and find the corresponding orthonormal basis.

Exercise 10.2.6 In each case find U⊥ and compute dim U and dim U⊥.

a. U = span{(1, 1, 2, 0), (3, −1, 2, 1), (1, −3, −2, 1)} in R4

b. U = span{(1, 1, 0, 0)} in R4

c. U = span{1, x} in P2 with ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2)

d. U = span{x} in P2 with ⟨p, q⟩ = ∫_0^1 p(x)q(x) dx

e. U = span{[1 0; 0 1], [1 1; 0 0]} in M22 with ⟨X, Y⟩ = tr(XY^T)

f. U = span{[1 1; 0 0], [1 0; 1 0], [1 0; 1 1]} in M22 with ⟨X, Y⟩ = tr(XY^T)

Exercise 10.2.7 Let ⟨X, Y⟩ = tr(XY^T) in M22. In each case find the matrix in U closest to A.

a. U = span{[1 0; 0 1], [1 1; 1 1]}, A = [1 −1; 2 3]

b. U = span{[1 0; 0 1], [1 1; 1 −1], [1 1; 0 0]}, A = [2 1; 3 2]

a. Show that {u, v} is orthogonal if and only if ‖u + v‖^2 = ‖u‖^2 + ‖v‖^2.

b. If u = v = (1, 1) and w = (−1, 0), show that ‖u + v + w‖^2 = ‖u‖^2 + ‖v‖^2 + ‖w‖^2 but {u, v, w} is not orthogonal. Hence the converse to Pythagoras' theorem need not hold for more than two vectors.

Exercise 10.2.11 Let v and w be vectors in an inner product space V. Show that:

a. v is orthogonal to w if and only if ‖v + w‖ = ‖v − w‖.

b. v + w and v − w are orthogonal if and only if ‖v‖ = ‖w‖.

Exercise 10.2.12 Let U and W be subspaces of an n-dimensional inner product space V. Suppose ⟨u, w⟩ = 0 for all u ∈ U and w ∈ W and dim U + dim W = n. Show that U⊥ = W.

Exercise 10.2.13 If U and W are subspaces of an inner product space, show that (U + W)⊥ = U⊥ ∩ W⊥.

Exercise 10.2.18 Let U be a finite dimensional subspace of an inner product space V, and let v be a vector in V.

a. Show that v lies in U if and only if v = projU(v).

b. If V = R3, show that (−5, 4, −3) lies in span{(3, −2, 5), (−1, 1, 1)} but that (−1, 0, 2) does not.

Exercise 10.2.19 Let n ≠ 0 and w ≠ 0 be nonparallel vectors in R3 (as in Chapter 4).

a. Show that {n, n × w, w − (n · w/‖n‖^2)n} is an orthogonal basis of R3.

b. Show that span{n × w, w − (n · w/‖n‖^2)n} is the plane through the origin with normal n.

Exercise 10.2.20 Let E = {f1, f2, ..., fn} be an orthonormal basis of V.

a. Show that ⟨v, w⟩ = CE(v) · CE(w) for all v and w in V.
10.3 Orthogonal Diagonalization

There is a natural way to define a symmetric linear operator T on a finite dimensional inner product space V. If T is such an operator, it is shown in this section that V has an orthogonal basis consisting of eigenvectors of T. This yields another proof of the principal axes theorem in the context of inner product spaces.

Theorem 10.3.1
Let T : V → V be a linear operator on a finite dimensional space V. Then the following conditions are equivalent.
1. V has a basis consisting of eigenvectors of T.
2. There exists a basis B of V such that MB(T) is diagonal.

Proof. We have MB(T) = [CB[T(b1)] CB[T(b2)] ··· CB[T(bn)]] where B = {b1, b2, ..., bn} is any basis of V. By comparing columns:
MB(T) = diag(λ1, λ2, ..., λn) if and only if T(bi) = λi bi for each i
The theorem follows.
Example 10.3.1
Let T : P2 → P2 be given by

so cT(x) = (x + 2)^2 (x − 5), and the eigenvalues of T are λ = −2 and λ = 5. One sees that {[0; 1; 0], [4; 0; −3], [1; 0; 1]} is a basis of eigenvectors of MB(T), so B = {x, 4 − 3x^2, 1 + x^2} is a basis of P2 consisting of eigenvectors of T.
If V is an inner product space, the expansion theorem gives a simple formula for the matrix of a linear
operator with respect to an orthogonal basis.
Theorem 10.3.2
Let T : V → V be a linear operator on an inner product space V. If B = {b1, b2, ..., bn} is an orthogonal basis of V, then MB(T) is the matrix whose (i, j)-entry is
⟨bi, T(bj)⟩ / ‖bi‖^2

Proof. Write MB(T) = [aij]. The jth column of MB(T) is CB[T(bj)], so
T(bj) = a1j b1 + ··· + aij bi + ··· + anj bn
On the other hand, the expansion theorem (Theorem 10.2.4) expands T(bj) in the orthogonal basis B with coefficients ⟨bi, T(bj)⟩/‖bi‖^2, so aij = ⟨bi, T(bj)⟩/‖bi‖^2 by comparing coefficients.
Example 10.3.2
Let T : R3 → R3 be given by
If the dot product in R3 is used, find the matrix of T with respect to the standard basis
B = {e1 , e2 , e3 } where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).
It is not difficult to verify that an n × n matrix A is symmetric if and only if x · (Ay) = (Ax) · y holds for
all columns x and y in Rn . The analog for operators is as follows:
Theorem 10.3.3
Let V be a finite dimensional inner product space. The following conditions are equivalent for a linear operator T : V → V.
1. ⟨v, T(w)⟩ = ⟨T(v), w⟩ for all v and w in V (that is, T is symmetric).
2. The matrix of T is symmetric with respect to every orthonormal basis of V.
3. The matrix of T is symmetric with respect to some orthonormal basis of V.

Proof. (1) ⇒ (2). Let B = {f1, ..., fn} be an orthonormal basis of V, and write MB(T) = [aij]. Then aij = ⟨fi, T(fj)⟩ by Theorem 10.3.2. Hence (1) and axiom P2 give
aij = ⟨fi, T(fj)⟩ = ⟨T(fi), fj⟩ = ⟨fj, T(fi)⟩ = aji
so MB(T) is symmetric.
Example 10.3.3
If A is an n × n matrix, let TA : Rn → Rn be the matrix operator given by TA (v) = Av for all
columns v. If the dot product is used in Rn , then TA is a symmetric operator if and only if A is a
symmetric matrix.
Solution. If E is the standard basis of Rn , then E is orthonormal when the dot product is used. We
have ME (TA ) = A (by Example 9.1.4), so the result follows immediately from part (3) of
Theorem 10.3.3.
It is important to note that whether an operator is symmetric depends on which inner product is being
used (see Exercise 10.3.2).
If V is a finite dimensional inner product space, the eigenvalues of an operator T : V → V are the
same as those of MB (T ) for any orthonormal basis B (see Theorem 9.3.3). If T is symmetric, MB (T ) is a
symmetric matrix and so has real eigenvalues by Theorem 5.5.7. Hence we have the following:
Theorem 10.3.4
A symmetric linear operator on a finite dimensional inner product space has real eigenvalues.
If U is a subspace of an inner product space V , recall that its orthogonal complement is the subspace
U⊥ of V defined by
U ⊥ = {v in V | hv, ui = 0 for all u in U }
Theorem 10.3.5
Let T : V → V be a symmetric linear operator on an inner product space V, and let U be a T-invariant subspace of V. Then:

1. The restriction of T to U is a symmetric linear operator on U.

2. U⊥ is also T-invariant.

Proof.

1. U is itself an inner product space using the same inner product, and condition 1 in Theorem 10.3.3 that T is symmetric is clearly preserved.

2. If v is in U⊥, our task is to show that T(v) is also in U⊥; that is, ⟨T(v), u⟩ = 0 for all u in U. But if u is in U, then T(u) also lies in U because U is T-invariant, so ⟨T(v), u⟩ = ⟨v, T(u)⟩ = 0 because T is symmetric and v is in U⊥.
The principal axes theorem (Theorem 8.2.2) asserts that an n × n matrix A is symmetric if and only if
Rn has an orthogonal basis of eigenvectors of A. The following result not only extends this theorem to an
arbitrary n-dimensional inner product space, but the proof is much more intuitive.
Theorem 10.3.6
The following conditions are equivalent for a linear operator T on a finite dimensional inner product space V.

1. T is symmetric.

2. V has an orthogonal basis consisting of eigenvectors of T.
Proof. (1) ⇒ (2). Assume that T is symmetric and proceed by induction on n = dim V . If n = 1, every
nonzero vector in V is an eigenvector of T , so there is nothing to prove. If n ≥ 2, assume inductively
that the theorem holds for spaces of dimension less than n. Let λ1 be a real eigenvalue of T (by Theo-
rem 10.3.4) and choose an eigenvector f1 corresponding to λ1 . Then U = Rf1 is T -invariant, so U ⊥ is
also T -invariant by Theorem 10.3.5 (T is symmetric). Because dim U ⊥ = n − 1 (Theorem 10.2.6), and
because the restriction of T to U ⊥ is a symmetric operator (Theorem 10.3.5), it follows by induction that
U ⊥ has an orthogonal basis {f2 , . . . , fn } of eigenvectors of T . Hence B = {f1 , f2 , . . . , fn } is an orthogonal
basis of V , which proves (2).
(2) ⇒ (1). If B = {f1 , . . . , fn } is a basis as in (2), then MB (T ) is symmetric (indeed diagonal), so T is
symmetric by Theorem 10.3.3.
The matrix version of the principal axes theorem is an immediate consequence of Theorem 10.3.6. If A is an n × n symmetric matrix, then TA : Rⁿ → Rⁿ is a symmetric operator, so let B be an orthonormal basis of Rⁿ consisting of eigenvectors of TA (and hence of A). Then PᵀAP is diagonal where P is the orthogonal matrix whose columns are the vectors in B (see Theorem 9.2.4).
Similarly, let T : V → V be a symmetric linear operator on the n-dimensional inner product space V
and let B0 be any convenient orthonormal basis of V . Then an orthonormal basis of eigenvectors of T can
be computed from MB0(T). In fact, if PᵀMB0(T)P is diagonal where P is orthogonal, let B = {f1, …, fn}
be the vectors in V such that CB0 (f j ) is column j of P for each j. Then B consists of eigenvectors of T by
Theorem 9.3.3, and they are orthonormal because B0 is orthonormal. Indeed
⟨fi, fj⟩ = CB0(fi) · CB0(fj)
holds for all i and j, as the reader can verify. Here is an example.
Example 10.3.4
Let T : P2 → P2 be given by

T(a + bx + cx²) = (8a − 2b + 2c) + (−2a + 5b + 4c)x + (2a + 4b + 5c)x²

Using the inner product ⟨a + bx + cx², a′ + b′x + c′x²⟩ = aa′ + bb′ + cc′, show that T is symmetric and find an orthonormal basis of P2 consisting of eigenvectors.

Solution. If B0 = {1, x, x²}, then

MB0(T) =
[ 8 −2 2 ]
[ −2 5 4 ]
[ 2 4 5 ]

is symmetric, so T is symmetric. This matrix was analyzed in Example 8.2.5, where it was found that an orthonormal basis of eigenvectors is

{ (1/3)(1, 2, −2)ᵀ, (1/3)(2, 1, 2)ᵀ, (1/3)(−2, 2, 1)ᵀ }

Because B0 is orthonormal, the corresponding orthonormal basis of P2 is

B = { (1/3)(1 + 2x − 2x²), (1/3)(2 + x + 2x²), (1/3)(−2 + 2x + x²) }
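For readers who want to reproduce such computations, here is a minimal numerical sketch (Python with numpy, assuming numpy is available; it is not part of the text). It orthogonally diagonalizes the matrix MB0(T) of Example 10.3.4; the columns returned need not match the hand-chosen eigenvectors exactly, since any orthonormal eigenbasis will do:

```python
import numpy as np

# Orthogonally diagonalize the symmetric matrix of Example 10.3.4.
M = np.array([[8.0, -2.0, 2.0],
              [-2.0, 5.0, 4.0],
              [2.0, 4.0, 5.0]])

# eigh is for symmetric matrices: real eigenvalues, orthonormal eigenvectors.
evals, P = np.linalg.eigh(M)
print(evals)                                     # approximately [0., 9., 9.]
print(np.allclose(P.T @ P, np.eye(3)))           # True: P is orthogonal
print(np.allclose(P.T @ M @ P, np.diag(evals)))  # True: P^T M P is diagonal
```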
Exercise 10.3.1 In each case, show that T is symmetric by calculating MB(T) for some orthonormal basis B.

a. T : R³ → R³; T(a, b, c) = (a − 2b, −2a + 2b + 2c, 2b − c); dot product

b. T : M22 → M22; T[[a, b],[c, d]] = [[c − a, d − b],[a + 2c, b + 2d]]; inner product: ⟨[[x, y],[z, w]], [[x′, y′],[z′, w′]]⟩ = xx′ + yy′ + zz′ + ww′

c. T : P2 → P2; T(a + bx + cx²) = (b + c) + (a + c)x + (a + b)x²; inner product: ⟨a + bx + cx², a′ + b′x + c′x²⟩ = aa′ + bb′ + cc′

d. T : P2 → P2; T(a + bx + cx²) = (c − a) + 3bx + (a − c)x²; inner product as in part (c)

Exercise 10.3.2

b. Show that T is not symmetric if the inner product ⟨x, y⟩ = xAyᵀ is used.

Exercise 10.3.3 Let T : R² → R² be given by T(a, b) = (a − b, b − a), and use the dot product in R².

a. Show that T is symmetric.

b. Show that MB(T) is not symmetric if the orthogonal basis B = {(1, 0), (0, 2)} is used. Why does this not contradict Theorem 10.3.3?

Exercise 10.3.4 Let V be an n-dimensional inner product space, and let T and S denote symmetric linear operators on V. Show that:

a. The identity operator is symmetric.

b. rT is symmetric for all r in R.

c. S + T is symmetric.

d. If T is invertible, then T⁻¹ is symmetric.

e. If ST = TS, then ST is symmetric.
Exercise 10.3.6 If A is any n × n matrix, let TA : Rⁿ → Rⁿ be given by TA(x) = Ax. Suppose an inner product on Rⁿ is given by ⟨x, y⟩ = xᵀPy, where P is a positive definite matrix.

a. Show that TA is symmetric if and only if PA = AᵀP.

b. Use part (a) to deduce Example 10.3.3.

Exercise 10.3.7 Let T : M22 → M22 be given by T(X) = AX, where A is a fixed 2 × 2 matrix.

a. Compute MB(T), where B = {[[1, 0],[0, 0]], [[0, 0],[1, 0]], [[0, 1],[0, 0]], [[0, 0],[0, 1]]}. Note the order!

Exercise 10.3.9 If T : V → V is symmetric, write T⁻¹(W) = {v | T(v) is in W}. Show that T(U)⊥ = T⁻¹(U⊥) holds for every subspace U of V.

Exercise 10.3.10 Let T : M22 → M22 be defined by T(X) = PXQ, where P and Q are nonzero 2 × 2 matrices. Use the inner product ⟨X, Y⟩ = tr(XYᵀ). Show that T is symmetric if and only if either P and Q are both symmetric or both are scalar multiples of [[0, 1],[−1, 0]]. [Hint: If B is as in part (a) of Exercise 10.3.7, then MB(T) = [[aP, cP],[bP, dP]] in block form, where Q = [[a, b],[c, d]]. If instead B = {[[1, 0],[0, 0]], [[0, 1],[0, 0]], [[0, 0],[1, 0]], [[0, 0],[0, 1]]}, then MB(T) = [[pQᵀ, qQᵀ],[rQᵀ, sQᵀ]], where P = [[p, q],[r, s]]. Use the fact that cP = bPᵀ ⇒ (c² − b²)P = 0.]

Exercise 10.3.11 Let T : V → W be any linear transformation and let B = {b1, …, bn} and D = {d1, …, dm} be bases of V and W, respectively. If W is an inner product space and D is orthogonal, show that

MDB(T) = [ ⟨di, T(bj)⟩ / ‖di‖² ]

This is a generalization of Theorem 10.3.2.

Exercise 10.3.12 Let T : V → V be a linear operator on an inner product space V of finite dimension. Show that the following are equivalent.

1. ⟨v, T(w)⟩ = −⟨T(v), w⟩ for all v and w in V.

b. Using the standard inner product on R², show that T : R² → R² with T(a, b) = (a, a + b) satisfies condition (i) and that S : R² → R² with S(a, b) = (b, −a) satisfies condition (ii), but that neither is symmetric. (Example 9.3.4 is useful for S.) [Hint for part (a): If conditions (i) and (ii) hold, proceed by induction on n. By condition (i), let e1 be an eigenvector of T. If U = Re1, then U⊥ is T-invariant by condition (ii), so show that the restriction of T to U⊥ satisfies conditions (i) and (ii). (Theorem 9.3.1 is helpful for part (i).) Then apply induction to show that V has an orthogonal basis of eigenvectors (as in Theorem 10.3.6).]
Exercise 10.3.14 Let B = {f1, f2, …, fn} be an orthonormal basis of an inner product space V. Given T : V → V, define T′ : V → V by

T′(v) = ⟨v, T(f1)⟩f1 + ⟨v, T(f2)⟩f2 + ⋯ + ⟨v, T(fn)⟩fn = Σᵢ ⟨v, T(fi)⟩fi

a. Show that (aT)′ = aT′.

Exercise 10.3.15 Let V be a finite dimensional inner product space. Show that the following conditions are equivalent for a linear operator T : V → V.

1. T is symmetric and T² = T.

2. MB(T) = [[Ir, 0],[0, 0]] for some orthonormal basis B of V.
10.4 Isometries
We saw in Section 2.6 that rotations about the origin and reflections in a line through the origin are linear
operators on R2 . Similar geometric arguments (in Section 4.4) establish that, in R3 , rotations about a line
through the origin and reflections in a plane through the origin are linear. We are going to give an algebraic
proof of these results that is valid in any inner product space. The key observation is that reflections and
rotations are distance preserving in the following sense. If V is an inner product space, a transformation S : V → V (not necessarily linear) is said to be distance preserving if the distance between S(v) and S(w) is the same as the distance between v and w for all vectors v and w; more formally, if

‖S(v) − S(w)‖ = ‖v − w‖ for all v and w in V    (10.2)
Distance-preserving maps need not be linear. For example, if u is any vector in V, the transformation Su : V → V defined by Su(v) = v + u for all v in V is called translation by u, and it is routine to verify that Su is distance preserving for any u. However, Su is linear only if u = 0, because a linear transformation must satisfy Su(0) = 0 whereas Su(0) = u. Remarkably, distance-preserving operators that do fix the origin are necessarily linear.
Lemma 10.4.1
Let V be an inner product space of dimension n, and consider a distance-preserving transformation
S : V → V . If S(0) = 0, then S is linear.
Proof. We have ‖S(v) − S(w)‖² = ‖v − w‖² for all v and w in V by (10.2), which (expanding both sides, and using S(0) = 0 to see that ‖S(v)‖ = ‖v‖) gives

⟨S(v), S(w)⟩ = ⟨v, w⟩ for all v and w in V    (10.3)

Now let {f1, f2, …, fn} be an orthonormal basis of V. Then {S(f1), S(f2), …, S(fn)} is orthonormal by (10.3) and so is a basis because dim V = n. Now compute:
hS(v + w) − S(v) − S(w), S(fi )i = hS(v + w), S(fi )i − hS(v), S(fi )i − hS(w), S(fi )i
= hv + w, fi i − hv, fi i − hw, fi i
=0
for each i. It follows from the expansion theorem (Theorem 10.2.4) that S(v + w) − S(v) − S(w) = 0; that
is, S(v + w) = S(v) + S(w). A similar argument shows that S(av) = aS(v) holds for all a in R and v in V ,
so S is linear after all.
It is routine to verify that the composite of two distance-preserving transformations is again distance
preserving. In particular the composite of a translation and an isometry is distance preserving. Surpris-
ingly, the converse is true.
Theorem 10.4.1
If V is a finite dimensional inner product space, then every distance-preserving transformation
S : V → V is the composite of a translation and an isometry.
Proof. If S : V → V is distance preserving, write S(0) = u and define T : V → V by T (v) = S(v) − u for
all v in V . Then kT (v) − T (w)k = kv − wk for all vectors v and w in V as the reader can verify; that is, T
is distance preserving. Clearly, T(0) = 0, so it is an isometry by Lemma 10.4.1. Since S(v) = T(v) + u = (Su ∘ T)(v) for all v in V, S is the composite of the translation Su and the isometry T.
Theorem 10.4.2
Let T : V → V be a linear operator on a finite dimensional inner product space V .
The following conditions are equivalent:
1. T is an isometry. (T preserves distance)
2. kT (v)k = kvk for all v in V . (T preserves norms)
3. hT (v), T (w)i = hv, wi for all v and w in V . (T preserves inner products)
4. If {f1 , f2 , . . . , fn } is an orthonormal basis of V ,
then {T (f1 ), T (f2 ), . . . , T (fn )} is also an orthonormal basis. (T preserves orthonormal bases)
5. T carries some orthonormal basis to an orthonormal basis.
Corollary 10.4.1
Let V be a finite dimensional inner product space.

1. Every isometry of V is an isomorphism.

2. a. 1V : V → V is an isometry.

b. The composite of two isometries of V is an isometry.

c. The inverse of an isometry of V is an isometry.
Proof. (1) is by (4) of Theorem 10.4.2 and Theorem 7.3.1. (2a) is clear, and (2b) is left to the reader. If
T : V → V is an isometry and {f1 , . . . , fn } is an orthonormal basis of V , then (2c) follows because T −1
carries the orthonormal basis {T (f1 ), . . . , T (fn )} back to {f1 , . . . , fn }.
The conditions in part (2) of the corollary assert that the set of isometries of a finite dimensional inner
product space forms an algebraic system called a group. The theory of groups is well developed, and
groups of operators are important in geometry. In fact, geometry itself can be fruitfully viewed as the
study of those properties of a vector space that are preserved by a group of invertible linear operators.
Example 10.4.1
Rotations of R2 about the origin are isometries, as are reflections in lines through the origin: They
clearly preserve distance and so are linear by Lemma 10.4.1. Similarly, rotations about lines
through the origin and reflections in planes through the origin are isometries of R3 .
Example 10.4.2
Let T : Mnn → Mnn be the transposition operator: T(A) = Aᵀ. Then T is an isometry if the inner product is ⟨A, B⟩ = tr(ABᵀ) = Σᵢ,ⱼ aᵢⱼbᵢⱼ. In fact, T permutes the basis consisting of all matrices with one entry 1 and the other entries 0.
The proof of the next result requires the fact (see Theorem 10.4.2) that, if B is an orthonormal basis,
then hv, wi = CB (v) ·CB (w) for all vectors v and w.
Theorem 10.4.3
Let T : V → V be an operator where V is a finite dimensional inner product space. The following conditions are equivalent.

1. T is an isometry.

2. MB(T) is an orthogonal matrix for every orthonormal basis B of V.

3. MB(T) is an orthogonal matrix for some orthonormal basis B of V.

Proof. (1) ⇒ (2). Let B = {e1, …, en} be an orthonormal basis. Then the jth column of MB(T) is CB[T(ej)], and we have

CB[T(ei)] · CB[T(ej)] = ⟨T(ei), T(ej)⟩ = ⟨ei, ej⟩

using (1). Hence the columns of MB(T) are orthonormal in Rⁿ, which proves (2).

(2) ⇒ (3). This is clear.

(3) ⇒ (1). Let B = {e1, …, en} be as in (3). Then, as before, ⟨T(ei), T(ej)⟩ = ⟨ei, ej⟩, so {T(e1), …, T(en)} is an orthonormal basis and T is an isometry by part (5) of Theorem 10.4.2.
Corollary 10.4.2
If T : V → V is an isometry where V is a finite dimensional inner product space, then det T = ±1.
Example 10.4.3
If A is any n × n matrix, the matrix operator TA : Rn → Rn is an isometry if and only if A is
orthogonal using the dot product in Rn . Indeed, if E is the standard basis of Rn , then ME (TA ) = A
by Theorem 9.2.4.
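The orthogonality condition is easy to test numerically. A small sketch (Python with numpy; our own illustration, not part of the text):

```python
import numpy as np

# T_A preserves lengths exactly when A^T A = I (A orthogonal).
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix

print(np.allclose(A.T @ A, np.eye(2)))            # True: A is orthogonal

rng = np.random.default_rng(1)
v = rng.standard_normal(2)
print(np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v)))  # True
```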
Rotations and reflections that fix the origin are isometries in R2 and R3 (Example 10.4.1); we are going
to show that these isometries (and compositions of them in R3 ) are the only possibilities. In fact, this will
follow from a general structure theorem for isometries. Surprisingly enough, much of the work involves
the two-dimensional case.
Theorem 10.4.4
Let T : V → V be an isometry on the two-dimensional inner product space V. Then there are two possibilities.

Either (1) There is an orthonormal basis B of V such that

MB(T) = [[cos θ, −sin θ],[sin θ, cos θ]], 0 ≤ θ < 2π

or (2) There is an orthonormal basis B of V such that

MB(T) = [[1, 0],[0, −1]]

Furthermore, type (1) occurs if and only if det T = 1, and type (2) occurs if and only if det T = −1.
Proof. The final statement follows from the rest because det T = det[MB(T)] for any basis B. Let B0 = {e1, e2} be any ordered orthonormal basis of V and write

A = MB0(T) = [[a, b],[c, d]]; that is, T(e1) = ae1 + ce2 and T(e2) = be1 + de2

Then A is orthogonal by Theorem 10.4.3, so its columns (and rows) are orthonormal. Hence

a² + c² = 1 = b² + d²

so (a, c) and (d, b) lie on the unit circle. Thus angles θ and φ exist such that

a = cos θ, c = sin θ and d = cos φ, b = sin φ

Then sin(θ + φ) = cd + ab = 0 because the columns of A are orthogonal, so θ + φ = kπ for some integer k. This gives d = cos(kπ − θ) = (−1)ᵏ cos θ and b = sin(kπ − θ) = (−1)ᵏ⁺¹ sin θ. Finally

A = [[cos θ, (−1)ᵏ⁺¹ sin θ],[sin θ, (−1)ᵏ cos θ]]

If k is even we are in type (1) with B = B0, so assume k is odd. Then A = [[a, c],[c, −a]]. If a = −1 and c = 0, we are in type (2) with B = {e2, e1}. Otherwise A has eigenvalues λ1 = 1 and λ2 = −1 with corresponding eigenvectors x1 = (1 + a, c)ᵀ and x2 = (−c, 1 + a)ᵀ, as the reader can verify. Write f1 = (1 + a)e1 + ce2 and f2 = −ce1 + (1 + a)e2. Then f1 and f2 are orthogonal (verify) and CB0(fi) = xi, so T(fi) = λifi for each i. Hence, after normalizing f1 and f2, we obtain an orthonormal basis B with MB(T) = [[1, 0],[0, −1]], and we are in type (2).
Corollary 10.4.3
An operator T : R² → R² is an isometry if and only if T is a rotation or a reflection.

In fact, if E is the standard basis of R², then the counterclockwise rotation Rθ about the origin through an angle θ has matrix

ME(Rθ) = [[cos θ, −sin θ],[sin θ, cos θ]]

(see Theorem 2.6.4). On the other hand, if S : R² → R² is the reflection in a line through the origin (called the fixed line of the reflection), let f1 be a unit vector pointing along the fixed line and let f2 be a unit vector perpendicular to the fixed line. Then B = {f1, f2} is an orthonormal basis, S(f1) = f1 and S(f2) = −f2, so

MB(S) = [[1, 0],[0, −1]]

Thus S is of type (2). Note that, in this case, 1 is an eigenvalue of S, and any eigenvector corresponding to 1 is a direction vector for the fixed line.
Example 10.4.4
In each case, determine whether TA : R² → R² is a rotation or a reflection, and then find the angle or fixed line:

(a) A = (1/2)[[1, √3],[−√3, 1]]    (b) A = (1/5)[[−3, 4],[4, 3]]

Solution. Both matrices are orthogonal, so (because ME(TA) = A, where E is the standard basis) TA is an isometry in both cases. In the first case, det A = 1, so TA is a counterclockwise rotation through θ, where cos θ = 1/2 and sin θ = −√3/2. Thus θ = −π/3. In (b), det A = −1, so TA is a reflection in this case. We verify that d = (1, 2)ᵀ is an eigenvector corresponding to the eigenvalue 1. Hence the fixed line Rd has equation y = 2x.
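The case analysis in this example is mechanical enough to automate. The helper below is a hypothetical sketch (Python with numpy; the function name and structure are ours, not the text's) implementing the classification of Corollary 10.4.3:

```python
import numpy as np

def classify_isometry_2d(A):
    """Classify an orthogonal 2x2 matrix: rotation (with angle) or
    reflection (with a direction vector for the fixed line)."""
    assert np.allclose(A.T @ A, np.eye(2)), "A must be orthogonal"
    if np.isclose(np.linalg.det(A), 1.0):
        # Rotation: A = [[cos t, -sin t], [sin t, cos t]].
        return "rotation", np.arctan2(A[1, 0], A[0, 0])
    # Reflection: the fixed line is spanned by an eigenvector for 1.
    evals, evecs = np.linalg.eig(A)
    d = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    return "reflection", d

A1 = 0.5 * np.array([[1.0, np.sqrt(3.0)], [-np.sqrt(3.0), 1.0]])
A2 = 0.2 * np.array([[-3.0, 4.0], [4.0, 3.0]])
print(classify_isometry_2d(A1))   # ('rotation', -1.047...) i.e. theta = -pi/3
print(classify_isometry_2d(A2))   # ('reflection', direction along (1, 2))
```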
We now give a structure theorem for isometries. The proof requires three preliminary results, each of
interest in its own right.
Lemma 10.4.2
Let T : V → V be an isometry of a finite dimensional inner product space V . If U is a T -invariant
subspace of V , then U ⊥ is also T -invariant.
Proof. Let w lie in U ⊥. We are to prove that T (w) is also in U ⊥ ; that is, hT (w), ui = 0 for all u in U . At
this point, observe that the restriction of T to U is an isometry U → U and so is an isomorphism by the
corollary to Theorem 10.4.2. In particular, each u in U can be written in the form u = T (u1 ) for some u1
in U , so
hT (w), ui = hT (w), T (u1 )i = hw, u1 i = 0
because w is in U ⊥. This is what we wanted.
To employ Lemma 10.4.2 above to analyze an isometry T : V → V when dim V = n, it is necessary to show that a T-invariant subspace U exists such that U ≠ 0 and U ≠ V. We will show, in fact, that such a subspace U can always be found of dimension 1 or 2. If T has a real eigenvalue λ then Ru is T-invariant where u is any λ-eigenvector. But, in case (1) of Theorem 10.4.4, the eigenvalues of T are e^{iθ} and e^{−iθ} (the reader should check this), and these are nonreal if θ ≠ 0 and θ ≠ π. It turns out that every complex eigenvalue λ of T has absolute value 1 (Lemma 10.4.3 below), and that V has a T-invariant subspace of dimension 2 if λ is not real (Lemma 10.4.4).
Lemma 10.4.3
Let T : V → V be an isometry of the finite dimensional inner product space V . If λ is a complex
eigenvalue of T , then |λ | = 1.
Proof. Choose an orthonormal basis B of V, and let A = MB(T). Then A is a real orthogonal matrix so, using the standard inner product ⟨x, y⟩ = xᵀȳ in Cⁿ, we get

‖Ax‖² = (Ax)ᵀ(Ax̄) = xᵀ(AᵀA)x̄ = xᵀx̄ = ‖x‖²

for all x in Cⁿ. But Ax = λx for some x ≠ 0, whence ‖x‖² = ‖λx‖² = |λ|²‖x‖². This gives |λ| = 1, as required.
Lemma 10.4.4
Let T : V → V be an isometry of the n-dimensional inner product space V . If T has a nonreal
eigenvalue, then V has a two-dimensional T -invariant subspace.
Proof. Let B be an orthonormal basis of V, let A = MB(T), and (using Lemma 10.4.3) let λ = e^{iα} be a nonreal eigenvalue of A, say Ax = λx where x ≠ 0 in Cⁿ. Because A is real, complex conjugation gives Ax̄ = λ̄x̄, so λ̄ is also an eigenvalue. Moreover λ ≠ λ̄ (λ is nonreal), so {x, x̄} is linearly independent in Cⁿ (the argument in the proof of Theorem 5.5.4 works). Now define

z1 = x + x̄ and z2 = i(x − x̄)

Then z1 and z2 lie in Rⁿ, and {z1, z2} is linearly independent over R because {x, x̄} is linearly independent over C. Moreover

x = (1/2)(z1 − iz2) and x̄ = (1/2)(z1 + iz2)

Now λ + λ̄ = 2 cos α and λ − λ̄ = 2i sin α, and a routine computation gives

Az1 = (cos α)z1 + (sin α)z2 and Az2 = (−sin α)z1 + (cos α)z2

Hence, if f1 and f2 are the vectors in V with CB(f1) = z1 and CB(f2) = z2, then CB[T(fj)] = Azj using Theorem 9.1.2. Because CB is one-to-one, this gives the first of the following equations (the other is similar):

T(f1) = (cos α)f1 + (sin α)f2 and T(f2) = (−sin α)f1 + (cos α)f2

It follows that U = span{f1, f2} is a two-dimensional T-invariant subspace of V.
Theorem 10.4.5
Let T : V → V be an isometry of the n-dimensional inner product space V. Given an angle θ, write R(θ) = [[cos θ, −sin θ],[sin θ, cos θ]]. Then there exists an orthonormal basis B of V such that MB(T) has one of the following block diagonal forms, classified for convenience by whether n is odd or even:

n = 2k + 1:  MB(T) = diag(1, R(θ1), …, R(θk))  or  diag(−1, R(θ1), …, R(θk))

n = 2k:  MB(T) = diag(R(θ1), R(θ2), …, R(θk))  or  diag(−1, 1, R(θ1), …, R(θk−1))
Proof. We show first, by induction on n, that an orthonormal basis B of V can be found such that MB(T) is a block diagonal matrix of the following form:

MB(T) = diag(Ir, −Is, R(θ1), …, R(θt))
where the identity matrix Ir , the matrix −Is , or the matrices R(θi ) may be missing. If n = 1 and V = Rv,
this holds because T (v) = λ v and λ = ±1 by Lemma 10.4.3. If n = 2, this follows from Theorem 10.4.4. If
n ≥ 3, either T has a real eigenvalue and therefore has a one-dimensional T -invariant subspace U = Ru for
any eigenvector u, or T has no real eigenvalue and therefore has a two-dimensional T -invariant subspace
U by Lemma 10.4.4. In either case U ⊥ is T -invariant (Lemma 10.4.2) and dim U ⊥ = n − dim U < n.
Hence, by induction, let B1 and B2 be orthonormal bases of U and U ⊥ such that MB1 (T ) and MB2 (T ) have
the form given. Then B = B1 ∪ B2 is an orthonormal basis of V , and MB (T ) has the desired form with a
suitable ordering of the vectors in B.
Now observe that R(0) = [[1, 0],[0, 1]] and R(π) = [[−1, 0],[0, −1]]. It follows that an even number of 1s or −1s can be written as R(θi)-blocks. Hence, with a suitable reordering of the basis B, the theorem follows.
As in the dimension 2 situation, these possibilities can be given a geometric interpretation when V = R3
is taken as euclidean space. As before, this entails looking carefully at reflections and rotations in R3 . If
Q : R3 → R3 is any reflection in a plane through the origin (called the fixed plane of the reflection), take
{f2 , f3 } to be any orthonormal basis of the fixed plane and take f1 to be a unit vector perpendicular to
the fixed plane. Then Q(f1 ) = −f1 , whereas Q(f2 ) = f2 and Q(f3 ) = f3 . Hence B = {f1 , f2 , f3 } is an
orthonormal basis such that
MB(Q) =
[ −1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
Similarly, suppose that R : R³ → R³ is any rotation about a line through the origin (called the axis of the rotation), and let f1 be a unit vector pointing along the axis, so R(f1) = f1. Now the plane through the origin perpendicular to the axis is an R-invariant subspace of R³ of dimension 2, and the restriction of R to this plane is a rotation. Hence, by Theorem 10.4.4, there is an orthonormal basis B1 = {f2, f3} of this plane such that MB1(R) = [[cos θ, −sin θ],[sin θ, cos θ]]. But then B = {f1, f2, f3} is an orthonormal basis of R³ such that the matrix of R is

MB(R) =
[ 1 0 0 ]
[ 0 cos θ −sin θ ]
[ 0 sin θ cos θ ]
However, Theorem 10.4.5 shows that there are isometries T in R³ of a third type: those with a matrix of the form

MB(T) =
[ −1 0 0 ]
[ 0 cos θ −sin θ ]
[ 0 sin θ cos θ ]
If B = {f1 , f2 , f3 }, let Q be the reflection in the plane spanned by f2 and f3 , and let R be the ro-
tation corresponding to θ about the line spanned by f1 . Then MB (Q) and MB (R) are as above, and
MB (Q)MB (R) = MB (T ) as the reader can verify. This means that MB (QR) = MB (T ) by Theorem 9.2.1,
and this in turn implies that QR = T because MB is one-to-one (see Exercise 9.1.26). A similar argument
shows that RQ = T , and we have Theorem 10.4.6.
Theorem 10.4.6
If T : R³ → R³ is an isometry, there are three possibilities.

a. T is a rotation, and MB(T) = diag(1, R(θ)) = [[1, 0, 0],[0, cos θ, −sin θ],[0, sin θ, cos θ]] for some orthonormal basis B.

b. T is a reflection, and MB(T) = [[−1, 0, 0],[0, 1, 0],[0, 0, 1]] for some orthonormal basis B.

c. T = QR = RQ where Q is a reflection, R is a rotation about an axis perpendicular to the fixed plane of Q, and MB(T) = [[−1, 0, 0],[0, cos θ, −sin θ],[0, sin θ, cos θ]] for some orthonormal basis B.

In every case, T is a rotation if and only if det T = 1.

Proof. It remains only to verify the final observation that T is a rotation if and only if det T = 1. But clearly det T = 1 in case (a), while det T = −1 in parts (b) and (c).
A useful way of analyzing a given isometry T : R3 → R3 comes from computing the eigenvalues of T .
Because the characteristic polynomial of T has degree 3, it must have a real root. Hence, there must be at
least one real eigenvalue, and the only possible real eigenvalues are ±1 by Lemma 10.4.3. Thus Table 10.1
includes all possibilities.
Table 10.1

Eigenvalues of T | Action of T

(1) 1, no other real eigenvalues | Rotation about the line Rf, where f is an eigenvector corresponding to 1. [Case (a) of Theorem 10.4.6.]

(2) −1, no other real eigenvalues | Rotation about the line Rf followed by reflection in the plane (Rf)⊥, where f is an eigenvector corresponding to −1. [Case (c) of Theorem 10.4.6.]

(3) −1, 1, 1 | Reflection in the plane (Rf)⊥, where f is an eigenvector corresponding to −1. [Case (b) of Theorem 10.4.6.]

(4) 1, −1, −1 | This is as in (1) with a rotation of π.

(5) −1, −1, −1 | Here T(x) = −x for all x. This is (2) with a rotation of π.
Example 10.4.5
Analyze the isometry T : R³ → R³ given by T(x, y, z) = (y, z, −x).

Solution. If B0 is the standard basis of R³, then

MB0(T) =
[ 0 1 0 ]
[ 0 0 1 ]
[ −1 0 0 ]

so cT(x) = x³ + 1 = (x + 1)(x² − x + 1). This is (2) in Table 10.1. Write:

f1 = (1/√3)(1, −1, 1)ᵀ  f2 = (1/√6)(1, 2, 1)ᵀ  f3 = (1/√2)(1, 0, −1)ᵀ
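A quick numerical cross-check of this classification (Python with numpy; our own illustration, not part of the text):

```python
import numpy as np

# The matrix of T from Example 10.4.5 in the standard basis.
M = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, 0.0, 0.0]])

evals = np.linalg.eigvals(M)
print(evals)                          # -1 and the nonreal pair (1 ± i*sqrt(3))/2
print(np.allclose(np.abs(evals), 1))  # True: every eigenvalue has modulus 1
# The only real eigenvalue is -1, so T is case (2) of Table 10.1:
# a rotation followed by a reflection.
```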
for some orthonormal basis B = {f1 , f2 , . . . , fn }. Then Q(f1 ) = −f1 whereas Q(u) = u for each u in
U = span {f2 , . . . , fn }. Hence U is called the fixed hyperplane of Q, and Q is called reflection in U .
Note that each hyperplane in V is the fixed hyperplane of a (unique) reflection of V . Clearly, reflections in
R2 and R3 are reflections in this more general sense.
Continuing the analogy with R² and R³, an isometry T : V → V is called a rotation if there exists an orthonormal basis {f1, …, fn} such that

MB(T) =
[ Ir 0 0 ]
[ 0 R(θ) 0 ]
[ 0 0 Is ]

in block form, where R(θ) = [[cos θ, −sin θ],[sin θ, cos θ]], and where either Ir or Is (or both) may be missing. If R(θ) occupies columns i and i + 1 of MB(T), and if W = span{fi, fi+1}, then W is T-invariant and the matrix of T : W → W with respect to {fi, fi+1} is R(θ). Clearly, if W is viewed as a copy of R², then T is a rotation in W. Moreover, T(u) = u holds for all vectors u in the (n − 2)-dimensional subspace U = span{f1, …, fi−1, fi+2, …, fn}, and U is called the fixed axis of the rotation T. In R³, the axis of any rotation is a line (one-dimensional), whereas in R² the axis is U = {0}.
With these definitions, the following theorem is an immediate consequence of Theorem 10.4.5 (the
details are left to the reader).
Theorem 10.4.7
Let T : V → V be an isometry of a finite dimensional inner product space V. Then there exist isometries T1, …, Tk such that

T = Tk Tk−1 ⋯ T2 T1
where each Ti is either a rotation or a reflection, at most one is a reflection, and Ti T j = T j Ti holds
for all i and j. Furthermore, T is a composite of rotations if and only if det T = 1.
Exercise 10.4.9 Let T : V → V be a linear operator. Show that any two of the following conditions implies the third:

1. T is symmetric.

2. T is an involution (T² = 1V).

3. T is an isometry.

[Hint: In all cases, use the definition ⟨v, T(w)⟩ = ⟨T(v), w⟩ of a symmetric operator. For (1) and (3) ⇒ (2), use the fact that, if ⟨T²(v) − v, w⟩ = 0 for all w, then T²(v) = v.]

Exercise 10.4.12 Let S : V → V be a distance preserving transformation where V is finite dimensional.

a. Show that the factorization in the proof of Theorem 10.4.1 is unique. That is, if S = Su ∘ T and S = Su′ ∘ T′ where u, u′ ∈ V and T, T′ : V → V are isometries, show that u = u′ and T = T′.

b. If S = Su ∘ T, u ∈ V, T an isometry, show that w ∈ V exists such that S = T ∘ Sw.
10.5 An Application to Fourier Approximation

If U is an orthogonal basis of a vector space V, the expansion theorem (Theorem 10.2.4) presents a vector v ∈ V as a linear combination of the vectors in U. Of course this requires that the set U is finite since otherwise the linear combination is an infinite sum and makes no sense in V.

However, given an infinite orthogonal set U = {f1, f2, …, fn, …}, we can use the expansion theorem for {f1, f2, …, fn} for each n to get a series of "approximations" vn for a given vector v. A natural question is whether these vn are getting closer and closer to v as n increases. This turns out to be a very fruitful idea.

In this section we shall investigate an important orthogonal set in the space C[−π, π] of continuous functions on the interval [−π, π], using the inner product ⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx. The orthogonal set⁶ in question is

{1, sin x, cos x, sin(2x), cos(2x), sin(3x), cos(3x), …}

⁶ The name honours the French mathematician J.B.J. Fourier (1768–1830) who used these techniques in 1822 to investigate heat conduction in solids.
We leave the verifications to the reader, together with the task of showing that these functions are orthogonal:

⟨sin(kx), sin(mx)⟩ = 0 = ⟨cos(kx), cos(mx)⟩ if k ≠ m

and

⟨sin(kx), cos(mx)⟩ = 0 for all k ≥ 0 and m ≥ 0

(Note that 1 = cos(0x), so the constant function 1 is included.)

Now define the following subspace of C[−π, π]:

Fn = span{1, sin x, cos x, sin(2x), cos(2x), …, sin(nx), cos(nx)}
The aim is to use the approximation theorem (Theorem 10.2.8); so, given a function f in C[−π, π], define the Fourier coefficients of f by

a0 = ⟨f(x), 1⟩/‖1‖² = (1/(2π)) ∫_{−π}^{π} f(x) dx

ak = ⟨f(x), cos(kx)⟩/‖cos(kx)‖² = (1/π) ∫_{−π}^{π} f(x) cos(kx) dx,  k = 1, 2, …

bk = ⟨f(x), sin(kx)⟩/‖sin(kx)‖² = (1/π) ∫_{−π}^{π} f(x) sin(kx) dx,  k = 1, 2, …
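These integrals are easy to evaluate numerically. Here is a sketch (Python, assuming numpy and scipy are available; the function f is the one from Example 10.5.1 below, and the whole block is our own illustration):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # f(x) = pi + x on [-pi, 0), and pi - x on [0, pi] (Example 10.5.1).
    return np.pi + x if x < 0 else np.pi - x

a0 = quad(f, -np.pi, np.pi)[0] / (2 * np.pi)
ak = [quad(lambda x, k=k: f(x) * np.cos(k * x), -np.pi, np.pi)[0] / np.pi
      for k in range(1, 6)]
bk = [quad(lambda x, k=k: f(x) * np.sin(k * x), -np.pi, np.pi)[0] / np.pi
      for k in range(1, 6)]

print(a0)   # pi/2
print(ak)   # approx [4/pi, 0, 4/(9*pi), 0, 4/(25*pi)]
print(bk)   # approx all 0 (f is even)
```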
Theorem 10.5.1
Let f be any continuous real-valued function defined on the interval [−π, π]. If a0, a1, … and b1, b2, … are the Fourier coefficients of f, then given n ≥ 0, the Fourier approximation

fn(x) = a0 + a1 cos x + b1 sin x + ⋯ + an cos(nx) + bn sin(nx)

lies in Fn and satisfies

‖f − fn‖ ≤ ‖f − g‖ for all g in Fn
Example 10.5.1
Find the fifth Fourier approximation to the function f(x) defined on [−π, π] as follows:

f(x) = { π + x if −π ≤ x < 0;  π − x if 0 ≤ x ≤ π }
We say that a function f is an even function if f (x) = f (−x) holds for all x; f is called an odd function
if f (−x) = − f (x) holds for all x. Examples of even functions are constant functions, the even powers x2 ,
x4 , . . . , and cos(kx); these functions are characterized by the fact that the graph of y = f (x) is symmetric
about the y axis. Examples of odd functions are the odd powers x, x3 , . . . , and sin(kx) where k > 0, and
the graph of y = f (x) is symmetric about the origin if f is odd. The usefulness of these functions stems
from the fact that

∫_{−π}^{π} f(x) dx = 0 if f is odd
∫_{−π}^{π} f(x) dx = 2 ∫_{0}^{π} f(x) dx if f is even

These facts often simplify the computations of the Fourier coefficients. For example, bk = 0 for every k if f is even, and a0 = ak = 0 for every k if f is odd. This is because f(x) sin(kx) is odd in the first case and f(x) cos(kx) is odd in the second case.
The functions 1, cos(kx), and sin(kx) that occur in the Fourier approximation for f (x) are all easy to
generate as an electrical voltage (when x is time). By summing these signals (with the amplitudes given
by the Fourier coefficients), it is possible to produce an electrical signal with (the approximation to) f (x)
as the voltage. Hence these Fourier approximations play a fundamental role in electronics.
Finally, the Fourier approximations f1 , f2 , . . . of a function f get better and better as n increases. The
reason is that the subspaces Fn increase:
F1 ⊆ F2 ⊆ F3 ⊆ · · · ⊆ Fn ⊆ · · ·
So, because fn = proj Fn f , we get (see the discussion following Example 10.2.6)
k f − f1 k ≥ k f − f2 k ≥ · · · ≥ k f − fn k ≥ · · ·
These numbers k f − fn k approach zero; in fact, we have the following fundamental theorem.
Theorem 10.5.2
Let f be any continuous function in C[−π , π ]. Then
It shows that f has a representation as an infinite series, called the Fourier series of f :
whenever −π < x < π . A full discussion of Theorem 10.5.2 is beyond the scope of this book. This subject
had great historical impact on the development of mathematics, and has become one of the standard tools
in science and engineering.
Thus the Fourier series for the function f in Example 10.5.1 is

f(x) = π/2 + (4/π)[cos x + (1/3²) cos(3x) + (1/5²) cos(5x) + (1/7²) cos(7x) + ⋯]
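Partial sums of this series can be evaluated numerically to watch the convergence (a short sketch in Python with numpy; our own illustration, not part of the text):

```python
import numpy as np

def partial_sum(x, n):
    # First n nonzero cosine terms: frequencies 1, 3, ..., 2n-1.
    k = np.arange(1, 2 * n, 2)
    return np.pi / 2 + (4 / np.pi) * np.sum(np.cos(np.outer(x, k)) / k**2, axis=1)

x = np.array([0.0, 1.0, 2.0])
for n in (1, 5, 50):
    print(n, partial_sum(x, n))   # approaches f(x) = pi - |x| on (-pi, pi)
print(np.pi - np.abs(x))          # the exact values
```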
Example 10.5.2
Expand f(x) = x on the interval [−π, π] in a Fourier series, and so obtain a series expansion of π/4.

Solution. Here f is an odd function, so all the Fourier cosine coefficients ak are zero. As to the sine coefficients:

bk = (1/π) ∫_{−π}^{π} x sin(kx) dx = (2/k)(−1)^{k+1} for k ≥ 1

where we omit the details of the integration by parts. Hence the Fourier series for x is

x = 2[sin x − (1/2) sin(2x) + (1/3) sin(3x) − (1/4) sin(4x) + ⋯] for −π < x < π

Taking x = π/2 gives π/2 = 2[1 − 1/3 + 1/5 − 1/7 + ⋯], that is

π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯

⁷ We have to be careful at the end points x = π or x = −π because sin(kπ) = sin(−kπ) and cos(kπ) = cos(−kπ).
Exercise 10.5.1 In each case, find the Fourier approximation f5 of the given function in C[−π, π].

a. f(x) = π − x

b. f(x) = |x| = { x if 0 ≤ x ≤ π; −x if −π ≤ x < 0 }

c. f(x) = x²

d. f(x) = { 0 if −π ≤ x < 0; x if 0 ≤ x ≤ π }

Exercise 10.5.2

a. Find f5 for the even function f on [−π, π] satisfying f(x) = x for 0 ≤ x ≤ π.

b. Find f6 for the even function f on [−π, π] satisfying f(x) = sin x for 0 ≤ x ≤ π. [Hint: If k > 1, ∫ sin x cos(kx) dx = (1/2)[cos((k−1)x)/(k−1) − cos((k+1)x)/(k+1)].]

Exercise 10.5.3

a. Prove that ∫_{−π}^{π} f(x) dx = 0 if f is odd and that ∫_{−π}^{π} f(x) dx = 2∫_{0}^{π} f(x) dx if f is even.

b. Prove that (1/2)[f(x) + f(−x)] is even and that (1/2)[f(x) − f(−x)] is odd for any function f. Note that they sum to f(x).

Exercise 10.5.4 Show that {1, cos x, cos(2x), cos(3x), …} is an orthogonal set in C[0, π] with respect to the inner product ⟨f, g⟩ = ∫_{0}^{π} f(x)g(x) dx.

Exercise 10.5.5

a. Show that π²/8 = 1 + 1/3² + 1/5² + ⋯ using Exercise 10.5.1(b).

b. Show that π²/12 = 1 − 1/2² + 1/3² − 1/4² + ⋯ using Exercise 10.5.1(c).
11. Canonical Forms
Given a matrix A, the effect of a sequence of row-operations on A is to produce UA where U is invertible.
Under this “row-equivalence” operation the best that can be achieved is the reduced row-echelon form for
A. If column operations are also allowed, the result is UAV where both U and V are invertible, and the
best outcome under this “equivalence” operation is called the Smith canonical form of A (Theorem 2.5.3).
There are other kinds of operations on a matrix and, in many cases, there is a “canonical” best possible
result.
If A is square, the most important operation of this sort is arguably “similarity” wherein A is carried
to U −1 AU where U is invertible. In this case we say that matrices A and B are similar, and write A ∼ B,
when B = U −1 AU for some invertible matrix U . Under similarity the canonical matrices, called Jordan
canonical matrices, are block triangular with upper triangular “Jordan” blocks on the main diagonal. In
this short chapter we are going to define these Jordan blocks and prove that every matrix is similar to a
Jordan canonical matrix.
Here is the key to the method. Let T : V → V be an operator on an n-dimensional vector space V, and suppose that we can find an ordered basis B of V so that the matrix MB(T) is as simple as possible. Then, if B0 is any ordered basis of V, the matrices MB(T) and MB0(T) are similar; that is,

MB0(T) = P MB(T) P⁻¹ where P = PB0←B

Moreover, P = PB0←B is easily computed from the bases B and B0 (Theorem 9.2.3). This, combined with
the invariant subspaces and direct sums studied in Section 9.3, enables us to calculate the Jordan canonical
form of any square matrix A. Along the way we derive an explicit construction of an invertible matrix P
such that P−1 AP is block triangular.
This technique is important in many ways. For example, if we want to diagonalize an n × n matrix A, let TA : Rⁿ → Rⁿ be the operator given by TA(x) = Ax for all x in Rⁿ, and look for a basis B of Rⁿ such that MB(TA) is diagonal. If B0 = E is the standard basis of Rⁿ, then ME(TA) = A, so

P⁻¹AP = MB(TA) where P = PE←B is the matrix whose columns are the vectors of B
and we have diagonalized A. Thus the “algebraic” problem of finding an invertible matrix P such that
P−1 AP is diagonal is converted into the “geometric” problem of finding a basis B such that MB (TA ) is
diagonal. This change of perspective is one of the most important techniques in linear algebra.
11.1 Block Triangular Form

We have shown (Theorem 8.2.5) that any n × n matrix A with every eigenvalue real is orthogonally similar to an upper triangular matrix U. The following theorem shows that U can be chosen in a special way.

Theorem 11.1.1
Let A be an n × n matrix whose eigenvalues are all real, and write its characteristic polynomial as

cA(x) = (x − λ1)^{m1} (x − λ2)^{m2} ⋯ (x − λk)^{mk}

where λ1, λ2, …, λk are the distinct eigenvalues of A. Then an invertible matrix P exists such that

P⁻¹AP = diag(U1, U2, …, Uk)

where, for each i, Ui is an mi × mi upper triangular matrix with every entry on the main diagonal equal to λi.
The proof is given at the end of this section. For now, we focus on a method for finding the matrix P. The key concept is as follows: if λi is an eigenvalue of A of multiplicity mi as above, the generalized eigenspace Gλi(A) is defined by

Gλi(A) = null[(λiI − A)^{mi}]

Observe that the eigenspace Eλi(A) = null(λiI − A) is a subspace of Gλi(A). We need three technical results.
Lemma 11.1.1
Using the notation of Theorem 11.1.1, we have dim [Gλi (A)] = mi .
Proof. Write Ai = (λi I − A)mi for convenience and let P be as in Theorem 11.1.1. The spaces
Gλi (A) = null (Ai ) and null (P−1 Ai P) are isomorphic via x ↔ P−1 x, so we show dim [ null (P−1 Ai P)] = mi .
Now P⁻¹AiP = (λiI − P⁻¹AP)^{mi}. If we use the block form in Theorem 11.1.1, this becomes

P⁻¹AiP = [diag(λiI − U1, λiI − U2, …, λiI − Uk)]^{mi}
        = diag((λiI − U1)^{mi}, (λiI − U2)^{mi}, …, (λiI − Uk)^{mi})
The matrix (λi I −U j )mi is invertible if j 6= i and zero if j = i (because then Ui is an mi ×mi upper triangular
matrix with each entry on the main diagonal equal to λi ). It follows that mi = dim [ null (P−1 Ai P)], as
required.
Lemma 11.1.2
If P is as in Theorem 11.1.1, denote the columns of P as follows:

p11, p12, …, p1m1; p21, p22, …, p2m2; …; pk1, pk2, …, pkmk

Then {pi1, pi2, …, pimi} is a basis of Gλi(A) for each i.

Proof. It suffices by Lemma 11.1.1 to show that each pij is in Gλi(A) (the pij are independent because P is invertible). Write the matrix in Theorem 11.1.1 as P⁻¹AP = diag(U1, U2, …, Uk). Then

AP = P diag(U1, U2, …, Uk)

Comparing columns shows that Apij = λipij + (a linear combination of pi1, …, pi,j−1), because Ui is upper triangular with λi on the main diagonal. Hence (λiI − A)pij lies in span{pi1, …, pi,j−1}, and it follows by induction on j that (λiI − A)^{j}pij = 0. Since j ≤ mi, this shows pij is in Gλi(A).
Lemma 11.1.3
If Bi is any basis of Gλi (A), then B = B1 ∪ B2 ∪ · · · ∪ Bk is a basis of Rn .
Proof. It suffices by Lemma 11.1.1 to show that B is independent (the dimensions add to n). If a linear combination from B vanishes, let xi be the sum of the terms from Bi. Then x1 + ⋯ + xk = 0. But xi = Σj rijpij by Lemma 11.1.2, so Σi,j rijpij = 0. The pij are the columns of the invertible matrix P, so they are independent; hence each rij = 0, so each xi = 0, and then each coefficient in xi is zero because Bi is independent.
Lemma 11.1.2 suggests an algorithm for finding the matrix P in Theorem 11.1.1. Observe that there is
an ascending chain of subspaces leading from Eλi (A) to Gλi (A):
Eλi (A) = null [(λi I − A)] ⊆ null [(λi I − A)2 ] ⊆ · · · ⊆ null [(λi I − A)mi ] = Gλi (A)
Triangulation Algorithm
Suppose A has characteristic polynomial

cA(x) = (x − λ1)^{m1} (x − λ2)^{m2} ⋯ (x − λk)^{mk}
1. Choose a basis of null [(λ1I − A)]; enlarge it by adding vectors (possibly none) to a basis of
null [(λ1I − A)2 ]; enlarge that to a basis of null [(λ1I − A)3 ], and so on. Continue to obtain an
ordered basis {p11 , p12 , . . . , p1m1 } of Gλ1 (A).
2. As in (1) choose a basis {pi1 , pi2 , . . . , pimi } of Gλi (A) for each i.
3. Let P = p11 p12 · · · p1m1 ; p21 p22 · · · p2m2 ; · · · ; pk1 pk2 · · · pkmk be the matrix with these
basis vectors (in order) as columns.
Proof. Lemma 11.1.3 guarantees that B = {p11, …, pkmk} is a basis of Rⁿ, and Theorem 9.2.4 shows that P⁻¹AP = MB(TA). Now Gλi(A) is TA-invariant for each i because A commutes with (λiI − A)^{mi}. Hence

MB(TA) = diag(U1, U2, …, Uk)

where Ui is the matrix of the restriction of TA to Gλi(A), and it remains to show that Ui has the desired upper triangular form. Given s, let pij be a basis vector in null[(λiI − A)^{s+1}]. Then (λiI − A)pij is in null[(λiI − A)^{s}], and therefore is a linear combination of the basis vectors pit coming before pij. Hence

Apij = λipij − (λiI − A)pij

shows that the column of Ui corresponding to pij has λi on the main diagonal and zeros below the main diagonal. This is what we wanted.
Example 11.1.1

If A =
[ 2 0 0 1 ]
[ 0 2 0 −1 ]
[ −1 1 2 0 ]
[ 0 0 0 2 ]

find P such that P⁻¹AP is block triangular.

Solution. cA(x) = det[xI − A] = (x − 2)⁴, so λ1 = 2 is the only eigenvalue and we are in the case k = 1 of Theorem 11.1.1. Compute:

(2I − A) =
[ 0 0 0 −1 ]
[ 0 0 0 1 ]
[ 1 −1 0 0 ]
[ 0 0 0 0 ]

(2I − A)² =
[ 0 0 0 0 ]
[ 0 0 0 0 ]
[ 0 0 0 −2 ]
[ 0 0 0 0 ]

(2I − A)³ = 0

By gaussian elimination find a basis {p11, p12} of null(2I − A); then extend in any way to a basis {p11, p12, p13} of null[(2I − A)²]; and finally get a basis {p11, p12, p13, p14} of null[(2I − A)³] = R⁴. One choice is

p11 = (1, 1, 0, 0)ᵀ  p12 = (0, 0, 1, 0)ᵀ  p13 = (0, 1, 0, 0)ᵀ  p14 = (0, 0, 0, 1)ᵀ

Hence

P = [p11 p12 p13 p14] =
[ 1 0 0 0 ]
[ 1 0 1 0 ]
[ 0 1 0 0 ]
[ 0 0 0 1 ]

gives

P⁻¹AP =
[ 2 0 0 1 ]
[ 0 2 1 0 ]
[ 0 0 2 −2 ]
[ 0 0 0 2 ]
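The computations in this example are easily verified by machine. A sketch (Python with numpy; our own illustration, not part of the text):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0, 1.0],
              [0.0, 2.0, 0.0, -1.0],
              [-1.0, 1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 2.0]])
N = 2.0 * np.eye(4) - A

# The chain null(N) ⊆ null(N^2) ⊆ null(N^3) = R^4 terminates: N^3 = 0.
print(np.allclose(np.linalg.matrix_power(N, 3), 0))   # True

P = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(np.linalg.inv(P) @ A @ P)   # the upper triangular matrix found above
```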
Example 11.1.2

If A =
[ 2 0 1 1 ]
[ 3 5 4 1 ]
[ −4 −3 −3 −1 ]
[ 1 0 1 2 ]

find P such that P⁻¹AP is block triangular.

Solution. Here cA(x) = (x − 1)²(x − 2)², so the eigenvalues are λ1 = 1 and λ2 = 2, each of multiplicity 2. By solving equations, we find null(I − A) = span{p11} and null[(I − A)²] = span{p11, p12} where

p11 = (1, 1, −2, 1)ᵀ  p12 = (0, 3, −4, 1)ᵀ

Since λ1 = 1 has multiplicity 2 as a root of cA(x), dim Gλ1(A) = 2 by Lemma 11.1.1. Since p11 and p12 both lie in Gλ1(A), we have Gλ1(A) = span{p11, p12}. Turning to λ2 = 2, we find that null(2I − A) = span{p21} and null[(2I − A)²] = span{p21, p22} where

p21 = (1, 0, −1, 1)ᵀ  p22 = (0, −4, 3, 0)ᵀ

Again, dim Gλ2(A) = 2 as λ2 has multiplicity 2, so Gλ2(A) = span{p21, p22}. Hence

P =
[ 1 0 1 0 ]
[ 1 3 0 −4 ]
[ −2 −4 −1 3 ]
[ 1 1 1 0 ]

gives

P⁻¹AP =
[ 1 −3 0 0 ]
[ 0 1 0 0 ]
[ 0 0 2 3 ]
[ 0 0 0 2 ]
Theorem 11.1.2: Cayley-Hamilton Theorem
If A is any square matrix, then cA(A) = 0.

Proof. As in Theorem 11.1.1, write cA(x) = (x − λ1)^{m1} ⋯ (x − λk)^{mk} = Πᵢ (x − λi)^{mi}, and write

cA(A) = (A − λ1I)^{m1} ⋯ (A − λkI)^{mk}
Example 11.1.3

If A = [[1, 3],[−1, 2]], then cA(x) = det [[x − 1, −3],[1, x − 2]] = x² − 3x + 5. Then

cA(A) = A² − 3A + 5I2 = [[−2, 9],[−3, 1]] − [[3, 9],[−3, 6]] + [[5, 0],[0, 5]] = [[0, 0],[0, 0]]
The proof of Theorem 11.1.1 requires the following simple fact about bases, the proof of which we leave
to the reader.
Lemma 11.1.4
If {v1 , v2 , . . . , vn } is a basis of a vector space V , so also is {v1 + sv2 , v2 , . . . , vn } for any scalar s.
Proof of Theorem 11.1.1. Let A be as in Theorem 11.1.1, and let T = TA : Rn → Rn be the matrix
transformation induced by A. For convenience, call a matrix a λ -m-ut matrix if it is an m × m up-
per triangular matrix and every diagonal entry equals λ . Then we must find a basis B of Rn such that
MB (T ) = diag (U1 , U2 , . . . , Uk ) where Ui is a λi -mi -ut matrix for each i. We proceed by induction on n.
If n = 1, take B = {v} where v is any eigenvector of T .
If n > 1, let v1 be a λ1-eigenvector of T, and let B0 = {v1, w1, …, wn−1} be any basis of Rⁿ containing v1. Then (see Lemma 5.5.2)

MB0(T) = [[λ1, X],[0, A1]]

in block form where A1 is (n − 1) × (n − 1). Moreover, A and MB0(T) are similar, so cA(x) = (x − λ1)cA1(x). Hence cA1(x) = (x − λ1)^{m1−1}(x − λ2)^{m2} ⋯ (x − λk)^{mk}, so by induction there is an invertible (n − 1) × (n − 1) matrix Q such that

Q⁻¹A1Q = diag(Z1, U2, …, Uk)

where Z1 is a λ1-(m1 − 1)-ut matrix and Ui is a λi-mi-ut matrix for each i > 1.

If P = [[1, 0],[0, Q]], then P⁻¹MB0(T)P = [[λ1, XQ],[0, Q⁻¹A1Q]] = A′, say. Hence A′ ∼ MB0(T) ∼ A, so by Theorem 9.2.4(2) there is a basis B1 of Rⁿ such that MB1(TA) = A′, that is MB1(T) = A′. Hence MB1(T) takes the block form

MB1(T) = [[λ1, XQ],[0, diag(Z1, U2, …, Uk)]] =
[ λ1 X1 Y ]
[ 0 Z1 0 ]
[ 0 0 diag(U2, …, Uk) ]    (11.1)

where we write XQ = [X1 Y] with X1 comprising the first m1 − 1 columns of XQ. If we write U1 = [[λ1, X1],[0, Z1]], the basis B1 fulfills our needs except that the row matrix Y may not be zero.
We remedy this defect as follows. Observe that the first vector in the basis B1 is a λ1-eigenvector of T, which we continue to denote as v1. The idea is to add suitable scalar multiples of v1 to the other vectors in B1. This results in a new basis by Lemma 11.1.4, and the multiples can be chosen so that the new matrix of T is the same as (11.1) except that Y = 0. Let {w1, …, wm2} be the vectors in B1 corresponding to λ2 (that is, to the block U2), and write w′1 = w1 + sv1 where s is to be chosen. If y1 denotes the entry of Y corresponding to w1, then T(w1) = y1v1 + λ2w1, so

T(w′1) = T(w1) + sλ1v1 = (y1 + sλ1 − sλ2)v1 + λ2w′1

Because λ2 ≠ λ1 we can choose s such that T(w′1) = λ2w′1. Similarly, let w′2 = w2 + tv1 where t is to be chosen. Then, as before, T(w′2) = (y′2 + t(λ1 − λ2))v1 + u12w′1 + λ2w′2 for a constant y′2. Again, t can be chosen so that T(w′2) = u12w′1 + λ2w′2. Continue in this way to eliminate y1, …, ym2. This procedure also works for λ3, λ4, … and so produces a new basis B such that MB(T) is as in (11.1) but with Y = 0.
11.2 The Jordan Canonical Form

Two m × n matrices A and B are called row-equivalent if A can be carried to B using row operations and, equivalently, if B = UA for some invertible matrix U. We know (Theorem 2.6.4) that each m × n matrix is row-equivalent to a unique matrix in reduced row-echelon form, and we say that these reduced row-echelon matrices are canonical forms for m × n matrices using row operations. If we allow column operations as well, then A → UAV = [[Ir, 0],[0, 0]] for invertible U and V, and the canonical forms are the matrices [[Ir, 0],[0, 0]] where r is the rank (this is the Smith normal form and is discussed in Theorem 2.6.3). In this section, we discover the canonical forms for square matrices under similarity: A → P⁻¹AP.
In this section, we discover the canonical forms for square matrices under similarity: A → P−1 AP.
If A is an n × n matrix whose distinct eigenvalues λ1, λ2, …, λk are all real, we saw in Theorem 11.1.1 that A is similar to a block triangular matrix; more precisely, an invertible matrix P exists such that

P⁻¹AP = diag(U1, U2, …, Uk)    (11.2)
where, for each i, Ui is upper triangular with λi repeated on the main diagonal. The Jordan canonical form
is a refinement of this theorem. The proof we gave of (11.2) is matrix theoretic because we wanted to give
an algorithm for actually finding the matrix P. However, we are going to employ abstract methods here.
Consequently, we reformulate Theorem 11.1.1 as follows:
Theorem 11.2.1
Let T : V → V be a linear operator where dim V = n. Assume that λ1 , λ2 , . . . , λk are the distinct
eigenvalues of T , and that the λi are all real. Then there exists a basis F of V such that
MF (T ) = diag (U1 , U2 , . . . , Uk ) where, for each i, Ui is square, upper triangular, with λi repeated
on the main diagonal.
Proof. Choose any basis B = {b1, b2, …, bn} of V and write A = MB(T). Since A has the same eigenvalues as T, Theorem 11.1.1 shows that an invertible matrix P exists such that P⁻¹AP = diag(U1, U2, …, Uk) where the Ui are as in the statement of the Theorem. If pj denotes column j of P and CB : V → Rⁿ is the coordinate isomorphism, let fj = CB⁻¹(pj) for each j. Then F = {f1, f2, …, fn} is a basis of V and CB(fj) = pj for each j. This means that PB←F = [p1 p2 ⋯ pn] = P, and hence (by Theorem 9.2.2) that PF←B = P⁻¹. With this, column j of MF(T) is

CF[T(fj)] = PF←B CB[T(fj)] = P⁻¹ A CB(fj) = P⁻¹APej

which is column j of P⁻¹AP = diag(U1, U2, …, Uk), as required.
If λ is a real number, the n × n Jordan block Jn(λ) is the matrix with λ repeated on the main diagonal, 1s on the diagonal directly above, and 0s elsewhere. Hence

J1(λ) = [λ],  J2(λ) = [[λ, 1],[0, λ]],  J3(λ) =
[ λ 1 0 ]
[ 0 λ 1 ]
[ 0 0 λ ]
,  J4(λ) =
[ λ 1 0 0 ]
[ 0 λ 1 0 ]
[ 0 0 λ 1 ]
[ 0 0 0 λ ]
, …
We are going to show that Theorem 11.2.1 holds with each block Ui replaced by Jordan blocks corre-
sponding to eigenvalues. It turns out that the whole thing hinges on the case λ = 0. An operator T is
called nilpotent if T m = 0 for some m ≥ 1, and in this case λ = 0 for every eigenvalue λ of T . Moreover,
the converse holds by Theorem 11.1.1. Hence the following lemma is crucial.
Lemma 11.2.1
Let T : V → V be a linear operator where dim V = n, and assume that T is nilpotent; that is, T^m = 0 for some m ≥ 1. Then V has a basis B such that

MB(T) = diag(J1, J2, …, Jk)

where each Ji is a Jordan block corresponding to λ = 0.¹

¹ The converse is true too: If MB(T) has this form for some basis B of V, then T is nilpotent.
Theorem 11.2.2: Real Jordan Canonical Form
Let T : V → V be a linear operator where dim V = n, and assume that the distinct eigenvalues λ1, λ2, …, λk of T are all real. Then there exists a basis B of V such that

MB(T) = diag(U1, U2, …, Uk)

in block form, where, for each j,

Uj = diag(J1, J2, …, Jtj)

and each Ji here is a Jordan block corresponding to λj.

Proof. By Theorem 11.2.1, let E = E1 ∪ E2 ∪ ⋯ ∪ Ek be a basis of V such that ME(T) = diag(U1, U2, …, Uk), where each Ui is an ni × ni upper triangular matrix and Ei consists of the basis vectors corresponding to Ui, so that n1 + ⋯ + nk = n, and define Vi = span{Ei} for each i. Because the matrix ME(T) = diag(U1, U2, …, Uk) is block diagonal, it follows that each Vi is T-invariant and MEi(T) = Ui for each i. Let Ui have λi repeated along the main diagonal, and consider the restriction T : Vi → Vi. Then MEi(T − λiIni) is a nilpotent matrix, and hence (T − λiIni) is a nilpotent operator on Vi. But then Lemma 11.2.1 shows that Vi has a basis Bi such that MBi(T − λiIni) = diag(K1, K2, …, Kti) where each Ki is a Jordan block corresponding to λ = 0. Hence MBi(T) = λiIni + diag(K1, K2, …, Kti) is block diagonal with Jordan blocks corresponding to λi on the diagonal, so

B = B1 ∪ B2 ∪ ⋯ ∪ Bk

is a basis of V for which MB(T) has the desired form.
Corollary 11.2.1
If A is an n × n matrix with real eigenvalues, an invertible matrix P exists such that
P−1 AP = diag (J1 , J2 , . . . , Jk ) where each Ji is a Jordan block corresponding to an eigenvalue λi .
Proof. Apply Theorem 11.2.2 to the matrix transformation TA : Rn → Rn to find a basis B of Rn such that
MB (TA ) has the desired form. If P is the (invertible) n × n matrix with the vectors of B as its columns, then
P−1 AP = MB (TA ) by Theorem 9.2.4.
Of course if we work over the field C of complex numbers rather than R, the characteristic polynomial of a (complex) matrix A splits completely as a product of linear factors. The proof of Theorem 11.2.2 goes through to give the complex version: if T : V → V is a linear operator on an n-dimensional complex vector space V, there exists a basis F of V such that

MF(T) = diag(U1, U2, …, Uk)

in block form, where

Uj = diag(J1, J2, …, Jtj)

and each Ji is a (complex) Jordan block corresponding to an eigenvalue λj of T.
Except for the order of the Jordan blocks Ji , the Jordan canonical form is uniquely determined by the
operator T . That is, for each eigenvalue λ the number and size of the Jordan blocks corresponding to λ
is uniquely determined. Thus, for example, two matrices (or two operators) are similar if and only if they
have the same Jordan canonical form. We omit the proof of uniqueness; it is best presented using modules
in a course on abstract algebra.
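Computer algebra systems compute Jordan forms exactly. A sketch using sympy (assuming sympy is available; jordan_form returns a pair (P, J) with A = PJP⁻¹), applied here to the matrix of Example 11.1.1:

```python
from sympy import Matrix

A = Matrix([[2, 0, 0, 1],
            [0, 2, 0, -1],
            [-1, 1, 2, 0],
            [0, 0, 0, 2]])

P, J = A.jordan_form()
print(J)                       # Jordan blocks for the eigenvalue 2
print(A == P * J * P.inv())    # True
```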
Proof of Lemma 11.2.1

Lemma 11.2.1
Let T : V → V be a linear operator where dim V = n, and assume that T is nilpotent; that is,
T m = 0 for some m ≥ 1. Then V has a basis B such that
MB (T ) = diag (J1, J2 , . . . , Jk )
Proof. The proof proceeds by induction on n. If n = 1, then T is a scalar operator, and so T = 0 and the lemma holds. If n > 1, we may assume that T ≠ 0, and we may assume that m is chosen such that T^m = 0 but T^{m−1} ≠ 0 (so m ≥ 2). Choose u in V such that T^{m−1}u ≠ 0.³
Claim. {u, T u, T 2 u, . . . , T m−1 u} is independent.
Proof. Suppose a0 u+a1 T u+a2 T 2 u+· · ·+am−1 T m−1 u = 0 where each ai is in R. Since T m = 0, applying
T m−1 gives 0 = T m−1 0 = a0 T m−1 u, whence a0 = 0. Hence a1 T u + a2 T 2 u + · · · + am−1 T m−1 u = 0 and
applying T m−2 gives a1 = 0 in the same way. Continue in this fashion to obtain ai = 0 for each i. This
proves the Claim.
Now define P = span{u, Tu, T²u, …, T^{m−1}u}. Then P is a T-invariant subspace (because T^m = 0), and T : P → P is nilpotent with matrix MB(T) = Jm(0) where B = {T^{m−1}u, …, T²u, Tu, u}. Hence we are done, by induction, if V = P ⊕ Q where Q is T-invariant (then dim Q = n − dim P < n because P ≠ 0,
2 This was first proved in 1870 by the French mathematician Camille Jordan (1838–1922) in his monumental Traité des
substitutions et des équations algébriques.
3 If S : V → V is an operator, we abbreviate S(u) by Su for simplicity.
and T : Q → Q is nilpotent). With this in mind, choose a T-invariant subspace Q of maximal dimension such that P ∩ Q = {0}.⁴ We assume that V ≠ P ⊕ Q and look for a contradiction.

Choose x in V such that x ∉ P ⊕ Q. Then T^m x = 0 ∈ P ⊕ Q while T⁰x = x ∉ P ⊕ Q. Hence there exists k, 1 ≤ k ≤ m, such that Tᵏx ∈ P ⊕ Q but T^{k−1}x ∉ P ⊕ Q. Write v = T^{k−1}x, so that

v ∉ P ⊕ Q and Tv ∈ P ⊕ Q

Let Tv = p + q with p in P and q in Q. Then 0 = T^{m−1}(Tv) = T^{m−1}p + T^{m−1}q so, since P and Q are T-invariant, T^{m−1}p = −T^{m−1}q ∈ P ∩ Q = {0}. Hence

T^{m−1}p = 0

Writing p = a0u + a1Tu + ⋯ + am−1T^{m−1}u, this gives a0T^{m−1}u = T^{m−1}p = 0, so a0 = 0 and p = T(p1) where

p1 = a1u + a2Tu + ⋯ + am−1T^{m−2}u ∈ P

If we write v1 = v − p1 we have

T(v1) = T(v − p1) = Tv − p = q ∈ Q

so the subspace Q1 = Q + Rv1 is T-invariant. We claim that P ∩ Q1 = {0}. For if p2 = q1 + av1 lies in P ∩ Q1, where q1 is in Q and a is in R, then av1 = p2 − q1 gives

av = av1 + ap1 ∈ (P ⊕ Q) + P = P ⊕ Q

Since v ∉ P ⊕ Q, this implies that a = 0. But then p2 = q1 ∈ P ∩ Q = {0}, proving the claim. Finally, v1 ∉ Q (otherwise v = v1 + p1 would lie in Q ⊕ P), so Q1 is a T-invariant subspace with P ∩ Q1 = {0} and dim Q1 > dim Q, contradicting the maximality of Q. This completes the proof.
A. Complex Numbers

A complex number is a number of the form

a + bi
where a and b are real numbers, and i is a root of the polynomial x2 + 1. Here a and b are called the real
part and the imaginary part of the complex number, respectively. The real numbers are now regarded as
special complex numbers of the form a + 0i = a, with zero imaginary part. The complex numbers of the
form 0 + bi = bi with zero real part are called pure imaginary numbers. The complex number i itself is
called the imaginary unit and is distinguished by the fact that
i2 = −1
As the terms complex and imaginary suggest, these numbers met with some resistance when they were
first used. This has changed; now they are essential in science and engineering as well as mathematics,
and they are used extensively. The names persist, however, and continue to be a bit misleading: These
numbers are no more “complex” than the real numbers, and the number i is no more “imaginary” than −1.
Much as for polynomials, two complex numbers are declared to be equal if and only if they have the
same real parts and the same imaginary parts. In symbols,
The addition and subtraction of complex numbers is accomplished by adding and subtracting real and
imaginary parts:
(a + bi) + (a′ + b′ i) = (a + a′ ) + (b + b′ )i
(a + bi) − (a′ + b′ i) = (a − a′ ) + (b − b′ )i
This is analogous to these operations for linear polynomials a + bx and a′ + b′x, and the multiplication of complex numbers is also analogous with one difference: i² = −1. The definition is

(a + bi)(a′ + b′i) = (aa′ − bb′) + (ab′ + ba′)i
With these definitions of equality, addition, and multiplication, the complex numbers satisfy all the basic
arithmetical axioms adhered to by the real numbers (the verifications are omitted). One consequence of
this is that they can be manipulated in the obvious fashion, except that i2 is replaced by −1 wherever it
occurs, and the rule for equality must be observed.
Example A.1
If z = 2 − 3i and w = −1 + i, write each of the following in the form a + bi: z + w, z − w, zw, (1/3)z, and z².

Solution.

z + w = (2 − 1) + (−3 + 1)i = 1 − 2i
z − w = (2 − (−1)) + (−3 − 1)i = 3 − 4i
zw = (2 − 3i)(−1 + i) = (−2 + 3) + (2 + 3)i = 1 + 5i
(1/3)z = 2/3 − i
z² = (2 − 3i)(2 − 3i) = (4 − 9) + (−6 − 6)i = −5 − 12i
Example A.2
Find all complex numbers z such that z² = i.

Solution. Write z = a + bi; we must determine a and b. Now z² = (a² − b²) + (2ab)i, so the condition z² = i becomes

(a² − b²) + (2ab)i = 0 + i

Equating real and imaginary parts, we find that a² = b² and 2ab = 1. The solution is a = b = ±1/√2, so the complex numbers required are z = 1/√2 + (1/√2)i and z = −1/√2 − (1/√2)i.
As for real numbers, it is possible to divide by every nonzero complex number z. That is, there exists a complex number w such that wz = 1. As in the real case, this number w is called the inverse of z and is denoted by z⁻¹ or 1/z. Moreover, if z = a + bi, the fact that z ≠ 0 means that a ≠ 0 or b ≠ 0. Hence a² + b² ≠ 0, and an explicit formula for the inverse is

1/z = a/(a² + b²) − [b/(a² + b²)]i
In actual calculations, the work is facilitated by two useful notions: the conjugate and the absolute value
of a complex number. The next example illustrates the technique.
Example A.3
Write (3 + 2i)/(2 + 5i) in the form a + bi.

Solution. Multiply top and bottom by the complex number 2 − 5i (obtained from the denominator by negating the imaginary part). The result is

(3 + 2i)/(2 + 5i) = [(2 − 5i)(3 + 2i)]/[(2 − 5i)(2 + 5i)] = [(6 + 10) + (4 − 15)i]/[2² − (5i)²] = 16/29 − (11/29)i

Hence the simplified form is 16/29 − (11/29)i, as required.
The key to this technique is that the product (2 − 5i)(2 + 5i) = 29 in the denominator turned out to be a real number. The situation in general leads to the following notation: If z = a + bi is a complex number, the conjugate of z is the complex number, denoted z̄, given by

z̄ = a − bi where z = a + bi

Hence z̄ is obtained from z by negating the imaginary part. Thus the conjugate of 2 + 3i is 2 − 3i, and the conjugate of 1 − i is 1 + i. If we multiply z = a + bi by z̄, we obtain

zz̄ = a² + b² where z = a + bi

The real number a² + b² is always nonnegative, so we can state the following definition: The absolute value or modulus of a complex number z = a + bi, denoted by |z|, is the positive square root √(a² + b²); that is,

|z| = √(a² + b²) where z = a + bi

For example, |2 − 3i| = √(2² + (−3)²) = √13 and |1 + i| = √(1² + 1²) = √2.

Note that if a real number a is viewed as the complex number a + 0i, its absolute value (as a complex number) is |a| = √(a²), which agrees with its absolute value as a real number.
With these notions in hand, we can describe the technique applied in Example A.3 as follows: When converting a quotient z/w of complex numbers to the form a + bi, multiply top and bottom by the conjugate w̄ of the denominator.
The following list contains the most important properties of conjugates and absolute values. Throughout, z and w denote complex numbers.

C1. The conjugate of z ± w is z̄ ± w̄
C2. The conjugate of zw is z̄ w̄
C3. The conjugate of z/w is z̄/w̄
C4. The conjugate of z̄ is z
C5. z is real if and only if z̄ = z
C6. zz̄ = |z|²
C7. 1/z = z̄/|z|²
C8. |z| ≥ 0 for all complex numbers z
C9. |z| = 0 if and only if z = 0
C10. |zw| = |z||w|
C11. |z/w| = |z|/|w|
C12. |z + w| ≤ |z| + |w| (triangle inequality)
All these properties (except property C12) can (and should) be verified by the reader for arbitrary complex
numbers z = a + bi and w = c + di. They are not independent; for example, property C10 follows from
properties C2 and C6.
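Several of these properties are easy to spot-check with Python's built-in complex type (a quick numeric illustration, not part of the text):

```python
# Spot-check C2, C6, C10 and C12 for particular z and w.
z, w = 2 - 3j, -1 + 1j

print(abs(z) ** 2, (z * z.conjugate()).real)                 # C6: both 13.0
print((z * w).conjugate() == z.conjugate() * w.conjugate())  # C2: True
print(abs(z * w), abs(z) * abs(w))                           # C10: equal
print(abs(z + w) <= abs(z) + abs(w))                         # C12: True
```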
The triangle inequality, as its name suggests, comes from a geometric representation of the complex
numbers analogous to identification of the real numbers with the points of a line. The representation is
achieved as follows:
Introduce a rectangular coordinate system in the plane (Figure A.1), and identify the complex number a + bi with the point (a, b). When this is done, the plane is called the complex plane. Note that the point (a, 0) on the x axis now represents the real number a = a + 0i, and for this reason, the x axis is called the real axis. Similarly, the y axis is called the imaginary axis. The identification (a, b) = a + bi of the geometric point (a, b) and the complex number a + bi will be used in what follows without comment. For example, the origin will be referred to as 0.

Figure A.1 (the complex plane: a + bi plotted at the point (a, b), with its conjugate a − bi at (a, −b))

This representation of the complex numbers in the complex plane gives a useful way of describing the absolute value and conjugate of a complex number z = a + bi. The absolute value |z| = √(a² + b²) is just the distance from z to the origin. This makes properties C8 and C9 quite obvious. The conjugate z̄ = a − bi of z is just the reflection of z in the real axis (x axis), a fact that makes properties C4 and C5 clear.
Given two complex numbers z₁ = a₁ + b₁i = (a₁, b₁) and z₂ = a₂ + b₂i = (a₂, b₂), the absolute value of their difference

    |z₁ − z₂| = √((a₁ − a₂)² + (b₁ − b₂)²)

is just the distance between them. This gives the complex distance formula:

    |z₁ − z₂| = distance between z₁ and z₂

The sum z + w of two complex numbers z = a + bi and w = c + di also has a geometric description: the line from 0 to z and the line from w to z + w have slopes b/a and ((b + d) − d)/((a + c) − c) = b/a, respectively, so these lines are parallel. (If it happens that a = 0, then both these lines are vertical.)
Similarly, the lines from z to z + w and from 0 to w are also parallel, so the figure with vertices 0, z, w, and
z + w is indeed a parallelogram. Hence, the complex number z + w can be obtained geometrically from
z and w by completing the parallelogram. This is sometimes called the parallelogram law of complex
addition. Readers who have studied mechanics will recall that velocities and accelerations add in the same
way; in fact, these are all special cases of vector addition.
Polar Form
Now we can describe the polar form of a complex number. Let z = a + bi be a complex number, and write the absolute value of z as

    r = |z| = √(a² + b²)

If z ≠ 0, the angle θ shown in Figure A.5 is called an argument of z and is denoted

    θ = arg z

This angle is not unique (θ + 2πk would do as well for any k = 0, ±1, ±2, . . . ). However, there is only one argument θ in the range −π < θ ≤ π, and this is sometimes called the principal argument of z.

Figure A.5 [the point z = (a, b) at distance r from the origin, making angle θ with the positive x axis]
Returning to Figure A.5, we find that the real and imaginary parts a and b of z are related to r and θ by

    a = r cos θ
    b = r sin θ

Hence z = a + bi = r cos θ + (r sin θ)i = r(cos θ + i sin θ). The expression

    e^{iθ} = cos θ + i sin θ

is called Euler's formula after the great Swiss mathematician Leonhard Euler (1707–1783). With this notation, z is written

    z = re^{iθ}    where r = |z| and θ = arg(z)

This is a polar form of the complex number z. Of course it is not unique, because the argument can be changed by adding a multiple of 2π.
Example A.4

Write z₁ = −2 + 2i and z₂ = −i in polar form.

Solution. The two numbers are plotted in the complex plane in Figure A.6. The absolute values are

    r₁ = |−2 + 2i| = √((−2)² + 2²) = 2√2
    r₂ = |−i| = √(0² + (−1)²) = 1

By inspection of Figure A.6, arguments of z₁ and z₂ are

    θ₁ = arg(−2 + 2i) = 3π/4
    θ₂ = arg(−i) = 3π/2

The corresponding polar forms are z₁ = −2 + 2i = 2√2 e^{3πi/4} and z₂ = −i = e^{3πi/2}. Of course, we could have taken the argument −π/2 for z₂ and obtained the polar form z₂ = e^{−πi/2}.

Figure A.6 [z₁ = −2 + 2i and z₂ = −i plotted in the complex plane, with their arguments θ₁ and θ₂]
In Euler's formula e^{iθ} = cos θ + i sin θ, the number e is the familiar constant e = 2.71828. . . from calculus. The reason for using e will not be given here; the reason why cos θ + i sin θ is written as an exponential function of θ is that the law of exponents holds:

    e^{iθ} · e^{iφ} = e^{i(θ+φ)}

where θ and φ are any two angles. In fact, this is an immediate consequence of the addition identities for sin(θ + φ) and cos(θ + φ):

    e^{iθ}e^{iφ} = (cos θ + i sin θ)(cos φ + i sin φ)
               = (cos θ cos φ − sin θ sin φ) + i(cos θ sin φ + sin θ cos φ)
               = cos(θ + φ) + i sin(θ + φ)
               = e^{i(θ+φ)}

Hence, if z₁ = r₁e^{iθ₁} and z₂ = r₂e^{iθ₂} are given in polar form, the law of exponents yields the multiplication rule:

    z₁z₂ = r₁r₂e^{i(θ₁+θ₂)}
In other words, to multiply two complex numbers, simply multiply the absolute values and add the ar-
guments. This simplifies calculations considerably, particularly when we observe that it is valid for any
arguments θ1 and θ2 .
Example A.5

Multiply (1 − i)(1 + √3 i) in two ways.

Solution. Multiplying directly gives

    (1 − i)(1 + √3 i) = (1 + √3) + (√3 − 1)i

For the second way, we have |1 − i| = √2 and |1 + √3 i| = 2 so, from Figure A.7,

    1 − i = √2 e^{−iπ/4}
    1 + √3 i = 2e^{iπ/3}

Hence, by the multiplication rule,

    (1 − i)(1 + √3 i) = (√2 e^{−iπ/4})(2e^{iπ/3})
                      = 2√2 e^{i(−π/4 + π/3)}
                      = 2√2 e^{iπ/12}

Figure A.7 [the numbers 1 − i, 1 + √3 i, and their product, with arguments −π/4, π/3, and π/12]
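Comparing the two answers gives exact values of the trigonometric functions at π/12: equating real and imaginary parts of 2√2 e^{iπ/12} = (1 + √3) + (√3 − 1)i yields

    cos(π/12) = (1 + √3)/(2√2) = (√6 + √2)/4  and  sin(π/12) = (√3 − 1)/(2√2) = (√6 − √2)/4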
Roots of Unity
If a complex number z = re^{iθ} is given in polar form, the powers assume a particularly simple form. In fact, z² = (re^{iθ})(re^{iθ}) = r²e^{2iθ}, z³ = z²·z = (r²e^{2iθ})(re^{iθ}) = r³e^{3iθ}, and so on. Continuing in this way, it follows by induction that the following theorem holds for any positive integer n. The name honours Abraham De Moivre (1667–1754).

De Moivre's Theorem

If θ is any angle, then (re^{iθ})ⁿ = rⁿe^{inθ} holds for every integer n.

Proof. The case n > 0 has been discussed, and the reader can verify the result for n = 0. To derive it for n < 0, first observe that

    if z = re^{iθ} ≠ 0 then z⁻¹ = (1/r)e^{−iθ}

In fact, (re^{iθ})((1/r)e^{−iθ}) = 1e^{i0} = 1 by the multiplication rule. Now assume that n is negative and write it as n = −m, m > 0. Then

    zⁿ = (z⁻¹)ᵐ = ((1/r)e^{−iθ})ᵐ = r⁻ᵐe^{−imθ} = rⁿe^{inθ}

by the positive-exponent case already proved. This completes the proof.
Example A.6

Verify that (−1 + √3 i)³ = 8.

Solution. We have |−1 + √3 i| = 2, so −1 + √3 i = 2e^{2πi/3} (see Figure A.8). Hence De Moivre's theorem gives

    (−1 + √3 i)³ = (2e^{2πi/3})³ = 8e^{3(2πi/3)} = 8e^{2πi} = 8

Figure A.8 [the number −1 + √3 i, with absolute value 2 and argument 2π/3]
De Moivre’s theorem can be used to find nth roots of complex numbers where n is positive. The next
example illustrates this technique.
Example A.7

Find the cube roots of unity; that is, find all complex numbers z such that z³ = 1.

Solution. First write z = re^{iθ} and 1 = 1e^{i0} in polar form. We must use the condition z³ = 1 to determine r and θ. Because z³ = r³e^{3iθ} by De Moivre's theorem, this requirement becomes

    r³e^{3iθ} = 1e^{0i}

These two complex numbers are equal, so their absolute values must be equal and the arguments must either be equal or differ by an integral multiple of 2π:

    r³ = 1
    3θ = 0 + 2kπ, k some integer

Because r is real and positive, r³ = 1 gives r = 1; and θ = 2kπ/3 for some integer k. Taking k = 0, 1, 2 gives the three distinct roots

    z = 1,  z = e^{2πi/3},  z = e^{4πi/3}

These are displayed in Figure A.9. All other values of k yield values of θ that differ from one of these by a multiple of 2π—and so do not give new roots. Hence we have found all the roots.

Figure A.9 [the three cube roots of unity on the unit circle]
The same type of calculation gives all complex nth roots of unity; that is, all complex numbers z such that zⁿ = 1. As before, write 1 = 1e^{0i} and

    z = re^{iθ}

in polar form. Then zⁿ = 1 takes the form

    rⁿe^{niθ} = 1e^{0i}

so, as before, rⁿ = 1 and nθ = 0 + 2kπ for some integer k. Hence r = 1 and θ = 2kπ/n, so the distinct nth roots of unity are

    z = e^{2πki/n},  k = 0, 1, 2, . . . , n − 1
The nth roots of unity can be found geometrically as the points on the unit circle that cut the circle into n equal sectors, starting at 1. The case n = 5 is shown in Figure A.10, where the five fifth roots of unity 1 = e^{0i}, e^{2πi/5}, e^{4πi/5}, e^{6πi/5}, and e^{8πi/5} are plotted.

Figure A.10 [the five fifth roots of unity, equally spaced around the unit circle]

The method just used to find the nth roots of unity works equally well to find the nth roots of any complex number in polar form. We give one example.
Example A.8

Find the fourth roots of √2 + √2 i.

Solution. First write √2 + √2 i = 2e^{πi/4} in polar form. If z = re^{iθ} satisfies z⁴ = √2 + √2 i, then De Moivre's theorem gives

    r⁴e^{i(4θ)} = 2e^{πi/4}

Hence r⁴ = 2 and 4θ = π/4 + 2kπ, k an integer. We obtain four distinct roots (and hence all) by

    r = ⁴√2,  θ = π/16 + kπ/2 = (8k + 1)π/16,  k = 0, 1, 2, 3

Thus the four fourth roots are ⁴√2 e^{πi/16}, ⁴√2 e^{9πi/16}, ⁴√2 e^{17πi/16}, and ⁴√2 e^{25πi/16}.
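As a check on the first of these, De Moivre's theorem gives

    (⁴√2 e^{πi/16})⁴ = 2e^{4πi/16} = 2e^{πi/4} = 2(cos(π/4) + i sin(π/4)) = √2 + √2 i

as required.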
An expression of the form ax² + bx + c, where the coefficients a ≠ 0, b, and c are real numbers, is called a real quadratic. A complex number u is called a root of the quadratic if au² + bu + c = 0. The roots are given by the famous quadratic formula:

    u = (−b ± √(b² − 4ac)) / (2a)

The quantity d = b² − 4ac is called the discriminant of the quadratic ax² + bx + c, and there is no real root if and only if d < 0. In this case the quadratic is said to be irreducible. Moreover, the fact that d < 0 means that √d = i√|d|, so the two (complex) roots are conjugates of each other:

    u = (1/(2a))(−b + i√|d|)  and  ū = (1/(2a))(−b − i√|d|)

The converse of this is true too: Given any nonreal complex number u, then u and ū are the roots of some real irreducible quadratic. Indeed, the quadratic

    x² − (u + ū)x + uū = (x − u)(x − ū)

has real coefficients (uū = |u|² and u + ū is twice the real part of u) and so is irreducible because its roots u and ū are not real.
Example A.9

Find a real irreducible quadratic with u = 3 − 4i as a root.

Solution. Here ū = 3 + 4i, so u + ū = 6 and uū = |u|² = 3² + 4² = 25. By the preceding discussion, x² − 6x + 25 is a real irreducible quadratic with u as a root.
As we mentioned earlier, the complex numbers are the culmination of a long search by mathematicians to find a set of numbers large enough to contain a root of every polynomial. The fact that the complex numbers have this property was first proved by Gauss in 1797 when he was 20 years old. The proof is omitted.

Theorem A.4: Fundamental Theorem of Algebra

Every polynomial of degree n ≥ 1 with complex coefficients has a complex root.
If f (x) is a polynomial with complex coefficients, and if u1 is a root, then the factor theorem (Section 6.5)
asserts that
f (x) = (x − u1 )g(x)
where g(x) is a polynomial with complex coefficients and with degree one less than the degree of f (x).
Now suppose that u₂ is a root of g(x), which exists again by the fundamental theorem. Then g(x) = (x − u₂)h(x), so

    f(x) = (x − u₁)(x − u₂)h(x)

This process continues until the last polynomial to appear is linear. Thus f(x) has been expressed as a product of linear factors. The last of these factors can be written in the form u(x − uₙ), where u and uₙ are complex (verify this), so the fundamental theorem takes the following form.

Theorem A.5

Every complex polynomial f(x) of degree n ≥ 1 has the form

    f(x) = u(x − u₁)(x − u₂) · · · (x − uₙ)

where u, u₁, . . . , uₙ are complex numbers and u ≠ 0. The numbers u₁, u₂, . . . , uₙ are the roots of
f(x) (and need not all be distinct), and u is the coefficient of xⁿ.
This form of the fundamental theorem, when applied to a polynomial f (x) with real coefficients, can be
used to deduce the following result.
Theorem A.6

Every polynomial f(x) of positive degree with real coefficients can be factored as a product of
linear and irreducible quadratic factors.

Proof. Write f(x) = a₀ + a₁x + a₂x² + · · · + aₙxⁿ, where the coefficients aᵢ are real. If u is a complex root of f(x), then we claim first that ū is also a root. In fact, we have f(u) = 0, so

    0 = 0̄ = (a₀ + a₁u + a₂u² + · · · + aₙuⁿ)‾ = ā₀ + ā₁ū + ā₂ū² + · · · + āₙūⁿ = a₀ + a₁ū + a₂ū² + · · · + aₙūⁿ = f(ū)

where āᵢ = aᵢ for each i because the coefficients aᵢ are real. Thus if u is a root of f(x), so is its conjugate ū. Of course some of the roots of f(x) may be real (and so equal their conjugates), but the nonreal roots come in pairs, u and ū. By Theorem A.5, we can thus write f(x) as a product:

    f(x) = u(x − r₁) · · · (x − rₖ)(x − u₁)(x − ū₁) · · · (x − uₘ)(x − ūₘ)    (A.1)

where the rᵢ are the real roots and the uⱼ, ūⱼ are the nonreal roots. But each

    (x − uⱼ)(x − ūⱼ) = x² − (uⱼ + ūⱼ)x + (uⱼūⱼ)

is a real irreducible quadratic for each j (see the discussion preceding Example A.9). Hence (A.1) shows that f(x) is a product of linear and irreducible quadratic factors, each with real coefficients. This is the conclusion in Theorem A.6.
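For example, f(x) = x⁴ − 1 factors as

    x⁴ − 1 = (x − 1)(x + 1)(x² + 1)

where x² + 1 is irreducible (its roots are the conjugate pair i and −i), exactly as Theorem A.6 predicts.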
Exercises for A
Exercise A.1 Solve each of the following for the real number x.

Exercise A.2 Convert each of the following to the form a + bi.

e. i¹³¹    f. (2 − i)³
g. (1 + i)⁴    h. (1 − i)²(2 + i)²
i. (3√3 − i)/(√3 + i) + (√3 + 7i)/(√3 − i)

Exercise A.3 In each case, find the complex number z.

a. iz − (1 + i)² = 3 − i    b. (i + z) − 3i(2 − z) = iz + 1

Exercise A.5 Find all numbers x in each case.

a. x³ = 8    b. x³ = −8
c. x⁴ = 16    d. x⁴ = 64

Exercise A.6 In each case, find a real quadratic with u as a root, and find the other root.

a. u = 1 + i    b. u = 2 − 3i
c. u = −i    d. u = 3 − 4i

Exercise A.7 Find the roots of x² − 2 cos θ x + 1 = 0, θ any angle.

Exercise A.8 Find a real polynomial of degree 4 with 2 − i and 3 − 2i as roots.

Exercise A.9 Let re z and im z denote, respectively, the real and imaginary parts of z. Show that:

a. im(iz) = re z    b. re(iz) = −im z
c. z + z̄ = 2 re z    d. z − z̄ = 2i im z
e. re(z + w) = re z + re w, and re(tz) = t · re z if t is real
f. im(z + w) = im z + im w, and im(tz) = t · im z if t is real

Exercise A.10 In each case, show that u is a root of the quadratic equation, and find the other root.

a. x² − 3ix + (−3 + i) = 0; u = 1 + i
b. x² + ix − (4 − 2i) = 0; u = −2
c. x² − (3 − 2i)x + (5 − i) = 0; u = 2 − 3i
d. x² + 3(1 − i)x − 5i = 0; u = −2 + i

Exercise A.11 In each case, describe the graph of the given equation (where z denotes a complex number).

a. |z| = 1    b. |z − 1| = 2
c. z̄ = iz    d. z̄ = −z
e. z = |z|    f. im z = m · re z, m a real number

Exercise A.13

a. Verify |zw| = |z||w| directly for z = a + bi and w = c + di.
b. Deduce (a) from properties C2 and C6.

Exercise A.14 Prove that |z + w|² = |z|² + |w|² + wz̄ + w̄z for all complex numbers w and z.

Exercise A.15 If zw is real and z ≠ 0, show that w = az̄ for some real number a.

Exercise A.16 If zw = z̄v and z ≠ 0, show that w = uv for some u in C with |u| = 1.

Exercise A.17 Show that (1 + i)ⁿ + (1 − i)ⁿ is real for all n, using property C5.

Exercise A.18 Express each of the following in polar form (use the principal argument).

a. 3 − 3i    b. −4i
c. −√3 + i    d. −4 + 4√3 i
e. −7i    f. −6 + 6i

Exercise A.19 Express each of the following in the form a + bi.

a. 3e^{πi}    b. e^{7πi/3}
c. √2 e^{3πi/4}    d. 2e^{−πi/4}
e. e^{5πi/4}    f. 2√3 e^{−2πi/6}

Exercise A.20 Express each of the following in the form a + bi.

a. (−1 + √3 i)²    b. (1 + √3 i)⁻⁴
c. (1 + i)⁸    d. (1 − i)¹⁰
e. (1 − i)⁶(√3 + i)³    f. (√3 − i)⁹(2 − 2i)⁵

Exercise A.21 Use De Moivre's theorem to show that:

a. cos 2θ = cos²θ − sin²θ; sin 2θ = 2 cos θ sin θ
b. cos 3θ = cos³θ − 3 cos θ sin²θ; sin 3θ = 3 cos²θ sin θ − sin³θ

Exercise A.22

a. Find the fourth roots of unity.

Exercise A.24 If z = re^{iθ} in polar form, show that:

a. z̄ = re^{−iθ}    b. z⁻¹ = (1/r)e^{−iθ} if z ≠ 0

Exercise A.25 Show that the sum of the nth roots of unity is zero. [Hint: 1 − zⁿ = (1 − z)(1 + z + z² + · · · + z^{n−1}) for any complex number z.]

Exercise A.26

a. Let z₁, z₂, z₃, z₄, and z₅ be equally spaced around the unit circle. Show that z₁ + z₂ + z₃ + z₄ + z₅ = 0. [Hint: (1 − z)(1 + z + z² + z³ + z⁴) = 1 − z⁵ for any complex number z.]
b. Repeat (a) for any n ≥ 2 points equally spaced around the unit circle.
c. If |w| = 1, show that the sum of the roots of zⁿ = w is zero.

Exercise A.27 If zⁿ is real, n ≥ 1, show that (z̄)ⁿ is real.

Exercise A.28 If z² = z̄², show that z is real or pure imaginary.

Exercise A.29 If a and b are rational numbers, let p and q denote numbers of the form a + b√2. If p = a + b√2, define p̃ = a − b√2 and [p] = a² − 2b². Show that each of the following holds.
B. Proofs

In mathematics, one often asserts that one statement implies another. If p and q are statements, the claim that p implies q is written

    p ⇒ q

and is read "p implies q." Here p is the hypothesis and q the conclusion of the implication. The verification that p ⇒ q is valid is called the proof of the implication. In this section we examine the most common methods of proof² and illustrate each technique with some examples.
Example B.1

If n is an odd integer,¹ show that n² is odd.

Solution. If n is odd, then n = 2k + 1 for some integer k. Hence n² = 4k² + 4k + 1 = 2(2k² + 2k) + 1, which is odd because 2k² + 2k is an integer.
Note that the computation n² = 4k² + 4k + 1 in Example B.1 involves some simple properties of arith-
metic that we did not prove. These properties, in turn, can be proved from certain more basic properties
of numbers (called axioms)—more about that later. Actually, a whole body of mathematical information
lies behind nearly every proof of any complexity, although this fact usually is not stated explicitly. Here is
a geometrical example.
¹ By an integer we mean a "whole number"; that is, a number in the set 0, ±1, ±2, ±3, . . .
² For a more detailed look at proof techniques see D. Solow, How to Read and Do Proofs, 2nd ed. (New York: Wiley, 1990); or J. F. Lucas, Introduction to Abstract Mathematics, Chapter 2 (Belmont, CA: Wadsworth, 1986).
Example B.2

In a right triangle, show that the sum of the two acute angles is 90 degrees.

Solution. Call the acute angles α and β. The angles of any triangle add up to 180 degrees, and the right angle contributes 90 of these. Hence α + β + 90 = 180, so α + β = 90 degrees.
Geometry was one of the first subjects in which formal proofs were used—Euclid’s Elements was
published about 300 B.C. The Elements is the most successful textbook ever written, and contains many
of the basic geometrical theorems that are taught in school today. In particular, Euclid included a proof of
an earlier theorem (about 500 B.C.) due to Pythagoras. Recall that, in a right triangle, the side opposite
the right angle is called the hypotenuse of the triangle.
Sometimes it is convenient (or even necessary) to break a proof into parts, and deal with each case
separately. We formulate the general method as follows:
To prove that p ⇒ q, show that p implies at least one of a list p1 , p2 , . . . , pn of statements (the cases) and
then show that pi ⇒ q for each i.
Example B.4

Show that n² ≥ 0 for every integer n.

Solution. Every integer n falls into one of three cases: (1) n > 0; (2) n = 0; (3) n < 0. Then n² > 0 in Cases (1) and (3) because the product of two positive (or two negative) integers is positive. In Case (2) n² = 0² = 0, so n² ≥ 0 in every case.
Example B.5

If n is an integer, show that n² − n is even.

Solution. Consider two cases: (1) n is even; (2) n is odd. We have n² − n = n(n − 1), so this is even in Case (1) because any multiple of an even number is again even. Similarly, n − 1 is even in Case (2) so n(n − 1) is again even for the same reason. Hence n² − n is even in any case.
The statements used in mathematics are required to be either true or false. This leads to a proof
technique which causes consternation in many beginning students. The method is a formal version of a
debating strategy whereby the debater assumes the truth of an opponent’s position and shows that it leads
to an absurd conclusion.
To prove that p ⇒ q, show that the assumption that both p is true and q is false leads to a contradiction. In
other words, if p is true, then q must be true; that is, p ⇒ q.
Example B.6

If r is a rational number (fraction), show that r² ≠ 2.

Solution. To argue by contradiction, we assume that r is a rational number and that r² = 2, and show that this assumption leads to a contradiction. Let m and n be integers such that r = m/n is in lowest terms (so, in particular, m and n are not both even). Then r² = 2 gives m² = 2n², so m² is even. This means m is even (Example B.1), say m = 2k. But then 2n² = m² = 4k², so n² = 2k² is even, and hence n is even. This shows that n and m are both even, contrary to the choice of these numbers.
Example B.7

If more than n pigeons are placed into n holes, show that some hole contains at least two pigeons.

Solution. Assume the conclusion is false. Then each hole contains at most one pigeon and so, since there are n holes, there must be at most n pigeons, contrary to assumption.
The next example involves the notion of a prime number, that is, an integer greater than 1 that cannot be factored as the product of two smaller positive integers. The first few primes are 2, 3, 5, 7, 11, . . . .
Example B.8

If 2ⁿ − 1 is a prime number, show that n is a prime number.

Solution. We must show that p ⇒ q where p is the statement "2ⁿ − 1 is a prime", and q is the statement "n is a prime." Suppose that p is true but q is false so that n is not a prime, say n = ab where a ≥ 2 and b ≥ 2 are integers. If we write 2ᵃ = x, then 2ⁿ = 2^{ab} = (2ᵃ)ᵇ = xᵇ. Hence 2ⁿ − 1 factors:

    2ⁿ − 1 = xᵇ − 1 = (x − 1)(x^{b−1} + x^{b−2} + · · · + x² + x + 1)

As x ≥ 4, this expression is a factorization of 2ⁿ − 1 into smaller positive integers, contradicting the assumption that 2ⁿ − 1 is prime.
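For instance, taking n = 6 = 2·3 (so a = 2, b = 3, and x = 2² = 4) exhibits the factorization concretely:

    2⁶ − 1 = 63 = (4 − 1)(4² + 4 + 1) = 3 · 21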
The next example exhibits one way to show that an implication is not valid.
Example B.9

Show that the implication "n is a prime ⇒ 2ⁿ − 1 is a prime" is false.

Solution. The first four primes are 2, 3, 5, and 7, and the corresponding values for 2ⁿ − 1 are 3, 7, 31, 127 (when n = 2, 3, 5, 7). These are all prime as the reader can verify. This result seems to be evidence that the implication is true. However, the next prime is 11 and 2¹¹ − 1 = 2047 = 23 · 89, which is clearly not a prime.
We say that n = 11 is a counterexample to the (proposed) implication in Example B.9. Note that, if you
can find even one example for which an implication is not valid, the implication is false. Thus disproving
implications is in a sense easier than proving them.
The implications in Example B.8 and Example B.9 are closely related: They have the form p ⇒ q and
q ⇒ p, where p and q are statements. Each is called the converse of the other and, as these examples
show, an implication can be valid even though its converse is not valid. If both p ⇒ q and q ⇒ p are valid,
the statements p and q are called logically equivalent. This is written in symbols as
p⇔q
and is read “p if and only if q”. Many of the most satisfying theorems make the assertion that two
statements, ostensibly quite different, are in fact logically equivalent.
Example B.10

If n is an integer, show that "n is odd ⇔ n² is odd."

Solution. In Example B.1 we proved the implication "n is odd ⇒ n² is odd." Here we prove the converse by contradiction. If n² is odd, we assume that n is not odd. Then n is even, say n = 2k, so n² = 4k², which is also even, a contradiction.
Many more examples of proofs can be found in this book and, although they are often more complex,
most are based on one of these methods. In fact, linear algebra is one of the best topics on which the
reader can sharpen his or her skill at constructing proofs. Part of the reason for this is that much of linear
algebra is developed using the axiomatic method. That is, in the course of studying various examples
it is observed that they all have certain properties in common. Then a general, abstract system is studied
in which these basic properties are assumed to hold (and are called axioms). In this system, statements
(called theorems) are deduced from the axioms using the methods presented in this appendix. These
theorems will then be true in all the concrete examples, because the axioms hold in each case. But this
procedure is more than just an efficient method for finding theorems in the examples. By reducing the
proof to its essentials, we gain a better understanding of why the theorem is true and how it relates to
analogous theorems in other abstract systems.
The axiomatic method is not new. Euclid first used it in about 300 B.C. to derive all the propositions of
(euclidean) geometry from a list of 10 axioms. The method lends itself well to linear algebra. The axioms
are simple and easy to understand, and there are only a few of them. For example, the theory of vector
spaces contains a large number of theorems derived from only ten simple axioms.
Exercises for B
Exercise B.1 In each case prove the result and either prove the converse or give a counterexample.

a. If n is an even integer, then n² is a multiple of 4.
b. If m is an even integer and n is an odd integer, then m + n is odd.
c. If x = 2 or x = 3, then x³ − 6x² + 11x − 6 = 0.
d. If x² − 5x + 6 = 0, then x = 2 or x = 3.

Exercise B.2 In each case either prove the result by splitting into cases, or give a counterexample.

a. If n is any integer, then n² = 4k + 1 for some integer k.
b. If n is any odd integer, then n² = 8k + 1 for some integer k.
c. If n is any integer, n³ − n = 3k for some integer k. [Hint: Use the fact that each integer has one of the forms 3k, 3k + 1, or 3k + 2, where k is an integer.]

Exercise B.3 In each case prove the result by contradiction and either prove the converse or give a counterexample.

c. If a and b are positive numbers and a ≤ b, then √a ≤ √b.

Exercise B.4 Prove each implication by contradiction.

a. If x and y are positive numbers, then √(x + y) ≠ √x + √y.
b. If x is irrational and y is rational, then x + y is irrational.
c. If 13 people are selected, at least 2 have birthdays in the same month.

Exercise B.5 Disprove each statement by giving a counterexample.

a. n² + n + 11 is a prime for all positive integers n.
b. n³ ≥ 2ⁿ for all integers n ≥ 2.

Exercise B.6 The number e from calculus has a series expansion

    e = 1 + 1/1! + 1/2! + 1/3! + · · ·

where n! = n(n − 1) · · · 3 · 2 · 1 for each integer n ≥ 1. Prove that e is irrational by contradiction. [Hint: If e = m/n, consider

    k = n!(e − 1 − 1/1! − 1/2! − 1/3! − · · · − 1/n!).

Show that k is a positive integer and that

    k = 1/(n + 1) + 1/((n + 1)(n + 2)) + · · · < 1/n.]
C. Mathematical Induction
Suppose one is presented with the following sequence of equations:
1=1
1+3 = 4
1+3+5 = 9
1 + 3 + 5 + 7 = 16
1 + 3 + 5 + 7 + 9 = 25
It is clear that there is a pattern. The numbers on the right side of the equations are the squares 1², 2², 3², 4², and 5² and, in the equation with n² on the right side, the left side is the sum of the first n odd numbers.
The odd numbers are
1 = 2·1−1
3 = 2·2−1
5 = 2·3−1
7 = 2·4−1
9 = 2·5−1
and from this it is clear that the nth odd number is 2n − 1. Hence, at least for n = 1, 2, 3, 4, or 5, the
following is true:
    1 + 3 + · · · + (2n − 1) = n²    (Sₙ)
The question arises whether the statement Sn is true for every n. There is no hope of separately verifying
all these statements because there are infinitely many of them. A more subtle approach is required.
The idea is as follows: Suppose it is verified that the statement Sn+1 will be true whenever Sn is true.
That is, suppose we prove that, if Sn is true, then it necessarily follows that Sn+1 is also true. Then, if we
can show that S1 is true, it follows that S2 is true, and from this that S3 is true, hence that S4 is true, and
so on and on. This is the principle of induction. To express it more compactly, it is useful to have a short
way to express the assertion “If Sn is true, then Sn+1 is true.” As in Appendix B, we write this assertion as
Sn ⇒ Sn+1
and read it as “ Sn implies Sn+1 .” We can now state the principle of mathematical induction.
Principle of Mathematical Induction

Suppose Sₙ is a statement about n for each n ≥ 1. Suppose further that:

1. S₁ is true.
2. Sₙ ⇒ Sₙ₊₁ holds for each n ≥ 1.

Then Sₙ is true for every n ≥ 1.
This is one of the most useful techniques in all of mathematics. It applies in a wide variety of situations,
as the following examples illustrate.
Example C.1

Show that 1 + 2 + · · · + n = (1/2)n(n + 1) for n ≥ 1.

Solution. Let Sₙ denote the statement

    Sₙ: 1 + 2 + · · · + n = (1/2)n(n + 1)

1. S₁ is true, because it reads 1 = (1/2)·1·2.

2. Sₙ ⇒ Sₙ₊₁. Assuming Sₙ, we must show that

    Sₙ₊₁: 1 + 2 + · · · + (n + 1) = (1/2)(n + 1)(n + 2)

is also true, and we are entitled to use Sₙ to do so. Now the left side of Sₙ₊₁ is the sum of the first n + 1 positive integers. Hence the second-to-last term is n, so we can write

    1 + 2 + · · · + (n + 1) = (1 + 2 + · · · + n) + (n + 1)
                            = (1/2)n(n + 1) + (n + 1)    using Sₙ
                            = (1/2)(n + 1)(n + 2)

Hence Sₙ₊₁ is true, so Sₙ holds for all n ≥ 1 by induction.
In the verification that Sn ⇒ Sn+1 , we assume that Sn is true and use it to deduce that Sn+1 is true. The
assumption that Sn is true is sometimes called the induction hypothesis.
Example C.2

If x is any number such that x ≠ 1, show that 1 + x + x² + · · · + xⁿ = (x^{n+1} − 1)/(x − 1) for n ≥ 1.

Solution. Let Sₙ be the statement: 1 + x + x² + · · · + xⁿ = (x^{n+1} − 1)/(x − 1).

1. S₁ is true. S₁ reads 1 + x = (x² − 1)/(x − 1), which is true because x² − 1 = (x − 1)(x + 1).

2. Sₙ ⇒ Sₙ₊₁. Assume the truth of Sₙ: 1 + x + x² + · · · + xⁿ = (x^{n+1} − 1)/(x − 1). We must deduce from this the truth of Sₙ₊₁: 1 + x + x² + · · · + x^{n+1} = (x^{n+2} − 1)/(x − 1). Starting with the left side of Sₙ₊₁ and using the induction hypothesis, we find

    1 + x + x² + · · · + x^{n+1} = (1 + x + x² + · · · + xⁿ) + x^{n+1}
                                = (x^{n+1} − 1)/(x − 1) + x^{n+1}
                                = (x^{n+1} − 1 + x^{n+1}(x − 1))/(x − 1)
                                = (x^{n+2} − 1)/(x − 1)
Both of these examples involve formulas for a certain sum, and it is often convenient to use summation notation. For example, ∑_{k=1}^{n} (2k − 1) means that in the expression (2k − 1), k is to be given the values k = 1, k = 2, k = 3, . . . , k = n, and then the resulting n numbers are to be added. The same thing applies to other expressions involving k. For example,

    ∑_{k=1}^{n} k³ = 1³ + 2³ + · · · + n³

    ∑_{k=1}^{5} (3k − 1) = (3·1 − 1) + (3·2 − 1) + (3·3 − 1) + (3·4 − 1) + (3·5 − 1)
The next example involves this notation.
Example C.3

Show that ∑_{k=1}^{n} (3k² − k) = n²(n + 1) for each n ≥ 1.
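In outline, the verification follows the same two-step pattern as before. S₁ reads 3·1² − 1 = 2 = 1²·2, which is true. And if ∑_{k=1}^{n} (3k² − k) = n²(n + 1), then adding the (n + 1)st term gives

    ∑_{k=1}^{n+1} (3k² − k) = n²(n + 1) + 3(n + 1)² − (n + 1) = (n + 1)(n² + 3n + 2) = (n + 1)²(n + 2)

which is exactly Sₙ₊₁.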
We now turn to examples wherein induction is used to prove propositions that do not involve sums.
Example C.4

Show that 7ⁿ + 2 is a multiple of 3 for all n ≥ 1.

Solution.

1. S₁ is true: 7¹ + 2 = 9 is a multiple of 3.

2. Sₙ ⇒ Sₙ₊₁. Assume that 7ⁿ + 2 = 3m for some integer m. Then

    7^{n+1} + 2 = 7(7ⁿ + 2) − 12 = 3(7m − 4)

so 7^{n+1} + 2 is also a multiple of 3. Hence Sₙ is true for all n ≥ 1 by induction.
In all the foregoing examples, we have used the principle of induction starting at 1; that is, we have
verified that S1 is true and that Sn ⇒ Sn+1 for each n ≥ 1, and then we have concluded that Sn is true for
every n ≥ 1. But there is nothing special about 1 here. If m is some fixed integer and we verify that
1. Sm is true.
2. Sn ⇒ Sn+1 for every n ≥ m.
then it follows that Sn is true for every n ≥ m. This “extended” induction principle is just as plausible as
the induction principle and can, in fact, be proved by induction. The next example will illustrate it. Recall
that if n is a positive integer, the number n! (which is read “n-factorial”) is the product
n! = n(n − 1)(n − 2) · · · 3 · 2 · 1
of all the numbers from n to 1. Thus 2! = 2, 3! = 6, and so on.
Example C.5

Show that 2ⁿ < n! for all n ≥ 4.

Solution. Let Sₙ be the statement 2ⁿ < n!. Then S₄ is true because 2⁴ = 16 < 24 = 4!. If Sₙ is true for some n ≥ 4, then

    2^{n+1} = 2 · 2ⁿ
            < 2 · n!        because 2ⁿ < n!
            < (n + 1)n!     because 2 < n + 1
            = (n + 1)!

Hence Sₙ₊₁ is true, and the result follows by extended induction.
Exercises for C
In Exercises 1–19, prove the given statement by induction for all n ≥ 1.

Exercise C.1 1 + 3 + 5 + 7 + · · · + (2n − 1) = n²

Exercise C.5 1·2² + 2·3² + · · · + n(n + 1)² = (1/12)n(n + 1)(n + 2)(3n + 5)

Exercise C.6 1/(1·2) + 1/(2·3) + · · · + 1/(n(n + 1)) = n/(n + 1)

Exercise C.7 1² + 3² + · · · + (2n − 1)² = (n/3)(4n² − 1)

Exercise C.8 1/(1·2·3) + 1/(2·3·4) + · · · + 1/(n(n + 1)(n + 2)) = n(n + 3)/(4(n + 1)(n + 2))

Exercise C.9 1 + 2 + 2² + · · · + 2^{n−1} = 2ⁿ − 1

Exercise C.10 3 + 3³ + 3⁵ + · · · + 3^{2n−1} = (3/8)(9ⁿ − 1)

Exercise C.11 1/1² + 1/2² + · · · + 1/n² ≤ 2 − 1/n

Exercise C.16 n³ + (n + 1)³ + (n + 2)³ is a multiple of 9.

Exercise C.17 5ⁿ + 3 is a multiple of 4.

Exercise C.18 n³ − n is a multiple of 3.

Exercise C.19 3^{2n+1} + 2^{n+2} is a multiple of 7.

Exercise C.20 Let Bₙ = 1·1! + 2·2! + 3·3! + · · · + n·n!. Find a formula for Bₙ and prove it.

Exercise C.21 Let
Find a formula for Aₙ and prove it.

Exercise C.22 Suppose Sₙ is a statement about n for each n ≥ 1. Explain what must be done to prove that Sₙ is true for all n ≥ 1 if it is known that:

d. Both Sₙ and Sₙ₊₁ ⇒ Sₙ₊₂ for each n ≥ 1.

Exercise C.23 If Sₙ is a statement for each n ≥ 1, argue that Sₙ is true for all n ≥ 1 if it is known that the following two conditions hold:

1. Sₙ ⇒ Sₙ₋₁ for each n ≥ 2.
2. Sₙ is true for infinitely many values of n.

Exercise C.24 Suppose a sequence a₁, a₂, . . . of numbers is given that satisfies:

Exercise C.25 Suppose a sequence a₁, a₂, . . . of numbers is given that satisfies:

1. a₁ = b.
2. aₙ₊₁ = caₙ + b for n = 1, 2, 3, . . . .

Formulate a theorem giving aₙ in terms of n, and prove your result by induction.

Exercise C.26

b. Show that n³ ≤ 2ⁿ for all n ≥ 10.
D. Polynomials
Expressions like 3 − 5x and 1 + 3x − 2x² are examples of polynomials. In general, a polynomial is an expression of the form

    f(x) = a₀ + a₁x + a₂x² + · · · + aₙxⁿ

where the aᵢ are numbers, called the coefficients of the polynomial, and x is a variable called an indeterminate. The number a₀ is called the constant coefficient of the polynomial. The polynomial with every coefficient zero is called the zero polynomial, and is denoted simply as 0.

If f(x) ≠ 0, the coefficient of the highest power of x appearing in f(x) is called the leading coefficient of f(x), and the highest power itself is called the degree of the polynomial and is denoted deg(f(x)). Hence

    −1 + 5x + 3x² has constant coefficient −1, leading coefficient 3, and degree 2,
    7 has constant coefficient 7, leading coefficient 7, and degree 0,
    6x − 3x³ + x⁴ − x⁵ has constant coefficient 0, leading coefficient −1, and degree 5.
Theorem D.1
If f (x) and g(x) are nonzero polynomials of degrees n and m respectively, their product f (x)g(x) is
also nonzero and
deg [ f (x)g(x)] = n + m
Example D.1

(2 − x + 3x²)(3 + x² − 5x³) = 6 − 3x + 11x² − 11x³ + 8x⁴ − 15x⁵.
If f(x) is any polynomial, the next theorem shows that f(x) − f(a) is a multiple of the polynomial x − a. In fact we have

    f(x) = a₀ + a₁x + a₂x² + · · · + aₙxⁿ
    f(a) = a₀ + a₁a + a₂a² + · · · + aₙaⁿ

If these expressions are subtracted, the constant terms cancel and we obtain

    f(x) − f(a) = a₁(x − a) + a₂(x² − a²) + · · · + aₙ(xⁿ − aⁿ)

Hence it suffices to show that, for each k ≥ 1, xᵏ − aᵏ = (x − a)p(x) for some polynomial p(x) of degree k − 1. This is clear if k = 1. If it holds for some value k, the fact that

    x^{k+1} − a^{k+1} = x(xᵏ − aᵏ) + aᵏ(x − a)

shows that it holds for k + 1 as well, so the claim follows by induction.
For example, to divide f(x) = x³ − 3x² + x − 1 by x − 2, the usual long division proceeds as follows:

             x² − x − 1
    x − 2 )  x³ − 3x² + x − 1
             x³ − 2x²
                 −x² + x − 1
                 −x² + 2x
                     −x − 1
                     −x + 2
                        −3

Hence x³ − 3x² + x − 1 = (x − 2)(x² − x − 1) + (−3). The final remainder is −3 = f(2), as is easily verified. This procedure is called the division algorithm.¹
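Indeed, the remainder can be confirmed directly from the definition of f:

    f(2) = 2³ − 3·2² + 2 − 1 = 8 − 12 + 2 − 1 = −3

in agreement with the division above.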
A real number a is called a root of the polynomial f(x) if

    f(a) = 0

Hence, for example, 1 is a root of f(x) = 2 − x + 3x² − 4x³, but −1 is not a root because f(−1) = 10 ≠ 0. If f(x) is a multiple of x − a, we say that x − a is a factor of f(x). Hence the remainder theorem shows immediately that if a is a root of f(x), then x − a is a factor of f(x). But the converse is also true: If x − a is a factor of f(x), say f(x) = (x − a)q(x), then f(a) = (a − a)q(a) = 0. This proves the factor theorem.

Theorem D.3

Let f(x) be a polynomial and let a be a real number. Then x − a is a factor of f(x) if and only if a is a root of f(x).
Example D.2

If f(x) = x³ − 2x² − 6x + 4, then f(−2) = 0, so x − (−2) = x + 2 is a factor of f(x). In fact, the division algorithm gives f(x) = (x + 2)(x² − 4x + 2).
Consider the polynomial f(x) = x³ − 3x + 2. Then 1 is clearly a root of f(x), and the division algorithm gives f(x) = (x − 1)(x² + x − 2). But 1 is also a root of x² + x − 2; in fact, x² + x − 2 = (x − 1)(x + 2). Hence

    f(x) = (x − 1)²(x + 2)

and we say that the root 1 has multiplicity 2.
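Expanding confirms this factorization:

    (x − 1)²(x + 2) = (x² − 2x + 1)(x + 2) = x³ − 3x + 2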
Note that non-zero constant polynomials f(x) = b ≠ 0 have no roots. However, there do exist nonconstant polynomials with no roots. For example, if g(x) = x² + 1, then g(a) = a² + 1 ≥ 1 for every real number a, so a is not a root. However the complex number i is a root of g(x); we return to this below.

¹ This procedure can be used to divide f(x) by any nonzero polynomial d(x) in place of x − a; the remainder then is a polynomial that is either zero or of degree less than the degree of d(x).
Now suppose that f (x) is any nonzero polynomial. We claim that it can be factored in the following
form:
f (x) = (x − a1 )(x − a2 ) · · · (x − am )g(x)
where a1 , a2 , . . . , am are the roots of f (x) and g(x) has no root (where the ai may have repetitions, and
may not appear at all if f (x) has no real root).
By the above calculation f(x) = x³ − 3x + 2 = (x − 1)²(x + 2) has roots 1 and −2, with 1 of multiplicity two (and g(x) = 1). Counting the root −2 once, we say that f(x) has three roots counting multiplicities. The next theorem shows that no polynomial can have more roots than its degree even if multiplicities are counted.
Theorem D.4
If f (x) is a nonzero polynomial of degree n, then f (x) has at most n roots counting multiplicities.
Proof. If n = 0, then f(x) is a constant and has no roots. So the theorem is true if n = 0. (It also holds for n = 1 because, if f(x) = a + bx where b ≠ 0, then the only root is −a/b.) In general, suppose inductively that the theorem holds for some value of n ≥ 0, and let f(x) have degree n + 1. We must show that f(x) has at most n + 1 roots counting multiplicities. This is certainly true if f(x) has no root. On the other hand, if a is a root of f(x), the factor theorem shows that f(x) = (x − a)q(x) for some polynomial q(x), and q(x) has degree n by Theorem D.1. By induction, q(x) has at most n roots. But if b is any root of f(x), then

    (b − a)q(b) = f(b) = 0

so either b = a or b is a root of q(x). It follows that f(x) has at most n + 1 roots counting multiplicities. This completes the induction and so proves Theorem D.4.
As we have seen, a polynomial may have no root, for example f(x) = x² + 1. Of course f(x) has complex roots i and −i, where i is the complex number such that i² = −1. But Theorem D.4 even holds for complex roots: the number of complex roots (counting multiplicities) cannot exceed the degree of the polynomial. Moreover, the fundamental theorem of algebra asserts that the only nonzero polynomials with no complex root are the non-zero constant polynomials. This is discussed more in Appendix A, Theorems A.4 and A.5.
Selected Exercise Answers
Section 1.1

1.1.1 b. 2(2s + 12t + 13) + 5s + 9(−s − 3t − 3) + 3t = −1; (2s + 12t + 13) + 2s + 4(−s − 3t − 3) = 1

1.1.2 b. x = t, y = (1/3)(1 − 2t) or x = (1/2)(1 − 3s), y = s
d. x = 1 + 2s − 5t, y = s, z = t or x = s, y = t, z = (1/5)(1 − s + 2t)

1.1.4 x = (1/4)(3 + 2s), y = s, z = t

1.1.5 a. No solution if b ≠ 0. If b = 0, any x is a solution.
b. x = b/a

1.1.7 b.
    [1 2 0]
    [0 1 1]
d.
    [ 1 1 0 1]
    [ 0 1 1 0]
    [−1 0 1 2]

1.1.8 b. 2x − y = −1
      −3x + 2y + z = 0
      y + z = 3
or 2x₁ − x₂ = −1
   −3x₁ + 2x₂ + x₃ = 0
   x₂ + x₃ = 3

1.1.19 $4.50, $5.20

Section 1.2

1.2.1 b. No, no
d. No, yes
f. No, no

1.2.2 b.
    [0 1 −3 0 0 0  0]
    [0 0  0 1 0 0 −1]
    [0 0  0 0 1 0  0]
    [0 0  0 0 0 1  1]

1.2.3 b. x₁ = 2r − 2s − t + 1, x₂ = r, x₃ = −5s + 3t − 1, x₄ = s, x₅ = −6t + 1, x₆ = t
d. x₁ = −4s − 5t − 4, x₂ = −2s + t − 2, x₃ = s, x₄ = 1, x₅ = t

1.2.4 b. x = −1/7, y = −3/7
d. x = (1/3)(t + 2), y = t
f. No solution

1.2.5 b. x = −15t − 21, y = −11t − 17, z = t
d. No solution
f. x = −7, y = −9, z = 1
h. x = 4, y = 3 + 2t, z = t
1.2.9 b. Unique solution x = −2a + b + 5c, y = 3a − b − 6c, z = −2a + b + c, for any a, b, c.
d. If abc ≠ −1, unique solution x = y = z = 0; if abc = −1 the solutions are x = abt, y = −bt, z = t.
f. If a = 1, solutions x = −t, y = t, z = −1. If a = 0, there is no solution. If a ≠ 1 and a ≠ 0, unique solution x = (a − 1)/a, y = 0, z = −1/a.

1.2.10 b. 1
d. 3
f. 1

1.3.1 h. False. A =
    [1 0 0]
    [0 1 0]
    [0 0 0]

1.3.2 b. a = −3, x = 9t, y = −5t, z = t
d. a = 1, x = −t, y = t, z = 0; or a = −1, x = t, y = 0, z = t

1.3.3 b. Not a linear combination.
d. v = x + 2y − z

1.3.4 b. y = 2a₁ − a₂ + 4a₃.
1.2.11 b. 2
d. 3
f. 2 if a = 0 or a = 2; 3, otherwise.

1.2.12 b. False. A =
    [1 0 1]
    [0 1 1]
    [0 0 0]
d. False. A =
    [1 0 1]
    [0 1 0]
    [0 0 0]
f. False. The system 2x − y = 0, −4x + 2y = 0 is consistent but 2x − y = 1, −4x + 2y = 1 is not.
h. True. A has 3 rows, so there are at most 3 leading 1s.

1.3.5 b. r[−2 1 0 0 0]ᵀ + s[−2 0 −1 1 0]ᵀ + t[−3 0 −2 0 1]ᵀ
d. s[0 2 1 0 0]ᵀ + t[−1 3 0 1 0]ᵀ

1.3.6 b. The system in (a) has nontrivial solutions.

1.3.7 b. By Theorem 1.2.2, there are n − r = 6 − 1 = 5 parameters and thus infinitely many solutions.
d. If R is the row-echelon form of A, then R has a row of zeros and 4 rows in all. Hence R has r = rank A = 1, 2, or 3. Thus there are n − r = 6 − r = 5, 4, or 3 parameters and thus infinitely many solutions.
1.2.14 b. Since one of b − a and c − a is nonzero, then

    [1 a b+c]   [1  a   b+c]   [1 a b+c]   [1 0 b+c+a]
    [1 b c+a] → [0 b−a  a−b] → [0 1  −1] → [0 1   −1 ]
    [1 c a+b]   [0 c−a  a−c]   [0 0   0]   [0 0    0 ]

1.2.18 20 in A, 20 in B, 20 in C.

Section 1.3

1.3.1 b. False. A =
    [1 0 1 0]
    [0 1 1 0]
d. False. A =
    [1 0 1 1]
    [0 1 1 0]
f. False. A =
    [1 0 0]
    [0 1 0]

1.3.9 b. That the graph of ax + by + cz = d contains three points leads to 3 linear equations homogeneous in variables a, b, c, and d. Apply Theorem 1.3.1.

1.3.11 There are n − r parameters (Theorem 1.2.2), so there are nontrivial solutions if and only if n − r > 0.

1.4.1 b. f₁ = 85 − f₄ − f₇
      f₂ = 60 − f₄ − f₇
      f₃ = −75 + f₄ + f₆
      f₅ = 40 − f₆ − f₇
      f₄, f₆, f₇ parameters

1.4.2 b. f₅ = 15, 25 ≤ f₄ ≤ 30

1.4.3 b. CD

Section 1.5
1.5.2 I₁ = −1/5, I₂ = 3/5, I₃ = 4/5

1.5.4 I₁ = 2, I₂ = 1, I₃ = 1/2, I₄ = 3/2, I₅ = 3/2, I₆ = 1/2

Section 1.6

1.6.2 2NH₃ + 3CuO → N₂ + 3Cu + 3H₂O

1.6.4 15Pb(N₃)₂ + 44Cr(MnO₄)₂ → 22Cr₂O₃ + 88MnO₂ + 5Pb₃O₄ + 90NO

Supplementary Exercises for Chapter 1

Supplementary Exercise 1.1. b. No. If the corresponding planes are parallel and distinct, there is no solution. Otherwise they either coincide or have a whole common line of solutions, that is, at least one parameter.

Supplementary Exercise 1.2. b. x₁ = (1/10)(−6s − 6t + 16), x₂ = (1/10)(4s − t + 1), x₃ = s, x₄ = t

Supplementary Exercise 1.3. b. If a = 1, no solution. If a = 2, x = 2 − 2t, y = −t, z = t. If a ≠ 1 and a ≠ 2, the unique solution is x = (8 − 5a)/(3(a − 1)), y = (−2 − a)/(3(a − 1)), z = (a + 2)/3

Supplementary Exercise 1.4.

    [R₁]   [R₁ + R₂]   [R₁ + R₂]   [ R₂]   [R₂]
    [R₂] → [   R₂  ] → [  −R₁  ] → [−R₁] → [R₁]

Supplementary Exercise 1.6. a = 1, b = 2, c = −1

Supplementary Exercise 1.8. The (real) solution is x = 2, y = 3 − t, z = t where t is a parameter. The given complex solution occurs when t = 3 − i is complex. If the real system has a unique solution, that solution is real because the coefficients and constants are all real.

Supplementary Exercise 1.9. b. 5 of brand 1, 0 of brand 2, 3 of brand 3

Section 2.1

2.1.1 b. (a b c d) = (−2, −4, −6, 0) + t(1, 1, 1, 1), t arbitrary
d. a = b = c = d = t, t arbitrary

2.1.2 b. [−14 −20]ᵀ
d. (−12, 4, −12)
f.
    [ 0  1 −2]
    [−1  0  4]
    [ 2 −4  0]
h.
    [ 4 −1]
    [−1 −6]

2.1.3 b.
    [15 −5]
    [10  0]
d. Impossible
f.
    [5  2]
    [0 −1]
h. Impossible

2.1.4 b. [4 1 2]ᵀ

2.1.5 b. A = −(11/3)B

2.1.6 b. X = 4A − 3B, Y = 4B − 5A

2.1.7 b. Y = (s, t), X = (1/2)(1 + 5s, 2 + 5t); s and t arbitrary

2.1.8 b. 20A − 7B + 2C

2.1.9 b. If A = [a b; c d], then (p, q, r, s) = (1/2)(2d, a + b − c − d, a − b + c − d, −a + b + c + d).

2.1.11 b. If A + A′ = 0 then −A = −A + 0 = −A + (A + A′) = (−A + A) + A′ = 0 + A′ = A′

2.1.13 b. Write A = diag(a₁, . . . , aₙ), where a₁, . . . , aₙ are the main diagonal entries. If B = diag(b₁, . . . , bₙ) then kA = diag(ka₁, . . . , kaₙ).

2.1.14 b. s = 1 or t = 0
d. s = 0, and t = 3

2.1.15 b.
    [2  0]
    [1 −1]
d.
    [  2   7]
    [−9/2 −5]

2.1.16 b. A = Aᵀ, so using Theorem 2.1.2, (kA)ᵀ = kAᵀ = kA.

2.1.19 b. False. Take B = −A for any A ≠ 0.
d. True. Transposing fixes the main diagonal.
2.2.1 b. x₁ − 3x₂ − 3x₃ + 3x₄ = 5
      8x₂ + 2x₄ = 1
      x₁ + 2x₂ + 2x₃ = 2
      x₂ + 2x₃ − 5x₄ = 0

2.2.6 We have Ax₀ = 0 and Ax₁ = 0 and so A(sx₀ + tx₁) = s(Ax₀) + t(Ax₁) = s·0 + t·0 = 0.
−3 2 −5
0 1 0
1 −2 −1 2.2.8 b. x = −1 + s 0 +t 2 .
−1 0 1
0 0 0
2.2.2 x1
2 + x2 −2 + x3 7 + 0 0 1
3 −4 9
1 5
−2 −3 1 2 2 0
2.2.10 b. False. = .
x4 =
0 8 2 4 −1 0
12 d. True. The linear
combination x1 a1 + · · · + xn an equals
−2
Ax where A = a1 · · · an by Theorem 2.2.1.
x1
1 2 3 2
2.2.3 b. Ax = x2 = 1 1 −1
0 −4 5 f. False. If A = and x = 0 , then
x 2 2 0
3 1
1 2 3 x + 2x2 + 3x3
x1 + x2 + x3 = 1
0 −4 5 − 4x2 + 5x3 1 1 1
Ax = 6= s +t for any s and t.
x1 4 2 2
3 −4 1 6
x2
d. Ax = 0 2 1 5 x3
1 −1 1
−8 7 −3 0 h. False. If A = , there is a solution
x4
−1 1 −1
3 −4 1 0 1
for b = but not for b = .
= x1 0 + x2 2 + x3 1 + 0 0
−8 7 −3
6 3x1 − 4x2 + x3 + 6x4
x y 0 1 x
x4 5 = 2x 2 + x 3 + 5x 4 2.2.11 b. Here T = = .
y x 1 0 y
0 −8x1 + 7x2 − 3x3
x y 0 1 x
d. Here T = = .
2.2.4 b. To solve Ax = b the reduction is y −x −1 0 y
1 3 2 0 4
1 0 −1 −3 1 →
2.2.13 b. Here
−1 2 3 5 1
1 0 −1 −3 1 x −x −1 0 0 x
0 1 1 1 1 so the general solution is T y = y = 0 1 0 y ,
0 0 0 0 0 z z 0 0 1 z
1 + s + 3t
1−s−t −1 0 0
.
s so the matrix is 0 1 0 .
t 0 0 1
2.2.16 Write A = [a₁ a₂ · · · aₙ] in terms of its columns. If b = x₁a₁ + x₂a₂ + · · · + xₙaₙ where the xᵢ are scalars, then Ax = b by Theorem 2.2.1 where x = [x₁ x₂ · · · xₙ]ᵀ. That is, x is a solution to the system Ax = b.

2.3.6 b. If A = [a b; c d] and E = [0 0; 1 0], compare entries in AE and EA.

2.3.7 b. m × n and n × m for some m and n
2.2.18 b. By Theorem 2.2.3, A(tx1 ) = t(Ax1 ) = t · 0 = 0; 2.3.8 1 0 1 0 1 1
b. i. , ,
that is, tx1 is a solution to Ax = 0. 0 1 0 −1 0 −1
1 0 1 0 1 1
2.2.22 If A is m × n and x and y are n-vectors, we must show ii. , ,
0 0 0 1 0 0
that A(x + y) = Ax + Ay. Denote the columns of A by
T
a1 , a2 , . . . , an , and write x = x1 x2 · · · xn and 1 −2k 0 0
T
y = y1 y2 · · · yn . Then 0 1 0 0
T 2.3.12 b. A2k = 0
for
x + y = x1 + y1 x2 + y2 · · · xn + yn , so 0 1 0
Definition 2.1 and Theorem 2.1.1 give 0 0 0 1
A(x + y) = (x1 + y1)a1 + (x2 + y2 )a2 + · · · + (xn + yn )an = k = 0, 1, 2, . . . ,
(x1 a1 + x2 a2 + · · · + xnan ) + (y1a1 + y2 a2 + · · · + yn an ) = 1 −(2k + 1) 2 −1
0 1 0 0
Ax + Ay. A2k+1 = A2k A = for
0 0 −1 1
Section 2.3 0 0 0 1
k = 0, 1, 2, . . .
−1 −6 −2
2.3.1 b.
0 6 10 I 0
2.3.13 b. = I2k
d. −3 −15 0 I
f. [−23] d. 0k
m
1 0 X 0 0 X m+1
h. f. if n = 2m; if
0 1 0 Xm Xm 0
n = 2m + 1
aa′ 0 0
j. 0 bb′ 0
2.3.14 b. If Y is row i of the identity matrix I, then YA is
0 0 cc′
row i of IA = A.
−1 4 −10 2 7 −6
2.3.2 b. BA = ,B = , 2.3.16 b. AB − BA
1 2 4 −1 6
−2 12 d. 0
CB = 2 −6
1 6 2.3.18 b. (kA)C = k(AC) = k(CA) = C(kA)
2 4 8
4 10
AC = , CA = −1 −1 −5 2.3.20 We have AT = A and BT = B, so (AB)T = BT AT = BA.
−2 −1
1 4 2 Hence AB is symmetric if and only if AB = BA.
d. True. Since AT = A, we have 0 0 1 −2
−1 −2 −1 −3
(I + AT = I T + AT = I + A.
j.
1 2 1 2
0 1 0 −1 0 0
f. False. If A = , then A 6= 0 but A2 = 0.
0 0
1 −2 6 −30 210
h. True. We have A(A + B) = (A + B)A; that is, 0 1 −3 15 −105
A2 + AB = A2 + BA. Subtracting A2 gives AB = BA.
l.
0 0 1 −5 35
1 −2 2 4 0 0 0 1 −7
j. False. A = ,B=
2 4 1 2 0 0 0 0 1
l. False. See (j).
x 0 1 −3 4 −3
2.4.3 b. == 15
5
2.3.28 b. If A = [ai j ] and B = [bi j ] and y 1 −2 1 −2
∑ j ai j = 1 = ∑ j bi j , then the (i, j)-entry of AB is
x 9 −14 6 1
ci j = ∑k aik bk j , whence
d. y = 15 4 −4 1 −1 =
∑ j ci j = ∑ j ∑k aik bk j = ∑k aik (∑ j bk j ) = ∑k aik = 1. 15 −5 0
Alternatively: If e = (1, 1, . . . , 1), then the rows of A z −10
23
sum to 1 if and only if Ae = e. If also Be = e then 1
5 8
(AB)e = A(Be) = Ae = e.
−25
2.3.30 b. If A = [ai j ], then
tr (kA) = tr [kai j ] = ∑ni=1 kaii = k ∑ni=1 aii = k tr (A). 4 −2 1
h i 2.4.4 b. B = A−1 AB = 7 −2 4
e. Write AT = a′i j , where a′i j = a ji . Then −1 2 −1
AAT = ∑nk=1 aik a′k j , so
3 −2
tr (AAT ) = ∑ni=1 ∑nk=1 aik a′ki = ∑ni=1 ∑nk=1 a2ik . 2.4.5 b. 1
10 1 1
2.3.32 e. Observe that PQ = P2 + PAP − P2AP = P, so 1 0 1
2 d. 2
Q = PQ + APQ − PAPQ = P + AP − PAP = Q. 1 −1
1 2 0
2.3.34 b. (A + B)(A − B) = A2 − AB + BA − B2,
and f. 2 −6 1
(A − B)(A + B) = A2 + AB − BA − B2. These are equal
if and only if −AB + BA = AB − BA; that is, 1 1 1
h. − 2
2BA = 2AB; that is, BA = AB. 1 0
2.4.11 b. (i) Inconsistent. 2.4.33 b. Given ABAB = AABB. Left multiply by A−1 ,
x1 2 then right multiply by B−1 .
(ii) =
x2 −1
2.4.34 If Bx = 0 where x is n × 1, then ABx = 0 so x = 0 as
0 1 AB is invertible. Hence B is invertible by Theorem 2.4.5, so
2.4.15 b. B4 = I, so B−1 = B3 =
−1 0 A = (AB)B−1 is invertible.
c2 − 2 −c 1
−1
2.4.16 −c 1 0
2 2.4.35 b. B 3 = 0 so B is not invertible by
3−c c −1
−1
Theorem 2.4.5.
2.4.18 b. If column j of A is zero, Ay = 0 where y is
column j of the identity matrix. Use Theorem 2.4.5.
2.4.38 b. Write U = In − 2XX T . Then
d. If each column of A sums to 0, XA = 0 where X is the
UT = InT − 2X TT X T = U, and
row of 1s. Hence AT X T = 0 so A has no inverse by
U = In2 − (2XX T )In − In(2XX T ) + 4(XX T )(XX T ) =
2
Theorem 2.4.5 (X T 6= 0).
In − 4XX T + 4XX T = In .
2 −1 0
−1 0
2.4.26 b. −5 3 0 2.5.2 b.
0 1
−13 8 −1
1 −1
1 −1 −14 8 d.
−1 2 16 −9 0 1
d.
0
0 2 −1 0 1
0 0 1 −1 f.
1 0
1 −2 1 0 1 0 2.5.17
r
b. (i) A ∼ A because A = IA. (ii) If A ∼ B, then
r
2.5.6 b.
0 1 0 12 −5 1 r
A = UB, U invertible, so B = U −1 A. Thus B ∼ A. (iii)
1 0 7 r r
If A ∼ B and B ∼ C, then A = UB and B = VC, U and
A= . Alternatively,
0 1 −3 V invertible. Hence A = U(VC) = (UV )C, so A ∼ C.
r
1 0 1 −1 1 0
0 21 0 1 −5 1 r
b. If B ∼ A,let B = UA, U
2.5.19 invertible.If
1 0 7 d b 0 0 b
A= . U= , B = UA = where b
0 1 −3 −b d 0 0 d
and d are not both zero (as U is invertible). Every
1 2 0 1 0 0 1 0 0 such matrix B arises in this way: Use
d. 0 1 0 0 15 0 0 1 0 a b
0 0 1 0 −1 1 U= –it is invertible by Example 2.3.5.
0 0 1 −b a
1 0 0 1 0 0
0 1 0 −3 1 0 2.5.22 b. Multiply column i by 1/k.
−2 0 1 0 0 1
1 1 Section 2.6
0 0 1 1 0 5 5
0 1 0 A = 7 2 5 3 2
0 1 −5 −5
2.6.1 b. 6 = 3 2 − 2 0 , so
1 0 0
0 0 0 0 −13 −1 5
5 3 2
T 6 = 3T 2 − 2T 0 =
1 1 1 1 0 1 −13 −1 5
2.5.7 b. U = =
1 0 0 1 1 0 3 −1 11
3 −2 =
5 2 11
0 1 1 0 1 0
2.5.8 b. A = 5
1 0 2 1 0 −1 4
−1
1 2 b. As in 1(b), T 2 .
2.6.2 2 =
0 1 −9
−4
1 0 0 1 0 0
d. A = 0 1 0 0 1 0 2.6.3 b. T (e1 ) = −e2 and T (e2 ) = −e1 . So
−2 0
1 0 21 A
T (e1 ) T (e2 ) = −e2 −e1 =
1 0 −3 1 0 0 −1 0
0 1 .
0 0 1 4 0 −1
0 0 1 0 0 1 √ √
2
− 2
d. T (e1 ) = √2 and T (e2 ) = √2
2 2
2.5.10 UA = R by Theorem 2.5.1, so A = U −1 R. 2 2
√
2 1 −1
So A = T (e1 ) T (e2 ) = 2 .
1 1
2.5.12 b. U = A−1 , V = I 2 ; rank A = 2
−2 1 0 2.6.4 b. T (e1 ) = −e1 , T (e2 ) = e2 and T (e3 ) = e3 .
d. U = 3 −1 0 , Hence
Theorem 2.6.2 gives
2 −1 1 A
T (e1 ) T (e2 ) T (e3 ) = −e1 e2 e3 =
1 0 −1 −3 −1 0 0
0 1 1 4 0 1 0 .
V = 0 0
; rank A = 2
1 0 0 0 1
0 0 0 1
2.5.16 Write U⁻¹ = EₖEₖ₋₁ · · · E₂E₁, Eᵢ elementary. Then [I U⁻¹A] = [U⁻¹U U⁻¹A] = U⁻¹[U A] = EₖEₖ₋₁ · · · E₂E₁[U A]. So [U A] → [I U⁻¹A] by row operations (Lemma 2.5.1).

2.6.5 b. We have y₁ = T(x₁) for some x₁ in Rⁿ, and y₂ = T(x₂) for some x₂ in Rⁿ. So ay₁ + by₂ = aT(x₁) + bT(x₂) = T(ax₁ + bx₂). Hence ay₁ + by₂ is also in the image of T.
1 −1
1 1
√1 = (x1 + y1 ) + (x2 + y2 ) + · · · + (xn + yn )
2.6.8 b. A = , rotation through θ = − π4 .
−1 1 2
= (x1 + x2 + · · · + xn) + (y1 + y2 + · · · + yn )
1 −8 −6 = T (x) + T (y)
d. A = 10 , reflection in the line y = −3x.
−6 8
Similarly, T (ax) = aT (x) for any scalar a, so T is linear. By
cos θ 0 − sin θ Theorem 2.6.2, T has matrix
2.6.10 b. 0 1 0 A = T (e1 ) T (e2 ) · · · T (en ) = 1 1 · · · 1 , as
sin θ 0 cos θ before.
2.6.17 b. If B2 = I then
w1
T 2 (x) = T [T (x)] = B(Bx) = B2 x = Ix = x = 1R2 (x) w2
for all x in Rn . Hence T 2 = 1R2 . If T 2 = 1R2 , then
where w = .. . Since this holds for all x in Rn , it
B2 x = T 2 (x) = 1R2 (x) = x = Ix for all x, so B2 = I by .
Theorem 2.2.6. wn
shows that T = TW . This also follows from
b. The Theorem 2.6.2, but we have first to verify that T is
2.6.18 matrix ofQ1 ◦Q0 is
0 1 1 0 0 −1 linear. (This comes to showing that
= , which is the w · (x + y) = w · s + w · y and w · (ax) = a(w · x) for all
1 0 0 −1 1 0
matrix of R π . x and y in Rn and all a in R.) Then T has matrix
2 A = T (e ) T (e ) · · · T (e ) =
1 2 n
d. The matrix of Q0 ◦ R π is w1 w2 · · · wn by Theorem 2.6.2. Hence if
2
1 0 0 −1 0 −1 x1
= , which is x2
0 −1 1 0 −1 0
the matrix of Q−1 . x = . in R, then T (x) = Ax = w · x, as required.
..
xn
2.6.20 We have
x1
x2 2.6.23 b. Given x in R and a in R, we have
T (x) = x1 + x2 + · · · + xn = 1 1 ··· 1 .. , so T (S ◦ T )(ax) = S [T (ax)] Definition of S ◦ T
.
= S [aT (x)] Because T is linear.
xn = a [S [T (x)]] Because S is linear.
is the matrix transformation
induced by the matrix = a [S ◦ T (x)] Definition of S ◦ T
A = 1 1 · · · 1 . In particular, T is linear. On the other
hand, we can use Theorem 2.6.2 to get A, but to do this we
mustfirst show
directly thatT is linear. If we write Section 2.7
x1 y1
x2 y2 1 2 1
2 0 0
x = . and y = . . Then
.. .. 2.7.1 b. 1 −3 0 0 1 − 23
xn yn −1 9 1
0 0 0
x1 + y1
x2 + y2 −1 0 0 0 1 3 −1 0 1
1 1 0 0 0 1 2 1 0
T (x + y) = T .. d.
. 1 −1 1 0 0 0 0 0 0
xn + yn 0 −2 0 1 0 0 0 0 0
1 1 −1 2 1 2.7.9 b. Let A = LU = L1U1 be two such factorizations.
2 0 0 0
Then UU1−1 = L−1 L1 ; write this matrix as
1 −2 0 0 0 1 − 12 0 0
f. D = UU1−1 = L−1 L1 . Then D is lower triangular
3 −2 1 0
0 0 0 0
0 (apply Lemma 2.7.1 to D = L−1 L1 ); and D is also
0 2 0 1
upper triangular (consider UU1−1 ). Hence D is
0 0 0 0 0
diagonal, and so D = I because L−1 and L1 are unit
triangular. Since A = LU; this completes the proof.
0 0 1
2.7.2 b. P = 1 0 0
0 1 0 Section 2.8
−1 2 1
PA = 0 −1 2 t
0 0 4 2.8.1 b. 3t
t
−1 0 0 1 −2 −1
= 0 −1 0 0 1 2
14t
0 0 4 0 0 1 17t
d.
1 0 0 0 47t
0 0 1 0 23t
d. P =
0 0 0 1
0 1 0 0 t
−1 −2 3 0
1 2.8.2 t
1 −1 3
PA =
2
t
5 −10 1
2 4 −6 5
−1 0 0 0 1 2 −3 0 bt
1 −1 0 2.8.4 P = is nonzero (for some t) unless b = 0
0 0 1 −2 −3 (1 − a)t
= 2
1 −2 0 0 0 1 −2 1
and a = 1. In that case, is a solution. If the entries of E
2 0 0 5 0 0 0 1 1
b
−1 + 2t are positive, then P = has positive entries.
−1 1−a
−t
2.7.3 b. y = 0 x =
s and t arbitrary
s
0 0.4 0.8
t 2.8.7 b.
0.7 0.2
2 8 − 2t
8 6−t
d. y =
−1 x = −1 − t t arbitrary a b 1 − a −b
2.8.8 If E = , then I − E = , so
0 t c d −c 1 − d
det (I − E) = (1 − a)(1 − d) − bc = 1 − tr E + det E. If
R1 R1 + R2 R1 + R2 −1 1 1−d b
2.7.5 → → → det (I − E) 6= 0, then (I − E) = det (I−E) ,
R R −R1 c 1−a
2 2
R2 R2 so (I − E)−1 ≥ 0 if det (I − E) > 0, that is, tr E < 1 + det E.
→
−R1 R1 The converse is now clear.
1 2 3 d. −1
2.9.2 b. 3 , 8
1
f. −39
1
d. 13 1 , 0.312 h. 0
1 j. 2abc
5 l. 0
1
f. 20 7 , 0.306
n. −56
8
p. abcd
2.9.4 b. 50% middle, 25% upper, 25% lower

2.9.6 (7/16, 9/16)

2.9.8 a. 7/75
b. He spends most of his time in compartment 3; steady state (1/16)[3 2 5 4 2]ᵀ.

2.9.12 a. Direct verification.
b. Since 0 < p < 1 and 0 < q < 1 we get 0 < p + q < 2 whence −1 < p + q − 1 < 1. Finally, −1 < 1 − p − q < 1, so (1 − p − q)ᵐ converges to zero as m increases.

3.1.5 b. −17
d. 106

3.1.6 b. 0

3.1.7 b. 12

3.1.8 b. det[2a+p 2b+q 2c+r; 2p+x 2q+y 2r+z; 2x+a 2y+b 2z+c]
    = 3 det[a+p+x b+q+y c+r+z; 2p+x 2q+y 2r+z; 2x+a 2y+b 2z+c]
    = 3 det[a+p+x b+q+y c+r+z; p−a q−b r−c; x−p y−q z−r]
    = 3 det[3x 3y 3z; p−a q−b r−c; x−p y−q z−r] · · ·
Supplementary Exercises for Chapter 2
Supplementary Exercise 2.2. b. 1 1
3.1.9 b. False. A =
U −1 = 41 (U 2 − 5U + 11I). 2 2
2 0 1 0
b. If xk = xm , then d. False. A = →R=
Supplementary Exercise 2.4. 0 1 0 1
y + k(y − z) = y + m(y − z). So (k − m)(y − z) = 0.
But y − z is not zero (because y and z are distinct), so 1 1
f. False. A =
k − m = 0 by Example 2.1.7. 0 1
1 1 1 0
h. False. A = and B =
Supplementary Exercise 2.6. d. Using parts (c) and (b) 0 1 1 1
gives I pqAIrs = ∑ni=1 ∑nj=1 ai j I pq Ii j Irs . The only
nonzero term occurs when i = q and j = r, so 3.1.10 b. 35
I pq AIrs = aqr I ps .
3.1.11 b. −6
Supplementary Exercise 2.7. b. If d. −6
A = [ai j ] = ∑i j ai j Ii j , then I pq AIrs = aqr I ps by 6(d).
But then aqr I ps = AI pq Irs = 0 if q 6= r, so aqr = 0 if
3.1.14 b. −(x − 2)(x2 + 2x − 12)
q 6= r. If q = r, then aqq I ps = AI pq Irs = AI ps is
independent of q. Thus aqq = a11 for all q.
3.1.15 b. −7
√
Section 3.1 3.1.16 b. ± 6
2
3.1.1 b. 0 d. x = ±y
x1 y1 d. det A = 1
x2 y2
3.1.21 Let x = .. , y = .. and f. det A = 0 if n is odd; nothing can be said if n is even
. .
xn yn
3.2.15 dA where d = det A
A = c1 · · · x + y · · · cn where x + y is in column j.
Expanding det A along column j (the one containing x + y):
1 0 1
3.2.19 b. 1c 0 c 1 , c 6= 0
n −1 c 1
T (x + y) = det A = ∑ (xi + yi )ci j (A)
i=1 8 − c2 2
−c c − 6
1
n n d. 2 c 1 −c
= ∑ xi ci j (A) + ∑ yi ci j (A) c2 − 10 c 8 − c2
i=1 i=1
= T (x) + T (y) 1−c c2 + 1 −c − 1
f. c31+1 c2 −c c + 1 , c 6= −1
Similarly for T (ax) = aT (x). −c 1 c2 − 1
3.2.34 b. Have ( adj A)A = ( det A)I; so taking inverses, 1 1
3.3.12 is not diagonalizable by Example 3.3.8.
A−1 · ( adj A)−1 = det1 A I. On the other hand, 0 1
A−1 adj (A−1 ) = det (A−1 )I = det1 A I. Comparison 1 1 2 1 −1 0
But = + where
0 1 0 −1 0 2
yields A−1 ( adj A)−1 = A−1 adj (A−1 ), and part (b)
follows. 2 1 1 −1
has diagonalizing matrix P = and
0 −1 0 3
d. Write det A = d, det B = e. By the adjugate formula −1 0
AB adj (AB) = deI, and is already diagonal.
0 2
AB adj B adj A = A[eI] adj A = (eI)(dI) = deI. Done
as AB is invertible.
3.3.14 We have λ 2 = λ for every eigenvalue λ (as λ = 0, 1)
so D2 = D, and so A2 = A as in Example 3.3.9.
Section 3.3
b. crA (x) = det
4 1 3.3.18 [xI −rA]
3.3.1 b. (x − 3)(x + 2); 3; −2; , ; = rn det xr I − A = rn cA xr
−1 1
4 1 3 0
P= ; P−1 AP = . b. If λ 6= 0, Ax = λ x if and only if A−1 x = λ1 x.
−1 1 0 −2 3.3.20
The result follows.
1 −3
d. (x − 2)3; 2; 1 , 0 ; No such P; Not
3.3.21 b. (A3 − 2A − 3I)x = A3 x − 2Ax + 3x =
0 1
λ 3 x − 2λ x + 3x =(λ 3 − 2λ − 3)x.
diagonalizable.
−1 1 3.3.23 b. If Am = 0 and Ax = λ x, x 6= 0, then
f. (x + 1)2(x − 2); −1, −2; 1 , 2 ; No such A x = A(λ x) = λ Ax = λ 2 x. In general, Ak x = λ k x for
2
2 1 all k ≥ 1. Hence, λ m x = Am x = 0x = 0, so λ = 0
P; Not diagonalizable. Note that this matrix and the (because x 6= 0).
matrix in Example 3.3.9 have the same characteristic
polynomial, but that matrix is diagonalizable.
3.3.24 a. If Ax = λ x, then Ak x = λ k x for each k. Hence
−1 1 λ mx = Am x = x, so λ m = 1. As λ is real, λ = ±1 by
h. (x − 1)2(x − 3); 1, 3; 0 , 0 No such P; the Hint. So if P−1 AP = D is diagonal, then D2 = I by
1 1 Theorem 3.3.4. Hence A2 = PD2 P = I.
Not diagonalizable.
3.3.27 a. We have P−1 AP = λ I by the diagonalization
2 algorithm, so A = P(λ I)P−1 = λ PP−1 = λ I.
3.3.2 b. Vk = 73 2k
1 b. No. λ = 1 is the only eigenvalue.
1
d. Vk = 32 3k 0 3.3.31 b. λ1 = 1, stabilizes.
1 1
√
d. λ1 = 24 (3 + 69) = 1.13, diverges.
3.3.4 Ax = λ x if and only if (A − α I)x = (λ − α )x. Same 3.3.34 Extinct if α < 15 , stable if α = 15 , diverges if α > 15 .
eigenvectors.
Section 3.4
−1 1 0
3.3.8 b. P AP = , so
0 2 3.4.1 b. xk = 13 4 − (−2)k
1 0 −1 9 − 8 · 2n 12(1 − 2n)
d. xk = 15 2k+2 + (−3)k
n
A =P P =
0 2n 6(2n − 1) 9 · 2n − 8
1
3.4.2 b. xk = (−1)k + 1
0 1 2
3.3.9 b. A =
0 2
3.4.3 b. xk+4 = xk + xk+2 + xk+3 ; x10 = 169
1
√
√ √
3.4.7 √2 + 3 λ1k + (−2 + 3)λ2k where λ1 = 1 + 3
2 3
Supplementary Exercise 3.2. b. If A is 1 × 1, then
√ AT = A. In general,
and λ2 = 1 − 3. T
det [Ai j ] = det (Ai j )T h= det
i (A ) ji by (a) and
34
k induction. Write AT = a′i j where a′i j = a ji , and
3.4.9 3 − 34 − 12 . Long term 11 13 million tons.
expand det AT along column 1.
n
1 λ λ
3.4.11 b. A λ = λ2 = λ2 = det AT = ∑ a′j1(−1) j+1 det [(AT ) j1 ]
j=1
2 2
λ a + b λ + cλ λ3 n
1 = ∑ a1 j (−1)1+ j det [A1 j ] = det A
λ λ j=1
λ2
where the last equality is the expansion of det A along
row 1.
11 k 11 5
3.4.12 b. xk = 10 3 + 15 (−2) − 6
k
Section 4.1
3.4.13 a. √
4.1.1 b. 6
pk+2 + qk+2 = [apk+1 + bpk + c(k)] + [aqk+1 + bqk ] = √
a(pk+1 + qk+1) + b(pk + qk ) + c(k) d. 5
√
f. 3 6
Section 3.5
−2
1 5 4.1.2 b. 13 −1
3.5.1 b. c1 e−2x ; c1 = − 23 , c2 =
e4x + c2 1
1 −1 3 2
−8 1 1 √
d. c1 10 e−x + c2 −2 e2x + c3 0 e4x ; 4.1.4 b. 2
7 1 1 d. 3
c1 = 0, c2 = − 12 , c3 = 32
4.1.6 b.
−→ −→ −→ 1 − → −
→ −
→ − → −
→
4 t/3
FE = FC + CE = 2 AC + 12 CB = 12 (AC + CB) = 12 AB
3.5.3 b. The solution to (a) is m(t) = 10 5 . Hence
4 t/3
we want t such that 10 5 = 5. We solve for t by 4.1.7 b. Yes
taking natural logarithms:
d. Yes
1
3 ln( 2 )
t= 4 = 9.32 hours. 4.1.8 b. p
ln( 5 )
d. −(p + q).
′ −1 ′ ′
3.5.5 a. If g = Ag, put f = g − A b. Then f = g and
Af = Ag − b, so f′ = g′ = Ag = Af + b, as required. −1 √
4.1.9 b. −1 , 27
5
3.5.6 b. Assume that f1′ = a1 f1 + f2 and f2′ = a2 f1 .
Differentiating gives f1′′ = a1 f1′ + f2 ′ = a1 f1′ + a2 f1 , 0
proving that f1 satisfies Equation 3.15. d. 0 , 0
0
Section 3.6 −2 √
f. 2 , 12
3.6.2 Consider the rows R p , R p+1 , . . . , Rq−1 , Rq . In q − p 2
adjacent interchanges they can be put in the order
R p+1 , . . . , Rq−1 , Rq , R p . Then in q − p − 1 adjacent 4.1.10 b. (i) Q(5, −1, 2) (ii) Q(1, 1, −4).
interchanges we can obtain the order Rq , R p+1 , . . . , Rq−1 , R p .
This uses 2(q − p) − 1 adjacent interchanges in all. −26
4.1.11 b. x = u − 6v + 5w = 4
Supplementary Exercises for Chapter 3 19
641
−→ −
→
a −5 4.1.31 b. CPk = −CPn+k if 1 ≤ k ≤ n, where there are
4.1.12 b. b = 8 2n points.
c 6
−→ −→ −→ −→
4.1.33 DA = 2EA and 2AF = FC, so
3a + 4b + c x1 −→ −→ −→ −→ −→ − → −→ −→ − → −→
2EF = 2(EF + AF) = DA+ FC = CB+ FC = FC + CB = FB.
4.1.13 b. If it holds then −a + c = x2 . −→ 1 −→
Hence EF = 2 FB. So F is the trisection point of both AC and
b+c x
3 EB.
3 4 1 x1 0 4 4 x1 + 3x2
−1 0 1 x2 → −1 0 1 x2
Section 4.2
0 1 1 x3 0 1 1 x3
If there is to be a solution then x1 + 3x2 = 4x3 must 4.2.1 b. 6
hold. This is not satisfied.
d. 0
5 f. 0
1 −5
4.1.14 b. 4
−2 4.2.2 b. π or 180◦
π
d. 3 or 60◦
4.1.17 b. Q(0, 7, 3). 2π
f. 3 or 120◦
−20
4.1.18 1
b. x = 40 −13 4.2.3 b. 1 or −17
14
−1
4.2.4 b. t 1
4.1.20 b. S(−1, 3, 2). 2
1 0
4.1.21 b. T. kv − wk = 0 implies that v − w = 0.
d. s 2 +t 3
d. F. kvk = k − vk for all v but v = −v only holds if 0 1
v = 0.
f. F. If t < 0 they have the opposite direction. 4.2.6 b. 29 + 57 = 86
4.2.14 b. −23x + 32y + 11z = 11 4.2.28 The four diagonals are (a, b, c), (−a, b, c),
(a, −b, c) and (a, b, −c) or their negatives. The dot products
d. 2x − y + z = 5
are ±(−a2 + b2 + c2), ±(a2 − b2 + c2 ), and ±(a2 + b2 − c2 ).
f. 2x + 3y + 2z = 7
h. 2x − 7y − 3z = −1 4.2.34 b. The sum of the squares of the lengths of the
diagonals equals the sum of the squares of the lengths
j. x − y − z = 3 of the four sides.
x 2 2 4.2.38 b. The angle θ between u and (u + v + w) is
4.2.15 b. y = −1 + t 1 given by
z 3 0 u·(u+v+w)
cos θ = kukku+v+wk = √ 2 kuk 2 = √13 because
2
kuk +kvk +kwk
x 1 1 kuk = kvk = kwk. Similar remarks apply to the other
d. y = 1 +t 1 angles.
z −1 1
x 1 4 4.2.39 b. Let p0 , p1 be the vectors of P0 , P1 , so
f. y = 1 +t 1 u = p0 − p1 . Then u · n = p0 · n –
z 2 −5 p1 · n = (ax0 + by0) − (ax1 + by1) = ax0 + by0 + c.
Hence the distance is
√
b. 6
Q( 73 , 23 , −2
u·n
|u·n|
4.2.16 3 , 3 )
knk2 n
= knk
4.2.19 b. (−2, 7, 0) + t(3, −5, 2) 4.2.41 b. This follows from (a) because
kvk2 = a2 + b2 + c2 .
4.2.20 b. None
x1 x x2 y
d. P( 13 −78 65
19 , 19 , 19 ) 4.2.44 d. Take y1 = y and y2 = z
z1 z z2 x
4.2.21 b. 3x + 2z = d, d arbitrary in (c).
u1 v1 w1 x
4.4.9 a. Write v = .
4.3.15 b. If u = u2 , v = v2 and w = w2 , y
u3 v3 w3
i u1 v1 + w1 v·d ax+by a
then u × (v + w) = det j u2 v2 + w2 PL (v) = kdk2
d= a2 +b2 b
k u3 v3 + w3
1 a2 x + aby
i u 1 v1 i u1 w1 = a2 +b2abx + b2y
= det j u2 v2 + det j u2 w2 2
k u 3 v3 k u3 w3 1 a + ab x
= a2 +b2
= (u × v) + (u × w) where we used Exercise 4.3.21. ab + b2 y
cos θ 0 − sin θ 5.1.15 b. x = (x + y) − y = (x + y) + (−y) is in U
4.4.6 0 1 0 because U is a subspace and both x + y and
sin θ 0 cos θ −y = (−1)y are in U.
644 Polynomials
5.1.16 b. True. x = 1x is in U.
1 1
1 −1
d. True. Always span {y, z} ⊆ span {x, y, z} by 5.2.4 b. 0 1 ; dimension 2.
,
Theorem 5.1.1. Since x is in span {x, y} we have
1 0
span {x, y, z} ⊆ span {y, z}, again by Theorem 5.1.1.
1 −1 0
1 2 a + 2b 1 1
f. False. a +b = cannot equal 0
0 0 0 d. , , ; dimension 3.
1 0 0
0
0 1 1
.
1
−1 1 1
f. 1 , 0 , 0 ; dimension 3.
5.1.20 If U is a subspace, then S2 and S3 certainly hold. 0 1 0
Conversely, assume that S2 and S3 hold for U. Since U is 0 0 1
nonempty, choose x in U. Then 0 = 0x is in U by S3, so S1
also holds. This means that U is a subspace. 5.2.5 b. If r(x + w) + s(y + w) + t(z + w) + u(w) = 0,
then rx + sy + tz + (r + s + t + u)w = 0, so r = 0,
5.1.22 b. The zero vector 0 is in U + W because s = 0, t = 0, and r + s + t + u = 0. The only solution is
0 = 0 + 0. Let p and q be vectors in U + W , say r = s = t = u = 0, so the set is independent. Since
p = x1 + y1 and q = x2 + y2 where x1 and x2 are in U, dim R4 = 4, the set is a basis by Theorem 5.2.7.
and y1 and y2 are in W . Then
p + q = (x1 + x2 ) + (y1 + y2 ) is in U + W because 5.2.6 b. Yes
x1 + x2 is in U and y1 + y2 is in W . Similarly,
a(p + q) = ap + aq is in U + W for any scalar a d. Yes
because ap is in U and aq is in W . Hence U + W is f. No.
indeed a subspace of Rn .
5.2.7 b. T. If ry + sz = 0, then 0x + ry + sz = 0 so
Section 5.2 r = s = 0 because {x, y, z} is independent.
d. F. If x 6= 0, take k = 2, x1 = x and x2 = −x.
1 1 0 0
5.2.1 b. Yes. If r 1 + s 1 + t 0 = 0 , f. F. If y = −x and z = 0, then 1x + 1y + 1z = 0.
1 1 1 0 h. T. This is a nontrivial, vanishing linear combination,
then r + s = 0, r − s = 0, and r + s + t = 0. These
so the xi cannot be independent.
equations give r = s = t = 0.
d.
No. Indeed:
5.2.10 If rx2 + sx3 + tx5 = 0 then
1 1 0 0 0 0x1 + rx2 + sx3 + 0x4 + tx5 + 0x6 = 0 so r = s = t = 0.
1 0 0 1 0
+ − =
0 − 1 1 0 0 .
5.2.12 If t1 x1 + t2 (x1 + x2 ) + · · · + tk (x1 + x2 + · · · + xk ) = 0,
0 0 1 1 0
then (t1 + t2 + · · · + tk )x1 + (t2 + · · · + tk )x2 + · · · + (tk−1 +
tk )xk−1 + (tk )xk = 0. Hence all these coefficients are zero, so
5.2.2 b. Yes. If r(x + y) + s(y + z) + t(z + x) = 0, then we obtain successively tk = 0, tk−1 = 0, . . . , t2 = 0, t1 = 0.
(r + t)x + (r + s)y + (s + t)z = 0. Since {x, y, z} is
independent, this implies that r + t = 0, r + s = 0, and 5.2.16 b. We show AT is invertible (then A is invertible).
s + t = 0. The only solution is r = s = t = 0. Let AT x = 0 where x = [s t]T . This means as + ct = 0
d. No. In fact, (x + y) − (y + z) + (z + w) − (w + x) = 0. and bs + dt = 0, so
s(ax + by) + t(cx + dy) = (sa + tc)x + (sb + td)y = 0.
Hence s = t = 0 by hypothesis.
2 −1
1
1
5.2.3 b.
, ; dimension 2. 5.2.17 b. Each V −1 xi is in null (AV ) because
0 1
AV (V −1 xi ) = Axi = 0. The set {V −1 x1 , . . . , V −1 xk } is
−1 1
independent as V −1 is invertible. If y is in null (AV ),
−2 1 then V y is in null (A) so let V y = t1 x1 + · · · + tk xk
0 where each tk is in R. Thus
d. , 2 ; dimension 2.
3 −1 y = t1V −1 x1 + · · · + tkV −1 xk is in
1 0 span {V −1 x1 , . . . , V −1 xk }.
645
5.2.20 We have {0} ⊆ U ⊆ W where dim {0} = 0 and 5.3.12 b. We have (x + y) · (x − y) = kxk2 − kyk2 .
dim W = 1. Hence dim U = 0 or dim U = 1 by Hence (x + y) · (x − y) = 0 if and only if kxk2 = kyk2 ;
Theorem 5.2.8, that is U = 0 or U = W , again by if and only if kxk = kyk—where we used the fact that
Theorem 5.2.8. kxk ≥ 0 and kyk ≥ 0.
−1 5.4.3 b. No; no
3
b. t d. No
5.3.5 10 , in R
11 f. Otherwise, if A is m × n, we have
m = dim ( row A) = rank A = dim ( col A) = n
√
5.3.6 b. 29
5.4.4 Let A = c1 . . . cn . Then
d. 19 col A = span {c1 , . . . , cn } = {x1 c1 + · · · + xn cn | xi in R} =
{Ax | x in Rn }.
1 0
5.3.7 b. F. x = and y = .
0 1
6 5
d. T. Every xi · y j = 0 by assumption, every xi · x j = 0 if
0 0
i 6= j because the xi are orthogonal, and every 5.4.7
b. The basis is −4 , −3 so the
yi · y j = 0 if i 6= j because the yi are orthogonal. As all
1 0
the vectors are nonzero, this does it.
0 1
dimension is 2.
f. T. Every pair of distinct vectors in the set {x} has dot
product zero (there are no such pairs). Have rank A = 3 and n − 3 = 2.
5.4.10 b. Write r = rank A. Then (a) gives 5.5.13 b. If A is diagonalizable, so is AT , and they have
r = dim ( col A ≤ dim ( null A) = n − r. the same eigenvalues. Use (a).
5.4.12 We have rank (A) = dim [ col (A)] and 5.5.17 b. cB (x) = [x − (a + b + c)][x2 − k] where
rank (AT ) = dim [ row (AT )]. Let {c1 , c2 , . . . , ck } be a basis k = a2 + b2 + c2 − [ab + ac + bc]. Use Theorem 5.5.7.
of col (A); it suffices to show that {cT1 , cT2 , . . . , cTk } is a basis
of row (AT ). But if t1 cT1 + t2 cT2 + · · · + tk cTk = 0, t j in R, then Section 5.6
(taking transposes) t1 c1 + t2 c2 + · · · + tk ck = 0 so each t j = 0. −20
Hence {cT1 , cT2 , . . . , cTk } is independent. Given v in row (AT ) 5.6.1 b. 121
46 , (AT A)−1
then vT is in col (A); say vT = s1 c1 + s2 c2 + · · · + sk ck , s j in 95
R: Hence v = s1 cT1 + s2 cT2 + · · · + sk cTk , so {cT1 , cT2 , . . . , cTk }
8 −10 −18
spans row (AT ), as required. 1
= 12 −10 14 24
−18 24 43
5.4.15 b. Let {u1 , . . . , ur } be a basis of col (A). Then b
64 6
is not in col (A), so {u1 , . . . , ur , b} is linearly 5.6.2 b.
13 − 13 x
independent. Show that 4 17
d. − 10 − 10 x
col [A b] = span {u1 , . . . , ur , b}.
b.y = 0.127 − 0.024x + 0.194x 2 −1
, (M M) =
5.6.3 T
Section 5.5 3348 642 −426
1
4248 642 571 −187
5.5.1 b. traces = 2, ranks = 2, but det A = −5, −426 −187 91
det B = −1
1 2
d. ranks = 2, determinants = 7, but tr A = 5, tr B = 4 5.6.4 b. 92 (−46x + 66x + 60 · 2x), (M T M)−1 =
115 0 −46
f. traces = −5, determinants = 0, but rank A = 2, 1 0 17 −18
46
rank B = 1 −46 −18 38
5.7.2 Let X denote the number of years of education, and let l. No; only S3 fails.
Y denote the yearly income (in 1000’s). Then x = 15.3, n. No; only S4 and S5 fail.
s2x = 9.12 and sx = 3.02, while y = 40.3, s2y = 114.23 and
sy = 10.69. The correlation is r(X, Y ) = 0.599. 6.1.4 The zero vector is (0, −1); the negative of (x, y) is
(−x, −2 − y).
x1
x2 6.1.5 b. x = 71 (5u − 2v), y = 17 (4u − 3v)
5.7.4 b. Given the sample vector x = .. , let
.
xn 6.1.6 b. Equating entries gives a + c = 0, b + c = 0,
b + c = 0, a − c = 0. The solution is a = b = c = 0.
z1
z2 d. If a sin x + b cosy + c = 0 in F[0, π ], then this must
z= .. where zi = a + bxi for each i. By (a) we hold for every x in [0, π ]. Taking x = 0, π2 , and π ,
.
zn respectively, gives b + c = 0, a + c = 0, −b + c = 0
have z = a + bx, so whence, a = b = c = 0.
s2z = 1
n−1 ∑(zi − z)2 6.1.7 b. 4w
i
= 1
n−1 ∑[(a + bxi) − (a + bx)]2 6.1.10 If z + v = v for all v, then z + v = 0 + v, so z = 0 by
i
cancellation.
= 1
n−1 ∑ b2(xi − x)2
i
6.1.12 b. (−a)v + av = (−a + a)v = 0v = 0 by
= b2 s2x . Theorem 6.1.3. Because also −(av) + av = 0 (by the
√ definition of −(av) in axiom A5), this means that
Now (b) follows because b2 = |b|. (−a)v = −(av) by cancellation. Alternatively, use
Theorem 6.1.3(4) to give
Supplementary Exercises for Chapter 5 (−a)v = [(−1)a]v = (−1)(av) = −(av).
Supplementary Exercise 5.1. b. F 6.1.13 b. The case n = 1 is clear, and n = 2 is axiom S3.
d. T If n > 2, then
(a1 + a2 + · · · + an )v = [a1 + (a2 + · · · + an)]v =
f. T a1 v + (a2 + · · · + an)v = a1 v + (a2 v + · · · + anv) using
h. F the induction hypothesis; so it holds for all n.
f. Yes. −1 0 1 −1 1 1
d. 2 + + =
0 −1 −1 1 1 1
0 0
6.2.5 b. If entry k of x is xk 6= 0, and if y is in Rn , then 0 0
y = Ax where the column of A is x−1 k y, and the other
5 1
columns are zero. f. x2 +x−6
+ x2 −5x+6 − x26−9 = 0
l. Yes. u + v + w 6= 0 because {u, v, w} is independent. 6.4.5 b. The polynomials in S have distinct degrees.
n. Yes. If I is independent, then |I| ≤ n by the
fundamental theorem because any basis spans V . 6.4.6 b. {4, 4x, 4x2 , 4x3 } is one such basis of P3 .
However, there is no basis of P3 consisting of
polynomials that have the property that their
6.3.15 If a linear combination of the subset vanishes, it is a coefficients sum to zero. For if such a basis exists,
linear combination of the vectors in the larger set (coefficients then every polynomial in P3 would have this property
outside the subset are zero) so it is trivial. (because sums and scalar multiples of such
polynomials have the same property).
6.3.19 Because{u, v} islinearly
independent,
su′ + tv′ = 0
a c s 0 6.4.7 b. Not a basis.
is equivalent to = . Now apply
b d t 0
Theorem 2.4.5. d. Not a basis.
6.4.4 b. If z = a + bi, then a 6= 0 and b 6= 0. If 6.5.10 b. If r(x − a)2 + s(x − a)(x − b) + t(x − b)2 = 0,
rz + sz = 0, then (r + s)a = 0 and (r − s)b = 0. This then evaluation at x = a(x = b) gives t = 0(r = 0).
means that r + s = 0 = r − s, so r = s = 0. Thus {z, z} Thus s(x − a)(x − b) = 0, so s = 0. Use
is independent; it is a basis because dim C = 2. Theorem 6.4.4.
650 Polynomials
6.5.11 b. Suppose {p0 (x), p1 (x), . . . , pn−2 (x)} is a f. T [(p + q)(x)] = (p + q)(0) = p(0) + q(0) =
basis of Pn−2 . We show that T [p(x)] + T [q(x)];
{(x − a)(x − b)p0(x), (x − a)(x − b)p1(x), . . . , (x − T [(rp)(x)] = (rp)(0) = r(p(0)) = rT [p(x)]
a)(x − b)pn−2(x)} is a basis of Un . It is a spanning set
h. T (X +Y ) = (X +Y ) · Z = X · Z +Y · Z = T (X) + T (Y ),
by part (a), so assume that a linear combination
and T (rX) = (rX) · Z = r(X · Z) = rT (X)
vanishes with coefficients r0 , r1 , . . . , rn−2 . Then
(x − a)(x − b)[r0 p0 (x) + · · · + rn−2 pn−2 (x)] = 0, so j. If v = (v1 , . . . , vn ) and w = (w1 , . . . , wn ), then
r0 p0 (x) + · · · + rn−2 pn−2 (x) = 0 by the Hint. This T (v + w) = (v1 + w1 )e1 + · · · + (vn + wn )en = (v1 e1 +
implies that r0 = · · · = rn−2 = 0. · · · + vn en ) + (w1 e1 + · · · + wn en ) = T (v) + T (w)
T (av) = (av1 )e + · · · + (avn)en = a(ve + · · · + vn en ) =
aT (v)
Section 6.6
7.2.1
b. 7.2.12 The condition means ker (TA ) ⊆ ker (TB ), so
−3 1 dim [ ker (TA )] ≤ dim [ ker (TB )]. Then Theorem 7.2.4 gives
7 1 1 0
dim [ im (TA )] ≥ dim [ im (TB )]; that is, rank A ≥ rank B.
; 0 , 1 ; 2, 2
1 , 0
1 −1
0 −1
7.2.15 b. B = {x − 1, . . . , xn − 1} is independent
1 0 (distinct degrees) and contained in ker T . Hence B is a
−1
basis of ker T by (a).
0 , 1
d. 2 ; −1
; 2, 1
1
1
1 −2 7.2.20 Define T : Mnn → Mnn by T (A) = A − AT for all A in
Mnn . Then ker T = U and im T = V by Example 7.2.3, so
7.2.2 b. {x2 − x}; {(1, 0), (0, 1)} the dimension theorem gives
n2 = dim Mnn = dim (U) + dim (V ).
d. {(0, 0, 1)}; {(1, 1, 0, 0), (0, 0, 1, 1)}
1 0 0 1 0 0 7.2.22 Define T : Mnn → Rn by T (A) = Ay for all A in Mnn .
f. , , ; {1}
0 −1 0 0 1 0 Then T is linear with ker T = U, so it is enough to show that
h. {(1, 0, 0, . . . , 0, −1), (0, 1, 0, . . . , 0, −1), T is onto (then dim U = n2 − dim ( im T ) = n2 − n). We have
T
. . . , (0, 0, 0, . . . , 1, −1)}; {1} T (0) = 0. Let y = y1 y2 · · · yn 6= 0 in Rn . If yk 6= 0
−1
let ck = yk y, and let c j =
0 1 0 0 0 if j 6= k. If
j. , ; A = c1 c2 · · · cn , then
0 0 0 1
T (A) = Ay = y1 c1 + · · · + yk ck + · · · + yncn = y. This shows
1 1 0 0
, that T is onto, as required.
0 0 1 1
7.2.3 b. T (v) = 0 = (0, 0) if and only if P(v) = 0 and 7.2.29 b. By Lemma 6.4.2, let {u1 , . . . , um , . . . , un } be
Q(v) = 0; that is, if and only if v is in ker P ∩ ker Q. a basis of V where {u1 , . . . , um } is a basis of U. By
Theorem 7.1.3 there is a linear transformation
S : V → V such that S(ui ) = ui for 1 ≤ i ≤ m, and
7.2.4 b. ker T = span {(−4, 1, 3)};
S(ui ) = 0 if i > m. Because each ui is in im S,
B = {(1, 0, 0), (0, 1, 0), (−4, 1, 3)},
U ⊆ im S. But if S(v) is in im S, write
im T = span {(1, 2, 0, 3), (1, −1, −3, 0)}
v = r1 u1 + · · · + rm um + · · · + rn un . Then
S(v) = r1 S(u1 ) + · · · + rm S(um ) = r1 u1 + · · · + rm um is
7.2.6 b. Yes. dim ( im T ) = 5 − dim ( ker T ) = 3, so in U. So im S ⊆ U.
im T = W as dim W = 3.
d. No. T = 0 : R2 → R2 Section 7.3
f. No. T : R2 → R2 , T (x, y) = (y, 0). Then
7.3.1 b. T is onto because T (1, −1, 0) = (1, 0, 0),
ker T = im T
T (0, 1, −1) = (0, 1, 0), and T (0, 0, 1) = (0, 0, 1).
h. Yes. dim V = dim ( ker T ) + dim ( im T ) ≤ Use Theorem 7.3.3.
dim W + dim W = 2 dim W
d. T is one-to-one because 0 = T (X) = UXV implies that
j. No. Consider T : R2 → R2 with T (x, y) = (y, 0). X = 0 (U and V are invertible). Use Theorem 7.3.3.
l. No. Same example as (j). f. T is one-to-one because 0 = T (v) = kv implies that
v = 0 (because k 6= 0). T is onto because T 1k v = v
n. No. Define T : R2 → R2by T (x, y) = (x, 0). If
for all v. [Here Theorem 7.3.3 does not apply if dim V
v1 = (1, 0) and v2 = (0, 1), then R2 = span {v1 , v2 }
is not finite.]
but R2 6= span {T (v1 ), T (v2 )}.
h. T is one-to-one because T (A) = 0 implies AT = 0,
7.2.7 b. Given w in W , let w = T (v), v in V , and write whence A = 0. Use Theorem 7.3.3.
v = r1 v1 + · · · + rn vn . Then
w = T (v) = r1 T (v1 ) + · · · + rn T (vn ). 7.3.4 b. ST (x, y, z) = (x + y, 0, y + z),
T S(x, y, z) = (x, 0, z)
7.2.8 b. im T = {∑i ri vi | ri in R} = span {vi }.
a b c 0
d. ST = ,
c d 0 d
7.2.10 T is linear and onto. Hence 1 = dim R = a b 0 a
dim ( im T ) = dim (Mnn ) − dim ( ker T ) = n2 − dim ( ker T ). TS =
c d d 0
652 Polynomials
7.3.5 b. T 2 (x, y) = T (x + y, 0) = (x + y, 0) = T (x, y). 7.3.26 b. If T (p) = 0, then p(x) = −xp′ (x). We write
Hence T 2 = T . p(x) = a0 + a1x + a2x2 + · · · + anxn , and this becomes
a 0 + a 1 x + a 2 x2 + · · · + a n xn =
a b a+c b+d
d. T 2 = 12 T = −a1 x − 2a2x2 − · · · − nanxn . Equating coefficients
c d a+c b+d yields a0 = 0, 2a1 = 0, 3a2 = 0, . . . , (n + 1)an = 0,
1 a+c b+d
2
whence p(x) = 0. This means that ker T = 0, so T is
a+c b+d
one-to-one. But then T is an isomorphism by
Theorem 7.3.3.
7.3.6 b. No inverse; (1, −1, 1, −1) is in ker T .
7.3.27 b. If ST = 1V for some S, then T is onto by
−1 a b 1 3a − 2c 3b − 2d Exercise 7.3.13. If T is onto, let {e1 , . . . , er , . . . , en }
d. T =5 be a basis of V such that {er+1 , . . . , en } is a basis of
c d a+c b+d
ker T . Since T is onto, {T (e1 ), . . . , T (er )} is a basis
f. T −1 (a, b, c) = 12 2a + (b − c)x − (2a − b − c)x2 of im T = W by Theorem 7.2.5. Thus S : W → V is an
isomorphism where by S{T (ei )] = ei for
i = 1, 2, . . . , r. Hence T S[T (ei )] = T (ei ) for each i,
7.3.7 b. that is T S[T (ei )] = 1W [T (ei )]. This means that
T 2 (x, y) = T (ky − x, y) = (ky − (ky − x), y) = (x, y) T S = 1W because they agree on the basis
d. T 2 (X) = A2 X = IX = X {T (e1 ), . . . , T (er )} of W .
f. x =
1 1 −1
12 (5a − 5b + c − 3d, −5a + 5b − c + 3d, a − b + 11c + 8.2.5 b. √1
1 2 1 1
3d, −3a + 3b + 3c + 3d) + 12 (7a + 5b − c + 3d, 5a +
7b + c − 3d, −a + b + c − 3d, 3a − 3b − 3c + 9d) 0 1 1
√
d. √1 2 0 0
2
1 3 0 1 −1
8.1.3 a. 10 (−9, 3, −21, 33) = 10 (−3, 1, −7, 11)
1 3 √
c. 70 (−63, 21, −147, 231) = 10 (−3, 1, −7, 11) 2√2 3 1 2 −2 1
f. 3√1 2 √2 0 −4 or 13 1 2 2
8.1.4 b. {(1, −1, 0), 12 (−1, −1, 2)}; 2 2 −3 1 2 1 −2
projU x = (1, 0, −1) √
1 −1 √2 0
d. {(1, −1, 0, 1), (1, 1, 0, 0), 31 (−1, 1, 0, 2)}; −1 1 2 √0
h. 12
projU x = (2, 0, 0, 1) −1 −1 0 √2
1 1 0 2
8.1.5 b. U ⊥ = span {(1, 3, 1, 0), (−1, 0, 0, 1)}
√
c 2 a a
8.1.8 Write p = projU x. Then p is in U by definition. If x is 8.2.6 P = √12k −k
√0 k
U, then x − p is in U. But x − p is also in U ⊥ by −a 2 c c
Theorem 8.1.3, so x − p is in U ∩U ⊥ = {0}. Thus x = p.
8.1.10 Let {f1 , f2 , . . . , fm } be an orthonormal basis of U. If x 8.2.10 b. y1 = √15 (−x1 + 2x2 ) and y2 = √1 (2x1 + x2 );
5
is in U the expansion theorem gives q = −3y21 + 2y22.
x = (x · f1 )f1 + (x · f2 )f2 + · · · + (x · fm )fm = projU x.
8.2.11 c. ⇒ a. By Theorem 8.2.1 let
8.1.14 Let {y1 , y2 , . . . , ym } be a basis of U ⊥ , and let A be P−1 AP = D = diag (λ1 , . . . , λn ) where the λi are the
the n × n matrix with rows yT1 , yT2 , . . . , yTm , 0, . . . , 0. Then eigenvalues of A. By c. we have λi = ±1 for each i,
Ax = 0 if and only if yi · x = 0 for each i = 1, 2, . . . , m; if and whence D2 = I. But then
only if x is in U ⊥⊥ = U. A2 = (PDP−1 )2 = PD2 P−1 = I. Since A is symmetric
this is AAT = I, proving a.
8.1.17 d. E T = AT [(AAT )− 1]T (AT )T =
AT [(AAT )T ]−1 A
= AT [AAT ]−1 A = E 8.2.13 b. If B = PT AP = P−1 , then
E 2 = AT (AAT )−1 AAT (AAT )−1 A = AT (AAT )−1 A = E B2 = PT APPT AP = PT A2 P.
654 Polynomials
8.2.15 If x and y are respectively columns i and j of In , then AT = A = LDU be such a factorization. Then
xT AT y = xT Ay shows that the (i, j)-entries of AT and A are U T DT LT = AT = A = LDU, so L = U T by (a). Hence
equal. A = LDLT = V T V where V = LD0 and D0 is diagonal
with D20 = D (the matrix D0 exists because D has
cos θ − sin θ positive diagonal entries). Hence A is symmetric, and
8.2.18 b. det =1 it is positive definite by Example 8.3.1.
sin θ cos θ
cos θ sin θ
and det = −1
sin θ − cos θ
Section 8.4
[Remark: These are the only 2 × 2 examples.]
d. Use the fact that P−1 = PT to show that √1
2 −1 √1
5 3
8.4.1 b. Q = ,R=
PT (I − P) = −(I − P)T . Now take determinants and 5 1 2 5 0 1
use the hypothesis that det P 6= (−1)n .
1 1 0
−1 0 1
8.2.21 We have AAT = D, where D is diagonal with main d. Q = √1 ,
3 0 1 1
diagonal entries kR1 k2 , . . . , kRn k2 . Hence A−1 = AT D−1 , 1 −1 1
and the result follows because D−1 has diagonal entries
3 0 −1
1/kR1k2 , . . . , 1/kRnk2 . √1 0 3 1
R= 3
0 0 2
8.2.23 b. Because I − A and I + A commute,
PPT = (I − A)(I + A)−1 [(I + A)−1 ]T (I − A)T =
(I − A)(I + A)−1 (I − A)−1 (I + A) = I. 8.4.2 If A has a QR-factorization, use (a). For the converse
use Theorem 8.4.1.
Section 8.3
√
Section 8.5
2 −1
2
8.3.1 b. U = 2
0 1
√ √ √ 2
8.5.1 b. Eigenvalues 4, −1; eigenvectors ,
60 5 12√ 5 15√5
−1
1
d. U = 30 0 6 30 10√ 30 1 409
; x4 = ; r3 = 3.94
0 0 5 15 −3 −203
√ √
d. Eigenvalues λ1 = 12(3+ 13), 1
λ2 =2 (3 − 13);
8.3.2 b. If λ k > 0, k odd, then λ > 0.
λ1 λ2 142
eigenvectors , ; x4 = ;
1 1 43
8.3.4 If x 6= 0, then xT Ax > 0 and xT Bx > 0. Hence r3 = 3.3027750 (The true value is λ1 = 3.3027756, to
xT (A + B)x = xT Ax + xT Bx > 0 and xT (rA)x = r(xT Ax) > 0, seven decimal places.)
as r > 0.
√
8.3.6 Let x 6= 0 in Rn . Then
xT (U T AU)x = (Ux)T A(Ux) > 0 8.5.2 b. Eigenvalues λ1 = 12 (3 + 13) = 3.302776,
√
provided Ux 6= 0. But if U = c1 c2 . . . cn and λ2 = 12 (3 − 13) = −0.302776
x = (x1 , x2 , . . . , xn ), then Ux = x1 c1 + x2 c2 + · · · + xn cn 6= 0 3 1 3 −1
1
because x 6= 0 and the ci are independent. A1 = , Q1 = 10 √ ,
1 0 1 3
10 3
R1 = √110
8.3.10 Let PT AP = D = diag (λ1 , . . . , λn ) where PT = P. 0 −1
Since A is √
positive definite,
√ each eigenvalue λi > 0. If 1 33 −1
B = diag ( λ1 , . . . , λn ) then B2 = D, so A2 = 10 ,
−1
−3
A = PB2 PT = T 2
√ (PBP ) . Take C = PBP . Since C has
T
1 33 1
eigenvalues λi > 0, it is positive definite. Q2 = √1090 ,
−1 33
1 109 −3
R2 = 1090
√
8.3.12 b. If A is positive definite, use Theorem 8.3.1 to 0 −10
write A = U T U where U is upper triangular with 1 360 1
positive diagonal D. Then A = (D−1U)T D2 (D−1U) so A3 = 109
1 −33
A = L1 D1U1 is such a factorization if U1 = D−1U, 3.302775 0.009174
D1 = D2 , and L1 = U1T . Conversely, let =
0.009174 −0.302775
655
d. Basis {(1, 0, −2i), (0, 1, 1 − i)}; dimension 2 8.8.1 b. 1−1 = 1, 9−1 = 9, 3−1 = 7, 7−1 = 3.
656 Polynomials
d. 21 = 2, 22 = 4, 23 = 8, 24 = 16 = 6, 25 = 12 = 2, 1 3 2
26 = 22 . . . so a = 2k if and only if a = 2, 4, 6, 8. d. A = 3 1 −1
2 −1 3
8.8.2 b. If 2a = 0 in Z10 , then 2a = 10k for some integer
√1
1 1
k. Thus a = 5k. 8.9.2 b. P = ;
2 1 −1
x1 + x2
8.8.3 b. 11−1 = 7 in Z19 . y = √12 ;
x1 − x2
q = 3y21 − y22 ; 1, 2
8.8.6 b. det A = 15 − 24 = 1 + 4 = 5 6= 0 in Z7 , so A−1
−1 2 2 −1
exists. Since
5 =3 in Z7 , we have
3 −6 3 1 2 3 d. P = 13 2 −1 2 ;
−1
A =3 =3 .
= −1 2 2
3 5 3 5 2 1
2x1 + 2x2 − x3
y = 31 2x1 − x2 + 2x3 ;
8.8.7 b. We have 5 · 3 = 1 in Z7 so the reduction of the −x1 + 2x2 + 2x3
augmented matrix is:
q = 9y21 + 9y22 − 9y23; 2, 3
3 1 4 3 1 5 6 1 −2 1 2
→
4 3 1 1 4 3 1 1 f. P = 13 2 2 1 ;
1 5 6 1 1 −2 2
→
0 4 5 4 −2x1 + 2x2 + x3
y = 31 x1 + 2x2 − 2x3 ;
1 5 6 1
→ 2x1 + x2 + 2x3
0 1 3 1
q = 9y21 + 9y22; 2, 2
1 0 5 3 √ √
→ . −√2 3 1
0 1 3 1
h. P = √16 √2 √0 2 ;
Hence x = 3 + 2t, y = 1 + 4t, z = t; t in Z7 . 2 3 −1
√ √ √
−√2x1 + 2x2 + √2x3
8.8.9 b. (1 + t)−1 = 2 + t. y = √16 3x1 + 3x3 ;
x1 + 2x2 − x3
8.8.10 b. The minimum weight of C is 5, so it detects 4 q = 2y21 + y22 − y23 ; 2, 3
errors and corrects 2 errors.
8.9.3 b. x1 = √1 (2x − y), y1 = √1 (x + 2y); 4x21 − y21 = 2;
5 5
8.8.11 b. {00000, 01110, 10011, 11101}. hyperbola
d. x1 = √15 (x + 2y), y1 = √1 (2x − y);
5
6x21 + y21 = 1;
8.8.12 b. The code is ellipse
{0000000000, 1001111000, 0101100110,
0011010111, 1100011110, 1010101111, 8.9.4 b. Basis {(i, 0, i), (1, 0, −1)}, dimension 2
0110110001, 1111001001}. This has minimum
d. Basis {(1, 0, −2i), (0, 1, 1 − i)}, dimension 2
distance 5 and so corrects 2 errors.
√ √ √
8.9.7 b. 3y21 + 5y22 − y23 − 3 2y1 + 11 3 3y2 + 23 6y3 = 7
8.8.13 b. {00000, 10110, 01101, 11011} is a y1 = √12 (x2 + x3 ), y2 = √13 (x1 + x2 − x3 ),
(5, 2)-code of minimal weight 3, so it corrects single
y3 = √1 (2x1 − x2 + x3 )
errors. 6
Section 9.1
Section 8.9
a
1 0 9.1.1 b. 2b − c
8.9.1 b. A =
0 2 c−b
657
a−b d. MED (S)MDB (T )=
d. 12 a+b
1 −1 0
−a + 3b + 2c 1 −1 0
−1 0 1 =
0 0 1
0 1 0
9.1.2 b. Let v = a + bx + cx2. Then 2 −1 −1
= MEB (ST )
CD [T (v)] = MDB (T)CB (v) 0 1 0
=
a
2 1 3 2a + b + 3c
b = 9.1.7 b.
−1 0 −2 −a − 2c 1
c T −1 (a, b, c) =
2 (b + c − a, a + c − b, a + b − c);
Hence 0 1 1
MDB (T ) = 1 0 1 ;
T (v) = (2a + b + 3c)(1, 1) + (−a − 2c)(0, 1) 1 1 0
= (2a + b + 3c, a + b + c). −1 1 1
MBD (T −1 ) = 12 1 −1 1
1 1 −1
1 0 0 0
0 0 1 0 d. T −1 (a, b, c) 2
9.1.3 b. = (a − b) +(b − c)x + cx ;
0 1 0 0 1 1 1
0 0 0 1 MDB (T ) = 0 1 1 ;
0 0 1
1 1 1
1 −1 0
d. 0 1 2
MBD (T −1 ) = 0 1 −1
0 0 1
0 0 1
1 2 9.1.8 b. MDB (T −1 ) = [MBD (T )]−1 =
5 3 −1
9.1.4 b.
4
;
1 1 1 0 1 −1 0 0
0 0
1 1 0 =
0 1 −1 0
.
1 1 0
0 1 0 0 0 1 0
1 2 2a − b
5 3 3a + 2b 0 0 0 1 0 0 0 1
b
CD [T (a, b)] =
4 0 a−b =
Hence CB [T −1 (a, b, c, d)] =
4b
−1
1 1 a BD (T )CD (a, b, c,
M d) =
1 −1 0 0 a a−b
1 1 1 −1 0 1 −1 0 b b−c
d. 2 ; CD [T (a + bx + cx2)] = = , so
1 1 1 0 0 1 0 c c
a 0 0 0 1 d
1 1 1 −1 b = 1 a+b−c d
2 1 1 1 2 a+b+c a − b b−c
c T −1 (a, b, c, d) = .
c d
1 0 0 0
0 1 1 0 9.1.12 Have
f. ; CD T a b = CD [T (e j )] = column j of In . Hence
0 1 1 0 c d MDB (T ) = CD [T (e1 )] CD [T (e2 )] · · · CD [T (en )] = In .
0 0 0 1
1 0 0 0 a a
0 1 1 0 b b+c 9.1.16 b. If D is the standard basis of Rn+1 and
2
= {1, x, x , . . . , x }, then MDB (T ) =
n
0 1 1 0 c = b+c B
CD [T (1)] CD [T (x)] · · · CD [T (xn )] =
0 0 0 1 d d
1 a0 a20 · · · an0
1 a1 a2 · · · an
1 1
b. MED (S)MDB (T 1 a2 a2 · · · an
9.1.5 ) = 2 2 .
1 1 0 .. .. .. ..
. . . .
1 1 0 0 0 1 1
=
0 0 1 −1 1 0 1 1 an a2n · · · ann
−1 1 0 This matrix has nonzero determinant by
1 2 1 Theorem 3.2.7 (since the ai are distinct), so T is an
= MEB (ST ) isomorphism.
2 −1 1
658 Polynomials
9.1.20 d. [(S + T )R](v) = (S + T )(R(v)) = S[(R(v))] + 1 1 0
T [(R(v))] = SR(v) + T R(v) = [SR + TR](v) holds for 9.2.7 b. P = 0 1 2
all v in V . Hence (S + T )R = SR + TR. −1 0 1
9.3.14 The fact that U and W are subspaces is easily verified 10.1.2 Axioms P1–P5 hold in U because they hold in V .
using the subspace test. If A lies in U ∩V , then A = AE = 0;
that is, U ∩V = 0. To show that M22 = U + V , choose any A
10.1.3 b. √1 f
in M22 . Then A = AE + (A − AE), and AE lies in U [because π
(AE)E = AE 2 = AE], and A − AE lies in W [because
√1
3
(A − AE)E = AE − AE 2 = 0]. d. 17 −1
d. hv, vi = 3v21 + 8v1 v2 + 6v22 = 13 [(3v1 + 4v2)2 + 2v22] 10.2.2 b. {(1, 1, 1), (1, −5, 1), (3, 0, −2)}
10.2.3
b.
1 −2 1 1 1 −2 1 −2 1 0
10.1.13 b. , , ,
−2 1 0 1 3 1 −2 1 0 −1
1 0 −2
d. 0 2 0 10.2.4 b. {1, x − 1, x2 − 2x + 23 }
−2 0 5
10.2.6
b. U ⊥ =
1
span { 1 −1 0 0 , 0 0 1 0 , 0 0 0 1 },
10.1.14 By the condition, hx, yi = 2 hx + y, x + yi = 0 for all
dim U ⊥ = 3, dim U = 1
x, y. Let ei denote column i of I. If A = [ai j ], then
ai j = eTi Ae j = {ei , e j } = 0 for all i and j. d. U ⊥ = span {2 − 3x, 1 − 2x2}, dim U ⊥ = 2,
dim U = 1
1 −1
10.1.16 b. −15 f. U ⊥ = span , dim U ⊥ = 1,
−1 0
dim U = 3
10.1.20 1. Using P2:
hu, v + wi = hv + w, ui = hv, ui + hw, ui = hu, vi + hu, wi. 10.2.7 b.
2. Using P2 and P4: hv, rwi = hrw, vi = rhw, vi = rhv, wi. 1 0 1 1 0 1
U = span , , ;
0 1 1 −1 −1 0
3. Using P3: h0, vi = h0 + 0, vi = h0, vi + h0, vi, so 3 0
h0, vi = 0. The rest is P2. projU A =
2 1
4. Assume that hv, vi = 0. If v 6= 0 this contradicts P5, so
v = 0. Conversely, if v = 0, then hv, vi = 0 by Part 3 of this 3
10.2.8 b. U = span {1, 5 − 3x2}; projU x = 2
13 (1 + 2x )
theorem.
10.3.4 b. hv, (rT )wi = hv, rT (w)i = rhv, T (w)i = d. cT (x) = (x + 1)(x + 1)2. Rotation (of π ) about the x
rhT (v), wi = hrT (v), wi = h(rT )(v), wi axis.
√
d. Given v and w, write T −1 (v) = v1 and T −1 (w) = w1 . f. cT (x) = (x + 1)(x2 − 2x + 1). Rotation (of − π4 )
Then hT −1 (v), wi = hv1 , T (w1 )i = hT (v1 ), w1 i = about the y axis followed by a reflection in the x − z
hv, T −1 (w)i. plane.
10.3.5 b. If B0 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, then 10.4.6 If kvk = k(aT )(v)k = |a|kT (v)k = |a|kvk for some
7 −1 0
v 6= 0, then |a| = 1 so a = ±1.
MB0 (T ) = −1 7 0 has an orthonormal basis
0 0 2
1 1 0 10.4.12 b. Assume that S = Su ◦ T , u ∈ V , T an isometry
of eigenvectors √12 1 , √12 −1 , 0 . of V . Since T is onto (by Theorem 10.4.2), let
u = T (w) where w ∈ V . Then for any v ∈ V , we have
0 0 1
Hence
n an orthonormal basis of eigenvectors
o of T is (T ◦ Sw ) = T (w + v) = T (w) + T (w) = ST (w) (T (v)) =
√1 (1, 1, 0), √1 (1, −1, 0), (0, 0, 1) . (ST (w) ◦ T )(v), and it follows that T ◦ Sw = ST (w) ◦ T .
2 2
−1 0 1
d. If B0 = {1, x, x2 }, then MB0 (T ) = 0 3 0 Section 10.5
1 0 −1 h i
π 4 cos 3x cos5x
has 10.5.1 b. − cos x + +
an orthonormal
basis ofeigenvectors
2 π 3 2 5 2
0 1 1
1 , √1 0 , √1 0 . d. π4 + sin x − sin22x + sin33x − sin44x + sin55x
2 2 h i
0 1 −1 − π2 cos x + cos323x + cos525x
Hence
n an orthonormal basisoof eigenvectors of T is
x, √2 (1 + x2), √12 (1 − x2) .
1
h i
10.5.2 b. π2 − π8 cos2x
2
2 −1
+ cos 4x
2
4 −1
+ cos 6x
2
6 −1
A 0
10.3.7 b. MB (T ) = , so
0 A R
10.5.4
h cos kx cos lx dxi
xI2 − A 0 π
cT (x) = det = [cA (x)]2 . =2 1 sin[(k+l)x]
− sin[(k−l)x] = 0 provided that k 6= l.
0 xI2 − A k+l k−l 0
662 Polynomials
11
Section 11.1 A.3 b. 5 + 35 i
3 d. ±(2 − i)
11.1.1 b.
cA (x) = (x +1) ;
1 0 0 f. 1 + i
P= 1 1 0 ;
√
1 −3 1 A.4 b. 1
± 3
2 2 i
−1 0 1
1
P−1 AP = 0 −1 0 d. 2, 2
0 0 −1
√
d. cA (x)= (x − 1)2 (x + 2); A.5 b. −2, 1 ± 3i
√ √
−1 0 −1 d. ±2 2, ±2 i
P= 4 1 1 ;
4 2 1 A.6 b. x2 − 4x + 13; 2 + 3i
1 1 0
P−1 AP = 0 1 0 d. x2 − 6x + 25; 3 + 4i
0 0 −2
A.8 x4 − 10x3 + 42x2 − 82x + 65
f. cA (x)= (x + 1)2 (x − 1)
2;
1 1 5 1
0 0 2 −1 A.10 b. (−2)2 + 2i − (4 − 2i) = 0; 2 − i
P= 0 1 2
;
0 d. (−2 + i)2 + 3(1 − i)(−1 + 2i) − 5i = 0; −1 + 2i
1 0 1 1
−1 1 0 0 A.11 b. −i, 1 + i
0 −1 1 0
P−1 AP = 0
d. 2 − i, 1 − 2i
0 1 −2
0 0 0 1
A.12 b. Circle, centre at 1, radius 2
11.1.4 If B is any ordered basis of V , write A = MB (T ). Then d. Imaginary axis
cT (x) = cA (x) = a0 + a1 x + · · · + an xn for scalars ai in R.
f. Line y = mx
Since MB is linear and MB (T k ) = MB (T )k , we have
n
MB [cT (T )] = MV [a0 + a1T + · · · + an T ] =
a0 I + a1A + · · · + an An = cA (A) = 0 by the Cayley-Hamilton A.18 b. 4e−π i/2
theorem. Hence cT (T ) = 0 because MB is one-to-one. d. 8e2π i/3
√
f. 6 2e3π i/4
Section 11.2
√
a 1 0 0 1 0 A.19 b. 12 + 23 i
11.2.2 0 a 0 0 0 1 d. 1 − i
0 0 b 1 0 0 √
f. 3 − 3i
0 1 0 a 1 0
= 0 0 1 0 a 1 √
1
1 0 0 0 0 a A.20 b. − 32 + 323 i
d. −32i
Appendix A
f. −216 (1 + i)
A.1 b. x = 3 √ √ √ √
d. x = ±1 A.23 b. ± 22 ( 3 + i), ± 22 (−1 + 3i)
√ √
d. ±2i, ±( 3 + i), ±( 3 − i)
A.2 b. 10 + i
2π
d. 11
+ 23 A.26 b. The argument in (a) applies using β = n . Then
26 26 i n
1 + z + · · · + zn−1 = 1−z
1−z = 0.
f. 2 − 11i
h. 8 − 6i Appendix B
663
B.1 b. If m = 2p and n = 2q + 1 where p and q are B.4 b. If x is irrational and y is rational, assume that
integers, then m + n = 2(p + q) + 1 is odd. The x + y is rational. Then x = (x + y) − y is the difference
converse is false: m = 1 and n = 2 is a of two rationals, and so is rational, contrary to the
counterexample. hypothesis.
d. x2 − 5x + 6 = (x − 2)(x − 3) so, if this is zero, then
x = 2 or x = 3. The converse is true: each of 2 and 3 B.5 b. n = 10 is a counterexample because 103 = 1000
satisfies x2 − 5x + 6 = 0. while 210 = 1024, so the statement n3 ≥ 2n is false if
n = 10. Note that n3 ≥ 2n does hold for 2 ≤ n ≤ 9.
B.2 b. This implication is true. If n = 2t + 1 where t is
an integer, then n2 = 4t 2 + 4t + 1 = 4t(t + 1) + 1. Now
t is either even or odd, say t = 2m or t = 2m + 1. If Appendix C
2
t = 2m, then n = 8m(2m + 1) + 1; if t = 2m + 1, then n 1 n(n+2)+1 (n+1)2 n+1
n2 = 8(2m + 1)(m + 1) + 1. Either way, n2 has the C.6 n+1 + (n+1)(n+2) = (n+1)(n+2) = (n+1)(n+2) = n+2
2
form n = 8k + 1 for some integer k.
C.14 √
B.3 b. Assume that the statement “one of m and n is √ 2 +n+1 √
1
2 n − 1 + √n+1 = 2 √nn+1 −1 < 2(n+1)
√ −1 = 2 n + 1−1
greater than 12” is false. Then both n ≤ 12 and n+1
m ≤ 12, so n + m ≤ 24, contradicting the hypothesis
that n + m = 25. This proves the implication. The C.18 If n3 − n = 3k, then
converse is false: n = 13 and m = 13 is a (n + 1)3 − (n + 1) = 3k + 3n2 + 3n = 3(k + n2 + n)
counterexample.
d. Assume that the statement “m is even or n is even” is
C.20 Bn = (n + 1)! − 1
false. Then both m and n are odd, so mn is odd,
contradicting the hypothesis. The converse is true: If
m or n is even, then mn is even. C.22 b. Verify each of S1 , S2 , . . . , S8 .
Index
(i, j)-entry, 35 Interpolation and Approxi- analytic geometry, 47 of subspace, 276
3-dimensional space, 58 mation (Davis), 553 angles ordered basis, 503, 505
A-invariance, 176 Introduction to Matrix Com- angle between two vec- orthogonal basis, 287,
B-matrix, 513 putations (Stewart), tors, 228, 546 416, 549
T -invariant, 523 444 radian measure, 60, 111, orthonormal basis, 550,
m × n matrix Raum-Zeit-Materie (“Space- 601 559
canonical forms, 591 Time-Matter”)(Weyl), standard position, 110, standard basis, 106, 272,
defined, 35 329 601 277, 278, 349, 461
difference, 38 The Algebraic Eigenvalue unit circle, 110, 601 vector spaces, 349
elementary row operation, Problem (Wilkinson), approximation theorem, 552, Bessel’s inequality, 556
96 444 578 best approximation, 310
main diagonal, 43 “Linear Programming and Archimedes, 11 best approximation theorem,
matrix transformation, Extensions” (Wu and area 311
503 Coppins), 500 linear transformations of, bilinear form, 497
negative, 38 “if and only if”, 36 255 binary codes, 477
subspaces, 265 “mixed” cancellation, 85 parallelogram Binet formula, 196
transpose, 42 3-dimensional space, 209 equal to zero, 246 binomial coefficients, 367
zero matrix, 38 argument, 601 binomial theorem, 367
n-parity-check code, 479 absolute value arrows, 209 block matrix, 154
n-tuples, 263, 290, 330 complex number, 598, associated homogeneous block multiplication, 74
n-vectors, 47 599 system, 53 block triangular form, 583
n-words, 477 notation, 110 associative law, 38, 71 block triangular matrix, 524
nth roots of unity, 605 real number, 210 attractor, 187 block triangulation theorem,
r-ball, 478 symmetric matrices, 307 augmented matrix, 3, 4, 14 584
x-axis, 209 triangle inequality, 543 auxiliary theorem, 96 blocks, 73
x-compression, 61 abstract vector space, 329 axiomatic method, 615 boundary condition, 199,
x-expansion, 61 action axioms, 611 369
x-shear, 61 same action, 59, 333, 378 axis, 209, 572
y-axis, 209 transformations, 59, 503, cancellation, 333, 334
y-compression, 61 505 back substitution, 14, 118 cancellation laws, 85
y-expansion, 61 addition balanced reaction, 32 canonical forms
z-axis, 209 closed under, 330 ball, 478 m × n matrix, 591
Disquisitiones Arithmeticae closed under addition, 47, Banach, Stephan, 329 block triangular form, 583
(Gauss), 11 263 bases, 276 Jordan canonical form,
How to Read and Do Proofs complex number, 597 basic eigenvectors, 175 591
(Solow), 611 matrix addition, 37 basic solutions, 24, 454 Cartesian coordinates, 209
Introduction to Abstract Al- pointwise addition, 332 basis cartesian geometry, 209
gebra (Nicholson), 475 transformations choice of basis, 503, 508 category, 397
Introduction to Abstract preserving addition, 104 dual basis, 512 Cauchy inequality, 284, 325
Mathematics (Lucas), vector addition, 330, 601 enlarging subset to, 278 Cauchy, Augustin Louis,
611 adjacency matrix, 75 geometric problem of 158, 307
Introduction to the Theory adjugate, 81, 160 finding, 516, 517, 583 Cauchy-Schwarz inequality,
of Error-Correcting adjugate formula, 162 independent set, 415 243
Codes (Pless), 484 adult survival rate, 171 isomorphisms, 393 Cayley, Arthur, 35, 145
Mécanique Analytique (La- aerodynamics, 499 linear operators Cayley-Hamilton theorem,
grange), 245 algebraic method, 4, 9 and choice of basis, 518 588
Calcolo Geometrico algebraic multiplicity, 304 matrix of T corresponding centred, 322
(Peano), 329 algebraic sum, 30 to the ordered bases B centroid, 225
Elements (Euclid), 612 altitude, 262 and D, 505 change matrix, 513
665
666 INDEX
channel, 477 332, 623 Schur’s theorem, 467, 468 conic graph, 21, 491
characteristic polynomial linear combination, 266, skew-hermitian, 471 conjugate, 461, 599
block triangular matrix, 271, 341 spectral theorem, 468 conjugate matrix, 307
525 of the polynomial, 319, standard inner product, conjugate transpose, 463
complex matrix, 465 332, 623 462 consistent system, 1, 17
diagonalizable matrix, of vectors, 341 unitarily diagonalizable, constant, 353, 623
300 sample correlation coeffi- 467 constant matrix, 4
eigenvalues, 174 cient, 324 unitary diagonalization, constant sequences, 406
root of, 174, 368, 369 cofactor, 146 467 constant term, 1
similarity invariant, 518, cofactor expansion, 147, 204 unitary matrix, 466 constrained optimization,
519 cofactor expansion theorem, upper triangular matrix, 497
square matrix, 173, 469 148, 204 467 continuous functions, 538
chemical reaction, 32 cofactor matrix, 160 complex number contraction, 63
choice of basis, 503, 508 column matrix, 35, 47 absolute value, 403, 598, contradiction, proof by, 613
Cholesky factorization, 435 column space, 290, 445 599 convergence, 443
Cholesky, Andre-Louis, 435 column vectors, 172 addition, 597 converges, 139
circuit rule, 30 columns advantage of working converse, 614
classical adjoint, 161 (i, j)-entry, 35 with, 465 coordinate isomorphism,
closed economy, 130 as notations for ordered n- conjugate, 599 395
closed under addition, 47, tuples, 275 equal, 597 coordinate transformation,
263, 330 convention, 36 extension of concepts to, 504
closed under scalar multipli- elementary column opera- 461 coordinate vectors, 215, 236,
cation, 47, 263, 330 tions, 149 form, 597 256, 268, 504
code equal, 21 fundamental theorem of coordinates, 209, 503
(n, k)-code, 479 leading column, 120 algebra, 597 correlation, 322
n-code, 477 shape of matrix, 35 imaginary axis, 600 correlation coefficient
binary codes, 477 Smith normal form, 99 imaginary part, 597 computation
decoding, 483 transpose, 42 imaginary unit, 597 with dot product, 324
defined, 477 commutative law, 38 in complex plane, 600 Pearson correlation coeffi-
error-correcting codes, commute, 69, 72 inverse, 598 cient, 324
472, 477 companion matrix, 157 modulus, 599 sample correlation coeffi-
Hamming (7,4)-code, 485 compatibility rule, 68 multiplication, 597 cient, 324
linear codes, 479 parallelogram law, 601 correlation formula, 326
compatible
polar form, 601 coset, 483
matrix generators, 481 blocks, 74
cosine, 111, 227, 556, 601
minimum distance, 478 for multiplication, 68 product, 600
counterexample, 9, 614
nearest neighbour decod- complement, 528 pure imaginary numbers,
covariance, 501
ing, 478 completely diagonalized, 597
covariance matrix, 501
orthogonal codes, 484 494 real axis, 600
Cramer’s Rule, 158
parity-check code, 479 complex conjugation, 307 real part, 597
Cramer, Gabriel, 164
parity-check matrices, complex distance formula, regular representation,
cross product
482 600 600
and dot product, 236, 245
perfect, 479 complex eigenvalues, 306, root of the quadratic, 609
coordinate vectors, 236
syndrome decoding, 483 444 roots of unity, 604
coordinate-free descrip-
use of, 472 complex matrix scalars, 474
tion, 247
code words, 477, 478, 481, Cayley-Hamilton theo- subtraction, 597
defined, 236, 244
485 rem, 469 sum, 601
determinant form, 236,
coding theory, 477 characteristic polynomial, triangle inequality, 599
244
coefficient matrix, 4, 166 465 complex plane, 600 Lagrange Identity, 245
coefficients conjugate, 461 complex subspace, 470 properties of, 245
binomial coefficients, 367 conjugate transpose, 463 composite, 65, 108, 396 right-hand rule, 248
constant coefficient, 623 defined, 461 composition, 65, 396, 507 shortest distance between
Fourier coefficients, 288, eigenvalues, 465 computer graphics, 258 nonparallel lines, 238
548, 578, 579 eigenvector, 465 conclusion, 611 cryptography, 472
in linear equation, 1 hermitian matrix, 464 congruence, 493
leading coefficient, 166, normal, 469 congruent matrices, 493 data scaling, 326
INDEX 667
Davis, Philip J., 553 example, 171 distributive laws, 41, 71 root of the characteristic
De Moivre’s Theorem, 604 general differential sys- division algorithm, 473, 625 polynomial, 174
De Moivre, Abraham, 604 tems, 199 dominant eigenvalue, 185, solving for, 174
decoding, 483 linear dynamical systems, 441 spectrum of the matrix,
defined, 68 182 dominant eigenvector, 441 426
defining transformation, 59 matrix, 173 dot product symmetric linear operator
degree of the polynomial, multivariate analysis, 501 and cross product, 236, on finite dimensional
332, 623 orthogonal diagonaliza- 245 inner product space,
demand matrix, 131 tion, 424, 557 and matrix multiplication, 560
dependent, 273, 345, 358 quadratic form, 487 67 eigenvector
dependent lemma, 358 test, 302 as inner product, 537 basic eigenvectors, 175
derivative, 403 unitary diagonalization, basic properties, 226 complex matrix, 465
Descartes, René, 209 467 correlation coefficients defined, 173, 300
determinants diagonalization algorithm, computation of, 324 dominant eigenvector, 441
3 × 3, 146 181 defined, 226 fractions, 175
n × n, 147 diagonalization theorem, dot product rule, 55, 67 linear combination, 442
adjugate, 81, 160 489 in set of all ordered n- linear operator, 526
and eigenvalues, 145 diagonalizing matrix, 178 tuples (Rn ), 282, 462 nonzero linear combina-
and inverses, 81, 145 difference inner product space, 540 tion, 175
block matrix, 154 m × n matrices, 38 length, 541 nonzero multiple, 175
coefficient matrix, 166 of two vectors, 213, 334 of two ordered n-tuples, nonzero vectors, 265, 303
cofactor expansion, 146, differentiable function, 198, 54 orthogonal basis, 424
147, 204 340, 368, 402, 403 of two vectors, 226 orthogonal eigenvectors,
Cramer’s Rule, 158 differential equation of order variances 427, 465
cross product, 236, 244 n, 368, 403 computation of, 326 orthonormal basis, 561
defined, 81, 145, 152, 204, differential equations, 198, doubly stochastic matrix, principal axes, 429
518 368, 402 142 electrical networks, 29
inductive method of deter- differential system, 199 dual, 512 elementary matrix
mination, 146 defined, 198 dual basis, 512 and inverses, 96
initial development of, exponential function, 198 economic models defined, 95
256 general differential sys- input-output, 128 LU-factorization, 119
notation, 145 tems, 199 economic system, 128 operating corresponding
polynomial interpolation, general solution, 200 edges, 75 to, 95
165 simplest differential sys- eigenspace, 265, 303, 526, permutation matrix, 123
product of matrices (prod- tem, 198 584 self-inverse, 97
uct theorem), 158, 168 differentiation, 377 eigenvalues Smith normal form, 99
similarity invariant, 518 digits, 477 and determinants, 145 uniqueness of reduced
square matrices, 145, 159 dilation, 63 and diagonalizable matri- row-echelon form, 100
theory of determinants, 35 dimension, 276, 349, 403 ces, 179 elementary operations, 5
triangular matrix, 154 dimension theorem, 375, and eigenspace, 265, 303 elementary row operations
Vandermonde determi- 386, 395 and Google PageRank, corresponding, 95
nant, 167 direct proof, 611 189 inverses, 7, 96
Vandermonde matrix, 153 direct sum, 360, 528, 536 complex eigenvalues, 177, matrices, 5
deviation, 322 directed graphs, 75 306, 444 reversed, 7
diagonal matrices, 45, 79, direction, 211 complex matrix, 465 scalar product, 21
172, 178, 300, 516 direction cosines, 243, 556 computation of, 441 sum, 21
diagonalizable linear opera- direction vector, 218 defined, 173, 300 elements of the set, 263
tor, 557 discriminant, 491, 606 dominant eigenvalue, 185, ellipse, 491
diagonalizable matrix, 178, distance, 285, 541 441 entries of the matrix, 35
300, 516 distance function, 397 iterative methods, 441 equal
diagonalization distance preserving, 251, linear operator, 526 columns, 21
completely diagonalized, 564 multiple eigenvalues, 302 complex number, 597
494 distance preserving isome- multiplicity, 180, 304 fractions, 212
described, 178, 300 tries, 564 power method, 441 functions, 333
eigenvalues, 145, 173, 300 distribution, 500 real eigenvalues, 307 linear transformations,
668 INDEX
commute, 69, 72 multiplicity, 180, 304, 410, orthogonal matrix, 160, 424 paired samples, 324
compatibility rule, 68 625 orthogonal projection, 420, parabola, 491
definition, 66 multivariate analysis, 501 552 parallel, 217
directed graphs, 75 orthogonal set of vectors, parallelepiped, 247, 255
distributive laws, 71 nearest neighbour decoding, 466, 547 parallelogram
dot product rule, 67 478 orthogonal sets, 285, 415 area equal to zero, 246
left-multiplication, 87 negative orthogonal vectors, 229, defined, 110, 212
matrix of composite of correlation, 324 285, 466, 547 determined by geometric
two linear transforma- of m × n matrix, 38 orthogonality vectors, 212
tions, 108 vector, 47, 330 complex matrices, 461 image, 255
matrix products, 66 negative x, 47 constrained optimization, law, 110, 212, 601
non-commutative, 87 negative x-shear, 61 497 rhombus, 230
order of the factors, 71 network flow, 27 dot product, 282 parameters, 2, 14
results of, 67 Newton, Sir Isaac, 11 eigenvalues, computation parametric equations of a
right-multiplication, 87 Nicholson, W. Keith, 475 of, 441 line, 219
matrix of T corresponding to nilpotent, 190, 592 expansion theorem, 548 parametric form, 2
the ordered bases B and noise, 472 finite fields, 474 parity digits, 482
D, 505 nonleading variable, 20 Fourier expansion, 288 parity-check code, 479
matrix of a linear transfor- nonlinear recurrences, 196 Gram-Schmidt orthogo- parity-check matrices, 482
mation, 503 nontrivial solution, 20, 173 nalization algorithm, Parseval’s formula, 556
matrix recurrence, 172 nonzero scalar multiple of a 417, 426, 428, 438, particle physics, 499
matrix theory, 35 basic solution, 24 520, 549 partitioned into blocks, 73
nonzero vectors, 217, 284 normalizing the orthogo-
matrix transformation in- path of length, 75
norm, 463, 541 nal set, 286
duced, 59, 104, 250 Peano, Guiseppe, 329
normal, 233, 469 orthogonal codes, 484
matrix transformations, 90 Pearson correlation coeffi-
normal equations, 311 orthogonal complement,
matrix-vector products, 49 cient, 324
normalizing the orthogonal 418, 484
mean perfect code, 479
set, 286, 547 orthogonal diagonaliza-
”average” of the sample period, 371
null space, 264 tion, 424
values, 322 permutation matrix, 123,
nullity, 383 orthogonal projection,
calculation, 500 432, 521
nullspace, 382 420
sample mean, 322 perpendicular lines, 226
median orthogonal sets, 285, 415 physical dynamics, 469
objective function, 497, 499 orthogonally similar, 431
tetrahedron, 225 objects, 397 pigeonhole principle, 614
positive definite matrix,
triangle, 225 odd function, 579 Pisano, Leonardo, 195
433
messages, 481 odd polynomial, 354 planes, 233, 264
principal axes theorem,
metric, 397 Ohm’s Law, 30 Pless, V., 484
425
midpoint, 216 one-to-one transformations, PLU-factorization, 124
projection theorem, 311
migration matrix, 184 384 point-slope formula, 221
Pythagoras’ theorem, 286
minimum distance, 478 onto transformations, 384 pointwise addition, 332, 333
QR-algorithm, 443
modular arithmetic, 473 open model of the economy, polar decomposition, 455
QR-factorization, 437
modulo, 473 131 polar form, 601
quadratic forms, 429, 487
modulus, 473, 599 open sector, 131 polynomials
real spectral theorem, 426
Moore-Penrose inverse, 322, ordered n-tuple, 47, 275 as matrix entries and de-
statistical principal com-
457 ordered basis, 503, 505 terminants, 152
ponent analysis, 500
morphisms, 397 origin, 209 associated with the linear
triangulation theorem,
multiplication orthocentre, 262 430 recurrence, 408
block multiplication, 73 orthogonal basis, 416, 549 orthogonally diagonalizable, coefficients, 319, 332, 623
compatible, 68, 74 orthogonal codes, 484 425 companion matrix, 157
matrix multiplication, 64 orthogonal complement, orthogonally similar, 431 complex roots, 177, 465,
matrix-vector multiplica- 418, 484, 551 orthonormal basis, 550, 559 626
tion, 48 orthogonal diagonalization, orthonormal matrix, 424 constant, 623
matrix-vector products, 49 424, 557 orthonormal set, 466, 547 defined, 331, 623
scalar multiplication, 39, orthogonal hermitian matrix, orthonormal vector, 285 degree of the polynomial,
330, 332 465 332, 623
multiplication rule, 603 orthogonal lemma, 415, 549 PageRank, 189 distinct degrees, 347
672 INDEX
division algorithm, 625 determinant of product of Rayleigh quotients, 442 and orthogonal matrices,
equal, 332, 623 matrices, 158 real axis, 600 160
evaluation, 182, 340 dot product, 226 real Jordan canonical form, axis, 572
even, 354 matrix products, 66 593 describing rotations, 111
factor theorem, 625 matrix-vector products, 49 real numbers, 1, 47, 330, fixed axis, 575
form, 623 scalar product, 226 332, 461, 465, 474 isometries, 568
indeterminate, 623 standard inner product, real parts, 403, 597 linear operators, 254
interpolating the polyno- 462 real quadratic, 606 linear transformations,
mial, 165 theorem, 158, 168 real spectral theorem, 426 111
Lagrange polynomials, product rule, 404 recurrence, 193 round-off error, 175
365, 548 projection recursive algorithm, 11 row matrix, 35
leading coefficient, 332, linear operator, 564 recursive sequence, 193 row space, 290
623 linear operators, 251 reduced row-echelon form, row-echelon form, 10, 11
least squares approximat- orthogonal projection, 10, 11, 100 row-echelon matrix, 10, 11
ing polynomial, 317 420, 552 reduced row-echelon matrix, row-equivalent matrices, 103
Legendre polynomials, projection matrix, 63, 423, 10, 11 rows
550 432 reducible, 531 (i, j)-entry, 35
nonconstant polynomial projection on U with kernel reduction to cases, 613 as notations for ordered n-
with complex coeffi- W , 551 reflections tuples, 275
cients, 306 projection theorem, 311, about a line through the convention, 36
odd, 354 420, 552 origin, 160 elementary row opera-
remainder theorem, 624 projections, 114, 230, 418 fixed hyperplane, 575 tions, 5
root, 174, 441, 597 proof fixed line, 569 leading 1, 10
root of characteristic poly- by contradiction, 613 fixed plane, 572 shape of matrix, 35
nomial, 174, 368 defined, 611 isometries, 568 Smith normal form, 99
Taylor’s theorem, 364 direct proof, 611 linear operators, 251 zero rows, 10
vector spaces, 331, 363 formal proofs, 612 linear transformations,
reduction to cases, 613 saddle point, 188
with no root, 625, 626 113 same action, 59, 333, 378
zero polynomial, 623 proper subspace, 263, 279 regular representation, 522
pseudoinverse, 457 sample
position vector, 213 regular stochastic matrix, analysis of, 322
positive x-shear, 61 pure imaginary numbers, 139
597 comparison of two sam-
positive correlation, 323 remainder, 473 ples, 323
positive definite, 433, 497 Pythagoras, 221, 612 remainder theorem, 363, 624
Pythagoras’ theorem, 221, defined, 322
positive definite matrix, 433, repellor, 187 paired samples, 324
539 228, 286, 547, 612
reproduction rate, 171 sample correlation coeffi-
positive matrix, 455 restriction, 524 cient, 324
QR-algorithm, 443
positive semi-definite ma- reversed, 7 sample mean, 322
QR-factorization, 438
trix, 455 quadratic equation, 497 rhombus, 230 sample standard deviation,
positive semidefinite, 501 quadratic form, 429, 487, right cancelled invertible 323
power method, 441, 442 541 matrix, 85 sample variance, 323
power sequences, 406 quadratic formula, 606 right-hand coordinate sys- sample vector, 322
practical problems, 1 quotient, 473 tems, 247 satisfy the relation, 406
preimage, 381 right-hand rule, 248 scalar, 39, 330, 474
prime, 474, 614 radian measure, 111, 601 root scalar equation of a plane,
principal argument, 601 random variable, 500 of characteristic polyno- 233
principal axes, 426, 489 range, 382 mial, 174, 368, 369 scalar matrix, 143
principal axes theorem, 425, rank of polynomials, 340, 441, scalar multiple law, 110,
465, 468, 498, 561 linear transformation, 597 215, 217
principal components, 501 383, 508 of the quadratic, 606 scalar multiples, 21, 39, 110
principal submatrices, 434 matrix, 16, 291, 383 roots of unity, 604 scalar multiplication
probabilities, 134 quadratic form, 494 rotation, 575 axioms, 330
probability law, 500 similarity invariant, 518 rotations basic properties, 334
probability theory, 501 symmetric matrix, 494 about a line through the closed under, 47, 330
product theorem, 291 origin, 519 closed under scalar multi-
complex number, 600 rational numbers, 330 about the origin plication, 263
INDEX 673
symmetric matrix third-order differential equa- Vandermonde determinant, subspaces, 338, 360
absolute value, 307 tion, 368, 402 167 theory of vector spaces,
congruence, 493 time, functions of, 172 Vandermonde matrix, 153 333
defined, 43 tip, 212 variance, 322, 501 zero vector space, 336
index, 494 tip-to-tail rule, 213 variance formula, 326 vectors
orthogonal eigenvectors, total variance, 502 vector addition, 330, 601 addition, 330
425 trace, 79, 299, 376, 518 vector equation of a line, 218 arrow representation, 58
positive definite, 433 trajectory, 187 vector equation of a plane, column vectors, 172
rank and index, 494 transformations 234 complex matrices, 461
real eigenvalues, 307 action, 59, 503 vector geometry coordinate vectors, 215,
syndrome, 483 composite, 65 angle between two vec- 236, 256, 268, 504
syndrome decoding, 483 defining, 59 tors, 228 defined, 47, 330, 461
system of linear equations described, 59 computer graphics, 258 difference of, 334
algebraic method, 4 equal, 59 cross product, 236 direction of, 211
associated homogeneous identity transformation, defined, 209 direction vector, 218
system, 53 60 direction vector, 218 fixed vectors, 576
augmented matrix, 3 matrix transformation, 59 line perpendicular to initial state vector, 137
chemical reactions zero transformation, 60 plane, 226 intrinsic descriptions, 211
application to, 32 transition matrix, 135, 136 linear operators, 250 length, 212, 282, 463, 541
coefficient matrix, 4 transition probabilities, 134, lines in space, 218 matrix recurrence, 172
consistent system, 1, 16 136 planes, 233 matrix-vector multiplica-
constant matrix, 4 translation, 61, 380, 564 projections, 231 tion, 48
defined, 1 transpose of a matrix, 42 symmetric form, 225 matrix-vector products, 49
electrical networks transposition, 42, 376 vector equation of a line, negative, 47
application to, 29 triangle 218 nonzero, 217
elementary operations, 5 altitude, 262 vector product, 236 orthogonal vectors, 229,
equivalent systems, 4 centroid, 225 vector quantities, 211 285, 466, 547
gaussian elimination, 9, hypotenuse, 612 vector spaces orthonormal vector, 285,
14 inequality, 244, 284, 599 3-dimensional space, 209 466
general solution, 2 median, 225 abstract, 329 position vector, 213
homogeneous equations, orthocentre, 262 as category, 397 sample vector, 322
20 triangle inequality, 244, 543, axioms, 330, 333, 335 scalar multiplication, 330
inconsistent system, 1 599 basic properties, 329 single vector equation, 48
infinitely many solutions, triangular matrices, 118, 154 state vector, 135
basis, 349
3 steady-state vector, 140
triangulation algorithm, 586 cancellation, 333
inverses and, 82 subtracted, 334
triangulation theorem, 430 continuous functions, 538
matrix multiplication, 70 sum of two vectors, 330
trigonometric functions, 111 defined, 330
network flow application, unit vector, 215, 282, 463
trivial linear combinations, differential equations, 368
27 zero n-vector, 47
271, 345 dimension, 349
no solution, 3 zero vector, 263, 330
trivial solution, 20 direct sum, 528
nontrivial solution, 20 velocity, 211
examples, 329
normal equations, 311 vertices, 75
uncorrelated, 501 finite dimensional spaces,
positive integers, 32 vibrations, 499
unit ball, 499, 541 354
rank of a matrix, 16 volume
unit circle, 110, 542, 601 infinite dimensional, 355
solutions, 1 linear transformations of,
unit cube, 256 introduction of concept,
trivial solution, 20 255
unique solution, 3 unit square, 256 329
of parallelepiped, 247,
with m × n coefficient ma- unit triangular, 128 isomorphic, 392
255
trix, 49 unit vector, 215, 282, 463, linear independence, 345
systematic generator, 482 541 linear recurrences, 405 Weyl, Hermann, 329
unitarily diagonalizable, 467 linear transformations, whole number, 611
tail, 212 unitary diagonalization, 467 375 Wilf, Herbert S., 189
Taylor’s theorem, 364, 515 unitary matrix, 466 polynomials, 331, 363 words, 477
tetrahedron, 225 upper Hessenberg form, 444 scalar multiplication wronskian, 373
theorems, 615 upper triangular matrix, 118, basic properties of, 334
theory of Hilbert spaces, 417 154, 467 spanning sets, 341 zero n-vector, 47
INDEX 675
zero matrix scalar multiplication, 40 zero subspace, 263, 339 zero vector, 263, 330
described, 38 zero polynomial, 332 zero transformation, 60, 117, zero vector space, 336
no inverse, 81 zero rows, 10 376
ADAPTED FORMATIVE ONLINE COURSE COURSE LOGISTICS
OPEN TEXT ASSESSMENT SUPPLEMENTS & SUPPORT
a d v a n c i n g l e a r n i n g
LYRYX.COM