Elementary Linear Algebra With Applications 9th Edition by Kolman Hill ISBN Solution Manual
Chapter 2
4. (a) 3r3 + r1 → r1     ⎡1 0 0  8⎤
       −r3 + r2 → r2     ⎢0 1 0 −1⎥
                         ⎣0 0 1  2⎦

   (b) −3r2 + r1 → r1    ⎡1 0 0 −1 4⎤
                         ⎢0 1 0  1 0⎥
                         ⎢0 0 1 −1 0⎥
                         ⎣0 0 0  0 0⎦
6. (a) I, via the row operations −r1 → r1, −2r1 + r2 → r2, −2r1 + r3 → r3,
       r2 + r3 → r3, −(1/3)r3 → r3, (4/3)r3 + r2 → r2, −5r3 + r1 → r1,
       2r2 + r1 → r1.

   (b) Via the row operations −3r1 + r2 → r2, −5r1 + r3 → r3, (1/2)r2 → r2,
       2r1 + r4 → r4:
       ⎡1 0 −3⎤
       ⎢0 1  2⎥
       ⎣0 0  0⎦

8. (a) RREF. (b) REF. (c) N.
9. Consider the columns of A which contain leading entries of nonzero rows of A. If this set of columns is
the entire set of n columns, then A = In . Otherwise there are fewer than n leading entries, and hence
fewer than n nonzero rows of A, so A has at least one row of zeros.
10. (a) A is row equivalent to itself: the sequence of operations is the empty sequence.
(b) Each elementary row operation of types I, II or III has a corresponding inverse operation of the
same type which “undoes” the effect of the original operation. For example, the inverse of the
operation “add d times row r of A to row s of A” is “subtract d times row r of A from row s of
A.” Since B is assumed row equivalent to A, there is a sequence of elementary row operations
which gets from A to B. Take those operations in the reverse order, and for each operation do its
inverse, and that takes B to A. Thus A is row equivalent to B.
(c) Follow the operations which take A to B with those which take B to C.
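The inverse-operation argument in (b) can be checked numerically; a minimal NumPy sketch (the matrix, multiplier, and rows are arbitrary choices, not data from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Type III operation: add d times row r to row s.
B = A.copy()
d, r, s = 5.0, 0, 1
B[s] += d * B[r]

# Inverse operation of the same type: subtract d times row r from row s.
B[s] -= d * B[r]

assert np.allclose(B, A)  # the inverse operation undoes the original
```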
10. x = ⎡r⎤ , where r ≠ 0.
        ⎣0⎦
12. x = ⎡−(1/4)r⎤ , where r ≠ 0.
        ⎢ (1/4)r⎥
        ⎣    r  ⎦
18. First proof: Show that the linear system has only the trivial solution if and only if A is row equivalent
to I2 . Now show that this occurs if and only if ad − bc ≠ 0. If ad − bc ≠ 0 then at least one of a or c is ≠ 0, and it is a
routine matter to show that A is row equivalent to I2 . If ad − bc = 0, then by case considerations we
find that A is row equivalent to a matrix that has a row or column consisting entirely of zeros, so that
A is not row equivalent to I2 .
Alternate proof: If ad − bc ≠ 0, then A is nonsingular, so the only solution is the trivial one. If
ad − bc = 0, then ad = bc. If ad = 0 then either a or d = 0, say a = 0. Then bc = 0, and either b
or c = 0. In any of these cases we get a nontrivial solution. If ad ≠ 0, then ad = bc, and the second
equation is a multiple of the first one, so we again have a nontrivial solution.
19. This had to be shown in the first proof of Exercise 18 above. If the alternate proof of Exercise 18 was
given, then Exercise 19 follows from it by noting that the homogeneous system Ax = 0 has
only the trivial solution if and only if A is row equivalent to I2 , and this occurs if and only if ad − bc ≠ 0.
20. ⎡3/2⎤   ⎡−1⎤
    ⎢−2 ⎥ + ⎢ 1⎥ t, where t is any number.
    ⎣ 0 ⎦   ⎣ 0⎦
22. −a + b + c = 0.
24. (a) Change “row” to “column.”
(b) Proceed as in the proof of Theorem 2.1, changing “row” to “column.”
Copyright © 2012 Pearson Education, Inc. Publishing as Prentice Hall
Section 2.2
25. Using Exercise 24(b) we can assume that every m × n matrix A is column equivalent to a matrix in
column echelon form. That is, A is column equivalent to a matrix B that satisfies the following:
(a) All columns consisting entirely of zeros, if any, are at the right side of the matrix.
(b) The first nonzero entry in each column that is not all zeros is a 1, called the leading entry of the
column.
(c) If the columns j and j + 1 are two successive columns that are not all zeros, then the leading
entry of column j + 1 is below the leading entry of column j.
We start with matrix B and show that it is possible to find a matrix C that is column equivalent to B
that satisfies
(d) If a row contains a leading entry of some column then all other entries in that row are zero.
If column j of B contains a nonzero element, then its first (counting top to bottom) nonzero element
is a 1. Suppose the 1 appears in row rj . We can perform column operations of the form acj + ck for
each of the nonzero columns ck of B such that the resulting matrix has row rj with a 1 in the (rj , j)
entry and zeros everywhere else. This can be done for each column that contains a nonzero entry; hence
we can produce a matrix C satisfying (d). It follows that C is the unique matrix in reduced column
echelon form and column equivalent to the original matrix A.
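The reduction described in this solution can be sketched computationally: column operations on A correspond to row operations on Aᵀ, so the reduced column echelon form is the transpose of the RREF of the transpose. A small SymPy illustration (the sample matrix is arbitrary, not from the exercise):

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 0, 1]])

# Column operations on A are exactly row operations on A.T, so the
# reduced column echelon form of A is rref(A.T) transposed back.
C = A.T.rref()[0].T
print(C)
```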
26. −3a − b + c = 0.
28. Apply Exercise 18 to the linear system given here. The coefficient matrix is
⎡a − r     d  ⎤
⎣  c    b − r ⎦ .
Hence from Exercise 18, we have a nontrivial solution if and only if (a − r)(b − r) − cd = 0.
32. 2x² − (3/2)x + 12.
34. (a) x = 0, y = 0 (b) x = 5, y = −7
36. r = 5, r2 = 5.
37. The GPS receiver is located at the tangent point where the two circles intersect.
40. x = ⎡     0      ⎤ .
        ⎣1/4 − (1/4)i⎦
42. No solution.
(a) ⎡ 1 0 0⎤ .   (b) ⎡1 0 0⎤ .
    ⎢ 0 1 0⎥         ⎢0 1 0⎥
    ⎣−2 0 1⎦         ⎣2 0 1⎦
(c) AB = ⎡ 1 0 0⎤ ⎡1 0 0⎤   ⎡1 0 0⎤
         ⎢ 0 1 0⎥ ⎢0 1 0⎥ = ⎢0 1 0⎥ ,
         ⎣−2 0 1⎦ ⎣2 0 1⎦   ⎣0 0 1⎦

    BA = ⎡1 0 0⎤ ⎡ 1 0 0⎤   ⎡1 0 0⎤
         ⎢0 1 0⎥ ⎢ 0 1 0⎥ = ⎢0 1 0⎥ .
         ⎣2 0 1⎦ ⎣−2 0 1⎦   ⎣0 0 1⎦
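The computation in (c) reflects a general fact: the type III elementary matrix that adds d times one row to another is inverted by the matrix that adds −d. A quick NumPy check with the same pair of matrices (d = −2, rows 1 and 3):

```python
import numpy as np

I = np.eye(3)
A = I.copy(); A[2, 0] = -2.0   # adds -2 times row 1 to row 3
B = I.copy(); B[2, 0] = 2.0    # adds +2 times row 1 to row 3

# Each operation undoes the other, so AB = BA = I.
assert np.allclose(A @ B, I)
assert np.allclose(B @ A, I)
```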
12. (a) A⁻¹ = ⎡  1    −1     0    −1 ⎤ .   (b) Singular.
              ⎢  0   −1/2    0     0 ⎥
              ⎢−1/5    1    1/5   3/5⎥
              ⎣ 2/5  −1/2  −2/5  −1/5⎦
(1/(ad − bc)) ⎡ d  −b⎤ ⎡a  b⎤ = (1/(ad − bc)) ⎡ad − bc   db − bd ⎤ = ⎡1  0⎤ .
              ⎣−c   a⎦ ⎣c  d⎦                 ⎣−ca + ac  −bc + ad⎦   ⎣0  1⎦
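The same verification can be carried out symbolically; a short SymPy check of the 2 × 2 inverse formula used above:

```python
from sympy import symbols, Matrix, simplify, eye

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])
Ainv = Matrix([[d, -b], [-c, a]]) / (a*d - b*c)

# The product simplifies to the identity whenever ad - bc != 0.
assert simplify(Ainv * A - eye(2)) == Matrix.zeros(2)
```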
22. (a) ⎡1 0  0⎤ .   (b) ⎡1 0 0⎤ .   (c) ⎡1 0 −5⎤ .
        ⎢0 1  0⎥         ⎢0 0 1⎥         ⎢0 1  0⎥
        ⎣0 0 −3⎦         ⎣0 1 0⎦         ⎣0 0  1⎦
23. The matrices A and B are row equivalent if and only if B = Ek Ek−1 · · · E2 E1 A.
Let P = Ek Ek−1 · · · E2 E1 .
24. If A and B are row equivalent then B = P A, where P is nonsingular, and A = P −1 B (Exercise 23). If
A is nonsingular then B is nonsingular, and conversely.
25. Suppose B is singular. Then by Theorem 2.9 there exists x ≠ 0 such that Bx = 0. Then (AB)x =
A0 = 0, which means that the homogeneous system (AB)x = 0 has a nontrivial solution. Theorem
2.9 implies that AB is singular, a contradiction. Hence, B is nonsingular. Since A = (AB)B −1 is a
product of nonsingular matrices, it follows that A is nonsingular.
Alternate Proof: If AB is nonsingular it follows that AB is row equivalent to In , so P (AB) = In . Since
P is nonsingular, P = Ek Ek−1 · · · E2 E1 . Then (P A)B = In or (Ek Ek−1 · · · E2 E1 A)B = In . Letting
Ek Ek−1 · · · E2 E1 A = C, we have CB = In , which implies that B is nonsingular. Since P AB = In ,
A = P −1 B −1 , so A is nonsingular.
26. The matrix A is row equivalent to O if and only if A = P O = O where P is nonsingular.
27. The matrix A is row equivalent to B if and only if B = P A, where P is a nonsingular matrix. Now
B T = AT P T , so A is row equivalent to B if and only if AT is column equivalent to B T .
28. If A has a row of zeros, then A cannot be row equivalent to In , and so by Corollary 2.2, A is singular.
If the jth column of A is the zero column, then the homogeneous system Ax = 0 has a nontrivial
solution, the vector x with 1 in the jth entry and zeros elsewhere. By Theorem 2.9, A is singular.
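The zero-column case can be checked directly: the standard basis vector with a 1 in the jth entry is a nontrivial solution. A small NumPy illustration (the matrix is an arbitrary example, not from the text):

```python
import numpy as np

# Column 2 of A is zero, so e_2 solves Ax = 0 nontrivially.
A = np.array([[1.0, 0.0, 2.0],
              [3.0, 0.0, 4.0],
              [5.0, 0.0, 6.0]])
x = np.array([0.0, 1.0, 0.0])   # 1 in the jth entry, zeros elsewhere

assert np.allclose(A @ x, 0) and np.any(x != 0)
```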
29. (a) No. Let A = ⎡1 0⎤ and B = ⎡0 0⎤ .
                    ⎣0 0⎦         ⎣0 1⎦
    Then (A + B)⁻¹ exists but A⁻¹ and B⁻¹ do not. Even supposing they all exist, equality
    need not hold: let A = [1], B = [2], so (A + B)⁻¹ = [1/3] ≠ [1] + [1/2] = A⁻¹ + B⁻¹.
(b) Yes, for A nonsingular and r ≠ 0: (rA)⁻¹ = (1/r)A⁻¹ , since
    (rA) · ((1/r)A⁻¹) = (r · (1/r))(A · A⁻¹) = 1 · In = In .
30. Suppose that A is nonsingular. Then Ax = b has the solution x = A−1 b for every n × 1 matrix b.
Conversely, suppose that Ax = b is consistent for every n × 1 matrix b. Letting b be the matrices
     ⎡1⎤        ⎡0⎤                ⎡0⎤
     ⎢0⎥        ⎢1⎥                ⎢0⎥
e1 = ⎢⋮⎥ , e2 = ⎢⋮⎥ , . . . , en = ⎢⋮⎥ ,
     ⎣0⎦        ⎣0⎦                ⎣1⎦
Letting xj denote a solution of Axj = ej and letting C be the matrix whose jth column is xj , we can
write the n systems in (∗ ) as AC = In , since In = [e1 e2 · · · en ]. Hence, A is nonsingular.
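The column-by-column construction in this argument can be sketched numerically; a minimal NumPy example with an arbitrary nonsingular 2 × 2 matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])   # arbitrary nonsingular example

# Solve Ax_j = e_j for each standard basis vector and assemble
# C = [x_1 ... x_n]; then AC = I_n, so C is A^{-1}.
n = A.shape[0]
C = np.column_stack([np.linalg.solve(A, np.eye(n)[:, j]) for j in range(n)])

assert np.allclose(A @ C, np.eye(n))
```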
31. We consider the case that A is nonsingular and upper triangular. A similar argument can be given for
A lower triangular.
By Theorem 2.8, A is a product of elementary matrices which are the inverses of the elementary
matrices that “reduce” A to In . That is,
A = E1⁻¹ E2⁻¹ · · · Ek⁻¹ .
The elementary matrix Ei will be upper triangular since it is used to introduce zeros into the upper
triangular part of A in the reduction process. The inverse of Ei is an elementary matrix of the same
type and also an upper triangular matrix. Since the product of upper triangular matrices is upper
triangular and we have A−1 = Ek · · · E1 we conclude that A−1 is upper triangular.
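The conclusion can be spot-checked numerically; a small NumPy sketch with an arbitrary nonsingular upper triangular matrix:

```python
import numpy as np

U = np.array([[2.0, 1.0, 3.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 6.0]])   # nonsingular upper triangular

Uinv = np.linalg.inv(U)
# The strictly lower triangular part of U^{-1} is zero,
# i.e. U^{-1} is again upper triangular.
assert np.allclose(np.tril(Uinv, k=-1), 0)
assert np.allclose(U @ Uinv, np.eye(3))
```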
4. Allowable equivalence operations (“elementary row or elementary column operation”) include in par-
ticular elementary row operations.
5. A and B are equivalent if and only if B = Et · · · E2 E1 AF1 F2 · · · Fs . Let Et Et−1 · · · E2 E1 = P and
F1 F2 · · · Fs = Q.
6. B = ⎡I2 0⎤ ; a possible answer is: B = ⎡−1  2 0⎤   ⎡1 0 −1⎤
       ⎣ 0 0⎦                             ⎢ 1 −1 0⎥ A ⎢0 1 −1⎥ .
                                          ⎣−1  1 1⎦   ⎣0 0  1⎦
9. Replace “row” by “column” and vice versa in the elementary operations which transform A into B.
10. Possible answers are:

    (a) ⎡1 −2  3  0⎤ .   (b) ⎡1 0⎤ .   (c) ⎡1 0  0 0 0⎤ .
        ⎢0 −1  4  3⎥         ⎣0 0⎦         ⎢0 1 −2 0 2⎥
        ⎣0  2 −5 −2⎦                       ⎣0 5  5 4 4⎦
11. If A and B are equivalent then B = P AQ and A = P −1 BQ−1 . If A is nonsingular then B is nonsingular,
and conversely.
8. L = ⎡ 1 0 0 0⎤ , U = ⎡4 1 0.25 −0.5⎤ , x = ⎡−1.5⎤ .
       ⎢ 6 1 0 0⎥       ⎢0 3   2     1⎥       ⎢−2  ⎥
       ⎢−1 2 1 0⎥       ⎢0 0  −4     1⎥       ⎢ 5  ⎥
       ⎣−2 3 2 1⎦       ⎣0 0   0    −2⎦       ⎣−4  ⎦
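The two-stage LU solution process (forward substitution for Ly = b, then back substitution for Ux = y) can be sketched as follows; `lu_solve` and the small L, U, b below are illustrative, not the exercise's data:

```python
import numpy as np

def lu_solve(L, U, b):
    """Solve LUx = b: forward substitution for Ly = b, then back substitution for Ux = y."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                    # forward substitution (top to bottom)
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):        # back substitution (bottom to top)
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

L = np.array([[1.0, 0.0], [3.0, 1.0]])    # unit lower triangular
U = np.array([[2.0, 1.0], [0.0, 4.0]])    # upper triangular
b = np.array([4.0, 16.0])
x = lu_solve(L, U, b)
assert np.allclose(L @ U @ x, b)
```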
4. c + 2a − 3b = 0.
5. (a) Multiply the jth row of B by 1/k.
(b) Interchange the ith and jth rows of B.
(c) Add −k times the jth row of B to its ith row.
6. (a) If we transform E1 to reduced row echelon form, we obtain In . Hence E1 is row equivalent to In
and thus is nonsingular.
(b) If we transform E2 to reduced row echelon form, we obtain In . Hence E2 is row equivalent to In
and thus is nonsingular.
(c) If we transform E3 to reduced row echelon form, we obtain In . Hence E3 is row equivalent to In
and thus is nonsingular.
8. ⎡1 −a  a²  −a³⎤
   ⎢0  1  −a   a²⎥
   ⎢0  0   1   −a⎥ .
   ⎣0  0   0    1⎦
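The displayed matrix is the inverse of the 4 × 4 bidiagonal matrix with 1's on the diagonal and a on the superdiagonal; assuming that is the matrix in the exercise, a numeric spot check with a = 2:

```python
import numpy as np

a = 2.0
M = np.eye(4) + a * np.diag(np.ones(3), k=1)   # 1's on diagonal, a on superdiagonal
Minv = np.array([[1, -a, a**2, -a**3],
                 [0,  1,   -a, a**2],
                 [0,  0,    1,   -a],
                 [0,  0,    0,    1]], dtype=float)

assert np.allclose(M @ Minv, np.eye(4))
```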
12. s = 0, ±√2.
13. For any angle θ, cos θ and sin θ are never simultaneously zero. Thus at least one element in column 1
is not zero. Assume cos θ ≠ 0. (If cos θ = 0, then interchange rows 1 and 2 and proceed in a similar
manner to that described below.) To show that the matrix is nonsingular and determine its inverse,
we put
    ⎡ cos θ  sin θ   1  0⎤
    ⎣−sin θ  cos θ   0  1⎦
in reduced row echelon form. After the row operations (1/cos θ)r1 → r1 and sin θ · r1 + r2 → r2,
the (2, 2)-element is cos θ + sin²θ/cos θ = 1/cos θ, which is not zero. Applying the further row
operations cos θ times row 2, and −(sin θ/cos θ) times row 2 added to row 1, we obtain
    ⎡1  0   cos θ  −sin θ⎤
    ⎣0  1   sin θ   cos θ⎦ .
Hence the inverse is
    ⎡cos θ  −sin θ⎤
    ⎣sin θ   cos θ⎦ .
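The inverse found here is the transpose of the original matrix, which is easy to confirm numerically; a NumPy sketch with an arbitrary angle:

```python
import numpy as np

theta = 0.7   # any angle
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
Rinv = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])

# The computed inverse is R's transpose, and R Rinv = I.
assert np.allclose(Rinv, R.T)
assert np.allclose(R @ Rinv, np.eye(2))
```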
16. Suppose at some point in the process of reducing the augmented matrix to reduced row echelon form
we encounter a row whose first n entries are zero but whose (n + 1)st entry is some number c ≠ 0. The
corresponding linear equation is
0 · x1 + · · · + 0 · xn = c, or 0 = c,
which is impossible. Hence the linear system has no solution.
17. Let u be one solution to Ax = b. Since A is singular, the homogeneous system Ax = 0 has a nontrivial
solution u0 . Then for any real number r, v = ru0 is also a solution to the homogeneous system. Finally,
by Exercise 29, Sec. 2.2, for each of the infinitely many vectors v, the vector w = u + v is a solution
to the nonhomogeneous system Ax = b.
18. s = 1, t = 1.
20. If any of the diagonal entries of L or U is zero, there will not be a unique solution.
If either X = O or Y = O, then XY T = O. Thus assume that there is at least one nonzero component
in X, say xi , and at least one nonzero component in Y , say yj . Then the operation (1/xi ) Rowi (XY T ) makes the ith
row exactly Y T . Since all the other rows are multiples of Y T , row operations of the form −xp Ri + Rp ,
for p ≠ i, can be performed to zero out everything but the ith row. It follows that XY T is row
equivalent either to O or to a matrix with n − 1 zero rows.
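The reduction just described can be demonstrated numerically; a NumPy sketch with arbitrary illustrative vectors X and Y:

```python
import numpy as np

x = np.array([[0.0], [3.0], [2.0]])   # nonzero component x_2 = 3 (index i = 1)
y = np.array([[4.0], [5.0]])
M = x @ y.T                           # every row of XY^T is a multiple of Y^T

i = 1
M[i] = M[i] / x[i, 0]                 # (1/x_i) Row_i makes row i exactly Y^T
for p in range(M.shape[0]):
    if p != i:
        M[p] = M[p] - x[p, 0] * M[i]  # -x_p R_i + R_p zeros out row p

# Row i is Y^T; every other row is now zero.
assert np.allclose(M[i], y.ravel())
assert np.allclose(np.delete(M, i, axis=0), 0)
```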
2. (a) No.
(b) Infinitely many.
(c) No.
(d) x = ⎡−6 + 2r + 7s⎤ , where r and s are any real numbers.
        ⎢      r     ⎥
        ⎢    −3s     ⎥
        ⎣      s     ⎦
3. k = 6.
6. P = A−1 , Q = B.
7. Possible answers: Diagonal, zero, or symmetric.