Final Solutions

1. Find all solutions to the following system of linear equations using the
row-reduced echelon form.
x + 3y − 5z = 4
x + 4y − 8z = 7
−3x − 7y + 9z = −6

Solution. On the augmented matrix

[  1   3  −5   4 ]
[  1   4  −8   7 ]
[ −3  −7   9  −6 ]

we execute the following sequence of elementary row operations: R2 → −R1 + R2, R3 → 3R1 + R3, R1 → −3R2 + R1, and R3 → −2R2 + R3, to obtain

[ 1  0   4  −5 ]
[ 0  1  −3   3 ]
[ 0  0   0   0 ]
From this, we obtain the solution set:

{(x, y, z) = (−4k − 5, 3k + 3, k) : k ∈ R}.
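As a quick numerical sanity check (a sketch using numpy, not part of the original solution), the parametrized solution family can be substituted back into the system for a few values of the free parameter k:

```python
import numpy as np

# Coefficient matrix and right-hand side of the system in Problem 1.
A = np.array([[1, 3, -5],
              [1, 4, -8],
              [-3, -7, 9]], dtype=float)
b = np.array([4, 7, -6], dtype=float)

# Check the claimed solution family (x, y, z) = (-4k - 5, 3k + 3, k)
# for several values of the free parameter k.
for k in [-2.0, 0.0, 1.5, 7.0]:
    v = np.array([-4*k - 5, 3*k + 3, k])
    assert np.allclose(A @ v, b)
```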

2. Let V be a vector space of dimension n over a field K. Let V1 and V2 be subspaces of V .
(a) Show that V1 ∩ V2 is a subspace of V .
(b) If V1 ≠ V2 and dim V1 = dim V2 = n − 1, then show that dim(V1 ∩ V2) = n − 2.
Solution. (a) Since 0 ∈ V1 and 0 ∈ V2 , we have that 0 ∈ V1 ∩ V2 . Let v1 , v2 ∈ V1 ∩ V2 . Since v1 , v2 ∈ V1 and v1 , v2 ∈ V2 , it follows that v1 + v2 ∈ V1 ∩ V2 . Also, if c ∈ K and v ∈ V1 ∩ V2 , then cv ∈ V1 ∩ V2 , as cv ∈ V1 and cv ∈ V2 . Therefore, V1 ∩ V2 is a subspace of V .
(b) Since V1 ≠ V2 , the subspace V1 + V2 properly contains both V1 and V2 , so dim(V1 + V2) > n − 1, which forces dim(V1 + V2) = n. We know that dim(V1 + V2) = dim V1 + dim V2 − dim(V1 ∩ V2). Substituting, we infer that dim(V1 ∩ V2) = (n − 1) + (n − 1) − n = n − 2.
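To make part (b) concrete, here is a small numerical illustration (a sketch assuming numpy) in R3, where two distinct planes (dimension n − 1 = 2) intersect in a line (dimension n − 2 = 1); the particular planes chosen are hypothetical examples:

```python
import numpy as np

# Two distinct 2-dimensional subspaces of R^3 (n = 3).
V1 = np.array([[1, 0, 0], [0, 1, 0]]).T  # columns span the xy-plane
V2 = np.array([[1, 0, 0], [0, 0, 1]]).T  # columns span the xz-plane

dim_V1 = np.linalg.matrix_rank(V1)
dim_V2 = np.linalg.matrix_rank(V2)
# dim(V1 + V2) is the rank of the columns of V1 and V2 taken together.
dim_sum = np.linalg.matrix_rank(np.hstack([V1, V2]))

# dim(V1 ∩ V2) = dim V1 + dim V2 − dim(V1 + V2)
dim_int = dim_V1 + dim_V2 - dim_sum
assert (dim_V1, dim_V2, dim_sum, dim_int) == (2, 2, 3, 1)
```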

3. Prove the following.
(a)
| 1  λ1  λ1² |
| 1  λ2  λ2² |  = (λ2 − λ1)(λ3 − λ1)(λ3 − λ2).
| 1  λ3  λ3² |
(b) Suppose that A is a real 3 × 3 symmetric matrix with a repeated eigenvalue. If v is any vector, show that the vectors v, Av, A²v are linearly dependent.
Solution. (a) The required determinant is unaffected by the operations C2 → C2 − λ1 C1 and C3 → C3 − λ1² C1, which give the determinant

| 1  0         0         |
| 1  λ2 − λ1   λ2² − λ1² |
| 1  λ3 − λ1   λ3² − λ1² |

Expanding this along the first row gives (λ2 − λ1)(λ3 − λ1)(λ3 − λ2), as required.
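The identity of part (a) can be spot-checked numerically (a sketch assuming numpy; the sample values of λ1, λ2, λ3 are arbitrary):

```python
import numpy as np

# Numerically check the 3x3 Vandermonde identity for sample values.
l1, l2, l3 = 2.0, -1.0, 5.0
V = np.array([[1, l1, l1**2],
              [1, l2, l2**2],
              [1, l3, l3**2]])
lhs = np.linalg.det(V)
rhs = (l2 - l1) * (l3 - l1) * (l3 - l2)
assert np.isclose(lhs, rhs)
```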
(b) The vectors v, Av, A²v are dependent if and only if the 3 × 3 matrix P = [v, Av, A²v] has determinant zero. Since A is real symmetric, there is a matrix N such that

N⁻¹AN = Λ := diag(λ1, λ2, λ3).

We note that det(P) = 0 iff det(N⁻¹P) = 0. Writing v = N w, the matrix N⁻¹P = [w, Λw, Λ²w], which can be written as the product

[ w1  0   0  ] [ 1  λ1  λ1² ]
[ 0   w2  0  ] [ 1  λ2  λ2² ]
[ 0   0   w3 ] [ 1  λ3  λ3² ]

Since the eigenvalues of A are not distinct (A has a repeated eigenvalue by hypothesis), the identity of part (a) shows that the determinant of the Vandermonde factor above is zero, so the determinant of the product is zero for all w. Thus v, Av, A²v are dependent for all v.
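Part (b) can also be checked numerically; the sketch below (assuming numpy) builds a hypothetical symmetric matrix with eigenvalues 1, 1, 4 and verifies that det[v, Av, A²v] vanishes for random v:

```python
import numpy as np

# A symmetric matrix with a repeated eigenvalue, built as Q diag(1,1,4) Q^T
# for an arbitrary orthogonal Q.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([1.0, 1.0, 4.0]) @ Q.T

for _ in range(5):
    v = rng.standard_normal(3)
    P = np.column_stack([v, A @ v, A @ A @ v])
    assert abs(np.linalg.det(P)) < 1e-9
```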
4. Let V be a finite dimensional vector space over R with a positive definite scalar product and let A be a symmetric linear operator on V .
(a) State the Spectral Theorem for A.
(b) Show that the eigenvalues of A are positive if and only if ⟨Av, v⟩ > 0, for all v ≠ 0.

Solution. (a) Let V be a finite dimensional vector space over R with a positive definite scalar product and let A be a symmetric linear operator on V . Then V has an orthonormal basis consisting of eigenvectors of A.
(b) Suppose that ⟨Av, v⟩ > 0 for all v ≠ 0. Let v be an eigenvector of A associated with the eigenvalue λ. Then ⟨Av, v⟩ = ⟨λv, v⟩ = λ⟨v, v⟩. Since ⟨v, v⟩ > 0, we have that λ > 0.
Conversely, let us assume that the eigenvalues of A are positive. Since A is symmetric, the Spectral Theorem implies that V has an orthonormal basis {v1 , . . . , vn } consisting of eigenvectors of A, where vi is an eigenvector associated with the eigenvalue λi . Let v ∈ V be any nonzero vector. Then there exist c1 , . . . , cn ∈ R such that v = c1 v1 + . . . + cn vn . So we have that

⟨Av, v⟩ = ⟨A(c1 v1 + . . . + cn vn ), c1 v1 + . . . + cn vn ⟩
        = ⟨c1 A(v1) + . . . + cn A(vn), c1 v1 + . . . + cn vn ⟩
        = ⟨c1 λ1 v1 + . . . + cn λn vn , c1 v1 + . . . + cn vn ⟩.

Since ⟨vi , vj ⟩ = 0 when i ≠ j, we have that

⟨Av, v⟩ = c1² λ1 ⟨v1 , v1 ⟩ + . . . + cn² λn ⟨vn , vn ⟩.

Hence, our assumption that λi > 0 for all i implies that ⟨Av, v⟩ > 0.
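A small numerical illustration of part (b) (a sketch assuming numpy; the sample matrix is a hypothetical choice):

```python
import numpy as np

# A symmetric matrix with positive eigenvalues satisfies v^T A v > 0
# for every nonzero v.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # symmetric; eigenvalues (5 ± sqrt(5))/2 > 0

assert np.all(np.linalg.eigvalsh(A) > 0)

rng = np.random.default_rng(1)
for _ in range(100):
    v = rng.standard_normal(2)
    assert v @ A @ v > 0
```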
 
5. Let A = [ 1 2; 2 1 ]. Find the eigenvalues λ1 and λ2 of A. Verify that there is a matrix N such that N⁻¹AN = [ λ1 0; 0 λ2 ].
Solution. The characteristic polynomial of A is PA(t) = det(tI − A) = t² − 2t − 3. The eigenvalues of A are the roots of PA(t), which are λ1 = 3 and λ2 = −1. Any eigenvector v associated with λi satisfies Av = λi v. The normalized eigenvectors vλi associated with λi are

vλ1 = (1/√2, 1/√2)ᵀ  and  vλ2 = (1/√2, −1/√2)ᵀ.

" #
√1 √1
2 2
We take N to be the matrix [vλ1 , vλ2 ] = √1
.
2
− √12
 
−1 3 0
It can be verified that N AN =
0 −1
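The diagonalization claimed above can be verified numerically (a sketch assuming numpy):

```python
import numpy as np

# Verify the diagonalization in Problem 5.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
s = 1 / np.sqrt(2)
N = np.array([[s, s],
              [s, -s]])  # columns are the normalized eigenvectors

D = np.linalg.inv(N) @ A @ N
assert np.allclose(D, np.diag([3.0, -1.0]))
```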

6. Let t ↦ aij (t) be differentiable functions for 1 ≤ i, j ≤ n. Consider the matrix A(t) whose ij-th entry is aij (t). Similarly, let t ↦ bij (t) be another collection of differentiable functions for 1 ≤ i, j ≤ n, and let B(t) be the matrix whose ij-th entry is bij (t). The derivative (d/dt)A(t) is simply the matrix whose ij-th entry is (d/dt)aij (t).
(a) Show that (d/dt)(A(t)B(t)) = ((d/dt)A(t)) B(t) + A(t) ((d/dt)B(t)).
(b) Derive a formula for (d/dt)(A(t)⁻¹) in terms of (d/dt)A(t) (assuming that A(t)⁻¹ exists).

Solution. (a) Let an overdot denote the time derivative. We have

((d/dt)(AB))ij = (d/dt) Σk Aik Bkj = Σk (Ȧik Bkj + Aik Ḃkj) = (ȦB + AḂ)ij .

(b) Letting B(t) = A(t)⁻¹ in part (a), we get (dA/dt) A⁻¹ + A (dA⁻¹/dt) = 0, which immediately gives

dA⁻¹/dt = −A⁻¹ (dA/dt) A⁻¹.
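The formula of part (b) can be checked against a finite-difference approximation (a sketch assuming numpy; the matrix-valued function A(t) below is a hypothetical example):

```python
import numpy as np

# Finite-difference check of d(A^{-1})/dt = -A^{-1} (dA/dt) A^{-1}.
def A(t):
    return np.array([[2 + t, np.sin(t)],
                     [t**2, 3 + np.cos(t)]])

def dA(t):  # entrywise derivative of A(t)
    return np.array([[1.0, np.cos(t)],
                     [2*t, -np.sin(t)]])

t, h = 0.7, 1e-6
numeric = (np.linalg.inv(A(t + h)) - np.linalg.inv(A(t - h))) / (2 * h)
Ainv = np.linalg.inv(A(t))
formula = -Ainv @ dA(t) @ Ainv
assert np.allclose(numeric, formula, atol=1e-6)
```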
7. Consider n × n real matrices A satisfying A = Aᵀ = A⁻¹.
(a) When n = 2, determine all such matrices.
(b) Assuming that −1 is not an eigenvalue of A, determine all such n × n matrices.
 
Solution. (a) Since A = Aᵀ, A has to be a matrix of the form [ a b; b c ]. So we have that det A = ac − b² and

A⁻¹ = (1/(ac − b²)) [ c −b; −b a ].

The fact that A = A⁻¹ would imply that −b/(ac − b²) = b.

Suppose that b ≠ 0. Then ac − b² = det A = −1, and consequently A⁻¹ = [ −c b; b −a ]. Once again, A = A⁻¹ implies that a = −c, and hence A = [ a b; b −a ], with the additional condition that det A = −(a² + b²) = −1, i.e. a² + b² = 1.
If b = 0, then A = [ a 0; 0 c ]. Since A = A⁻¹ = [ 1/a 0; 0 1/c ], we infer that 1/a = a and 1/c = c, hence a = ±1 and c = ±1, so A = [ ±1 0; 0 ±1 ].
In general, the set of all real 2 × 2 matrices satisfying A = Aᵀ = A⁻¹ is

{ [ a b; b −a ] : a, b ∈ R and a² + b² = 1 } ∪ { ±I2 }.
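A quick numerical spot-check of part (a) (a sketch assuming numpy; the parameter value is arbitrary, taking a = cos θ and b = sin θ so that a² + b² = 1):

```python
import numpy as np

# Matrices [[a, b], [b, -a]] with a^2 + b^2 = 1, as well as ±I,
# satisfy A = A^T = A^{-1}.
theta = 0.6
a, b = np.cos(theta), np.sin(theta)
candidates = [np.array([[a, b], [b, -a]]), np.eye(2), -np.eye(2)]

for A in candidates:
    assert np.allclose(A, A.T)             # symmetric
    assert np.allclose(A @ A, np.eye(2))   # A is its own inverse
```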

(b) Let λ1 , . . . , λn be the eigenvalues of A. Since A is symmetric, by the Spectral Theorem there exists an invertible n × n matrix N such that

N⁻¹AN = diag(λ1 , . . . , λn ).    (1)

Taking the inverse on both sides of this equation and using A = A⁻¹, we obtain

N⁻¹AN = N⁻¹A⁻¹N = diag(1/λ1 , . . . , 1/λn ).    (2)

Comparing these two equations, we have that 1/λi = λi , which implies that λi = ±1, for 1 ≤ i ≤ n. By our hypothesis λi ≠ −1, so we have that N⁻¹AN = In , or A = In .

8. Prove or disprove. Please avoid redundantly long answers.

(a) If U and W are subspaces of a finite-dimensional vector space V over a field K such that dim U + dim W > dim V , then V = U + W .
(b) A subset {v1 , . . . , vk } of a vector space V is linearly independent if for each i ≠ j, the subset {vi , vj } is linearly independent.
(c) If A is a symmetric n × n real matrix, then (Im(A))⊥ = Ker(A) (with respect to the standard dot product on Rn).
(d) If V1 and V2 are subspaces of a vector space V , then V1 ∪ V2 is a subspace of V .
(e) There exist two complex n × n matrices A and B such that AB − BA = In .
(f) If A is a symmetric linear operator on R3 such that tr(A²) = 0, then A = 0.

Solution. (a) This statement is false. A counterexample is as follows: let V = R4 = ⟨⟨e1 , e2 , e3 , e4 ⟩⟩, U = ⟨⟨e1 , e2 , e3 ⟩⟩, and W = ⟨⟨e1 , e2 ⟩⟩. Then dim U + dim W = 5 > 4 = dim V , but U + W = ⟨⟨e1 , e2 , e3 ⟩⟩ ≠ V . The same counterexample can be generalized to V = Kn, for n ≥ 4, with U = Kn−1 and W = Kn−2. Alternatively, consider a proper subspace X of V such that dim X > dim V /2, and choose U = W = X.
(b) This statement is false. For a counterexample, consider the vectors e1 , e2 and e3 = e1 + e2 in R3. Clearly, these vectors are pairwise linearly independent. However, the set {e1 , e2 , e1 + e2 } is not linearly independent, since e1 + e2 − e3 = 0 is an obvious dependency within the set.
(c) This statement is true. Since A is symmetric, we have ⟨x, Ay⟩ = ⟨Ax, y⟩. Using this property we have:

v ∈ (Im(A))⊥ ⇔ ⟨v, Aw⟩ = 0 ∀ w ⇔ ⟨Av, w⟩ = 0 ∀ w ⇔ Av = 0 ⇔ v ∈ Ker(A)
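A numerical illustration of (c) (a sketch assuming numpy; the rank-one symmetric matrix and the test vector are hypothetical choices):

```python
import numpy as np

# For a symmetric A, a vector orthogonal to every column of A
# (i.e. to Im(A)) lies in Ker(A).
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])  # symmetric, rank 1

v = np.array([2.0, -1.0, 0.0])  # orthogonal to (1, 2, 3), hence to Im(A)
assert np.allclose(A.T @ v, 0)  # v is orthogonal to every column of A
assert np.allclose(A @ v, 0)    # and indeed v lies in Ker(A)
```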

(d) This statement is false. A counterexample is as follows: the x-axis (= ⟨⟨e1 ⟩⟩) and the y-axis (= ⟨⟨e2 ⟩⟩) are both subspaces of R2. But the union of the two axes is not a subspace, as (1, 1) = e1 + e2 is not in the union.
(e) This statement is false. Suppose, on the contrary, that there exist A and B such that AB − BA = In . Then tr(AB − BA) = tr In , that is, tr AB − tr BA = n. Since tr AB = tr BA, this would mean that 0 = n, which is impossible.
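The key fact used here, tr(AB) = tr(BA), is easy to confirm numerically (a sketch assuming numpy; the random complex matrices are arbitrary):

```python
import numpy as np

# tr(AB) = tr(BA) for any square matrices, so tr(AB - BA) = 0 and
# AB - BA can never equal I_n, whose trace is n.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B - B @ A), 0)
```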
(f) This statement is true. Let λ1 , λ2 and λ3 be the eigenvalues of A. Since A is symmetric, by the Spectral Theorem there exists an invertible 3 × 3 matrix N such that

N⁻¹AN = diag(λ1 , λ2 , λ3 ).    (3)

Squaring both sides of Equation (3), we have that

(N⁻¹AN)(N⁻¹AN) = N⁻¹A²N = diag(λ1², λ2², λ3²).    (4)

Since by assumption tr A² = 0 and the trace is invariant under similarity, we have that tr A² = tr(N⁻¹A²N) = λ1² + λ2² + λ3² = 0. As A is symmetric, the λi are real, so this implies λ1 = λ2 = λ3 = 0. Therefore, N⁻¹AN = 0, or A = 0.
Bonus Questions
9. Let A be a symmetric positive definite 2 × 2 matrix. Show that det A ≤ (tr A / 2)². When can equality occur?
Solution. The eigenvalues of a symmetric positive definite matrix are positive real numbers λ1 , λ2 . Their arithmetic mean is (λ1 + λ2)/2 = (tr A)/2 and their geometric mean is √(λ1 λ2) = √(det A). Since the AM of two positive real numbers is at least their GM, we conclude (tr A)/2 ≥ √(det A), i.e. det A ≤ ((tr A)/2)². Equality occurs precisely when λ1 = λ2 , i.e. when A is a positive scalar multiple of the identity.
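The inequality and the equality case can be checked numerically (a sketch assuming numpy; the random positive definite matrices are generated as M Mᵀ + I, a hypothetical construction):

```python
import numpy as np

# Check det A <= (tr A / 2)^2 for random symmetric positive definite A.
rng = np.random.default_rng(3)
for _ in range(100):
    M = rng.standard_normal((2, 2))
    A = M @ M.T + np.eye(2)  # symmetric positive definite
    assert np.linalg.det(A) <= (np.trace(A) / 2) ** 2 + 1e-12

# Equality holds for positive scalar multiples of the identity.
A = 2.5 * np.eye(2)
assert np.isclose(np.linalg.det(A), (np.trace(A) / 2) ** 2)
```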

10. Let A = (Aij )n×n be a matrix with complex entries. The adjoint A∗ of A is defined by A∗ = (A∗ij )n×n , where A∗ij = Āji , for 1 ≤ i, j ≤ n. In other words, A∗ is the transpose of the matrix whose ij-th entry is the complex conjugate of the ij-th entry of A. A square matrix A is said to be normal if A∗A = AA∗. Let A be normal.
(a) Show that if v is an eigenvector of A with eigenvalue λ, then v is also an eigenvector of A∗ with eigenvalue λ̄. [Hint: A − λIn is also normal.]
(b) Show that eigenvectors corresponding to distinct eigenvalues are orthogonal with respect to the standard Hermitian product on Cn.

Solution. (a) Using the fact that AA∗ = A∗A, it is easy to see that (A − λI)(A∗ − λ̄I) = (A∗ − λ̄I)(A − λI). If Av = λv, we have:

‖(A∗ − λ̄I)v‖² = v̄ᵀ(A − λI)(A∗ − λ̄I)v = v̄ᵀ(A∗ − λ̄I)(A − λI)v = 0.

Therefore (A∗ − λ̄I)v = 0, i.e. A∗v = λ̄v.


(b) Let Av1 = λ1 v1 and Av2 = λ2 v2 . Taking the conjugate transpose of the second equation gives v̄2ᵀA∗ = λ̄2 v̄2ᵀ. Multiplying on the right by v1 , we get v̄2ᵀA∗v1 = λ̄2 v̄2ᵀv1 . Also, by part (a), A∗v1 = λ̄1 v1 ; multiplying this on the left by v̄2ᵀ, we get v̄2ᵀA∗v1 = λ̄1 v̄2ᵀv1 . Comparing these two expressions for v̄2ᵀA∗v1 , we get (λ̄1 − λ̄2 )v̄2ᵀv1 = 0. Since λ1 and λ2 are distinct, we conclude that v̄2ᵀv1 = 0. Hence v1 and v2 are orthogonal.
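Both parts can be illustrated numerically (a sketch assuming numpy; the rotation matrix below is a hypothetical example of a normal, non-symmetric matrix with eigenvalues ±i):

```python
import numpy as np

# A 90-degree rotation is normal (A* A = A A*) but not Hermitian.
A = np.array([[0.0, -1.0],
              [1.0, 0.0]])
assert np.allclose(A.conj().T @ A, A @ A.conj().T)  # A is normal

v1, lam1 = np.array([1.0, -1.0j]), 1.0j   # A v1 = i v1
v2, lam2 = np.array([1.0, 1.0j]), -1.0j   # A v2 = -i v2
assert np.allclose(A @ v1, lam1 * v1)
assert np.allclose(A @ v2, lam2 * v2)

# Part (a): v1 is an eigenvector of A* with eigenvalue conj(lam1).
assert np.allclose(A.conj().T @ v1, np.conj(lam1) * v1)
# Part (b): the Hermitian product of v1 and v2 vanishes.
assert np.isclose(np.vdot(v2, v1), 0)
```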
