Sons - 2002 LSE Linear Algebra MA201
\[
\langle x, x\rangle = \begin{pmatrix} a_1 & a_2 & a_3 \end{pmatrix}\begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix} = a_1^2 + a_2^2 + a_3^2,
\]
which is the sum of the squares of three real numbers and as such it is real and non-negative.
Further, to show that hx, xi = 0 if and only if x = 0, we note that:
LTR: If $\langle x, x\rangle = 0$, then $a_1^2 + a_2^2 + a_3^2 = 0$. But, this is the sum of the squares of three
real numbers and so it must be the case that $a_1 = a_2 = a_3 = 0$. Thus, $Ax = 0$, and since
$A$ is invertible, this gives us $x = 0$ as the only solution.
RTL: If $x = 0$, then $A0 = 0$ and so clearly, $\langle 0, 0\rangle = 0^t 0 = 0$.
(as required).
ii. As $\langle x, y\rangle = x^t A^t A y$ is a real number it is unaffected by transposition and so we have:
\[
\langle x, y\rangle = \langle x, y\rangle^t = (x^t A^t A y)^t = y^t A^t A x = \langle y, x\rangle,
\]
iii. Taking the vector $x + y \in \mathbb{R}^3$ we have:
\[
\langle x + y, z\rangle = (x + y)^t A^t A z = (x^t + y^t)A^t A z = x^t A^t A z + y^t A^t A z = \langle x, z\rangle + \langle y, z\rangle.
\]
Consequently, the formula given above does define an inner product on R3 (as required).
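The three properties above can also be checked numerically. The following is a minimal sketch using numpy; the matrix is the invertible $A$ given in part (c) below, and the test vectors are an arbitrary choice of our own:

```python
import numpy as np

# The invertible matrix A from part (c); <x, y> = x^t A^t A y.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

def inner(x, y):
    return x @ A.T @ A @ y

rng = np.random.default_rng(0)
x, y, z = rng.standard_normal((3, 3))   # three arbitrary test vectors

assert inner(x, x) >= 0                                          # positivity
assert np.isclose(inner(x, y), inner(y, x))                      # symmetry
assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))    # linearity
assert np.isclose(inner(np.zeros(3), np.zeros(3)), 0.0)
```

A numerical check like this cannot prove the axioms, of course, but it is a useful sanity test of the algebra.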
(c) We are given that A is the matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}.
\]
¹Of course, strictly speaking, the quantity given by $x^t A^t A y$ is a $1 \times 1$ matrix. However, it is obvious that here we
can treat it as a real number by stipulating that its single entry is the required real number.
\[
Ax = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_1 + x_2 \\ x_2 + x_3 \\ x_1 + x_3 \end{pmatrix},
\]
and so using the inner product defined in (b), its norm will be given by
\[
\|x\|^2 = \langle x, x\rangle = (Ax)^t(Ax) = (x_1 + x_2)^2 + (x_2 + x_3)^2 + (x_1 + x_3)^2.
\]
Thus, a condition that must be satisfied by the components of this vector if it is to have unit norm
using the given inner product is,
\[
(x_1 + x_2)^2 + (x_2 + x_3)^2 + (x_1 + x_3)^2 = 1.
\]
Hence, to find a symmetric matrix $B$ which expresses this condition in the form $x^t B x = 1$, we expand
this out to get:
\[
2x_1^2 + 2x_2^2 + 2x_3^2 + 2x_1x_2 + 2x_2x_3 + 2x_1x_3 = 1,
\]
which can be written in matrix form as
\[
x^t B x = \begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = 1,
\]
where the required symmetric matrix is
\[
B = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}.
\]
Otherwise: $B = A^t A$.
(d) Since we are given the eigenvectors we can calculate the eigenvalues, using $Bx = \lambda x$, as follows:
For $[0, 1, -1]^t$, we have
\[
\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix},
\]
and so this eigenvector corresponds to the eigenvalue $\lambda = 1$. For $[2, -1, -1]^t$, we have
\[
\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix} = \begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix},
\]
and so this eigenvector also corresponds to the eigenvalue $\lambda = 1$. For $[1, 1, 1]^t$, we have
\[
\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 4 \\ 4 \\ 4 \end{pmatrix} = 4\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix},
\]
and so this eigenvector corresponds to the eigenvalue $\lambda = 4$. Consequently, normalising these mutually orthogonal eigenvectors, we obtain
\[
P = \frac{1}{\sqrt{6}}\begin{pmatrix} 0 & 2 & \sqrt{2} \\ \sqrt{3} & -1 & \sqrt{2} \\ -\sqrt{3} & -1 & \sqrt{2} \end{pmatrix}
\quad\text{and}\quad
D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 4 \end{pmatrix},
\]
where $P^t B P = D$.
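The orthogonal diagonalisation above is easy to verify numerically. A sketch using numpy, with $P$ written column by column as the normalised eigenvectors:

```python
import numpy as np

B = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

# Normalised eigenvectors as columns: [0,1,-1]/sqrt2, [2,-1,-1]/sqrt6, [1,1,1]/sqrt3.
P = np.column_stack([
    np.array([0.0, 1.0, -1.0]) / np.sqrt(2),
    np.array([2.0, -1.0, -1.0]) / np.sqrt(6),
    np.array([1.0, 1.0, 1.0]) / np.sqrt(3),
])
D = np.diag([1.0, 1.0, 4.0])

assert np.allclose(P.T @ P, np.eye(3))   # P is orthogonal
assert np.allclose(P.T @ B @ P, D)       # P^t B P = D
```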
(e) To find a basis S of R3 such that the coordinate vector of x with respect to this basis, i.e. [x]S
is such that
\[
[x]_S^t\, C^t C\, [x]_S = 1,
\]
and the associated diagonal matrix $C$, we note that, since $P^t B P = D$, we have $B = PDP^t$ and so
\[
x^t B x = 1 \implies x^t PDP^t x = 1 \implies (P^t x)^t D (P^t x) = 1.
\]
Now, x can be written as
\[
x = y_1 v_1 + y_2 v_2 + y_3 v_3 = P\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = P[x]_S,
\]
where the vectors in the set S = {v1 , v2 , v3 } are the column vectors of the matrix P. As such, since
P is orthogonal, we have
\[
P^t x = [x]_S,
\]
i.e. $P^t x$ is the coordinate vector of $x$ with respect to the basis given by the column vectors of $P$.
That is, if we let S be the basis of R3 given by the set of vectors
\[
S = \left\{ \frac{1}{\sqrt{2}}\begin{pmatrix} 0 \\ 1 \\ -1 \end{pmatrix},\ \frac{1}{\sqrt{6}}\begin{pmatrix} 2 \\ -1 \\ -1 \end{pmatrix},\ \frac{1}{\sqrt{3}}\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \right\},
\]
we have
\[
[x]_S^t\, D\, [x]_S = 1.
\]
Now, $D$ is the diagonal matrix above, and so if we let
\[
C = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{pmatrix},
\]
so that $C^t C = D$,
then we have
\[
[x]_S^t\, C^t C\, [x]_S = 1,
\]
as required.
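The change of coordinates can be tested numerically too. The sketch below scales an arbitrarily chosen vector to unit norm in the $\langle\cdot,\cdot\rangle$ sense and checks that its coordinate vector with respect to $S$ satisfies the condition above:

```python
import numpy as np

B = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
P = np.column_stack([
    np.array([0.0, 1.0, -1.0]) / np.sqrt(2),
    np.array([2.0, -1.0, -1.0]) / np.sqrt(6),
    np.array([1.0, 1.0, 1.0]) / np.sqrt(3),
])
C = np.diag([1.0, 1.0, 2.0])

rng = np.random.default_rng(1)
x = rng.standard_normal(3)
x /= np.sqrt(x @ B @ x)      # now x^t B x = 1 (B is positive definite)

coords = P.T @ x             # [x]_S, since P is orthogonal
assert np.isclose(coords @ C.T @ C @ coords, 1.0)
```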
Question 2.
(a) The Leslie matrix for such a population involving three age classes is
\[
L = \begin{pmatrix} a_1 & a_2 & a_3 \\ b_1 & 0 & 0 \\ 0 & b_2 & 0 \end{pmatrix}.
\]
Here,
\[
x_2^{(k)} = b_1 x_1^{(k-1)}
\]
gives the number of females surviving long enough to go from $C_1$ to $C_2$, and as such the remaining $(1 - b_1)x_1^{(k-1)}$ females in $C_1$ die. Similarly,
\[
x_3^{(k)} = b_2 x_2^{(k-1)}
\]
gives the number of females surviving long enough to go from $C_2$ to $C_3$, and as such the remaining $(1 - b_2)x_2^{(k-1)}$ females in $C_2$ die, whereas the $x_3^{(k-1)}$ females in $C_3$ all die by the end of the $(k-1)$th time period. As such, the number of different DNA samples that will be available for cloning at the end of the $(k-1)$th time period will be
\[
(1 - b_1)x_1^{(k-1)} + (1 - b_2)x_2^{(k-1)} + x_3^{(k-1)}.
\]
Thus, since the re-birth afforded by cloning creates new members of the first age class together with
those created by natural means, we have
\[
x_1^{(k)} = \underbrace{a_1 x_1^{(k-1)} + a_2 x_2^{(k-1)} + a_3 x_3^{(k-1)}}_{\text{by birth}} + \underbrace{(1 - b_1)x_1^{(k-1)} + (1 - b_2)x_2^{(k-1)} + x_3^{(k-1)}}_{\text{by cloning}}
\]
\[
= (1 + a_1 - b_1)x_1^{(k-1)} + (1 + a_2 - b_2)x_2^{(k-1)} + (1 + a_3)x_3^{(k-1)}
\]
females in $C_1$ by the end of the $k$th time period. Consequently, the Leslie matrix $L$, i.e. the matrix
such that $x^{(k)} = Lx^{(k-1)}$, is given by
\[
L = \begin{pmatrix} 1 + a_1 - b_1 & 1 + a_2 - b_2 & 1 + a_3 \\ b_1 & 0 & 0 \\ 0 & b_2 & 0 \end{pmatrix},
\]
as required.
The Leslie matrix for the population in question is
\[
L = \begin{pmatrix} 0 & 0 & 3/2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix},
\]
and so we can see that
\[
L^2 = \begin{pmatrix} 0 & 0 & 3/2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 & 3/2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 3/2 & 0 \\ 0 & 0 & 3/2 \\ 1 & 0 & 0 \end{pmatrix},
\]
and,
\[
L^3 = \begin{pmatrix} 0 & 0 & 3/2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 3/2 & 0 \\ 0 & 0 & 3/2 \\ 1 & 0 & 0 \end{pmatrix} = \begin{pmatrix} 3/2 & 0 & 0 \\ 0 & 3/2 & 0 \\ 0 & 0 & 3/2 \end{pmatrix} = \frac{3}{2}\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.
\]
As such, we can see that after $j$ sixty year periods (i.e. where $k = 3j$) the population distribution
vector will be given by
\[
x^{(3j)} = \left(\frac{3}{2}\right)^j x^{(0)}.
\]
As such we can see that:
- Every sixty years the proportion of the population in each age class is the same as it was at the beginning.
- Every sixty years the number of females in each age class changes by a factor of $3/2$ (i.e. increases by 50%).
That is, overall, the population of females is increasing in a cyclic manner with a period of sixty years.
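The fact that $L^3 = \frac{3}{2}I$ can be confirmed with exact rational arithmetic. A short sketch, assuming (as in the solution) three age classes and the given rates:

```python
from fractions import Fraction

# Leslie matrix L = [[0, 0, 3/2], [1, 0, 0], [0, 1, 0]].
L = [[Fraction(0), Fraction(0), Fraction(3, 2)],
     [Fraction(1), Fraction(0), Fraction(0)],
     [Fraction(0), Fraction(1), Fraction(0)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

L3 = matmul(L, matmul(L, L))
# L^3 should be (3/2) I: every sixty years the age distribution repeats,
# scaled up by a factor of 3/2.
assert L3 == [[Fraction(3, 2) if i == j else Fraction(0) for j in range(3)]
              for i in range(3)]
```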
(b) The steady states of the coupled non-linear differential equations
\[
\dot{y}_1 = y_1 - 2y_1^2 - 3y_1y_2
\]
\[
\dot{y}_2 = 4y_2 - 2y_2^2 - y_1y_2
\]
are given by the solutions of the simultaneous equations
\[
y_1 - 2y_1^2 - 3y_1y_2 = 0
\]
\[
4y_2 - 2y_2^2 - y_1y_2 = 0,
\]
i.e. by $(y_1, y_2) = (0, 0), (0, 2), (1/2, 0)$ and $(-10, 7)$, where the latter steady state is found by solving
the linear simultaneous equations
\[
1 - 2y_1 - 3y_2 = 0
\]
\[
4 - 2y_2 - y_1 = 0
\]
for $y_1$ and $y_2$.
The steady state whose asymptotic stability we have to establish is clearly $(0, 2)$. In order to establish
it, we find the Jacobian matrix for this system of differential equations, i.e.
\[
DF[y] = \begin{pmatrix} 1 - 4y_1 - 3y_2 & -3y_1 \\ -y_2 & 4 - 4y_2 - y_1 \end{pmatrix},
\]
and evaluating this at the relevant steady state gives
\[
DF[(0, 2)] = \begin{pmatrix} -5 & 0 \\ -2 & -4 \end{pmatrix}.
\]
The eigenvalues of this matrix are $-4$ and $-5$ and, since these are both real and negative, this steady
state is asymptotically stable.
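A crude forward-Euler simulation illustrates the same conclusion: trajectories started near $(0, 2)$ are pulled back to it. This is only a sketch (the perturbed starting point and step size are our own choices):

```python
# The right-hand side of the system; at a steady state it returns (0, 0).
def f(y1, y2):
    return (y1 - 2*y1**2 - 3*y1*y2, 4*y2 - 2*y2**2 - y1*y2)

assert f(0, 2) == (0, 0)       # the steady state under study
assert f(-10, 7) == (0, 0)     # the steady state found by the linear system

# Forward-Euler from a small perturbation of (0, 2).
y1, y2 = 0.05, 2.1
h = 0.01
for _ in range(5000):
    d1, d2 = f(y1, y2)
    y1, y2 = y1 + h * d1, y2 + h * d2

# Consistent with the negative Jacobian eigenvalues, we return to (0, 2).
assert abs(y1) < 1e-3 and abs(y2 - 2.0) < 1e-3
```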
Question 3.
(a) For a non-empty subset $W$ of $V$ to be a subspace of $V$ we require that, for all vectors $x, y \in W$
and all scalars $\alpha \in \mathbb{R}$:
i. Closure under vector addition: $x + y \in W$.
ii. Closure under scalar multiplication: $\alpha x \in W$.
(b) The sum of two subspaces Y and Z of a vector space V , denoted by Y + Z, is defined to be the
set
\[
Y + Z = \{\, y + z \mid y \in Y \text{ and } z \in Z \,\}.
\]
To show that Y + Z is a subspace of V , we note that:
As $Y$ and $Z$ are both subspaces of $V$, the additive identity of $V$ (say, the vector $0$) is in both
$Y$ and $Z$. As such the vector $0 + 0 = 0 \in Y + Z$ and so this set is non-empty.
As such, referring to part (a), we consider two general vectors in $Y + Z$, say
\[
x_1 = y_1 + z_1 \ \text{ where } y_1 \in Y \text{ and } z_1 \in Z,
\]
\[
x_2 = y_2 + z_2 \ \text{ where } y_2 \in Y \text{ and } z_2 \in Z,
\]
and note that $x_1 + x_2 = (y_1 + y_2) + (z_1 + z_2) \in Y + Z$ and $\alpha x_1 = \alpha y_1 + \alpha z_1 \in Y + Z$, since $Y$ and $Z$ are themselves closed under vector addition and scalar multiplication. Thus $Y + Z$ is a subspace of $V$.
LTR: Suppose the sum $Y + Z$ is direct and let $u \in Y \cap Z$. Then, as $u \in Y$ and $-u \in Z$, we can write $0 = u + (-u)$ as well as $0 = 0 + 0$. But, as $Y + Z$ is direct, by uniqueness, we can write $0 \in Y + Z$ in only one way using vectors in $Y$
and vectors in $Z$, i.e. we must have $u = 0$. Consequently, we have $Y \cap Z = \{0\}$ (as required).
RTL: We are given $Y \cap Z = \{0\}$ and we note that any vector $u \in Y + Z$ can be written as
$u = y + z$ in terms of a vector $y \in Y$ and a vector $z \in Z$. In order to establish that this sum
is direct, we have to show that this representation of $u$ is unique.
To do this, suppose that there are two ways of writing such a vector in terms of a vector in $Y$
and a vector in $Z$, i.e.
\[
u = y + z = y' + z' \quad\text{for } y, y' \in Y \text{ and } z, z' \in Z.
\]
As such, we have
\[
\underbrace{y - y'}_{\in Y} = \underbrace{z' - z}_{\in Z},
\]
and so both sides of this equation lie in $Y \cap Z = \{0\}$. Hence $y = y'$ and $z = z'$, i.e. the representation is unique and so the sum is direct (as required).
Otherwise: taking a basis $\{y_1, \ldots, y_r\}$ of $Y$ and a basis $\{z_1, \ldots, z_s\}$ of $Z$ whose union is a basis of $V$, any vector $u \in Y \cap Z$ can be written as
\[
u = \sum_{i=1}^{r} \alpha_i y_i \quad\text{and}\quad u = \sum_{i=1}^{s} \beta_i z_i.
\]
But, since these two linear combinations of vectors represent the same vector, we can
equate them. So doing this and rearranging, we get
\[
\alpha_1 y_1 + \cdots + \alpha_r y_r - \beta_1 z_1 - \cdots - \beta_s z_s = 0.
\]
But, since this vector equation involves the vectors which form a basis of $V$, these vectors
are linearly independent and, as such, the only solution to this vector equation is the
trivial solution, i.e. $\alpha_1 = \cdots = \alpha_r = \beta_1 = \cdots = \beta_s = 0$. Consequently, the vector $u$
considered above must be $0$, and so $Y \cap Z = \{0\}$ (as required).
Hence, we are asked to establish that if $V = Y \oplus Z$, then $\dim(V) = \dim(Y) + \dim(Z)$. But, this is
obvious since using the bases given above we have:
\[
\dim(V) = r + s = \dim(Y) + \dim(Z),
\]
as required.
Question 4.
(a) Let $A$ be a real square matrix where $x$ is an eigenvector of $A$ with a corresponding eigenvalue of
$\lambda$, i.e. $Ax = \lambda x$. We are asked to prove that:
i. x is an eigenvector of the identity matrix I with a corresponding eigenvalue of one.
Proof: Clearly, as $x$ is an eigenvector, $x \neq 0$, and so we have $Ix = x = 1 \cdot x$. Thus, $x$ is an
eigenvector of the identity matrix $I$ with a corresponding eigenvalue of one, as required.
ii. $x$ is an eigenvector of $A + I$ with a corresponding eigenvalue of $\lambda + 1$.
Proof: Clearly, using the information about $A$ and $I$ given above, we have
\[
(A + I)x = Ax + Ix = \lambda x + 1x = (\lambda + 1)x.
\]
Thus, $x$ is an eigenvector of $A + I$ with a corresponding eigenvalue of $\lambda + 1$, as required.
iii. $x$ is an eigenvector of $A^2$ with a corresponding eigenvalue of $\lambda^2$.
Proof: Clearly, using the information about $A$ given above, we have
\[
A^2 x = A(\lambda x) = \lambda(Ax) = \lambda(\lambda x) = \lambda^2 x.
\]
Thus, $x$ is an eigenvector of $A^2$ with a corresponding eigenvalue of $\lambda^2$, as required.
As such, if the matrix $A$ has eigenvalues $\lambda$ and corresponding eigenvectors $x$, then for $k \in \mathbb{N}$, it should be
clear that the matrix $A^k$ has eigenvalues $\lambda^k$ and corresponding eigenvectors $x$.
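These three facts are easy to spot-check numerically. A sketch using numpy on an arbitrary symmetric matrix (chosen symmetric only so that its eigenpairs are real):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M + M.T                          # symmetric, so eigenpairs are real
lam, vecs = np.linalg.eigh(A)
x, l = vecs[:, 0], lam[0]            # one eigenpair: A x = l x

assert np.allclose(np.eye(4) @ x, 1.0 * x)             # (i): I has eigenvalue 1
assert np.allclose((A + np.eye(4)) @ x, (l + 1) * x)   # (ii): A + I has l + 1
assert np.allclose(A @ A @ x, l**2 * x)                # (iii): A^2 has l^2
assert np.allclose(np.linalg.matrix_power(A, 5) @ x, l**5 * x)  # A^k has l^k
```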
(b) We are given that the $n \times n$ invertible matrix $A$ has a spectral decomposition given by
\[
A = \sum_{i=1}^{n} \lambda_i x_i x_i^t,
\]
where, for $1 \leq i \leq n$, the $x_i$ are an orthonormal set of eigenvectors corresponding to the eigenvalues
$\lambda_i$ of $A$. So, using (iii) above, it should be clear that
\[
A^2 = \sum_{i=1}^{n} \lambda_i^2 x_i x_i^t,
\]
and, more generally, that
\[
A^k = \sum_{i=1}^{n} \lambda_i^k x_i x_i^t.
\]
(c) We are given that the eigenvalues of the matrix $A$ are all real and such that $-1 < \lambda_i \leq 1$ for
$1 \leq i \leq n$. As such, we can re-write the definition of the natural logarithm of the matrix $I + A$ as
follows:
\[
\ln(I + A) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{A^k}{k}
= \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \left[ \sum_{i=1}^{n} \lambda_i^k x_i x_i^t \right]
= \sum_{i=1}^{n} \left[ \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \lambda_i^k \right] x_i x_i^t,
\]
and so
\[
\ln(I + A) = \sum_{i=1}^{n} \ln(1 + \lambda_i)\, x_i x_i^t,
\]
where we have used the Taylor series for $\ln(1 + x)$ to establish that, for $1 \leq i \leq n$,
\[
\sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} \lambda_i^k = \ln(1 + \lambda_i),
\]
since it was stipulated that the eigenvalues are real and such that $-1 < \lambda_i \leq 1$.
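The identity can be tested numerically by comparing the spectral formula with partial sums of the power series. A sketch with numpy, using an arbitrary symmetric matrix whose eigenvalues are chosen inside $(-1, 1)$ so that the series converges quickly:

```python
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
lam = np.array([-0.4, 0.0, 0.5])                   # eigenvalues in (-1, 1)
A = Q @ np.diag(lam) @ Q.T

# Spectral formula: sum_i ln(1 + lam_i) x_i x_i^t.
spectral = sum(np.log(1 + l) * np.outer(Q[:, i], Q[:, i])
               for i, l in enumerate(lam))

# Power series: sum_k (-1)^{k+1} A^k / k, truncated.
series = np.zeros((3, 3))
term = np.eye(3)
for k in range(1, 200):
    term = term @ A                  # term = A^k
    series += (-1) ** (k + 1) * term / k

assert np.allclose(spectral, series)
```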
(d) We are given the matrix
\[
A = \frac{1}{4}\begin{pmatrix} -1 & 0 & 1 \\ 0 & 4 & 0 \\ 1 & 0 & -1 \end{pmatrix},
\]
and to find its spectral decomposition, we need to find an orthonormal set of eigenvectors and the
corresponding eigenvalues. For the eigenvalues, we solve the equation $\det(A - \lambda I) = 0$, i.e.
\[
\begin{vmatrix} -\tfrac{1}{4} - \lambda & 0 & \tfrac{1}{4} \\ 0 & 1 - \lambda & 0 \\ \tfrac{1}{4} & 0 & -\tfrac{1}{4} - \lambda \end{vmatrix} = 0,
\]
and so the eigenvalues are $-1/2$, $0$ and $1$. The eigenvectors that correspond to these eigenvalues are then
given by solving the matrix equation $(A - \lambda I)x = 0$ for each eigenvalue:
For $\lambda = -1/2$ we have:
\[
\frac{1}{4}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 6 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0 \implies
\begin{aligned} x + z &= 0 \\ 6y &= 0 \\ x + z &= 0, \end{aligned}
\]
and so we can take $[1, 0, -1]^t$ as the corresponding eigenvector. For $\lambda = 0$ we have:
\[
\frac{1}{4}\begin{pmatrix} -1 & 0 & 1 \\ 0 & 4 & 0 \\ 1 & 0 & -1 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0 \implies
\begin{aligned} -x + z &= 0 \\ 4y &= 0 \\ x - z &= 0, \end{aligned}
\]
and so we can take $[1, 0, 1]^t$ as the corresponding eigenvector. For $\lambda = 1$ we have:
\[
\frac{1}{4}\begin{pmatrix} -5 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & -5 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = 0 \implies
\begin{aligned} -5x + z &= 0 \\ 0 &= 0 \\ x - 5z &= 0, \end{aligned}
\]
and so we can take $[0, 1, 0]^t$ as the corresponding eigenvector. Thus, the set of eigenvectors
\[
\left\{ \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right\}
\]
becomes, on normalisation, the orthonormal set
\[
\left\{ \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \right\}
\]
(since the eigenvectors are already mutually orthogonal). Thus, since the $\lambda = 0$ term vanishes, the spectral decomposition of $A$ is
\[
A = -\frac{1}{2}\left[\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}\right]\left[\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 & -1 \end{pmatrix}\right] + 1\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \end{pmatrix}
= -\frac{1}{4}\begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
\]
Substituting all of this into the expression in (c), where the $\lambda = 0$ term again vanishes, we get
\[
\ln(I + A) = \ln\!\left(\frac{1}{2}\right)\cdot\frac{1}{2}\begin{pmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{pmatrix} + \ln 2\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}
= \frac{\ln 2}{2}\begin{pmatrix} -1 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & -1 \end{pmatrix},
\]
since $\ln(1/2) = -\ln 2$.
Question 5.
(a) A strong generalised inverse of an m n matrix A is any n m matrix AG which is such that:
AAG A = A.
AG AAG = AG .
AAG orthogonally projects Rm onto R(A).
AG A orthogonally projects Rn parallel to N (A).
(b) Given that the matrix equation Ax = b represents a set of m inconsistent equations in n variables,
we can see that any vector of the form
\[
x = A^G b + (I - A^GA)w,
\]
with w Rn is a solution to the least squares fit problem associated with this set of linear equations
since
\[
Ax = AA^Gb + (A - AA^GA)w = AA^Gb + (A - A)w = AA^Gb,
\]
using the first property of AG given in (a). As such, using the third property of AG given in (a), we
can see that Ax is equal to AAG b, the orthogonal projection of b onto R(A). But, by definition, a least
squares analysis of an inconsistent set of equations Ax = b minimises the distance (i.e. kAx bk)
between the vector b and R(A), which is exactly what this orthogonal projection does. Thus, the
given vector is such a solution.
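As a numerical illustration, the Moore-Penrose pseudo-inverse is one example of a strong generalised inverse, and numpy computes it directly. The sketch below uses the small inconsistent system from part (c) below to check the argument above:

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, -1.0])

AG = np.linalg.pinv(A)                   # Moore-Penrose pseudo-inverse
assert np.allclose(A @ AG @ A, A)        # A A^G A = A
assert np.allclose(AG @ A @ AG, AG)      # A^G A A^G = A^G

# A^G b achieves the least squares residual that numpy's lstsq finds.
x0 = AG @ b
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.isclose(np.linalg.norm(A @ x0 - b), np.linalg.norm(A @ x_np - b))

# Adding (I - A^G A)w, i.e. a vector in N(A), leaves the residual unchanged.
w = np.array([0.3, -1.0, 2.0])
x1 = x0 + (np.eye(3) - AG @ A) @ w
assert np.isclose(np.linalg.norm(A @ x1 - b), np.linalg.norm(A @ x0 - b))
```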
(c) To find a strong generalised inverse of the matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix},
\]
we use the given method. This is done by noting that the first column vector of $A$ is linearly dependent
on the other two since
\[
\begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},
\]
and so the matrix $A$ is of rank 2 (as the other two column vectors are linearly independent). Thus,
taking $k = 2$, the matrices $B$ and $C$ are given by:
\[
B = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} \quad\text{and}\quad C = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix},
\]
respectively. So to find the strong generalised inverse, we note that:
\[
B^tB = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} \implies (B^tB)^{-1} = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix},
\]
and,
\[
CC^t = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \implies (CC^t)^{-1} = \frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}.
\]
Thus, since
\[
(B^tB)^{-1}B^t = \frac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} = \frac{1}{2}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \end{pmatrix},
\]
and,
\[
C^t(CC^t)^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}\cdot\frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \\ -1 & 2 \end{pmatrix},
\]
we have,
\[
A^G = C^t(CC^t)^{-1}(B^tB)^{-1}B^t = \frac{1}{3}\begin{pmatrix} 1 & 1 \\ 2 & -1 \\ -1 & 2 \end{pmatrix}\cdot\frac{1}{2}\begin{pmatrix} 1 & 0 & 1 \\ 0 & 2 & 0 \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 1 & 2 & 1 \\ 2 & -2 & 2 \\ -1 & 4 & -1 \end{pmatrix},
\]
which is the sought after strong generalised inverse of $A$. So to find the set of all solutions to the
least squares fit problem associated with the system of linear equations given by
least squares fit problem associated with the system of linear equations given by
\[
x + y = 1
\]
\[
x + z = 2
\]
\[
x + y = -1,
\]
we note that these equations can be written in the form $Ax = b$ where
\[
A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \quad x = \begin{pmatrix} x \\ y \\ z \end{pmatrix} \quad\text{and}\quad b = \begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix}.
\]
Thus, as
\[
A^GA = \frac{1}{6}\begin{pmatrix} 1 & 2 & 1 \\ 2 & -2 & 2 \\ -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 4 & 2 & 2 \\ 2 & 4 & -2 \\ 2 & -2 & 4 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & -1 & 2 \end{pmatrix},
\]
we have
\[
A^GA - I = \frac{1}{3}\begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & -1 & 2 \end{pmatrix} - \frac{1}{3}\begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} -1 & 1 & 1 \\ 1 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix},
\]
and
\[
A^Gb = \frac{1}{6}\begin{pmatrix} 1 & 2 & 1 \\ 2 & -2 & 2 \\ -1 & 4 & -1 \end{pmatrix}\begin{pmatrix} 1 \\ 2 \\ -1 \end{pmatrix} = \frac{1}{6}\begin{pmatrix} 4 \\ -4 \\ 8 \end{pmatrix} = \frac{1}{3}\begin{pmatrix} 2 \\ -2 \\ 4 \end{pmatrix}.
\]
Consequently, the set of all solutions is given by
\[
x = \frac{1}{3}\begin{pmatrix} 2 \\ -2 \\ 4 \end{pmatrix} + \frac{1}{3}\begin{pmatrix} -1 & 1 & 1 \\ 1 & -1 & -1 \\ 1 & -1 & -1 \end{pmatrix}w,
\]
for any $w \in \mathbb{R}^3$.
We know from parts (a) and (b) that the $(A^GA - I)w$ part of our solutions will yield a vector in
$N(A)$ and so the solution set is the translate of $N(A)$ by $\frac{1}{3}[2, -2, 4]^t$. Further, by the rank-nullity
theorem, we have $n(A) = 3 - r(A) = 3 - 2 = 1$ and so $N(A)$ will be a line through the origin. Indeed,
the vector equation of the line representing the solution set is
\[
x = \frac{1}{3}\begin{pmatrix} 2 \\ -2 \\ 4 \end{pmatrix} + \lambda\begin{pmatrix} 1 \\ -1 \\ -1 \end{pmatrix},
\]
where $\lambda \in \mathbb{R}$.
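The computed $A^G$ and the solution set can be verified with exact rational arithmetic, a sketch of which follows:

```python
from fractions import Fraction as F

A  = [[1, 1, 0], [1, 0, 1], [1, 1, 0]]
AG = [[F(1, 6), F(2, 6), F(1, 6)],
      [F(2, 6), F(-2, 6), F(2, 6)],
      [F(-1, 6), F(4, 6), F(-1, 6)]]
b = [1, 2, -1]

def matmul(X, Y):
    return [[sum(F(X[i][k]) * F(Y[k][j]) for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# The first two defining properties of a strong generalised inverse.
assert matmul(matmul(A, AG), A) == [[F(v) for v in row] for row in A]
assert matmul(matmul(AG, A), AG) == AG

# A^G b = (1/3)[2, -2, 4]^t, the particular least squares solution.
AGb = [sum(AG[i][j] * b[j] for j in range(3)) for i in range(3)]
assert AGb == [F(2, 3), F(-2, 3), F(4, 3)]

# N(A) is spanned by [1, -1, -1]^t, the direction of the solution line.
v = [1, -1, -1]
assert all(sum(A[i][j] * v[j] for j in range(3)) == 0 for i in range(3))
```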
Question 6
(a) To test the set of functions $\{1, x, x^2\}$ for linear independence we find the Wronskian, as
instructed, i.e.
\[
W(x) = \begin{vmatrix} f_1 & f_2 & f_3 \\ f_1' & f_2' & f_3' \\ f_1'' & f_2'' & f_3'' \end{vmatrix} = \begin{vmatrix} 1 & x & x^2 \\ 0 & 1 & 2x \\ 0 & 0 & 2 \end{vmatrix} = 2,
\]
and since $W(x) \neq 0$, these functions are linearly independent.
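A sketch checking that $W(x) = 2$ at several points, with polynomials held as coefficient lists, differentiated exactly, and the $3 \times 3$ determinant expanded along its first row:

```python
def deriv(p):
    # Exact derivative of a polynomial given as [c0, c1, c2, ...].
    return [i * c for i, c in enumerate(p)][1:] or [0]

def evaluate(p, x):
    return sum(c * x ** i for i, c in enumerate(p))

def wronskian(x):
    fs = [[1], [0, 1], [0, 0, 1]]        # f1 = 1, f2 = x, f3 = x^2
    rows = []
    for _ in range(3):                   # rows: values of f, f', f''
        rows.append([evaluate(p, x) for p in fs])
        fs = [deriv(p) for p in fs]
    a, b, c = rows
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

assert all(wronskian(x) == 2 for x in (-2, 0, 1, 3))
```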
(b) We consider the inner product space formed by the vector space $P_3$ together with the inner product
\[
\langle f(x), g(x)\rangle = \int_{-1}^{1} f(x)g(x)\,dx.
\]
To find an orthonormal basis of the space Lin{1, x, x2 }, we use the Gram-Schmidt procedure:
We start with the vector $1$, and note that
\[
\|1\|^2 = \langle 1, 1\rangle = \int_{-1}^{1} 1\,dx = \big[x\big]_{-1}^{1} = 2.
\]
Consequently, we set $e_1 = 1/\sqrt{2}$. Next, we take
\[
u_2 = x - \langle x, e_1\rangle e_1 = x - \frac{1}{2}\langle x, 1\rangle.
\]
But, as
\[
\langle x, 1\rangle = \int_{-1}^{1} x\,dx = \left[\frac{x^2}{2}\right]_{-1}^{1} = 0,
\]
we have $u_2 = x$. Now, as
\[
\|x\|^2 = \langle x, x\rangle = \int_{-1}^{1} x^2\,dx = \left[\frac{x^3}{3}\right]_{-1}^{1} = \frac{2}{3},
\]
we set $e_2 = \sqrt{\tfrac{3}{2}}\,x$.
Finally, we take
\[
u_3 = x^2 - \langle x^2, e_2\rangle e_2 - \langle x^2, e_1\rangle e_1 = x^2 - \frac{3}{2}\langle x^2, x\rangle x - \frac{1}{2}\langle x^2, 1\rangle.
\]
But, as
\[
\langle x^2, x\rangle = \int_{-1}^{1} x^3\,dx = \left[\frac{x^4}{4}\right]_{-1}^{1} = 0,
\]
and,
\[
\langle x^2, 1\rangle = \int_{-1}^{1} x^2\,dx = \left[\frac{x^3}{3}\right]_{-1}^{1} = \frac{2}{3},
\]
we have $u_3 = x^2 - 1/3 = (3x^2 - 1)/3$. Now, as
\[
\|3x^2 - 1\|^2 = \langle 3x^2 - 1, 3x^2 - 1\rangle = \int_{-1}^{1} (3x^2 - 1)^2\,dx = \int_{-1}^{1} (9x^4 - 6x^2 + 1)\,dx = \left[\frac{9x^5}{5} - 2x^3 + x\right]_{-1}^{1} = 2\left(\frac{9}{5} - 2 + 1\right) = \frac{8}{5},
\]
we set $e_3 = \sqrt{\tfrac{5}{8}}\,(3x^2 - 1)$.
Thus, an orthonormal basis of $\mathrm{Lin}\{1, x, x^2\}$ is given by the set
\[
\left\{ \frac{1}{\sqrt{2}},\ \sqrt{\frac{3}{2}}\,x,\ \sqrt{\frac{5}{8}}\,(3x^2 - 1) \right\}.
\]
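The orthogonality and the squared norms computed above can be verified with exact rational arithmetic; to stay exact we check the unnormalised polynomials $1$, $x$ and $3x^2 - 1$ rather than the radicals themselves:

```python
from fractions import Fraction as F

# Polynomials as coefficient lists [c0, c1, c2, ...]; the inner product is
# <f, g> = integral of f g over [-1, 1], evaluated exactly.
def inner(f, g):
    prod = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    # integral of x^n over [-1, 1] is 2/(n+1) for even n, and 0 for odd n
    return sum(c * F(2, n + 1) for n, c in enumerate(prod) if n % 2 == 0)

e1, e2, e3 = [F(1)], [F(0), F(1)], [F(-1), F(0), F(3)]   # 1, x, 3x^2 - 1
assert inner(e1, e2) == 0 and inner(e1, e3) == 0 and inner(e2, e3) == 0
assert inner(e1, e1) == 2          # so 1/sqrt(2) has unit norm
assert inner(e2, e2) == F(2, 3)    # so sqrt(3/2) x has unit norm
assert inner(e3, e3) == F(8, 5)    # so sqrt(5/8)(3x^2 - 1) has unit norm
```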
(c) In general, writing a vector $x$ in terms of an orthonormal basis $\{e_1, \ldots, e_n\}$ of $V$, we have
\[
x = \underbrace{\sum_{i=1}^{k} \langle x, e_i\rangle e_i}_{\text{in } S} + \underbrace{\sum_{i=k+1}^{n} \langle x, e_i\rangle e_i}_{\text{in } S^{\perp}},
\]
and so the least squares approximation of $x$ in the subspace $S$ spanned by $\{e_1, \ldots, e_k\}$ is given by the orthogonal projection
\[
\sum_{i=1}^{k} \langle x, e_i\rangle e_i.
\]
So, to find the least squares approximation of $x^4$ in $\mathrm{Lin}\{1, x, x^2\}$, we note that
\[
\langle x^4, 3x^2 - 1\rangle = \int_{-1}^{1} x^4(3x^2 - 1)\,dx = \int_{-1}^{1} (3x^6 - x^4)\,dx = 2\left[\frac{3x^7}{7} - \frac{x^5}{5}\right]_{0}^{1} = 2\left(\frac{3}{7} - \frac{1}{5}\right) = 2\cdot\frac{8}{35} = \frac{16}{35}.
\]
Thus, since $\langle x^4, x\rangle = 0$ and $\langle x^4, 1\rangle = \int_{-1}^{1} x^4\,dx = \frac{2}{5}$, the least squares approximation is
\[
\frac{1}{2}\langle x^4, 1\rangle + \frac{3}{2}\langle x^4, x\rangle x + \frac{5}{8}\langle x^4, 3x^2 - 1\rangle(3x^2 - 1) = \frac{1}{5} + \frac{5}{8}\cdot\frac{16}{35}\,(3x^2 - 1) = \frac{3}{35}(10x^2 - 1).
\]
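The final answer can be checked exactly by projecting $x^4$ onto the orthogonal basis $\{1, x, 3x^2 - 1\}$ (dividing by the squared norms rather than normalising, so as to stay in rational arithmetic):

```python
from fractions import Fraction as F

def inner(f, g):
    # <f, g> = integral of f g over [-1, 1], with polynomials as
    # coefficient lists, evaluated exactly.
    prod = [F(0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            prod[i + j] += F(a) * F(b)
    return sum(c * F(2, n + 1) for n, c in enumerate(prod) if n % 2 == 0)

x4 = [F(0)] * 4 + [F(1)]                         # f(x) = x^4
basis = [[F(1)], [F(0), F(1)], [F(-1), F(0), F(3)]]   # 1, x, 3x^2 - 1

proj = [F(0)] * 3
for e in basis:
    coef = inner(x4, e) / inner(e, e)            # <x^4, e> / ||e||^2
    for n, c in enumerate(e):
        proj[n] += coef * c

# The claimed answer (3/35)(10x^2 - 1) = -3/35 + (30/35) x^2.
assert proj == [F(-3, 35), F(0), F(30, 35)]
```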