Applied Linear Algebra Final
1 Introduction
Linear Algebra is a branch of Mathematics that is applied in almost every field of life. With its help we can convert real-life problems into models that are easy to deal with, and we can work with objects in higher dimensions to extract more information. It helps in solving differential equations and in game development, machine learning, data mining, image processing, traffic control, electrical circuit problems, genetics, cryptography, various economic models and several other fields. The concept of a linear transformation lets one study the properties of different entities in different spaces, and a complicated geometrical problem can be studied by converting it into a simple algebraic problem. Hence Linear Algebra can be considered a bridge between Geometry and Algebra.
b = (b1 , b2 , . . . , bm )^T is said to be the right-hand side vector or nonhomogeneous vector.
The above system of equations can also be represented as a combination of the columns:
[a11 ; a21 ; . . . ; am1 ] x1 + [a12 ; a22 ; . . . ; am2 ] x2 + · · · + [a1n ; a2n ; . . . ; amn ] xn = [b1 ; b2 ; . . . ; bm ].
The values of x1 , x2 , · · · , xn for which the given equations are satisfied form a vector x = (x1 , x2 , . . . , xn )^T , which is a solution of this system of equations.
A system of equations is said to be singular if the corresponding coefficient matrix is singular. A matrix is singular iff its rows (or columns) are linearly related to each other, that is, some row (or column) can be obtained as a sum of scalar multiples of the other rows (or columns).
The system of equations
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2
is singular if a11 /a21 = a12 /a22 . Otherwise it is said to be nonsingular. A nonsingular system of equations has a unique solution.
A singular system of equations has either infinitely many solutions or no solution. Hence for the above system of equations:
1. if a11 /a21 ≠ a12 /a22 , then it has a unique solution.
2. if a11 /a21 = a12 /a22 ≠ b1 /b2 , then it has no solution.
3. if a11 /a21 = a12 /a22 = b1 /b2 , then it has infinitely many solutions.
The following figures give a pictorial representation of the three cases discussed here.
Figure 1: One solution
Figure 2: No solution
Figure 3: A whole line of solutions
THE GEOMETRY OF LINEAR EQUATIONS
To solve a system of two equations geometrically, two methods are used.

Row Picture Method:
1. Draw the straight lines represented by the two equations.
2. Find the point(s) of intersection, if they exist. The x-coordinate of the point of intersection gives the value of x and the y-coordinate gives the value of y.
3. If the equations represent the same line, then every point on the line is a solution.

Column Picture Method:
1. Write the given system of equations as a linear combination of column vectors equal to the r.h.s. vector:
[a11 ; a21 ] x + [a12 ; a22 ] y = [b1 ; b2 ].
2. Plot the column vectors P = (a11 , a21 ) and Q = (a12 , a22 ) and draw the lines OP and OQ through the origin O.
3. Plot the point B = (b1 , b2 ).
4. Draw a line from B to OP parallel to OQ and get the coordinates of the point of intersection (h1 , h2 ).
5. Also, draw a line from B to OQ parallel to OP and get the coordinates of the point of intersection (k1 , k2 ).
6. Find the values of x and y from the equations [a11 ; a21 ] x = [h1 ; h2 ] and [a12 ; a22 ] y = [k1 ; k2 ].

Example. Solve geometrically:
2x − y = 1
x + y = 5.

Figure 1: Row Picture Method
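The example above can also be checked numerically. The following sketch is not part of the original notes and assumes the numpy library: it solves the row picture (intersection of the two lines) and then verifies the column picture, namely that the same x, y combine the columns into the r.h.s. vector.

```python
import numpy as np

# Coefficient matrix and right-hand side for 2x - y = 1, x + y = 5
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
b = np.array([1.0, 5.0])

# Row picture: the intersection point of the two lines
x, y = np.linalg.solve(A, b)
print(x, y)                                   # -> 2.0 3.0

# Column picture: x*(2,1) + y*(-1,1) reproduces b = (1,5)
assert np.allclose(x * A[:, 0] + y * A[:, 1], b)
```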
PROBLEM SET 1.2
Q7. Explain why the given system is singular by finding a combination of the three equations
that adds up to 0=1. What value should replace the last 0 on the r.h.s. to allow the equations
to have solutions and what is one of the solutions?
u+v+w =2
u + 2v + 3w = 1
v + 2w = 0.
Ans. Here on the left-hand side Row1 + Row3 = Row2, but not on the right-hand side. Hence the system is singular and no solution exists. If the last 0 is replaced by −1, then the l.h.s. and r.h.s. both satisfy the condition Row1 + Row3 = Row2.
Hence a solution exists, and (u, v, w) = (3 + w, −1 − 2w, w) is a general solution. For every value of w it gives a solution of
u+v+w =2
u + 2v + 3w = 1
v + 2w = −1.
In particular, (4, −3, 1) is a solution of it.
Q 8. Under what condition on y1 , y2 , y3 do the points (0, y1 ), (1, y2 ), (2, y3 ) lie on a straight
line?
Ans. The points (0, y1 ), (1, y2 ), (2, y3 ) lie on a straight line means the slopes of the line joining
the points (0, y1 ) and (1, y2 ) and the points (1, y2 ) and (2, y3 ) are equal. That is,
(y2 − y1 )/(1 − 0) = (y3 − y2 )/(2 − 1),
which implies that y1 − 2y2 + y3 = 0. Hence for y1 − 2y2 + y3 = 0 the points (0, y1 ), (1, y2 ),
(2, y3 ) lie on a straight line.
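The collinearity condition just derived is easy to test mechanically. A small check (not in the original notes) in plain Python:

```python
def collinear(y1, y2, y3):
    """Points (0, y1), (1, y2), (2, y3) lie on a straight line
    iff y1 - 2*y2 + y3 == 0 (equal slopes on both segments)."""
    return y1 - 2 * y2 + y3 == 0

print(collinear(1, 2, 3))   # -> True  (slopes 1 and 1 agree)
print(collinear(0, 1, 3))   # -> False (0 - 2 + 3 = 1 != 0)
```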
has infinitely many solutions. For every value of w the vector (w, −2w, w) represents a solution of it. All solutions are represented by the set {(w, −2w, w) ∈ R3 : w ∈ R}.
Q2. Sketch these three lines and decide if the equations are solvable :
x + 2y = 2
x−y =2
y = 1.
What happens if all the right-hand sides are zero? Is there any nonzero choice of right hand
sides that allows the three lines to intersect at the same point ?
Ans. The first figure shows the three straight lines
x + 2y = 2
x−y =2
y = 1,
which have no point common to all three. Hence no solution exists.
The second figure gives the straight lines with r.h.s. vector 0, that is
x + 2y = 0
x−y =0
y = 0.
All three lines pass through the origin, so x = 0, y = 0 is a solution.
With r.h.s. vector (2, 2, 0), that is
x + 2y = 2
x−y =2
y = 0,
the point x = 2, y = 0 satisfies all the equations. Hence the system has a solution at (2, 0).
Figure 1: Solution does not exist
Figure 3: Nonzero solution exists
Gaussian Elimination
Consider a system of linear equations.
2u + v + w = 5
4u − 6v = −2
−2u + 7v + 2w = 9.
To eliminate u, subtract 2 times the first equation from the second, and subtract (−1) times the first equation from the third:
2u + v + w = 5
−8v − 2w = −12
8v + 3w = 14.
Now to eliminate v, subtract (−1) times the second equation from the third to get
2u + v + w = 5
−8v − 2w = −12
w = 2.
These values 2, −8, 1 are called pivots: the coefficient of u in the first equation, the coefficient of v in the second equation and the coefficient of w in the third equation of the triangular form are called the 1st, 2nd and 3rd pivots respectively.
In matrix form,
[2 1 1 | 5; 4 −6 0 | −2; −2 7 2 | 9] → [2 1 1 | 5; 0 −8 −2 | −12; 0 8 3 | 14] → [2 1 1 | 5; 0 −8 −2 | −12; 0 0 1 | 2].
Now back substitution yields the complete solution in the opposite order, beginning with the last unknown. The last equation 1w = 2 gives w = 2. Then the second equation −8v − 2w = −12 gives v = 1. Finally, the first equation 2u + v + w = 5 gives u = 1. Hence (u, v, w) = (1, 1, 2) is the required solution.
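The elimination and back substitution just performed by hand can be sketched in code. This is a plain-Python illustration added here as a check (it is not from the notes, and it assumes nonzero pivots, i.e. no row exchanges):

```python
def gaussian_solve(A, b):
    """Forward elimination to upper triangular form, then back
    substitution. A sketch that assumes every pivot is nonzero."""
    n = len(A)
    A = [row[:] for row in A]              # work on copies
    b = b[:]
    for k in range(n):                     # eliminate below pivot A[k][k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]          # multiplier for row i
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n                          # back substitution, last unknown first
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, -6.0, 0.0], [-2.0, 7.0, 2.0]]
b = [5.0, -2.0, 9.0]
print(gaussian_solve(A, b))                # -> [1.0, 1.0, 2.0]
```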
If a zero appears in a pivot position, elimination has to stop, either temporarily or permanently. If the zero pivot can be replaced by a nonzero value by a row exchange, then the breakdown of the elimination process is temporary; otherwise it is permanent.
Consider an example of the nonsingular case:
u+v+w =−
2u + 2v + 5w = −
4u + 6v + 8w = −
=⇒
u+v+w =−
3w = −
2v + 4w = −
=⇒ (after exchanging the second and third equations)
u+v+w =−
2v + 4w = −
3w = −
Consider an example of the singular case:
u+v+w =−
2u + 2v + 5w = −
4u + 4v + 8w = −
=⇒
u+v+w =−
3w = −
4w = −.
In this case, no exchange of equations can avoid the zero in the second pivot position. Hence the breakdown is permanent.
A matrix is singular if one of its rows (columns) can be written as a linear combination of the other rows (columns).
The breakdown is temporary for a nonsingular system of equations (having a full set of pivots). The breakdown is permanent if the system of linear equations is singular.
3x + 2y = 10
6x + 4y =?.
Ans. Here
3/6 = 2/4 = 10/? =⇒ ? = 20.
Hence, if the r.h.s. ? ≠ 20 then no solution exists. For r.h.s. ? = 20 the system has infinitely many solutions: every point on the straight line 3x + 2y = 10 is a solution. In particular, x = 2, y = 2 and x = 1, y = 3.5 are two solutions.
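The ratio test used above can be coded directly. This helper (an added sketch, not part of the notes) cross-multiplies to avoid division and classifies any 2 by 2 system:

```python
def classify_2x2(a11, a12, a21, a22, b1, b2):
    """Classify a 2x2 system using the ratio test from the notes,
    cross-multiplied so zero coefficients cause no division errors."""
    if a11 * a22 != a12 * a21:
        return "unique solution"
    # Singular case: check whether b keeps the same proportion
    if a11 * b2 == a21 * b1:
        return "infinitely many solutions"
    return "no solution"

print(classify_2x2(3, 2, 6, 4, 10, 20))  # -> infinitely many solutions
print(classify_2x2(3, 2, 6, 4, 10, 21))  # -> no solution
print(classify_2x2(2, -1, 1, 1, 1, 5))   # -> unique solution
```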
Q. Choose a coefficient b that makes the system
2x + by = 16
4x + 8y = g
singular. Then choose a r.h.s. g that makes it solvable. Find two solutions in that singular case.
Ans. 2/4 = b/8 =⇒ b = 4 makes the system singular; then g = 2 × 16 = 32 makes it solvable. Two solutions are (8, 0) and (0, 4).
2x + 3y = 1
10x + 9y = 1.
After this elimination step, write down the upper triangular system and darken the two pivots.
Ans. Here l = 10/2 = 5. Hence 5 times equation 1 is subtracted from equation 2, and the coefficient matrix becomes
[2 3; 10 9] ∼ [2 3; 0 −6].
The upper triangular system is
2x + 3y = 1
−6y = −4,
and the pivots are 2 and −6.
Q7. What test on b1 and b2 decides where these two equations allow a solution? How many
solutions will they have? Draw the column pictures.
3x − 2y = b1
6x − 4y = b2 .
Ans. The given system of equations can be written as [3; 6] x + [−2; −4] y = [b1 ; b2 ].
Note 3/6 = −2/−4, so the system is singular; the two equations allow a solution only when 2b1 = b2 , that is, when the point (b1 , b2 ) lies on the straight line through (3, 6), (0, 0) and (−2, −4). In that case the system has infinitely many solutions.
Problem set 1.3 continued...
Q 8. For which numbers a does elimination breakdown (a) Permanently (b) Temporarily ?
ax + 3y = −3
4x + 6y = 6.
Ans. Here
a/4 = 3/6 =⇒ a = 2
makes the system singular, so for a = 2 there is a permanent breakdown of the elimination process:
[2 3; 4 6] ∼ [2 3; 0 0].
The second pivot is missing and cannot be replaced by any nonzero value by a row exchange. Hence the breakdown is permanent.
For a = 0 we have 0/4 ≠ 3/6, so the system is nonsingular, but the first pivot is missing and the elimination process breaks down. By interchanging the 1st and 2nd rows we have [4 6; 0 3]. Both pivots 4 and 3 are nonzero, hence the breakdown is temporary.
Q 14. Which number q makes this system singular, and which right-hand side t gives it infinitely many solutions? Find the solution that has z = 1.
x + 4y − 2z = 1
x + 7y − 6z = 6
3y + qz = t.
Ans. Elimination (R2 ← R2 − R1 , then R3 ← R3 − R2 ) gives
[1 4 −2 | 1; 1 7 −6 | 6; 0 3 q | t] ∼ [1 4 −2 | 1; 0 3 −4 | 5; 0 0 q + 4 | t − 5].
The system is singular when q = −4, and for q = −4 it has infinitely many solutions when t = 5. With z = 1: 3y − 4 = 5 gives y = 3, and x + 12 − 2 = 1 gives x = −9, so (x, y, z) = (−9, 3, 1).
Q 16. If rows 1 and 2 are the same, how far can you get with elimination? Which pivot is
missing?
2x − y + z = 0
2x − y + z = 0
4x + y + z = 2
Q 16. If columns 1 and 2 are the same, how far can you get with elimination? Which pivot
is missing?
2x + 2y + z = 0
4x + 4y + z = 0
6x + 6y + z = 2
Matrix notation and Matrix multiplication
Matrix addition is compatible if and only if the matrices A and B have the same number of rows and the same number of columns.
For example, C = A × B = [1 2; 5 7] × [2 3; 4 16] = [10 35; 38 127], where c21 = 5 × 2 + 7 × 4 = 38.
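The entry-by-entry rule (row of A times column of B) can be verified with a short numpy check, added here as an illustration rather than part of the notes:

```python
import numpy as np

A = np.array([[1, 2],
              [5, 7]])
B = np.array([[2, 3],
              [4, 16]])

C = A @ B   # matrix product

# Entry c21 (row 2, column 1) is row 2 of A times column 1 of B
c21 = A[1, 0] * B[0, 0] + A[1, 1] * B[1, 0]   # 5*2 + 7*4 = 38
assert c21 == C[1, 0] == 38
assert C[0, 0] == 10 and C[0, 1] == 35 and C[1, 1] == 127
```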
1. Ri + kRj → Ri means k times the jth row is added to the ith row.
Consider A = [1 2 3; 4 5 6; 7 8 9].
Then EA = [1 2 3; −31 −35 −39; 7 8 9], F A = [4 5 6; 1 2 3; 7 8 9], GA = [1 2 3; 12 15 18; 7 8 9].
Q. Give 3 by 3 examples (not just the zero matrix) of
Q. For the third one, draw the column vectors (2, 1) and (0, 3). Multiplying by (1, 1) just adds the vectors (do it graphically).
Lecture-6
Given: A = [2 1 1; 4 −6 0; −2 7 2]
= [2 1 1; 0 −8 −2; −2 7 2]   R2 ← R2 − 2R1   (multiplier 2)
= [2 1 1; 0 −8 −2; 0 8 3]   R3 ← R3 + R1   (multiplier −1)
= [2 1 1; 0 −8 −2; 0 0 1]   R3 ← R3 + R2   (multiplier −1)
= U.
The elementary matrices are
E21 = [1 0 0; −2 1 0; 0 0 1], E31 = [1 0 0; 0 1 0; 1 0 1], E32 = [1 0 0; 0 1 0; 0 1 1].
E32 E31 E21 A = U
=⇒ M A = U, where M = E32 E31 E21 = [1 0 0; −2 1 0; −1 1 1].
M^-1 = (E32 E31 E21 )^-1 = E21^-1 E31^-1 E32^-1 = [1 0 0; 2 1 0; 0 0 1][1 0 0; 0 1 0; −1 0 1][1 0 0; 0 1 0; 0 −1 1] = [1 0 0; 2 1 0; −1 −1 1] = L.
M A = U =⇒ A = M^-1 U =⇒ A = LU, which is known as the LU factorization of the matrix A.
Factoring the pivots 2, −8, 1 out of U gives
L = [1 0 0; 2 1 0; −1 −1 1], D = [2 0 0; 0 −8 0; 0 0 1], U = [1 1/2 1/2; 0 1 1/4; 0 0 1],
and LDU = A, which is known as the LDU factorization of the matrix A.
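The LU and LDU factorizations above can be reproduced with a few lines of numpy. This is an added sketch (no row exchanges, so it assumes nonzero pivots, which holds for this A):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, -6.0, 0.0],
              [-2.0, 7.0, 2.0]])

n = len(A)
L = np.eye(n)
U = A.copy()
for k in range(n):                 # store each multiplier in L, eliminate in U
    for i in range(k + 1, n):
        L[i, k] = U[i, k] / U[k, k]
        U[i, :] -= L[i, k] * U[k, :]

D = np.diag(np.diag(U))            # pivots 2, -8, 1 on the diagonal
U1 = np.linalg.inv(D) @ U          # unit upper triangular factor

assert np.allclose(L @ U, A)       # A = LU
assert np.allclose(L @ D @ U1, A)  # A = LDU
print(np.diag(D))                  # -> [ 2. -8.  1.]
```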
No.2 When is an upper triangular matrix nonsingular?
Ans: An upper triangular matrix is nonsingular iff none of its diagonal elements is zero, i.e. it has a full set of pivots.
No.7 Factor A into LU, and write down the upper triangular system U x = c which appears after elimination, for
Ax = [2 3 3; 0 5 7; 6 9 8][u; v; w] = [2; 2; 5].
Ans. Ax = b
=⇒ [2 3 3; 0 5 7; 6 9 8][u; v; w] = [2; 2; 5]
=⇒ [2 3 3; 0 5 7; 0 0 −1][u; v; w] = [2; 2; −1]   (R3 ← R3 − 3R1 )
=⇒ U x = c, which is the required upper triangular system. Here L = [1 0 0; 0 1 0; 3 0 1] and A = LU.
No.21 What three elimination matrices E21 , E31 , E32 put A into upper triangular form E32 E31 E21 A = U ? Multiply by E32^-1, E31^-1 and E21^-1 to factor A into LU, where L = E21^-1 E31^-1 E32^-1. Find L and U for
A = [1 0 1; 2 2 2; 3 4 5].
Ans. Given: A = [1 0 1; 2 2 2; 3 4 5]
= [1 0 1; 0 2 0; 3 4 5]   R2 ← R2 − 2R1   (multiplier 2)
= [1 0 1; 0 2 0; 0 4 2]   R3 ← R3 − 3R1   (multiplier 3)
= [1 0 1; 0 2 0; 0 0 2]   R3 ← R3 − 2R2   (multiplier 2)
= U.
E21 = [1 0 0; −2 1 0; 0 0 1], E31 = [1 0 0; 0 1 0; −3 0 1], E32 = [1 0 0; 0 1 0; 0 −2 1]
E32 E31 E21 A = U
L = E21^-1 E31^-1 E32^-1 = [1 0 0; 2 1 0; 3 2 1]
LU = A.
Lecture-7
I = [1 0; 0 1], P = [0 1; 1 0],
I = [1 0 0; 0 1 0; 0 0 1], P21 = [0 1 0; 1 0 0; 0 0 1], P31 = [0 0 1; 0 1 0; 1 0 0], P32 = [1 0 0; 0 0 1; 0 1 0]
P21 P32 = [0 0 1; 1 0 0; 0 1 0], P32 P21 = [0 1 0; 0 0 1; 1 0 0]
so P21 P32 ≠ P32 P21 : permutation matrices need not commute.
Points to Remember:
3. There are n! permutation matrices of order n.
Example.
A = [2 3; 4 5], P = [0 1; 1 0]
P A = [0 1; 1 0][2 3; 4 5] = [4 5; 2 3]
Example.
A = [1 2 3; 4 5 6; 7 8 9], P21 = [0 1 0; 1 0 0; 0 0 1], P32 = [1 0 0; 0 0 1; 0 1 0],
P21 A = [0 1 0; 1 0 0; 0 0 1][1 2 3; 4 5 6; 7 8 9] = [4 5 6; 1 2 3; 7 8 9]
P32 A = [1 0 0; 0 0 1; 0 1 0][1 2 3; 4 5 6; 7 8 9] = [1 2 3; 7 8 9; 4 5 6]
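A quick numpy illustration of these facts, added as a check (not in the notes): a permutation matrix is just the identity with its rows reordered, left-multiplication permutes rows, right-multiplication permutes columns, and the products do not commute.

```python
import numpy as np

I = np.eye(3)
P21 = I[[1, 0, 2]]       # identity with rows 1 and 2 exchanged
P32 = I[[0, 2, 1]]       # identity with rows 2 and 3 exchanged

A = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

assert (P21 @ A == A[[1, 0, 2]]).all()        # left: exchanges ROWS of A
assert (A @ P21 == A[:, [1, 0, 2]]).all()     # right: exchanges COLUMNS of A
assert not np.array_equal(P21 @ P32, P32 @ P21)   # order matters
```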
Q. Find a permutation matrix P so that P A is upper triangular, and permutation matrices P1 , P2 so that P1 A P2 is lower triangular (multiplying on the left exchanges the rows of A; multiplying on the right exchanges the columns of A), for
A = [0 0 6; 1 2 3; 0 4 5].
Ans. Given:
A = [0 0 6; 1 2 3; 0 4 5]
P = P32 P21 = [1 0 0; 0 0 1; 0 1 0][0 1 0; 1 0 0; 0 0 1] = [0 1 0; 0 0 1; 1 0 0]
P A = [1 2 3; 0 4 5; 0 0 6], which is upper triangular.
P1 = P32 = [1 0 0; 0 0 1; 0 1 0], P2 = P31 = [0 0 1; 0 1 0; 1 0 0]
P1 A P2 = [6 0 0; 5 4 0; 3 2 1], which is lower triangular.
Tridiagonal matrix: A square matrix is said to be tridiagonal if all its elements are zero except on the main diagonal and the two adjacent diagonals.
No.28 Find the LU and LDU factorizations of the following tridiagonal matrix:
A = [1 1 0; 1 2 1; 0 1 2].
Ans. Given:
A = [1 1 0; 1 2 1; 0 1 2]
= [1 1 0; 0 1 1; 0 1 2]   R2 ← R2 − R1   (multiplier 1)
= [1 1 0; 0 1 1; 0 0 1]   R3 ← R3 − R2   (multiplier 1)
= U
LU factorization:
L = [1 0 0; 1 1 0; 0 1 1], U = [1 1 0; 0 1 1; 0 0 1]
LU = A
LDU factorization:
L = [1 0 0; 1 1 0; 0 1 1], D = [1 0 0; 0 1 0; 0 0 1], U = [1 1 0; 0 1 1; 0 0 1]
LDU = A (here every pivot equals 1, so D = I).
Lecture-8
Example. Let A = [1 2; 3 4]
⇒ |A| = −2 ≠ 0, so A^-1 exists.
Matrix of minors of A: M = [4 3; 2 1]
Matrix of cofactors of A: C = [4 −3; −2 1]
Adjoint of A: Adj A = C^T = [4 −2; −3 1]
Therefore,
A^-1 = Adj A / |A| = C^T / |A| = (−1/2)[4 −2; −3 1].
Example.
Let A = [a b; c d]
⇒ |A| = ad − bc; if |A| ≠ 0 then A^-1 will exist.
Matrix of minors of A: M = [d c; b a]
Matrix of cofactors of A: C = [d −c; −b a]
Adjoint of A: Adj A = C^T = [d −b; −c a]
Therefore,
A^-1 = Adj A / |A| = C^T / |A| = (1/(ad − bc))[d −b; −c a].
Example. A = [2 0; 0 3] =⇒ A^-1 = [1/2 0; 0 1/3].
Example.
Let A = [2 −1 0; −1 2 −1; 0 −1 2]
⇒ |A| = 6 − 2 = 4 ≠ 0.
Adjoint of A: Adj A = C^T = [3 2 1; 2 4 2; 1 2 3]
Therefore,
A^-1 = Adj A / |A| = C^T / |A| = (1/4)[3 2 1; 2 4 2; 1 2 3].
Example. A = [2 0 0; 0 3 0; 0 0 4] =⇒ A^-1 = [1/2 0 0; 0 1/3 0; 0 0 1/4].
Points to Remember:
1. The inverse of a square matrix, if it exists, is unique, and it is a two-sided inverse.
2. AA^-1 = I = A^-1 A.
3. (AB)^-1 = B^-1 A^-1, (ABC)^-1 = C^-1 B^-1 A^-1.
4. Ax = b =⇒ x = A^-1 b.
Gauss-Jordan Method: (the calculation of A^-1)
Example. Let A = [1 2; 3 4]
⇒ |A| = −2 ≠ 0
⇒ A^-1 will exist.
Now consider the augmented matrix [A | I], i.e.
[1 2 | 1 0; 3 4 | 0 1]
= [1 2 | 1 0; 0 −2 | −3 1]   R2 ← R2 − 3R1
= [1 0 | −2 1; 0 −2 | −3 1]   R1 ← R1 + R2
= [1 0 | −2 1; 0 1 | 3/2 −1/2]   R2 ← (−1/2)R2
= [I | A^-1 ],
where A^-1 = [−2 1; 3/2 −1/2] = (−1/2)[4 −2; −3 1].
Example.
Let A = [1 1 1; 1 2 2; 1 2 3]
⇒ |A| = 1(6 − 4) − 1(3 − 2) + 1(2 − 2) = 2 − 1 + 0 = 1 ≠ 0.
[A | I] = [1 1 1 | 1 0 0; 1 2 2 | 0 1 0; 1 2 3 | 0 0 1]
= [1 1 1 | 1 0 0; 0 1 1 | −1 1 0; 1 2 3 | 0 0 1]   R2 ← R2 − R1
= [1 1 1 | 1 0 0; 0 1 1 | −1 1 0; 0 1 2 | −1 0 1]   R3 ← R3 − R1
= [1 1 1 | 1 0 0; 0 1 1 | −1 1 0; 0 0 1 | 0 −1 1]   R3 ← R3 − R2
= [1 1 0 | 1 1 −1; 0 1 1 | −1 1 0; 0 0 1 | 0 −1 1]   R1 ← R1 − R3
= [1 1 0 | 1 1 −1; 0 1 0 | −1 2 −1; 0 0 1 | 0 −1 1]   R2 ← R2 − R3
= [1 0 0 | 2 −1 0; 0 1 0 | −1 2 −1; 0 0 1 | 0 −1 1]   R1 ← R1 − R2
= [I | A^-1 ],
where A^-1 = [2 −1 0; −1 2 −1; 0 −1 1].
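The Gauss-Jordan procedure is easy to automate: augment A with the identity and clear each pivot column. The following numpy sketch is added as an illustration and assumes A is square and nonsingular with no pivoting needed:

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row reduce the augmented block [A | I] to [I | A^-1].
    A sketch with no row exchanges (assumes nonzero pivots)."""
    n = len(A)
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])
    for k in range(n):
        M[k] /= M[k, k]                  # scale so the pivot equals 1
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]   # clear the rest of column k
    return M[:, n:]

A = [[1, 1, 1],
     [1, 2, 2],
     [1, 2, 3]]
Ainv = gauss_jordan_inverse(A)
assert np.allclose(Ainv @ np.array(A), np.eye(3))
print(Ainv)    # matches the hand computation above
```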
No.6 (a) If A is invertible and AB = AC, prove that B = C.
Proof. Let A be invertible and AB = AC. Then
A^-1 (AB) = A^-1 (AC)
=⇒ (A^-1 A)B = (A^-1 A)C
=⇒ IB = IC
=⇒ B = C.
(b) If A = [1 0; 0 0], find an example with AB = AC but B ≠ C.
Sol. Given: A = [1 0; 0 0].
Let B = [2 3; 4 5] and C = [2 3; 6 7].
AB = [2 3; 0 0] and AC = [2 3; 0 0].
AB = AC but B ≠ C.
No.10 Use the Gauss-Jordan method to find the inverse of A1 = [1 0 0; 1 1 1; 0 0 1].
Sol. Given A1 = [1 0 0; 1 1 1; 0 0 1]
|A1 | = 1 ≠ 0, so A1^-1 will exist.
[A1 | I] = [1 0 0 | 1 0 0; 1 1 1 | 0 1 0; 0 0 1 | 0 0 1]
= [1 0 0 | 1 0 0; 0 1 1 | −1 1 0; 0 0 1 | 0 0 1]   R2 ← R2 − R1
= [1 0 0 | 1 0 0; 0 1 0 | −1 1 −1; 0 0 1 | 0 0 1]   R2 ← R2 − R3
= [I | A1^-1 ],
where A1^-1 = [1 0 0; −1 1 −1; 0 0 1].
Lecture-9
A = [1 2 3; 4 5 6; 7 8 9] =⇒ A^T = [1 4 7; 2 5 8; 3 6 9]
Symmetric matrix: A square matrix A is said to be symmetric if A^T = A.
Ex. A = [1 2; 2 4], B = [1 2 3; 2 5 6; 3 6 9]
Skew-symmetric matrix: A square matrix A is said to be skew-symmetric if A^T = −A.
Ex. A = [0 2; −2 0], B = [0 2 3; −2 0 6; −3 −6 0]
Points to Remember:
1. (A + B)^T = A^T + B^T.
2. (AB)^T = B^T A^T, (ABC)^T = C^T B^T A^T.
3. (A^T)^T = A.
4. (A^-1)^T = (A^T)^-1.
If A is any real square matrix, then A + A^T is always symmetric and A − A^T is always skew-symmetric.
Let B = A + AT . Then
B T = (A + AT )T = AT + (AT )T = AT + A = B
So, B is symmetric.
Let C = A − AT . Then
C T = (A − AT )T = AT − (AT )T = AT − A = −(A − AT ) = −C
So, C is skew-symmetric.
Now, A = (A + A^T)/2 + (A − A^T)/2 = (symmetric) + (skew-symmetric).
So, any real square matrix can be expressed as the sum of a symmetric and a skew-symmetric matrix.
Example. Express the following matrix as the sum of a symmetric and a skew-symmetric matrix:
A = [1 2; 3 4].
Sol. A = [1 2; 3 4], A^T = [1 3; 2 4]
A = (A + A^T)/2 + (A − A^T)/2
=⇒ [1 2; 3 4] = [1 5/2; 5/2 4] (symmetric) + [0 −1/2; 1/2 0] (skew-symmetric).
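The symmetric/skew-symmetric split can be verified in a few lines of numpy. This check is an addition to the notes:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

S = (A + A.T) / 2        # symmetric part
K = (A - A.T) / 2        # skew-symmetric part

assert np.allclose(S, S.T)      # S^T = S
assert np.allclose(K, -K.T)     # K^T = -K
assert np.allclose(S + K, A)    # A is their sum
```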
Given: A = [1 3 5; 3 12 18; 5 18 30]
= [1 3 5; 0 3 3; 5 18 30]   R2 ← R2 − 3R1   (multiplier 3)
= [1 3 5; 0 3 3; 0 3 5]   R3 ← R3 − 5R1   (multiplier 5)
= [1 3 5; 0 3 3; 0 0 2]   R3 ← R3 − R2   (multiplier 1)
= U
LU factorization:
L = [1 0 0; 3 1 0; 5 1 1], U = [1 3 5; 0 3 3; 0 0 2]
LU = A
LDU factorization:
L = [1 0 0; 3 1 0; 5 1 1], D = [1 0 0; 0 3 0; 0 0 2], U = [1 3 5; 0 1 1; 0 0 1]
LDU = A
Here U = L^T, so A = LDL^T : in this example, since A^T = A, the factorization LDU = A becomes LDL^T = A.
No.11 If B is square, show that A = B + B^T is always symmetric and K = B − B^T is always skew-symmetric. Find these matrices A and K when B = [1 3; 1 1], and write B as the sum of a symmetric matrix and a skew-symmetric matrix.
Sol. Let B be a square matrix, and let A = B + B^T and K = B − B^T .
A^T = (B + B^T)^T = B^T + (B^T)^T = B^T + B = A
=⇒ A is symmetric.
Again, K^T = (B − B^T)^T = B^T − (B^T)^T = B^T − B = −(B − B^T) = −K
=⇒ K is skew-symmetric.
A = B + B^T = [1 3; 1 1] + [1 1; 3 1] = [2 4; 4 2]
K = B − B^T = [1 3; 1 1] − [1 1; 3 1] = [0 2; −2 0]
B = (B + B^T)/2 + (B − B^T)/2
=⇒ [1 3; 1 1] = [1 2; 2 1] (symmetric) + [0 1; −1 0] (skew-symmetric).
4
Q. Give 2 by 2 examples where (a) A + B is not invertible although A and B are invertible; (b) A + B is invertible although A and B are not invertible; (c) all of A, B and A + B are invertible.
Sol. (a) A = [1 0; 0 1], B = [0 1; 1 0], A + B = [1 1; 1 1].
A + B is not invertible although A and B are invertible.
(b) A = [1 0; 0 0], B = [0 0; 0 1], A + B = [1 0; 0 1].
A + B is invertible although A and B are not invertible.
(c) A = [1 0; 0 1], B = [2 0; 0 2], A + B = [3 0; 0 3].
All of A, B and A + B are invertible.
No.41 True or false:
(a) A 4 by 4 matrix with a row of zeros is not invertible.
Ans. True, since such a matrix is singular.
(b) A matrix with 1s down the main diagonal is invertible.
Ans. False, since such a matrix can still be singular.
(c) If A is invertible then A^-1 is invertible.
Ans. True, since A^-1 is nonsingular if A is nonsingular.
(d) If A^T is invertible then A is invertible.
Ans. True, since A is nonsingular if A^T is nonsingular.
No.42 For which three numbers c is this matrix not invertible, and why not?
A = [2 c c; c c c; 8 7 c]
Sol. For c = 0, 2, 7 the matrix is not invertible, as for these three values of c the determinant of the matrix is zero, i.e. the matrix is singular:
c = 0 =⇒ a zero column (or zero row).
c = 2 =⇒ identical rows.
c = 7 =⇒ identical columns.
Lecture-10
Of the defining properties of a vector space, the first ones listed come under vector addition and the last four come under scalar multiplication, so the following is an alternate (closure-based) definition of a vector space.
A nonempty set V is said to be a vector space if it satisfies the following
properties:
1. Vector Addition
i.e. x ∈ V, y ∈ V =⇒ x + y ∈ V
2. Scalar Multiplication
i.e. c ∈ R, x ∈ V =⇒ cx ∈ V
Examples of vector spaces: R, R2 , R3 , . . . , Rn .
Example.
Show that R2 is a vector space.
Proof. R2 contains infinitely many elements and the elements are pairs like
(0, 0), (1, 2), (−1, 2), . . . . . . . . .
So, R2 is nonempty.
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ R2 .
x + y = (x1 + y1 , x2 + y2 ) ∈ R2 as x1 + y1 ∈ R and x2 + y2 ∈ R.
2. Scalar multiplication:
Let c ∈ R and (x1 , x2 ) ∈ R2 .
cx = (cx1 , cx2 ) ∈ R2 as cx1 ∈ R and cx2 ∈ R.
So, R2 is a vector space.
Example. Verify whether the 1st quadrant of R2 is a vector space or not.
Proof: Let the set V be the 1st quadrant of R2 .
Then V = {(x1 , x2 ) ∈ R2 : x1 ≥ 0, x2 ≥ 0}, which contains (0, 0), (1, 1), (1, 2), (2, 3), . . .
So, V is nonempty.
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ V.
Then x1 ≥ 0, x2 ≥ 0, y1 ≥ 0, y2 ≥ 0.
x + y = (x1 + y1 , x2 + y2 ) ∈ V as x1 + y1 ≥ 0 and x2 + y2 ≥ 0.
2. Scalar multiplication:
Let c = −2 ∈ R and x = (1, 2) ∈ V .
cx = (−2, −4) ∉ V.
So, V does not satisfy the scalar multiplication property.
Hence, V, i.e. the 1st quadrant, is not a vector space.
Example. Verify whether the 1st and 3rd quadrants of R2 , taken together, form a vector space or not.
Proof: Let the set V be the 1st and 3rd quadrants of R2 .
Then V contains (1, 1), (1, 2), (−1, −1), (−1, −2), . . .
So, V is nonempty.
1. Vector addition:
Let x = (−1, −2) and y = (2, 1) ∈ V.
x + y = (1, −1) ∉ V.
So, V does not satisfy the vector addition property.
Hence, V, i.e. the 1st and 3rd quadrants taken together, is not a vector space.
Subspace: A subspace of a vector space is a nonempty subset that satisfies
the requirements for a vector space.
Example. Show that y = x line is a subspace of the vector space R2 .
Proof. Let the set V be y = x line.
Then V = {(x1 , x2 ) ∈ R2 : x1 = x2 } = {(0, 0), (1, 1), (2, 2), (−1, −1), (−2, −2), . . .}.
So, V is a nonempty subset of R2 .
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ V.
Then x1 = x2 and y1 = y2 .
x + y = (x1 + y1 , x2 + y2 ) ∈ V as x1 + y1 = x2 + y2 .
2. Scalar multiplication:
Let c ∈ R and (x1 , x2 ) ∈ V.
cx = (cx1 , cx2 ) ∈ V as cx1 = cx2 .
So, V, i.e. the y = x line, is a subspace of R2 .
Vector Space: R2
Subspaces:
1. R2
2. Any line passing through origin.
3. Origin i.e. {(0,0)}.
Vector Space: R3
Subspaces:
1. R3
2. Any plane passing through the origin.
3. Any line passing through the origin.
4. The origin, i.e. {(0, 0, 0)}.
Points to remember:
1. Every vector space is a subspace of itself.
2. A subspace is a vector space in its own right.
3. Every vector space is the largest subspace of itself, and the origin is the smallest subspace.
No.2 Which of the following subsets of R3 are actually subspaces?
(a) The plane of vectors (b1 , b2 , b3 ) with first component b1 = 0.
(b) The plane of vectors b with b1 = 1.
Sol. (a).
Let V = {the plane of vectors (b1 , b2 , b3 ) with first component b1 = 0}
= {(0, 0, 0), (0, 1, 0), (0, 1, 2), . . .}
So, V is a nonempty subset of R3 .
1. Vector addition:
Let b = (b1 , b2 , b3 ) and c = (c1 , c2 , c3 ) ∈ V. Then b1 = 0 and c1 = 0.
b + c = (b1 + c1 , b2 + c2 , b3 + c3 ) ∈ V as b1 + c1 = 0.
2. Scalar multiplication:
Let α ∈ R and b = (b1 , b2 , b3 ) ∈ V. Then b1 = 0 =⇒ αb1 = 0.
αb = (αb1 , αb2 , αb3 ) ∈ V as αb1 = 0.
So, V is a subspace of R3 .
Sol. (b).
Let V = {the plane of vectors b = (b1 , b2 , b3 ) with first component b1 = 1}
= {(1, 0, 0), (1, 1, 0), (1, 1, 2), . . .}
So, V is a nonempty subset of R3 .
1. Vector addition:
Let b = (b1 , b2 , b3 ) and c = (c1 , c2 , c3 ) ∈ V. Then b1 = 1 and c1 = 1 =⇒ b1 + c1 = 2.
b + c = (b1 + c1 , b2 + c2 , b3 + c3 ) ∉ V as b1 + c1 ≠ 1.
So, V does not satisfy vector addition property.
Hence, V is not a subspace of R3 .
Column Space and Null Space of a Matrix
The column space of a matrix A is the set of all linear combinations of the columns of A, and is denoted by C(A).
Column space of a matrix: Let C1 , C2 , . . . , Cn be the 1st, 2nd, . . . , nth columns of the matrix Am×n . Then C(A) = {a1 C1 + a2 C2 + . . . + an Cn : a1 , a2 , . . . , an ∈ R}, where R is the set of real numbers.
Note: The system Ax = b is solvable iff the vector b can be expressed as a combination of the columns of A, i.e. iff b is in the column space of A.
Note: C(A) is a subspace of Rm .
Exercise 2.1.5: Find the column space and null space of the matrices
(a):
A = [1 −1; 0 0]
Both columns are multiples of (1, 0)^T , so C(A) = {a(1, 0)^T : a ∈ R}.
For the null space solve Ax = 0:
Aug. matrix = [A | 0] = [1 −1 0; 0 0 0]
Since x2 is a free variable, assume x2 = k, where k is a real number. Then x1 − k = 0, so x1 = k.
N(A) = {(k, k)^T : k ∈ R} = {k(1, 1)^T : k ∈ R}
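The special solution found by hand can be confirmed numerically. An added numpy check (not from the notes): every multiple of (1, 1) is killed by A.

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [0.0,  0.0]])

# x2 is free; setting x2 = k gives x1 = k, so (1, 1) spans N(A)
special = np.array([1.0, 1.0])
assert np.allclose(A @ special, 0)

for k in (2.0, -3.5):                 # any multiple also solves Ax = 0
    assert np.allclose(A @ (k * special), 0)
```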
(b)
B = [0 0 3; 1 2 3] → (R1 ↔ R2 ) → [1 2 3; 0 0 3] = U
Since the first and third columns are the pivot columns of U,
C(B) = {a(0, 1)^T + b(3, 3)^T : a, b ∈ R} (in fact all of R2 ).
For the null space solve Bx = 0:
Aug. matrix = [B | 0] = [0 0 3 0; 1 2 3 0] → (R1 ↔ R2 ) → [1 2 3 0; 0 0 3 0]
Since x2 is a free variable, set x2 = k, k ∈ R. Then 3x3 = 0 gives x3 = 0, and x1 + 2k = 0 gives x1 = −2k.
N(B) = {k(−2, 1, 0)^T : k ∈ R}
(c)
C = [0 0 0; 0 0 0]
C(C) = {a(0, 0)^T + b(0, 0)^T + c(0, 0)^T : a, b, c ∈ R} = {(0, 0)^T }
For the null space, x1 , x2 , x3 are all free variables, so x1 = k1 , x2 = k2 , x3 = k3 with k1 , k2 , k3 ∈ R.
N(C) = {(k1 , k2 , k3 )^T : k1 , k2 , k3 ∈ R} = R3
Exercise 2.1.24: For which right-hand sides (find a condition on b1 , b2 , b3 ) are these systems solvable?
(a) [1 4 2; 2 8 4; −1 −4 −2][x1 ; x2 ; x3 ] = [b1 ; b2 ; b3 ]
Aug. matrix = [A|b] = [1 4 2 b1 ; 2 8 4 b2 ; −1 −4 −2 b3 ]   R2 − 2R1 → R2 , R1 + R3 → R3   ∼ [1 4 2 b1 ; 0 0 0 b2 − 2b1 ; 0 0 0 b1 + b3 ]
i.e
1x1 + 4x2 + 2x3 = b1
0x1 + 0x2 + 0x3 = b2 − 2b1
0x1 + 0x2 + 0x3 = b1 + b3
A solution exists only if b2 − 2b1 = 0 and b1 + b3 = 0,
⇒ b2 = 2b1 and b3 = −b1 .
(b) [1 4; 2 9; −1 −4][x1 ; x2 ] = [b1 ; b2 ; b3 ]
Aug. matrix = [A|b] = [1 4 b1 ; 2 9 b2 ; −1 −4 b3 ]   R2 − 2R1 → R2 , R1 + R3 → R3   ∼ [1 4 b1 ; 0 1 b2 − 2b1 ; 0 0 b1 + b3 ]
i.e
1x1 + 4x2 = b1
0x1 + 1x2 = b2 − 2b1
0x1 + 0x2 = b1 + b3
A solution exists only if b3 + b1 = 0.
2.2 - Solving Ax=0 And Ax=b
x1 − 2x2 + x3 = 0
2x2 − 8x3 = 8 (1)
−4x1 + 5x2 + 9x3 = −9
Solution: The elimination procedure is shown here with and without matrix notation and the
results are placed side by side for comparison:
x1 − 2x2 + x3 = 0
2x2 − 8x3 = 8
−4x1 + 5x2 + 9x3 = −9
with augmented matrix [1 −2 1 | 0; 0 2 −8 | 8; −4 5 9 | −9].
Keep x1 in the first equation and eliminate it from the other equations. To do so, add 4 times
equation 1 to equation 3. After some practice, this type of calculation is usually performed
mentally:
4.[equation 1] : 4x1 − 8x2 + 4x3 = 0
+[equation 3] : −4x1 + 5x2 + 9x3 = −9
new equation 3 : −3x2 + 13x3 = −9
The result of this calculation is written in place of the original third equation:
x1 − 2x2 + x3 = 0
2x2 − 8x3 = 8
−3x2 + 13x3 = −9
with augmented matrix [1 −2 1 | 0; 0 2 −8 | 8; 0 −3 13 | −9].
Now, multiply equation 2 by 1/2 in order to obtain 1 as the coefficient for x2 . (This calculation will simplify the arithmetic in the next step.)
x1 − 2x2 + x3 = 0
x2 − 4x3 = 4
−3x2 + 13x3 = −9
with augmented matrix [1 −2 1 | 0; 0 1 −4 | 4; 0 −3 13 | −9].
Use the x2 in equation 2 to eliminate the −3x2 in equation 3. The "mental" computation is
3 · [equation 2] : 3x2 − 12x3 = 12
+[equation 3] : −3x2 + 13x3 = −9
new equation 3 : x3 = 3.
Eventually, you want to eliminate the −2x2 term from equation 1, but it is more efficient to use the x3 in equation 3 first, to eliminate the −4x3 and +x3 terms in equations 2 and 1. The two "mental" calculations are
4 · [equation 3] added to equation 2 : x2 = 4 + 12 = 16,
(−1) · [equation 3] added to equation 1 : x1 − 2x2 = −3.
Now, having cleaned out the column above the x3 in equation 3, move back to the x2 in equation 2 and use it to eliminate the −2x2 above it. Because of the previous work with x3 , there is now no arithmetic involving x3 terms. Add 2 times equation 2 to equation 1 and obtain the system:
x1 = 29
x2 = 16
x3 = 3
with matrix [1 0 0 | 29; 0 1 0 | 16; 0 0 1 | 3].
Ex 2 - A system of linear equations is a list of linear equations with the same unknowns. In
particular, a system of 2 linear equations in 2 unknowns x1 , x2 can be put in the standard form
a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2      (2)
In matrix form,
[a11 a12 ; a21 a22 ][x1 ; x2 ] = [b1 ; b2 ]      (3)
with augmented matrix
[a11 a12 b1 ; a21 a22 b2 ]      (4)
A = [a11 a12 b1 ; a21 a22 b2 ], C = [a11 a12 ; a21 a22 ]
where A is called the augmented matrix and C is called the coefficient matrix of the system.
(1/a11 )R1 → R1 ∼ [1 a12 /a11 b1 /a11 ; a21 a22 b2 ]
R2 − a21 R1 → R2 ∼ [1 a12 /a11 b1 /a11 ; 0 a22 − a21 a12 /a11 b2 − a21 b1 /a11 ]      (5)
That is,
x1 + (a12 /a11 )x2 = b1 /a11
0x1 + (a22 − a21 a12 /a11 )x2 = b2 − a21 b1 /a11       (6)
Case 1: if a22 − a21 a12 /a11 ≠ 0 then x2 = (b2 − a21 b1 /a11 )/(a22 − a21 a12 /a11 ), and x1 can be calculated from x1 + (a12 /a11 )x2 = b1 /a11 .
Conclusion: unique solution of (2).
Case 2: if a22 − a21 a12 /a11 = 0 and b2 − a21 b1 /a11 = 0, then the second equation of (6) becomes 0x1 + 0x2 = 0.
Conclusion: infinitely many solutions of (2).
Case 3: if a22 − a21 a12 /a11 = 0 and b2 − a21 b1 /a11 ≠ 0, then the second equation of (6) becomes 0x1 + 0x2 = b2 − a21 b1 /a11 , i.e. 0 = b2 − a21 b1 /a11 , which is not true.
Conclusion: no solution of (2) exists.
Suppose A is a matrix with rows R1 , R2 , ..., Rm . The following operations on A are called elementary row operations.
[E1 ] (Row Interchange): Interchange rows Ri and Rj . This may be written as "Interchange Ri and Rj " or "Ri ↔ Rj ".
[E2 ] (Row Scaling): Replace row Ri by a nonzero multiple kRi of itself. This may be written as "Replace Ri by kRi (k ≠ 0)" or "kRi → Ri ".
[E3 ] (Row Addition): Replace row Rj by the sum of a multiple kRi of a row Ri and itself. This may be written as "Replace Rj by kRi + Rj " or "kRi + Rj → Rj ".
Sometimes (say to avoid fractions when all the given scalars are integers) we may apply [E2 ] and [E3 ] in one step; that is, we may apply the following operation:
[E] Replace Rj by the sum of a multiple kRi of a row Ri and a nonzero multiple k′Rj of itself. This may be written as "Replace Rj by kRi + k′Rj (k′ ≠ 0)" or "kRi + k′Rj → Rj ".
Echelon Matrices U
A matrix U is called an echelon matrix, or is said to be in echelon form, if the following two conditions hold:
(1) All zero rows, if any, are at the bottom of the matrix.
(2) Each leading nonzero entry in a row is to the right of the leading nonzero entry in the preceding row.
A = [0 (2) 3 4 5 9 0 7; 0 0 0 (3) 4 1 2 5; 0 0 0 0 0 (5) 7 2; 0 0 0 0 0 0 (8) 6; 0 0 0 0 0 0 0 0]
(pivots circled in the original; marked here with parentheses).
NOTE 1 - The major difference between an echelon matrix and a matrix in row reduced form is that in an echelon matrix there must be zeros below the pivots [properties (1) and (2)], but in a matrix in row reduced form each pivot must also equal 1 [property (3)] and there must also be zeros above the pivots [property (4)].
Ex-4 The following are echelon matrices whose pivots have been circled (marked here with parentheses):
[(2) 3 2 0 4 5 −6; 0 0 (1) 1 −3 2 0; 0 0 0 0 0 (6) 2; 0 0 0 0 0 0 0],  [(1) 2 3; 0 0 (1); 0 0 0],  [0 (1) 3 0 0 4; 0 0 0 (1) 0 −3; 0 0 0 0 (1) 2]
The third matrix is also an example of a matrix in row reduced form. The second matrix is not in row reduced form, since it does not satisfy property (4); that is, there is a nonzero entry above the second pivot in the third column. The first matrix is not in row reduced form because it satisfies neither property (3) nor property (4); that is, some pivots are not equal to 1 and there are nonzero entries above the pivots.
Note: If Ax = 0 has more unknowns than equations (n > m), it has at least one special solution: there are more solutions than the trivial x = 0.
Note: x_complete = x_particular + x_nullspace .
Note: If there are n columns in a matrix A and there are r pivots, then there are r pivot variables and n − r free variables. This important number r is called the rank of the matrix.
Rank of a matrix: The rank of a matrix A, written rank(A), is equal to the maximum number of linearly independent columns of A
= the number of pivot columns in the echelon form of A
= the maximum number of linearly independent rows of A
= the dimension of the column space of A
= the dimension of the row space of A.
Ex 6: Find Rank of A
A = [1 2 3 5; 0 0 2 2; 0 0 0 0]
Sol. The echelon form of A is A itself, and the 1st and 3rd columns are the pivot columns.
So the rank of A is 2.
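The rank computed by counting pivots agrees with what numerical software reports. An added numpy check:

```python
import numpy as np

A = np.array([[1, 2, 3, 5],
              [0, 0, 2, 2],
              [0, 0, 0, 0]])

# Two pivots (columns 1 and 3 of the echelon form), so the rank is 2
assert np.linalg.matrix_rank(A) == 2
# Equivalently, Ax = 0 has n - r = 4 - 2 = 2 free variables
```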
Method for solving a system of linear equations
Method-1
Ex 7 - Consider a system of linear equations
1x1 + 2x2 + 3x3 + 5x4 = b1
2x1 + 4x2 + 8x3 + 12x4 = b2      (7)
3x1 + 6x2 + 7x3 + 13x4 = b3
Sol.
Step 1: Reduce Ax = b to U x = c
i.e. Reduce Augmented Matrix [A b] to Augmented Matrix [U c]
[A b] = [1 2 3 5 b1 ; 2 4 8 12 b2 ; 3 6 7 13 b3 ]   R2 − 2R1 → R2 , R3 − 3R1 → R3
∼ [1 2 3 5 b1 ; 0 0 2 2 b2 − 2b1 ; 0 0 −2 −2 b3 − 3b1 ]   R3 + R2 → R3
∼ [1 2 3 5 b1 ; 0 0 2 2 b2 − 2b1 ; 0 0 0 0 b3 + b2 − 5b1 ] = [U c]      (8)
(8) means
1x1 + 2x2 + 3x3 + 5x4 = b1
0x1 + 0x2 + 2x3 + 2x4 = b2 − 2b1
0x1 + 0x2 + 0x3 + 0x4 = b3 + b2 − 5b1
The third equation holds only if b3 + b2 − 5b1 = 0. It means that if b3 + b2 − 5b1 = 0 then the system of equations has infinitely many solutions, and if b3 + b2 − 5b1 ≠ 0 then the system of equations has no solution.
Here
U = [1 2 3 5; 0 0 2 2; 0 0 0 0], c = [b1 ; b2 − 2b1 ; b3 + b2 − 5b1 ]
Step 2:
Find the special solutions: U x = 0. (For Step 3, take particularly b1 = 0, b2 = 6, b3 = −6, so that b3 + b2 − 5b1 = 0 and the system is solvable.)
[U 0] = [1 2 3 5 0; 0 0 2 2 0; 0 0 0 0 0]      (9)
Here x2 and x4 are the free variables.
Let x2 = a, x4 = b, where a, b belong to the set of real numbers.
Now we can rewrite (9) as
1x1 + 2x2 + 3x3 + 5x4 = 0 —–(*)
0x1 + 0x2 + 2x3 + 2x4 = 0 —–(**)
Now put the value of x4 in (**):
2x3 + 2b = 0, i.e. x3 = −b.
Now put the values of x2 , x3 , x4 in (*):
x1 + 2a − 3b + 5b = 0, i.e. x1 + 2a + 2b = 0, i.e. x1 = −2a − 2b.
Special solution: xn = [−2a − 2b; a; −b; b] = a[−2; 1; 0; 0] + b[−2; 0; −1; 1], where a, b belong to the set of real numbers.
Step 3 :
Find a particular solution xp : U xp = c, with all free variables = 0.
So put x2 = a = 0, x4 = b = 0.
          1 2 3 5 | 0
[U c] =   0 0 2 2 | 6          (10)
          0 0 0 0 | 0
xp = (−9, 0, 3, 0)
Step 4 :
Complete solution x = xp + xn :
x = (−9, 0, 3, 0) + a(−2, 1, 0, 0) + b(−2, 0, −1, 1) = (−2a − 2b − 9, a, −b + 3, b),
where a, b belong to the set of real numbers.
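The complete solution found in Steps 1-4 can be verified directly; a small numpy check (numpy assumed available), using the right-hand side (0, 6, −6) from the example:

```python
import numpy as np

# Check the complete solution x = xp + a*s1 + b*s2 against the original system.
A = np.array([[1, 2, 3, 5],
              [2, 4, 8, 12],
              [3, 6, 7, 13]])
b = np.array([0, 6, -6])
xp = np.array([-9, 0, 3, 0])          # particular solution
s1 = np.array([-2, 1, 0, 0])          # special solution for x2 = 1
s2 = np.array([-2, 0, -1, 1])         # special solution for x4 = 1

assert (A @ xp == b).all()                          # particular solution hits b
assert (A @ s1 == 0).all() and (A @ s2 == 0).all()  # special solutions lie in N(A)
for a_, b_ in [(2.0, -3.0), (-1.5, 4.0)]:
    assert np.allclose(A @ (xp + a_ * s1 + b_ * s2), b)  # whole family solves Ax = b
```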
Method-2
Ex - Consider the system of linear equations
1x1 + 2x2 + 3x3 + 5x4 = 0
2x1 + 4x2 + 8x3 + 12x4 = 6 (11)
3x1 + 6x2 + 7x3 + 13x4 = −6
Step 1: Reduce Ax = b to U x = c
i.e. Reduce Augmented Matrix [A b] to Augmented Matrix [U c]
        1 2  3  5 |  0
[A b] = 2 4  8 12 |  6      R2 − 2R1 → R2 , R3 − 3R1 → R3
        3 6  7 13 | −6

    1 2  3  5 |  0
∼   0 0  2  2 |  6          R3 + R2 → R3
    0 0 −2 −2 | −6

    1 2 3 5 | 0
∼   0 0 2 2 | 6             (12)
    0 0 0 0 | 0

= [U c]
(12)means
1x1 + 2x2 + 3x3 + 5x4 = 0
0x1 + 0x2 + 2x3 + 2x4 = 6
0x1 + 0x2 + 0x3 + 0x4 = 0
Here
    1 2 3 5        0
U = 0 0 2 2 , c =  6
    0 0 0 0        0
Step 2:
Here x2 and x4 are free variables
Let x2 = a,x4 = b where a,b belongs to Set of Real Number
Now we can rewrite (12) as
1x1 + 2x2 + 3x3 + 5x4 = 0—–(*)
0x1 + 0x2 + 2x3 + 2x4 = 6—–(**)
Now put the value of x4 in (**):
2x3 + 2b = 6
i.e. x3 = 3 − b
Now put the values of x2 , x3 , x4 in (*):
x1 + 2a + 3(3 − b) + 5b = 0
i.e. x1 + 2a + 9 + 2b = 0
i.e. x1 = −9 − 2a − 2b
Complete solution x = xp + xn :
x = (−9, 0, 3, 0) + a(−2, 1, 0, 0) + b(−2, 0, −1, 1) = (−2a − 2b − 9, a, 3 − b, b),
where a, b belong to the set of real numbers.
Exercise 2.2.1 : Find the value of c that makes it possible to solve Ax = b, and solve it:
u + v + 2w = 2
2u + 3v − w = 5
3u + 4v + w = c
                        1 1  2 | 2
Solution: Aug. matrix = 2 3 −1 | 5      R2 − 2R1 → R2 , R3 − 3R1 → R3
                        3 4  1 | c

    1 1  2 | 2
∼   0 1 −5 | 1          R3 − R2 → R3
    0 1 −5 | c − 6

    1 1  2 | 2
∼   0 1 −5 | 1
    0 0  0 | c − 7

A solution exists only if c − 7 = 0, i.e. c = 7. So assume w = k ∈ R.
v − 5w = 1
v − 5k = 1
v = 1 + 5k
u + v + 2w = 2
u + (1 + 5k) + 2k = 2
u = 1 − 7k
(u, v, w) = (1 − 7k, 1 + 5k, k) = (1, 1, 0) + k(−7, 5, 1), where k ∈ R.
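A quick numerical sanity check of this family of solutions, with c = 7 (numpy assumed available):

```python
import numpy as np

# System of Exercise 2.2.1 with c = 7, the value that makes it solvable.
A = np.array([[1, 1, 2],
              [2, 3, -1],
              [3, 4, 1]])
b = np.array([2, 5, 7])
particular = np.array([1, 1, 0])      # (u, v, w) at k = 0
direction = np.array([-7, 5, 1])      # multiplies the free parameter k

assert (A @ particular == b).all()
assert (A @ direction == 0).all()     # direction lies in the nullspace
for k in (-2.0, 0.5, 3.0):
    assert np.allclose(A @ (particular + k * direction), b)
```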
Exercise 2.2.4 Write the complete solution x = xp + xn to these systems (as in equation (4)):
1 2 2   u     1          1 2 2   u     1
2 4 5 · v  =  4   ,      2 4 4 · v  =  4
        w                        w

                           1 2 2 | 1                        1 2 2 | 1
Solution (1) Aug. matrix = 2 4 5 | 4     R2 − 2R1 → R2  ∼   0 0 1 | 2
Since v is free variable so take v = k , k ∈ R
w=2
u + 2v + 2w = 1
u + 2k + 4 = 1
u = −3 − 2k
(u, v, w) = (−3 − 2k, k, 2) = (−3, 0, 2) + k(−2, 1, 0), where k ∈ R.
                      1 2 2 | 1                      1 2 2 | 1
(2) Aug. matrix =     2 4 4 | 4     R2 − 2R1 → R2 ∼  0 0 0 | 2
i.e.
u + 2v + 2w = 1
0u + 0v + 0w = 2
i.e. 0 = 2 which is not true.
So there is no solution.
Exercise 2.2.5 Reduce A and B to echelon form to find their ranks. Which variables are free?
    1 2 0 1        1 2 3
A = 0 1 1 0 ,  B = 4 5 6
    1 2 0 1        7 8 9
Find the special solutions to Ax = 0 and Bx = 0. Find all solutions.
                  1 2 0 1                      1 2 0 1
Solution: (1) A = 0 1 1 0    R3 − R1 → R3  ∼   0 1 1 0   = U.
                  1 2 0 1                      0 0 0 0
Since the first two columns of U are linearly independent, ρ(A) = 2.
Now for solving Ax = 0.
                      1 2 0 1 | 0                     1 2 0 1 | 0
Aug. matrix = [A|0] = 0 1 1 0 | 0    R3 − R1 → R3 ∼   0 1 1 0 | 0
                      1 2 0 1 | 0                     0 0 0 0 | 0
Since x3 and x4 are free variables, assume x3 = k1 , x4 = k2 , where k1 , k2 ∈ R.
x2 + x3 = 0 ⇒ x2 + k1 = 0 ⇒ x2 = −k1
x1 + 2x2 + x4 = 0
x1 − 2k1 + k2 = 0
x1 = 2k1 − k2
(x1 , x2 , x3 , x4 ) = (2k1 − k2 , −k1 , k1 , k2 ) = k1 (2, −1, 1, 0) + k2 (−1, 0, 0, 1),
where k1 , k2 ∈ R.
This is the general solution.
Hence the special solutions are (2, −1, 1, 0) and (−1, 0, 0, 1).
        1 2 3                                          1  2  3
(2) B = 4 5 6     R2 − 4R1 → R2 , R3 − 7R1 → R3   ∼    0 −3 −6     R3 − 2R2 → R3
        7 8 9                                          0 −6 −12

    1  2  3
∼   0 −3 −6   = U
    0  0  0
Since U has two pivot columns, ρ(B) = 2.
For solving Bx = 0:
                      1 2 3 | 0                                        1  2  3 | 0
Aug. matrix = [B|0] = 4 5 6 | 0   R2 − 4R1 → R2 , R3 − 7R1 → R3   ∼    0 −3 −6 | 0
                      7 8 9 | 0                                        0 −6 −12| 0

     1  2  3 | 0
∼    0 −3 −6 | 0      (R3 − 2R2 → R3)
     0  0  0 | 0
Since x3 is a free variable, put x3 = k, k ∈ R.
−3x2 − 6x3 = 0
x2 = −2k
x1 + 2x2 + 3x3 = 0
x1 − 4k + 3k = 0
x1 − k = 0 ⇒ x1 = k.
General solution: (x1 , x2 , x3 ) = (k, −2k, k) = k(1, −2, 1), k ∈ R.
The special solution is (1, −2, 1).
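The special solution for B can likewise be checked numerically (numpy assumed available):

```python
import numpy as np

B = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
special = np.array([1, -2, 1])

assert np.linalg.matrix_rank(B) == 2   # rho(B) = 2, so one free variable
assert (B @ special == 0).all()        # (1, -2, 1) spans N(B)
```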
PROBLEM SET 2.2 CONTINUED...
Exercise 2.2.13: (a) Find the special solutions to U x = 0. Reduce U to R and repeat.
      1 2 3 4   x1      0
U x = 0 0 1 2 · x2   =  0
      0 0 0 0   x3      0
                x4
Here x2 and x4 are free variables; let x2 = a, x4 = b, a, b ∈ R.
x3 + 2x4 = 0
x3 = −2b
x1 + 2x2 + 3x3 + 4x4 = 0
x1 + 2a − 6b + 4b = 0
x1 + 2a − 2b = 0
x1 = −2a + 2b.
(x1 , x2 , x3 , x4 ) = (−2a + 2b, a, −2b, b) = a(−2, 1, 0, 0) + b(2, 0, −2, 1), a, b ∈ R.
          1 2 3 4                       1 2 0 −2
Since U = 0 0 1 2    R1 − 3R2 → R1  ∼   0 0 1  2   = R
          0 0 0 0                       0 0 0  0
(b) U x = b
                       1 2 3 4 | a
Aug. matrix = [U |b] = 0 0 1 2 | b
                       0 0 0 0 | 0
Since x2 , x4 are free variables, assume x2 = c, x4 = d, c, d ∈ R.
x3 + 2x4 = b
x3 + 2d = b ⇒ x3 = b − 2d
x1 + 2x2 + 3x3 + 4x4 = a
x1 + 2c + 3(b − 2d) + 4d = a
x1 + 2c + 3b − 6d + 4d = a
x1 + 2c + 3b − 2d = a
x1 = a − 3b − 2c + 2d
(x1 , x2 , x3 , x4 ) = (a − 3b − 2c + 2d, c, b − 2d, d)
= (a − 3b, 0, b, 0) + c(−2, 1, 0, 0) + d(2, 0, −2, 1), c, d ∈ R.
Exercise 2.2.34: What conditions on b1 , b2 , b3 , b4 make each system solvable? Solve for x:
1 2            b1         1 2  3            b1
2 4    x1      b2         2 4  6    x1      b2
2 5  · x2   =  b3   and   2 5  7  · x2   =  b3
3 9            b4         3 9 12    x3      b4
                            1 2 | b1
Solutions: (a) Aug. matrix = 2 4 | b2     R2 − 2R1 → R2 , R3 − 2R1 → R3 , R4 − 3R1 → R4
                            2 5 | b3
                            3 9 | b4

    1 2 | b1
∼   0 0 | b2 − 2b1      R4 − 3R3 → R4
    0 1 | b3 − 2b1
    0 3 | b4 − 3b1

    1 2 | b1
∼   0 0 | b2 − 2b1
    0 1 | b3 − 2b1
    0 0 | b4 − 3b3 + 3b1
A solution exists only if b2 − 2b1 = 0 and b4 − 3b3 + 3b1 = 0, i.e.
b2 = 2b1 , b4 = 3b3 − 3b1 .
Since x2 = b3 − 2b1
x1 + 2x2 = b1
x1 + 2(b3 − 2b1 ) = b1
x1 + 2b3 − 4b1 = b1
x1 = 5b1 − 2b3
General solution: (x1 , x2 ) = (5b1 − 2b3 , b3 − 2b1 ).
                    1 2  3 | b1
(b) Aug. matrix =   2 4  6 | b2     R2 − 2R1 → R2 , R3 − 2R1 → R3 , R4 − 3R1 → R4
                    2 5  7 | b3
                    3 9 12 | b4

    1 2 3 | b1                           1 2 3 | b1
∼   0 0 0 | b2 − 2b1    R4 − 3R3 → R4 ∼  0 0 0 | b2 − 2b1
    0 1 1 | b3 − 2b1                     0 1 1 | b3 − 2b1
    0 3 3 | b4 − 3b1                     0 0 0 | b4 − 3b3 + 3b1
A solution exists only if b2 − 2b1 = 0 and b4 − 3b3 + 3b1 = 0. For the general solution:
Since x3 is free variable assume x3 = k, k ∈ R.
x2 + k = b3 − 2b1
x2 = b3 − 2b1 − k
x1 + 2x2 + 3x3 = b1
x1 + 2(b3 − 2b1 − k) + 3k = b1
x1 + 2b3 − 4b1 − 2k + 3k = b1
x1 = 5b1 − 2b3 − k
(x1 , x2 , x3 ) = (5b1 − 2b3 − k, b3 − 2b1 − k, k) = (5b1 − 2b3 , b3 − 2b1 , 0) + k(−1, −1, 1).
Exercise 2.2.44 Choose the number q so that (if possible) the rank is (a) 1 (b) 2 (c) 3:
     6  4  2         3 1 3
A = −3 −2 −1 ,   B = q 2 q
     9  6  q

                 6  4  2                                         6 4 2
Solution (1) A = −3 −2 −1   2R2 + R1 → R2 , 6R3 − 9R1 → R3   ∼   0 0 0        R3 ↔ R2
                 9  6  q                                         0 0 6q − 18

    6 4 2
∼   0 0 6q − 18
    0 0 0
(a) rank of A is 1 if q = 3
(b) rank of A is 2 if q ≠ 3
(c) rank of A is 3: not possible.
        3 1 3                         3   1   3
(2) B =         3R2 − qR1 → R2   ∼    0 6 − q 0
        q 2 q
(a) rank of B is 1 if q = 6
(b) rank of B is 2 if q ≠ 6
(c) rank of B is 3: not possible.
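The rank-versus-q analysis can be confirmed numerically for sample values of q (numpy assumed available):

```python
import numpy as np

def rank_A(q):
    # Matrix A of Exercise 2.2.44 with the variable entry q.
    return np.linalg.matrix_rank(np.array([[6, 4, 2], [-3, -2, -1], [9, 6, q]]))

def rank_B(q):
    return np.linalg.matrix_rank(np.array([[3, 1, 3], [q, 2, q]]))

assert rank_A(3) == 1 and rank_A(5) == 2          # rank 1 exactly at q = 3
assert rank_B(6) == 1 and rank_B(0) == 2          # rank 1 exactly at q = 6
assert all(rank_A(q) < 3 for q in range(-5, 6))   # rank 3 is never possible
```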
Exercise 2.2.54 : True or False? (Give a reason if true, or a counterexample to show it is false.)
(a) A square matrix has no free variables.
(b) An invertible matrix has no free variables.
(c) An m by n matrix has no more than n pivot variables.
(d) An m by n matrix has no more than m pivot variables.
Solution :
(a) A square matrix has no free variables.
                                1 1
Ans. False; for example A =     0 0   has a free variable (its second column has no pivot).
(b) An invertible matrix has no free variables.
Ans. True. A matrix A is invertible if and only if its columns are linearly independent,
so every column has a pivot and there are no free variables.
Solution: Consider
x − 3y − z = 0, i.e. x = 3y + z, y, z ∈ R.
(x, y, z) = (3y + z, y, z) = y(3, 1, 0) + z(1, 0, 1)
Hence the special solutions are (3, 1, 0) and (1, 0, 1). We have to find a matrix A
whose special solutions are (3, 1, 0) and (1, 0, 1).
        a11 a12 a13
Let A = a21 a22 a23
        a31 a32 a33
Consider A(3, 1, 0)T = 0:
a11 a12 a13   3     0
a21 a22 a23 · 1  =  0
a31 a32 a33   0     0
3a11 + a12 = 0 ⇒ a12 = −3a11
3a21 + a22 = 0 ⇒ a22 = −3a21
3a31 + a32 = 0 ⇒ a32 = −3a31
Consider A(1, 0, 1)T = 0:
a11 a12 a13   1     0
a21 a22 a23 · 0  =  0
a31 a32 a33   1     0
a11 + a13 = 0 ⇒ a13 = −a11
Linearly independent and dependent
Linearly independent
A subset {v1 , v2 , . . . , vn } of a vector space V is said to be linearly independent if whenever
c1 , c2 , . . . , cn ∈ R are such that c1 v1 + c2 v2 + · · · + cn vn = 0, then c1 = c2 = · · · = cn = 0.
Linearly dependent
A nonempty finite subset {v1 , v2 , . . . , vn } of a vector space V is said to be linearly dependent if
there exist scalars c1 , c2 , . . . , cn ∈ R (not all zero) such that c1 v1 + c2 v2 + · · · + cn vn = 0.
Ex 1: If v1 = zero vector, then the set is linearly dependent. We may choose c1 = 1 and
all other ci = 0; this is a nontrivial combination that produces zero:
1v1 + 0v2 + · · · + 0vn = 1 × 0 + 0 + · · · + 0 = 0
The columns are linearly dependent, since the 2nd column is 3 times the first: the combination
of columns with weights −3, 1, 0, 0 gives the zero vector. That is, if A = [C1 C2 C3 C4 ], then
−3C1 + 1C2 + 0C3 + 0C4 = 0.
The rows are also linearly dependent: row 3 is two times row 2 minus five times row 1. That is,
if A has rows R1 , R2 , R3 , then R3 − 2R2 + 5R1 = 0.
EX 3
The columns of this triangular matrix are linearly independent:
    3 4 2
A = 0 1 5
    0 0 2
Consider
3 4 2   c1     0
0 1 5 · c2  =  0
0 0 2   c3     0
It means 3c1 + 4c2 + 2c3 = 0, 0c1 + 1c2 + 5c3 = 0, 0c1 + 0c2 + 2c3 = 0, i.e.
c3 = 0, c2 = 0, c1 = 0.
A similar reasoning applies to the rows of A, which are also independent. Suppose
c1 (row 1) + c2 (row 2) + c3 (row 3) = 0. From the first components we find 3c1 = 0, or c1 = 0.
Then the second components give c2 = 0, and finally c3 = 0.
If we give the value 1 to the free variable c3 , then back-substitution in U c = 0 gives c2 = −1,
c1 = 1. That is, if A = [C1 C2 C3 ], then C1 − C2 + C3 = 0.
Exercise 2.3.1: Choose three independent columns of U , then make two other choices. Do the
same for A. You have found bases for which spaces?
    2 3 4 1        2 3 4 1
U = 0 6 7 0 ,  A = 0 6 7 0
    0 0 0 9        0 0 0 9
    0 0 0 0        4 6 8 2

Solution: Let U = [U1 U2 U3 U4 ], A = [C1 C2 C3 C4 ]. Consider
    2 3 4 1                      2 3 4 1
A = 0 6 7 0    R4 − 2R1 → R4 ∼   0 6 7 0   = U
    0 0 0 9                      0 0 0 9
    4 6 8 2                      0 0 0 0
i.e. U is the echelon form of A.
Note: Columns that contain a pivot are linearly independent.
Case (i): U1 , U2 , U4 are L.I. (using the note).
Case (ii): U1 , U3 , U4 are L.I., as consider aU1 + bU3 + cU4 = 0:
a(2, 0, 0, 0) + b(4, 7, 0, 0) + c(1, 0, 9, 0) = (0, 0, 0, 0)
⇒ 2a + 4b + c = 0, 7b = 0, 9c = 0
⇒ c = 0, b = 0, a = 0.
Case (iii): U2 , U3 , U4 are L.I., as consider aU2 + bU3 + cU4 = 0:
a(3, 6, 0, 0) + b(4, 7, 0, 0) + c(1, 0, 9, 0) = (0, 0, 0, 0)
⇒ 3a + 4b + c = 0, 6a + 7b = 0, 9c = 0
⇒ c = 0, b = 0, a = 0.
Note: The columns of a matrix A that correspond to the pivot columns of the echelon form
of A are linearly independent.
Case (i): C1 , C2 , C4 are L.I. (using the note).
Case (ii): C1 , C3 , C4 are L.I., as consider aC1 + bC3 + cC4 = 0, i.e. S(a, b, c)T = 0 with
S = [C1 C3 C4 ]:
                      2 4 1 | 0                     2 4 1 | 0
Aug. matrix = [S|0] = 0 7 0 | 0   R4 − 2R1 → R4 ∼   0 7 0 | 0
                      0 0 9 | 0                     0 0 9 | 0
                      4 8 2 | 0                     0 0 0 | 0
⇒ 2a + 4b + c = 0, 7b = 0, 9c = 0 ⇒ c = 0, b = 0, a = 0.
Case (iii): C2 , C3 , C4 are L.I., as consider aC2 + bC3 + cC4 = 0, i.e. B(a, b, c)T = 0 with
B = [C2 C3 C4 ]:
                      3 4 1 | 0                     3 4 1 | 0
Aug. matrix = [B|0] = 6 7 0 | 0   R4 − 2R1 → R4 ∼   6 7 0 | 0
                      0 0 9 | 0                     0 0 9 | 0
                      6 8 2 | 0                     0 0 0 | 0
⇒ 3a + 4b + c = 0, 6a + 7b = 0, 9c = 0 ⇒ c = 0, b = 0, a = 0.
In all three cases we found bases for the column space, a 3-dimensional subspace of R4 .
1 2 3   a     0
3 1 2 · b  =  0 ,   say A(a, b, c)T = 0
2 3 1   c     0
                      1 2 3 | 0                                       1  2  3 | 0
Aug. matrix = [A|0] = 3 1 2 | 0   R2 − 3R1 → R2 , R3 − 2R1 → R3 ∼     0 −5 −7 | 0
                      2 3 1 | 0                                       0 −1 −5 | 0

     1  2  3 | 0
∼    0 −5 −7 | 0      (−5R3 + R2 → R3)
     0  0 18 | 0
⇒ a + 2b + 3c = 0, −5b − 7c = 0, 18c = 0 ⇒ c = 0, b = 0, a = 0.
The vectors are L.I.
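Independence of these three vectors can also be confirmed from the determinant; a numpy check (assumed dependency):

```python
import numpy as np

# The rows (1,2,3), (3,1,2), (2,3,1) are the vectors being tested.
A = np.array([[1, 2, 3],
              [3, 1, 2],
              [2, 3, 1]])

det = np.linalg.det(A)
assert round(det) == 18                  # nonzero determinant => vectors are L.I.
assert np.linalg.matrix_rank(A) == 3
```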
Lecture 16
2.3 Linear Independence, Basis and
Dimension
• spanning a subspace.
• dimension of a subspace.
Notes:
Basis for a Vector Space: A basis for a vector space V is a sequence of vectors having two
properties at once: the vectors are linearly independent, and they span V .
Notes:
• Spanning involves the column space and independence involves the null
space.
Example: Check whether the following sets are bases of R3 or not.
Solution:
(a)
        1 −1 0
Let A = 2  2 8
        2  1 0
⇒ |A| = −24
⇒ |A| ≠ 0, so the columns are linearly independent and form a basis of R3 .
Dimension: The number of elements in a basis of a vector space is called the dimension
of the vector space.
Problem Set-2.3
16. Describe the subspace of R3 (is it a line or a plane or R3 ?) spanned by
(a) the two vectors (1, 1, −1) and (−1, −1, 1).
(b) the three vectors (0, 1, 1) and (1, 1, 0) and (0, 0, 0).
(c) the columns of a 3 by 5 echelon matrix with 2 pivots.
(d) all vectors with positive components.
Ans. (a)
         1 −1
Let A =  1 −1     R2 ← R2 − R1 , R3 ← R3 + R1
        −1  1
        1 −1
⇒ A ∼   0  0    (echelon form)
        0  0
Here the rank of A is 1, so the two vectors (1, 1, −1) and (−1, −1, 1) span a line in R3 .
(b) The vectors (0, 1, 1) and (1, 1, 0) are independent and (0, 0, 0) contributes nothing, so the
rank is 2. ∴ The subspace of R3 spanned by the three vectors (0, 1, 1), (1, 1, 0) and (0, 0, 0)
is a plane of R3 .
(c) For example
        1 0 0 0 0
Let A = 0 1 0 0 0
        0 0 0 0 0
The two pivot columns span a plane in R3 .
23 Find bases for the two column spaces. Find bases for the two row spaces.
Find bases for the two nullspace.
    1 3 2
A = 0 1 1
    1 3 2
Ans.
        1 3 2
Let A = 0 1 1    R3 ← R3 − R1
        1 3 2
        1 3 2
⇒ A ∼   0 1 1   (echelon form)
        0 0 0
Basis for C(A) = {(1, 0, 1), (3, 1, 3)}
Basis for C(AT ) = {(1, 3, 2), (0, 1, 1)}
Ax = 0:
1 3 2   x     0
0 1 1 · y  =  0
0 0 0   z     0
⇒ x + 3y + 2z = 0, y + z = 0
Hence (x, y, z) = z(1, −1, 1).
Basis for N (A) = {(1, −1, 1)}
Lecture 17
2.4 The Four Fundamental Subspaces
The Null Space N (A): The nullspace of A is denoted by N(A). Its dimen-
sion is n − r.
The Row Space C(AT ): The row space of A is the column space of AT . It
is C(AT ), and it is spanned by the rows of A. Its dimension is also r.
The Left Null Space N (AT ): The left nullspace of A is the nullspace of
AT . It contains all vectors y such that AT y = 0, and it is written N (AT ).
Its dimension is m − r.
Notes
• The nullspace N(A) and row space C(AT ) are subspaces of Rn .
• The left nullspace N (AT ) and column space C(A) are subspaces of Rm .
Existence of Inverses:
let A be a matrix of order m × n with rank r.
(1)Full row rank r = m. Ax = b has at least one solution x for every b if
and only if the columns span Rm . Then A has a right-inverse C such that
AC = Im (m by m). This is possible only if m ≤ n.
Ans.
        4 0 0
Let A =
        0 5 0
The matrix A is already in echelon form with 2 pivots.
So rank of A = r = 2.
Here m = 2 and n = 3, so m = r = 2.
⇒ A is a full-row-rank matrix.
⇒ a right inverse C of A exists and is given by
C = AT (AAT )−1
      1/4  0
⇒ C =  0  1/5
       0   0
Lecture 18
2.4 The Four Fundamental Subspaces
Problem Set-2.4
2. Find the dimension and a basis for the four fundamental subspaces for
1 2 0 1
A= 0 1 1 0
1 2 0 1
Ans.
        1 2 0 1
Let A = 0 1 1 0    R3 ← R3 − R1
        1 2 0 1
        1 2 0 1
⇒ A ∼   0 1 1 0   (echelon form)
        0 0 0 0
Rank of A = r = 2.
Here m = 3, n = 4.
dim C(A) = r = 2
dim C(AT ) = r = 2
dim N (A) = n − r = 2
dim N (AT ) = m − r = 1
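These dimensions are easy to check numerically; note that m − r = 3 − 2 = 1 for the left nullspace, matching the single basis vector found below (numpy assumed available):

```python
import numpy as np

A = np.array([[1, 2, 0, 1],
              [0, 1, 1, 0],
              [1, 2, 0, 1]])
m, n = A.shape            # m = 3, n = 4
r = np.linalg.matrix_rank(A)

assert r == 2
# Dimensions of C(A), C(A^T), N(A), N(A^T):
assert (r, r, n - r, m - r) == (2, 2, 2, 1)
```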
The Null Space N (A): Ax = 0
1 2 0 1   x1     0
0 1 1 0 · x2  =  0
0 0 0 0   x3     0
          x4
⇒ x1 + 2x2 + x4 = 0, x2 + x3 = 0
Hence x = (x1 , x2 , x3 , x4 ) = (2x3 − x4 , −x3 , x3 , x4 ) = x3 (2, −1, 1, 0) + x4 (−1, 0, 0, 1)
Basis for N (A) = {(2, −1, 1, 0), (−1, 0, 0, 1)}
         1 0 1    R2 ← R2 − 2R1
         2 1 2
Let AT = 0 1 0
         1 0 1    R4 ← R4 − R1
       1 0 1
⇒ AT ∼ 0 1 0
       0 1 0      R3 ← R3 − R2
       0 0 0
       1 0 1
⇒ AT ∼ 0 1 0      (echelon form)
       0 0 0
       0 0 0
Basis for C(AT ) = {(1, 2, 0, 1), (0, 1, 1, 0)}
AT y = 0:
1 0 1   y1     0
2 1 2 · y2  =  0
0 1 0   y3     0
1 0 1          0
    1 0 1   y1     0
∼   0 1 0 · y2  =  0
    0 0 0   y3     0
    0 0 0          0
⇒ y1 + y3 = 0, y2 = 0
Hence y = (y1 , y2 , y3 ) = y3 (−1, 0, 1)
Basis for N (AT ) = {(−1, 0, 1)}
Ans.
        0 1 2 3 4
Let A = 0 1 2 4 6    R2 ← R2 − R1
        0 0 0 1 2
        0 1 2 3 4
⇒ A ∼   0 0 0 1 2    R3 ← R3 − R2
        0 0 0 1 2
        0 1 2 3 4
⇒ A ∼   0 0 0 1 2    (echelon form)
        0 0 0 0 0
Basis for C(A) = {(1, 1, 0), (3, 4, 1)}
Basis for C(AT ) = {(0, 1, 2, 3, 4), (0, 1, 2, 4, 6)}
Basis for N (A) = {(1, 0, 0, 0, 0), (0, −2, 1, 0, 0), (0, 2, 0, −2, 1)}
Basis for N (AT ) = {(1, −1, 1)}
(c) Dimension of nullspace = 1 + dimension of left nullspace.
(d) Left nullspace contains (1, 3); row space contains (3, 1).
(e) Row space = column space, nullspace ≠ left nullspace.
Ans.
(a) A matrix with the required property is given by
    1 0
A = 1 0
    0 1
Lecture 19
Chapter 3: Orthogonality
Inner Product:
Let x = (x1 , x2 ) and y = (y1 , y2 ). Then the inner product of two vectors x
and y is denoted by xT y and defined as xT y = x1 y1 + x2 y2
Let x = (x1 , x2 , x3 ) and y = (y1 , y2 , y3 ). Then xT y = x1 y1 + x2 y2 + x3 y3
Let x = (x1 , x2 , ......, xn ) and y = (y1 , y2 , ...., yn ).
Then xT y = x1 y1 + x2 y2 + ..... + xn yn
Note:
xT x = (x1 x2 · · · xn ) · (x1 , x2 , . . . , xn )T = x1^2 + x2^2 + · · · + xn^2 = ||x||^2
Hence the inner product of a vector with itself is equal to the squared length of the vector.
• If xT y > 0, their angle is less than 90°. If xT y < 0, their angle is greater than 90°.
• The only vector with length zero and the only vector orthogonal to
itself is the zero vector.
Orthogonal Vectors:
Two vectors x and y are said to be orthogonal iff xT y = 0
Orthogonal Subspaces:
Two subspaces V and W of the same space Rn are orthogonal if every vector
v in V is orthogonal to every vector w in W i.e v T w = 0 for all v and w.
Examples
• y = x line ⊥ y = −x line in R2 .
Notes
Fundamental theorem of orthogonality: The row space is orthogonal to
the nullspace (in Rn ). The column space is orthogonal to the left nullspace
(in Rm ).
Examples
Lecture 20
3.1 Orthogonal Vectors and Subspaces
Problem Set-3.1
Ans.
Here v1T v2 ≠ 0, v1T v3 = 0, v1T v4 ≠ 0, v2T v3 = 0, v2T v4 ≠ 0, v3T v4 ≠ 0.
Ans.
x = (1, 4, 0, 2) ⇒ ||x|| = √21
y = (2, −2, 1, 3) ⇒ ||y|| = √18
xT y = 0 ⇒ x ⊥ y
Ax = 0:
1 0 2   u     0
1 1 4 · v  =  0
        w
⇒ u + 2w = 0, u + v + 4w = 0
⇒ u = −2, v = −2, w = 1
Hence x = (u, v, w) = (−2, −2, 1)
Basis for N (A) = {(−2, −2, 1)}
So Basis for C(AT )⊥ = {(−2, −2, 1)}
x = xr + xn
⇒ (3, 3, 3) = xr + (−2, −2, 1)
⇒ xr = (3, 3, 3) − (−2, −2, 1) = (5, 5, 2)
⇔ (x − y) ⊥ (x + y)
⇔ (x − y)T (x + y) = 0
⇔ (xT − y T )(x + y) = 0
⇔ xT x + xT y − y T x − y T y = 0
⇔ ||x||^2 − ||y||^2 = 0      (since xT y = y T x)
⇔ ||x|| = ||y||
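A numerical illustration of this equivalence (numpy assumed available): for vectors of equal length, x − y and x + y are orthogonal, and not otherwise.

```python
import numpy as np

x = np.array([3.0, 4.0])   # ||x|| = 5
y = np.array([5.0, 0.0])   # ||y|| = 5

assert np.isclose(np.linalg.norm(x), np.linalg.norm(y))
assert np.isclose((x - y) @ (x + y), 0.0)   # equal lengths => (x-y) is orthogonal to (x+y)

z = np.array([1.0, 1.0])   # ||z|| != ||x||
assert not np.isclose((x - z) @ (x + z), 0.0)
```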
Lecture-21
The cosine of the angle is directly related to the inner product. For this, consider the
triangle in the two-dimensional case. Suppose the vectors a and b make angles α and β
with the x-axis as in the figure. The length ||a|| is the hypotenuse in the triangle OaQ, so
the sine and cosine of α are
sin α = a2 /||a||,   cos α = a1 /||a||.
For the angle β, the sine is b2 /||b|| and the cosine is b1 /||b||. The cosine of the angle
θ = β − α comes from inner products:
cos θ = cos(β − α)
      = cos β cos α + sin β sin α
      = (a1 b1 + a2 b2 )/(||a|| ||b||)
      = aT b/(||a|| ||b||)
      = ⟨a, b⟩/(||a|| ||b||)
Example 1 Project b = (1, 2, 3) onto the line through a = (1, 1, 1) to get x̂ and p.
Solution: x̂ = aT b/aT a = (1 + 1 × 2 + 1 × 3)/(1^2 + 1^2 + 1^2 ) = 6/3 = 2. The projection is
p = x̂ a = 2(1, 1, 1) = (2, 2, 2). The angle between a and b satisfies
cos θ = aT b/(||a|| ||b||) = 6/(√3 √14).
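Example 1 can be reproduced in a few lines (numpy assumed available):

```python
import numpy as np

# Project b onto the line through a, as in Example 1.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.0, 2.0, 3.0])

xhat = (a @ b) / (a @ a)          # 6/3 = 2
p = xhat * a                      # projection (2, 2, 2)
cos_theta = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

assert xhat == 2.0
assert np.allclose(p, [2.0, 2.0, 2.0])
assert np.isclose(cos_theta, 6 / (np.sqrt(3) * np.sqrt(14)))
```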
Schwarz Inequality:
|aT b| ≤ ||a|| ||b||.
Exercise-3.2
1. (a) Given any two positive numbers x and y, choose the vector b equal to (√x, √y),
and choose a = (√y, √x). Apply the Schwarz inequality to compare the arithmetic mean
(x + y)/2 with the geometric mean √(xy).
Solution: With a = (√y, √x) and b = (√x, √y), the Schwarz inequality gives
aT b ≤ ||a|| ||b||
⇒ √y √x + √x √y ≤ √((√y)^2 + (√x)^2 ) · √((√x)^2 + (√y)^2 )
⇒ 2√(xy) ≤ √(y + x) √(x + y)
⇒ √(xy) ≤ (x + y)/2      (G.M. ≤ A.M.)
||x + y|| ≤ ||x|| + ||y||
⇒ ||x + y||^2 ≤ (||x|| + ||y||)^2
⇒ (x + y)T (x + y) ≤ ||x||^2 + 2||x|| ||y|| + ||y||^2
⇒ (xT + y T )(x + y) ≤ ||x||^2 + 2||x|| ||y|| + ||y||^2
⇒ xT x + xT y + y T x + y T y ≤ ||x||^2 + 2||x|| ||y|| + ||y||^2
⇒ ||x||^2 + 2xT y + ||y||^2 ≤ ||x||^2 + 2||x|| ||y|| + ||y||^2
⇒ xT y ≤ ||x|| ||y||      (Schwarz inequality).
inequality with b = (1, 1, . . . , 1), we have
aT b ≤ ||a|| ||b||
⇒ a1 + a2 + · · · + an ≤ √(a1^2 + a2^2 + · · · + an^2 ) · √(1^2 + 1^2 + · · · + 1^2 )
⇒ a1 + a2 + · · · + an ≤ √n · √(a1^2 + a2^2 + · · · + an^2 )
Assignments
Exercise-3.2, Q. 5,9.
Lect-22
Remark 1
1. The projection matrix P = aaT /aT a is unchanged if a is doubled, e.g. a = (2, 2, 2):
                         2                     1/3 1/3 1/3
P = aaT /aT a = (1/12) · 2 · (2 2 2)     =     1/3 1/3 1/3
                         2                     1/3 1/3 1/3
2. To project b onto a, multiply it by the projection matrix P : p = P b, P = aaT /aT a.
Exercise-3.2
8. Prove that the trace of P = aaT /aT a, which is the sum of its diagonal entries, always
equals 1.
Solution: Let a = (a1 a2 · · · an )T . Then
P = aaT /aT a
                                          a1^2    a1 a2   · · ·  a1 an
                                          a2 a1   a2^2    · · ·  a2 an
  = (1/(a1^2 + a2^2 + · · · + an^2 )) ·    ..      ..      ..     ..
                                          an a1   an a2   · · ·  an^2
so trace(P ) = (a1^2 + a2^2 + · · · + an^2 )/(a1^2 + a2^2 + · · · + an^2 ) = 1.
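The trace property is easy to test for random vectors a (numpy assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    a = rng.standard_normal(4)
    P = np.outer(a, a) / (a @ a)         # P = a a^T / a^T a
    assert np.isclose(np.trace(P), 1.0)  # trace is always 1
    assert np.allclose(P @ P, P)         # and P is a projection: P^2 = P
```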
(a) b = (cos θ, sin θ) and a = (1, 0)
Solution:
p = x̂ a = (aT b/aT a) a
aT b = (1 0) · (cos θ, sin θ)T = cos θ,   aT a = 1
p = cos θ (1, 0) = (cos θ, 0)
(b) b = (1, 1) and a = (1, −1)
Solution:
p = x̂ a = (aT b/aT a) a
aT b = (1 −1) · (1, 1)T = 0
p = 0 · a = (0, 0)
Assignments
Exercise-3.2, Q. 11,19.
Lecture-23
Ax = b (2)
with b ∉ C(A). Then (2) has no solution, and one must choose a best-fit solution x̂.
As b ∉ C(A), let us obtain a vector p in C(A) which is the closest vector to b in C(A),
so that the error e = b − p has minimum length, i.e. ||e||^2 = ||b − p||^2 is minimum.
Hence this is called the least squares approximation.
||e||^2 is minimum
⇔ e ⊥ C(A)
⇔ e ∈ N (AT )
⇔ AT e = 0
⇔ AT (b − p) = 0
⇔ AT (b − Ax̂) = 0
⇔ AT b = AT Ax̂
p = A(AT A)−1 AT Ax = Ax = b :
3. When A is square and invertible, the column space is the whole space. Every vector
projects to itself, p equals b, and x̂ = x. If A is invertible, this is the only case when we
can take apart (AT A)−1 and write it as A−1 (AT )−1 . When A is rectangular that is not
possible.
4. Suppose A has only one column, containing a. Then the matrix AT A is the number
aT a and x̂ is aT b/aT a.
5. AT A has the same nullspace as A:
xT AT Ax = 0 ⇒ ||Ax||^2 = 0 ⇒ Ax = 0.
Projection Matrices
p = P b with P = A(AT A)−1 AT . Note that
P T = [A(AT A)−1 AT ]T = (AT )T [(AT A)−1 ]T AT = A(AT A)−1 AT = P.
When A is square and invertible,
P = A(AT A)−1 AT = AA−1 (AT )−1 AT = I.
Exercise-3.3
1. Solve Ax = b by least squares and find p = Ax̂, if
    1 0        1
A = 0 1 ,  b = 1
    1 1        0
Verify that the error b − p is perpendicular to the columns of A.
Solution:
x̂ = (AT A)−1 AT b
With AT A = [2 1; 1 2] and (AT A)−1 = (1/3)[2 −1; −1 2], we get AT b = (1, 1), so
x̂ = (1/3, 1/3)
and
p = Ax̂ = (1/3, 1/3, 2/3).
Here, error = b − p = (1, 1, 0) − (1/3, 1/3, 2/3) = (2/3, 2/3, −2/3).
(b − p)T C1 = (2/3)(1) + (2/3)(0) + (−2/3)(1) = 2/3 − 2/3 = 0 and
(b − p)T C2 = (2/3)(0) + (2/3)(1) + (−2/3)(1) = 2/3 − 2/3 = 0.
Hence b − p is perpendicular to the columns of A.
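The same least-squares solution comes out of numpy's lstsq (numpy assumed available):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

xhat, *_ = np.linalg.lstsq(A, b, rcond=None)
p = A @ xhat
e = b - p

assert np.allclose(xhat, [1/3, 1/3])
assert np.allclose(p, [1/3, 1/3, 2/3])
assert np.allclose(A.T @ e, 0.0)   # error is perpendicular to both columns
```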
4. The following system has no solution:
     1 −1   C     4
Ax = 1  0 ·    =  5  = b.
     1  1   D     9
Sketch and solve a straight-line fit that leads to the minimization of the quadratic
(C − D − 4)^2 + (C − 5)^2 + (C + D − 9)^2 . What is the projection of b onto the
column space of A?
Solution:
x̂ = (AT A)−1 AT b
With AT A = [3 0; 0 2], (AT A)−1 = [1/3 0; 0 1/2] and AT b = (18, 5), so
x̂ = (6, 5/2).
Now,
         1 −1
p = Ax̂ = 1  0 · (6, 5/2)T = (7/2, 6, 17/2).
         1  1
Assignments
Exercise-3.3, Q. 2,9,12,24.
Lect-24
b = C + Dt
or
C + Dt1 = b1
C + Dt2 = b2
...
C + Dtm = bm
or
1 t1            b1
1 t2    C       b2
...  ·  D   =   ...
1 tm            bm
or Ax = b, with x̂ = (Ĉ, D̂).
E^2 = ||b − Ax||^2 = Σ_{i=1}^{m} (bi − C − Dti )^2 , where ||E||^2 is minimum for some C and D.
∂E^2 /∂C = 0      (1)
∂E^2 /∂D = 0      (2)
Solving (1) and (2) for C and D we get the required straight line b = C + Dt.
Equivalently, AT Ax̂ = AT b, with
    1 t1
A = 1 t2
    ...
    1 tm
             Ĉ
⇔ AT A   ·       =  AT b
             D̂
      m       Σ ti       Ĉ        Σ bi
⇔                    ·        =
     Σ ti    Σ ti^2      D̂        Σ ti bi
(all sums running over i = 1, . . . , m).
Exercise-3.3
     1  1        1
A =  1 −1 ,  b = 2 .
    −2  4        7
Split b into p + q, with p in the column space and q perpendicular to that space.
Which of the four subspaces contains q?
Solution:
p = Ax̂ = A(AT A)−1 AT b
With AT A = [6 −8; −8 18] and (AT A)−1 = (1/44)[18 8; 8 6], the projection matrix is
                          40  12   4
P = A(AT A)−1 AT = (1/44) 12   8 −12
                           4 −12  40
so
p = P b = (1/44)(92, −56, 260).
Now find the vector q such that b = p + q:
q = b − p = (1, 2, 7) − (1/44)(92, −56, 260) = (1/44)(−48, 144, 48).
Then q T A = 0, since
q T C1 = (1/44)(−48 + 144 − 96) = 0 and q T C2 = (1/44)(−48 − 144 + 192) = 0.
Hence q belongs to the left nullspace.
Assignments
Exercise- 3.3, Q. 12, 24.
Lect-25
    a b c
2. det d e f = a · det[e f; h i] − b · det[d f; g i] + c · det[d e; g h]
    g h i
3. The determinant of the identity matrix is 1, e.g. det[1 0; 0 1] = 1 and the determinant
of the 3 × 3 identity matrix is 1.
4. The determinant changes sign when two rows are interchanged, i.e.
det[a b; c d] = −det[c d; a b].
5. The determinant depends linearly on the first row, i.e.
det[a + a′ b + b′ ; c d] = det[a b; c d] + det[a′ b′ ; c d]
and det[ta tb; c d] = t · det[a b; c d].
6. If any two rows of A are equal then det(A) = 0, e.g. det[a c; a c] = 0.
7. Subtracting a multiple of one row from another row leaves the same determinant,
i.e. det[a − tc b − td; c d] = det[a b; c d].
9. det(AB) = det(A)det(B)
     1  2 −2 0
(i) det  2  3 −4 1
    −1 −2  0 2
     0  2  5 3
Solution:
 1  2 −2 0     1  2 −2 0
 2  3 −4 1  =  0 −1  0 1    (R2 ← R2 − 2R1 , R3 ← R3 + R1)
−1 −2  0 2     0  0 −2 2
 0  2  5 3     0  2  5 3

   1  2 −2 0
=  0 −1  0 1    (R4 ← R4 + 2R2)
   0  0 −2 2
   0  0  5 5

   1  2 −2  0
=  0 −1  0  1    (R4 ← R4 + (5/2)R3)
   0  0 −2  2
   0  0  0 10

= 1 · (−1) · (−2) · 10 = 20.
      2 −1  0  0
(ii) det −1  2 −1  0
      0 −1  2 −1
      0  0 −1  2
Solution:
 2 −1  0  0     2  −1   0  0
−1  2 −1  0  =  0 3/2  −1  0    (R2 ← R2 + (1/2)R1)
 0 −1  2 −1     0  −1   2 −1
 0  0 −1  2     0   0  −1  2

   2  −1   0   0
=  0 3/2  −1   0    (R3 ← R3 + (2/3)R2)
   0   0 4/3  −1
   0   0  −1   2

   2  −1   0   0
=  0 3/2  −1   0    (R4 ← R4 + (3/4)R3)
   0   0 4/3  −1
   0   0   0 5/4

= 2 × (3/2) × (4/3) × (5/4) = 5.
Solution:
(a) The rows of A are multiples of (2, −1, 2), so elimination gives
    2 −1 2
A ∼ 0  0 0
    0  0 0
Hence det(A) = 0.
(b) |U | = 4 × 1 × 2 × 2 = 16
Assignments
Exercise-4.2, Q. 2,6,13.
LECTURE-26
AN INVERSE FORMULA
A−1 = (1/det A) · adj A
Hence
                7 −3
A−1 = (1/2) ·
               −4  2
CRAMER'S RULE
xi = det Ai /det A,   i = 1, 2, · · ·
EXAMPLE 2. (Text Q. 14) Use Cramer’s rule to solve the system
x1 + 4x2 = 2
(b) 2x1 + x2 = 1
x1 + 2x2 + x3 = 0
x2 + 2x3 = 0
x1 = det A1 /det A = −6/3 = −2      (1)
x2 = det A2 /det A = 3/3 = 1        (2)
[Figure: points A, B, C, D, E in the xy-plane, with C and D above the segment AE.]
From the figure, OA = l (length) and DE = h (height), where h = |E − D|.
[Figure: triangle with vertices A(2, 2), B(−1, 3), C(0, 0).]
EXAMPLE 3(b) Move the third vertex to C = (1, −4) and justify the formula
                      x1 y1 1             2  2 1
area(ABC) = (1/2) det x2 y2 1 = (1/2) det −1 3 1  = 19/2
                      x3 y3 1             1 −4 1
[Figure: triangle with vertices A(2, 2), B(−1, 3), C(1, −4).]
EXAMPLE 3(c) Sketch A′ = (1, 6), B ′ = (−2, 7), and C ′ = (0, 0) and their relation to
A, B, C.
                             1 6 1
area(A′ B ′ C ′ ) = (1/2) det −2 7 1 = (1/2) det[1 6; −2 7] = 19/2
                             0 0 1
[Figure: both triangles, with A(2, 2), B(−1, 3), C(1, −4) and A′ (1, 6), B ′ (−2, 7), C ′ (0, 0).]
The formula is justified because A′ = (1, 6), B ′ = (−2, 7), and C ′ = (0, 0) are the
translates of A, B, C by −C, and translation does not change area.
EXAMPLE 4 (Text Q.29) A box has edges from (0, 0, 0) to (3, 1, 1), (1, 3, 1) and (1, 1, 3).
Find its volume and also find the area of each parallelogram face.
Solution: (First Part)
OA = u = (3, 1, 1)
OB = v = (1, 3, 1)
OC = w = (1, 1, 3)
                                              3 1 1
Volume of the box is V = |u · (v × w)| = det  1 3 1  = 20
                                              1 1 3
(Second Part) Area of the face
OAEB = ||OA × OB|| = ||u × v||
Now,
        i j k
u × v = 3 1 1 = −2i − 2j + 8k
        1 3 1
Hence,
||u × v|| = √((−2)^2 + (−2)^2 + 8^2 ) = √72 = 6√2
Similarly,
OBGC = ||OB × OC|| = ||v × w||
Now,
        i j k
v × w = 1 3 1 = 8i − 2j − 2k
        1 1 3
Hence,
||v × w|| = √(8^2 + (−2)^2 + (−2)^2 ) = √72 = 6√2
OADC = ||OA × OC|| = ||u × w||
Now,
        i j k
u × w = 3 1 1 = 2i − 8j + 2k
        1 1 3
Hence,
||u × w|| = √(2^2 + (−8)^2 + 2^2 ) = √72 = 6√2
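The volume and face areas can be confirmed with a determinant and cross products (numpy assumed available):

```python
import numpy as np

u = np.array([3.0, 1.0, 1.0])
v = np.array([1.0, 3.0, 1.0])
w = np.array([1.0, 1.0, 3.0])

# Volume of the box is |det| of the matrix with edge vectors as rows.
volume = abs(np.linalg.det(np.array([u, v, w])))
assert round(volume) == 20

# Each parallelogram face has area ||edge1 x edge2|| = 6*sqrt(2).
for e1, e2 in [(u, v), (v, w), (u, w)]:
    area = np.linalg.norm(np.cross(e1, e2))
    assert np.isclose(area, 6 * np.sqrt(2))
```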
Q.27 The parallelogram with sides (2, 1) and (2, 3) has the same area as the parallelogram
with sides (2, 2) and (1, 3). Find those areas from 2 by 2 determinants:
det[2 1; 2 3] = 6 − 2 = 4 and det[2 2; 1 3] = 6 − 2 = 4.
LECTURE-27
5.1 INTRODUCTION
Eigenvalues and eigenvectors are among the most important and interesting topics in Linear
Algebra. Basic concepts of determinants and matrices will be used to study them, and
eigenvalues and eigenvectors will help to solve many problems in Linear Algebra. The
basic concepts of eigenvalues and eigenvectors are useful throughout pure and applied
mathematics. Eigenvalues are also used to study differential equations and continuous
dynamical systems; they provide critical information in engineering design, and they arise
naturally in fields such as physics and chemistry.
A matrix A acts on a vector x to produce the vector Ax. It is like a function: x goes in
and Ax comes out. Since Ax is a vector, a question arises about its direction: the output
vector may go in a different direction from x. For the special vectors whose direction is
preserved, we write
Ax = λx.
Such a special vector x is called an eigenvector of the matrix A, and the scalar λ is called
the eigenvalue of the matrix A.
Example 1. Let
    3 −2        −1          2
A =      ,  u =     ,  v =    .
    1  0         1          1
Solution:
Au = (−5, −1) = (−1)(5, 1) ≠ λu
so u is not an eigenvector of A, because Au is not a multiple of u. Again,
Av = (4, 2) = 2(2, 1) = 2v = λv
so v is an eigenvector of A with eigenvalue λ = 2.
[Figure: the vectors u, Au, v and Av in the plane.]
Example 2. Let
    1 3        −3           2
A =      ,  u =    ,  v =     .
    2 2         2          −2
Solution:
Au = (3, −2) = (−1)(−3, 2) = λu
where λ = −1. Therefore u is an eigenvector corresponding to the eigenvalue −1. Again,
Av = (−4, 0) ≠ λv
Thus v is not an eigenvector of A, because Av is not a multiple of v.
[Figure: the vectors u, Au, v and Av in the plane.]
From the above two examples it is clear that only some special vectors are eigenvectors
corresponding to an eigenvalue λ of a matrix A. This scalar λ may be zero or nonzero
(positive, negative or imaginary). The number of eigenvalues depends on the order of the
matrix: if the matrix is n × n, then we get n eigenvalues (counted with multiplicity), and
when these are distinct we get n linearly independent eigenvectors. But sometimes an
eigenvalue may be repeated.
DEFINITION: A nonzero vector x with Ax = λx is an eigenvector corresponding to λ.
If A is singular, i.e. det(A) = 0, then λ = 0 is an eigenvalue.
det(A − λI) = 0
is the key equation to find the eigenvalues, and is called the eigenvalue equation or
characteristic equation.
The first step is to understand how eigenvalues can be useful. One of their applications is
to ordinary differential equations:
dv/dt = 4v − 5w,   v = 8 at t = 0
dw/dt = 2v − 3w,   w = 5 at t = 0.
In matrix form,
du/dt = Au, with u = u(0) at t = 0      (1)
where
       v(t)            8        4 −5
u(t) =      ,  u(0) =     ,  A =
       w(t)            5        2 −3
The pure exponential solutions of equation (1) have the form u(t) = e^{λt} x. Now we find
the eigenvalues and eigenvectors of A:
det(A − λI) = 0
i.e. λ^2 − λ − 2 = 0
i.e. λ = −1 or λ = 2
Therefore λ = −1 and λ = 2 are the eigenvalues of the matrix A. For each eigenvalue,
A − λI is singular and has a vector x in its nullspace; in fact the nullspace contains a
whole line of eigenvectors. It is a subspace.
(A + I)x = 0
     5 −5   y     0
i.e.      ·    =
     2 −2   z     0
Eigenvector for λ1 = −1: x1 = (1, 1).
(A − 2I)x = 0
     2 −5   y     0
i.e.      ·    =
     2 −5   z     0
Eigenvector for λ2 = 2: x2 = (5, 2).
[Figure: the eigenvector directions x1 and x2 in the plane.]
The special solutions of equation (1) are u = e^{λt} x. Therefore the complete solution of
equation (1) is u(t) = c1 e^{−t} x1 + c2 e^{2t} x2 , where c1 and c2 are two free parameters.
The initial condition of the system is u = u(0) at t = 0:
c1 x1 + c2 x2 = u(0)
     1 5   c1     8
i.e.     ·     =
     1 2   c2     5
which gives c2 = 1, c1 = 3, so the complete solution is
u(t) = 3e^{−t} (1, 1) + e^{2t} (5, 2)      (3)
LECTURE-28
Example 4 The eigenvalues of a diagonal matrix are the main diagonal elements of the
matrix.
Solution:
Let
    3 0
A =      with λ1 = 3 and λ2 = 2 (try yourself).
    0 2
(A − 3I)x = 0
     0  0   y     0
i.e.      ·    =
     0 −1   z     0
Eigenvector for λ1 = 3: x1 = (1, 0).
(A − 2I)x = 0
     1 0   y     0
i.e.     ·    =
     0 0   z     0
Eigenvector for λ2 = 2: x2 = (0, 1).
[Figure: the eigenvectors x1 and x2 along the coordinate axes.]
Solution:
    1/2 1/2
A =          with λ1 = 1 and λ2 = 0 (check yourself).
    1/2 1/2
(A − I)x = 0
     −1/2  1/2   y     0
i.e.           ·    =        (R2 ← R2 + R1)
      1/2 −1/2   z     0
     −1/2 1/2   y     0
i.e.          ·    =
       0   0    z     0
Eigenvector for λ1 = 1: x1 = (1, 1).
(A − 0 · I)x = 0
     1/2 1/2   y     0
i.e.         ·    =          (R2 ← R2 − R1)
     1/2 1/2   z     0
     1/2 1/2   y     0
i.e.         ·    =
      0   0    z     0
Eigenvector for λ2 = 0: x2 = (1, −1).
[Figure: the eigenvector directions x1 and x2 .]
Solution: Let
    1  4   5
A = 0 3/4  6
    0  0  1/2
The characteristic equation is det(A − λI) = 0, i.e.
(1 − λ)(3/4 − λ)(1/2 − λ) = 0
i.e. λ = 1, λ = 3/4 or λ = 1/2.
The above eigenvalues are the main diagonal entries of the given triangular matrix.
PROPERTIES OF EIGENVALUES
1. The sum of the eigenvalues of a matrix is equal to the trace of the matrix. (The trace
is the sum of the main diagonal entries.) E.g. let
    a b
A =
    c d
Since A is a 2 × 2 matrix, the number of eigenvalues is two.
3. Any square matrix A and its transpose AT have the same eigenvalues.
5. The eigenvalues of a projection matrix are always 1 or 0.
8. If λ is an eigenvalue of an orthogonal matrix A, then 1/λ is also an eigenvalue:
Ax = λx   (λ is an eigenvalue of A)
i.e. AT Ax = λAT x
i.e. Ix = λAT x
i.e. AT x = (1/λ)x
i.e. 1/λ is an eigenvalue of AT .
We know that A and AT have the same eigenvalues. Therefore λ and 1/λ are eigenvalues
of A as well as AT .
NOTE: If A is an orthogonal matrix, then λ and 1/λ are eigenvalues of A as well as AT .
(d) The eigenvalues of A ± kI are λ1 ± k, λ2 ± k, λ3 ± k, . . . , λm ± k.
PROPERTIES OF EIGENVECTORS
For a symmetric matrix, eigenvectors x1 , x2 , x3 corresponding to distinct eigenvalues are
mutually orthogonal:
x1T · x2 = 0,  x2T · x3 = 0,  x3T · x1 = 0
Verify that the trace equals the sum of the eigenvalues, and the determinant equals
their product.
det(A − λI) = 0
     1 − λ   −1
i.e.              = 0
       2    4 − λ
i.e. (1 − λ)(4 − λ) + 2 = 0
i.e. λ^2 − 5λ + 6 = 0
i.e. λ = 2 or λ = 3
(A − 2I)x = 0
     −1 −1   y     0
i.e.       ·    =        (R2 ← R2 + 2R1)
      2  2   z     0
     −1 −1   y     0
i.e.       ·    =
      0  0   z     0
Eigenvector for λ1 = 2: x1 = (−1, 1).
(A − 3I)x = 0
     −2 −1   y     0
i.e.       ·    =        (R2 ← R2 + R1)
      2  1   z     0
     −2 −1   y     0
i.e.       ·    =
      0  0   z     0
Eigenvector for λ2 = 3: x2 = (1, −2).
x1
x2
Second part
Determinant of A,
1 −1
=6
2 4
Therefore Determinant of A=Product of the eigenvalue.
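Both identities used above (trace = sum of eigenvalues, determinant = product of eigenvalues) can be confirmed numerically for this matrix; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [2.0, 4.0]])
lam = np.linalg.eigvals(A)                       # eigenvalues 2 and 3

# trace = sum of eigenvalues, determinant = product of eigenvalues
assert np.isclose(lam.sum(), np.trace(A))        # 5 = 1 + 4
assert np.isclose(lam.prod(), np.linalg.det(A))  # 6 = 2 * 3
```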
Q.2. With the same matrix, solve the differential equation $\frac{du}{dt}=Au$, $u(0)=\begin{pmatrix}0\\6\end{pmatrix}$. What are the two pure exponential solutions?
Solution: The given coefficient matrix is
$$A=\begin{pmatrix}1&-1\\2&4\end{pmatrix}$$
The pure exponential solutions of $\frac{du}{dt}=Au$ are $u=e^{\lambda t}x$:
$$u(t)=e^{\lambda_1t}x_1=e^{2t}\begin{pmatrix}-1\\1\end{pmatrix}\qquad\text{and}\qquad u(t)=e^{\lambda_2t}x_2=e^{3t}\begin{pmatrix}1\\-2\end{pmatrix}$$
These two special solutions give the complete solution $u(t)=c_1e^{2t}x_1+c_2e^{3t}x_2$. At $t=0$,
$$c_1x_1+c_2x_2=u(0),\quad\text{i.e.}\quad\begin{pmatrix}-1&1\\1&-2\end{pmatrix}\cdot\begin{pmatrix}c_1\\c_2\end{pmatrix}=\begin{pmatrix}0\\6\end{pmatrix}$$
$$\text{i.e.}\quad\begin{pmatrix}-1&1\\0&-1\end{pmatrix}\cdot\begin{pmatrix}c_1\\c_2\end{pmatrix}=\begin{pmatrix}0\\6\end{pmatrix},\qquad R_2\leftarrow R_2+R_1$$
so $c_2=-6$ and $c_1=-6$. The complete solution of $\frac{du}{dt}=Au$ is
$$u(t)=-6e^{2t}\begin{pmatrix}-1\\1\end{pmatrix}-6e^{3t}\begin{pmatrix}1\\-2\end{pmatrix}$$
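A numerical sanity check of this kind of eigenvector solution: the combination $u(t)=c_1e^{2t}x_1+c_2e^{3t}x_2$ (with the eigenvectors and coefficients for $u(0)=(0,6)^T$) should satisfy both the initial condition and $u'(t)=Au(t)$. A minimal sketch:

```python
import numpy as np

A = np.array([[1.0, -1.0], [2.0, 4.0]])
x1, x2 = np.array([-1.0, 1.0]), np.array([1.0, -2.0])
c1, c2 = -6.0, -6.0                      # chosen so that u(0) = (0, 6)

def u(t):
    # combination of the two pure exponential solutions
    return c1 * np.exp(2 * t) * x1 + c2 * np.exp(3 * t) * x2

assert np.allclose(u(0.0), [0.0, 6.0])   # initial condition holds
# u'(t) = Au(t): each term e^{lambda t} x differentiates to lambda e^{lambda t} x
t = 0.7
du = c1 * 2 * np.exp(2 * t) * x1 + c2 * 3 * np.exp(3 * t) * x2
assert np.allclose(du, A @ u(t))
```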
LECTURE-29
Q.6. Show that the determinant equals the product of the eigenvalues by imagining that the characteristic polynomial is factored into $\det(A-\lambda I)=(\lambda_1-\lambda)(\lambda_2-\lambda)\cdots(\lambda_n-\lambda)$.
Solution: If $\lambda=0$ then
$$\det A=\lambda_1\lambda_2\lambda_3\cdots\lambda_n$$
Solution: Since $A$ is a triangular matrix, the eigenvalues of the matrix $A$ are $\lambda_1=3$, $\lambda_2=1$ and $\lambda_3=0$.
$$\lambda_1+\lambda_2+\lambda_3=3+1+0=4$$
Trace of the matrix $A=3+1+0=4$.
$$\lambda_1\lambda_2\lambda_3=3\cdot1\cdot0=0=\det A$$
Second part
$$(A-3I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}3-3&4&2\\0&1-3&2\\0&0&0-3\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
$$\text{i.e.}\quad\begin{pmatrix}0&4&2\\0&-2&2\\0&0&-3\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\qquad R_2\leftarrow R_2+\tfrac12R_1$$
$$\text{i.e.}\quad\begin{pmatrix}0&4&2\\0&0&3\\0&0&-3\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\qquad R_3\leftarrow R_3+R_2$$
$$\text{i.e.}\quad\begin{pmatrix}0&4&2\\0&0&3\\0&0&0\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
Eigenvector for $\lambda_1=3$ is $x_1=\begin{pmatrix}1\\0\\0\end{pmatrix}$.
$$(A-I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}3-1&4&2\\0&1-1&2\\0&0&0-1\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
$$\text{i.e.}\quad\begin{pmatrix}2&4&2\\0&0&2\\0&0&-1\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\qquad R_3\leftarrow R_3+\tfrac12R_2$$
$$\text{i.e.}\quad\begin{pmatrix}2&4&2\\0&0&2\\0&0&0\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
Eigenvector for $\lambda_2=1$ is $x_2=\begin{pmatrix}-2\\1\\0\end{pmatrix}$.
$$(A-0\cdot I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}3&4&2\\0&1&2\\0&0&0\end{pmatrix}\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
Here $z$ is the free variable since $z$'s pivot is missing. Here $y=-2z$ and $x=2z$.
Eigenvector for $\lambda_3=0$ is $x_3=\begin{pmatrix}2\\-2\\1\end{pmatrix}$.
[Figure: the three eigenvectors $x_1$, $x_2$, $x_3$ sketched in $xyz$-space.]
Solution:
We know that the sum of the eigenvalues of the matrix is equal to the trace of the matrix:
$$\lambda_1+\lambda_2=3-3=0\qquad(1)$$
$$\lambda_1\times\lambda_2=\begin{vmatrix}3&4\\4&-3\end{vmatrix}=-25\qquad(2)$$
From (1) and (2), $\lambda_1=5$ and $\lambda_2=-5$. Let $x_1$ be the eigenvector of the corresponding eigenvalue $\lambda_1=5$:
$$(A-5I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}-2&4\\4&-8\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad R_2\leftarrow R_2+2R_1$$
$$\text{i.e.}\quad\begin{pmatrix}-2&4\\0&0\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
Eigenvector for $\lambda_1=5$: $x_1=\begin{pmatrix}-2\\-1\end{pmatrix}$.
Let $x_2$ be the eigenvector of the corresponding eigenvalue $\lambda_2=-5$:
$$(A+5I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}8&4\\4&2\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad R_2\leftarrow R_2-\tfrac12R_1$$
$$\text{i.e.}\quad\begin{pmatrix}8&4\\0&0\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
Eigenvector for $\lambda_2=-5$: $x_2=\begin{pmatrix}1\\-2\end{pmatrix}$.
[Figure: the orthogonal eigenvectors $x_1$ and $x_2$.]
Q.17. The powers $A^k$ of this matrix $A$ approach a limit as $k\to\infty$:
$$A=\begin{pmatrix}.8&.3\\.2&.7\end{pmatrix},\quad A^2=\begin{pmatrix}.70&.45\\.30&.55\end{pmatrix}\quad\text{and}\quad A^\infty=\begin{pmatrix}.6&.6\\.4&.4\end{pmatrix}$$
Solution: Given
$$A=\begin{pmatrix}.8&.3\\.2&.7\end{pmatrix}$$
The characteristic equation for the matrix $A$ is
$$\det(A-\lambda I)=\begin{vmatrix}.8-\lambda&.3\\.2&.7-\lambda\end{vmatrix}=\lambda^2-1.5\lambda+0.5=0$$
$$\Rightarrow\ \lambda_1=1\ \text{or}\ \lambda_2=0.5$$
Again, writing $\mu_i$ for the eigenvalues of $A^2$ and $\nu_i$ for the eigenvalues of $A^\infty$ (with common eigenvectors $x_i$),
$$\mu_i=\tfrac12(\lambda_i+\nu_i)$$
$$\mu_ix_i=\tfrac12(\lambda_ix_i+\nu_ix_i)$$
$$A^2x_i=\tfrac12(A+A^\infty)x_i$$
$$A^2x_1+A^2x_2=\tfrac12(A+A^\infty)x_1+\tfrac12(A+A^\infty)x_2$$
$$A^2(x_1+x_2)=\tfrac12(A+A^\infty)(x_1+x_2)$$
$$A^2=\tfrac12(A+A^\infty)$$
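Both the limiting matrix and the identity $A^2=\tfrac12(A+A^\infty)$ can be checked numerically; a small sketch (a power of 100 already makes $0.5^k$ negligible):

```python
import numpy as np

A = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Powers of this positive Markov matrix approach a rank-one limit
assert np.allclose(np.linalg.matrix_power(A, 2), [[0.70, 0.45], [0.30, 0.55]])
A_inf = np.linalg.matrix_power(A, 100)   # effectively the limit A^infinity
assert np.allclose(A_inf, [[0.6, 0.6], [0.4, 0.4]])

# The identity derived above: A^2 = (A + A_inf) / 2
assert np.allclose(np.linalg.matrix_power(A, 2), (A + A_inf) / 2)
```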
Q.3. If you shift to $A-7I$, what are the eigenvalues and eigenvectors, and how are they related to those of $A$?
Q.9 The eigenvalues of $A$ equal the eigenvalues of $A^T$. This is because $\det(A-\lambda I)$ equals $\det(A^T-\lambda I)$. That is true because ............. Show by an example that the eigenvalues
$$\begin{pmatrix}a&b\\c&d\end{pmatrix}$$
Solution: Show that $(1,1)$ is an eigenvector of the matrix $A$ for some eigenvalue $\lambda$. So we have to find the eigenvalue for the given eigenvector.
Let $x=\begin{pmatrix}1\\1\end{pmatrix}$. Then
$$Ax=\begin{pmatrix}a&b\\c&d\end{pmatrix}\cdot\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}a+b\\c+d\end{pmatrix}=\begin{pmatrix}a+b\\a+b\end{pmatrix}=(a+b)\cdot\begin{pmatrix}1\\1\end{pmatrix},\quad\text{since }a+b=c+d$$
LECTURE-30
$$\text{Diagonalization}\qquad S^{-1}AS=\Lambda=\begin{pmatrix}\lambda_1&&&\\&\lambda_2&&\\&&\ddots&\\&&&\lambda_n\end{pmatrix}\qquad(1)$$
$\Lambda$ is a capital lambda because of the small lambdas for the eigenvalues on its diagonal.
$$AS=S\Lambda\ \xrightarrow{\ \text{multiplying by }S^{-1}\text{ from the left}\ }\ S^{-1}AS=\Lambda\qquad(2)$$
Eigendecomposition:
$$\text{From }AS=S\Lambda\ \xrightarrow{\ \text{multiplying by }S^{-1}\text{ from the right}\ }\ A=S\Lambda S^{-1}$$
Here the columns of $S$ are the eigenvectors of $A$, which must be linearly independent.
Note that depending on which side you multiply by the inverse of $S$, you get the diagonalization or the eigendecomposition (provided $S$ is invertible).
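Both identities can be verified numerically for any matrix with independent eigenvectors; a minimal sketch reusing the matrix from the earlier eigenvalue example:

```python
import numpy as np

A = np.array([[1.0, -1.0], [2.0, 4.0]])   # distinct eigenvalues 2 and 3
lam, S = np.linalg.eig(A)                 # columns of S are eigenvectors

# Diagonalization: S^{-1} A S = Lambda
Lambda = np.linalg.inv(S) @ A @ S
assert np.allclose(Lambda, np.diag(lam))

# Eigendecomposition: A = S Lambda S^{-1}
assert np.allclose(A, S @ np.diag(lam) @ np.linalg.inv(S))
```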
Example 1 The eigenvalues of a diagonal matrix are the main diagonal elements of the matrix.
Solution:
Let $A=\begin{pmatrix}3&0\\0&2\end{pmatrix}$, which has $\lambda_1=3$ and $\lambda_2=2$ (try yourself).
$$(A-3I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}0&0\\0&-1\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
Eigenvector for $\lambda_1=3$: $x_1=\begin{pmatrix}1\\0\end{pmatrix}$.
$$(A-2I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}1&0\\0&0\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
Eigenvector for $\lambda_2=2$: $x_2=\begin{pmatrix}0\\1\end{pmatrix}$.
[Figure: $x_1$ and $x_2$ along the coordinate axes.]
From the above figure it is clear that the eigenvectors of matrix $A$ are independent.
Example 2 The eigenvalues of a triangular matrix are the main diagonal elements of the matrix.
Solution:
Let $B=\begin{pmatrix}3&1\\0&3\end{pmatrix}$, which has $\lambda_1=3$ and $\lambda_2=3$ (try yourself).
$$(B-3I)x=0,\quad\text{i.e.}\quad\begin{pmatrix}0&1\\0&0\end{pmatrix}\cdot\begin{pmatrix}y\\z\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
Eigenvector for $\lambda_1=3$: $x_1=\begin{pmatrix}1\\0\end{pmatrix}$.
Eigenvector for $\lambda_2=3$: $x_2=\begin{pmatrix}2\\0\end{pmatrix}$.
[Figure: $x_1$ and $x_2$ lying on the same axis.]
From the above figure it is clear that the eigenvectors of matrix $B$ are not independent.
Therefore the eigenvector matrix is
$$S=\begin{pmatrix}1&2\\0&0\end{pmatrix}$$
Since the matrix $S$ is singular, the inverse of this matrix does not exist.
Repeated eigenvalues leave even more freedom in $S$. For the trivial example $A=I$, any invertible $S$ will do: $S^{-1}IS$ is always diagonal ($\Lambda$ is just $I$). All vectors are eigenvectors.
Remark 3: Other matrices $S$ will not produce a diagonal $\Lambda$. Suppose the first column of $S$ is $y$. If it is to agree with the first column of $AS$, which by matrix multiplication is $Ay$, then $y$ must be an eigenvector.
Example:
$$A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
Its eigenvalues are $\lambda_1=\lambda_2=0$, since it is triangular with zeros on the diagonal:
$$\det(A-\lambda I)=\begin{vmatrix}-\lambda&1\\0&-\lambda\end{vmatrix}=\lambda^2.$$
All eigenvectors solve
$$\begin{pmatrix}0&1\\0&0\end{pmatrix}x=0\cdot x,\quad\text{or}\quad x=\begin{pmatrix}c\\0\end{pmatrix}.$$
There is no second independent eigenvector, so there is no invertible eigenvector matrix $S$. The shortage came from $\lambda_1=\lambda_2$.
$$\text{Repeated eigenvalues}\qquad A=\begin{pmatrix}3&1\\0&3\end{pmatrix}\quad\text{and}\quad B=\begin{pmatrix}2&-1\\1&0\end{pmatrix}$$
Their eigenvalues are 3, 3 and 1, 1. They are not singular! The problem is the shortage of eigenvectors, which are needed for $S$.
Invertibility of $A$ depends on nonzero eigenvalues.
Diagonalization can fail only if there are repeated eigenvalues. Even then, it does not always fail. $A=I$ has repeated eigenvalues $1,1,\dots,1$ but it is already diagonal! The test is to check, for an eigenvalue that is repeated $p$ times, whether there are $p$ independent eigenvectors. To complete that circle of ideas, we have to show that distinct eigenvalues present no problem.
$$A_1=\begin{pmatrix}2&-2\\2&-2\end{pmatrix},\qquad A_2=\begin{pmatrix}2&0\\2&-2\end{pmatrix},\qquad\text{and}\qquad A_3=\begin{pmatrix}2&0\\2&2\end{pmatrix}$$
Solution:
Lecture-31
$$\text{Repeated eigenvalues}\qquad A=\begin{pmatrix}3&1\\0&3\end{pmatrix}\quad\text{and}\quad B=\begin{pmatrix}2&-1\\1&0\end{pmatrix}$$
Their eigenvalues are 3, 3 and 1, 1. They are not singular! The problem is the shortage of eigenvectors, which are needed for $S$.
Diagonalization can fail only if there are repeated eigenvalues. Even then, it does not always fail. $A=I$ has repeated eigenvalues $1,1,\dots,1$ but it is already diagonal! There is no shortage of eigenvectors in that case.
The test is to check, for an eigenvalue that is repeated $p$ times, whether there are $p$ independent eigenvectors, in other words whether $A-\lambda I$ has rank $n-p$. To complete that circle of ideas, we have to show that distinct eigenvalues present no problem.
two vectors are independent; only the trivial combination gives zero.
$$\det(A-\lambda I)=0\ \Rightarrow\ \lambda=1,2,7$$
Since all the eigenvalues are distinct, the matrix $A$ is diagonalizable, and the diagonal matrix is
$$\Lambda=\begin{pmatrix}1&0&0\\0&2&0\\0&0&7\end{pmatrix}.$$
There is one more situation in which the calculations are easy. The eigenvalues of $A^2$ are exactly $\lambda_1^2,\dots,\lambda_n^2$, and every eigenvector of $A$ is also an eigenvector of $A^2$. We start from $Ax=\lambda x$ and multiply again by $A$:
$$A^2x=A\lambda x=\lambda Ax=\lambda^2x.$$
Thus $\lambda^2$ is an eigenvalue of $A^2$, with the same eigenvector $x$. If the first multiplication by $A$ leaves the direction of $x$ unchanged, then so does the second.
The same result comes from diagonalization, by squaring $S^{-1}AS=\Lambda$:
$$\text{Eigenvalues of }A^2\qquad(S^{-1}AS)(S^{-1}AS)=\Lambda^2\quad\text{or}\quad S^{-1}A^2S=\Lambda^2.$$
The matrix $A^2$ is diagonalized by the same $S$, so the eigenvectors are unchanged. The eigenvalues are squared. This continues to hold for any power of $A$: the eigenvalues of $A^k$ are $\lambda_1^k,\lambda_2^k,\dots,\lambda_n^k$, and each eigenvector of $A$ is still an eigenvector of $A^k$.
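A quick numerical illustration of this rule; a small sketch using the matrix with eigenvalues 2 and 3 from earlier:

```python
import numpy as np

A = np.array([[1.0, -1.0], [2.0, 4.0]])   # eigenvalues 2 and 3
lam, S = np.linalg.eig(A)

# Eigenvalues of A^k are lambda^k, with the same eigenvector matrix S
A5 = np.linalg.matrix_power(A, 5)
assert np.allclose(np.sort(np.linalg.eigvals(A5)), np.sort(lam**5))  # 32 and 243
assert np.allclose(A5, S @ np.diag(lam**5) @ np.linalg.inv(S))
```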
Answer: $|A-\lambda I|=0$
$$\text{i.e.}\quad\begin{vmatrix}1-\lambda&1\\1&1-\lambda\end{vmatrix}=0\ \Rightarrow\ \lambda(\lambda-2)=0\ \Rightarrow\ \lambda=2,0.$$
For $\lambda=2$, solve $(A-\lambda I)x=0$:
$$-x_1+x_2=0,\qquad x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1\\1\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=2$ is: $x=\begin{pmatrix}1\\1\end{pmatrix}$.
For $\lambda=0$, solve $(A-\lambda I)x=0$:
$$\begin{pmatrix}1&1\\1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad x_1+x_2=0,\qquad x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1\\-1\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=0$ is: $x=\begin{pmatrix}1\\-1\end{pmatrix}$.
$$S=\begin{pmatrix}1&1\\1&-1\end{pmatrix}\quad\text{and}\quad\Lambda=\begin{pmatrix}2&0\\0&0\end{pmatrix}$$
So
$$A=S\Lambda S^{-1}=\begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}2&0\\0&0\end{pmatrix}\begin{pmatrix}1&1\\1&-1\end{pmatrix}^{-1}$$
Similarly,
$$|B-\lambda I|=0,\quad\text{i.e.}\quad\begin{vmatrix}2-\lambda&1\\0&-\lambda\end{vmatrix}=0\ \Rightarrow\ \lambda(\lambda-2)=0\ \Rightarrow\ \lambda=2,0.$$
For $\lambda=2$: $x_2=0$, so
$$x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=2$ is: $x=\begin{pmatrix}1\\0\end{pmatrix}$.
For $\lambda=0$, solve $(B-\lambda I)x=0$:
$$\begin{pmatrix}2&1\\0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad 2x_1+x_2=0,\qquad x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1\\-2\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=0$ is: $x=\begin{pmatrix}1\\-2\end{pmatrix}$.
$$S=\begin{pmatrix}1&1\\0&-2\end{pmatrix}\quad\text{and}\quad\Lambda=\begin{pmatrix}2&0\\0&0\end{pmatrix}$$
So
$$B=S\Lambda S^{-1}=\begin{pmatrix}1&1\\0&-2\end{pmatrix}\begin{pmatrix}2&0\\0&0\end{pmatrix}\begin{pmatrix}1&1\\0&-2\end{pmatrix}^{-1}$$
Lecture-32
Q.4 If $A=\begin{pmatrix}4&3\\1&2\end{pmatrix}$, find $A^{100}$ by diagonalizing $A$.
Answer:
$$\lambda_1+\lambda_2=4+2=6,\qquad\lambda_1\lambda_2=\det A=5\ \Rightarrow\ \lambda=5,1.$$
For $\lambda=5$, solve $(A-5I)x=0$:
$$-x_1+3x_2=0,\qquad x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}3\\1\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=5$ is: $x=\begin{pmatrix}3\\1\end{pmatrix}$.
For $\lambda=1$, solve $(A-\lambda I)x=0$:
$$\begin{pmatrix}3&3\\1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad x_1+x_2=0,\qquad x=\begin{pmatrix}1\\-1\end{pmatrix}.$$
Eigenvector corresponding to $\lambda=1$ is: $x=\begin{pmatrix}1\\-1\end{pmatrix}$.
$$S=\begin{pmatrix}3&1\\1&-1\end{pmatrix}\quad\text{and}\quad\Lambda=\begin{pmatrix}5&0\\0&1\end{pmatrix}$$
So
$$A=S\Lambda S^{-1}=\begin{pmatrix}3&1\\1&-1\end{pmatrix}\begin{pmatrix}5&0\\0&1\end{pmatrix}\begin{pmatrix}3&1\\1&-1\end{pmatrix}^{-1}$$
$$A^{100}=S\Lambda^{100}S^{-1}=\begin{pmatrix}3&1\\1&-1\end{pmatrix}\begin{pmatrix}5^{100}&0\\0&1^{100}\end{pmatrix}\begin{pmatrix}3&1\\1&-1\end{pmatrix}^{-1}=\frac14\begin{pmatrix}3\times5^{100}+1&3\times5^{100}-3\\5^{100}-1&5^{100}+3\end{pmatrix}.$$
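The closed form can be checked exactly with Python's arbitrary-precision integers (floating point would overflow at $5^{100}$); a small sketch that multiplies the matrix out by hand:

```python
# Exact integer check of the closed form for A^k
A = [[4, 3], [1, 2]]

def matmul(X, Y):
    # plain 2x2 integer matrix multiplication
    return [[sum(X[i][r] * Y[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

k = 100
Ak = A
for _ in range(k - 1):
    Ak = matmul(Ak, A)

expected = [[(3 * 5**k + 1) // 4, (3 * 5**k - 3) // 4],
            [(5**k - 1) // 4, (5**k + 3) // 4]]
assert Ak == expected
```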
Answer:
We first find its eigenvalues by solving the characteristic equation:
$$0=\det(A-\lambda I)=\begin{vmatrix}1-\lambda&1&1&1\\1&1-\lambda&1&1\\1&1&1-\lambda&1\\1&1&1&1-\lambda\end{vmatrix}\xrightarrow{R_1\leftarrow R_1+R_2+R_3+R_4}\begin{vmatrix}4-\lambda&4-\lambda&4-\lambda&4-\lambda\\1&1-\lambda&1&1\\1&1&1-\lambda&1\\1&1&1&1-\lambda\end{vmatrix}$$
$$=(4-\lambda)\begin{vmatrix}1&1&1&1\\1&1-\lambda&1&1\\1&1&1-\lambda&1\\1&1&1&1-\lambda\end{vmatrix}\xrightarrow[R_4\leftarrow R_4-R_1]{R_2\leftarrow R_2-R_1,\ R_3\leftarrow R_3-R_1}(4-\lambda)\begin{vmatrix}1&1&1&1\\0&-\lambda&0&0\\0&0&-\lambda&0\\0&0&0&-\lambda\end{vmatrix}$$
$$=\lambda^3(\lambda-4)\ \Rightarrow\ \lambda_1=0,\ \lambda_2=0,\ \lambda_3=0,\ \lambda_4=4$$
We now find the eigenvectors corresponding to $\lambda=0$:
$$\begin{pmatrix}1&1&1&1&|&0\\1&1&1&1&|&0\\1&1&1&1&|&0\\1&1&1&1&|&0\end{pmatrix}\Longrightarrow\begin{pmatrix}1&1&1&1&|&0\\0&0&0&0&|&0\\0&0&0&0&|&0\\0&0&0&0&|&0\end{pmatrix}\Longrightarrow a+b+c+d=0$$
$$\Longrightarrow\ x=\begin{pmatrix}-b-c-d\\b\\c\\d\end{pmatrix}$$
So
$$\begin{pmatrix}-1\\0\\0\\1\end{pmatrix},\quad\begin{pmatrix}0\\-1\\0\\1\end{pmatrix}\quad\text{and}\quad\begin{pmatrix}0\\0\\-1\\1\end{pmatrix}$$
are eigenvectors with respect to the eigenvalue 0 (any vector whose components sum to zero is an eigenvector).
Notice that
$$\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}\begin{pmatrix}1\\1\\1\\1\end{pmatrix}=4\begin{pmatrix}1\\1\\1\\1\end{pmatrix}\ \Longrightarrow\ \lambda_4=4\ \text{and the corresponding eigenvector is}\ \begin{pmatrix}1\\1\\1\\1\end{pmatrix}.$$
By normalizing it we get $\frac1{\sqrt4}\begin{pmatrix}1\\1\\1\\1\end{pmatrix}$.
The first diagonalizing matrix is
$$S_1=\begin{pmatrix}-1&0&0&1\\0&-1&0&1\\0&0&-1&1\\1&1&1&1\end{pmatrix}$$
and with the columns normalized,
$$S_2=\begin{pmatrix}-\frac1{\sqrt2}&0&0&\frac1{\sqrt4}\\0&-\frac1{\sqrt2}&0&\frac1{\sqrt4}\\0&0&-\frac1{\sqrt2}&\frac1{\sqrt4}\\\frac1{\sqrt2}&\frac1{\sqrt2}&\frac1{\sqrt2}&\frac1{\sqrt4}\end{pmatrix}$$
ASSIGNMENT PROBLEMS
$$A_1=\begin{pmatrix}2&-2\\2&-2\end{pmatrix}\qquad A_2=\begin{pmatrix}2&0\\2&-2\end{pmatrix}\qquad A_3=\begin{pmatrix}2&0\\2&2\end{pmatrix}$$
(a) A is invertible.
(b) A is diagonalizable.
$$A=\begin{pmatrix}1&2\\0&3\end{pmatrix}\quad\text{and}\quad B=\begin{pmatrix}1&2\\2&2\end{pmatrix}.$$
(a) A is invertible.
(b) A is diagonalizable.
(c) S is invertible.
(d) S is diagonalizable.
Lecture-33
By definition, the complex vector space $\mathbb{C}^n$ contains all vectors $x$ with $n$ complex components:
$$\text{Complex vector}\quad x=\begin{pmatrix}x_1\\x_2\\\vdots\\x_n\end{pmatrix}\quad\text{with components }x_j=a_j+ib_j.$$
Example: $x=\begin{pmatrix}1\\i\end{pmatrix}$ and $\|x\|^2=2$; $y=\begin{pmatrix}2+i\\2-4i\end{pmatrix}$ and $\|y\|^2=25$.
For complex vectors, $\|x\|^2=\bar x^Tx$. (For real vectors, $\|x\|^2=x^Tx$.)
Note that $\bar y^Tx$ is different from $\bar x^Ty$; we have to watch the order of the vectors.
For vectors and matrices, a superscript H (or a star) combines both operations, conjugate and transpose. This matrix $\bar A^T=A^H=A^*$ is called "A Hermitian":
$$\text{Conjugate transpose}\qquad\begin{pmatrix}2+i&3i\\4-i&5\\0&0\end{pmatrix}^H=\begin{pmatrix}2-i&4+i&0\\-3i&5&0\end{pmatrix}.$$
Note: for a Hermitian matrix with distinct eigenvalues, the eigenvectors are orthogonal. Comparing $\lambda_1x^Hy$ with $\lambda_2x^Hy$, the outside numbers are $\lambda_1x^Hy=\lambda_2x^Hy$, since the $\lambda$'s are real. Now we use the assumption $\lambda_1\neq\lambda_2$, which forces the conclusion that $x^Hy=0$.
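A numerical illustration of this orthogonality: the Hermitian matrix below is my own illustrative choice (it is not from the notes); its eigenvalues come out real and its eigenvectors orthogonal, as the argument predicts:

```python
import numpy as np

# Illustrative Hermitian matrix (A^H = A), not taken from the notes
A = np.array([[2, 3 - 3j],
              [3 + 3j, 5]])
assert np.allclose(A, A.conj().T)        # Hermitian

lam, X = np.linalg.eigh(A)               # eigh is specialized for Hermitian input
# eigh returns real eigenvalues; columns of X are eigenvectors
x1, x2 = X[:, 0], X[:, 1]
assert abs(x1.conj() @ x2) < 1e-10       # x1^H x2 = 0
```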
Lecture-34
Spectral decomposition of a symmetric matrix:
$$A=Q\Lambda Q^T=\begin{pmatrix}x_1&\cdots&x_n\end{pmatrix}\begin{pmatrix}\lambda_1&&\\&\ddots&\\&&\lambda_n\end{pmatrix}\begin{pmatrix}x_1^T\\\vdots\\x_n^T\end{pmatrix}$$
$$\text{Length unchanged}\qquad\|Ux\|^2=x^HU^HUx=\|x\|^2.$$
This follows directly from $Ux=\lambda x$, by comparing the lengths of the two sides: $\|Ux\|=\|x\|$ by Property 1, and always $\|\lambda x\|=|\lambda|\,\|x\|$. Therefore $|\lambda|=1$.
$$(UV)^H(UV)=(V^HU^H)(UV)=V^HIV=V^HV=I$$
So indeed $(UV)^H=(UV)^{-1}$, and thus $UV$ is unitary.
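A small numerical sketch of this closure property, with two unitary matrices chosen for illustration (a real rotation and a complex unitary):

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta), np.cos(theta)]])   # rotation matrix, unitary
V = np.array([[1, 1], [1j, -1j]]) / np.sqrt(2)   # complex unitary matrix

assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(V.conj().T @ V, np.eye(2))

W = U @ V
assert np.allclose(W.conj().T @ W, np.eye(2))    # the product UV is unitary too
```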
$$A=\begin{pmatrix}1&i&0\\i&0&1\end{pmatrix}.$$
Answer:
$$C=A^HA=\begin{pmatrix}1&-i\\-i&0\\0&1\end{pmatrix}\begin{pmatrix}1&i&0\\i&0&1\end{pmatrix}=\begin{pmatrix}1-i^2&i&-i\\-i&-i^2&0\\i&0&1\end{pmatrix}=\begin{pmatrix}2&i&-i\\-i&1&0\\i&0&1\end{pmatrix}$$
$$C^H=\begin{pmatrix}2&i&-i\\-i&1&0\\i&0&1\end{pmatrix}=C,\quad\text{so }C\text{ is Hermitian.}$$
Lecture-35
$$A=\begin{pmatrix}0&1-i\\1+i&1\end{pmatrix}\qquad K=\begin{pmatrix}0&-1+i\\1+i&i\end{pmatrix}$$
Diagonalization of matrix $A$:
$$|A-\lambda I|=0\ \Rightarrow\ \begin{vmatrix}0-\lambda&1-i\\1+i&1-\lambda\end{vmatrix}=0\ \Rightarrow\ -\lambda(1-\lambda)-(1-i)(1+i)=0\ \Rightarrow\ \lambda^2-\lambda-2=0\ \Rightarrow\ \lambda=2,-1$$
For $\lambda=2$:
$$\begin{pmatrix}-2&1-i\\1+i&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$-2x_1+(1-i)x_2=0\ \Rightarrow\ (1+i)x_1=x_2,\qquad x=\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}1\\1+i\end{pmatrix}.\quad\text{Normalized vector, }u=\frac1{\sqrt3}\begin{pmatrix}1\\1+i\end{pmatrix}.$$
Similarly, for $\lambda=-1$, solve $(A-\lambda I)y=0$:
$$\begin{pmatrix}1&1-i\\1+i&2\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$y_1+(1-i)y_2=0\ \Rightarrow\ y_1=(-1+i)y_2,\qquad y=\begin{pmatrix}y_1\\y_2\end{pmatrix}=\begin{pmatrix}-1+i\\1\end{pmatrix}.\quad\text{Normalized vector, }v=\frac1{\sqrt3}\begin{pmatrix}-1+i\\1\end{pmatrix}$$
$$U=\frac1{\sqrt3}\begin{pmatrix}1&-1+i\\1+i&1\end{pmatrix},\qquad U^H=\frac1{\sqrt3}\begin{pmatrix}1&1-i\\-1-i&1\end{pmatrix},\qquad\Lambda=\begin{pmatrix}2&0\\0&-1\end{pmatrix}$$
Hence
$$A=U\Lambda U^H=\frac1{\sqrt3}\begin{pmatrix}1&-1+i\\1+i&1\end{pmatrix}\begin{pmatrix}2&0\\0&-1\end{pmatrix}\frac1{\sqrt3}\begin{pmatrix}1&1-i\\-1-i&1\end{pmatrix}$$
Diagonalization of matrix $K$:
$$|K-\lambda I|=0\ \Rightarrow\ \begin{vmatrix}0-\lambda&-1+i\\1+i&i-\lambda\end{vmatrix}=0\ \Rightarrow\ -\lambda(i-\lambda)-(1+i)(-1+i)=0\ \Rightarrow\ \lambda^2-i\lambda+2=0\ \Rightarrow\ \lambda=2i,-i$$
For $\lambda=2i$:
$$\begin{pmatrix}-2i&-1+i\\1+i&-i\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$-2ix_1+(-1+i)x_2=0\ \Rightarrow\ (1-i)x_1=x_2,\qquad x=\begin{pmatrix}1\\1-i\end{pmatrix}.\quad\text{Normalized vector, }u=\frac1{\sqrt3}\begin{pmatrix}1\\1-i\end{pmatrix}$$
For $\lambda=-i$:
$$\begin{pmatrix}i&-1+i\\1+i&2i\end{pmatrix}\begin{pmatrix}y_1\\y_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$iy_1+(-1+i)y_2=0\ \Rightarrow\ y_1=(-1-i)y_2,\qquad y=\begin{pmatrix}-1-i\\1\end{pmatrix}.\quad\text{Normalized vector, }v=\frac1{\sqrt3}\begin{pmatrix}-1-i\\1\end{pmatrix}$$
$$U=\frac1{\sqrt3}\begin{pmatrix}1&-1-i\\1-i&1\end{pmatrix},\qquad U^H=\frac1{\sqrt3}\begin{pmatrix}1&1+i\\-1+i&1\end{pmatrix},\qquad\Lambda=\begin{pmatrix}2i&0\\0&-i\end{pmatrix}$$
Hence
$$K=U\Lambda U^H=\frac1{\sqrt3}\begin{pmatrix}1&-1-i\\1-i&1\end{pmatrix}\begin{pmatrix}2i&0\\0&-i\end{pmatrix}\frac1{\sqrt3}\begin{pmatrix}1&1+i\\-1+i&1\end{pmatrix}.$$
ASSIGNMENT PROBLEMS
$$x=\begin{pmatrix}2-4i\\4i\end{pmatrix}\quad\text{and}\quad y=\begin{pmatrix}2+4i\\4i\end{pmatrix}.$$
$$A=\begin{pmatrix}i&1&i\\1&i&i\end{pmatrix}$$
Q.43 A matrix with orthonormal eigenvectors has the form $A=U\Lambda U^{-1}=U\Lambda U^H$. Prove that $AA^H=A^HA$. These are exactly the normal matrices.
POSITIVE DEFINITE MATRIX
Lecture Note 36
Example:
$$A=\begin{pmatrix}2&-1\\-1&2\end{pmatrix}\qquad(2)$$
(I) $x^TAx\geq0$ for all nonzero real vectors $x$.
(II) All the eigenvalues of $A$ satisfy $\lambda_i\geq0$.
(III) No principal submatrix has a negative determinant.
(IV) No pivots are negative.
Example:
$$A=\begin{pmatrix}0&0\\0&1\end{pmatrix}\qquad(3)$$
Example:
$$A=\begin{pmatrix}-1&0\\0&-3\end{pmatrix}\qquad(4)$$
Example:
$$A=\begin{pmatrix}0&0\\0&-1\end{pmatrix}\qquad(5)$$
Note: This is another way to test positive definiteness: the symmetric matrix $A$ is positive definite if and only if there is a matrix $R$ with independent columns such that $A=R^TR$.
$$\text{Elimination}\quad A=LDL^T=(L\sqrt D)(\sqrt DL^T)\quad\text{so take }R=\sqrt DL^T$$
$$\text{Eigenvalues}\quad A=Q\Lambda Q^T=(Q\sqrt\Lambda)(\sqrt\Lambda Q^T)\quad\text{so take }R=\sqrt\Lambda Q^T$$
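The $A=R^TR$ test connects directly to the Cholesky factorization, which succeeds exactly when the matrix is positive definite; a minimal sketch using the matrix from equation (2):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

# numpy's cholesky returns the lower factor L with A = L L^T,
# so R = L^T gives the A = R^T R form from the note
L = np.linalg.cholesky(A)
R = L.T
assert np.allclose(A, R.T @ R)

# Equivalent checks: all eigenvalues positive, and x^T A x > 0 for a sample x
assert np.all(np.linalg.eigvalsh(A) > 0)
x = np.array([3.0, -1.0])
assert x @ A @ x > 0
```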
6 Problem Set 6.2
1. Decide for or against the positive definiteness of
$$A=\begin{pmatrix}2&-1&-1\\-1&2&-1\\-1&-1&2\end{pmatrix},\quad B=\begin{pmatrix}2&-1&-1\\-1&2&1\\-1&1&2\end{pmatrix},\quad C=\begin{pmatrix}0&1&2\\1&0&1\\2&1&0\end{pmatrix}$$
Answer: $\det A=0$; since the determinant is the product of the eigenvalues, it follows that at least one of the eigenvalues of $A$ is equal to zero. Thus $A$ is not positive definite.
$\det(B-\lambda I)=0$
$\Rightarrow(\lambda-1)^2(\lambda-4)=0$
$\Rightarrow\lambda=1,1,4>0$. Thus $B$ is positive definite.
$\det C_2=-1$, where $C_2=\begin{pmatrix}0&1\\1&0\end{pmatrix}$
$\Rightarrow\det C_2=-1<0$. Thus $C$ is not positive definite.
3. If $A$ and $B$ are positive definite, then $A+B$ is positive definite. Pivots and eigenvalues are not convenient for $A+B$. Much better to prove $x^T(A+B)x>0$.
Answer: Given $A=R^TR$
$\Rightarrow|x^TAy|^2=|x^TR^TRy|^2=|(Rx)^T(Ry)|^2\leq\|Rx\|^2\|Ry\|^2$
$=[(Rx)^T(Rx)][(Ry)^T(Ry)]=(x^TR^TRx)(y^TR^TRy)=(x^TAx)(y^TAy)$.
Hence proved.
Lecture Note 37
1 Work Step
To find $U$, we follow these steps:
1. Find the eigenvalues of $AA^T$. Arrange them in decreasing order.
2. Find the corresponding eigenvectors $x_i$ of $AA^T$.
3. Normalize the eigenvectors: $u_i=\frac{x_i}{\|x_i\|}$.
4. Write the orthogonal matrix $U=\begin{pmatrix}u_1&u_2&\cdots&u_m\end{pmatrix}$ with $u_i$ as its $i$th column.
(The columns $v_1,\dots,v_n$ of $V$ are obtained in the same way from $A^TA$.)
To find $\Sigma$, we follow these steps:
1. The diagonal (but rectangular) matrix $\Sigma$ gets its entries from the eigenvalues of $A^TA$. The positive entries will be $\sigma_1,\sigma_2,\dots,\sigma_r$. They are the singular values of $A$. They fill the first $r$ places on the main diagonal of $\Sigma$; the rest are 0.
$$\Sigma=\begin{pmatrix}\sigma_1&0&0&\cdots&0\\0&\sigma_2&0&\cdots&0\\0&0&\sigma_3&\cdots&0\\&&\cdots&&\\0&0&0&\cdots&0\end{pmatrix}\quad\text{with }\sigma_i=\sqrt{\lambda_i}\text{ on its diagonal.}$$
NOTE: All nonzero eigenvalues of $AA^T$ and $A^TA$ are the same.
SVD of $A$:
$$A_{m\times n}=U_{m\times m}\Sigma_{m\times n}V^T_{n\times n}$$
If $A=U\Sigma V^T$ (the SVD), then its pseudoinverse is $A^+=V\Sigma^+U^T$, where
$$\Sigma^+=\begin{pmatrix}\frac1{\sigma_1}&0&0&\cdots&0\\0&\frac1{\sigma_2}&0&\cdots&0\\0&0&\frac1{\sigma_3}&\cdots&0\\&&\cdots&&\\0&0&0&\cdots&0\end{pmatrix}^T$$
It is obtained from $\Sigma$ by replacing all the nonzero singular values by their reciprocals (and transposing).
If $A^{-1}$ exists then $A^{-1}=A^+$.
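The recipe $A^+=V\Sigma^+U^T$ can be carried out directly and compared with NumPy's built-in pseudoinverse; a minimal sketch on a rank-one matrix:

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 8.0]])                  # rank one, so one zero singular value
U, s, Vt = np.linalg.svd(A)

# Build Sigma+ by inverting only the nonzero singular values, then transposing
Sp = np.zeros(A.T.shape)
for i, sv in enumerate(s):
    if sv > 1e-12:
        Sp[i, i] = 1.0 / sv

A_plus = Vt.T @ Sp @ U.T                    # A+ = V Sigma+ U^T
assert np.allclose(A_plus, np.linalg.pinv(A))
```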
2 Problem Set 6.3
Question 1. (a) Compute $AA^T$ and its eigenvalues $\sigma_1^2$, 0 and unit eigenvectors $u_1$, $u_2$.
(b) Choose signs so that $Av_1=\sigma_1u_1$ and verify the SVD:
$$\begin{pmatrix}1&4\\2&8\end{pmatrix}=\begin{pmatrix}u_1&u_2\end{pmatrix}\begin{pmatrix}\sigma_1&\\&0\end{pmatrix}\begin{pmatrix}v_1&v_2\end{pmatrix}^T.$$
(c) Which four vectors give orthonormal bases for $C(A)$, $N(A)$, $C(A^T)$, $N(A^T)$?
Answer: (a) Let's compute $AA^T$ and its eigenvalues:
$$AA^T=\begin{pmatrix}1&4\\2&8\end{pmatrix}\begin{pmatrix}1&4\\2&8\end{pmatrix}^T=\begin{pmatrix}1&4\\2&8\end{pmatrix}\begin{pmatrix}1&2\\4&8\end{pmatrix}=\begin{pmatrix}17&34\\34&68\end{pmatrix}$$
$$\det(AA^T-\lambda I)=0\ \Rightarrow\ \begin{vmatrix}17-\lambda&34\\34&68-\lambda\end{vmatrix}=0\ \Rightarrow\ (17-\lambda)(68-\lambda)-34^2=0\ \Rightarrow\ \lambda^2-85\lambda=0$$
$$\Rightarrow\ \lambda=0,85\ \Rightarrow\ \sigma_1^2=\lambda_1=85,\quad\lambda_2=0$$
Now let's find the corresponding unit eigenvectors for the eigenvalues of $AA^T$.
$$(AA^T-\lambda_1I)x=0\ \Rightarrow\ \begin{pmatrix}-68&34\\34&-17\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$\Rightarrow\ -68x_1+34x_2=0,\quad 34x_1-17x_2=0\ \Rightarrow\ 2x_1-x_2=0\ \Rightarrow\ 2x_1=x_2$$
$$\therefore\ x=\begin{pmatrix}1\\2\end{pmatrix}\ \text{is an eigenvector for }\lambda_1=85$$
$$\Rightarrow\ u_1=\frac{x}{\|x\|}=\begin{pmatrix}\frac1{\sqrt5}\\\frac2{\sqrt5}\end{pmatrix}\quad\text{(unit eigenvector corresponding to }\lambda_1=85)$$
$$(AA^T-\lambda_2I)x=0\ \Rightarrow\ \begin{pmatrix}17&34\\34&68\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$\Rightarrow\ 17x_1+34x_2=0,\quad 34x_1+68x_2=0\ \Rightarrow\ x_1+2x_2=0\ \Rightarrow\ x_1=-2x_2$$
$$\therefore\ x=\begin{pmatrix}-2\\1\end{pmatrix}\ \text{is an eigenvector for }\lambda_2=0$$
$$\Rightarrow\ u_2=\frac{x}{\|x\|}=\begin{pmatrix}-\frac2{\sqrt5}\\\frac1{\sqrt5}\end{pmatrix}\quad\text{(unit eigenvector corresponding to }\lambda_2=0)$$
(b) If we take $v_1=\begin{pmatrix}\frac1{\sqrt{17}}\\\frac4{\sqrt{17}}\end{pmatrix}$ from the previous problem, let's choose a sign for $\sigma_1$ so that $Av_1=\sigma_1u_1$ and verify the SVD for the matrix $A$:
$$Av_1=\begin{pmatrix}1&4\\2&8\end{pmatrix}\begin{pmatrix}\frac1{\sqrt{17}}\\\frac4{\sqrt{17}}\end{pmatrix}=\begin{pmatrix}\sqrt{17}\\2\sqrt{17}\end{pmatrix}=\sqrt{85}\begin{pmatrix}\frac1{\sqrt5}\\\frac2{\sqrt5}\end{pmatrix}=\sqrt{85}\,u_1=\sigma_1u_1$$
$$\begin{pmatrix}u_1&u_2\end{pmatrix}\begin{pmatrix}\sigma_1&\\&0\end{pmatrix}\begin{pmatrix}v_1&v_2\end{pmatrix}^T=\frac1{\sqrt5}\begin{pmatrix}1&-2\\2&1\end{pmatrix}\begin{pmatrix}\sqrt{85}&0\\0&0\end{pmatrix}\frac1{\sqrt{17}}\begin{pmatrix}1&4\\-4&1\end{pmatrix}$$
$$=\begin{pmatrix}\sqrt{17}&0\\2\sqrt{17}&0\end{pmatrix}\frac1{\sqrt{17}}\begin{pmatrix}1&4\\-4&1\end{pmatrix}=\begin{pmatrix}1&4\\2&8\end{pmatrix}=A$$
(c) Since the rank of $A$ (and consequently of $A^TA$ and $AA^T$) is equal to 1, it follows that $u_1$ forms a basis for $C(A)$, while $u_2$ forms a basis for $N(A^T)$. On the other hand, $v_1$ forms a basis for $C(A^T)$, while $v_2$ forms a basis for $N(A)$.
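The hand computation above can be cross-checked against NumPy's SVD routine (signs of singular vectors may differ, so the check compares absolute values):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 8.0]])

# AA^T has eigenvalues 85 and 0, so the only singular value is sqrt(85)
lam = np.sort(np.linalg.eigvalsh(A @ A.T))
assert np.allclose(lam, [0.0, 85.0])

U, s, Vt = np.linalg.svd(A)
assert np.isclose(s[0], np.sqrt(85)) and np.isclose(s[1], 0)
# u1 and v1 match the hand computation up to sign
assert np.allclose(np.abs(U[:, 0]), [1/np.sqrt(5), 2/np.sqrt(5)])
assert np.allclose(np.abs(Vt[0]), [1/np.sqrt(17), 4/np.sqrt(17)])
```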
$$\Rightarrow\ \lambda^2-3\lambda+1=0$$
$$\Rightarrow\ \lambda=\tfrac32\pm\tfrac{\sqrt5}2\ \text{are the eigenvalues.}$$
Now we determine the eigenvectors:
$$A^TAx=\lambda x\ \Rightarrow\ \begin{pmatrix}2&1\\1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\lambda\begin{pmatrix}x_1\\x_2\end{pmatrix}$$
$$\Rightarrow\ x_1(\lambda-2)=x_2\ \Longleftrightarrow\ x_1=\frac{x_2}{\lambda-2}$$
With $x_2=1$:
$$x_1=\frac1{\lambda-2}=\frac2{3\pm\sqrt5-4}=\frac2{-1\pm\sqrt5}=\frac{1\pm\sqrt5}2$$
$$\therefore\ v_1=\begin{pmatrix}\frac{1+\sqrt5}2\\1\end{pmatrix}\quad\text{and}\quad v_2=\begin{pmatrix}\frac{1-\sqrt5}2\\1\end{pmatrix},\quad\text{whose norms are}$$
$$\|v_1\|=\sqrt{\frac{5+\sqrt5}2}\quad\text{and}\quad\|v_2\|=\sqrt{\frac{5-\sqrt5}2}$$
The unit vectors are
$$\hat v_1=\begin{pmatrix}\sqrt{\frac{5+\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5+\sqrt5}}\end{pmatrix}\quad\text{and}\quad\hat v_2=\begin{pmatrix}-\sqrt{\frac{5-\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5-\sqrt5}}\end{pmatrix}$$
On the other hand we have $\sigma_1=\sqrt{\frac{3+\sqrt5}2}$ and $\sigma_2=\sqrt{\frac{3-\sqrt5}2}$.
$$u_1=\frac1{\sigma_1}A\hat v_1=\frac1{\sqrt{\frac{3+\sqrt5}2}}\begin{pmatrix}1&1\\1&0\end{pmatrix}\begin{pmatrix}\sqrt{\frac{5+\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5+\sqrt5}}\end{pmatrix}=\begin{pmatrix}\sqrt{\frac{5+\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5+\sqrt5}}\end{pmatrix}=\hat v_1$$
$$u_2=\frac1{\sigma_2}A\hat v_2=\frac1{\sqrt{\frac{3-\sqrt5}2}}\begin{pmatrix}1&1\\1&0\end{pmatrix}\begin{pmatrix}-\sqrt{\frac{5-\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5-\sqrt5}}\end{pmatrix}=\begin{pmatrix}\sqrt{\frac{5-\sqrt5}{10}}\\[3pt]-\sqrt{\frac2{5-\sqrt5}}\end{pmatrix}=-\hat v_2$$
Finally the SVD decomposition of the matrix $A$ results:
$$A=U\Sigma V^T=\begin{pmatrix}\sqrt{\frac{5+\sqrt5}{10}}&\sqrt{\frac{5-\sqrt5}{10}}\\[3pt]\sqrt{\frac2{5+\sqrt5}}&-\sqrt{\frac2{5-\sqrt5}}\end{pmatrix}\begin{pmatrix}\sqrt{\frac{3+\sqrt5}2}&0\\0&\sqrt{\frac{3-\sqrt5}2}\end{pmatrix}\begin{pmatrix}\sqrt{\frac{5+\sqrt5}{10}}&\sqrt{\frac2{5+\sqrt5}}\\[3pt]-\sqrt{\frac{5-\sqrt5}{10}}&\sqrt{\frac2{5-\sqrt5}}\end{pmatrix}$$
Lecture Note 38
For $A=\begin{pmatrix}1&1&1&1\end{pmatrix}$:
$$AA^T=4,\qquad\det(AA^T-\lambda I)=0\ \Rightarrow\ 4-\lambda=0\ \Rightarrow\ \lambda=4,$$
and $x=1$ is an eigenvector.
$$A^TA=\begin{pmatrix}1\\1\\1\\1\end{pmatrix}\begin{pmatrix}1&1&1&1\end{pmatrix}=\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}$$
$\det(A^TA-\lambda I)=0$: the eigenvalues of $A^TA$ are 4, 0, 0, 0.
For $\lambda=4$, $(A^TA-\lambda I)x=0$:
$$\begin{pmatrix}-3&1&1&1\\1&-3&1&1\\1&1&-3&1\\1&1&1&-3\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}0\\0\\0\\0\end{pmatrix}\ \Longrightarrow\ \begin{cases}-2x_3+2x_4=0\\-8x_2+4x_3+4x_4=0\\x_1-3x_2+x_3+x_4=0\end{cases}$$
$$\Rightarrow\ x_3=x_4.\ \text{Let }x_4=1\ \Rightarrow\ x_3=1,\ x_2=1,\ x_1=1$$
$$\therefore\ x=\begin{pmatrix}1\\1\\1\\1\end{pmatrix}\quad\text{or, normalized,}\quad x=\begin{pmatrix}\frac12\\\frac12\\\frac12\\\frac12\end{pmatrix}$$
For $\lambda=0$, $(A^TA-\lambda I)x=0$:
$$\begin{pmatrix}1&1&1&1\\1&1&1&1\\1&1&1&1\\1&1&1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\\x_4\end{pmatrix}=\begin{pmatrix}0\\0\\0\\0\end{pmatrix}\ \Rightarrow\ x_1+x_2+x_3+x_4=0$$
$$\therefore\ x=\begin{pmatrix}-1\\1\\0\\0\end{pmatrix},\ \begin{pmatrix}-1\\0\\1\\0\end{pmatrix},\ \begin{pmatrix}-1\\0\\0\\1\end{pmatrix}$$
The normalized eigenvectors for $\lambda=0$ are
$$\begin{pmatrix}-\frac1{\sqrt2}\\\frac1{\sqrt2}\\0\\0\end{pmatrix},\ \begin{pmatrix}-\frac1{\sqrt2}\\0\\\frac1{\sqrt2}\\0\end{pmatrix},\ \begin{pmatrix}-\frac1{\sqrt2}\\0\\0\\\frac1{\sqrt2}\end{pmatrix}$$
$$\therefore\ \text{SVD of }A=U\Sigma V^T=\begin{pmatrix}1\end{pmatrix}\begin{pmatrix}2&0&0&0\end{pmatrix}\begin{pmatrix}\frac12&\frac12&\frac12&\frac12\\-\frac1{\sqrt2}&\frac1{\sqrt2}&0&0\\-\frac1{\sqrt2}&0&\frac1{\sqrt2}&0\\-\frac1{\sqrt2}&0&0&\frac1{\sqrt2}\end{pmatrix}$$
$$\therefore\ A^+=V\Sigma^+U^T=\begin{pmatrix}\frac12&-\frac1{\sqrt2}&-\frac1{\sqrt2}&-\frac1{\sqrt2}\\\frac12&\frac1{\sqrt2}&0&0\\\frac12&0&\frac1{\sqrt2}&0\\\frac12&0&0&\frac1{\sqrt2}\end{pmatrix}\begin{pmatrix}\frac12\\0\\0\\0\end{pmatrix}\begin{pmatrix}1\end{pmatrix}=\begin{pmatrix}\frac14\\\frac14\\\frac14\\\frac14\end{pmatrix}$$
Now we will find the SVD of the matrix $B$:
$$B=\begin{pmatrix}0&1&0\\1&0&0\end{pmatrix},\qquad B^T=\begin{pmatrix}0&1\\1&0\\0&0\end{pmatrix}$$
$$BB^T=\begin{pmatrix}0&1&0\\1&0&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\\0&0\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}$$
$$\det(BB^T-\lambda I)=0\ \Rightarrow\ \begin{vmatrix}1-\lambda&0\\0&1-\lambda\end{vmatrix}=0\ \Rightarrow\ \lambda=1,1$$
The eigenvectors of $BB^T$ solve $(BB^T-\lambda I)x=0$:
$$\begin{pmatrix}0&0\\0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}$$
$$\therefore\ \text{the eigenvectors are}\ x=\begin{pmatrix}0\\1\end{pmatrix},\begin{pmatrix}1\\0\end{pmatrix}.\quad\text{Hence}\quad U=\begin{pmatrix}0&1\\1&0\end{pmatrix}$$
$$B^TB=\begin{pmatrix}0&1\\1&0\\0&0\end{pmatrix}\begin{pmatrix}0&1&0\\1&0&0\end{pmatrix}=\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix}$$
$\det(B^TB-\lambda I)=0$: the eigenvalues of $B^TB$ are 1, 1, 0.
For $\lambda=1$, $(B^TB-\lambda I)x=0$:
$$\begin{pmatrix}0&0&0\\0&0&0\\0&0&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}\ \therefore\ x=\begin{pmatrix}1\\0\\0\end{pmatrix},\begin{pmatrix}0\\1\\0\end{pmatrix}$$
For $\lambda=0$, $(B^TB-\lambda I)x=0$:
$$\begin{pmatrix}1&0&0\\0&1&0\\0&0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\\x_3\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix}$$
$x_1=0$, $x_2=0$, $x_3=1$ (free variable), so the eigenvector is $x=\begin{pmatrix}0\\0\\1\end{pmatrix}$. Collecting the three orthonormal eigenvectors,
$$V=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$$
$$\text{SVD of }B=U\Sigma V^T=\begin{pmatrix}0&1\\1&0\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}^T$$
$$\text{Pseudoinverse: }B^+=V\Sigma^+U^T=\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}\begin{pmatrix}1&0\\0&1\\0&0\end{pmatrix}\begin{pmatrix}0&1\\1&0\end{pmatrix}=\begin{pmatrix}0&1\\1&0\\0&0\end{pmatrix}$$
Now we will find the SVD of the matrix $C$:
$$C=\begin{pmatrix}1&1\\0&0\end{pmatrix},\qquad C^T=\begin{pmatrix}1&0\\1&0\end{pmatrix}$$
$$CC^T=\begin{pmatrix}1&1\\0&0\end{pmatrix}\begin{pmatrix}1&0\\1&0\end{pmatrix}=\begin{pmatrix}2&0\\0&0\end{pmatrix}$$
$\det(CC^T-\lambda I)=0$: the eigenvalues of $CC^T$ are 2, 0.
For $\lambda=2$, $(CC^T-\lambda I)x=0$:
$$\begin{pmatrix}0&0\\0&-2\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}\ \Rightarrow\ -2x_2=0\ \Rightarrow\ x_2=0,\ x_1=1\ \text{(free variable)}\ \therefore\ x=\begin{pmatrix}1\\0\end{pmatrix}$$
For $\lambda=0$, $(CC^T-\lambda I)x=0$:
$$\begin{pmatrix}2&0\\0&0\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}\ \Rightarrow\ 2x_1=0\ \Rightarrow\ x_1=0,\ x_2=1\ \text{(free variable)}\ \therefore\ x=\begin{pmatrix}0\\1\end{pmatrix}$$
$$U=\begin{pmatrix}1&0\\0&1\end{pmatrix}$$
$$C^TC=\begin{pmatrix}1&0\\1&0\end{pmatrix}\begin{pmatrix}1&1\\0&0\end{pmatrix}=\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
$\det(C^TC-\lambda I)=0$: the eigenvalues of $C^TC$ are 2, 0.
For $\lambda=2$, $(C^TC-\lambda I)x=0$:
$$\begin{pmatrix}-1&1\\1&-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}\ \Rightarrow\ x_1=x_2;\ \text{the eigenvector is}\ x=\begin{pmatrix}1\\1\end{pmatrix}.\ \text{So}\ v_1=\begin{pmatrix}\frac1{\sqrt2}\\[2pt]\frac1{\sqrt2}\end{pmatrix}$$
For $\lambda=0$, $(C^TC-\lambda I)x=0$:
$$\begin{pmatrix}1&1\\1&1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}\ \Rightarrow\ x_1=-x_2;\ \text{the eigenvector is}\ x=\begin{pmatrix}1\\-1\end{pmatrix}.\ \text{So}\ v_2=\begin{pmatrix}\frac1{\sqrt2}\\[2pt]-\frac1{\sqrt2}\end{pmatrix}$$
Hence
$$V=\begin{pmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\[2pt]\frac1{\sqrt2}&-\frac1{\sqrt2}\end{pmatrix}$$
$$\text{SVD of }C=U\Sigma V^T=\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}\sqrt2&0\\0&0\end{pmatrix}\begin{pmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\[2pt]\frac1{\sqrt2}&-\frac1{\sqrt2}\end{pmatrix}^T$$
$$\text{Pseudoinverse of }C:\ C^+=V\Sigma^+U^T=\begin{pmatrix}\frac1{\sqrt2}&\frac1{\sqrt2}\\[2pt]\frac1{\sqrt2}&-\frac1{\sqrt2}\end{pmatrix}\begin{pmatrix}\frac1{\sqrt2}&0\\0&0\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}=\begin{pmatrix}\frac12&0\\[2pt]\frac12&0\end{pmatrix}$$
Lecture Note 39
For a positive definite matrix $A$, the norm of $A$, denoted $\|A\|$, is $\lambda_{\max}$ of $A$, and
$$\|A^{-1}\|=\frac1{\lambda_{\min}\text{ of }A}=\frac12$$
Now
$$B=\begin{pmatrix}2&1\\1&2\end{pmatrix}$$
$\Rightarrow\det(B-\lambda I)=0$
$\Rightarrow(2-\lambda)^2-1=0$
$\Rightarrow\lambda=3,1$
$$\|B\|=3,\qquad\|B^{-1}\|=\frac1{\lambda_{\min}\text{ of }B}=1,\qquad c=\|B\|\,\|B^{-1}\|=3$$
Now let
$$D=\begin{pmatrix}3&1\\1&1\end{pmatrix}$$
$\Rightarrow\det(D-\lambda I)=0$
$\Rightarrow(3-\lambda)(1-\lambda)-1=0$
$\Rightarrow\lambda^2-4\lambda+2=0$
$\Rightarrow\lambda=2\pm\sqrt2$
$$\|D\|=\lambda_{\max}\text{ of }D=2+\sqrt2,\qquad\|D^{-1}\|=\frac1{\lambda_{\min}\text{ of }D}=\frac1{2-\sqrt2},\qquad c=\|D\|\,\|D^{-1}\|=\frac{2+\sqrt2}{2-\sqrt2}$$
Question-17: Find the norms and condition numbers from the square roots of $\lambda_{\max}(A^TA)$ and $\lambda_{\min}(A^TA)$:
$$\|A^{-1}\|=\frac1{\sqrt{\lambda_{\min}\text{ of }A^TA}}=\frac12,\qquad c=\|A\|\,\|A^{-1}\|=2\cdot\tfrac12=1$$
Next matrix, name it
$$B=\begin{pmatrix}1&1\\0&0\end{pmatrix},\quad B^T=\begin{pmatrix}1&0\\1&0\end{pmatrix}\ \Rightarrow\ B^TB=\begin{pmatrix}1&1\\1&1\end{pmatrix}$$
The eigenvalues are 2, 0.
$$\|B\|=\sqrt{\lambda_{\max}\text{ of }B^TB}=\sqrt2,\qquad\|B^{-1}\|=\frac1{\sqrt{\lambda_{\min}\text{ of }B^TB}}=\frac10=\infty,\qquad c=\|B\|\,\|B^{-1}\|=\infty$$
Next matrix, name it
$$D=\begin{pmatrix}1&1\\-1&1\end{pmatrix},\quad D^T=\begin{pmatrix}1&-1\\1&1\end{pmatrix}\ \Rightarrow\ D^TD=\begin{pmatrix}2&0\\0&2\end{pmatrix}$$
The eigenvalues are 2, 2.
$$\|D\|=\sqrt{\lambda_{\max}\text{ of }D^TD}=\sqrt2,\qquad\|D^{-1}\|=\frac1{\sqrt{\lambda_{\min}\text{ of }D^TD}}=\frac1{\sqrt2},\qquad c=\|D\|\,\|D^{-1}\|=\sqrt2\cdot\frac1{\sqrt2}=1$$
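For a symmetric positive definite matrix the condition number reduces to the eigenvalue ratio, which matches NumPy's built-in 2-norm condition number; a minimal sketch with the matrix $B$ above:

```python
import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 2.0]])                 # symmetric, eigenvalues 3 and 1

lam = np.linalg.eigvalsh(B)
c = lam.max() / lam.min()                  # lambda_max / lambda_min
assert np.isclose(c, 3.0)

# numpy's 2-norm condition number agrees (it uses singular values,
# which equal |eigenvalues| for a symmetric matrix)
assert np.isclose(np.linalg.cond(B, 2), 3.0)
```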
Lecture Note 40
ITERATIVE METHODS FOR Ax = b
We know that solving the system $Ax=b$ by Gaussian elimination requires a finite number of steps for a full matrix, but when the matrix is large we may have to settle for an approximate $x$ that can be obtained more quickly.
An iterative method is easy to invent, by splitting the matrix $A$. Put $A=S-T$ in $Ax=b$
$\Rightarrow Sx=Tx+b$. Iterating from $x_k$ to $x_{k+1}$: $Sx_{k+1}=Tx_k+b$.
The iterative method in the above equation is convergent if and only if every eigenvalue of $S^{-1}T$ satisfies $|\lambda|<1$. Its rate of convergence depends on the maximum of $|\lambda|$.
Working Procedure:
1. Write $A$ as the sum of a lower triangular $L$, a diagonal $D$, and an upper triangular $U$ matrix, i.e. $A=L+D+U$.
2. Select $S=$ diagonal part of $A$ (Jacobi's method), i.e. $S=D$, or $S=$ triangular part of $A$ (Gauss-Seidel method), i.e. $S=D+L$.
3. $T=(-L-U)$ for Jacobi's method; $T=(-U)$ for the Gauss-Seidel method.
4. Find $S^{-1}T=D^{-1}(-L-U)$ for Jacobi's method, or $S^{-1}T=(D+L)^{-1}(-U)$ for the Gauss-Seidel method.
Answer:
$$A=\begin{pmatrix}2&-1&0\\-1&2&-1\\0&-1&2\end{pmatrix}=L+D+U=\begin{pmatrix}0&0&0\\-1&0&0\\0&-1&0\end{pmatrix}+\begin{pmatrix}2&0&0\\0&2&0\\0&0&2\end{pmatrix}+\begin{pmatrix}0&-1&0\\0&0&-1\\0&0&0\end{pmatrix}$$
$$\text{Jacobi matrix}\quad D^{-1}(-L-U)=\begin{pmatrix}\frac12&0&0\\0&\frac12&0\\0&0&\frac12\end{pmatrix}\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}=\begin{pmatrix}0&\frac12&0\\\frac12&0&\frac12\\0&\frac12&0\end{pmatrix}$$
$$\text{Gauss-Seidel matrix}\quad(D+L)^{-1}(-U)=\begin{pmatrix}\frac12&0&0\\\frac14&\frac12&0\\\frac18&\frac14&\frac12\end{pmatrix}\begin{pmatrix}0&1&0\\0&0&1\\0&0&0\end{pmatrix}=\begin{pmatrix}0&\frac12&0\\0&\frac14&\frac12\\0&\frac18&\frac14\end{pmatrix}$$
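The working procedure translates almost line for line into code; a minimal sketch of Jacobi's method on the tridiagonal matrix from the answer above, with an arbitrary right-hand side chosen for illustration:

```python
import numpy as np

# Tridiagonal matrix from the answer above; b is an illustrative choice
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])

D = np.diag(np.diag(A))                    # S = D for Jacobi
T = D - A                                  # T = -(L + U)
J = np.linalg.inv(D) @ T                   # Jacobi iteration matrix S^{-1} T

# Convergence test: spectral radius of S^{-1} T must be below 1
assert max(abs(np.linalg.eigvals(J))) < 1

x = np.zeros(3)
for _ in range(200):                       # iterate S x_{k+1} = T x_k + b
    x = np.linalg.solve(D, T @ x + b)
assert np.allclose(x, np.linalg.solve(A, b))
```

For Gauss-Seidel one would take `S = np.tril(A)` and `T = S - A` instead; the loop is unchanged.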