Nothing Special   »   [go: up one dir, main page]

Applied Linerar Algebra Final

Download as pdf or txt
Download as pdf or txt
You are on page 1of 193

Linear Algebra

September 29, 2020

1 Introduction
Linear Algebra is a part of Mathematics and applied in every field of life. With the help of
Linear Algebra we can convert our real life problems into various models involving, which are
easy to deal with. One can visualize the objects in higher dimensions to get more information.
It helps in solving Differential Equations, game developing, Machine Learning, Data Mining,
Image Processing, Traffic Controlling, Electrical Circuit problems, Genetics, Cryptography, var-
ious economic model and several other fields. With the help of linear transformation concept one
can study the properties of different entities in different spaces. Also a complicated geometrical
problem can be studied by converting it into a simple algebraic problem. Hence Linear Algebra
can be considered as a bridge between Geometry and Algebra.

2 System of Linear Equations


Definition : A linear system of equations is formed when two or more linear equations involving
two or more unknowns considered together to represent a problem. For example:

a11 x1 + a12 x2 + ... + a1n xn = b1


a21 x1 + a22 x2 + ... + a2n xn = b2
... ... ...
am1 x1 + am2 x2 + ... + amn xn = bm
 
a11 a12 ... a1n
 a21 a22 ... a2n 
Which can also be written as Ax = b. Where A =   ...

... ... ... 
am1 am2 ... amn
is a m × n matrix having thecoefficients
 of ith unknown as ith column elements and is said
x1
 x2 
 
.
to be coefficient matrix. x =  .  is a vector of unknowns said to be solution vector and

 
.
xn

1
 
b1
 b2 
 
 . 
b=
 .  is said to be the righthand side vector or nonhomogeneous vector.

 
 . 
bm
The above system of equations can also be represented as
       
a11 a12 a1n b1
 a21   a22   a2n   b2 
       
 . 
 x1 +  .  x 2 + · · · +  .  xn =  .  .
     

 .   .   .   . 
       
 .   .   .   . 
am1 am2 amn bm
 
x1
 x2 
 
.
The values of x1 , x2 , · · · , xn for which the given equations are satisfied form a vector x = 
.

 
.
xn
is a solution of this system of equations.
A system of equations is said to singular if the corresponding coefficient matrix is singular.
A matrix is said to be singular iff its rows or columns are linearly related to each other. That
is a row (or column) can be obtained from the addition of scalar multiples of other rows(or
columns).
The system of equations

a11 x1 + a12 x2 = b1
a21 x1 + a22 x2 = b2

is singular if aa11
21
= aa12
22
. Otherwise it is said to nonsingular. A nonsingular system of equations
has a unique solution.

A singular system of equations has either infinitely many solutions or no solutions. Hence for
the above system of equations
a11 a12
1. if a21
6= a22
, then it has unique solution.
a11 a12 b1
2. if a21
= a22
6= b2
, then it has no solution.
a11 a12 b1
3. if a21
= a22
= b2
, then it has infinitely many solutions.

The following figures give an pictorial representation of the three cases discussed here.

2
Figure 1: One Solution

Figure 2: No Solution

3
Figure 3: whole lines of Solution

4
THE GEOMETRY OF LINEAR EQUATIONS
To solve a system of equations geometrically two methods are used.

(i) Row picture method :

1. Plot the straight lines corresponding to the given equations.

2. Find the points of intersection(s) if exist. The x-coordinate value of the point of intersec-
tion represents the value of x and y- coordinate value gives the value of y.

3. Here if the lines are intersecting then unique solution.

4. If they are parallel then no solution.

5. If they represent the same line then every point on the line is a solution of it.

(See the first figure)

(ii) Column picture method :

1. Write the given system of equations as a linear combinations of column vectors equal to
the rhs vector.
     
a11 a12 b
x+ y= 1 .
a21 a22 b2

2. Plot the points P = (a11 , a21 ), Q = (a12 , a22 ) in xy-plane.

3. Join each point with the origin O. Extend the lines.

4. Plot the point B = (b1 , b2 ). Draw a line from B to OP parallel to OQ and get the
coordinates of point of intersection (h1 , h2 ).

6.
5. Also, draw a line from B to OQ parallel to OP and get the coordinates of point of
intersection (k1 , k2 ).
       
a11 h1 a12 k
7. Find the values of x and y from the equations: x= and y= 1 .
a21 h2 a22 k2

(See the second figure)

Consider a system of equations

2x − y = 1
x + y = 5.

1
Figure 1: Row Picture Method

Figure 2: Column Picture Method

2
PROBLEM SET 1.2
Q7. Explain why the given system is singular by finding a combination of the three equations
that adds up to 0=1. What value should replace the last 0 on the r.h.s. to allow the equations
to have solutions and what is one of the solutions?
u+v+w =2
u + 2v + 3w = 1
v + 2w = 0.
Ans. Here in the left hand side Row1 + Row3 = Row2 but not in right hand side. Hence the
system is singular but no solution exists. If the last 0 is replaced by −1 then l.h.s. and r.h.s.
both satisfy the condition Row1
 + Row3
 = Row2.
3+w
Hence solution exists and −1 − 2w is a general solution. For every value of w it gives a
w
solution of
u+v+w =2
u + 2v + 3w = 1
v + 2w = −1.
 
4
In patricular, −3 is a solution of it.
1
Q 8. Under what condition on y1 , y2 , y3 do the points (0, y1 ), (1, y2 ), (2, y3 ) lie on a straight
line?
Ans. The points (0, y1 ), (1, y2 ), (2, y3 ) lie on a straight line means the slopes of the line joining
the points (0, y1 ) and (1, y2 ) and the points (1, y2 ) and (2, y3 ) are equal. That is,
y2 − y1 y3 − y2
=
1−0 2−1
which implies that y1 − 2y2 + y3 = 0. Hence for y1 − 2y2 + y3 = 0 the points (0, y1 ), (1, y2 ),
(2, y3 ) lie on a straight line.

Q 11. The column picture form of a system is


     
1 1 1
u 1 + v 2 + w 3 = b.
    
0 1 2
Show that the three columns on the left lie in the same plane by expressing the third column as
a combination of the first two. What are all the solutions (u, v, w) if b is the zero vector (0, 0, 0)?
     
1 1 1
Ans. Here it can be observed that 2C2 − C1 = C3 that is, 2 2 − 1 = 3. Hence the
    
1 0 2
system of the equations        
1 1 1 0
u 1 + v 2 + w 3 = 0 .
      
0 1 2 0

1
has infinitely many solutions. For every value of w the vector (w, −2w, w) represents a solu-
tion of it. All solutions of it is represented by the set {(w, −2w, w) ∈ R3 : w ∈ R}.

Q2. Sketch these three lines and decide if the equations are solvable :

x + 2y = 2
x−y =2
y = 1.

What happens if all the right-hand sides are zero? Is there any nonzero choice of right hand
sides that allows the three lines to intersect at the same point ?

Ans. The first figure represents the straight lines represented in the question.

x + 2y = 2
x−y =2
y = 1.

Which shows that there exist no point common to all straight lines. Hence no solution exists.

The second figure gives the straight lines with r.h.s. vector 0, that is

x + 2y = 0
x−y =0
y = 0.

Hence x = 0 and y = 0 is a solution of it.

The third figure gives the graph of

x + 2y = 2
x−y =2
y = 0.
 
2
With r.h.s. vector 2 and x = 2, y = 0 satisfies all the equations. Hence it has a solution at
0
(2, 0).

2
Figure 1: Solution does not exist

Figure 2: Solution exist and is the zero solution

3
Figure 3: Nonzero solution exists

4
Gaussian Elimination
Consider a system of linear equations.

a11 x1 + a12 x2 + ... + a1n xn = b1


a21 x1 + a22 x2 + ... + a2n xn = b2
.........
am1 x1 + am2 x2 + ... + amn xn = bm
 
a11 a12 ... a1n
 a21 a22 ... a2n 
Which can also be written as Ax = b. Where A =  is a m × n matrix
 ... ... ... ... 
am1 am2 ... amn
having the coefficients
  of ith unknown as ith column elements and is said to be  coefficient

x1 b1
 x2   b2 
   
 .   . 
matrix. x =  is a vector of unknowns said to be solution vector and b =
    is said
 .  . 

 .   . 
xn bm
to be the righthand side vector or nonhomogeneous vector.   The values of x 1 , x2 , · · · , xn
x1
 x2 
 
 . 
for which the given equations are satisfied form a vector x =  .  is a solution of this system

 
 . 
xn
of equations.

Let us try to understand elimination method by an example :

2u + v + w = 5
4u − 6v = −2
−2u + 7v + 2w = 9.

Here u, v, w are the unknowns.

To eliminate a variable, means making its coefficient 0.


(a) Let us subtract 2 times the first equation from the second
(b) Let us subtract −1 times the first equation from the third to get

2u + v + w = 5
−8v − 2w = −12
8v + 3w = 14.

Here the first variable u is eliminated.

Now to eliminate v, let us subtract (−1) times of the second equation from the third to get,

1
2u + v + w = 5
-8v − 2w = −12
1w = 2.

These values 2, −8, 1 are called pivots. The cofficient of u in the first equation and the
coefficient of v in the second equation and the coefficient of w in the third equation in the
triangular form are called 1st, 2nd, 3rd pivots respectively.

In matrix form
     
2 1 1 5 2 1 1 5 2 1 1 5
 4 -6 0 -12 → 0 -8 -2 -12 → 0 -8 -2 -12
-2 7 2 9 0 8 3 14 0 0 1 2

Now back substitution yields the complete solution in the opposite order, beginning with the
last unknown. The last equation 1w = 2 gives w = 2. Then the second equation  −8v−2w
  = −12
u 1
gives v = 1. Finally, the first equation 2u + v + w = 5 gives u = 1. Hence v = 1 is the
  
w 2
required solution.

The Breakdown of Elimination :

If a zero appears in a pivot position, elimination has to stop either temporarily or permanently.

If the zero pivot can be replaced by a nonzero value by row exchange process then the
breakdown of elemination process is temporary or else it is permanent.
Consider an example of Nonsingular case :

u+v+w =−
2u + 2v + 5w = −
4u + 6v + 8w = −

=⇒

u+v+w =−
3w = −
2v + 4w = −

=⇒

u+v+w =−
2v + 4w = −
3w = −

The breakdown is temporary.

2
Consider an example of singular case :

u+v+w =−
2u + 2v + 5w = −
4u + 4v + 8w = −

=⇒

u+v+w =−
3w = −
4w = −.

In this case, there is no exchange of equations that can avoid zero in the second pivot position.
Hence the breakdown id permanent.

Singular system of equations: A system of linear equations is said to be singular if and


only if the corresponding coefficient matrix is singular.

A matrix is singular if its one row (column) can be written as a linear combination of other
rows (columns).
The breakdown is temporary for a nonsingular system of equations (Having full
set of pivots). The breakdown is permanent if the system of linear equations is
singular.

Problem set 1.3


Q 1. Choose a r.h.s. which gives no solution and another r.h.s. which gives infinitely many
solutions. What are two of those solutions?

3x + 2y = 10
6x + 4y =?.

Ans. Here
3 2 10
= = =⇒ ? = 20.
6 4 ?
Hence, if the r.h.s. ? 6= 20 then no solution exists. For r.h.s. ? = 20 the system has infinitely
many solutions. Every point on the straight line 3x + 2y = 10 is a solution. In particular, x = 2,
y = 2 and x = 1, y = 3.5 are two solutions.

Q 3. Choose a coefficient b that makes this system singular.

2x + by = 16
4x + 8y = g

3
Then choose a r.h.s. g that makes it solvable. Find two solutions in that singular case.

Ans. The system is singular ⇐⇒ 42 = 8b =⇒ b = 4. The system is singular and solvable


⇐⇒ 42 = 8b = 16g
=⇒ b = 4 & g = 32. In this case, the system has infinitely many solutions.
Every point on the straight line 2x + 4y = 16 is a solution. In particular, x = 2, y = 3 and
x = 6, y = 1 are two solutions.

Q 6. What multiple l of equation 1 should be subtracted from equation 2.

2x + 3y = 1
10x + 9y = 1.

After this elimination step, write down the upper triangular system and darken the two pivots.

10
Ans. Here 2
= 5 = l. Hence
 5 Multiple
 of equation 1 is subtracted from equation 2 to get
2 3 2 3
the coefficient matrix ∼ . The pivots are 2 and −6.
10 9 0 -6
Q7. What test on b1 and b2 decides where these two equations allow a solution? How many
solutions will they have? Draw the column pictures.

3x − 2y = b1
6x − 4y = b2 .
     
3 -2 b1
Ans. The given system of equations can be written as x +y = .
6 -4 b2
−2
Note 63 = −4 =⇒ 2b1 = b2 . Hence the two equations allow a solution and they have infinitely
many solutions. If (b1 , b2 ) point lies on the straight line joining (−2, −4), (0, 0) and (3, 6). Then
the system has infinite number of solutions.

Figure 1: Column Picture

4
Problem set 1.3 continued...
Q 8. For which numbers a does elimination breakdown (a) Permanently (b) Temporarily ?

ax + 3y = −3
4x + 6y = 6.

Ans. Here
a 3
= =⇒ a = 2.
4 6
=⇒ Thesystem is singular. Hence there is a permanent breakdown of elimination process.
2 3 2 3
∼ . The second pivot is missing and cannot be replaced by any nonzero value by
4 6 0 0
any row exchange process. Hence there exist a breakdown and the breakdown is permanent.
 
a 3 0 3
For a = 0, 4
6= 6 , the system is nonsingular. Hence the breakdown is temporary.
4 6
=⇒ The first pivot is missing,
 hence
 the elimination process breaksdown. By interchanging the
4 6
1st and 2nd rows, we have . Both the pivots 4, 3 are nonzero, hence the breakdown is
0 3
temporary.

Q 14. Which number q makes this system singular and which right hand side t gives it
infinitely many soluions? Find the solution that has z = 1

x + 4y − 2z = 1
x + 7y − 6z = 6
3y + qz = t.

Ans. The augment matrix of the given system of equations is :


     
1 4 -2 1 1 4 -2 1 1 4 -2 1
1 7 -6 6 R2 − R1 → R2 ∼ 0 3 -4 5 R3 − R2 → R3 ∼ 0 3 -4 5 .
0 3 q t 0 3 q t 0 0 q+4 t-5
Here q + 4 = 0 makes the 3rd pivot missing and hence the system is singular. For q + 4 = 0,
t − 5 = 0 we get infinitely many solutions since

   
1 4 -2 1 1 4 -2 1
0 3 -4 5 . = 0 3 -4 5.
0 0 q+4 t-5 0 0 0 0

Hence 3y − 4z = 5, z = 1 =⇒ y = 3 and x + 4y − 2z = 1 =⇒ x = −9.


   
x -9
The solution is y =  3 .
z 1

1
Q 16. If rows 1 and 2 are the same, how far can you get with elimination? Which pivot is
missing?

2x − y + z = 0
2x − y + z = 0
4x + y + z = 2

Ans. The augment matrix of the given system of equations is :


     
2 -1 1 0 2 -1 1 0 2 -1 1 0
2 -1 1 0 R2 − R1 → R2 ∼ 0 0 0 0 R3 − 2R1 → R3 ∼ 0 0 0 0.
4 1 1 2  4 1 1  2 0 3 -1 2
2 -1 1 0
Interchanging R2 with R3 , we get 0 3 -1 2. The second pivot is missing hence elimination
0 0 0 0
breaks down and a row exchange gives a nonzero second pivot but the third pivot is missing,
hence the breakdown is permanent.

Q 16. If columns 1 and 2 are the same, how far can you get with elimination? Which pivot
is missing?

2x + 2y + z = 0
4x + 4y + z = 0
6x + 6y + z = 2

Ans. The augment matrix of the given system of equations is :


     
2 2 1 0 2 2 1 0 2 2 1 0
4 4 1 0 R2 − 2R1 → R2 ∼ 0 0 -1 0 R3 − 3R1 → R3 ∼ 0 0 -1 0.
6 6 1 2 6 6 1 2 0 0 -2 2
The second pivot is missing and the breakdown is permanent.

2
Matrix notation and Matrix multiplication
Matrix addition is compatible If and only if matrix A and B both have same number of rows
and same number of columns.

If C = A + B then cij = ith row jth column element of C

cij = aij + bij


.      
1 2 2 3 3 5
For example, let A = and B = then C = A + B = .
5 7 4 16 9 23
Matrix multiplication A × B is compatible if and only if number of columns of A equals to
number of rows of B. i.e. A = [aij ]m×n and B = [bij ]n×p then A × B is possible. In this case,
n
X
cij = aik · bkj
k=1

     
1 2 2 3 10 51
For example C = A × B = × = , c21 = 5 × 2 + 7 × 4 = 38
5 7 4 16 38 127

Elementary Row Operations


There are three elimintary row operations :

1. Ri + kRj → Ri Means kth multiple of jth row is added with ith row.

2. Ri ↔ Rj Means interchange ith row with jth row.

3. cRi → Ri Means ith row is replaced by its c multiple.


 
1 0 0
Consider the 3 × 3 identity matrix I = 0 1 0.
0 0 1
     
1 0 0 1 0 0 1 0 0
Now I = 0 1 0 R2 -3R1 → R2 ∼ -3 1 0= E =⇒ E −1 = +3 1 0.
0 0 1 0 0 1 0 0 1
   
1 0 0 0 1 0
I = 0 1 0 R2 ↔ R1 ∼ 1 0 0= F =⇒ F −1 = F
0 0 1 0 0 1 
  
1 0 0 1 0 0 1 0 0
I = 0 1 0 5R2 → R2 ∼ 0 5 0 = G =⇒ G−1 = 0 1/5 0 .
0 0 1 0 0 1 0 0 1

1
 
1 2 3
Consider A = 4 5 6.
 7 8 9     
1 2 3 4 5 6 1 2 3
Then EA = −31 −35 −39, F A = 1 2 3, GA = 12 15 18.
7 8 9 7 8 9 7 8 9
Q. Give 3 by 3 examples (not just the zero matrix) of

(a) a diagonal matrix : aij = 0 if i 6= j.


 
4 0 0
Ans. 0 2 0
0 0 9
(b) a symmetric matrix aij = aji ∀ i & j.
 
1 2 3
Ans. 2 15 18.
3 18 9
(c) an upper triangular matrix aij = 0 if i > j.
 
1 2 3
Ans. 0 15 18.
0 0 9
(b) a skew-symmetric matrix aij = −aji ∀ i & j.
 
0 2 −3
Ans. −2 0 18 .
3 −18 0
 
cos(θ) -sin(θ)
Q. The matrix that rotates the xy plane by an angle θ is A(θ) = . Verify
sin(θ) cos(θ)
that A(θ1 )A(θ2 ) = A(θ1 + θ2 ). What is A(θ) times A(−θ) ?
   
cos(θ1 ) -sin(θ1 ) cos(θ2 ) -sin(θ2 )
Ans. We know A(θ1 ) = and A(θ2 ) = .
sin(θ1 ) cos(θ1 ) sin(θ2 ) cos(θ2 )
 
cos(θ1 )cos(θ2 ) − sin(θ1 )sin(θ2 ) -cos(θ1 )sin(θ2 ) − cos(θ2 )sin(θ1 )
A(θ1 )A(θ2 ) = .
sin(θ1 )cos(θ2 ) + cos(θ1 )sin(θ2 ) -sin(θ1 )sin(θ2 ) + cos(θ1 )cos(θ2 )
 
cos(θ1 + θ2 ) -sin(θ1 + θ2 )
= = A(θ1 + θ2 )
sin(θ1 + θ2 ) cos(θ1 + θ2 )
   
cos(0) -sin(0) 1 0
A(θ)A(−θ) = A(θ − θ) = A(0) = = = I.
sin(0) cos(0) 0 1
Q. Compute the products.
           
4 0 1 3 12+0+5 17 1 0 0 5 5     
0 1 0 4 =  0+4+0  =  4 , 0 1 0 -2 = = -2, 2 0 1 2
= .
1 3 1 4
4 0 1 5 12+0+5 17 0 0 1 3 3

2
Q. For the third one, draw the column vectors (2, 1) end (0, 3). Multiplying by (1, 1) just
adds the vectors (do it graphically).

Ans. The graphical representation is :

Figure 1: whole lines of Solution

3
Lecture-6

1.5 Triangular Factors and Row Exchanges


Course Outcome: Students will have understanding about the triangular
factorization like LU and LDU factorization and permutation matrices that
are being used for row exchanges purpose.
Triangular Factorization

 
2 1 1
 
Given: A=   4 −6 0 

 
−2 7 2
 
2 1 1
 
= 0 −8 −2  R2 ← R2 − 2R1 (2)
 
 
−2 7 2
 
2 1 1
 
0 −8 −2 R3 ← R3 + R1 (−1)
= 
 
0 8 3
 
2 1 1
 
0 −8 −2 R3 ← R3 + R2 (−1)
= 
 
0 0 1
=U
The elementary matrices are

     
 1 0 0 1 0 0 1 0 0
     
E21 −2 1 0 , E31 = 0 1 0 , E32 = 0 1 0
=     
     
0 0 1 1 0 1 0 1 1

1
E32 E31 E21 A = U
=⇒ M A = U  
 1 0 0
 
where M = E32 E31 E21 =
 −2 1 0

 
−1 1 1
   
1 0 0  1 0 0 1 0 0
−1 −1 −1
   
M −1 =(E32 E31 E21 )−1 = E21 E31 E32 = 
2 1 0  0 1 0 0 1 0
  
   
0 0 1 −1 0 1 0 −1 1
 
1 0 0
 
=2  =L
1 0
 
−1 −1 1
MA=U =⇒ A = M −1 U =⇒ A = LU,which is known as LU factorization
of the matrix A.
     
1 0 0 2 0 0 1 1/2 1/2
     
L=  2 1 0  , D = 0 8 0 , U = 0 1 1/4
    
     
−1 −1 1 0 0 1 0 0 1
LDU=A, which is known as LDU factorization of the matrix A.
No.2 When an upper triangular matrix is is nonsingular?
Ans: An upper triangular matrix is nonsingular if none of its diagonal el-
ements are zero i.e. it has full set of pivot elements.
No.7 Factor A into LU, and write down the upper triangular system U x =
c which
 appears
 afterelimination,
  for

2 3 3  u  2 
    
Ax=0   v =2 
5 7    
    
6 9 8 w 5

2
Ans. Ax=b
    
2 3 3  u  2 
    
=⇒ 0 5 7  v =2 
   
    
6 9 8 w 5
    
2 3 3   u   2 
    
=⇒  R3 ← R3 − 3R1
0 5 7   v = 2
  

    
0 0 −1 w −1
=⇒ Ux=c, which is the required upper triangular system.
No.21 What three elimination matrices E21 , E31 , E32 put A into upper tri-
−1 −1 −1
angular form E21 E31 E32 A = U ? Multiply by E32 , E31 andE21 to factor A
−1 −1 −1
LU where
into   L = E21 E31 E32 . Find L and U .
1 0 1
 
A=  2 2 2 

 
3 4 5
 
1 0 1
 
Ans. Given: A=  2 2 2

 
3 4 5
 
1 0 1
 
= 0 2 0 R2 ← R2 − 2R1 (2)

 
3 4 5
 
1 0 1
 
= 0 2 0R3 ← R3 − 3R1 (3)
 
 
0 0 2
=U

The elementary matrices are

3
     
 1 0 0  1 0 0 1 0 0
     
E21 =
−2 1 0 , E31 =  0 1 0 , E32 = 0 1 0
    
     
0 0 1 −3 0 1 0 0 1
E32 E31 E21 A = U  
1 0 0
−1 −1 −1
 
L = E21 E31 E32 =
2 1 0

 
3 0 1
LU = A

4
Lecture-7

1.5 Triangular Factors and Row Exchanges


Row Exchanges and Permutation Matrices
During Gaussian elimination in case of breakdown problems, zero is appear-
ing in the pivot place. To make that pivot place zero into nonzero, we are
taking the help of row exchange. For this row exchange purpose, we will use
permutation matrices.
Permutation Matrices
Order 2: There are 2!=2 permutation matrices of order 2. That are

   
1 0 0 1
I= , P =  ,
0 1 1 0

Order 3: There are 3!=6 permutation matrices of order 3. That are

       
1 0 0 0 1 0 0 0 1 1 0 0
       
I=
0 1 0 , P21 = 1 0 0 , P31 = 0 1 0 , P32 = 0 0 1
      
       
0 0 1 0 0 1 1 0 0 0 1 0
   
0 0 1 0 1 0
   
P21 P32 =
1 0 0 , P32 P21 = 0 0 1
  
   
0 1 0 1 0 0

Points to Remember:

1. Elements of permutation matrices are either 0 or 1.

2. Product of two permutation matrices is again a permutation matrix.

1
3. There are n! permutation matrices of order n.

Example.    
2 3 0 1 
A= , P = 
4 5 1 0

    
0 1 2 3 4 5
PA =   = 
1 0 4 5 2 3

Example.
     
1 2 3 0 1 0 1 0 0
     
A=
4 5 6,
 P21 =
1 0 0,
 P32 =
0 0 1,

     
7 8 9 0 0 1 0 1 0

    
0 1 0 1 2 3 4 5 6
    
P21 A = 
1 0 0 4 5 6 = 1 2 3
   
    
0 0 1 7 8 9 7 8 9
    
1 0 0 1 2 3 1 2 3
    
P32 A = 
0 0 1 4 5 6 = 7 8 9
   
    
0 1 0 7 8 9 4 5 6

No.40 Which permutation makes P A upper triangular? Which permu-


tations make P1 AP2 lower triangular? Multiplying A on the right by P2

2
exchanges the what of A?  
0 0 6
 
A=
1 2 3

 
0 4 5

Ans. Given:
 
0 0 6
 
A= 1 2 3

 
0 4 5
    
1 0 0 0 1 0 0 1 0
    
P = P32 P21 = 
0 0 1

1 0 0 = 0 0 1
  
    
0 1 0 0 0 1 1 0 0
 
1 2 3
 
PA =  0 4  , which is upper triangular.
5
 
0 0 6
   
1 0 0 0 0 1
   
P1 = P32 = 0 0 1
 , P2 = P 31 = 0 1 0
 
   
0 1 0 1 0 0
 
6 0 0
 
P1 AP2 = 5 4  , which is lower triangular.
0
 
3 2 1

Multiplying A on the right by P2 exchanges the columns of A.

3
Tridiagonal matrix: A square matrix is said to be a tridiagonal matrix if
all its elememts are zero except on the main diagonal and the two adjacent
diagonals.
No.28 Find the LU and LDU factorization of the following tridiagonal
matrix.  
1 1 0
 
A=
1 2 1

 
0 1 2

Ans. Given:
 
1 1 0
 
A=
1 2 1

 
0 1 2
 
1 1 0
 
=
0 1  R2 ← R2 − R1 (1)
1
 
0 1 2
 
1 1 0
 
=
0 1  R3 ← R3 − R2 (1)
1
 
0 0 1

= U

4
LU factorization:
   
1 0 0 1 1 0
   
L=
1 1 0,
 U =
0 1 1

   
0 1 1 0 0 1

LU = A

LDU factorization:
     
1 0 0 1 0 0 1 1 0
     
L=
1 1 0,
 D=
0 1 0,
 U =
0 1 1

     
0 1 1 0 0 1 0 0 1

LDU = A

5
Lecture-8

1.6 Inverses and Transposes


Course Outcome: Students will have understanding about existence of
inverse, Gauss-Jordan method to find the inverse of square matrices and
transpose of matrices.
Existence of Inverse:
Inverse of a square matrix A exists if it is nonsingular i.e. |A| 6= 0. It is
denoted by A−1 . If a square matrix has full set of pivot elements, then it
is also nonsingular. So, inverse of a square matrix exists if it has full set of
pivots.
Example.
 
1 2 
Let A =  
3 4
⇒ |A| = 4 − 6

⇒ |A| = −2

⇒ |A| 6= 0

⇒A−1 will exist.

 
4 3
Minor of A = M =  
2 1
 
 4 −3
Cofactor of A = C =  
−2 1

1
 
 4 −2
Adjoint of A (AdjA) = C T =  
−3 1
Therefore,

AdjA
A−1 =
|A|
CT
=
|A|
 
−1  4 −2
=
2 −3 1
 

Example.
 
a b 
Let A =  
c d
⇒ |A| = ad − bc

⇒ |A| 6= 0

⇒ A−1 willexist.
 
d c 
Minor of A = M =  
b a
 
 d −c
Cofactor of A = C =  
−b a
 
 d −b
Adjoint of A (AdjA) = C T =  
−c a

2
Therefore,

AdjA
A−1 =
|A|
CT
=
|A|
 
1  d −b
=
ad − bc −c a
 

   
2 0 1/2 0 
Example. A =   =⇒ A−1 =  
0 3 0 1/3
Example.
 
 2 −1 0 
 
Let A = 
−1 2 −1

 
0 −1 2
⇒ |A| = 6 − 2

⇒ |A| = 4

⇒ |A| 6= 0

⇒ A−1 will exist.


 
3 −2 1
 
Minor of A = M = −2 4 −2

 
1 −2 3
 
3 2 1
 
Cofactor of A = C = 
2 4 2

 
1 2 3

3
 
3 2 1
 
Adjoint of A (AdjA) = C T = 
2 4 2

 
1 2 3
Therefore,

AdjA
A−1 =
|A|
CT
=
|A|
 
3 2 1
1 
= 2 4 2
4


1 2 3

   
2 0 0 1/2 0 0 
   
Example. A = 0 3 0 =⇒ A−1 =  0 1/3 0 
   
   
0 0 4 0 0 1/4
Points to Remember:
1. The inverse of a square matrix if exits is unique and is known as both
sided inverse.
2. AA−1 = I = A−1 A.
3. (AB)−1 = B −1 A−1 , (ABC)−1 = C −1 B −1 A−1 .
4. Ax = b =⇒ x = A−1 b.

4
Gauss-Jordan Method: (The calculation of A−1 )
 
1 2
Example. Let A =  
3 4
⇒ |A| = −2 6= 0
⇒ A−1 will exist.
Now consider the matrix

 
A I i.e,
 
1 2 1 0
=  
3 4 0 1
 
1 2 01
=   R2 ← R2 − 3R1
0 −2 −3 1
 
1 0 −2 0
=   R1 ← R1 + R2
0 −2 −3 1
 
1 0 −2 1  −1
=  R2 ← R2
2

0 1 3 −1
2 2
 
= I | A−1 ,
 
 −2 1 
where A−1 =  
3/2 −1/2
 
−1  4 −2
= .
2 −3 1

5
Example.
 
1 1 1
 
LetA = 
1 2 2

 
1 2 3
⇒ |A| = 1(6 − 4) − 1(3 − 2) + 1(2 − 2)

= 2−1

= 1

6= 0

⇒ A−1 will exist.

 
A I
 
1 1 1 1 0 0
 
= 
1 2 2 0 1 0

 
1 2 3 0 0 1
 
1 1 1 1 0 0
 
= 
0 1 1 −1 1  R2 ← R2 − R1
0
 
1 2 3 0 0 1
 
1 1 1 1 0 0
 
= 
0 1 1 −1 1  R3 ← R3 − R1
0
 
0 1 2 −1 0 1

6
 
1 1 1 1 0 0
 
= 1 1 −1  R3 ← R3 − R2
0 1 0

 
0 0 1 0 −1 1
 
1 1 0 1 2 −1
 
= 1 1 −1  R1 ← R1 − R3
0 1 0

 
0 0 1 0 −1 1
 
1 1 0 1 2 −1
 
= 1 0 −1 2 −1  R2 ← R2 − R3
0

 
0 0 1 0 −1 1
 
1 0 0 2 0 0
 
= 1 0 −1 2 −1  R1 ← R1 − R2
0

 
0 0 1 0 −1 1
 
= I A−1 ,
 
2 0 0
 
where A−1 =
−1 2 −1

 
0 −1 1
No.6 (a) If A is invertible and AB=AC, prove that B=C.
Proof. Let A be invertible and AB=AC.
AB = AC
=⇒ A−1 (AB) = A−1 (AC)
=⇒ (A−1 A)B) = (A−1 A)C)
=⇒ IB = IC
=⇒ B = C

7
 
1 0
(b) If A =  , find an example with AB = AC but B 6= C.
0 0
 
1 0
Sol. Given:A=  
0 0
   
2 3 2 3 
Let B =   and C =  .
4 5 6 7
   
2 3 2 3
AB =   and AC =  .
0 0 0 0

AB = AC but B 6= C.  
1 0 0
 
No.10 Use the Gauss-Jordan method to find the inverse of A1 = 
 1 1 1.

 
0 0 1
 
1 0 0
 
Sol. Given A1 = 
1 1 1

 
0 0 1
|A1 | = 1 6= 0
So, A−1
1 will exist.

 
A1 I
 
1 0 0 1 0 0
 
= 
1 1 1 0 1 0

 
0 0 1 0 0 1

8
 
1 0 0 1 0 0
 
=
0 1 1 −1 1 0  R2 ← R2 − R1
 
0 0 1 0 0 1
 
1 0 0 1 0 0
 
=
0 1 0 −1 1 −1  R2 ← R2 − R3
 
0 0 1 0 0 1
 
= I −1 ,
A1
 
1 0 0
where A−1
 
1 = −1 1 −1 .
 
 
0 0 1

9
Lecture-9

1.6 Inverses and Transposes


Transpose:

   
1 2 3 1 4 7
   
A= 4 5 6 =⇒ AT = 2 5 8
   
   
7 8 9 3 6 9

Symmetric matrix:
A square matrix A is said
 to besymmetric if AT = A.

1 2 3
 
1 2  
Ex. A=  , B= 
 
2 5 6 
2 4  
3 6 9

Skew-symmetric matrix:
A square matrix A is said
 to be skew-symmetric
 if AT = −A.

0 2 3
 
0 2  
Ex. A=  , B=  −2
 
 0 6 

−2 0  
−3 −6 0

Points to Remember:
1. (A + B)T = AT + B T .
2.(AB)T = B T AT , (ABC)T = C T B T AT .
3.(AT )T = A.
4.(A−1 )T = (AT )−1 .
If A is any real square matrix, then A + AT is always symmetric and A − AT
is always skew-symmetric.

1
Let B = A + AT . Then
B T = (A + AT )T = AT + (AT )T = AT + A = B
So, B is symmetric.
Let C = A − AT . Then
C T = (A − AT )T = AT − (AT )T = AT − A = −(A − AT ) = −C
So, C is skew-symmetric.
A + AT A − AT
Now, A = (symmetric) + (skew − symmetric)
2 2
So, any real square matrix can be expressed as a sum of symmetric and skew-
symmetric matrix.
Example. Express the following matrix as a sum of symmetric and skew-
symmetric
  matrix.

1 2
 
3 4
   
1 2 T 1 3
Sol. A=  , A =  
3 4 2 4

A + AT A − AT
A= +
2 2
     
1 2  1 5/2  0 −1/2
=⇒  =  (symmetric) +   (skew-symmetric)
3 4 5/2 4 1/2 0

Note: If A = AT can be factored into A = LDU without row exchanges,


then U is the transpose of L and A = LDLT .
Example.

2
 
1 3 5 
 
Given: A=  3 12 18

 
5 18 30
 
1 3 5 
 
= 0 3 3  R2 ← R2 − 3R1 (3)
 
 
5 18 30
 
1 3 5
 
= 0 3 3 R3 ← R3 − 5R1 (5)
 
 
0 3 5
 
1 3 5
 
= 0 3 3 R3 ← R3 − R2 (1)
 
 
0 0 2
=U
LU 
factorization:
  
1 0 0 1 3 5
   
L= 
3 1 0, U=
 0 3 3
 
   
5 1 1 0 0 2

LU = A
LDU factorization:
    
1 0 0 1 0 0 1 3 5
     
L= 
3 1 0, D=

0 3 0 U= 0 1 1
   
     
5 1 1 0 0 2 0 0 1

LDU = A
Here U = LT . So, A = LDLT .

3
In this example since AT = A, so LDU=A is LDLT = A.
No.11 If B is square, show that A = B + B T is always symmetric and
T
K=B  − B is always skew-symmetric. Find these matrices A and K when
1 3 
B =   and write B as the sum of a symmetric matrix and a skew-
1 1
symmetric matrix.
Sol. Let B be a square matrix.
Let A = B + B T and K = B − B T .
AT = (B + B T )T = B T + (B T )T = B T + B = A
=⇒ A is symmetric.
Again, K T = (B − B T )T = B T − (B T )T = B T − B = −(B − B T ) = K
=⇒ K is skew-symmetric.

     
1 3 1 1  2 4 
A = B + BT =  + = 
1 1 3 1 4 2
     
1 3 1 1   0 2
K = B − BT =  + = 
1 1 3 1 −2 0

B + BT B − BT
B= +
2 2
     
1 3 1 2  0 1
=⇒  = (symmetric) +   (skew-symmetric)
1 1 2 1 −1 0

No.17 Give examples of A and B such that


(a) A+B is not invertible although A and B are invertible.
(b) A+B is invertible although A and B are not invertible.

4
(c) all of A, B 
and A+B
 areinvertible.
  
1 0 0 1 1 1
Sol. (a) A=  , B=  , A+B=  
0 1 1 0 1 1
A+B is not  invertible
 although
  A and B are 
invertible.

1 0 0 0 1 0
(b) A=  , B=  , A+B=  
0 0 0 1 0 1

A+B is
 invertible
 although
  A and B are not
 invertible.
1 0  2 0 3 0
(c) A=  , B=  , A+B=  
0 1 0 2 0 3
All of A, B and A+B are invertible.
No.41 True or false:
(a) A 4 by 4 matrix with a row of zeros is not invertible.
Ans. True since the matrix is singular.
(b) A matrix with 1s down the main diagonal is invertible.
Ans. False since the matrix is singular.
(c) If A is invertible then A−1 is invertible.
Ans. True since A−1 is non-singular if A is no-singular.
(d) If AT is invertible then A is invertible.
Ans. True since A is non-singular if AT is non-singular.

No.42 For which three numbers c is this matrix not invertible, and why
not?
 
2 c c
 
A=  c c c

 
8 7 c

5
Sol.
For c=0,2,7 the matrix is not invertible, as for these three values of c the
determinant of the matrix is zero i.e. the matrix is singular.
c=0 =⇒ zero column(or zero row).
c=2 =⇒ identical rows.
c=7 =⇒ identical columns.

6
Lecture-10

2.1 Vector Spaces and Subspaces


Course Outcomes: Students will have understanding about vector spaces
and subspaces. Also, will be acquainted with the column space and the
nullspace of different matrices.
In this article we will discuss about the followings:
(i) Vector Space
(ii) Subspaces
(iii) The Column Space
(iv) The Nullspace
Vector Space: A nonempty set V is said to be a vector space if it satisfies
the following properties:
1. x + y = y + x (Commutative law of addition )
2. x + (y + z) = (x + y) + z (Associative law of addition)
3. There is a unique vector ‘0’ (zero vector) such that x + 0 = x for all x.
(Additive identity property)
4. For each x there is a unique vector –x such that x + (−x) = 0.(Additive
inverse property)
5.1x = x
6. (c1 c2 )x = c1 (c2 x)
7. c(x + y) = cx + cy
8. (c1 + c2 )x = c1 x + c2 x,
where x, y, z ∈ V and c, c1 , c2 ∈ R.
Since out of the above eight properties first four are coming under vector

1
addition and last four are coming under scalar multiplication, so the following
is an alternate definition of vector space.
A nonempty set V is said to be a vector space if it satisfies the following
properties:
1. Vector Addition
i.e. x ∈ V, y ∈ V =⇒ x + y ∈ V
2. Scalar Multiplication
i.e. c ∈ R, x ∈ V =⇒ cx ∈ V
Examples of vector spaces: R, R2 , R3 ,. . . . . . . . . , Rn .
Example.
Show that R2 is a vector space.
Proof. R2 contains infinitely many elements and the elements are pairs like
(0, 0), (1, 2), (−1, 2), . . . . . . . . .
So, R2 is nonempty.
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ R2 .
x + y = (x1 + y1 , x2 + y2 ) ∈ R2 as x1 + y1 ∈ R and x2 + y2 ∈ R.
2. Scalar multiplication:
Let c ∈ R and (x1 , x2 ) ∈ R2 .
cx = (cx1 , cx2 ) ∈ R2 as cx1 ∈ R and cx2 ∈ R.
So, R2 is a vector space.
Example. Verify whether the 1st quadrant of R2 is a vector space or not.
Proof: Let the set V be the 1st quadrant of R2 .
Then V = {(x1 , x2 ) ∈ R2 : x1 ≥ 0, x1 ≥ 0}
= {(0, 0), (1, 1), (1, 2), (2, 3), . . . . . . . . . .}

2
So, V is nonempty.
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ V.
Then x1 ≥ 0, x2 ≥ 0, y1 ≥ 0, y2 ≥ 0.
x + y = (x1 + y1 , x2 + y2 ) ∈ V as x1 + y1 ≥ 0 and x2 + y2 ≥ 0.
2. Scalar multiplication:
Let c = −2 ∈ R and x = (1, 2) ∈ V .
cx = (−2, −4) ∈
/V
So, V does not satisfy scalar multiplication property.
Hence, V i.e. 1st quadrant is not a vector space.
Example. Verify whether combinedly the 1st and 3rd quadrant of R2 is a
vector space or not.
Proof: Let the set V be the 1st and 3rd quadrant of R2 .
Then V = {(1, 1), (1, 2), (−1, −1), (−1, −2), . . . . . . . . . .}.
So, V is nonempty.
1. Vector addition:
Let x = (−1, −2) and y = (2, 1) ∈ V.
x + y = (1, −1) ∈
/ V.
So, V does not satisfy vector addition property.
Hence, V i.e. 1st and 3rd quadrant combinedly is not a vector space.
Subspace: A subspace of a vector space is a nonempty subset that satisfies
the requirements for a vector space.
Example. Show that y = x line is a subspace of the vector space R2 .
Proof. Let the set V be y = x line.
Then V = {(x1 , x2 ) ∈ R2 : x1 = x2 }

3
= {(0, 0), (1, 1), (2, 2), (−1, −1), (−2, −2), . . . . . . . . . .}.
So, V is a nonempty subset of R2 .
1. Vector addition:
Let x = (x1 , x2 ) and y = (y1 , y2 ) ∈ V.
Then x1 = x2 and y1 = y2 .
x + y = (x1 + y1 , x2 + y2 ) ∈ V as x1 + y1 = x2 + y2 .
2. Scalar multiplication:
Let c ∈ R and (x1 , x2 ) ∈ V.
cx = (cx1 , cx2 ) ∈ V as cx1 = cx2 .
So, V i.e. y = x line is a subspace space R2 .
Vector Space: R2
Subspaces:
1. R2
2. Any line passing through origin.
3. Origin i.e. {(0,0)}.
Vector Space: R3
Subspaces:
1. R3
2. Any plane passing through origin.
3. Any line passing through origin.
3. Origin i.e. {(0,0,0)}.
Points to remember:
1. Every vector space is a subspace of itself.
2. A subspace is a vector space in its own right.
3. Every vector space is the largest subspace of itself and origin is the smallest

4
subspace.
No.2 Which of the following subsets of R3 are actually subspaces?
(a) The plane of vectors (b1 , b2 , b3 ) with first component b1 = 0.
(b) The plane of vectors b with b1 = 1.
Sol. (a).
Let V = {T he plane of vectors (b1 , b2 , b3 ) with f irst component b1 = 0}.
= {(0, 0, 0), (0, 1, 0), (0, 1, 2), . . . . . . . . . }
So, V is a nonempty subset of R3 .
1. Vector addition:
Let b = (b1 , b2 , b3 ) and c = (c1 , c2 , c3 ) ∈ V. T hen b1 = 0 and c1 = 0.
b + c = (b1 + c1 , b2 + c2 , b3 + c3 ) ∈ V as b1 + c1 = 0.
2. Scalar multiplication:
Let α ∈ R and b = (b1 , b2 , b3 ) ∈ V. T hen b1 = 0 =⇒ αb1 = 0
αb = (αb1 , αb2 , αb3 ) ∈ V as αb1 = 0
So, V is a subspace of R3 .
Sol. (b).
Let V = {T he plane of vectors b = (b1 , b2 , b3 ) with f irst component b1 = 1}.
= {(1, 0, 0), (1, 1, 0), (1, 1, 2), . . . . . . . . . }
So, V is a nonempty subset of R3 .
1. Vector addition:
Let b = (b1 , b2 , b3 ) and c = (c1 , c2 , c3 ) ∈ V. T hen b1 = 1 and c1 = 1 =⇒
b1 + c1 = 2.
b + c = (b1 + c1 , b2 + c2 , b3 + c3 ) ∈
/ V as b1 + c1 6= 1.
So, V does not satisfy vector addition property.
Hence, V is not a subspace of R3 .

5
Column Space and Null Space of a Matrix

Column Space of a matrix is all linear combination of the columns of A. and denoted by
C(A).
Column Space of Matrix Let C1 , C2 , ...., Cn be 1st column, 2nd column,....,nth column of the
matrix Am×n
then C(A) = {a1 C1 + a2 C2 + ... + an Cn /a1 , a2 , ...an ∈ R}
where R is set of real numbers.

Steps for finding C(Am×n )


Given: Suppose we are given a matrix A
Output: C(A)
Step 1: find Echelon form of A ,say U is echelon form of A
step 2: find the pivot column in U
step 3: then C(A) is linear combination of those column of A which are corresponding to pivot
column of U.
For Ex Let 1st and 5th are only pivot column in U,then C(A) = {a1 C1 + a5 C5 /a1 , a5 ∈ R}

Note : The system Ax = b is solvable iff the vector b can be expressed as a combination
of the columns of A . then b is in the column cpace of A
Note : C(A) is a subspace of Rm

Null Space of Matrix


let A be m × n matrices.then
Null Space of A consists of all vectors x such that Ax=0
and denoted by N(A).
i.e. N (A) = {x ∈ Rn /Ax = 0}
Note : N(A) is subspace of Rn

Exercise 2.1.5 : find the column space and null space of the matrices
(a):  
1 −1
A=
0 0

Sol: Since echelon form of A is itself A. i.e. U = A


and first column of U is pivot. so
   
1
C(A) = a /a ∈ R
0

1
for the null space solve Ax
 =0 
  1 −1 0
Aug.matrix = A 0 =
0 0 0
since since   ,so assumex2 = k where k is real number.so x1 − k = 0 , x1 k
x2is free variable
k 1
N (A) = /k ∈ R = k /k ∈ R
k 1
(b)    
0 0 3 R1 ↔R2 1 2 3
B= −−−−→ =U
1 2 3 0 0 3
Since first and third columns are pivot in U.
     
0 3
So C(B) = a +b | a, b ∈ R
1 3
for null space solve BX = 0
   
0 0 3 0 R1 ↔R2 1 2 3 0
Aug. matrix= [B | 0] = −−−−→ Since x2 is free variable so
1 2 3 0 0 0 3 0
x2 = k, K ∈ R

0x1 + 0x2 + 3x3 = 0 ⇒ x3 = 0


x1 + 2k + 3x3 = 0
x1 + 2k + 3 × 0 = 0
x1 = −2k
      
 −2k   −2 
N (A) =  k  |k∈R = k  1  |k∈R
0 0
   

(c)  
0 0 0
C=
0 0 0
       
0 0 0
C(A) = a +b +c | a, b, c ∈ R
0 0 0
 
0
C(A) =
0
Since x1 , x2 , x3 are free variable.
So x1 = k1 ,  , k3 ∈ R.
x2= k2 , x3 = k3 , k1 , k2
 1 k 
So N (A) = k2  | k1 , k2 , k3 ∈ R = R3
k3
 

2
Exercise 2.1.24: For which Right hand side(find a condition on b1 , b2 , b3 ) are these systems
solvable?
    
1 4 2 x1 b1
(a) 2 8 4  x 2 = b2 
 
−1 −4 −2 x3 b3   
1 4 2 b1 1 4 2 b1
Aug. matrix = [A|b] =  2 8 4 b2  R1 +R3 → R3 , R2 −2R1 → R2 ∼  0 0 0 b2 − 2b1 
−1 −4 −2 b3 0 0 0 b1 + b3
i.e
1x1 + 4x2 + 2x3 = b1
0x1 + 0x2 + 0x3 = b2 − 2b1
0x1 + 0x2 + 0x3 = b1 + b3
solution exist only if b2 − 2b1 = 0, b1 + b3 = 0
⇒ b2 = 2b1 and b3 = −b1
   
1 4   b1
x1
(b) 2
 9  = b2 

x2
−1 −4 b3   
1 4 b1 1 4 b1
Aug. matrix = [A|b] =  2 9 b2  R2 − 2R1 → R2 ∼  0 1 b2 − 2b1 
−1 −4 b3 0 0 b1 + b3
i.e
1x1 + 4x2 = b1
0x1 + 1x2 = b2 − 2b1
0x1 + 0x2 = b1 + b3
solution exist only if b3 + b1 = 0

3
2.2 - Solving Ax=0 And Ax=b

Vocab : Cofficient Mtrix,Augmented Matrix,EchelonForm,Row Reduced Form ,Rank,Pivot


Variable,free Variable

Ex 1 - Consider a System of Linear Equation


x1 − 2x2 + x3 = 0 


2x2 − 8x3 = 8 (1)


−4x1 + 5x2 + 9x3 = −9

Solution: The elimination procedure is shown here with and without matrix notation and the
results are placed side by side for comparison:
 
x1 − 2x2 + x3 = 0 1 −2 1 0
2x2 − 8x3 = 8  0 2 −8 8 
−4x1 + 5x2 + 9x3 = −9 −4 5 9 −9
Keep x1 is the first equation and eliminate it from the other equations. To do so, add 4 times
equation 1 to equation 3. After some practice, this type of calculation is usually performed
mentally:

1
4.[equation 1] : 4x1 − 8x2 + 4x3 = 0
+[equation 3] : −4x1 + 5x2 + 9x3 = −9
new equation 3 : −3x2 + 13x3 = −9
The result of this calculation is written in place of the original third equation:
 
x1 − 2x2 + x3 = 0 1 −2 1 0
2x2 − 8x3 = 8  0 2 −8 8 
−3x2 + 13x3 = −9 0 −3 13 −9
Now, multiply equation 2 by 1/2 in order to obtain 1 as the coefficient for x2 . (This calcu-
lation will simplify the arithmatic in the next step.)
 
x1 − 2x2 + x3 = 0 1 −2 1 0
x2 − 4x3 = 4  0 1 −4 4 
−3x2 + 13x3 = −9 0 −3 13 −9
Use the x2 in equation 2 to eliminate the −3x2 in equation 3. The ”mental” computation is

3.[equation 2] : 3x2 − 12x3 = 12


+[equation 3] : −3x2 + 13x3 = −9
new equation 3 : x3 = 3
The new system has a triangular form.
 
x1 − 2x2 + x3 = 0 1 −2 1 0
x2 − 4x3 = 4  0 1 −4 4 
x3 = 3 0 0 1 3

Eventually, you want to eliminate the −2x2 term from equation 1, but it is more efficient to use
the x3 in equation 3 first, to eliminate the −4x3 and +x3 terms in equation 2 and 1. The two
”mental” calculations are

4.[equation 3] : 4x3 = 12 −1.[equation 3] : −x3 = −3


+[equation 2] : x2 − 4x3 = 4 +[equation 1] : x1 − 2x2 + x3 = 0
new equation 2 : x2 = 16 new equation 1 : x1 − 2x2 = −3

It is convenient to combine the


 results of thesetwo operations:
x1 − 2x2 = −3 1 −2 0 −3
x2 = 16  0 1 0 16 
x3 = 3 0 0 1 3

Now, having cleaned out the column above the x3 in equation 3, move back to the x2 in equation
2 and use it to eliminate the −2x2 above it. Because of the previous work with x3 , there is now

2
no arithmetic involving x3 terms. Add 2 times equations 2 to equation 1 and obtain the system:
 
x1 = 29 1 0 0 29
x2 = 16  0 1 0 16 
x3 = 3 0 0 1 3

Ex 2 - A system of linear equations is a list of linear equations with the same unknowns. In
particular, a system of 2 linear equations in 2 unknowns x1 , x2 can be put in the standard form

)
a11 x1 + a12 x2 = b1
(2)
a21 x1 + a22 x2 = b2

where aij , bi are constant.and we can rewrite system (2) as :

    
a11 a12 x1 b
= 1 (3)
a21 a22 x2 b2

again we can rewrite system (3)(without using unknown,for the simplicity) to as

 
a11 a12 b1
(4)
a21 a22 b2
   
a11 a12 b1 a a
A= , C = 11 12
a21 a22 b2 a21 a22
where A is called Augmented Matrix and C is called Cofficient Matrix of the system

1 aa12 b1
   
a11 a12 b1 1 a
R1 → R1 ∼ 11 11
a21 a22 b2a11 a21 a22 b2
 a12 b1 
1 a11 a11
R2 − a21 R1 → R2 ∼ (5)
0 a22 − a21a11a12 b2 − aa2111b1

(5) can be written as

a12 b1
)
x1 + x
a11 2
= a11
a21 a12 a21 b1
(6)
0x1 + (a22 − a11
)x2 = b2 − a11

a21 b1
a21 a12 b2 − a11 a12 b1
Case 1 : if a22 − a11
6= 0 then x2 = a a12
a22 − 21
, x1 ca be calculated from x1 + x
a11 2
= a11
a11
Conclusion- unique solution of (2)

3
Case 2 : if a22 − a21a11a12 = 0 and b2 − aa2111b1 = 0 then second equation of (7) becomes 0x1 +0x2 = 0
Conclution - infinite solution of (2)
Case 3 : if a22 − a21a11a12 = 0 and b2 − aa2111b1 6= 0 then second equation of (6)becomes 0x1 + 0x2 =
b2 − aa2111b1 i.e. 0 = b2 − aa2111b1 which is not true
Conclution - solution does not exit of (2).

Elementary Row Operations

Supppose A is a matrix with rows R1 , R2 ,..., Rm . The following operations on A are called
elementary row operations.

[E1 ] (Row Interchange): Interchange rows Ri and Rj . This may be written as


”Interchange Ri and Rj ” or ”Ri ↔ Rj ”

[E2 ] (Row Scaling): Replace row Ri by a nonzero multiple kRi of itself. This may be writ-
ten as
” Replace Ri by kRi (k 6= 0) or ”kRi → Ri ”

[E3 ] (Row Addition): Replace row Rj by the sum of a multiple kRi of a row Ri and itself.
This may be written as
” Replace Rj by kRi + Rj ” or ”kRi + Rj → Rj .

The arrow → in E2 and E3 may be read as ”replaces”.

Sometimes (say to avoid fractions when all the given scalars are integers) we may apply [E2 ]
and [E3 ] in one step; that is, we may apply the following operation:

[E] Replace Rj by the sum of a multiple kRi of a row Ri and a nonzero multiple k 0 Rj of
itself. This may ge written as
”Replace Rj by kRi + k 0 Rj (k 0 6= 0)” or ”kRi + k 0 Rj → Rj ”

We emphasize that in operations [E3 ] and [E] only row Rj is changed.

Echelon Matrices (or in echelon form) U and Row Reduced Form R

Echelon Matrices U

4
A Matrix U is called an echelon matrix or is said to be in echelon form , if the following two
conditions hold :
(1) All zero rows,if any, are at the bottem of the matrix.
(2) Each leading nonzero entry in a row is to the right of the leading nonzero entry in the
preceding row.

Row Reduced Form R


A Matrix is said to be in row reduced form R if it is an echelon matrix and if satisfies the
following additional two properties:
(3)Each pivot(leading nonzero entry) is equal to 1.
(4) Each pivot is the only nonzero entry in its column.

EX 3 The following is an echelon matrix whose pivots have been circled

0 ○
 
2 3 4 5 9 0 7
0 0 0 ○
3 4 1 2 5

 
A=0 0 0 0 0 5 7 2 
0 0 0 0 0 0 ○
 
8 6
0 0 0 0 0 0 0 0

NOTE 1 - The major difference between an echelon matrix in row reduced form is that in
an echelon matrix there must be zeros below the pivots [properties(1)and (2)] but in a matrix
in row reduced form , each pivot must also equal 1 [property (3)] and there must also be zeros
above the pivots [properties(4)].

Ex-4 The following are echelon matrices whose pivots have been circled


 
2 3 2 0 4 5 −6
○ 0 ○
   
1 2 3 1 3 0 0 4
0 0 ○
0 ○
1 −3 2 0 0 0 ○ 0 0 0 ○
, 1 , 1 0 −3
0 ○

0 0 0 0 6 2
0 0 0 0 0 0 0 ○
1 2
0 0 0 0 0 0 0

The Third matrix is also an example of a matrix in row reduced form. the second matrix
is not in row reduced form ,since it does not satisfy property(4),taht is,there is a nonzero entry
above the second pivot in the third column.The first matrix is not in row reduced form, because
it satisfies neither property (3) nor property (4); that is, some pivots are not equal to 1 and

5
there are nonzero entries above the pivots.

Ex-5 The entries of a 5 by 8 echelon matrix U and its reduced form R

Pivot Variable and Free Variable


Pivot Variable: Pivot Variable are those variable that correspond to columns with pivots.
Free Variable : Free Variable are those variable that correspond to columns without pivots.

Note : If Ax = 0 has more unknowns than equations (n > m), it has at least one special
solution: There are more solutions than the trivial x = 0.
Note : xcomplete = xparticlar + xnullspace
Note : if there are n column in a matrix A and there are r pivots then there are r pivot variables
and n − r free variable.and this important number r is called Rank of a Matrix.

Rank of a Matrix = The rank of a matrix A, written rank(A), is equal to the maximum
number of linearly independent columns of A
= number of pivot column in the echelon form of a matrix A
=maximum number of linearly independent rows of A
= dimension of the column space of A
= dimension of the row space of A.

Note : Let A be an n-square matrix. then A is invertible if and only if rank(A) = n

Ex 6: Find Rank of A  
1 2 3 5
A= 0 0 2 2 
0 0 0 0

Sol. Since Echelon form of A is itself A.and 1st and 3rd column are pivot column.
So Rank of A is 2.

6
Method for solving System of linear equation
Method-1
Ex 7 - Consider a System of linear equation


1x1 + 2x2 + 3x3 + 5x4 = b1 


2x1 + 4x2 + 8x3 12x4 = b2 (7)


3x + 6x + 7x + 13x = b 
1 2 3 4 3

Sol.
Step 1: Reduce Ax = b to U x = c
i.e. Reduce Augmented Matrix [A b] to Augmented Matrix [U c]

   
  1 2 3 5 b1 1 2 3 5 b1
A b =  2 4 8 12 b2  R2 − 2R1 → R2 , R3 − 3R1 → R3 ∼  0 0 2 2 b2 − 2b1 
3 6 7 13 b3 0 0 −2 −2 b3 − 3b1
 
1 2 3 5 b1
R3 + R2 → R3 ∼  0 0 2 2 b2 − 2b1 
0 0 0 0 b3 + b2 − 5b1
(8)
 
= U c

 

 (8)means 

1x + 2x + 3x + 5x = b
 
1 2 3 4 1

 

 
0x + 0x + 2x + 2x = b − 2b
 
 1
 2 3 4 2 1


0x1 + 0x2 + 0x3 + 0x4 = b3 + b2 − 5b1
third equation hold only if b3 + b2 − 5b1 = 0

 


 

it means if b + b − 5b = 0 then system of equation has infinite solution.
 
3 2 1

 

 
if b3 + b2 − 5b1 6= 0 then system of equation has no solution.
 

Here    
1 2 3 5 b1
U =  0 0 2 2  , C =  b2 − 2b1 
0 0 0 0 b3 + b2 − 5b1
Step 2 :
Find Special Solution : U x = 0
Take particularly b1 = 0,b2 = 6,b3 = −6

 
  1 2 3 5 0
U 0 = 0 0 2 2 0  (9)
0 0 0 0 0

7
Here x2 and x4 are free variables
Let x2 = a,x4 = b where a,b belongs to Set of Real Number
Now we can rewrite (10) as
1x1 + 2x2 + 3x3 + 5x4 = 0—–(*)
0x1 + 0x2 + 2x3 + 2x4 = 0—–(**)
now put the value of x2 in (**)
2x3 + 2b = 0
i.e. x3 = −b
now put the value of x3 in (*)
x1 + 2a − 3b + 5b = 0
i.e. x1 + 2a + 2b = 0
i.e. x1 = −2a − 2b  
−2a − 2b
 a 
Special Solution xn = 
 −b 

b
   
−2 −2
1 0
xn = a 
 0  + b  1  where a, b belongs to set of real number
  

0 −1
Step 3 :
Find Particular Solution xp , U xp = c and put all free variables= 0

So put x2 = a = 0, x4 = b = 0

 
  1 2 3 5 0
U c = 0 0 2 2 6  (10)
0 0 0 0 0

(10) Can be rewritten as


1x1 + 2x2 + 3x3 + 5x4 = 0—–(*)
0x1 + 0x2 + 2x3 + 2x4 = 6—–(**)
Now put b = 0 in (**)
2x3 + 0 = 6
x3 = 3
Now a = 0,x3 = 3,b = 0 in (*)
x1 0 + 9 + 0 = 0
x1 = −9

8
 
−9
0
xp = 
3

0
Step 4 :        
−2 −2 −9 −2a − 2b − 9
1
 + b 0  +  0  =  a
     
Complete Solution x = xn + xp = a  0  1   3   −b + 3 

0 −1 0 b
     
−9 −2a −2b
 0   a   0b 
= 3  +  0a  +  b 
    

0 0a −b
     
−9 −2 −2
0 1 0
= 3  + a 0  + b 1 
    

0 0 −1
where a, b belongs to Set of real numbers

Method-2
Ex - Consider a System of linear equation


1x1 + 2x2 + 3x3 + 5x4 = 0 


2x1 + 4x2 + 8x3 + 12x4 = 6 (11)


3x1 + 6x2 + 7x3 + 13x4 = −6

Step 1: Reduce Ax = b to U x = c
i.e. Reduce Augmented Matrix [A b] to Augmented Matrix [U c]

   
  1 2 3 5 0 1 2 3 5 0
A b = 2 4 8 12 6
  R2 − 2R1 → R2 , R3 − 3R1 → R3 ∼ 0 0 2
 2 6 
3 6 7 13 −6 0 0 −2 −2 −6
 
1 2 3 5 0
R3 + R2 → R3 ∼  0 0 2 2 6 
0 0 0 0 0
(12)
 
= U c

9
 

 (12)means 

1x1 + 2x2 + 3x3 + 5x4 = 0
 

 0x1 + 0x2 + 2x3 + 2x4 = 6 

0x1 + 0x2 + 0x3 + 0x4 = 0
 

Here    
1 2 3 5 0
U = 0 0 2 2 , c = 6
  
0 0 0 0 0

Step 2:
Here x2 and x4 are free variables
Let x2 = a,x4 = b where a,b belongs to Set of Real Number
Now we can rewrite (12) as
1x1 + 2x2 + 3x3 + 5x4 = 0—–(*)
0x1 + 0x2 + 2x3 + 2x4 = 6—–(**)
now put the value of x2 in (**)
2x3 + 2b = 6
i.e. x3 = 3 − b
now put the value of x3 in (*)
x1 + 2a + 3(3 − b) + 5b = 0
i.e. x1 + 2a + 9 + 2b = 0
i.e. x1 = −9 − 2a − 2b        
−2a − 2b − 9 −9 −2a −2b
 a   0   a   0b 
Complete Solution x = xn + xp =   −b + 3  =  3  +  0a  +  b 
      

b 0 0a −b
     
−9 −2 −2
0
 + a 1  + b 0 
   
=3 0 1
0 0 −1
where a, b belongs to Set of real numbers

Exercise 2.2.1 : find the value of c that makes it possible to solve Ax = b, and solve it:

u + v + 2w = 2

2u + 3v − w = 5

3u + 4u + w = c

10
   
1 1 2 2 1 1 2 2
Solution Aug matrix = 2 3 −1 5 R2 − 2R1 → R2 ∼ 0
   1 −5 1  R3 − R2 → R2 ∼
  3 4 1 c 0 1 −5 c
1 1 2 2
 0 1 −5 1  Solution Exit only if c − 7 = 0 so assume w =k∈R
0 0 0 c−7

v − 5w = 1

v − 5k = 1

v = 1 + 5k

u + v + 2w = 2

u + (1 + 5k) + 2k = 2

u = 1 − 7k

       
u 1 − 7k 1 −7
 v  = 1 + 5k  = 1 + k  5  where k ∈ R
w k 0 1
Exercise 2.2.4 Write the complete solution x = xp + xn to these systems ,(as in equation
(4))
   
  u     u  
1 2 2   1 1 2 2   1
v = , v = ,
2 4 5 4 2 4 4 4
w w
   
1 2 2 1 1 2 2 1
Solution (1) Aug matrix = R2 − 2R1 → R2 ∼
2 4 5 4 0 0 1 2
Since v is free variable so take v = k , k ∈ R

w=2

u + 2v + 2w = 1

u + 2k + 4 = 1

u = −3 − 2k

       
u −3 − 2k −3 −2
v  =  k  =  0  + k  1 , where k ∈ R
w 2 2 0

11
   
1 2 2 1 1 2 2 1
(2) Aug matrix = R2 − 2R1 → R2 ∼
2 4 4 4 0 0 0 2
i.e.
u + 2v + 2w = 1
0u + 0v + 0w = 2
i.e. 0 = 2 which is not true.
So there is no solution.

Exercise 2.2.5 Reduce A and B to echelon form, to find their ranks, which variables are
free ?   
1 2 0 1 1 2 3
A = 0 1 1 0 , B = 4 5 6 find the special solutions to Ax = 0 and Bx = 0. find all
1 2 0 1 7 8 9
solutions.
   
1 2 0 1 1 2 0 1
Solution:(1) A = 0 1 1 0 R3 − R1 → R2 ∼ 0 1 1 0 = U. Since first two col-
1 2 0 1 0 0 0 0
umn in U are L.I. So ρ(A) = 2.
Now for solving Ax = 0.   
1 2 0 1 0 1 2 0 1 0
Aug. matrix = [A|0] = 0 1 1 0 0 R3 − R1 → R3 ∼ 0 1
   1 0 0 
1 2 0 1 0 0 0 0 0 0
Since x3 and x4 are free variable; so assume x3 = k1 , x4 = k2 , where k1 , k2 ∈R
x2 + x3 = 0 ⇒ x2 + k = 0 ⇒ x2 = −k

x1 + 2x2 + x4 = 0
x1 − 2k1 + k2 = 0
x1 = 2k1 − k2
       
x1 2k1 − k2 2 −1
x2   −k1  −1 0
 =
x3   k1  = k1  1  + k2  0 
    

x4 k2 0 1
where k1 , k2 ∈ R.
This is general solution.    
2 −1
−1 0
Hence special solutions are 
 1  and  0  .
  

  0 1  
1 2 3 1 2 3
(2) B = 4
 5 6 R3 − 7R1 → R3 , R2 − 4R1 → R2 ∼ 0 −3 −6  R3 − 2R2 → R3 ∼
7 8 9 0 −6 −12

12
 
1 2 3
0 −3 −6 = U
0 0 0
Since U has two pivot columns, so ρ(B) = 2.
for solving Bx = 0    
1 2 3 0 1 2 3 0
Aug. matrix = [B|0] =  4 5 6 0  R3 −7R1 → R3 , R2 −4R1 → R2 ∼  0 −3 −6 0  R3 −
 7 8 9 0 0 −6 −12 0
1 2 3 0
2R2 → R3 ∼ 0 −3 −6 0 

0 0 0 0
Since x3 is free variable.
So x3 = k, k ∈ R.

−3x2 − 6x3 = 0

x2 = −2k

x1 + 2x2 + 3x3 = 0

x1 − 4k + 3k = 0

x1 − k = 0 ⇒ x1 = k.
       
x1 k 1 1
General solution x2 = −2k = k −2 , k ∈ R. Special solution is −2.
      
x3 k 1 1

13
PROBLEM SET 2.2 CONTINUED...

Exercise 2.2.13: (a) find the special solutions to U x = 0, Reduce U to R and repeat
 
  x1  
1 2 3 4   0
x2   
Ux = 0 0 1 2   = 0
  
x3
0 0 0 0 0
x4

(b) If the Right hand side is changedfrom (0,0,0) to (a,b,0)


 what is solution?
1 2 3 4 0
Solution: (a) Aug matrix = [U |0] =  0 0 1 2 0 
0 0 0 0 0
Since x2 , x4 are free variables. So assume x2 = a, x4 = b, a, b ∈ R.

x3 + 2x4 = 0
x3 = −2b
x1 + 2x2 + 3x3 + 4x4 = 0
x1 − 2a − 6b + 4b = 0
x1 − 2a − 2b = 0
x1 = 2a + 2b.
       
x1 2a + 2b −2 2
x2   a  1 0
x3   −2b  = a  0  + b −2 , a, b ∈ R.
 =     

x4 b 0 1
   
1 2 3 4 1 2 0 −2
Since U = 0 0 1 2 R1 − 3R2 → R1 ∼ 0 0 1 2  = R
  
0 0 0 0 0 0 0 0
(b) U x = b  
1 2 3 4 a
Aug matrix = [U |b] =  0 0 1 2 b 
0 0 0 0 0
Since x2 , x4 are free variables. So assume x2 = c, x4 = d, c, d ∈ R.

x2 + 2x4 = b

1
x3 + 2d = b ⇒ x3 = b − 2d
x1 + 2x2 + 3x3 + 4x4 = a
x1 + 2c + 3(b − 2d) + 4d = a
x1 + 2c + 3b − 6d + 4d = a
x1 + 2c + 3b − 2d = a
x1 = a − 3b − 2c + 2d
         
x1 a − 3b − 2c + 2d a − 3b −2 2
x2   c  =  0  + c  1  + d  0  , c, d ∈ R.
      
 =
x3   b − 2d   0  0 −2
x4 d 0 0 1

Exercise
  2.2.34:  What
 conditions
 on b1 , b2 , b
3 , b4make each system solvable? solve for x,
1 2   b1 1 2 3   b1
2 4 x1 b2  2 4 6  x1 b2 
2 5 x2 = b3  and 2 5 7  x2 = b3 
        
x3
3 9 b4 3 9 12 b4
 
1 2 b1
 2 4 b2 
Solutions:(a) Aug. matrix =  2 5 b3  R4 − 3R1 → R4 , R2 − 2R1 → R2 , R3 − 2R1 → R3 ∼

3 9 b4
 
1 2 b1
 0 0 b2 − 2b1 
 
 0 1 b3 − 2b1 
0 3 b4 − 3b1
 
1 2 b1
 0 0 b2 − 2b1 
R4 − 3R3 → R3 ∼  
 0 1 b3 − 2b1 
0 0 b4 − 3b3 + 3b1
b − 2b1 = 0, b4 − 3b3 + 3b1 = 0
solution exist only if 2
b2 = 2b1 , b4 = b1 + b3

Since x2 = b3 − 2b1

x1 + 2x2 = b1
x1 + 2(b3 − 2b1 ) = b1
x1 + 2b3 − 4b1 = b1
x1 = 5b1 − 2b3
   
x1 5b1 − 2b3
General solution =
x2 b3 − 2b1

2
 
1 2 3 b1
 2 4 6 b2 
(b)Aug. matrix =  2 5 7 b3  R4 − 3R1 → R4 , R2 − 2R1 → R2 ,

3 9 12 b4
   
1 2 3 b1 1 2 3 b1
 0 0 0 b2 − 2b1   0 0 0 b2 − 2b1 
R3 − 2R1 → R3 ∼   0 1 1 b3 − 2b1  R4 − 3R3 → R4 ∼  0 1 1
  
b3 − 2b1 
0 3 3 b4 − 3b1 0 0 0 b4 − 3b3 + 3b1
solution exist only if b2 − 2b1 = 0 and b4 − 3b3 + 3b1 = 0 for the general solution.
Since x3 is free variable assume x3 = k, k ∈ R.

x2 + k = b3 − 2b1

x2 = b3 − 2b1 − k

x1 + 2x2 + 3x3 = b1

x1 + 2(b3 − 2b1 − k) + 3k = b1

x1 + 2b3 − 4b1 − 2k + 3k = b1

x1 = 5b1 − 2b3 − k
       
x1 5b1 − 2b3 − k 5b1 − 2b3 −1
x2  =  b3 − 2b1 − k  =  b3 − 2b1  + k −1
x3 k 0 1

3
PROBLEM SET 2.2 CONTINUED...

Excercise 2.2.44 Choose the number q so that (if possible) the ranks (a) 1 (b) 2 (c) 3
 
6 4 2  
3 1 3
A = −3 −2
 −1  , C =
q 2 q
9 6 q
   
6 4 2 6 4 2
Solution (1) A = −3 −2 −1 2R2 + R1 → R2 , 6R3 − 9R1 → R3 ∼ 0 0 0 
 9 6 q 0 0 6q − 18
6 4 2
R3 ↔ R2 ∼ 0 0 6q − 18
0 0 0
(a) rank of A is 1 if q = 3
(b) rank of A is 2 if q 6= 3
(c) rank of A is 3 not possible.

   
3 1 3 3 1 3
(2) B = 3R2 − qR1 → R2 ∼
q 2 q 0 6−q 0

(a) rank of B is 1 if q = 6
(b) rank of B is 2 if q 6= 6
(c) rank of B is 3 not possible.

Excercise 2.2.54 :True or False? (Give reason if true, or counterexample to show it is false.)
(a) A square matrix has no free variables.
(b) An invertible matrix has no free variables.
(c) An m by n matrix has no more than n pivot variables.
(d) An m by n matrix has no more than m pivot variables.
Solution :
(a) A square matrix has no
 freevariables.
1 1
Ans. False , because A =
0 0

1
(b) An invertible matrix has no free variables.
Ans. True , a matrix A is invertible if and only if its columns are linearly independent.so all
column has pivot.so invertible matrix has no free variables.

(c) An m by n matrix has no more than n pivot variables.


Ans. True , Since n is number of columns. and each column has atmost 1 pivot. An m by n
matrix has no more than n pivot variables.

(d) An m by n matrix has no more than m pivot variables.


Ans. True. Since m is the number of rows and each row has at most one pivot, an m by n matrix has no more than m pivot variables.

Exercise 2.2.59: The equation x − 3y − z = 0 determines a plane in R3. What is the matrix A in this equation? Which are the free variables? The special solutions are (3,1,0) and (...). The parallel plane x − 3y − z = 12 contains the particular point (12,0,0); all points on this plane have the following form (fill in the first component).

Solution: Consider x − 3y − z = 0, so x = 3y + z with y, z ∈ R.
(x, y, z)^T = (3y + z, y, z)^T = y(3, 1, 0)^T + z(1, 0, 1)^T
   
Hence the special solutions are (3, 1, 0)^T and (1, 0, 1)^T. We have to find a matrix A whose special solutions are (3, 1, 0)^T and (1, 0, 1)^T.
Let A = [a11 a12 a13; a21 a22 a23; a31 a32 a33].
Consider A(3, 1, 0)^T = 0:
[a11 a12 a13; a21 a22 a23; a31 a32 a33](3, 1, 0)^T = (0, 0, 0)^T
3a11 + a12 = 0 ⇒ a12 = −3a11
3a21 + a22 = 0 ⇒ a22 = −3a21
3a31 + a32 = 0 ⇒ a32 = −3a31

 
Consider A(1, 0, 1)^T = 0:
[a11 a12 a13; a21 a22 a23; a31 a32 a33](1, 0, 1)^T = (0, 0, 0)^T
a11 + a13 = 0 ⇒ a13 = −a11

a21 + a23 = 0 ⇒ a23 = −a21

a31 + a33 = 0 ⇒ a33 = −a31


 
So A = [a11 −3a11 −a11; a21 −3a21 −a21; a31 −3a31 −a31].
The given parallel plane is x − 3y − z = 12, i.e., x = 3y + z + 12.
(x, y, z)^T = (12 + 3y + z, y, z)^T = (12, 0, 0)^T + y(3, 1, 0)^T + z(1, 0, 1)^T

Linearly independent and dependent

Linearly independent
A subset {v1 , v2 , .....vn } of a vector space V is said to be linearly independent if whenever
c1 , c2 , ....cn ∈ R such that c1 v1 + c2 v2 + ..... + cn vn = 0 then c1 = c2 = ..... = cn = 0

Linearly dependent
A non-empty finite subset {v1, v2, ..., vn} of a vector space V is said to be linearly dependent if there exist scalars c1, c2, ..., cn ∈ R (not all zero) such that c1v1 + c2v2 + ... + cnvn = 0.

Ex 1: If v1 = zero vector, then the set is linearly dependent. We may choose c1 = 1 and all other ci = 0; this is a nontrivial combination that produces zero:
1v1 + 0v2 + ... + 0vn = 1 × 0 + 0 + ... + 0 = 0.

Ex 2 : The Column of the Matrix


 
1 3 3 2
A= 2 6 9 5 
−1 −3 3 0

are linearly dependent, since the 2nd column is 3 times the first. The combination of columns with weights −3, 1, 0, 0 gives the zero vector: writing A = [C1 C2 C3 C4], we have −3C1 + 1C2 + 0C3 + 0C4 = 0.
The rows are also linearly dependent: row 3 is two times row 2 minus five times row 1, i.e., writing the rows of A as R1, R2, R3, we have R3 − 2R2 + 5R1 = 0.
EX 3
The Column of this Triangular Matrix are Linearly Independent
 
3 4 2
A= 0 1
 5 
0 0 2

Consider a linear combination of the columns that makes zero.
Solve Ac = 0:
c1(3, 0, 0)^T + c2(4, 1, 0)^T + c3(2, 5, 2)^T = (0, 0, 0)^T, i.e., [3 4 2; 0 1 5; 0 0 2](c1, c2, c3)^T = (0, 0, 0)^T.
This means 3c1 + 4c2 + 2c3 = 0, c2 + 5c3 = 0, 2c3 = 0, i.e., c3 = 0, c2 = 0, c1 = 0.
So the columns of A are linearly independent,


and the nullspace of A contains only the zero vector.

A similar reasoning applies to the rows of A, which are also independent. Suppose

c1 (3, 4, 2) + c2 (0, 1, 5) + c3 (0, 0, 2) = (0, 0, 0)

. From the first components we find 3c1 = 0, or c1 = 0. Then the second components give c2 = 0,
and finally c3 = 0.

Note: The columns of A are independent exactly when N(A) = {zero vector}.


Note : It is the columns with pivots that are guaranteed to be independent

Ex 4 The columns of the n by n identity matrix are independent:


 
I = [1 0 ... 0; 0 1 ... 0; ...; 0 0 ... 1]
Note :To check any set of vectors v1 , ..., vn for independence, put them in the columns of A.Then
solve the system Ac = 0;
1. The vectors are dependent if there is a solution other than c = 0.
2. With no free variables (rank n), there is no nullspace except c = 0; (i.e. N (A) = {0})the
vectors are independent.
3. If the rank is less than n, at least one free variable can be nonzero and the columns are
dependent.
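A minimal sketch of this test in NumPy (the sample vectors are mine, chosen so that v2 = 3v1 and the set is dependent):

```python
import numpy as np

# Put the vectors in the columns of A and check the rank:
# the vectors are independent iff rank equals the number of columns.
v1, v2, v3 = [1, 2, -1], [3, 6, -3], [2, 5, 0]
A = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)
print(rank == A.shape[1])   # False: rank 2 < 3 columns, so the set is dependent
```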

Note :A set of n vectors in Rm must be linearly dependent if n > m.

Ex 5 These three column in R2 can not be independent:


 
A = [1 2 1; 1 3 2]

Sol : To find the combination of the columns producing zero we solve Ac = 0


   
A = [1 2 1; 1 3 2], R2 − R1 → R2 ∼ [1 2 1; 0 1 1] = U

If we give the value 1 to the free variable c3 , then back-substitution in U c = 0 gives c2 = −1,
c1 = 1; i.e., if A = [C1 C2 C3], then C1 − C2 + C3 = 0.

Exercise 2.3.1: Choose three independent columns of U, then make two other choices. Do the same for A. You have found bases for which spaces?
U = [2 3 4 1; 0 6 7 0; 0 0 0 9; 0 0 0 0], A = [2 3 4 1; 0 6 7 0; 0 0 0 9; 4 6 8 2]
 
Solution: Let U = [U1 U2 U3 U4] and A = [C1 C2 C3 C4]. Consider A = [2 3 4 1; 0 6 7 0; 0 0 0 9; 4 6 8 2]; R4 − 2R1 → R4 ∼ [2 3 4 1; 0 6 7 0; 0 0 0 9; 0 0 0 0] = U,
i.e. U is echelon form of A.
Note: Columns of A which have pivot are linearly independent.
Case (i) U1 , U2 , U4 are L.I.(using the note).
Case (ii) U1 , U3 , U4 are L.I.
as consider aU1 + bU3 + cU4 = 0:
a(2, 0, 0, 0)^T + b(4, 7, 0, 0)^T + c(1, 0, 9, 0)^T = (0, 0, 0, 0)^T
⇒ (2a + 4b + c, 7b, 9c, 0)^T = 0 ⇒ 9c = 0, 7b = 0, 2a + 4b + c = 0 ⇒ c = 0, b = 0, a = 0.
Case (iii) U2, U3, U4 are L.I.
as consider aU2 + bU3 + cU4 = 0:
a(3, 6, 0, 0)^T + b(4, 7, 0, 0)^T + c(1, 0, 9, 0)^T = (0, 0, 0, 0)^T
⇒ (3a + 4b + c, 6a + 7b, 9c, 0)^T = 0 ⇒ 9c = 0, 6a + 7b = 0, 3a + 4b + c = 0 ⇒ c = 0, b = 0, a = 0.

Note: Columns of a matrix A are linearly independent which are corresponding to the pivot
column of echelon matrix of A.
Case (i) C1, C2, C4 are L.I. (using the note).
Case (ii) C1, C3, C4 are L.I.
as consider aC1 + bC3 + cC4 = 0:
a(2, 0, 0, 4)^T + b(4, 7, 0, 8)^T + c(1, 0, 9, 2)^T = 0, i.e., S(a, b, c)^T = 0 with S = [2 4 1; 0 7 0; 0 0 9; 4 8 2].
Aug. matrix [S|0]: R4 − 2R1 → R4 ∼ [2 4 1 | 0; 0 7 0 | 0; 0 0 9 | 0; 0 0 0 | 0]
⇒ 2a + 4b + c = 0, 7b = 0, 9c = 0 ⇒ c = 0, b = 0, a = 0.
Case (iii) C2, C3, C4 are L.I.
consider aC2 + bC3 + cC4 = 0:
[C2 C3 C4](a, b, c)^T = 0, say B(a, b, c)^T = 0 with B = [3 4 1; 6 7 0; 0 0 9; 6 8 2].
Aug. matrix [B|0]: R2 − 2R1 → R2, R4 − 2R1 → R4 ∼ [3 4 1 | 0; 0 −1 −2 | 0; 0 0 9 | 0; 0 0 0 | 0]
⇒ 3a + 4b + c = 0, −b − 2c = 0, 9c = 0 ⇒ c = 0, b = 0, a = 0.
In all three cases, the chosen columns form a basis for the column space (a three-dimensional subspace of R^4).

Exercise 2.3.3 : Decide the dependence or independence of


(a) the vectors (1,3,2), (2,1,3) and (3,2,1)
(b) the vectors (1,−3,2), (2,1,−3) and (−3,2,1).
Solution:
(a) Consider a(1, 3, 2)^T + b(2, 1, 3)^T + c(3, 2, 1)^T = 0, i.e., A(a, b, c)^T = 0 with A = [1 2 3; 3 1 2; 2 3 1].
Aug. matrix [A|0]: R2 − 3R1 → R2, R3 − 2R1 → R3 ∼ [1 2 3 | 0; 0 −5 −7 | 0; 0 −1 −5 | 0]
R2 − 5R3 → R3 ∼ [1 2 3 | 0; 0 −5 −7 | 0; 0 0 18 | 0]
⇒ a + 2b + 3c = 0, −5b − 7c = 0, 18c = 0 ⇒ c = 0, b = 0, a = 0.
The vectors are L.I.

(b) Consider 1(1, −3, 2) + 1(2, 1, −3) + 1(−3, 2, 1) = (1 + 2 − 3, −3 + 1 + 2, 2 − 3 + 1) = (0, 0, 0)


⇒ Vectors are L.D.

Exercise 2.3.5: If w1, w2, w3 are independent vectors, show that the differences v1 = w2 − w3, v2 = w1 − w3 and v3 = w1 − w2 are dependent. Find a combination of the v's that gives zero.
Solution: Consider av1 + bv2 + cv3 = 0
a(w2 − w3 ) + b(w1 − w3 ) + c(w1 − w2 ) = 0
(b + c)w1 + (a − c)w2 + (−a − b)w3 = 0
Since w1 , w2 , w3 are L.I.
So b + c = 0, a − c = 0, −a − b = 0
b = −c, a = c , b = −a
a = −b = c take a = 1, b = −1, c = 1
v1 − v2 + v3 = 0

Exercise 2.3.8 : Suppose v1 , v2 , v3 , v4 are vectors in R3 .


(a) These four vectors are dependent because ......
(b) The two vectors v1 and v2 will be dependent if .....
(c) The vectors v1 and (0,0,0) are dependent because .....

Solution: (a) Since dim(R3 ) = 3


Therefore each basis of R3 contains exactly 3 vectors, so any collection of more than 3 vectors in R3 is linearly dependent. Hence the four vectors are L.D.
(b) Let av1 + bv2 = 0 with at least one of a, b nonzero, say a ≠ 0.
Then v1 = (−b/a)v2. So v1, v2 are dependent if there exists α ≠ 0 such that v1 = αv2.
(c) Consider 0·v1 + 1·(0, 0, 0) = (0, 0, 0); here a = 0, b = 1 ≠ 0 is a nontrivial combination.
So v1 and (0,0,0) are L.D.

Lecture 16
2.3 Linear Independence, Basis and
Dimension

Course Outcomes: Students will have understanding about linear inde-


pendence,dependence, spanning a subspace, basis and dimension of a vector
space.
The aim of this section is to explain and use four ideas:

• Linear Independence or dependence.

• spanning a subspace.

• basis for a subspace.

• dimension of a subspace.

Spanning a Subspace: If S = {w1 , w2 , ...., wl } is a set of vectors in a vector


space V, then the span of S is the set of all linear combinations of the vectors
in S.
Span(S) = {c1 w1 + c2 w2 + ...... + cl wl | f or all ci ∈ R}
If every vector in a given vector space can be written as the linear combina-
tion of vectors in a given set S, then S is called a spanning set of the vector
space.

Notes:

• The column space is spanned by its columns.

• The row space is spanned by its rows.

Basis for a Vector Space: A basis for a vector space V is a set of vectors having two properties at once:

• The vectors are linearly independent.(not too many vectors)

• They span the space V .(not too few vectors)

Notes:

• A basis of a vector space is the maximal independent set.

• A basis of a vector space is also a minimal spanning set.

• Spanning involves the column space and independence involves the null
space.

• No elements of a basis will be wasted.

Example: Check whether the following sets are the basis of R3 or not?

(a)B1 = {(1, 2, 2), (−1, 2, 1), (0, 8, 0)}


(b)B2 = {(1, 2, 2), (−1, 2, 1), (0, 8, 6)}
(c)B3 = {(1, 2, 2), (−1, 2, 1)}
(d)B4 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(e)B5 = {(1, 1, −1), (2, 3, 4), (4, 1, −1), (0, 1, −1)}

Solution:
(a)
 
Let A = [1 −1 0; 2 2 8; 2 1 0]
⇒ |A| = −24 ⇒ |A| ≠ 0

The vectors (1, 2, 2), (−1, 2, 1) and (0, 8, 0) are LI.


So, B1 is a basis of R3 .
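This check is easy to automate; a minimal sketch for B1 (the other candidate sets can be tested the same way, with NumPy assumed):

```python
import numpy as np

# Put the three vectors in the columns of a 3x3 matrix:
# they form a basis of R^3 iff the determinant is nonzero.
B1 = np.array([[1, -1, 0],
               [2,  2, 8],
               [2,  1, 0]])
print(np.linalg.det(B1))    # -24.0 != 0, so B1 is a basis of R^3
```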

Dimension of Vector Spaces: Dimension of a vector space is the maxi-


mum number of LI vectors of the vector space.

OR. The no. of elements present in the basis of a vector space is known as
the dim. of vector space.

dimR = 1, dimR2 = 2, dimR3 = 3, ........dimRn = n

Problem Set-2.3
16. Describe the subspace of R3 (is it a line or a plane or R3 ?) spanned by
(a) the two vectors (1, 1, −1) and (−1, −1, 1).
(b) the three vectors (0, 1, 1) and (1, 1, 0) and (0, 0, 0).
(c) the columns of a 3 by 5 echelon matrix with 2 pivots.
(d) all vectors with positive components.
Ans.(a)
 
Let A = [1 −1; 1 −1; −1 1]; R2 ← R2 − R1, R3 ← R3 + R1 ⇒ A ∼ [1 −1; 0 0; 0 0] (echelon form)

Here rank of A=1.


∴ The subspace of R3 spanned by the two vectors (1, 1, −1) and (−1, −1, 1)
is a line passing through the origin of R3 .
(b)
 
Let A = [0 1 0; 1 1 0; 1 0 0]. Interchanging R1 and R2, then applying R3 ← R3 − R1 and R3 ← R3 + R2:
A ∼ [1 1 0; 0 1 0; 0 0 0] (echelon form)

Here rank of A=2.
∴ The subspace of R3 spanned by the three vectors (0, 1, 1) and (1, 1, 0) and (0, 0, 0)
is a plane of R3 .
(c)
 
1 0 0 0 0
Let A = 0 1 0 0 0
0 0 0 0 0

Here rank of A=2.


∴ The subspace of R3 spanned by the the columns of a 3 by 5 echelon matrix
with 2 pivots is a plane.

19. Find a basis for the plane x − 2y + 3z = 0 in R3 . Then find a basis


for the intersection of that plane with the xy-plane. Then find a basis for all
vectors perpendicular to the plane.
Ans. The plane x − 2y + 3z = 0 in R^3 is the nullspace of the matrix A = [1 −2 3], since in matrix form the plane is [1 −2 3](x, y, z)^T = 0.
Here x is the pivot variable and y, z are the free variables.
So, x =2y − 3z    
2y − 3z 2 −3
∴ x =  y  = y 1 + z  0 
z 0  1 
 2 −3 
Basis for the plane is  1  ,  0 
0 1
 
A basis forthe
 intersection
 of the plane x − 2y + 3z = 0 with the xy-plane
 2 
i.e z=0 is  1 
0
 
 
 1 
A basis for all vectors perpendicular to the plane is  −2 
3
 

23 Find bases for the two column spaces. Find bases for the two row spaces.
Find bases for the two nullspace.
 
1 3 2
A= 0 1
 1
1 3 2

Ans.
 
Let A = [1 3 2; 0 1 1; 1 3 2]; R3 ← R3 − R1 ⇒ A ∼ [1 3 2; 0 1 1; 0 0 0] (echelon form)
   
 1 3 
Basis f or C(A) =  0 , 1 
 
1 3
 
   
 1 0 
Basis f or C(AT ) =  3  ,  1 
2 1
 

To find basis for nullspace we have,

Ax = 0 ⇒ [1 3 2; 0 1 1; 0 0 0](x, y, z)^T = (0, 0, 0)^T
⇒ x + 3y + 2z = 0, y + z = 0.
Taking z = 1: y = −1, x = 1. Hence x = (1, −1, 1)^T.
Basis for N(A) = {(1, −1, 1)^T}
 

Lecture 17
2.4 The Four Fundamental Subspaces

Course Outcomes: Students will have understanding about four funda-


mental subspaces of matrices and one sided inverse of rectangular matrices.
The four fundamental subspaces of matrices are as follows :
• The column space C(A).

• The null space N (A).

• The row space C(AT ).

• The left null space N (AT ).


The Column Space C(A): The column space of A is denoted by C(A). Its
dimension is the rank r.

The Null Space N (A): The nullspace of A is denoted by N(A). Its dimen-
sion is n − r.

The Row Space C(AT ): The row space of A is the column space of AT . It
is C(AT ), and it is spanned by the rows of A. Its dimension is also r.

The Left Null Space N (AT ): The left nullspace of A is the nullspace of
AT . It contains all vectors y such that AT y = 0, and it is written N (AT ).
Its dimension is m − r.

Notes
• The nullspace N(A) and row space C(AT ) are subspaces of Rn .

• The left nullspace N (AT ) and column space C(A) are subspaces of Rm .

Existence of Inverses:
let A be a matrix of order m × n with rank r.
(1)Full row rank r = m. Ax = b has at least one solution x for every b if
and only if the columns span Rm . Then A has a right-inverse C such that
AC = Im (m by m). This is possible only if m ≤ n.

(2)Full column rank r = n. Ax = b has at most one solution x for every b


if and only if the columns are linearly independent. Then A has an n by m
left-inverse B such that BA = In . This is possible only if m ≥ n.

Notes: One-sided inverses are B = (AT A)−1 AT and C = AT (AAT )−1

Example. Find a left-inverse and/or a right-inverse (when they exist) for


 
A = [4 0 0; 0 5 0]

Ans. Let A = [4 0 0; 0 5 0].
The matrix A is already in echelon form with 2 pivots.
So, rank of A = r = 2. Here m = 2 and n = 3.
So, m = r = 2
⇒ A is a full row rank matrix
⇒ a right inverse C of A exists and is given by
C = A^T(AA^T)^{−1} = [1/4 0; 0 1/5; 0 0]
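A minimal numerical sketch of this computation (NumPy assumed):

```python
import numpy as np

# Right inverse of the full-row-rank matrix from the example.
A = np.array([[4.0, 0.0, 0.0],
              [0.0, 5.0, 0.0]])
C = A.T @ np.linalg.inv(A @ A.T)   # C = A^T (A A^T)^(-1)
print(C)                           # [[0.25 0], [0 0.2], [0 0]]
print(A @ C)                       # 2x2 identity, so AC = I_m
```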

Lecture 18
2.4 The Four Fundamental Subspaces

Problem Set-2.4

2. Find the dimension and a basis for the four fundamental subspaces for
 
1 2 0 1
A= 0 1 1 0 
1 2 0 1

Ans. Let A = [1 2 0 1; 0 1 1 0; 1 2 0 1]; R3 ← R3 − R1 ⇒ A ∼ [1 2 0 1; 0 1 1 0; 0 0 0 0] (echelon form)

Rank of A = r = 2. Here m = 3, n = 4.
dim C(A) = r = 2
dim C(A^T) = r = 2
dim N(A) = n − r = 2
dim N(A^T) = m − r = 1

The Column Space C(A):


   
 1 2 
Basis f or C(A) =  0  ,  1 
1 2
 

The Null Space N(A):
Ax = 0 ⇒ [1 2 0 1; 0 1 1 0; 0 0 0 0](x1, x2, x3, x4)^T = 0
⇒ x1 + 2x2 + x4 = 0, x2 + x3 = 0
Hence x = (x1, x2, x3, x4)^T = (2x3 − x4, −x3, x3, x4)^T = x3(2, −1, 1, 0)^T + x4(−1, 0, 0, 1)^T
Basis for N(A) = {(2, −1, 1, 0)^T, (−1, 0, 0, 1)^T}

The Row Space C(AT ):


 
1 2 0 1
A= 0 1 1 0 
1 2 0 1

 
A^T = [1 0 1; 2 1 2; 0 1 0; 1 0 1]; R2 ← R2 − 2R1, R4 ← R4 − R1 ∼ [1 0 1; 0 1 0; 0 1 0; 0 0 0]; R3 ← R3 − R2 ∼ [1 0 1; 0 1 0; 0 0 0; 0 0 0] (echelon form)
Basis for C(A^T) = {(1, 2, 0, 1)^T, (0, 1, 1, 0)^T}

The Left Null Space N(A^T):
A^T y = 0 ⇒ [1 0 1; 2 1 2; 0 1 0; 1 0 1](y1, y2, y3)^T = 0 ∼ [1 0 1; 0 1 0; 0 0 0; 0 0 0](y1, y2, y3)^T = 0
⇒ y1 + y3 = 0, y2 = 0


Hence y = (y1, y2, y3)^T = (−y3, 0, y3)^T = y3(−1, 0, 1)^T
Basis for N(A^T) = {(−1, 0, 1)^T}
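The four fundamental subspaces can also be checked symbolically; a minimal sketch using SymPy (an assumption of this illustration):

```python
from sympy import Matrix

# Bases for the four fundamental subspaces of the matrix in Problem 2.
A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 2, 0, 1]])

print(A.columnspace())      # basis of C(A): 2 vectors
print(A.rowspace())         # basis of C(A^T): 2 vectors
print(A.nullspace())        # basis of N(A): 2 vectors
print(A.T.nullspace())      # basis of N(A^T): 1 vector
```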
 

13. Find a basis for each of the four subspaces of


 
0 1 2 3 4
A= 0 1 2 4 6 
0 0 0 1 2

Ans. Let A = [0 1 2 3 4; 0 1 2 4 6; 0 0 0 1 2]; R2 ← R2 − R1 ∼ [0 1 2 3 4; 0 0 0 1 2; 0 0 0 1 2]; R3 ← R3 − R2 ∼ [0 1 2 3 4; 0 0 0 1 2; 0 0 0 0 0]

Basis for C(A) = {(1, 1, 0)^T, (3, 4, 1)^T}
Basis for C(A^T) = {(0, 1, 2, 3, 4)^T, (0, 1, 2, 4, 6)^T}
Basis for N(A) = {(1, 0, 0, 0, 0)^T, (0, −2, 1, 0, 0)^T, (0, 2, 0, −2, 1)^T}
Basis for N(A^T) = {(1, −1, 1)^T}

18. Find a 1 by 3 matrix whose nullspace consists of all vectors in R3 such


that x1 + 2x2 + 4x3 = 0. Find a 3 by 3 matrix with that same nullspace.
Ans. A 1 by 3 matrix whose nullspace consists of all vectors in R^3 such that x1 + 2x2 + 4x3 = 0 is A = [1 2 4], since x1 + 2x2 + 4x3 = 0 in matrix form is [1 2 4](x1, x2, x3)^T = 0.
A 3 by 3 matrix with that same nullspace is A = [1 2 4; 2 4 8; 3 6 12].
24. Construct a matrix with the required property, or explain why you can't.
(a) Column space contains (1, 1, 0)^T and (0, 0, 1)^T; row space contains (1, 2)^T and (2, 5)^T.
(b) Column space has basis (1, 2, 3)^T; nullspace has basis (3, 2, 1)^T.

(c) Dimension of nullspace = 1 + dimension of left nullspace.
(d) Left nullspace contains (1, 3)^T; row space contains (3, 1)^T.
(e) Row space = column space, nullspace ≠ left nullspace.
(e) Row space = column space, nullspace 6= left nullspace.
Ans.
(a) A matrix with the required property is A = [1 0; 1 0; 0 1].

(b) Impossible: dimensions 1 + 1 6= 3.


(c) A matrix with the required property is A = [1 1].
(d) A matrix with the required property is A = [−9 −3; 3 1].

(e) Impossible: Row space = column space requires m = n. The nullspace is the orthogonal complement of the row space, and the left nullspace is the orthogonal complement of the column space, so equal row and column spaces force N(A) = N(A^T).

Lecture 19
Chapter 3: Orthogonality

3.1 Orthogonal Vectors and Subspaces

Course Outcomes: Students will be acquainted with orthogonal vectors,


orthonormal vectors, orthonormal subspaces, and orthogonal compliment of
subspaces.

Length of a vector: It is denoted by ||x||.


Let x = (x1, x2). Length in 2D: ||x|| = √(x1² + x2²); length squared: ||x||² = x1² + x2².
Let x = (x1, x2, x3). Length in 3D: ||x|| = √(x1² + x2² + x3²); length squared: ||x||² = x1² + x2² + x3².
Let x = (x1, x2, ....., xn). Length in R^n: ||x|| = √(x1² + x2² + ....... + xn²); length squared: ||x||² = x1² + x2² + ....... + xn².

Inner Product:
Let x = (x1 , x2 ) and y = (y1 , y2 ). Then the inner product of two vectors x
and y is denoted by xT y and defined as xT y = x1 y1 + x2 y2
Let x = (x1 , x2 , x3 ) and y = (y1 , y2 , y3 ). Then xT y = x1 y1 + x2 y2 + x3 y3
Let x = (x1 , x2 , ......, xn ) and y = (y1 , y2 , ...., yn ).
Then xT y = x1 y1 + x2 y2 + ..... + xn yn

Note: x^T x = (x1, x2, ..., xn)(x1, x2, ..., xn)^T = x1² + x2² + ..... + xn² = ||x||².
Hence inner product of a vector with itself is equal to the length square of
the vector.

• The inner product xT y is zero if and only if x and y are orthogonal


vectors.

• If x^T y > 0, their angle is less than 90 degrees. If x^T y < 0, their angle is greater than 90 degrees.

• The only vector with length zero and the only vector orthogonal to
itself is the zero vector.

Orthogonal Vectors:
Two vectors x and y are said to be orthogonal iff xT y = 0
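A minimal sketch of length and orthogonality in NumPy (the sample vectors are mine):

```python
import numpy as np

# Length and inner product in R^3.
x = np.array([1.0, 2.0, 2.0])
y = np.array([2.0, -1.0, 0.0])
print(np.linalg.norm(x))    # ||x|| = 3.0
print(x @ y)                # x^T y = 0.0, so x and y are orthogonal
```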

Orthogonal Subspaces:
Two subspaces V and W of the same space Rn are orthogonal if every vector
v in V is orthogonal to every vector w in W i.e v T w = 0 for all v and w.

Examples

• x-axis and y-axis are subspaces of R2 and every vector of x-axis is


orthogonal to every vector in y-axis. So, x-axis ⊥ y-axis in R2

• y = x line ⊥ y = −x line in R2 .

• All the three axes in R3 are orthogonal to each other.

Notes

• The subspace {0} is orthogonal to all subspaces.

• A line can be orthogonal to another line, or it can be orthogonal to a


plane, but a plane cannot be orthogonal to a plane.

Fundamental theorem of orthogonality: The row space is orthogonal to
the nullspace (in Rn ). The column space is orthogonal to the left nullspace
(in Rm ).

Orthogonal Complement Given a subspace V of Rn , the space of all vec-


tors orthogonal to V is called the orthogonal complement of V. It is denoted
by V ⊥ = “V perp.”

Examples

• x-axis is the orthogonal complement of y-axis in R2 .

• y = x line is the orthogonal complement of y = −x line in R2 .

• x-axis is the orthogonal complement of yz-plane in R3 .

Fundamental Theorem of Linear Algebra: The nullspace is the orthog-


onal complement of the row space in Rn . The left nullspace is the orthogonal
complement of the column space in Rm .

Lecture 20
3.1 Orthogonal Vectors and Subspaces

Problem Set-3.1

1. Which pairs are orthogonal among the vectors v1 , v2 , v3 , v4 ?


       
1 4 1 1
 2   0   −1   1 
v1 =  −2  , v2 =  4  , v3 =  −1
     , v4 =  
  1 
1 0 −1 1

Ans.
Here v1^T v2 ≠ 0, v1^T v3 = 0, v1^T v4 ≠ 0, v2^T v3 = 0, v2^T v4 ≠ 0, v3^T v4 ≠ 0.
So v1 & v3 and v2 & v3 are the orthogonal pairs.

7. Find the lengths and the inner product of x = (1, 4, 0, 2) and y =


(2, −2, 1, 3).

Ans.
x = (1, 4, 0, 2) ⇒ ||x|| = √21
y = (2, −2, 1, 3) ⇒ ||y|| = √18
x^T y = 2 − 8 + 0 + 6 = 0 ⇒ x ⊥ y

9. Find a basis for the orthogonal complement of the row space of A:


 
1 0 2
A=
1 1 4
Split x = (3, 3, 3) into a row space component xr and a nullspace component
xn .
Ans. Basis for the orthogonal complement of the row space of A is same as
basis for nullspace. i.e. C(AT )⊥ = N (A).
The Null Space N (A):

 Ax = 0
  u  
1 0 2 0
⇒  v  =
1 1 4 0
w

⇒ u + 2w = 0, u + v + 4w = 0
⇒ u = −2, v = −2, w = 1
 
u
Hence x =  v 
w
 
−2
=  −2 
1
 
 −2 
Basis f or N (A) =  −2 
1
 

 
 −2 
So, Basis f or C(AT )⊥ =  −2 
1
 

x = xr + xn
   
3 −2
⇒ 3
  = xr + −2
 
3 1
     
3 −2 5
⇒ xr =  3 − −2 = 5 
 
3 1 2

12. Show that x − y is orthogonal to x + y if and only if ||x|| = ||y||


Proof.
x − y is orthogonal to x + y

⇔ (x − y) ⊥ (x + y)
⇔ (x − y)^T(x + y) = 0
⇔ (x^T − y^T)(x + y) = 0
⇔ x^T x + x^T y − y^T x − y^T y = 0
⇔ ||x||² − ||y||² = 0 (since x^T y = y^T x)
⇔ ||x|| = ||y||

Lecture-21

3.2 Cosines and Projections Onto Lines


Course outcome: The course outcome of this article is to know about the Cosine
angle between two lines and also the projection matrix.

Inner Product and Cosines

The cosine of the angle is directly related to the inner product. For this, consider the triangle in the two-dimensional case. Suppose the vectors a and b make angles α and β with the x-axis as in the figure. The length ||a|| is the hypotenuse in the triangle OaQ. So, the sine and cosine of α are sin α = a2/||a||, cos α = a1/||a||.

We find the cosine of the angle θ = β − α using inner products. For angle β, the sine is b2/||b|| and the cosine is b1/||b||.
The cosine formula:
cos θ = cos(β − α) = cos β cos α + sin β sin α = (a1 b1 + a2 b2)/(||a|| ||b||) = a^T b/(||a|| ||b||) = <a, b>/(||a|| ||b||)

Law of Cosines: ||b − a||² = ||b||² + ||a||² − 2||b|| ||a|| cos θ. When θ is a right angle, we have ||b − a||² = ||b||² + ||a||², which is the Pythagoras Theorem.
||b − a||² = (b − a)^T(b − a) = b^T b − b^T a − a^T b + a^T a = b^T b − 2a^T b + a^T a = ||b||² − 2||a|| ||b|| cos θ + ||a||².
When θ is a right angle, cos θ = 0. Thus ||b − a||² = ||a||² + ||b||².

Projection Onto a Line


Suppose that we want to find the distance from a point b to the line in the direction of the vector a. We look for the point p on the line that is closest to b. The line connecting b to p is perpendicular to a. The situation is the same when we are given a plane or any subspace S instead of a line: the problem is again to find the point p on the subspace that is closest to b. Every point on the line is a multiple of a. So,
p = x̂a, where x̂ = a^T b / a^T a.
Hence, the projection of the vector b onto the line in the direction of a is
p = x̂a = (a^T b / a^T a) a = (a^T b / ||a||²) a.

Example 1: Project b = (1, 2, 3) onto the line through a = (1, 1, 1) to get x̂ and p.
Solution: x̂ = a^T b / a^T a = (1·1 + 1·2 + 1·3)/(1² + 1² + 1²) = 6/3 = 2. The projection is p = x̂a = 2(1, 1, 1) = (2, 2, 2). The angle between a and b satisfies cos θ = a^T b/(||a|| ||b||) = 6/(√3 √14).
Schwarz Inequality: |a^T b| ≤ ||a|| ||b||.

Exercise-3.2

1. (a) Given any two positive numbers x and y, choose the vector b equal to (√x, √y), and choose a = (√y, √x). Apply the Schwarz inequality to compare the arithmetic mean (x + y)/2 with the geometric mean √(xy).
Solution: With a = (√y, √x) and b = (√x, √y), the Schwarz inequality gives
a^T b ≤ ||a|| ||b||
⇒ √(xy) + √(xy) ≤ √((√y)² + (√x)²) √((√x)² + (√y)²)
⇒ 2√(xy) ≤ √(x + y) √(x + y) = x + y
⇒ √(xy) ≤ (x + y)/2 (G.M. ≤ A.M.)

(b) Using the triangle inequality ||x + y|| ≤ ||x|| + ||y||, deduce the Schwarz inequality.
Solution: From the triangle inequality, we know
||x + y|| ≤ ||x|| + ||y||
⇒ ||x + y||² ≤ (||x|| + ||y||)²
⇒ (x + y)^T(x + y) ≤ ||x||² + 2||x|| ||y|| + ||y||²
⇒ x^T x + x^T y + y^T x + y^T y ≤ ||x||² + 2||x|| ||y|| + ||y||²
⇒ ||x||² + 2x^T y + ||y||² ≤ ||x||² + 2||x|| ||y|| + ||y||²
⇒ x^T y ≤ ||x|| ||y|| (Schwarz inequality).

3. By using the correct b in the Schwarz inequality, prove that
(a1 + a2 + · · · + an)² ≤ n(a1² + a2² + · · · + an²).
When does equality hold?


Solution: Let b = (1, 1, · · · , 1) and a = (a1, a2, · · · , an). Using the Schwarz inequality, we have
a^T b ≤ ||a|| ||b||
⇒ a1 + a2 + · · · + an ≤ √(a1² + a2² + · · · + an²) √(1² + 1² + · · · + 1²) = √n √(a1² + a2² + · · · + an²)
⇒ (a1 + a2 + · · · + an)² ≤ n(a1² + a2² + · · · + an²).
Equality holds when a is a multiple of b, i.e., a1 = a2 = · · · = an.

Assignments
Exercise-3.2, Q. 5,9.

Lect-22

Projection Matrix of Rank 1


The projection onto a line is carried out by a projection matrix P, which is symmetric and satisfies P² = P, where P = aa^T/a^T a.
Example 1: The matrix that projects onto the line through a = (1, 1, 1) is
P = aa^T/a^T a = (1/3)[1 1 1; 1 1 1; 1 1 1].
Here, rank(P) = 1.

Remark 1:
1. The projection matrix is the same if a is doubled to a = (2, 2, 2):
P = aa^T/a^T a = (1/12)[4 4 4; 4 4 4; 4 4 4] = (1/3)[1 1 1; 1 1 1; 1 1 1].
2. To project b onto a, multiply it by the projection matrix P: p = Pb, with P = aa^T/a^T a.
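A minimal sketch of the rank-one projection matrix in NumPy (the test vector b is mine):

```python
import numpy as np

# Projection matrix onto the line through a = (1, 1, 1).
a = np.array([[1.0], [1.0], [1.0]])          # column vector
P = (a @ a.T) / (a.T @ a).item()             # P = a a^T / a^T a
print(P)                                     # every entry equals 1/3
print(np.allclose(P @ P, P))                 # True: P^2 = P
print(np.trace(P))                           # 1.0 (see Exercise 8 below)
b = np.array([1.0, 2.0, 3.0])
print(P @ b)                                 # projection [2. 2. 2.]
```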
Exercise-3.2

8. Prove that the trace of P = aa^T/a^T a, which is the sum of its diagonal entries, always equals 1.
Solution: Let a = (a1, a2, · · · , an)^T. Then
P = aa^T/a^T a = (1/(a1² + a2² + · · · + an²)) [a1² a1a2 ... a1an; a2a1 a2² ... a2an; ...; ana1 ana2 ... an²]
Hence tr(P) = (a1² + a2² + · · · + an²)/(a1² + a2² + · · · + an²) = 1.

17. Draw the projection of b onto a, and also compute it from p = x̂a:
(a) b = (cos θ, sin θ)^T and a = (1, 0)^T
Solution:
p = x̂a = (a^T b / a^T a) a = (cos θ / 1)(1, 0)^T = (cos θ, 0)^T

   
(b) b = (1, 1)^T and a = (1, −1)^T
Solution:
p = x̂a = (a^T b / a^T a) a = (0/2)(1, −1)^T = (0, 0)^T

Assignments
Exercise-3.2, Q. 11,19.

Lecture-23

3.3 Projections and Least Squares


A system of equations Ax = b has a solution iff b ∈ C(A).

a11 x1 + a12 x2 + · · · + a1n xn = b1


a21 x1 + a22 x2 + · · · + a2n xn = b2
.. (1)
.
am1 x1 + am2 x2 + · · · + amn xn = bm
Equation (1) can be written as
       
a11 a12 a1n b1
       
a  a  a  b 
 21   22   2n   2
 .  x1 +  .  x2 + · · ·  .  xn =  .  .
 ..   ..   ..   .. 
       
am1 am2 amn bm
Let the co-efficient vector of xi be ai . Now, ∈ C(A),i.e, there exists c1 , c2 , · · · , cn such
 b
x1
 
n x 
X  2
that b = ci ai . Then for xi = ci , x =  .  is a solution of (1).
 .. 
i=1  
xn
If x is a solution of (1), then ci = xi , i.e., b is a linear combination of columns of A,
i.e., b ∈ C(A).
Consider a system of linear equations

Ax = b (2)

with b ∉ C(A). Then (2) has no solution, and one must choose a best-fit solution x̂.

Since b ∉ C(A), let us find the vector p in C(A) that is closest to b, so that the error e = b − p has minimum length, i.e., ||e||² = ||b − p||² is minimum. Hence it is called the least squares approximation.
||e||² is minimum
⇐⇒ e ⊥ C(A)
⇐⇒ e ∈ N(A^T)
⇐⇒ A^T e = 0
⇐⇒ A^T(b − p) = 0
⇐⇒ A^T(b − A x̂) = 0
⇐⇒ A^T b = A^T A x̂

i.e., the normal equations A^T A x̂ = A^T b.
Best estimate: x̂ = (A^T A)^{−1} A^T b.
Projection: p = A x̂ = A(A^T A)^{−1} A^T b.
Projection matrix: P = A(A^T A)^{−1} A^T.
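A minimal numerical sketch of these formulas (NumPy assumed; the data anticipates Exercise 1 of this section, solved below):

```python
import numpy as np

# Least squares via the normal equations A^T A x = A^T b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # best estimate
p = A @ x_hat                               # projection of b onto C(A)
print(x_hat)                                # [1/3, 1/3]
print(A.T @ (b - p))                        # ~0: the error is perpendicular to C(A)
```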

Remark 1:
1. Suppose b is actually in the column space of A: it is a combination b = Ax of the columns. Then the projection of b is still b:
p = A(A^T A)^{−1} A^T Ax = Ax = b.
The closest point p is just b itself, which is obvious.
2. At the other extreme, suppose b is perpendicular to every column, so A^T b = 0 (b is in the left nullspace, b ∈ N(A^T)). In this case b projects to the zero vector:
p = A(A^T A)^{−1} A^T b = A(A^T A)^{−1} 0 = 0.
3. When A is square and invertible, the column space is the whole space. Every vector projects to itself: p equals b, and x̂ = x. If A is invertible,
p = A(A^T A)^{−1} A^T b = AA^{−1}(A^T)^{−1} A^T b = b.
This is the only case when we can take apart (A^T A)^{−1} and write it as A^{−1}(A^T)^{−1}. When A is rectangular, that is not possible.
4. Suppose A has only one column, containing a. Then the matrix A^T A is the number a^T a, and x̂ is a^T b/a^T a.

5. AT A has the same nullspace as A.

If Ax = 0, then AT Ax = 0. Vectors x in the nullspace of A are also in the


nullspace of AT A. To go in the other direction, start by supposing that AT Ax = 0,
and take the inner product with x to show that Ax = 0 :

xT AT Ax = 0 ⇒k Ax k2 = 0 ⇒ Ax = 0.

Hence, the two nullspaces are identical.

6. If A has independent columns , then N (AT A) = N (A) = {0}.

7. If A has independent columns, then AT A is square, symmetric, and invertible.

Projection Matrices

P = Projection matrix, which projects a vector onto C(A)


= A(AT A)−1 AT

i.e, p = P b.
Note that

P 2 = A(AT A)−1 AT A(AT A)−1 AT


= A(AT A)−1 (AT A)(AT A)−1 AT
= A(AT A)−1 AT
=P

and

P T = [A(AT A)−1 AT ]T
= (AT )T [(AT A)−1 ]T AT
= A(AT A)−1 AT
= P.

Suppose that A is invertible. Then,

P = A(AT A)−1 AT
= AA−1 (AT )−1 AT
= I.

Hence, p = P b = Ib = b. Therefore, error = p − b = 0.

Exercise-3.3
   
1. Solve Ax = b by least squares, and find p = A x̂, if A = [1 0; 0 1; 1 1] and b = (1, 1, 0)^T. Verify that the error b − p is perpendicular to the columns of A.
Solution:
x̂ = (A^T A)^{−1} A^T b = (1/3)[2 −1; −1 2][1 0 1; 0 1 1](1, 1, 0)^T = (1/3, 1/3)^T
and
p = A x̂ = [1 0; 0 1; 1 1](1/3, 1/3)^T = (1/3, 1/3, 2/3)^T.
Here, error = b − p = (1, 1, 0)^T − (1/3, 1/3, 2/3)^T = (2/3, 2/3, −2/3)^T.
b − p is perpendicular to the columns of the matrix:
(b − p)^T C1 = 2/3 + 0 − 2/3 = 0 and (b − p)^T C2 = 0 + 2/3 − 2/3 = 0.
1
   
1 −1   4
  C  
4. The following system has no solution: Ax =  1 0 
   = 5 = b.
D
 
1 1 9
Sketch and solve a straight-line fit that leads to the minimization of the quadratic
(C − D − 4)2 + (C − 5)2 + (C + D − 9)2 . What is the projection of b onto the
column space of A?
Solution:
x̂ = (A^T A)^{−1} A^T b = [3 0; 0 2]^{−1}[1 1 1; −1 0 1](4, 5, 9)^T = [1/3 0; 0 1/2](18, 5)^T = (6, 5/2)^T.
Now,
p = A x̂ = [1 −1; 1 0; 1 1](6, 5/2)^T = (7/2, 6, 17/2)^T.

Assignments
Exercise-3.3, Q. 2,9,12,24.

Lect-24

Least Squares Fitting a Straight Line

b = C + Dt

or

C + Dt1 = b1
C + Dt2 = b2
..
.
C + Dtm = bm

   
1 t1   b1
   
1 t  C b 
2  2
or  . =  .  or Ax = b, x b = (C,
b D).
   b
 ..  .. 

 D
   
1 tm bm
m
X
E 2 =k b − Ax k2 = (bi − C − Dti )2 , where k E k2 is minimum for some C and D.
i=1

∂E 2
=0 (1)
∂C

∂E 2
=0 (2)
∂D

Solving (1) and (2) for C and D we get the required straight line

b = C + Dt.
or

 
Equivalently, the normal equations A^T A x̂ = A^T b with A = [1 t1; 1 t2; · · · ; 1 tm] become
[m Σti; Σti Σti²](Ĉ, D̂)^T = (Σbi, Σti bi)^T,
where the sums run over i = 1, · · · , m.
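A minimal sketch of the straight-line fit (NumPy assumed; the three data points are invented for illustration):

```python
import numpy as np

# Fit b = C + D t by least squares.
t = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 4.0])

A = np.column_stack([np.ones_like(t), t])    # columns: 1 and t
C, D = np.linalg.solve(A.T @ A, A.T @ b)     # normal equations
print(C, D)                                  # best line b = C + D t
```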

Exercise-3.3

6. Find the projection of b onto the column space of A:
A = [1 1; 1 −1; −2 4], b = (1, 2, 7)^T.
Split b into p + q, with p in the column space and q perpendicular to that space. Which of the four subspaces contains q?

Solution:
x̂ = (A^T A)^{−1} A^T b = (1/44)[18 8; 8 6](−11, 27)^T = (1/44)(18, 74)^T
p = A x̂ = (1/44)(92, −56, 260)^T.
Now find q such that b = p + q ⇒ q = b − p = (1, 2, 7)^T − (1/44)(92, −56, 260)^T = (1/44)(−48, 144, 48)^T.
Thus q^T A = (0, 0): q is orthogonal to both columns of A. Hence q belongs to the left nullspace N(A^T).

Assignments
Exercise- 3.3, Q. 12, 24.

Lect-25

4.2 Properties of the Determinant





1. det[a b; c d] = |a b; c d| = ad − bc.
2. det[a b c; d e f; g h i] = a·|e f; h i| − b·|d f; g i| + c·|d e; g h|.
3. The determinant of the identity matrix is 1, e.g., |1 0; 0 1| = 1 and |1 0 0; 0 1 0; 0 0 1| = 1.
4. The determinant changes sign when two rows are interchanged: |a b; c d| = −|c d; a b|.
5. The determinant depends linearly on the first row: |a + a′ b + b′; c d| = |a b; c d| + |a′ b′; c d| and |ta tb; c d| = t|a b; c d|.
6. If any two rows of A are equal, then det(A) = 0, e.g., |a c; a c| = 0.
7. Subtracting a multiple of one row from another row leaves the determinant unchanged: |a − tc b − td; c d| = |a b; c d|.
8. If A is singular, then det(A) = 0; if A is nonsingular, then det(A) ≠ 0.

9. det(AB) = det(A)det(B)

10. det(A) = det(AT )
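These rules can be spot-checked numerically; a minimal sketch (NumPy assumed, random test matrices):

```python
import numpy as np

# Spot-check the determinant rules on random matrices.
rng = np.random.default_rng(0)
A, B = rng.random((3, 3)), rng.random((3, 3))

print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))    # rule 9
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # rule 10
A_swapped = A[[1, 0, 2]]                                  # exchange rows 1 and 2
print(np.isclose(np.linalg.det(A_swapped),
                 -np.linalg.det(A)))                      # rule 4: sign flips
```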


Exercise-4.2

4. By applying row operations to produce an upper triangular U, compute
(i) det [1 2 −2 0; 2 3 −4 1; −1 −2 0 2; 0 2 5 3].
Solution:
|1 2 −2 0; 2 3 −4 1; −1 −2 0 2; 0 2 5 3|
= |1 2 −2 0; 0 −1 0 1; 0 0 −2 2; 0 2 5 3| (R2 ← R2 − 2R1, R3 ← R3 + R1)
= |1 2 −2 0; 0 −1 0 1; 0 0 −2 2; 0 0 5 5| (R4 ← R4 + 2R2)
= |1 2 −2 0; 0 −1 0 1; 0 0 −2 2; 0 0 0 10| (R4 ← R4 + (5/2)R3)
= 1 × (−1) × (−2) × 10 = 20.

 
(ii) det [2 −1 0 0; −1 2 −1 0; 0 −1 2 −1; 0 0 −1 2].
Solution:
|2 −1 0 0; −1 2 −1 0; 0 −1 2 −1; 0 0 −1 2|
= |2 −1 0 0; 0 3/2 −1 0; 0 −1 2 −1; 0 0 −1 2| (R2 ← R2 + (1/2)R1)
= |2 −1 0 0; 0 3/2 −1 0; 0 0 4/3 −1; 0 0 −1 2| (R3 ← R3 + (2/3)R2)
= |2 −1 0 0; 0 3/2 −1 0; 0 0 4/3 −1; 0 0 0 5/4| (R4 ← R4 + (3/4)R3)
= 2 × (3/2) × (4/3) × (5/4) = 5.

5. Find the determinants of:
(a) a rank one matrix A = (1, 4, 2)^T (2 −1 2);
(b) the upper triangular matrix U = [4 4 8 8; 0 1 2 2; 0 0 2 6; 0 0 0 2];
(c) the lower triangular matrix U^T;
(d) the inverse matrix U^{−1};
(e) the reverse-triangular matrix that results from row exchanges, M = [0 0 0 2; 0 0 2 6; 0 1 2 2; 4 4 8 8].
Solution:
(a) A = (1, 4, 2)^T (2 −1 2) = [2 −1 2; 8 −4 8; 4 −2 4]. Hence det(A) = 0, since the rows are all multiples of (2 −1 2).
(b) |U| = 4 × 1 × 2 × 2 = 16.
(c) |U^T| = |U| = 16.
(d) |U^{−1}| = 1/|U| = 1/16.
(e) Reversing the four rows of M back to U takes two row exchanges (R1 ↔ R4, R2 ↔ R3), so |M| = (−1)²|U| = 4 × 1 × 2 × 2 = 16.

Assignments
Exercise-4.2, Q. 2,6,13.

LECTURE-26

4.4 APPLICATIONS OF DETERMINANTS

This section follows four major applications: inverse of A, solving Ax = 0 using

Cramer’s Rule, Area or Volume and pivots.

AN INVERSE FORMULA

Let A be invertible n × n matrix. Then

A^{−1} = (1/det A) adj A

EXAMPLE 1. Compute A^{−1} for the 2 × 2 matrix
A = [2 3; 4 7].
Solution: Since det A = 2, the inverse exists. We know that adj A = transpose of the cofactor matrix of the matrix A.
∴ adj A = [7 −3; −4 2]
Hence A^{−1} = (1/2)[7 −3; −4 2].

CRAMER'S RULE
Let A be an invertible n × n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by
xi = det Ai / det A, i = 1, 2, · · · , n,
where Ai is the matrix A with its i-th column replaced by b.
EXAMPLE 2. (Text Q. 14) Use Cramer’s rule to solve the system

(a) 2x1 + 5x2 = 1

x1 + 4x2 = 2

(b) 2x1 + x2 = 1

x1 + 2x2 + x3 = 0

x2 + 2x3 = 0

Solution (a): Write the system as Ax = b, where
A = [2 5; 1 4], A1 = [1 5; 2 4], A2 = [2 1; 1 2].
Since det A = 3, the system has a unique solution. By Cramer's rule,
x1 = det A1 / det A = −6/3 = −2    (1)
x2 = det A2 / det A = 3/3 = 1    (2)
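A minimal sketch of Cramer's rule applied to part (a) (NumPy assumed):

```python
import numpy as np

# Cramer's rule: replace column i of A by b, then take the ratio of determinants.
A = np.array([[2.0, 5.0], [1.0, 4.0]])
b = np.array([1.0, 2.0])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b              # A_i has b in column i
    x[i] = np.linalg.det(Ai) / det_A
print(x)                      # [-2.  1.]
```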

AREA AND VOLUME

[Figure: a parallelogram in the xy-plane with base OA along the x-axis, further vertices B, C, D, and E the foot of the height DE.]
From the figure, OA = l (length) and DE = h (height), where h = |E − D|.
The area of the parallelogram is l × h = |det A| (determined by the columns of A).
The area of the triangle determined by the columns of A is half the parallelogram: (1/2)|det A|.

EXAMPLE 3(a) (TEXT Q. 2) Draw the triangle with vertices A = (2, 2), B = (−1, 3), and C = (0, 0). By regarding it as half of a parallelogram, explain why its area equals
area(ABC) = (1/2)|det[2 2; −1 3]| = 4.

[Figure: the triangle with vertices A(2, 2), B(−1, 3), C(0, 0).]

EXAMPLE 3(b) Move the third vertex to C = (1, −4) and justify the formula
area(ABC) = (1/2)|det[x1 y1 1; x2 y2 1; x3 y3 1]| = (1/2)|det[2 2 1; −1 3 1; 1 −4 1]| = 19/2.
[Figure: the triangle with vertices A(2, 2), B(−1, 3), C(1, −4).]

EXAMPLE 3(c) Sketch A′ = (1, 6), B′ = (−2, 7), and C′ = (0, 0) and their relation to A, B, C.
area(A′B′C′) = (1/2)|det[1 6 1; −2 7 1; 0 0 1]| = (1/2)|det[1 6; −2 7]| = 19/2
[Figure: the triangles ABC and A′B′C′ drawn together; A′B′C′ is ABC shifted so that C moves to the origin.]

The formula is justified because A′ = (1, 6), B′ = (−2, 7), and C′ = (0, 0) are translations of the vertices A = (2, 2), B = (−1, 3), and C = (1, −4) by the same vector (−1, 4), and translation does not change area.

EXAMPLE 4 (Text Q. 29) A box has edges from (0, 0, 0) to (3, 1, 1), (1, 3, 1), and (1, 1, 3). Find its volume, and also find the area of each parallelogram face.

Solution: (First Part)
Let OA = u = (3, 1, 1), OB = v = (1, 3, 1), OC = w = (1, 1, 3).
The volume of the box is the triple product V = |(u × v) · w| = |det[3 1 1; 1 3 1; 1 1 3]| = 20.
(Second Part) Area of the face
OAEB = ||OA × OB|| = ||u × v||.
Now, u × v = det[i j k; 3 1 1; 1 3 1] = −2i − 2j + 8k.
Hence ||u × v|| = √((−2)² + (−2)² + 8²) = √72 = 6√2.
Similarly, OBGC = ||OB × OC|| = ||v × w||.

Now, v × w = det[i j k; 1 3 1; 1 1 3] = 8i − 2j − 2k.
Hence ||v × w|| = √(8² + (−2)² + (−2)²) = √72 = 6√2.
OADC = ||OA × OC|| = ||u × w||.
Now, u × w = det[i j k; 3 1 1; 1 1 3] = 2i − 8j + 2k.
Hence ||u × w|| = √(2² + (−8)² + 2²) = √72 = 6√2.

Q.27 The Parallelogram with sides (2, 1) and (2, 3) has the same area as the par-

allelogram with sides (2, 2) and (1, 3). Find those areas from 2 by 2 determinants

and say why they must be equal.

LECTURE-27

5 EIGENVALUES AND EIGENVECTORS

5.1 INTRODUCTION

Eigenvalues and eigenvectors form one of the most important and interesting topics in Linear Algebra. Some basic concepts of determinants and matrices will be used to study eigenvalue and eigenvector problems. One important point is that eigenvalues and eigenvectors can be found only for square matrices.

Eigenvalues and Eigenvectors will help to solve many problems in Linear Al-

gebra. However the basic concepts of Eigenvalues and Eigenvectors are useful

throughout pure and applied mathematics. Eigenvalues are also used to study

differential equations and continuous dynamics systems, they provide critical in-

formation in engineering design and they arise naturally in fields such as physics

and chemistry.

Consider a square matrix A and a vector x. Multiplying x by A produces the output vector Ax. It is like a function: x goes in and Ax comes out. Since Ax is a vector, a natural question is about its direction: in general the output vector points in a different direction from x. But for certain special vectors x, Ax is parallel to x. Then we can write
Ax = λx.
This special vector x is called an eigenvector of the matrix A, and the scalar λ is called the eigenvalue of the matrix A.

Example 1. Let
A = [3 −2; 1 0], u = (−1, 1)^T, and v = (2, 1)^T.
Are both u and v eigenvectors of A?
Solution:
Au = [3 −2; 1 0](−1, 1)^T = (−5, −1)^T, which is not a scalar multiple of u. Therefore u is not an eigenvector of A.
Again,
Av = [3 −2; 1 0](2, 1)^T = (4, 2)^T = 2(2, 1)^T = 2v.
Thus v is an eigenvector of A corresponding to the eigenvalue λ = 2, and Av is a multiple of v.

[Figure: u and Au point in different directions, while v and Av lie on the same line.]
NOTE: If λ is positive, then Ax and x point in the same direction.

Example 2. Let
A = [1 3; 2 2], u = (−3, 2)^T, and v = (2, −2)^T.
Are both u and v eigenvectors of A?
Solution:
Au = [1 3; 2 2](−3, 2)^T = (3, −2)^T = (−1)(−3, 2)^T = −u.
Therefore u is an eigenvector corresponding to the eigenvalue −1, and Au is a multiple of u.
Again,
Av = [1 3; 2 2](2, −2)^T = (−4, 0)^T, which is not a multiple of v. Thus v is not an eigenvector of A.

[Figure: Au lies on the line through u but points the opposite way; Av does not lie on the line through v.]
NOTE: If λ is negative, then Ax and x point in opposite directions.

From the above two examples it is clear that only certain special vectors are eigenvectors, each corresponding to an eigenvalue λ of the matrix A. The scalar λ may be zero or nonzero (positive, negative, or imaginary). The number of eigenvalues depends on the order of the matrix: if the order is n × n, there are n eigenvalues, and when these are distinct there are n linearly independent eigenvectors. But sometimes an eigenvalue repeats; in that case the eigenvectors need not all be linearly independent.

DEFINITION
An eigenvector of an n × n matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called an eigenvector corresponding to λ.

Two possible cases of λ in the special equation Ax = λx
Case 1: The solution of Ax = 0.
If λ = 0, then the special equation becomes Ax = 0 and x is in the nullspace of A. Since the eigenvector x is nonzero, the nullspace contains a nonzero vector, so A is a singular matrix, i.e., det(A) = 0.
NOTE: λ = 0 is an eigenvalue of A if and only if A is singular.

Case 2: The solutions of Ax = λx.
If λ ≠ 0, the equation Ax = λx becomes (A − λI)x = 0. Again x is a nonzero vector in the nullspace of (A − λI), so (A − λI) is a singular matrix, i.e., det(A − λI) = 0. This is the key equation for finding the eigenvalues, and is called the eigenvalue equation or characteristic equation.
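A minimal numerical sketch of the characteristic-equation idea (NumPy assumed; the matrix anticipates Example 3 below):

```python
import numpy as np

# Eigenvalues are the roots of det(A - lambda*I) = 0.
A = np.array([[4.0, -5.0],
              [2.0, -3.0]])
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                  # 2 and -1 (order may vary)
lam = eigvals[0]
x = eigvecs[:, 0]
print(A @ x - lam * x)          # ~0: A x = lambda x holds
```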

EXAMPLES OF EIGENVALUES AND EIGENVECTORS

The first step is to understand how eigenvalues can be useful. One of their applications is to ordinary differential equations.

Example 3. Let us consider the coupled pair of equations
dv/dt = 4v − 5w, v = 8 at t = 0,
dw/dt = 2v − 3w, w = 5 at t = 0.
The system in matrix form is
du/dt = Au, with u = u(0) at t = 0,   (1)
where
u(t) = (v(t), w(t))^T, u(0) = (8, 5)^T, A = [4 −5; 2 −3].

For a single equation du/dt = au, the pure exponential solution is u(t) = e^{at}u(0); for the system we look for solutions of the same exponential form.

Now we find the eigenvalues and eigenvectors of the coefficient matrix A of equation (1).
The characteristic equation is det(A − λI) = 0:
|4 − λ −5; 2 −3 − λ| = 0
i.e., λ² − λ − 2 = 0
i.e., λ = −1 or λ = 2.
Therefore λ1 = −1 and λ2 = 2 are the eigenvalues of the matrix A.

NOTE: A matrix with zero determinant is singular, so there must be nonzero

vector x in its nullspace. In fact the nullspace contains a whole line of eigenvectors.

It is a subspace.

Let x1 be the eigenvector corresponding to the eigenvalue λ1 = −1:
(A + I)x = 0, i.e., [5 −5; 2 −2](y, z)^T = (0, 0)^T.
The solution (the first eigenvector) is any nonzero multiple of x1.
Eigenvector for λ1 = −1: x1 = (1, 1)^T.

Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 2:
(A − 2I)x = 0, i.e., [2 −5; 2 −5](y, z)^T = (0, 0)^T.
The solution (the second eigenvector) is any nonzero multiple of x2.
Eigenvector for λ2 = 2: x2 = (5, 2)^T.

[Figure: the eigenvector directions x1 and x2 in the plane.]

The special solutions of equation (1) are u = e^{λt}x. Therefore the complete solution of equation (1) is
u(t) = c1 e^{λ1 t} x1 + c2 e^{λ2 t} x2,   (2)
where c1 and c2 are two free parameters. The initial condition of the system is u = u(0) at t = 0.
At t = 0, equation (2) becomes
c1 x1 + c2 x2 = u(0), i.e., [1 5; 1 2](c1, c2)^T = (8, 5)^T.
The constants are c1 = 3 and c2 = 1. Therefore the solution of the original equation is
u(t) = 3e^{−t}(1, 1)^T + e^{2t}(5, 2)^T,   (3)
i.e., v(t) = 3e^{−t} + 5e^{2t} and w(t) = 3e^{−t} + 2e^{2t}.
Hence at t = 0, v(0) = 8 and w(0) = 5.

LECTURE-28

Continue from the previous lecture...

Example 4: The eigenvalues of a diagonal matrix are the main diagonal elements of the matrix.
Solution:
Let A = [3 0; 0 2]; it has λ1 = 3 and λ2 = 2 (try yourself).
Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 3:
(A − 3I)x = 0, i.e., [0 0; 0 −1](y, z)^T = (0, 0)^T.
Eigenvector for λ1 = 3: x1 = (1, 0)^T.
Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 2:
(A − 2I)x = 0, i.e., [1 0; 0 0](y, z)^T = (0, 0)^T.
Eigenvector for λ2 = 2: x2 = (0, 1)^T.

[Figure: the eigenvectors x1 and x2 along the coordinate axes.]

Example 5. The eigenvalues of a projection matrix are 1 or 0.
Solution:
A = [1/2 1/2; 1/2 1/2] has λ1 = 1 and λ2 = 0 (check yourself).
Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 1:
(A − I)x = 0, i.e., [−1/2 1/2; 1/2 −1/2](y, z)^T = 0; R2 ← R2 + R1 ∼ [−1/2 1/2; 0 0](y, z)^T = 0.
Eigenvector for λ1 = 1: x1 = (1, 1)^T.
Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 0:
(A − 0·I)x = 0, i.e., [1/2 1/2; 1/2 1/2](y, z)^T = 0; R2 ← R2 − R1 ∼ [1/2 1/2; 0 0](y, z)^T = 0.
Eigenvector for λ2 = 0: x2 = (1, −1)^T.

[Figure: the orthogonal eigenvectors x1 = (1, 1) and x2 = (1, −1).]

Example 6. The eigenvalues are on the main diagonal when A is triangular.
Solution: Let A = [1 4 5; 0 3/4 6; 0 0 1/2].
The characteristic equation is det(A − λI) = 0:
|1 − λ 4 5; 0 3/4 − λ 6; 0 0 1/2 − λ| = 0
i.e., (1 − λ)(3/4 − λ)(1/2 − λ) = 0
i.e., λ = 1, λ = 3/4, or λ = 1/2.
These eigenvalues are the main diagonal entries of the given triangular matrix.

PROPERTIES OF EIGENVALUES

1. The sum of the eigenvalues of a matrix is equal to the trace of the matrix. (Trace means the sum of the principal-diagonal entries.)
e.g., let A = [a b; c d]. Since A is a 2 × 2 matrix, it has two eigenvalues, say λ1 and λ2. Then
λ1 + λ2 = trace of the matrix A = a + d.
2. The product of the eigenvalues of a matrix A is equal to the determinant of A:
λ1 · λ2 = |a b; c d| = ad − bc.

3. Any square matrix A and its transpose AT have the same eigenvalue.

4. Eigenvalue of the diagonal matrix or scalar matrix or triangular matrix

are the principal diagonal elements of the matrix.

5. The eigenvalues of a projection matrix are always 1 or 0. If the eigenvalue λ = 1 is repeated r times for an n × n projection matrix, then λ = 0 is repeated n − r times.

6. If λ is an eigenvalue of A, then eigenvalue of A ± kI is λ ± k.

7. Eigenvalues of a real symmetric matrix are always real.

8. If λ is an eigenvalue of an orthogonal matrix A, then 1/λ is also an eigenvalue of the same matrix:
Ax = λx (λ is an eigenvalue of A)
⇒ A^T Ax = λA^T x
⇒ Ix = λA^T x
⇒ A^T x = (1/λ)x
⇒ 1/λ is an eigenvalue of A^T.
We know that A and A^T have the same eigenvalues. Therefore both λ and 1/λ are eigenvalues of A as well as of A^T.
NOTE: If A is an orthogonal matrix, then λ and 1/λ are eigenvalues of A as well as of A^T.

9. If λ1, λ2, λ3, ....., λm are the eigenvalues of the matrix A, then
(a) the eigenvalues of A^{−1} are 1/λ1, 1/λ2, 1/λ3, ......, 1/λm;
(b) the eigenvalues of A^n are λ1^n, λ2^n, λ3^n, ....., λm^n;
(c) the eigenvalues of kA are kλ1, kλ2, kλ3, ....., kλm;
(d) the eigenvalues of A ± kI are λ1 ± k, λ2 ± k, λ3 ± k, ....., λm ± k.
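A minimal sketch checking properties 1, 2, and 9(d) (NumPy assumed; the matrix anticipates Problem Set 5.1 Q.1 below):

```python
import numpy as np

# Trace = sum of eigenvalues, determinant = product, and the shift rule A + kI.
A = np.array([[1.0, -1.0],
              [2.0,  4.0]])
lam = np.linalg.eigvals(A)
print(lam.sum(), np.trace(A))               # both 5.0
print(lam.prod(), np.linalg.det(A))         # both 6.0
print(np.linalg.eigvals(A + 2*np.eye(2)))   # eigenvalues shifted by k = 2
```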

PROPERTIES OF EIGENVECTOR

1. The eigenvectors of A and A^T are in general different.
2. The eigenvectors of A and A ± kI are always the same.
3. The eigenvectors of a symmetric matrix corresponding to distinct eigenvalues are orthogonal.
Let A be a symmetric matrix of order 3 with distinct eigenvalues λ1, λ2, λ3 and corresponding eigenvectors x1, x2, x3. Then
x1^T · x2 = 0, x2^T · x3 = 0, x3^T · x1 = 0,
i.e., x1, x2, x3 are orthogonal.

Problem Set 5.1

Q.1 Find the eigenvalues and eigenvectors of the matrix
A = [1 −1; 2 4].

Verify that the trace equals the sum of the eigenvalues, and the determinant equals

their product.

Solution: The characteristic equation is
det(A − λI) = 0, i.e., |1 − λ −1; 2 4 − λ| = 0
i.e., (1 − λ)(4 − λ) + 2 = 0
i.e., λ² − 5λ + 6 = 0
i.e., λ = 2 or λ = 3.

Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 2:
(A − 2I)x = 0, i.e., [−1 −1; 2 2](y, z)^T = 0; R2 ← R2 + 2R1 ∼ [−1 −1; 0 0](y, z)^T = 0.
Eigenvector for λ1 = 2: x1 = (−1, 1)^T.

Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 3:
(A − 3I)x = 0, i.e., [−2 −1; 2 1](y, z)^T = 0; R2 ← R2 + R1 ∼ [−2 −1; 0 0](y, z)^T = 0.
Eigenvector for λ2 = 3: x2 = (1, −2)^T.
[Figure: the eigenvector directions x1 and x2.]

Second part
Trace of A = sum of the diagonal of A = 1 + 4 = 5, and sum of the eigenvalues = λ1 + λ2 = 2 + 3 = 5.
Therefore trace of A = λ1 + λ2.
Again, product of the eigenvalues = λ1 × λ2 = 2 · 3 = 6, and determinant of A = |1 −1; 2 4| = 6.
Therefore determinant of A = product of the eigenvalues.

Q.2. With the same matrix, solve the differential equation du/dt = Au, u(0) = (0, 6)^T. What are the two pure exponential solutions?
Solution: The given coefficient matrix is A = [1 −1; 2 4].
The pure exponential solutions of du/dt = Au are u = e^{λt}x; here the two special solutions are
u(t) = e^{λ1 t} x1 = e^{2t}(−1, 1)^T and u(t) = e^{λ2 t} x2 = e^{3t}(1, −2)^T.
 
1 2

These two special solution gives the complete solution. Therefore the complete

du
solution dt
= Au is

u(t) = c1 eλ1 t x1 + c2 eλ2 t x2 .


   
−1 1
i.e. u(t) = c1 e2t · 
 
 + c2 e3t ·   .
 
1 −2

At initial condition t = 0, u(t) = u(0)

c1 x 1 + c2 x 2 = 0

     
−1 1 c1  0
i.e.  · =  R2 ← R2 + R1
     
1 2 c2 6

     
−1 1 c1  0
i.e.  · = 
     
0 3 c2 6

The constant are c1 = 2 and c2 = 3 Therefor the solution to the original

equation is
   
−1 1
u(t) = 2e2t x1 · 
 
 + e3t  
 
1 2

LECTURE-29

continue from the previous lecture...

Q.6. Show that the determinant equals the product of the eigenvalues, by imagining that the characteristic polynomial is factored into
det(A − λI) = (λ1 − λ)(λ2 − λ)....(λn − λ),
and making a clever choice of λ.
Solution:
If λ = 0, then det(A − 0) = (λ1 − 0)(λ2 − 0)....(λn − 0), i.e., det A = λ1 λ2 λ3 ....λn.

Q.7. Find the eigenvalues and eigenvectors of
A = [3 4 2; 0 1 2; 0 0 0] and B = [0 0 2; 0 2 2; 2 0 0].
Check that λ1 + λ2 + λ3 equals the trace and λ1 λ2 λ3 equals the determinant.

Solution:
Since A is a triangular matrix, the eigenvalues of A are λ1 = 3, λ2 = 1 and λ3 = 0.
Sum of the eigenvalues: λ1 + λ2 + λ3 = 3 + 1 + 0 = 4. Trace of the matrix A = 3 + 1 + 0 = 4.
Therefore the sum of the eigenvalues is equal to the trace of the matrix A.

Product of the eigenvalues: λ1 λ2 λ3 = 3 · 1 · 0 = 0, and the determinant of A is
|3 4 2; 0 1 2; 0 0 0| = 0.
Therefore the product of the eigenvalues is equal to the determinant of the matrix A.

Second part
Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 3:
(A − 3I)x = 0, i.e., [0 4 2; 0 −2 2; 0 0 −3](x, y, z)^T = (0, 0, 0)^T.
R2 ← R2 + (1/2)R1 ∼ [0 4 2; 0 0 3; 0 0 −3]; R3 ← R3 + R2 ∼ [0 4 2; 0 0 3; 0 0 0].
Here x is a free variable, since the first column has no pivot; 4y + 2z = 0 and 3z = 0 give y = z = 0.
Eigenvector for λ1 = 3: x1 = (1, 0, 0)^T.

Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 1:
(A − I)x = 0, i.e., [2 4 2; 0 0 2; 0 0 −1](x, y, z)^T = (0, 0, 0)^T; R3 ← R3 + (1/2)R2 ∼ [2 4 2; 0 0 2; 0 0 0].
Here y is a free variable, since the second column has no pivot. Then z = 0 and x = −2y.
Eigenvector for λ2 = 1: x2 = (−2, 1, 0)^T.

Let x3 be the eigenvector corresponding to the eigenvalue λ3 = 0:
(A − 0·I)x = 0, i.e., [3 4 2; 0 1 2; 0 0 0](x, y, z)^T = (0, 0, 0)^T.
Here z is a free variable, since the third column has no pivot. Then y = −2z and x = 2z.
Eigenvector for λ3 = 0: x3 = (2, −2, 1)^T.

[Figure: the three eigenvectors x1, x2, x3 sketched in R^3.]

Q.15. Find the eigenvalues and eigenvectors of
A = [3 4; 4 −3] and B = [a b; b a].
Solution:
We know that the sum of the eigenvalues of a matrix equals its trace. Since A is a 2 × 2 matrix, it has two eigenvalues, λ1 and λ2:
λ1 + λ2 = 3 − 3 = 0    (1)
λ1 × λ2 = |3 4; 4 −3| = −25    (2)

Solving Eqs. (1) and (2): λ1 = 5 or λ2 = −5.
Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 5:
(A − 5I)x = 0, i.e., [−2 4; 4 −8](y, z)^T = 0; R2 ← R2 + 2R1 ∼ [−2 4; 0 0](y, z)^T = 0.
Eigenvector for λ1 = 5: x1 = (−2, −1)^T.
Let x2 be the eigenvector corresponding to the eigenvalue λ2 = −5:
(A + 5I)x = 0, i.e., [8 4; 4 2](y, z)^T = 0; R2 ← R2 − (1/2)R1 ∼ [8 4; 0 0](y, z)^T = 0.
Eigenvector for λ2 = −5: x2 = (1, −2)^T.

[Figure: the eigenvector directions x1 and x2.]

Q.17. The powers A^k of this matrix A approach a limit as k → ∞:
A = [.8 .3; .2 .7], A² = [.70 .45; .30 .55], and A^∞ = [.6 .6; .4 .4].
The matrix A² is halfway between A and A^∞. Explain why A² = (1/2)(A + A^∞) from the eigenvalues and eigenvectors of these three matrices.

Solution: Given

     [ .8   .3 ]
A =  [ .2   .7 ]

The characteristic equation for the matrix A is

                | .8 − λ    .3    |
det(A − λI) =   |  .2      .7 − λ | = (.8 − λ)(.7 − λ) − (.3)(.2) = 0

⇒ λ² − 1.5λ + 0.5 = 0

⇒ λ1 = 1 or λ2 = 0.5

Therefore the eigenvalues of A² are µ1 = 1² = 1 and µ2 = 0.5² = 0.25.

Similarly, the eigenvalues of A∞ are ν1 = 1 and ν2 = 0.

Again, for i = 1, 2,

µi = ½ (λi + νi)          (since 1 = ½(1 + 1) and 0.25 = ½(0.5 + 0))

Since A, A² and A∞ share the same eigenvectors xi,

µi xi = ½ (λi + νi) xi

A² xi = ½ (A + A∞) xi ,    i = 1, 2

A² (x1 + x2) = ½ (A + A∞)(x1 + x2)

and since x1, x2 form a basis of eigenvectors, this holds for every vector, so

A² = ½ (A + A∞)
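A short numerical confirmation (a numpy sketch; A and A∞ are entered from the problem statement):

    import numpy as np

    A = np.array([[0.8, 0.3],
                  [0.2, 0.7]])
    A_inf = np.array([[0.6, 0.6],
                      [0.4, 0.4]])

    # A squared is exactly halfway between A and its limit.
    print(np.allclose(A @ A, 0.5 * (A + A_inf)))  # True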

Q.3. If you shift to A − 7I, what are the eigenvalues and eigenvectors, and how are
they related to those of A?

Q.9. The eigenvalues of A equal the eigenvalues of Aᵀ. This is because det(A − λI)
equals det(Aᵀ − λI). That is true because ............. Show by an example that the
eigenvectors of A and Aᵀ are not the same.

Q.39. When a + b = c + d, show that (1, 1) is an eigenvector and find both
eigenvalues of

     [ a   b ]
A =  [ c   d ]

Solution: We show that (1, 1) is an eigenvector of the matrix A for some eigenvalue λ,
i.e. we find the eigenvalue for the given eigenvector.

           [ 1 ]
Let x =    [ 1 ]

       [ a   b ]   [ 1 ]   [ a + b ]   [ a + b ]            [ 1 ]
Ax =   [ c   d ] · [ 1 ] = [ c + d ] = [ a + b ] = (a + b) ·[ 1 ]    since a + b = c + d

i.e. Ax = λx, where λ = a + b.

∴ (1, 1) is an eigenvector for the eigenvalue λ1 = a + b. The second eigenvalue
follows from the trace: λ2 = (a + d) − (a + b) = d − b, which equals a − c since
a + b = c + d.
LECTURE-30

5.2 DIAGONALIZATION OF A MATRIX

Suppose the n by n matrix A has n linearly independent eigenvectors. If these

eigenvectors are the columns of a matrix S, then S −1 AS is a diagonal matrix Λ.

The eigenvalues of A are on the diagonal of Λ:

                               [ λ1              ]
                               [      λ2         ]
Diagonalization  S⁻¹AS = Λ =   [          ⋱      ]        (1)
                               [             λn  ]

We call S the “eigenvector matrix” and Λ the “eigenvalue matrix”—using a cap-
ital lambda because of the small lambdas for the eigenvalues on its diagonal.

The eigenvectors xi form the columns of S:

S = [ x1   x2   ...   xn ]

               multiply by S⁻¹ on the left
AS = SΛ  ──────────────────────────────────→  S⁻¹AS = Λ        (2)

Eigendecomposition:

                    multiply by S⁻¹ on the right
From AS = SΛ  ───────────────────────────────────→  A = SΛS⁻¹

S is invertible, because its columns (the eigenvectors) were assumed to be
independent.

Note that depending on which side you multiply by the inverse of S, you get
the equation for “diagonalization” or for “eigendecomposition”.

Underlying assumption behind the diagonalization and eigendecomposition

In order for the matrix A to be either diagonalized or eigendecomposed, it
has to meet the following criteria:

• A must be a square matrix.

• A must have n linearly independent eigenvectors (i.e. the eigenvector matrix S
  has to be invertible).

Remark 1: If the matrix A has no repeated eigenvalues—the numbers λ1 , λ2 , ..., λn

are distinct—then its n eigenvectors are automatically independent. Therefore any

matrix with distinct eigenvalues can be diagonalized.
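As an illustration, here is a minimal numpy sketch of the factorization S⁻¹AS = Λ; the matrix used is an arbitrary example with distinct eigenvalues, not one from the text:

    import numpy as np

    A = np.array([[3.0, 4.0],
                  [4.0, -3.0]])

    # The columns of S are the eigenvectors of A.
    eigenvalues, S = np.linalg.eig(A)

    # S^{-1} A S should equal the diagonal eigenvalue matrix Lambda.
    Lam = np.linalg.inv(S) @ A @ S
    print(np.allclose(Lam, np.diag(eigenvalues)))  # True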

Example 1 The eigenvalues of a diagonal matrix are its main diagonal elements.

Solution:

           [ 3   0 ]
Let A =    [ 0   2 ]   which has λ1 = 3 and λ2 = 2. (Try yourself.)

Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 3.

(A − 3I)x = 0

       [ 0    0 ]   [ y ]   [ 0 ]
i.e.   [ 0   −1 ] · [ z ] = [ 0 ]

The eigenvector for λ1 = 3 is

       [ 1 ]
x1 =   [ 0 ]

Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 2.

(A − 2I)x = 0

       [ 1    0 ]   [ y ]   [ 0 ]
i.e.   [ 0    0 ] · [ z ] = [ 0 ]

The eigenvector for λ2 = 2 is

       [ 0 ]
x2 =   [ 1 ]

[Figure: the eigenvectors x1 = (1, 0) and x2 = (0, 1) along the coordinate axes.]

From the above figure it is clear that the eigenvectors of matrix A are independent.
Therefore the eigenvector matrix is

     [ 1   0 ]                 [ 1   0 ]
S =  [ 0   1 ]    and   S⁻¹ =  [ 0   1 ]

Hence the matrix A is diagonalizable (it is already diagonal).

Example 2 The eigenvalues of a triangular matrix are also its main diagonal
elements—but the eigenvectors may fail to be independent.

Solution:

           [ 3   1 ]
Let B =    [ 0   3 ]   which has λ1 = 3 and λ2 = 3. (Try yourself.)

Let x1 be the eigenvector corresponding to the eigenvalue λ1 = 3.

(B − 3I)x = 0

       [ 0    1 ]   [ y ]   [ 0 ]
i.e.   [ 0    0 ] · [ z ] = [ 0 ]

The eigenvector for λ1 = 3 is

       [ 1 ]
x1 =   [ 0 ]

Let x2 be the eigenvector corresponding to the eigenvalue λ2 = 3. Any such
eigenvector is again a multiple of x1, for example

       [ 2 ]
x2 =   [ 0 ]

[Figure: both eigenvectors of B lie along the same line through x1 = (1, 0).]

From the above figure it is clear that the eigenvectors of matrix B are not
independent.

Therefore the eigenvector matrix is

     [ 1   2 ]
S =  [ 0   0 ]

Since the matrix S is singular, its inverse does not exist.

Hence the matrix B is not diagonalizable.

Remark 2: The diagonalizing matrix S is not unique. An eigenvector x can

be multiplied by a constant, and remains an eigenvector. We can multiply the

columns of S by any nonzero constants, and produce a new diagonalizing S. Re-

peated eigenvalues leave even more freedom in S. For the trivial example A = I,

any invertible S will do: S −1 IS is always diagonal (Λ is just I). All vectors are

eigenvectors of the identity.

Remark 3: Other matrices S will not produce a diagonal Λ. Suppose the first

column of S is y. Then the first column of SΛ is λ1 y. If this is to agree with

the first column of AS, which by matrix multiplication is Ay, then y must be an

eigenvector: Ay = λ1 y. The order of the eigenvectors in S and the eigenvalues in

Λ is automatically the same.

Remark 4: Not all matrices possess n linearly independent eigenvectors, so not

all matrices are diagonalizable.

Example:

     [ 0   1 ]
A =  [ 0   0 ]
Its eigenvalues are λ1 = λ2 = 0, since it is triangular with zeros on the diagonal:

                | −λ    1 |
det(A − λI) =   |  0   −λ | = λ².

All eigenvectors of this A are multiples of the vector (1, 0):

[ 0   1 ]       [ 0 ]                 [ c ]
[ 0   0 ] x  =  [ 0 ] x ,   or  x =   [ 0 ] .

λ = 0 is a double eigenvalue—its algebraic multiplicity is 2. But the geometric
multiplicity is 1—there is only one independent eigenvector. We can’t construct
S.

Note: The algebraic multiplicity is the number of times an eigenvalue of the
matrix A is repeated, and the geometric multiplicity is the number of
independent eigenvectors for it. That failure of diagonalization was not a
result of λ = 0. It came from λ1 = λ2.

   
3 1 2 −1
Repeated eigenvalues A=

 and B = 
 


0 3 1 0

Their eigenvalues are 3, 3 and 1, 1. They are not singular! The problem is the

shortage of eigenvectors—which are needed for S.

Diagonalizability of A depends on enough eigenvectors .

6
Invertibility of A depends on nonzero eigenvalues.

Diagonalization can fail only if there are repeated eigenvalues. Even then, it does

not always fail. A = I has repeated eigenvalues 1,1,...,1 but it is already diagonal!

There is no shortage of eigenvectors in that case.

The test is to check, for an eigenvalue that is repeated p times, whether there are

p independent eigenvectors—in other words, whether A − λI has rank n − p. To

complete that circle of ideas, we have to show that distinct eigenvalues present no

problem.

Q.8. Which of these matrices cannot be diagonalized?

      [ 2   −2 ]          [ 2    0 ]              [ 2   0 ]
A1 =  [ 2   −2 ] ,  A2 =  [ 2   −2 ] ,  and A3 =  [ 2   2 ]

Solution:

The eigenvalues of the matrix A1 are λ1 = 0 and λ2 = 0, a repeated eigenvalue
with only one independent eigenvector (A1 has rank 1). Hence A1 cannot be
diagonalized.

The eigenvalues of the matrix A2 are λ1 = 2 and λ2 = −2, distinct eigenvalues.
Hence A2 can be diagonalized.

The eigenvalues of the matrix A3 are λ1 = 2 and λ2 = 2, a repeated eigenvalue
with only one independent eigenvector. Hence A3 cannot be diagonalized.
Lecture-31

Recall from the last lecture: diagonalization can fail only when eigenvalues are
repeated. To complete that circle of ideas, we now show that distinct eigenvalues
present no problem.

Theorem. If eigenvectors x1, ..., xk correspond to different eigenvalues
λ1, ..., λk, then those eigenvectors are linearly independent.

Suppose first that k = 2, and that some combination of x1 and x2 produces
zero: c1x1 + c2x2 = 0. Multiplying by A, we find c1λ1x1 + c2λ2x2 = 0. Subtracting
λ2 times the previous equation, the vector x2 disappears: c1(λ1 − λ2)x1 = 0.
Since λ1 ≠ λ2 and x1 ≠ 0, we are forced into c1 = 0. Similarly c2 = 0, and the
two vectors are independent; only the trivial combination gives zero.

This same argument extends to any number of eigenvectors: If some combi-
nation produces zero, multiply by A, subtract λk times the original combination,
and xk disappears— leaving a combination of x1 , ..., xk−1 , which produces zero.
By repeating the same steps (this is really mathematical induction) we end up
with a multiple of x1 that produces zero. This forces c1 = 0, and ultimately
every ci = 0. Therefore eigenvectors that come from distinct eigenvalues are
automatically independent.

A matrix with n distinct eigenvalues can be diagonalized.

Example. If a 3 by 3 upper triangular matrix has diagonal entries 1, 2, 7, how
do you know it can be diagonalized? What is Λ?

             [ 1   ∗   ∗ ]                        | 1−λ    •     •  |
Answer: A =  [ 0   2   ∗ ] .  Then det(A − λI) =  |  0    2−λ    •  |
             [ 0   0   7 ]                        |  0     0    7−λ |

= (1 − λ)(2 − λ)(7 − λ).

det(A − λI) = 0 ⇒ λ = 1, 2, 7

Since all the eigenvalues are distinct, the matrix A is diagonalizable, and the
diagonal matrix is

     [ 1   0   0 ]
Λ =  [ 0   2   0 ] .
     [ 0   0   7 ]

Powers and Products: Aᵏ and AB

There is one more situation in which the calculations are easy. The eigenvalues
of A² are exactly λ1², ..., λn², and every eigenvector of A is also an eigenvector
of A². We start from Ax = λx, and multiply again by A:

A²x = Aλx = λAx = λ²x.

Thus λ² is an eigenvalue of A², with the same eigenvector x. If the first mul-
tiplication by A leaves the direction of x unchanged, then so does the second.
The same result comes from diagonalization, by squaring S⁻¹AS = Λ:

Eigenvalues of A²:   (S⁻¹AS)(S⁻¹AS) = Λ²   or   S⁻¹A²S = Λ².

The matrix A² is diagonalized by the same S, so the eigenvectors are unchanged.
The eigenvalues are squared. This continues to hold for any power of A:

The eigenvalues of Aᵏ are λ1ᵏ, λ2ᵏ, ..., λnᵏ, and each eigenvector of A is
still an eigenvector of Aᵏ.

When S diagonalizes A, it also diagonalizes Aᵏ:

Λᵏ = (S⁻¹AS)(S⁻¹AS)···(S⁻¹AS) = S⁻¹AᵏS.

Each S⁻¹ cancels an S, except for the first S⁻¹ and the last S.

If A is invertible this rule also applies to its inverse (the power k = −1). The
eigenvalues of A⁻¹ are 1/λi. That can be seen even without diagonalizing:
if Ax = λx then x = λA⁻¹x and (1/λ)x = A⁻¹x.
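Both facts are quick to check numerically (a numpy sketch; the matrix is an arbitrary invertible example):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
    lam = np.linalg.eigvals(A)

    # Eigenvalues of A^2 are the squares; eigenvalues of A^{-1} are the reciprocals.
    print(np.allclose(np.sort(np.linalg.eigvals(A @ A)), np.sort(lam**2)))            # True
    print(np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1/lam)))  # True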
Theorem. Diagonalizable matrices share the same eigenvector matrix S if and
only if AB = BA.

Proof. If the same S diagonalizes both A = SΛ1S⁻¹ and B = SΛ2S⁻¹, we can
multiply in either order:

AB = SΛ1S⁻¹SΛ2S⁻¹ = SΛ1Λ2S⁻¹   and   BA = SΛ2S⁻¹SΛ1S⁻¹ = SΛ2Λ1S⁻¹.

Since Λ1Λ2 = Λ2Λ1 (diagonal matrices always commute) we have AB = BA.

In the opposite direction, suppose AB = BA. Starting from Ax = λx, we have

ABx = BAx = Bλx = λBx.

Thus x and Bx are both eigenvectors of A, sharing the same λ (or else Bx = 0).
If we assume for convenience that the eigenvalues of A are distinct—the
eigenspaces are all one-dimensional—then Bx must be a multiple of x. In other
words, x is an eigenvector of B as well as of A. The proof with repeated
eigenvalues is a little longer.

Problem Set 5.2

Q.3 Factor the following matrices into SΛS⁻¹:

     [ 1   1 ]            [ 2   1 ]
A =  [ 1   1 ]   and  B = [ 0   0 ] .

Answer: |A − λI| = 0

| 1−λ    1  |
|  1    1−λ | = 0  ⇒  λ(λ − 2) = 0  ⇒  λ = 2, 0.

For λ = 2, solve (A − λI)x = 0:

[ −1    1 ]   [ x1 ]   [ 0 ]
[  1   −1 ] · [ x2 ] = [ 0 ]

−x1 + x2 = 0, so the eigenvector corresponding to λ = 2 is x = (1, 1)ᵀ.

For λ = 0, solve (A − λI)x = 0:

[ 1   1 ]   [ x1 ]   [ 0 ]
[ 1   1 ] · [ x2 ] = [ 0 ]

x1 + x2 = 0, so the eigenvector corresponding to λ = 0 is x = (1, −1)ᵀ.

     [ 1    1 ]            [ 2   0 ]
S =  [ 1   −1 ]   and Λ =  [ 0   0 ]

                  [ 1    1 ]   [ 2   0 ]   [ 1    1 ]⁻¹
So A = SΛS⁻¹  =   [ 1   −1 ] · [ 0   0 ] · [ 1   −1 ]

Similarly, |B − λI| = 0

| 2−λ    1 |
|  0    −λ | = 0  ⇒  λ(λ − 2) = 0  ⇒  λ = 2, 0.

For λ = 2, solve (B − λI)x = 0:

[ 0    1 ]   [ x1 ]   [ 0 ]
[ 0   −2 ] · [ x2 ] = [ 0 ]

x2 = 0, so the eigenvector corresponding to λ = 2 is x = (1, 0)ᵀ.

For λ = 0, solve (B − λI)x = 0:

[ 2   1 ]   [ x1 ]   [ 0 ]
[ 0   0 ] · [ x2 ] = [ 0 ]

2x1 + x2 = 0, so the eigenvector corresponding to λ = 0 is x = (1, −2)ᵀ.

     [ 1    1 ]            [ 2   0 ]
S =  [ 0   −2 ]   and Λ =  [ 0   0 ]

                  [ 1    1 ]   [ 2   0 ]   [ 1    1 ]⁻¹
So B = SΛS⁻¹  =   [ 0   −2 ] · [ 0   0 ] · [ 0   −2 ]
Lecture-32

 
           [ 4   3 ]
Q.4 If A = [ 1   2 ] , find A¹⁰⁰ by diagonalizing A.

Answer:

λ1 + λ2 = 4 + 2 = 6

λ1 λ2 = det A = 5  ⇒  λ = 5, 1.

For λ = 5, solve (A − λI)x = 0:

[ −1    3 ]   [ x1 ]   [ 0 ]
[  1   −3 ] · [ x2 ] = [ 0 ]

−x1 + 3x2 = 0, so the eigenvector corresponding to λ = 5 is x = (3, 1)ᵀ.

For λ = 1, solve (A − λI)x = 0:

[ 3   3 ]   [ x1 ]   [ 0 ]
[ 1   1 ] · [ x2 ] = [ 0 ]

x1 + x2 = 0, so the eigenvector corresponding to λ = 1 is x = (1, −1)ᵀ.

     [ 3    1 ]            [ 5   0 ]
S =  [ 1   −1 ]   and Λ =  [ 0   1 ]

                  [ 3    1 ]   [ 5   0 ]   [ 3    1 ]⁻¹
So A = SΛS⁻¹  =   [ 1   −1 ] · [ 0   1 ] · [ 1   −1 ]

                     [ 3    1 ]   [ 5¹⁰⁰    0   ]   [ 3    1 ]⁻¹
A¹⁰⁰ = SΛ¹⁰⁰S⁻¹  =   [ 1   −1 ] · [  0    1¹⁰⁰  ] · [ 1   −1 ]

             [ 3·5¹⁰⁰ + 1    3·5¹⁰⁰ − 3 ]
  = (1/4) ·  [ 5¹⁰⁰ − 1      5¹⁰⁰ + 3   ] .
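As a sanity check, a small numpy sketch compares this closed form against a directly computed power (k = 20 is used instead of 100 only to stay within floating-point range):

    import numpy as np

    A = np.array([[4.0, 3.0],
                  [1.0, 2.0]])
    k = 20

    # Closed form from A^k = S Lambda^k S^{-1}.
    closed = 0.25 * np.array([[3 * 5.0**k + 1, 3 * 5.0**k - 3],
                              [5.0**k - 1,     5.0**k + 3]])

    print(np.allclose(np.linalg.matrix_power(A, k), closed))  # True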

Q.6 Find all the eigenvalues and eigenvectors of

     [ 1   1   1   1 ]
     [ 1   1   1   1 ]
A =  [ 1   1   1   1 ]
     [ 1   1   1   1 ]

and write two diagonalizing matrices S.

Answer:
We first find the eigenvalues by solving the characteristic equation:

                  | 1−λ    1     1     1  |
                  |  1    1−λ    1     1  |
0 = det(A − λI) = |  1     1    1−λ    1  |
                  |  1     1     1    1−λ |

Adding all rows to the first (R1 ← R1 + R2 + R3 + R4) puts 4 − λ in every
entry of row 1, so this equals

              | 1     1     1     1  |
              | 1    1−λ    1     1  |
  (4 − λ) ·   | 1     1    1−λ    1  |
              | 1     1     1    1−λ |

Subtracting row 1 from the others (R2 ← R2 − R1, R3 ← R3 − R1, R4 ← R4 − R1):

              | 1     1     1     1  |
              | 0    −λ     0     0  |
= (4 − λ) ·   | 0     0    −λ     0  |  = λ³(λ − 4)
              | 0     0     0    −λ  |

⇒ λ1 = λ2 = λ3 = 0 and λ4 = 4.
We now find the eigenvectors corresponding to λ = 0. Row reduction of the
augmented system (A − 0·I | 0):

[ 1  1  1  1 | 0 ]        [ 1  1  1  1 | 0 ]
[ 1  1  1  1 | 0 ]   =⇒   [ 0  0  0  0 | 0 ]   =⇒   a + b + c + d = 0
[ 1  1  1  1 | 0 ]        [ 0  0  0  0 | 0 ]
[ 1  1  1  1 | 0 ]        [ 0  0  0  0 | 0 ]

Three independent solutions of a + b + c + d = 0 are

[ −1 ]   [  0 ]        [  0 ]
[  0 ]   [ −1 ]        [  0 ]
[  0 ] , [  0 ]  and   [ −1 ]
[  1 ]   [  1 ]        [  1 ]

so these are eigenvectors for the eigenvalue 0.

By normalizing them, we obtain

(1/√2)(−1, 0, 0, 1)ᵀ ,   (1/√2)(0, −1, 0, 1)ᵀ   and   (1/√2)(0, 0, −1, 1)ᵀ .

We finally find the eigenvector corresponding to λ = 4. Notice that

[ 1  1  1  1 ]   [ 1 ]        [ 1 ]
[ 1  1  1  1 ]   [ 1 ]        [ 1 ]
[ 1  1  1  1 ] · [ 1 ]  = 4 · [ 1 ]
[ 1  1  1  1 ]   [ 1 ]        [ 1 ]

so λ4 = 4 and the corresponding eigenvector is (1, 1, 1, 1)ᵀ. By normalizing it
we get (1/√4)(1, 1, 1, 1)ᵀ = (1/2)(1, 1, 1, 1)ᵀ.
The first diagonalizing matrix is

      [ −1    0    0   1 ]
      [  0   −1    0   1 ]
S1 =  [  0    0   −1   1 ]
      [  1    1    1   1 ]

and the second (with normalized columns) is

      [ −1/√2     0       0      1/2 ]
      [   0    −1/√2      0      1/2 ]
S2 =  [   0       0    −1/√2     1/2 ]
      [  1/√2   1/√2    1/√2     1/2 ]

ASSIGNMENT PROBLEMS

Q.8 Which of these matrices cannot be diagonalized?

      [ 2   −2 ]        [ 2    0 ]        [ 2   0 ]
A1 =  [ 2   −2 ]   A2 = [ 2   −2 ]   A3 = [ 2   2 ]

Q.12 If the eigenvalues of A are 1, 1, 2, which of the following are
certain to be true? Give a reason if true or a counterexample if false:

(a) A is invertible.

(b) A is diagonalizable.

(c) A is not diagonalizable.

Q.16 Factor these two matrices into A = SΛS⁻¹:

     [ 1   2 ]            [ 1   2 ]
A =  [ 0   3 ]   and  B = [ 2   2 ] .

Q.17 True or false: If the n columns of S (eigenvectors of A) are
independent, then

(a) A is invertible.

(b) A is diagonalizable.

(c) S is invertible.

(d) S is diagonalizable.

Q.32 Diagonalize B and compute SΛᵏS⁻¹ to prove this formula for Bᵏ:

     [ 3   1 ]                [ 3ᵏ   3ᵏ − 2ᵏ ]
B =  [ 0   2 ]   has   Bᵏ =   [ 0       2ᵏ   ] .
Lecture-33

5.5 Complex Matrices

Lengths and Transposes in the Complex Case

By definition, the complex vector space Cⁿ contains all vectors x with n complex
components:

                     [ x1 ]
                     [ x2 ]
Complex vector  x =  [ ⋮  ]   with components xj = aj + ibj.
                     [ xn ]

Vectors x and y are still added component by component. Scalar multiplication
cx is now done with complex numbers c. The vectors v1, ..., vk are linearly
dependent if some nontrivial combination gives c1 v1 + ... + ck vk = 0; the cj
may now be complex. The unit coordinate vectors are still in C n ; they are still
independent; and they still form a basis. Therefore C n is a complex vector space
of dimension n.

Length squared kxk2 = |x1 |2 + |x2 |2 + ... + |xn |2

   
Example  x = (1, i)ᵀ with ‖x‖² = 2;   y = (2 + i, 2 − 4i)ᵀ with ‖y‖² = 25.

For complex vectors, ‖x‖² = x̄ᵀx. (For real vectors, ‖x‖² = xᵀx.)

Inner product x̄T y = x¯1 y1 + x¯2 y2 + ... + x¯n yn .

If we take the inner product of x with itself, we are back to kxk2 .

Note that ȳ T x is different from x̄T y; we have to watch the order of the vectors.
For vectors and matrices, a superscript H (or a star) combines both operations -
conjugate and transpose. This matrix ĀT = AH = A∗ is called “A Hermitian”:

Conjugate transpose:

[ 2+i   3i ]ᴴ
[ 4−i    5 ]    =   [ 2−i   4+i   0 ]
[  0     0 ]        [ −3i    5    0 ] .

Note:

1. The inner product of x and y is xᴴy. Orthogonal vectors have xᴴy = 0.

2. The squared length of x is ‖x‖² = xᴴx = |x1|² + |x2|² + ... + |xn|².

3. Conjugating (AB)ᵀ = BᵀAᵀ produces (AB)ᴴ = BᴴAᴴ.

Hermitian Matrices: A matrix A is Hermitian if A̅ᵀ = A.

              [   2      3 − 3i ]
Example  A =  [ 3 + 3i     5    ]  = Aᴴ.

• The diagonal entries of a Hermitian matrix must be real.

• A real symmetric matrix is certainly Hermitian.

• For real matrices there is no difference between AT and AH .

Properties of Hermitian matrices

Property 1. If A = Aᴴ, then for all complex vectors x, the number xᴴAx is real.

Proof. (xᴴAx)ᴴ is the conjugate of the 1 by 1 matrix xᴴAx, but we actually get
the same number back again: (xᴴAx)ᴴ = xᴴAᴴ(xᴴ)ᴴ = xᴴAx. So the number must
be real.

Property 2. If A = Aᴴ, every eigenvalue is real.

Proof. Suppose Ax = λx. The trick is to multiply by xᴴ: xᴴAx = λxᴴx. The
left-hand side is real by Property 1, and the right-hand side xᴴx = ‖x‖² is real
and positive, because x ≠ 0. Therefore λ = xᴴAx / xᴴx must be real.

Property 3. Two eigenvectors of a real symmetric matrix or a Hermitian
matrix, if they come from different eigenvalues, are orthogonal to one another.

Proof. The proof starts with Ax = λ1x, Ay = λ2y, and A = Aᴴ:

(λ1x)ᴴy = (Ax)ᴴy = xᴴAy = xᴴ(λ2y).

The outside numbers are λ1xᴴy = λ2xᴴy, since the λ’s are real. Now we use
the assumption λ1 ≠ λ2, which forces the conclusion that xᴴy = 0.
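All three properties are easy to observe numerically; a minimal numpy sketch using the Hermitian example above:

    import numpy as np

    A = np.array([[2, 3 - 3j],
                  [3 + 3j, 5]])

    # A equals its conjugate transpose, so it is Hermitian.
    print(np.allclose(A, A.conj().T))  # True

    # Property 2: eigh (for Hermitian matrices) returns real eigenvalues.
    eigenvalues, X = np.linalg.eigh(A)
    print(eigenvalues)

    # Property 3: eigenvectors from different eigenvalues are orthogonal.
    x, y = X[:, 0], X[:, 1]
    print(np.isclose(x.conj() @ y, 0))  # True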

Lecture-34

Spectral Theorem: A real symmetric matrix can be factored into
A = QΛQᵀ. Its orthonormal eigenvectors are in the orthogonal matrix
Q and its eigenvalues are in Λ.

                                [ λ1         ]   [ –x1ᵀ– ]
A = QΛQᵀ = [ x1  ...  xn ] ·    [     ⋱      ] · [   ⋮   ]
                                [         λn ]   [ –xnᵀ– ]

         = λ1 x1x1ᵀ + λ2 x2x2ᵀ + ... + λn xnxnᵀ
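A numpy sketch of this rank-one expansion (the symmetric matrix is an arbitrary example):

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])

    lam, Q = np.linalg.eigh(A)  # orthonormal eigenvectors in the columns of Q

    # Rebuild A as the sum of rank-one pieces lambda_i x_i x_i^T.
    rebuilt = sum(lam[i] * np.outer(Q[:, i], Q[:, i]) for i in range(len(lam)))
    print(np.allclose(A, rebuilt))  # True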

Unitary Matrices: A complex matrix with orthonormal columns is called a
unitary matrix. We denote it by U.

Unitary Matrix:  UᴴU = UUᴴ = I, and Uᴴ = U⁻¹.

Example. Show that the following matrix is unitary:

           [ 1+i   1−i ]
A = (1/2)· [ 1−i   1+i ]

Solution: Since

            [ 1+i   1−i ]   [ 1−i   1+i ]           [ 4   0 ]
AAᴴ = (1/4)·[ 1−i   1+i ] · [ 1+i   1−i ]  = (1/4)· [ 0   4 ] = I,

we conclude that Aᴴ = A⁻¹. Therefore, A is a unitary matrix.
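A quick numpy confirmation of this example, and of Property 2 below (every eigenvalue has absolute value 1):

    import numpy as np

    U = 0.5 * np.array([[1 + 1j, 1 - 1j],
                        [1 - 1j, 1 + 1j]])

    # U^H U = I, so U is unitary.
    print(np.allclose(U.conj().T @ U, np.eye(2)))  # True

    # Every eigenvalue of a unitary matrix has |lambda| = 1.
    print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))  # True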

Property 1. (Ux)ᴴ(Uy) = xᴴUᴴUy = xᴴy, and lengths are preserved by U:

Length unchanged:  ‖Ux‖² = xᴴUᴴUx = ‖x‖².

Property 2. Every eigenvalue of U has absolute value |λ| = 1.

This follows directly from U x = λx, by comparing the lengths of the two sides:
kU xk = kxk by Property-1, and always kλxk = |λ|kxk. Therefore |λ| = 1.

Property 3. Eigenvectors corresponding to different eigenvalues are orthogonal.

Start with Ux = λ1x and Uy = λ2y, and take inner products by Property 1:

xᴴy = (Ux)ᴴ(Uy) = (λ1x)ᴴ(λ2y) = λ̄1λ2 xᴴy.

Comparing the left side to the right, either λ̄1λ2 = 1 or xᴴy = 0.
But Property 2 gives λ̄1λ1 = 1, and λ1 ≠ λ2, so we cannot also have λ̄1λ2 = 1.
Thus xᴴy = 0 and the eigenvectors are orthogonal.

Skew-Hermitian Matrices: A matrix K is skew-Hermitian if Kᴴ = −K.

     [   2i      3 + 3i ]
K =  [ −3 + 3i     5i   ] .

If A is Hermitian then K = iA is skew-Hermitian.

Theorem: A Hermitian matrix can be factored into A = UΛUᴴ. Its orthonor-
mal eigenvectors are in the unitary matrix U and its eigenvalues are in Λ.

Q.15 Show that if U and V are unitary, so is UV. Use the criterion UᴴU = I.

Answer: When U and V are unitary,

(UV)(UV)ᴴ = (UV)(VᴴUᴴ) = U I Uᴴ = UUᴴ = I;

and likewise

(UV)ᴴ(UV) = (VᴴUᴴ)(UV) = Vᴴ I V = VᴴV = I.
So indeed (U V )H = (U V )−1 , and thus U V is unitary.

Q.2 Write out the matrix Aᴴ and compute C = AᴴA if

     [ 1   i   0 ]
A =  [ i   0   1 ] .

What is the relation between C and Cᴴ? Does it hold whenever C is constructed
from some AᴴA?

Answer:

          [ 1   −i ]                 [ 1−i²    i    −i ]   [  2   i   −i ]
C = AᴴA = [ −i   0 ] · [ 1  i  0 ] = [ −i     −i²    0 ] = [ −i   1    0 ]
          [ 0    1 ]   [ i  0  1 ]   [  i      0     1 ]   [  i   0    1 ]

      [  2   i   −i ]
Cᴴ =  [ −i   1    0 ]  = C
      [  i   0    1 ]

This is always true, since

Cᴴ = (AᴴA)ᴴ = Aᴴ(Aᴴ)ᴴ = AᴴA = C.

Lecture-35

Q.33 Diagonalize A and K to reach UΛUᴴ:

     [  0     1−i ]           [  0     −1+i ]
A =  [ 1+i     1  ]      K =  [ 1+i      i  ]

Answer: Diagonalization of matrix A:

|A − λI| = 0

  | 0−λ   1−i |
⇒ | 1+i   1−λ | = 0  ⇒  −λ(1−λ) − (1−i)(1+i) = 0  ⇒  λ² − λ − 2 = 0  ⇒  λ = 2, −1

For λ = 2, solve Ax = λx, i.e. (A − λI)x = 0 for a nontrivial x:

[ −2    1−i ]   [ x1 ]   [ 0 ]
[ 1+i   −1  ] · [ x2 ] = [ 0 ]

−2x1 + (1−i)x2 = 0, so (1+i)x1 = x2 and x = (1, 1+i)ᵀ.
Normalized vector: u = (1/√3)(1, 1+i)ᵀ.

Similarly, for λ = −1, solve (A − λI)y = 0:

[  1    1−i ]   [ y1 ]   [ 0 ]
[ 1+i    2  ] · [ y2 ] = [ 0 ]

y1 + (1−i)y2 = 0, so y1 = (−1+i)y2 and y = (−1+i, 1)ᵀ.
Normalized vector: v = (1/√3)(−1+i, 1)ᵀ.

               [ 1     −1+i ]                    [  1      1−i ]
U = (1/√3) ·   [ 1+i     1  ] ,   Uᴴ = (1/√3) ·  [ −1−i     1  ]

     [ 2    0 ]
Λ =  [ 0   −1 ]

                    [ 1     −1+i ]   [ 2    0 ]           [  1      1−i ]
Hence A = UΛUᴴ  =   [ 1+i     1  ] · [ 0   −1 ] · (1/3) · [ −1−i     1  ]

Diagonalization of matrix K:

|K − λI| = 0

  | 0−λ   −1+i |
⇒ | 1+i    i−λ | = 0  ⇒  −λ(i−λ) − (1+i)(−1+i) = 0  ⇒  λ² − iλ + 2 = 0  ⇒  λ = 2i, −i

For λ = 2i, solve Kx = λx, i.e. (K − λI)x = 0 for a nontrivial x:

[ −2i    −1+i ]   [ x1 ]   [ 0 ]
[ 1+i     −i  ] · [ x2 ] = [ 0 ]

−2i·x1 + (−1+i)x2 = 0, so (1−i)x1 = x2 and x = (1, 1−i)ᵀ.
Normalized vector: u = (1/√3)(1, 1−i)ᵀ.

Similarly, for λ = −i, solve (K − λI)y = 0:

[  i    −1+i ]   [ y1 ]   [ 0 ]
[ 1+i    2i  ] · [ y2 ] = [ 0 ]

i·y1 + (−1+i)y2 = 0, so y1 = (−1−i)y2 and y = (−1−i, 1)ᵀ.
Normalized vector: v = (1/√3)(−1−i, 1)ᵀ.

               [ 1     −1−i ]                    [  1      1+i ]
U = (1/√3) ·   [ 1−i     1  ] ,   Uᴴ = (1/√3) ·  [ −1+i     1  ]

     [ 2i    0 ]
Λ =  [ 0    −i ]

                    [ 1     −1−i ]   [ 2i    0 ]           [  1      1+i ]
Hence K = UΛUᴴ  =   [ 1−i     1  ] · [ 0    −i ] · (1/3) · [ −1+i     1  ] .

ASSIGNMENT PROBLEMS

Q.3 (a) If x = re^(iθ), what are x², x⁻¹ and x̄ in polar coordinates?
Where are the complex numbers that have x⁻¹ = x̄?

Q.3 (b) At t = 0, the complex number e^((−1+i)t) equals one. Sketch
its path in the complex plane as t increases from 0 to 2π.

Q.10 Find the lengths and the inner product of

     [ 2 − 4i ]            [ 2 + 4i ]
x =  [   4i   ]   and  y = [   4i   ] .

Q.22 Prove that AᴴA is always a Hermitian matrix. Compute AᴴA and AAᴴ:

     [ i   1   i ]
A =  [ 1   i   i ]

Q.43 A matrix with orthonormal eigenvectors has the form
A = UΛU⁻¹ = UΛUᴴ. Prove that AAᴴ = AᴴA. These are exactly the normal
matrices.
POSITIVE DEFINITE MATRIX

Lecture Note 36

1 TEST FOR POSITIVE DEFINITE MATRIX

Now the questions arise:
(i) Which symmetric matrices have the property xᵀAx > 0 for all nonzero vectors x?
(ii) The signs of the eigenvalues give rise to the tests on a, b, c: the symmetric matrix

     [ a   b ]
A =  [ b   c ]                                  (1)

is positive definite when a > 0 and ac − b² > 0.
(iii) For a very large symmetric matrix A it is impractical to compute the eigenvalues; instead we
can compute the pivots, and the signs of the pivots of a symmetric matrix are the same as the
signs of the eigenvalues.
(iv) We can also use determinants to test positive definiteness. If a = c = −1 and b = 0 then
det A = 1, but A = −I is negative definite. So the determinant test is applied not only to A itself,
giving ac − b² > 0, but also to the 1 by 1 submatrix a in the upper left-hand corner.

2 Positive Definite Matrices

Each of the following tests is a necessary and sufficient condition for the real symmetric matrix A
to be positive definite:
(I) xᵀAx > 0 for all nonzero real vectors x.
(II) All the eigenvalues of A satisfy λi > 0.
(III) All the upper left submatrices Ak have positive determinants.
(IV) All the pivots (without row exchanges) satisfy dk > 0.

Example:
     [  2   −1 ]
A =  [ −1    2 ]                                (2)

3 Positive Semidefinite Matrices

Each of the following tests is a necessary and sufficient condition for the real symmetric matrix A
to be positive semidefinite:
(I) xᵀAx ≥ 0 for all nonzero real vectors x.
(II) All the eigenvalues of A satisfy λi ≥ 0.
(III) No principal submatrices have negative determinants.
(IV) No pivots are negative.

Example:
     [ 0   0 ]
A =  [ 0   1 ]                                  (3)

4 Negative Definite Matrices

Each of the following tests is a necessary and sufficient condition for the real symmetric matrix A
to be negative definite:
(I) xᵀAx < 0 for all nonzero real vectors x.
(II) All the eigenvalues of A satisfy λi < 0.
(III) The upper left submatrices Ak have determinants of alternating sign: (−1)ᵏ det Ak > 0,
i.e. negative for odd k and positive for even k.
(IV) All the pivots (without row exchanges) satisfy dk < 0.

Example:
     [ −1    0 ]
A =  [  0   −3 ]                                (4)

5 Negative Semidefinite Matrices

Each of the following tests is a necessary and sufficient condition for the real symmetric matrix A
to be negative semidefinite:
(I) xᵀAx ≤ 0 for all nonzero real vectors x.
(II) All the eigenvalues of A satisfy λi ≤ 0.
(III) No principal submatrix of odd order has a positive determinant, and none of even order has
a negative determinant.
(IV) No pivots are positive.

Example:
     [ 0    0 ]
A =  [ 0   −1 ]                                 (5)

Note: This is another way to test positive definiteness: the symmetric matrix A is positive
definite if and only if there is a matrix R with independent columns such that A = RᵀR.

Elimination: A = LDLᵀ = (L√D)(√D Lᵀ), so take R = √D Lᵀ.

Eigenvalues: A = QΛQᵀ = (Q√Λ)(√Λ Qᵀ), so take R = √Λ Qᵀ.
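The tests are easy to run numerically; a minimal numpy sketch for the example A in Eq. (2), where a successful Cholesky factorization plays the role of the positive-pivot test and also produces a factor R:

    import numpy as np

    A = np.array([[2.0, -1.0],
                  [-1.0, 2.0]])

    # Test (II): all eigenvalues positive.
    print(np.all(np.linalg.eigvalsh(A) > 0))  # True

    # Test (III): all upper left determinants positive.
    print(all(np.linalg.det(A[:k, :k]) > 0 for k in (1, 2)))  # True

    # Test (IV): Cholesky succeeds exactly when all pivots are positive,
    # and it yields R with A = R^T R.
    R = np.linalg.cholesky(A).T
    print(np.allclose(A, R.T @ R))  # True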
6 Problem Set 6.2

1. Decide for or against the positive definiteness of

     [  2   −1   −1 ]        [  2   −1   −1 ]        [ 0   1   2 ]
A =  [ −1    2   −1 ] ,  B = [ −1    2    1 ] ,  C = [ 1   0   1 ]
     [ −1   −1    2 ]        [ −1    1    2 ]        [ 2   1   0 ]

Answer: det A = 0. Since the determinant is the product of the eigenvalues, at least
one eigenvalue of A is equal to zero. Thus A is not positive definite.

det(B − λI) = 0
⇒ (λ − 1)²(λ − 4) = 0
⇒ λ = 1, 1, 4 > 0. Thus B is positive definite.

                              [ 0   1 ]
det C2 = −1 < 0, where C2 =   [ 1   0 ]

is the upper left 2 by 2 submatrix of C. Thus C is not positive definite.

3. If A and B are positive definite, then A + B is positive definite. Pivots and eigenvalues
are not convenient for A + B. It is much better to prove xᵀ(A + B)x > 0.

Proof: A and B are positive definite,

⇒ xᵀAx > 0 and xᵀBx > 0 for all real x ≠ 0
⇒ xᵀ(A + B)x = xᵀAx + xᵀBx > 0 for all real x ≠ 0
⇒ A + B is positive definite.

11. If A = RT R prove the generalized Schwarz inequality

|xT Ay|2 ≤ (xT Ax)(y T Ay)

Answer: Given A = RᵀR,
⇒ |xᵀAy|² = |xᵀRᵀRy|² = |(Rx)ᵀ(Ry)|² ≤ ‖Rx‖²‖Ry‖²   (by the Cauchy–Schwarz inequality)
= [(Rx)ᵀ(Rx)][(Ry)ᵀ(Ry)] = (xᵀRᵀRx)(yᵀRᵀRy) = (xᵀAx)(yᵀAy).
Hence proved.

Lecture Note 37

SINGULAR VALUE DECOMPOSITION

Any m by n matrix A can be factored into A = UΣVᵀ.

A = UΣVᵀ is known as the “SVD” or the singular value decomposition, where U is orthogonal,
Σ is diagonal and V is orthogonal.
We know that if A is symmetric positive definite, its eigenvectors are orthogonal and we can
write A = QΛQᵀ. This is a special case of the SVD with U = V = Q.

1 Work Steps

To find U, follow these steps:

1. Find the eigenvalues of AAᵀ. Arrange them in decreasing order, i.e.
   λ1 > λ2 > .......

2. Find eigenvectors x1, x2, ..., xm for λ1, λ2, ..., λm.

3. Normalize the eigenvectors: ui = xi / ‖xi‖.

4. Write the orthogonal matrix U = [ u1  u2  ...  um ] with ui as its ith column.

To find Vᵀ, the steps are as follows:

1. Find the eigenvalues of AᵀA. Name them such that λ1 > λ2 > .......

2. Find eigenvectors yi for the λi of AᵀA.

3. Normalize the eigenvectors: vi = yi / ‖yi‖.

4. Write the orthogonal matrix Vᵀ with v1, v2, ..., vn as its rows:

      [ — v1 — ]
Vᵀ =  [ — v2 — ]
      [    ⋮   ]
      [ — vn — ]
To find Σ, follow these steps:

1. The diagonal (but rectangular) matrix Σ takes its entries from the eigenvalues of AᵀA.
The positive entries are σ1, σ2, ..., σr, with σi = √λi. They are the singular values of A.
They fill the first r places on the main diagonal of Σ; the rest of the entries are 0:

     [ σ1   0    0   ···  0 ]
     [ 0    σ2   0   ···  0 ]
Σ =  [ 0    0    σ3  ···  0 ]
     [ ···                  ]
     [ 0    0    0   ···  0 ]

NOTE: All nonzero eigenvalues of AAᵀ and AᵀA are the same.

SVD:   A(m×n) = U(m×m) Σ(m×n) Vᵀ(n×n)

If A = UΣVᵀ (the SVD), then its pseudoinverse is A⁺ = VΣ⁺Uᵀ, where

      [ 1/σ1    0      0    ···  0 ]
      [ 0     1/σ2     0    ···  0 ]
Σ⁺ =  [ 0       0    1/σ3   ···  0 ]
      [ ···                       ]
      [ 0       0      0    ···  0 ]

Σ⁺ is obtained from Σᵀ by replacing all the nonzero singular values by their reciprocals.
If A⁻¹ exists, then A⁻¹ = A⁺.
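numpy computes both factorizations directly; a minimal sketch (the matrix is the 2 by 2 example from Problem 1 below):

    import numpy as np

    A = np.array([[1.0, 4.0],
                  [2.0, 8.0]])

    # Full SVD: A = U @ diag(sigma) @ Vt.
    U, sigma, Vt = np.linalg.svd(A)
    print(sigma)  # approximately [9.2195, 0], i.e. sqrt(85) and 0

    # Pseudoinverse A+ = V Sigma+ U^T, with the nonzero sigma inverted.
    Sigma_plus = np.diag([1.0 / sigma[0], 0.0])
    print(np.allclose(np.linalg.pinv(A), Vt.T @ Sigma_plus @ U.T))  # True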

2 Problem Set 6.3

Question 1. (a) Compute AAᵀ and its eigenvalues σ1², 0 and unit eigenvectors u1, u2.

(b) Choose signs so that Av1 = σ1u1 and verify the SVD:

[ 1   4 ]                 [ σ1   0 ]
[ 2   8 ]  = [ u1  u2 ] · [  0   0 ] · [ v1  v2 ]ᵀ .

(c) Which four vectors give orthonormal bases for C(A), N(A), C(Aᵀ), N(Aᵀ)?
Answer: (a) Let us compute AAᵀ and its eigenvalues:

       [ 1   4 ]   [ 1   2 ]   [ 17   34 ]
AAᵀ =  [ 2   8 ] · [ 4   8 ] = [ 34   68 ]

det(AAᵀ − λI) = 0

  | 17−λ    34  |
⇒ |  34    68−λ | = 0

⇒ (17 − λ)(68 − λ) − 34·34 = 0

⇒ λ² − 85λ + 1156 − 1156 = 0

⇒ λ² − 85λ = 0

⇒ λ = 0, 85

⇒ σ1² = λ1 = 85,  λ2 = 0

Now let us find the corresponding unit eigenvectors for the eigenvalues of AAᵀ.

(AAᵀ − λ1 I)x = 0

  [ −68    34 ]   [ x1 ]   [ 0 ]
⇒ [  34   −17 ] · [ x2 ] = [ 0 ]

⇒ −68x1 + 34x2 = 0 and 34x1 − 17x2 = 0

⇒ 2x1 − x2 = 0  ⇒  2x1 = x2  ⇒  x1/1 = x2/2

∴ x = (1, 2)ᵀ is an eigenvector for λ1 = 85, and

u1 = x/‖x‖ = (1/√5, 2/√5)ᵀ    (unit eigenvector corresponding to λ1 = 85)

(AAᵀ − λ2 I)x = 0

  [ 17   34 ]   [ x1 ]   [ 0 ]
⇒ [ 34   68 ] · [ x2 ] = [ 0 ]

⇒ 17x1 + 34x2 = 0 and 34x1 + 68x2 = 0

⇒ x1 + 2x2 = 0  ⇒  x1 = −2x2  ⇒  x1/(−2) = x2/1

∴ x = (−2, 1)ᵀ is an eigenvector for λ2 = 0, and

u2 = x/‖x‖ = (−2/√5, 1/√5)ᵀ    (unit eigenvector corresponding to λ2 = 0)

(b) Take v1 = (1/√17)(1, 4)ᵀ and v2 = (1/√17)(−4, 1)ᵀ from the eigenvectors of AᵀA,
and choose the sign of σ1 = √85 so that Av1 = σ1u1:

       [ 1   4 ]           [ 1 ]           [ 17 ]         [ 1/√5 ]
Av1 =  [ 2   8 ] · (1/√17)·[ 4 ] = (1/√17)·[ 34 ] = √85 · [ 2/√5 ] = σ1 u1 .

Verifying the SVD:

              [ σ1   0 ]                         [ 1   −2 ]   [ √85   0 ]            [  1   4 ]
[ u1  u2 ] ·  [  0   0 ] · [ v1  v2 ]ᵀ = (1/√5)· [ 2    1 ] · [  0    0 ] · (1/√17)· [ −4   1 ]

Since (1/√5) · √85 · (1/√17) = 1, this equals

[ 1   −2 ]   [ 1   4 ]   [ 1   4 ]
[ 2    1 ] · [ 0   0 ] = [ 2   8 ]  = A .

(c) Since the rank of A (and consequently of AᵀA and AAᵀ) equals 1, u1 forms a basis
for C(A), while u2 forms a basis for N(Aᵀ). On the other hand, v1 forms a basis for
C(Aᵀ), while v2 forms a basis for N(A).

Question 4. Find the SVD from the eigenvectors v1, v2 of AᵀA and Avi = σiui:

                      [ 1   1 ]
Fibonacci matrix A =  [ 1   0 ]

Answer: In this case Aᵀ = A, so

            [ 1   1 ]   [ 1   1 ]   [ 2   1 ]
AᵀA = A² =  [ 1   0 ] · [ 1   0 ] = [ 1   1 ]

det(AᵀA − λI) = 0

  | 2−λ    1  |
⇒ |  1    1−λ | = 0

⇒ (2 − λ)(1 − λ) − 1 = 0

⇒ λ² − 3λ + 1 = 0

⇒ λ = 3/2 ± √5/2 are the eigenvalues.

Now we determine the eigenvectors:

AᵀAx = λx

  [ 2   1 ]   [ x1 ]   [ λx1 ]
⇒ [ 1   1 ] · [ x2 ] = [ λx2 ]

⇒ (λ − 2)x1 = x2  ⇐⇒  x1 = x2/(λ − 2)

With x2 = 1:  x1 = 1/(λ − 2) = 2/(−1 ± √5) = (1 ± √5)/2

       [ (1+√5)/2 ]             [ (1−√5)/2 ]
∴ v1 = [     1    ]   and v2 =  [     1    ] ,   whose norms are

‖v1‖ = √((5+√5)/2)   and   ‖v2‖ = √((5−√5)/2) .

The unit vectors are

       [ √((5+√5)/10) ]              [ −√((5−√5)/10) ]
v̂1 =   [ √(2/(5+√5))  ]    and v̂2 =  [  √(2/(5−√5))  ]

On the other hand we have σ1 = √((3+√5)/2) and σ2 = √((3−√5)/2), and

u1 = (1/σ1)·Av̂1 = v̂1 ,       u2 = (1/σ2)·Av̂2 = −v̂2 .

Finally the SVD decomposition of the matrix A results:

A = UΣVᵀ

       [ √((5+√5)/10)    √((5−√5)/10) ]   [ √((3+√5)/2)       0       ]   [  √((5+√5)/10)   √(2/(5+√5)) ]
    =  [ √(2/(5+√5))    −√(2/(5−√5))  ] · [      0       √((3−√5)/2)  ] · [ −√((5−√5)/10)   √(2/(5−√5)) ]
Lecture Note 38

SINGULAR VALUE DECOMPOSITION (continued)

Think of A as a linear transformation taking a vector v1 in its row space to a vector u1 = Av1
in its column space. The SVD arises from finding an orthogonal basis for the row space that gets
transformed into an orthogonal basis for the column space: Avi = σiui.

We have to find an orthonormal basis v1, v2, ..., vr for the row space of A for which

                                                                  [ σ1              ]
A·[ v1  v2  ···  vr ] = [ σ1u1  σ2u2  ···  σrur ] = [ u1  u2  ···  ur ]·[     σ2          ]
                                                                  [          ⋱      ]
                                                                  [              σr ]

with u1, u2, ..., ur an orthonormal basis for the column space of A. Once we add in the null
spaces, this equation becomes AV = UΣ. (We extend the orthonormal bases v1, v2, ..., vr
and u1, u2, ..., ur to orthonormal bases for the entire spaces. Since vr+1, vr+2, ..., vn will be
in the null space of A, the diagonal entries σr+1, σr+2, ..., σn will be 0.) The columns of V and
U are bases for the row and column spaces, respectively.

1 Problem Set 6.3

Question 14: Find the SVD and the pseudoinverse VΣ⁺Uᵀ of

                          [ 0   1   0 ]        [ 1   1 ]
A = [ 1  1  1  1 ] ,  B = [ 1   0   0 ] ,  C = [ 0   0 ]

Answer: Let us find the eigenvalues and eigenvectors of AAᵀ and AᵀA.

                       [ 1 ]
                       [ 1 ]
AAᵀ = [ 1  1  1  1 ] · [ 1 ] = [ 4 ]
                       [ 1 ]

det(AAᵀ − λI) = 0  ⇒  4 − λ = 0  ⇒  λ = 4, and x = 1 is an eigenvector.

       [ 1 ]                    [ 1   1   1   1 ]
       [ 1 ]                    [ 1   1   1   1 ]
AᵀA =  [ 1 ] · [ 1  1  1  1 ] = [ 1   1   1   1 ]
       [ 1 ]                    [ 1   1   1   1 ]

det(AᵀA − λI) = 0. The eigenvalues of AᵀA are 4, 0, 0, 0.

For λ = 4, (AᵀA − λI)x = 0:

  [ −3    1    1    1 ]   [ x1 ]   [ 0 ]         [ 1   −3    1    1 ]   [ x1 ]   [ 0 ]
  [  1   −3    1    1 ]   [ x2 ]   [ 0 ]         [ 0   −8    4    4 ]   [ x2 ]   [ 0 ]
⇒ [  1    1   −3    1 ] · [ x3 ] = [ 0 ]   =⇒    [ 0    0   −2    2 ] · [ x3 ] = [ 0 ]
  [  1    1    1   −3 ]   [ x4 ]   [ 0 ]         [ 0    0    0    0 ]   [ x4 ]   [ 0 ]

⇒ −2x3 + 2x4 = 0,   −8x2 + 4x3 + 4x4 = 0,   x1 − 3x2 + x3 + x4 = 0

⇒ x3 = x4; let x4 = 1 ⇒ x2 = 1, x1 = 1

∴ x = (1, 1, 1, 1)ᵀ, or normalized, v1 = (1/2, 1/2, 1/2, 1/2)ᵀ.

For λ = 0, (AᵀA − λI)x = 0:

  [ 1   1   1   1 ]   [ x1 ]   [ 0 ]
  [ 1   1   1   1 ]   [ x2 ]   [ 0 ]
⇒ [ 1   1   1   1 ] · [ x3 ] = [ 0 ]
  [ 1   1   1   1 ]   [ x4 ]   [ 0 ]

⇒ x1 + x2 + x3 + x4 = 0

∴ x = (−1, 1, 0, 0)ᵀ, (−1, 0, 1, 0)ᵀ, (−1, 0, 0, 1)ᵀ,

or normalized,

(−1/√2, 1/√2, 0, 0)ᵀ,   (−1/√2, 0, 1/√2, 0)ᵀ,   (−1/√2, 0, 0, 1/√2)ᵀ .

(These three are normalized but not mutually orthogonal; strictly they should also be
orthonormalized, e.g. by Gram–Schmidt, for V to be an orthogonal matrix. This does not
affect the pseudoinverse below, which only involves v1.)

∴ The SVD of A = UΣVᵀ is

                            [  1/2     1/2     1/2     1/2  ]
                            [ −1/√2   1/√2      0       0   ]
A = [1] · [ 2  0  0  0 ] ·  [ −1/√2     0     1/√2      0   ]
                            [ −1/√2     0       0     1/√2  ]

                  [ 1/2   −1/√2   −1/√2   −1/√2 ]   [ 1/2 ]         [ 1/4 ]
                  [ 1/2    1/√2     0       0   ]   [  0  ]         [ 1/4 ]
∴ A⁺ = VΣ⁺Uᵀ =    [ 1/2     0      1/√2     0   ] · [  0  ] · [1] = [ 1/4 ]
                  [ 1/2     0       0      1/√2 ]   [  0  ]         [ 1/4 ]
Now we work out the SVD of the matrix B:

     [ 0   1   0 ]         [ 0   1 ]
B =  [ 1   0   0 ] ,  Bᵀ = [ 1   0 ]
                           [ 0   0 ]

       [ 0   1   0 ]   [ 0   1 ]   [ 1   0 ]
BBᵀ =  [ 1   0   0 ] · [ 1   0 ] = [ 0   1 ]
                       [ 0   0 ]

det(BBᵀ − λI) = 0

  | 1−λ    0  |
⇒ |  0    1−λ | = 0  ⇒  λ = 1, 1

and the eigenvectors of BBᵀ satisfy (BBᵀ − λI)x = 0:

  [ 0   0 ]   [ x1 ]   [ 0 ]
⇒ [ 0   0 ] · [ x2 ] = [ 0 ]

so every nonzero vector is an eigenvector; we take x = (0, 1)ᵀ and (1, 0)ᵀ. Hence

     [ 0   1 ]
U =  [ 1   0 ]

       [ 0   1 ]                 [ 1   0   0 ]
BᵀB =  [ 1   0 ] · [ 0  1  0 ] = [ 0   1   0 ]
       [ 0   0 ]   [ 1  0  0 ]   [ 0   0   0 ]

det(BᵀB − λI) = 0. The eigenvalues of BᵀB are 1, 1, 0.

For λ = 1, (BᵀB − λI)x = 0:

  [ 0   0    0 ]   [ x1 ]   [ 0 ]
⇒ [ 0   0    0 ] · [ x2 ] = [ 0 ]
  [ 0   0   −1 ]   [ x3 ]   [ 0 ]

     [ 1 ]   [ 0 ]
∴ x =[ 0 ] , [ 1 ]
     [ 0 ]   [ 0 ]

For λ = 0, (BᵀB − λI)x = 0:

  [ 1   0   0 ]   [ x1 ]   [ 0 ]
⇒ [ 0   1   0 ] · [ x2 ] = [ 0 ]
  [ 0   0   0 ]   [ x3 ]   [ 0 ]

x1 = 0, x2 = 0, x3 = 1 (free variable), so the eigenvector is x = (0, 0, 1)ᵀ.
Together these three eigenvectors are orthonormal, so

     [ 1   0   0 ]
V =  [ 0   1   0 ]
     [ 0   0   1 ]

                    [ 0   1 ]   [ 1   0   0 ]   [ 1   0   0 ]ᵀ
SVD of B = UΣVᵀ  =  [ 1   0 ] · [ 0   1   0 ] · [ 0   1   0 ]
                                                [ 0   0   1 ]

                             [ 1   0   0 ]   [ 1   0 ]              [ 0   1 ]
Pseudoinverse: B⁺ = VΣ⁺Uᵀ =  [ 0   1   0 ] · [ 0   1 ] · [ 0  1 ] = [ 1   0 ]
                             [ 0   0   1 ]   [ 0   0 ]   [ 1  0 ]   [ 0   0 ]
Now we work out the SVD of the matrix C:

     [ 1   1 ]         [ 1   0 ]
C =  [ 0   0 ] ,  Cᵀ = [ 1   0 ]

       [ 1   1 ]   [ 1   0 ]   [ 2   0 ]
CCᵀ =  [ 0   0 ] · [ 1   0 ] = [ 0   0 ]

det(CCᵀ − λI) = 0. The eigenvalues of CCᵀ are 2, 0.

For λ = 2, (CCᵀ − λI)x = 0:

  [ 0    0 ]   [ x1 ]   [ 0 ]
⇒ [ 0   −2 ] · [ x2 ] = [ 0 ]

⇒ −2x2 = 0 ⇒ x2 = 0 and x1 = 1 (free variable)

∴ x = (1, 0)ᵀ

For λ = 0, (CCᵀ − λI)x = 0:

  [ 2   0 ]   [ x1 ]   [ 0 ]
⇒ [ 0   0 ] · [ x2 ] = [ 0 ]

⇒ 2x1 = 0 ⇒ x1 = 0 and x2 = 1 (free variable)

∴ x = (0, 1)ᵀ

     [ 1   0 ]
U =  [ 0   1 ]

       [ 1   0 ]   [ 1   1 ]   [ 1   1 ]
CᵀC =  [ 1   0 ] · [ 0   0 ] = [ 1   1 ]

det(CᵀC − λI) = 0. The eigenvalues of CᵀC are 2, 0.

For λ = 2, (CᵀC − λI)x = 0:

  [ −1    1 ]   [ x1 ]   [ 0 ]
⇒ [  1   −1 ] · [ x2 ] = [ 0 ]

⇒ x1 = x2; the eigenvector is x = (1, 1)ᵀ, so v1 = (1/√2, 1/√2)ᵀ.

For λ = 0, (CᵀC − λI)x = 0:

  [ 1   1 ]   [ x1 ]   [ 0 ]
⇒ [ 1   1 ] · [ x2 ] = [ 0 ]

⇒ x1 = −x2; the eigenvector is x = (1, −1)ᵀ, so v2 = (1/√2, −1/√2)ᵀ.

Hence

            [ 1    1 ]
V = (1/√2)· [ 1   −1 ]

                    [ 1   0 ]   [ √2   0 ]           [ 1    1 ]ᵀ
SVD of C = UΣVᵀ  =  [ 0   1 ] · [  0   0 ] · (1/√2)· [ 1   −1 ]

Pseudoinverse of C:

                      [ 1    1 ]   [ 1/√2   0 ]   [ 1   0 ]   [ 1/2   0 ]
C⁺ = VΣ⁺Uᵀ = (1/√2)·  [ 1   −1 ] · [  0     0 ] · [ 0   1 ] = [ 1/2   0 ]
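All three pseudoinverses can be checked against numpy directly (a small sketch):

    import numpy as np

    A = np.array([[1.0, 1.0, 1.0, 1.0]])
    B = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0]])
    C = np.array([[1.0, 1.0],
                  [0.0, 0.0]])

    print(np.linalg.pinv(A))  # a column of 1/4's
    print(np.linalg.pinv(B))  # [[0, 1], [1, 0], [0, 0]]
    print(np.linalg.pinv(C))  # [[1/2, 0], [1/2, 0]]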

Lecture Note 39

MATRIX NORM AND CONDITION NUMBER

For a positive definite matrix A, the norm of A is denoted ‖A‖ and equals λmax of A.

The norm of A⁻¹ is ‖A⁻¹‖ = 1/λmin of A.

The condition number is c = ‖A‖ ‖A⁻¹‖ = λmax/λmin of A.

For a general matrix A (A may not be a positive definite matrix):

‖A‖ = norm of A = √(λmax of AᵀA)

‖A⁻¹‖ = norm of A⁻¹ = 1/√(λmin of AᵀA)

c = condition number = ‖A‖ ‖A⁻¹‖ = √(λmax of AᵀA) / √(λmin of AᵀA)
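numpy exposes both quantities directly; a quick sketch (the 2-norm versions agree with the formulas above):

    import numpy as np

    A = np.array([[100.0, 0.0],
                  [0.0,   2.0]])

    print(np.linalg.norm(A, 2))  # 100.0, the largest singular value
    print(np.linalg.cond(A, 2))  # 50.0, sigma_max / sigma_min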

1 Problem Set 7.2

Question 15. Find the norms λmax and condition numbers λmax/λmin of these
positive definite matrices:

[ 100   0 ]     [ 2   1 ]     [ 3   1 ]
[  0    2 ] ,   [ 1   2 ] ,   [ 1   1 ]

           [ 100   0 ]
Answer: A =[  0    2 ]

⇒ det(A − λI) = 0
⇒ (100 − λ)(2 − λ) = 0
⇒ λ = 100, 2

‖A‖ = λmax of A = 100

‖A⁻¹‖ = 1/λmin of A = 1/2

c = ‖A‖ ‖A⁻¹‖ = 100 · (1/2) = 50

        [ 2   1 ]
Now B = [ 1   2 ]

⇒ det(B − λI) = 0
⇒ (2 − λ)² − 1 = 0
⇒ λ = 3, 1

‖B‖ = λmax of B = 3

‖B⁻¹‖ = 1/λmin of B = 1

c = ‖B‖ ‖B⁻¹‖ = 3

            [ 3   1 ]
Now let D = [ 1   1 ]

⇒ det(D − λI) = 0
⇒ (3 − λ)(1 − λ) − 1 = 0
⇒ λ² − 4λ + 2 = 0
⇒ λ = 2 ± √2

‖D‖ = λmax of D = 2 + √2

‖D⁻¹‖ = 1/λmin of D = 1/(2 − √2)

c = ‖D‖ ‖D⁻¹‖ = (2 + √2)/(2 − √2)

Question 17: Find the norms and condition numbers from the square roots of
λmax(AᵀA) and λmin(AᵀA):

[ −2   0 ]     [ 1   1 ]     [  1   1 ]
[  0   2 ] ,   [ 0   0 ] ,   [ −1   1 ]

                       [ −2   0 ]        [ −2   0 ]           [ 4   0 ]
Answer: Let us name A =[  0   2 ] , Aᵀ = [  0   2 ]  ⇒  AᵀA = [ 0   4 ]

The eigenvalues are 4, 4.

‖A‖ = √(λmax of AᵀA) = 2

‖A⁻¹‖ = 1/√(λmin of AᵀA) = 1/2

c = ‖A‖ ‖A⁻¹‖ = 2 · (1/2) = 1

                          [ 1   1 ]        [ 1   0 ]           [ 1   1 ]
Name the next matrix B =  [ 0   0 ] , Bᵀ = [ 1   0 ]  ⇒  BᵀB = [ 1   1 ]

The eigenvalues are 2, 0.

‖B‖ = √(λmax of BᵀB) = √2

‖B⁻¹‖ = 1/√(λmin of BᵀB) = 1/0 = ∞   (B is singular)

c = ‖B‖ ‖B⁻¹‖ = √2 · ∞ = ∞

                          [  1   1 ]        [ 1   −1 ]           [ 2   0 ]
Name the next matrix D =  [ −1   1 ] , Dᵀ = [ 1    1 ]  ⇒  DᵀD = [ 0   2 ]

The eigenvalues are 2, 2.

‖D‖ = √(λmax of DᵀD) = √2

‖D⁻¹‖ = 1/√(λmin of DᵀD) = 1/√2

c = ‖D‖ ‖D⁻¹‖ = √2 · (1/√2) = 1

Lecture Note 40

ITERATIVE METHODS FOR Ax = b

We know that solving the system Ax = b by Gaussian elimination requires a finite number of
steps for a full matrix, but when the matrix is large we may have to settle for an approximate x
that can be obtained more quickly.

An iterative method is easy to invent, by splitting the matrix A. Put A = S − T in Ax = b:
⇒ Sx = Tx + b.   Iterating from xk to xk+1:   Sxk+1 = Txk + b.

The iterative method in the above equation is convergent if and only if every eigenvalue of
S⁻¹T satisfies |λ| < 1. Its rate of convergence depends on the maximum of |λ|.

Working Procedure:
1. Write A as the sum of a lower triangular L, a diagonal D, and an upper triangular U matrix,
i.e. A = L + D + U.
2. Select S = diagonal part of A (Jacobi's method), i.e. S = D, or
S = triangular part of A (Gauss-Seidel method), i.e. S = D + L.
3. T = −(L + U) for Jacobi's method, T = −U for the Gauss-Seidel method.
4. Find S⁻¹T = D⁻¹(−L − U) for Jacobi's method,
S⁻¹T = (D + L)⁻¹(−U) for the Gauss-Seidel method.

1 Problem Set 7.4

Question 2: The matrix

     [  2   −1    0 ]
A =  [ −1    2   −1 ]
     [  0   −1    2 ]

has eigenvalues 2 − √2, 2, and 2 + √2. Find the Jacobi matrix D⁻¹(−L − U) and the
Gauss-Seidel matrix (D + L)⁻¹(−U).

Answer:

     [  2   −1    0 ]               [  0   0   0 ]   [ 2   0   0 ]   [ 0   −1    0 ]
A =  [ −1    2   −1 ] = L + D + U = [ −1   0   0 ] + [ 0   2   0 ] + [ 0    0   −1 ]
     [  0   −1    2 ]               [  0  −1   0 ]   [ 0   0   2 ]   [ 0    0    0 ]

                              [ 1/2    0     0  ]   [ 0   1   0 ]   [  0    1/2    0  ]
Jacobi matrix D⁻¹(−L − U) =   [  0    1/2    0  ] · [ 1   0   1 ] = [ 1/2    0    1/2 ]
                              [  0     0    1/2 ]   [ 0   1   0 ]   [  0    1/2    0  ]

                                       [ 1/2    0     0  ]   [ 0   1   0 ]   [ 0   1/2    0  ]
Gauss-Seidel matrix (D + L)⁻¹(−U) =    [ 1/4   1/2    0  ] · [ 0   0   1 ] = [ 0   1/4   1/2 ]
                                       [ 1/8   1/4   1/2 ]   [ 0   0   0 ]   [ 0   1/8   1/4 ]
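A short numpy sketch of both splittings in action (the right-hand side b is a made-up example; both iterations converge here because every eigenvalue of S⁻¹T satisfies |λ| < 1):

    import numpy as np

    A = np.array([[2.0, -1.0, 0.0],
                  [-1.0, 2.0, -1.0],
                  [0.0, -1.0, 2.0]])
    b = np.array([1.0, 2.0, 3.0])  # hypothetical right-hand side

    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)

    for S, T in [(D, -(L + U)), (D + L, -U)]:  # Jacobi, then Gauss-Seidel
        x = np.zeros(3)
        for _ in range(100):                   # iterate S x_{k+1} = T x_k + b
            x = np.linalg.solve(S, T @ x + b)
        print(np.allclose(A @ x, b))           # True for both methods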
