Lecture Notes On Numerical Methods
1.1.1 Vector
Definition 1.1 A vector is a rectangular array of elements. It is customary to represent vectors by lowercase underlined or bold letters. The elements may be listed in parentheses separated by commas, given in terms of the directions i, j and k, or written as a column enclosed in parentheses or square brackets.
Example 1.1
$$v = 3i + 5j + 3k = (3,\ 5,\ 3) = \begin{pmatrix} 3 \\ 5 \\ 3 \end{pmatrix}$$
1.1.2 Vector Norms
The size of a vector is measured by a vector norm. The norm of a vector $v$ is denoted by $\|v\|$. The commonly used vector norms are the 2-norm $\|\cdot\|_2$ and the infinity-norm $\|\cdot\|_\infty$; another norm is the one-norm $\|\cdot\|_1$.
If $v = (x_1, x_2, \ldots, x_n)^T$, then
$$\|v\|_2 = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}, \qquad \|v\|_1 = \sum_{i=1}^{n} |x_i|, \qquad \|v\|_\infty = \max_{1 \le i \le n} |x_i|.$$
Example 1.2 Find $\|x\|_1$, $\|x\|_2$ and $\|x\|_\infty$ for $x = (3, -4, 2)^T$.
Solution:
(i) $\|x\|_1 = \sum_{i=1}^{3} |x_i| = |3| + |-4| + |2| = 3 + 4 + 2 = 9$
(ii) $\|x\|_2 = \sqrt{x_1^2 + x_2^2 + x_3^2} = \sqrt{3^2 + (-4)^2 + 2^2} = \sqrt{29}$
(iii) $\|x\|_\infty = \max\{|x_i|,\ i = 1, 2, 3\} = \max\{|3|, |-4|, |2|\} = \max\{3, 4, 2\} = 4$
Every vector norm satisfies the following properties:
1. $\|x\| \ge 0$
2. $\|x\| = 0 \iff x = 0$
3. $\|\alpha x\| = |\alpha|\,\|x\|$ for all $\alpha \in \mathbb{R}$
4. $\|x + y\| \le \|x\| + \|y\|$ (the triangle inequality)
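The three norms are straightforward to compute. Below is a minimal Python sketch (the function names are my own, chosen for illustration) applied to the vector of Example 1.2:

```python
import math

def norm_1(x):
    # 1-norm: sum of absolute values
    return sum(abs(xi) for xi in x)

def norm_2(x):
    # 2-norm: square root of the sum of squares
    return math.sqrt(sum(xi ** 2 for xi in x))

def norm_inf(x):
    # infinity-norm: largest absolute value
    return max(abs(xi) for xi in x)

x = [3, -4, 2]  # the vector of Example 1.2
print(norm_1(x))    # 9
print(norm_2(x))    # sqrt(29) = 5.385...
print(norm_inf(x))  # 4
```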
e.g. $A = \begin{pmatrix} 2 & 1 \\ 3 & 4 \end{pmatrix}$, or $A = \begin{bmatrix} 2 & 1 \\ 3 & 4 \end{bmatrix}$
Example 1.3
Find $A^T$, given that $A = \begin{pmatrix} 2 & 3 & 4 \\ 1 & 0 & 2 \end{pmatrix}$.
Solution:
$$A^T = \begin{pmatrix} 2 & 1 \\ 3 & 0 \\ 4 & 2 \end{pmatrix}$$
Multiplication
Two matrices can be multiplied only if they are compatible. Since matrix multiplication is not commutative (i.e. $AB \ne BA$ in general), the order of multiplication is important.
Compatibility is equivalent to saying that the number of columns of the first matrix must
be equal to the number of rows of the second matrix.
Example 1.4
Given that $A = \begin{pmatrix} 1 & 3 & 2 \\ 6 & 4 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} -1 & 3 \\ 0 & 4 \\ 2 & 6 \end{pmatrix}$, find (i) $AB$ (ii) $BA$.
Solution
(i) $AB = \begin{pmatrix} 1 & 3 & 2 \\ 6 & 4 & 1 \end{pmatrix} \begin{pmatrix} -1 & 3 \\ 0 & 4 \\ 2 & 6 \end{pmatrix} = \begin{pmatrix} -1+0+4 & 3+12+12 \\ -6+0+2 & 18+16+6 \end{pmatrix} = \begin{pmatrix} 3 & 27 \\ -4 & 40 \end{pmatrix}$
(ii) $BA = \begin{pmatrix} -1 & 3 \\ 0 & 4 \\ 2 & 6 \end{pmatrix} \begin{pmatrix} 1 & 3 & 2 \\ 6 & 4 & 1 \end{pmatrix} = \begin{pmatrix} -1+18 & -3+12 & -2+3 \\ 0+24 & 0+16 & 0+4 \\ 2+36 & 6+24 & 4+6 \end{pmatrix} = \begin{pmatrix} 17 & 9 & 1 \\ 24 & 16 & 4 \\ 38 & 30 & 10 \end{pmatrix}$
The transpose satisfies the following properties:
(a) $(A + B)^T = A^T + B^T$
(b) $(AB)^T = B^T A^T$
(c) $(A^T)^T = A$
(a) Square Matrix: This is a matrix whose number of rows is equal to the number of columns.
e.g. $A = \begin{pmatrix} 1 & 9 & 7 \\ 5 & 6 & 0 \\ 4 & 6 & 2 \end{pmatrix}$. This is a $3 \times 3$ square matrix.
(b) Diagonal Matrix: This is a square matrix in which all of its off-diagonal elements are zero; at least one element on the main diagonal must be non-zero. It is denoted by $D$. The elements of $D$ can be obtained from the following expression:
$$d_{ij} = \begin{cases} d_{ii} & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$
e.g. $D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 7 \end{pmatrix}$ or $D = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$
(c) Identity Matrix: This is a diagonal matrix whose elements on the main diagonal are all 1. It is denoted by capital $I$.
e.g. $I = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
(d) Symmetric Matrix: This is a square matrix whose transpose is equal to the matrix itself, i.e. a matrix $A$ is said to be symmetric iff $A^T = A$.
e.g. $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & 1 \\ 3 & 1 & 4 \end{pmatrix} = A^T$
(e) Zero Matrix: This is a matrix of any order such that all of its elements are zero. It is denoted by capital $O$.
e.g. $O = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$
(f) Strictly diagonally dominant: The $n \times n$ matrix $A$ is said to be strictly diagonally dominant if
$$|a_{ii}| > \sum_{\substack{j=1 \\ j \ne i}}^{n} |a_{ij}|, \qquad i = 1, \ldots, n.$$
e.g. The matrix $A = \begin{pmatrix} 7 & 2 & 0 \\ 1 & 3 & 1 \\ 4 & 3 & 8 \end{pmatrix}$ is strictly diagonally dominant since
$$|7| = 7 > |2| + |0| = 2, \qquad |3| = 3 > |1| + |1| = 2, \qquad |8| = 8 > |4| + |3| = 7.$$
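The row-by-row test above is easy to automate. A minimal Python sketch (the function name is my own) checks the definition for the example matrix:

```python
def is_strictly_diagonally_dominant(A):
    # |a_ii| > sum of |a_ij| over j != i, for every row i
    n = len(A)
    return all(abs(A[i][i]) > sum(abs(A[i][j]) for j in range(n) if j != i)
               for i in range(n))

A = [[7, 2, 0], [1, 3, 1], [4, 3, 8]]
print(is_strictly_diagonally_dominant(A))            # True
print(is_strictly_diagonally_dominant([[1, 2], [3, 4]]))  # False: |1| is not > |2|
```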
(g) Positive definite: A symmetric $n \times n$ matrix $A$ is called positive definite if $x^T A x > 0$ for every non-zero vector $x$.
If $A$ is positive definite, then:
(a) $A$ is non-singular
(b) $a_{ii} > 0$ for each $i = 1, \ldots, n$
(c) $\max_{1 \le k, j \le n} |a_{kj}| \le \max_{1 \le i \le n} |a_{ii}|$
(d) $(a_{ij})^2 < a_{ii} a_{jj}$ for each $i, j = 1, \ldots, n$ with $i \ne j$
(h) Tridiagonal matrix: A tridiagonal matrix is one whose non-zero entries all lie on the three central diagonals (the main diagonal and the diagonals immediately above and below it); all other entries are zero.
Consider the following matrices:
$$A = \begin{pmatrix} 4 & 3 & 0 & 0 & 0 \\ 1 & 2 & 2 & 0 & 0 \\ 0 & 2 & 4 & 1 & 0 \\ 0 & 0 & 1 & 7 & 3 \\ 0 & 0 & 0 & 2 & 5 \end{pmatrix}, \qquad B = \begin{pmatrix} 4 & 3 & 0 & 0 & 0 \\ 1 & 2 & 2 & 0 & 0 \\ 0 & 2 & 4 & 1 & 0 \\ 0 & 0 & 1 & 7 & 3 \\ 0 & 5 & 1 & 2 & 5 \end{pmatrix}$$
$A$ is tridiagonal; $B$ is not, because its last row has non-zero entries outside the three central diagonals.
The norm of a matrix $A$ is defined in terms of vector norms by
$$\|A\| = \max_{x \ne 0} \frac{\|Ax\|}{\|x\|}.$$
Two commonly used matrix norms are $\|\cdot\|_1$ and $\|\cdot\|_\infty$, where
$$\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{m} |a_{ij}| = \text{maximum absolute column sum},$$
$$\|A\|_\infty = \max_{1 \le i \le m} \sum_{j=1}^{n} |a_{ij}| = \text{maximum absolute row sum}.$$
Example 1.5 Given $A = \begin{pmatrix} 3 & 6 & 1 \\ 4 & 5 & 3 \\ 0 & 2 & 8 \end{pmatrix}$, find (i) $\|A\|_\infty$ (ii) $\|A\|_1$.
Solution
(i) $\|A\|_\infty = \max\{|3|+|6|+|1|,\ |4|+|5|+|3|,\ |0|+|2|+|8|\} = \max\{10, 12, 10\} = 12$
(ii) $\|A\|_1 = \max\{|3|+|4|+|0|,\ |6|+|5|+|2|,\ |1|+|3|+|8|\} = \max\{7, 13, 12\} = 13$
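The row-sum and column-sum formulas can be sketched directly in Python (function names are my own); applied to the matrix of Example 1.5 they reproduce the values above:

```python
def matrix_norm_1(A):
    # maximum absolute column sum
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

def matrix_norm_inf(A):
    # maximum absolute row sum
    return max(sum(abs(a) for a in row) for row in A)

A = [[3, 6, 1], [4, 5, 3], [0, 2, 8]]
print(matrix_norm_inf(A))  # 12
print(matrix_norm_1(A))    # 13
```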
Many mathematical and physical problems can be solved using matrix methods, and eigenvalues and eigenvectors are key components of the solutions to these problems. To find the eigenvalues of a given matrix, we simply solve the characteristic equation $\det(A - \lambda I) = 0$, in which the only unknown is $\lambda$.
Definition 1.5 An eigenvector corresponding to an eigenvalue $\lambda$ is any non-zero vector $x$ such that
$$Ax = \lambda x \quad \text{or} \quad (A - \lambda I)x = 0.$$
Example 1.6
Find all eigenvalues and eigenvectors of $A$, given that $A = \begin{pmatrix} 4 & 1 \\ -2 & 1 \end{pmatrix}$.
Solution
To find the eigenvalues we solve the characteristic equation $\det(A - \lambda I) = 0$:
$$\det(A - \lambda I) = \det\left[\begin{pmatrix} 4 & 1 \\ -2 & 1 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\right] = \begin{vmatrix} 4 - \lambda & 1 \\ -2 & 1 - \lambda \end{vmatrix} = 0$$
$$(4 - \lambda)(1 - \lambda) + 2 = 0$$
$$\lambda^2 - 5\lambda + 6 = 0$$
$$(\lambda - 2)(\lambda - 3) = 0$$
$$\lambda = 2 \quad \text{or} \quad \lambda = 3$$
Thus the eigenvalues of the matrix $A$ are 2 and 3.
For $\lambda = 2$:
$$(A - 2I)x = \begin{pmatrix} 4 - 2 & 1 \\ -2 & 1 - 2 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ -2 & -1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 0,$$
which gives $x_2 = -2x_1$; taking $x_1 = 1$ gives $x = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$.
Similarly, for $\lambda = 3$, $(A - 3I)x = 0$ gives $x_2 = -x_1$, so $x = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$.
Thus the eigenvectors corresponding to the eigenvalues 2 and 3 are $\begin{pmatrix} 1 \\ -2 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ respectively.
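For a $2 \times 2$ matrix the characteristic equation is $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0$, so the eigenvalues follow from the quadratic formula. A minimal sketch (function name is my own, and it assumes real eigenvalues) applied to the matrix of Example 1.6:

```python
import math

def eig_2x2(A):
    # characteristic equation: lambda^2 - tr(A)*lambda + det(A) = 0
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr ** 2 - 4 * det)  # assumes real eigenvalues
    return (tr - disc) / 2, (tr + disc) / 2

A = [[4, 1], [-2, 1]]
print(eig_2x2(A))  # (2.0, 3.0)
```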
A matrix $A$ is said to be convergent if $\lim_{k \to \infty} (A^k)_{ij} = 0$ for all $i, j$.
Example 1.7 Let $A = \begin{pmatrix} \tfrac{1}{2} & 0 \\ \tfrac{1}{4} & \tfrac{1}{2} \end{pmatrix}$. Then
$$A^2 = \begin{pmatrix} \tfrac{1}{4} & 0 \\ \tfrac{2}{8} & \tfrac{1}{4} \end{pmatrix}, \qquad A^3 = \begin{pmatrix} \tfrac{1}{8} & 0 \\ \tfrac{3}{16} & \tfrac{1}{8} \end{pmatrix}, \qquad \text{and in general} \qquad A^k = \begin{pmatrix} \left(\tfrac{1}{2}\right)^k & 0 \\ \tfrac{k}{2^{k+1}} & \left(\tfrac{1}{2}\right)^k \end{pmatrix}.$$
Since $\lim_{k \to \infty} \left(\tfrac{1}{2}\right)^k = 0$ and $\lim_{k \to \infty} \tfrac{k}{2^{k+1}} = 0$, we conclude that $A$ is convergent.
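The limiting behaviour of $A^k$ can also be checked numerically. The sketch below (plain Python; the helper name `mat_mul` is my own) raises the matrix above to a high power and confirms that every entry tends to zero:

```python
def mat_mul(A, B):
    # product of two square matrices of the same size
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.5, 0.0], [0.25, 0.5]]
Ak = A
for _ in range(49):  # compute A^50
    Ak = mat_mul(Ak, A)
print(max(abs(e) for row in Ak for e in row) < 1e-9)  # True: all entries -> 0
```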
1.5 Iterative Methods
These are methods that use numerical techniques to approximate the solution of a system of linear equations. A system $Ax = b$ can be rewritten in the form $x = Tx + c$ for some matrix $T$ and vectors $x$ and $c$.
We are going to discuss three iterative methods: (1) Jacobi, (2) Gauss-Seidel and (3) SOR.
Example 1.8
Use the Jacobi method to approximate the solution of the system
$$\begin{aligned} x_1 - x_2 + 2x_3 - x_4 &= 8 \\ 2x_1 + 2x_2 - 3x_3 + 3x_4 &= 20 \\ x_1 + x_2 + x_3 &= 2 \\ -x_1 - x_2 + 4x_3 + 3x_4 &= 4 \end{aligned}$$
Perform only three iterations, given the initial solution $x^{(0)} = (2, 2, 2, 2)$.
Solution
The system is first written in an appropriate form by solving the $i$-th equation for $x_i$:
$$\begin{aligned} x_1^{(k)} &= 8 + x_2^{(k-1)} - 2x_3^{(k-1)} + x_4^{(k-1)} \\ x_2^{(k)} &= \tfrac{1}{2}\left(20 - 2x_1^{(k-1)} + 3x_3^{(k-1)} - 3x_4^{(k-1)}\right) \\ x_3^{(k)} &= 2 - x_1^{(k-1)} - x_2^{(k-1)} \\ x_4^{(k)} &= \tfrac{1}{3}\left(4 + x_1^{(k-1)} + x_2^{(k-1)} - 4x_3^{(k-1)}\right) \end{aligned}$$
Now, when k = 1 (first iteration):
$$x_1^{(1)} = 8 + x_2^{(0)} - 2x_3^{(0)} + x_4^{(0)} = 8 + 2 - 2(2) + 2 = 8.000$$
$$x_2^{(1)} = 0.5\left(20 - 2x_1^{(0)} + 3x_3^{(0)} - 3x_4^{(0)}\right) = 0.5\left(20 - 2(2) + 3(2) - 3(2)\right) = 8.000$$
$$x_3^{(1)} = 2 - x_1^{(0)} - x_2^{(0)} = 2 - 2 - 2 = -2.000$$
$$x_4^{(1)} = \tfrac{1}{3}\left(4 + x_1^{(0)} + x_2^{(0)} - 4x_3^{(0)}\right) = \tfrac{1}{3}\left(4 + 2 + 2 - 4(2)\right) = 0.000$$
The first iteration gives $x^{(1)} = (8.000,\ 8.000,\ -2.000,\ 0.000)$.
When k = 2 (second iteration), the same formulas give
$$x_1^{(2)} = 8 + 8 - 2(-2) + 0 = 20.000, \qquad x_2^{(2)} = 0.5\left(20 - 2(8) + 3(-2) - 3(0)\right) = -1.000,$$
$$x_3^{(2)} = 2 - 8 - 8 = -14.000, \qquad x_4^{(2)} = \tfrac{1}{3}\left(4 + 8 + 8 - 4(-2)\right) = \tfrac{1}{3}(28) = 9.333,$$
so $x^{(2)} = (20.000,\ -1.000,\ -14.000,\ 9.333)$. The third iteration likewise gives $x^{(3)} = (44.333,\ -45.000,\ -17.000,\ 26.333)$.
Note that the iterates grow rather than settle down: the coefficient matrix here is not strictly diagonally dominant, so the Jacobi method is not guaranteed to converge for this system.
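The Jacobi sweep for Example 1.8 can be sketched in a few lines of Python (the function name is my own). Every component of the new iterate uses only values from the previous iterate, which is the defining feature of the method:

```python
def jacobi_step(x):
    # one Jacobi sweep for the system of Example 1.8
    x1, x2, x3, x4 = x
    return [8 + x2 - 2 * x3 + x4,
            (20 - 2 * x1 + 3 * x3 - 3 * x4) / 2,
            2 - x1 - x2,
            (4 + x1 + x2 - 4 * x3) / 3]

x = [2, 2, 2, 2]
for k in range(3):
    x = jacobi_step(x)
    print(k + 1, x)  # iteration 1 gives [8, 8.0, -2, 0.0]
```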
Example 1.9
Use Gauss-Seidel to find the approximate solution of the following system. Perform only two iterations; use $x^{(0)} = (1, 1, 1)$.
$$\begin{aligned} 3x_1 - 2x_2 + x_3 &= 7.5 \\ 4x_1 + 3x_2 &= 14.5 \\ 2x_1 - x_2 + 3x_3 &= 12.5 \end{aligned}$$
Solution
Firstly we put our equations in an appropriate format; note that Gauss-Seidel uses each new value as soon as it becomes available:
$$\begin{aligned} x_1^{(k)} &= \tfrac{1}{3}\left(7.5 + 2x_2^{(k-1)} - x_3^{(k-1)}\right) \\ x_2^{(k)} &= \tfrac{1}{3}\left(14.5 - 4x_1^{(k)}\right) \\ x_3^{(k)} &= \tfrac{1}{3}\left(12.5 - 2x_1^{(k)} + x_2^{(k)}\right) \end{aligned}$$
Now, when k = 1, with $x^{(0)} = (1, 1, 1)$ we have
$$x_1^{(1)} = \tfrac{1}{3}\left(7.5 + 2x_2^{(0)} - x_3^{(0)}\right) = \tfrac{1}{3}\left(7.5 + 2(1) - 1\right) = \tfrac{8.5}{3} = 2.833$$
$$x_2^{(1)} = \tfrac{1}{3}\left(14.5 - 4x_1^{(1)}\right) = \tfrac{1}{3}\left(14.5 - 4(2.833)\right) = \tfrac{3.168}{3} = 1.056$$
$$x_3^{(1)} = \tfrac{1}{3}\left(12.5 - 2x_1^{(1)} + x_2^{(1)}\right) = \tfrac{1}{3}\left(12.5 - 2(2.833) + 1.056\right) = \tfrac{7.89}{3} = 2.63$$
When k = 2:
$$x_1^{(2)} = \tfrac{1}{3}\left(7.5 + 2x_2^{(1)} - x_3^{(1)}\right) = \tfrac{1}{3}\left(7.5 + 2(1.056) - 2.63\right) = \tfrac{6.982}{3} = 2.327$$
$$x_2^{(2)} = \tfrac{1}{3}\left(14.5 - 4x_1^{(2)}\right) = \tfrac{1}{3}\left(14.5 - 4(2.327)\right) = \tfrac{5.192}{3} = 1.731$$
$$x_3^{(2)} = \tfrac{1}{3}\left(12.5 - 2x_1^{(2)} + x_2^{(2)}\right) = \tfrac{1}{3}\left(12.5 - 2(2.327) + 1.731\right) = \tfrac{9.577}{3} = 3.192$$
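A Gauss-Seidel sweep for Example 1.9 can be sketched as follows (the function name is my own). In contrast to Jacobi, each component immediately uses the freshly updated values. The printed iterates agree with the worked example to about three decimals; small differences arise because the notes round intermediate results:

```python
def gauss_seidel_step(x):
    # one Gauss-Seidel sweep for the system of Example 1.9
    x1, x2, x3 = x
    x1 = (7.5 + 2 * x2 - x3) / 3       # uses old x2, x3
    x2 = (14.5 - 4 * x1) / 3           # uses the new x1
    x3 = (12.5 - 2 * x1 + x2) / 3      # uses the new x1 and x2
    return [x1, x2, x3]

x = [1, 1, 1]
for k in range(2):
    x = gauss_seidel_step(x)
    print(k + 1, [round(v, 3) for v in x])
```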
If $\rho(T) < 1$, then $(I - T)^{-1}$ exists and
$$(I - T)^{-1} = I + T + T^2 + \cdots = \sum_{s=0}^{\infty} T^s.$$
Theorem 1.3 For any $x^{(0)} \in \mathbb{R}^n$, the sequence $\{x^{(k)}\}_{k \ge 0}$ defined by
$$x^{(k)} = Tx^{(k-1)} + c, \qquad k \ge 1,$$
converges to the unique solution of $x = Tx + c$ if and only if $\rho(T) < 1$.
Example 1.10
Repeat Example 1.9 by applying the SOR method with $\omega = 1.25$.
Solution
The system will be written in an appropriate format: each SOR update blends the old value with the corresponding Gauss-Seidel update,
$$\begin{aligned} x_1^{(k)} &= (1 - \omega)x_1^{(k-1)} + \tfrac{\omega}{3}\left(7.5 + 2x_2^{(k-1)} - x_3^{(k-1)}\right) \\ x_2^{(k)} &= (1 - \omega)x_2^{(k-1)} + \tfrac{\omega}{3}\left(14.5 - 4x_1^{(k)}\right) \\ x_3^{(k)} &= (1 - \omega)x_3^{(k-1)} + \tfrac{\omega}{3}\left(12.5 - 2x_1^{(k)} + x_2^{(k)}\right) \end{aligned}$$
When k = 1, with $x^{(0)} = (1, 1, 1)$:
$$x_1^{(1)} = -0.25x_1^{(0)} + \tfrac{1.25}{3}\left(7.5 + 2x_2^{(0)} - x_3^{(0)}\right) = -0.25(1) + \tfrac{1.25}{3}\left(7.5 + 2(1) - 1\right) = -0.25 + 3.542 = 3.292$$
$$x_2^{(1)} = -0.25x_2^{(0)} + \tfrac{1.25}{3}\left(14.5 - 4x_1^{(1)}\right) = -0.25(1) + \tfrac{1.25}{3}\left(14.5 - 4(3.292)\right) = -0.25 + 0.555 = 0.305$$
$$x_3^{(1)} = -0.25x_3^{(0)} + \tfrac{1.25}{3}\left(12.5 - 2x_1^{(1)} + x_2^{(1)}\right) = -0.25(1) + \tfrac{1.25}{3}\left(12.5 - 2(3.292) + 0.305\right) = -0.25 + 2.592 = 2.342$$
Thus $x^{(1)} = (3.292,\ 0.305,\ 2.342)$.
When k = 2, we have
$$x_1^{(2)} = -0.25x_1^{(1)} + \tfrac{1.25}{3}\left(7.5 + 2x_2^{(1)} - x_3^{(1)}\right) = -0.25(3.292) + \tfrac{1.25}{3}\left(7.5 + 2(0.305) - 2.342\right) = 1.580$$
and continuing in the same way, $x_2^{(2)} = 3.332$ and $x_3^{(2)} = 4.694$, so $x^{(2)} = (1.580,\ 3.332,\ 4.694)$.
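The SOR sweep for Example 1.10 differs from the Gauss-Seidel sweep only in the relaxation weight $\omega$. A minimal sketch (function name is my own; with $\omega = 1$ it reduces exactly to Gauss-Seidel):

```python
def sor_step(x, w=1.25):
    # one SOR sweep for the system of Example 1.9:
    # blend the old value with the Gauss-Seidel update using weight w
    x1, x2, x3 = x
    x1 = (1 - w) * x1 + (w / 3) * (7.5 + 2 * x2 - x3)
    x2 = (1 - w) * x2 + (w / 3) * (14.5 - 4 * x1)
    x3 = (1 - w) * x3 + (w / 3) * (12.5 - 2 * x1 + x2)
    return [x1, x2, x3]

x = sor_step([1, 1, 1])
print([round(v, 3) for v in x])
# [3.292, 0.306, 2.343]; the notes round intermediate values and get (3.292, 0.305, 2.342)
```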
Theorem 1.5 If $A$ is a positive definite and tridiagonal matrix, then $\rho(T_g) = [\rho(T_J)]^2 < 1$, and the optimal choice of the relaxation parameter $\omega$ for the SOR method is
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}}.$$
With this choice of $\omega$, $\rho(T_\omega) = \omega - 1$.
Example 1.11
Given the positive definite and tridiagonal matrix
$$A = \begin{pmatrix} 4 & 3 & 0 \\ 3 & 4 & -1 \\ 0 & -1 & 4 \end{pmatrix},$$
obtain the optimal relaxation parameter $\omega$, and hence find $\rho(T_g)$ and $\rho(T_\omega)$.
Solution
Writing $A = D - L - U$, we have
$$D = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 4 \end{pmatrix}, \qquad D^{-1} = \begin{pmatrix} \tfrac{1}{4} & 0 & 0 \\ 0 & \tfrac{1}{4} & 0 \\ 0 & 0 & \tfrac{1}{4} \end{pmatrix},$$
$$L = \begin{pmatrix} 0 & 0 & 0 \\ -3 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \quad \text{and} \quad U = \begin{pmatrix} 0 & -3 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}.$$
Then
$$T_J = D^{-1}(L + U) = \begin{pmatrix} 0 & -0.75 & 0 \\ -0.75 & 0 & 0.25 \\ 0 & 0.25 & 0 \end{pmatrix},$$
$$T_J - \lambda I = \begin{pmatrix} -\lambda & -0.75 & 0 \\ -0.75 & -\lambda & 0.25 \\ 0 & 0.25 & -\lambda \end{pmatrix}.$$
So,
$$\det(T_J - \lambda I) = -\lambda\left(\lambda^2 - 0.625\right).$$
The solutions to $\det(T_J - \lambda I) = 0$ are $\lambda_1 = 0$, $\lambda_2 = \sqrt{0.625}$, $\lambda_3 = -\sqrt{0.625}$.
Thus,
$$\rho(T_J) = \sqrt{0.625} = 0.7906.$$
Then, we have
$$\omega = \frac{2}{1 + \sqrt{1 - [\rho(T_J)]^2}} = \frac{2}{1 + \sqrt{1 - 0.625}} = 1.24.$$
Hence,
$$\rho(T_g) = [\rho(T_J)]^2 = 0.625 \quad \text{and} \quad \rho(T_\omega) = \omega - 1 = 1.24 - 1 = 0.24.$$
The example above shows that $\rho(T_\omega) < \rho(T_g) < \rho(T_J) < 1$, which implies that SOR converges faster than Gauss-Seidel, and Gauss-Seidel converges faster than Jacobi. Therefore, of the three, SOR converges fastest.
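The arithmetic in Example 1.11 can be checked with a few lines of Python. The sketch below starts from $\rho(T_J) = \sqrt{0.625}$ as derived in the example and evaluates the formulas of Theorem 1.5 (variable names are my own):

```python
import math

rho_TJ = math.sqrt(0.625)  # spectral radius of T_J from Example 1.11

# optimal relaxation parameter from Theorem 1.5
omega = 2 / (1 + math.sqrt(1 - rho_TJ ** 2))

rho_Tg = rho_TJ ** 2  # spectral radius of the Gauss-Seidel iteration matrix
rho_Tw = omega - 1    # spectral radius of the SOR iteration matrix

print(round(rho_TJ, 4), round(omega, 4), round(rho_Tg, 4), round(rho_Tw, 4))
# 0.7906 1.2404 0.625 0.2404
```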