CSC 431 Lecture Note


Prof. Jelili O. Oyelade

Computational Science and Numerical Methods

1
Solution of Nonlinear Equations
( Root Finding Problems )

▪ Definitions
▪ Classification of Methods
  • Analytical Solutions
  • Graphical Methods
  • Numerical Methods
    – Bracketing Methods (bisection, regula-falsi)
    – Open Methods (secant, Newton-Raphson, fixed point iteration)
▪ Convergence Notations
2
Definitions
• Computational science is multidisciplinary, involving
researchers in the physical sciences as well as computer scientists,
software engineers, and even people in the social sciences. The
methods usually involve numerical simulations.

Nonlinear Equation Solvers

Bracketing Methods          Graphical Methods          Open Methods
• Bisection                                            • Newton-Raphson
• False Position                                       • Secant
  (Regula-Falsi)                                       • Fixed point iteration

All of these methods are iterative.
11
Roots of Equations
Thermodynamics:
van der Waals equation; v = V/n (= volume / # moles).
Find the molecular volume v such that

    f(v) = ( p + a/v² )( v − b ) − RT = 0

p = pressure, T = temperature, R = universal gas constant,
a, b = empirical constants
12
Roots of Equations
Civil Engineering:
Find the horizontal component of tension, H, in a
cable that passes through (0, y0) and (x, y):

    f(H) = (H/w) [ cosh(wx/H) − 1 ] + y0 − y = 0

w = weight per unit length of cable

13
Roots of Equations

Electrical Engineering:
Find the resistance, R, of a circuit such that the
charge reaches q at a specified time t:

    f(R) = e^(−Rt/(2L)) cos( t · sqrt( 1/(LC) − (R/(2L))² ) ) − q/q0 = 0

L = inductance, C = capacitance, q0 = initial charge

14
Roots of Equations
Mechanical Engineering:

Find the value of stiffness k of a vibrating mechanical system
such that the displacement x(t) becomes zero at t = 0.5 s. The
initial displacement is x0 and the initial velocity is zero. The mass
m and damping c are known, and λ = c/(2m).

    x(t) = x0 e^(−λt) [ cos(μt) + (λ/μ) sin(μt) ] = 0

in which

    μ = sqrt( k/m − c²/(4m²) )
15
Root Finding Problems
Many problems in Science and Engineering are
expressed as:

Given a continuous function f(x),
find the value r such that f(r) = 0.

These problems are called root finding problems.

16
Roots of Equations
A number r that satisfies an equation is called a root of
the equation.

The equation:  x⁴ − 3x³ − 7x² + 15x = −18

has four roots: −2, 3, 3, and −1,

i.e.,  x⁴ − 3x³ − 7x² + 15x + 18 = (x + 2)(x − 3)²(x + 1).

The equation has two simple roots (−1 and −2)
and a repeated root (3) with multiplicity 2.

17
Zeros of a Function
Let f(x) be a real-valued function of a real variable. Any number
r for which f(r)=0 is called a zero of the function.

Examples:
2 and 3 are zeros of the function f(x) = (x-2)(x-3).

18
Graphical Interpretation of Zeros

▪ The real zeros of a function
f(x) are the values of x at
which the graph of the
function crosses (or touches)
the x-axis.

[Figure: graph of f(x) with its real zeros marked on the x-axis]

19
Simple Zeros

f(x) = (x + 1)(x − 2) = x² − x − 2

has two simple zeros (one at x = 2 and one at x = −1).


20
Multiple Zeros

f(x) = (x − 1)² = x² − 2x + 1

has a double zero (a zero with multiplicity 2) at x = 1.


21
Multiple Zeros
f(x) = x³

has a zero with multiplicity 3 at x = 0.


22
Facts
▪ Any nth order polynomial has exactly n zeros
(counting real and complex zeros with their
multiplicities).
▪ Any polynomial of odd degree with real coefficients has at least one
real zero.
▪ If a function has a zero at x=r with multiplicity m
then the function and its first (m-1) derivatives are
zero at x=r and the mth derivative at r is not zero.

23
Roots of Equations & Zeros of Function
Given the equation:
    x⁴ − 3x³ − 7x² + 15x = −18
Move all terms to one side of the equation:
    x⁴ − 3x³ − 7x² + 15x + 18 = 0
Define f(x) as:
    f(x) = x⁴ − 3x³ − 7x² + 15x + 18

The zeros of f(x) are the same as the roots of the equation f(x) = 0
(which are −2, 3, 3, and −1).

24
Solution Methods
Several ways to solve nonlinear equations are possible:
▪ Analytical Solutions
▪ Possible for special equations only
▪ Graphical Solutions
▪ Useful for providing initial guesses for other methods
▪ Numerical Solutions
▪ Open methods
▪ Bracketing methods

25
Analytical Methods
Analytical Solutions are available for special equations
only.

Analytical solution of:  a x² + b x + c = 0

    roots = ( −b ± sqrt(b² − 4ac) ) / (2a)

No analytical solution is available for:  x − e^(−x) = 0

26
Graphical Methods
 Graphical methods are useful to provide an initial guess
to be used by other methods.

Example: solve x = e^(−x).
Plot y = x and y = e^(−x); the root is the x-value of their intersection.
The root ∈ [0, 1]; root ≈ 0.6.

[Figure: graphs of y = x and y = e^(−x) crossing near x ≈ 0.6]

27
Numerical Methods
Many methods are available to solve nonlinear equations:
▪ Bisection Method → bracketing method
▪ Newton's Method → open method
▪ Secant Method → open method
▪ False Position Method → bracketing method
▪ Fixed point iteration → open method
▪ Muller's Method
▪ Bairstow's Method
▪ …

28
Bracketing Methods
▪ In bracketing methods, the method starts with an
interval that contains the root, and a procedure is used
to obtain a smaller interval containing the root.

▪ Examples of bracketing methods:
  • Bisection method
  • False position method (Regula-Falsi)

29
Open Methods
▪ In the open methods, the method starts with one or
more initial guess points. In each iteration, a new
guess of the root is obtained.
▪ Open methods are usually more efficient than
bracketing methods.
▪ However, they may not converge to a root.

30
Convergence Notation

A sequence x1, x2, ..., xn, ... is said to converge to x if

for every ε > 0 there exists N such that:

    |xn − x| < ε   for all n ≥ N

31
Convergence Notation

Let x1, x2, ..., converge to x.

Linear convergence:       |x_{n+1} − x| / |x_n − x|   ≤ C
Quadratic convergence:    |x_{n+1} − x| / |x_n − x|²  ≤ C
Convergence of order p:   |x_{n+1} − x| / |x_n − x|^p ≤ C

32
Speed of Convergence
▪ We can compare different methods in terms of their
convergence rate.
▪ Quadratic convergence is faster than linear
convergence.
▪ A method with convergence order q converges faster
than a method with convergence order p if q > p.
▪ Methods of convergence order p > 1 are said to have
superlinear convergence.

33
Bisection Method
The Bisection Algorithm
Convergence Analysis of Bisection Method
Examples

34
Introduction
▪ The Bisection method is one of the simplest methods to find a zero
of a nonlinear function.
▪ It is also called the interval halving method.
▪ To use the Bisection method, one needs an initial interval that is known
to contain a zero of the function.
▪ The method systematically reduces the interval by dividing it
into two equal parts, performing a simple test and, based on
the result of the test, throwing away half of the interval.
▪ The procedure is repeated until the desired interval size is obtained.

35
Intermediate Value Theorem
▪ Let f(x) be defined on the interval [a,b].
▪ Intermediate value theorem:
if a function is continuous and f(a) and
f(b) have different signs, then the
function has at least one zero in the
interval [a,b].

[Figure: continuous f on [a,b] with f(a) and f(b) of opposite signs]

36
Examples
▪ If f(a) and f(b) have the same sign,
the function may have an even
number of real zeros or no real
zeros in the interval [a, b].

▪ Bisection method can not be used
in these cases.

[Figures: a function with four real zeros in [a, b];
a function with no real zeros in [a, b]]

37
Two More Examples
▪ If f(a) and f(b) have
different signs, the
function has at least one
real zero.

▪ Bisection method can be
used to find one of the
zeros.

[Figures: a function with one real zero in [a, b];
a function with three real zeros in [a, b]]

38
Bisection Method
▪ If the function is continuous on [a,b] and f(a) and f(b)
have different signs, Bisection method obtains a new
interval that is half of the current interval, and the signs
of the function at the end points of the new interval are
different.

▪ This allows us to repeat the Bisection procedure to


further reduce the size of the interval.

39
Bisection Method
Assumptions:
Given an interval [a,b]
f(x) is continuous on [a,b]
f(a) and f(b) have opposite signs.

These assumptions ensure the existence of at least one zero in the
interval [a,b], and the bisection method can then be used to obtain a
smaller interval that contains the zero.

40
Bisection Algorithm
Assumptions:
▪ f(x) is continuous on [a,b]
▪ f(a) f(b) < 0

Algorithm:
Loop
1. Compute the mid point c = (a+b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0 then new interval [a, c]
   If f(a) f(c) > 0 then new interval [c, b]
End loop
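The loop above can be sketched in Python (a minimal sketch; the helper name `bisect`, the tolerance, and the stopping rule are illustrative assumptions, not part of the notes):

```python
def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: assumes f is continuous on [a, b] with f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # 1. compute the mid point
        if f(a) * f(c) < 0:      # 3. root lies in [a, c]
            b = c
        else:                    #    root lies in [c, b]
            a = c
        if (b - a) / 2 < tol:    # interval small enough
            break
    return (a + b) / 2

# Root of x^3 - 3x + 1 on [0, 1] (the interval used in a later example)
root = bisect(lambda x: x**3 - 3*x + 1, 0.0, 1.0)
```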

41
Bisection Method

[Figure: successive bisection iterations; endpoints a0, a1, a2 approach the root inside the initial interval [a0, b0]]
42
Example
[Figure: signs of f at a, c, b over three bisection steps: (+ + −), (+ − −), (+ + −)]

43
Flow Chart of Bisection Method
Start: Given a, b and ε

u = f(a); v = f(b)

Repeat:
    c = (a+b)/2; w = f(c)
    if (b−a)/2 < ε: Stop
    if u·w < 0:  b = c; v = w
    else:        a = c; u = w

44
Example
Can you use the Bisection method to find a zero of
f(x) = x³ − 3x + 1 in the interval [0, 2]?

Answer:
f(x) is continuous on [0, 2],
but f(0) · f(2) = (1)(3) = 3 > 0.
→ Assumptions are not satisfied
→ Bisection method can not be used

45
Example
Can you use the Bisection method to find a zero of
f(x) = x³ − 3x + 1 in the interval [0, 1]?

Answer:
f(x) is continuous on [0, 1],
and f(0) · f(1) = (1)(−1) = −1 < 0.
→ Assumptions are satisfied
→ Bisection method can be used

46
Best Estimate and Error Level
Bisection method obtains an interval that is
guaranteed to contain a zero of the function.

Questions:
▪ What is the best estimate of the zero of f(x)?
▪ What is the error level in the obtained estimate?

47
Best Estimate and Error Level
The best estimate of the zero of the function f(x)
after the first iteration of the Bisection method is the
mid point of the initial interval:

    Estimate of the zero:  r = (b + a) / 2

    Error ≤ (b − a) / 2

48
Stopping Criteria
Two common stopping criteria

1. Stop after a fixed number of iterations


2. Stop when the absolute error is less than a specified
value

How are these criteria related?

49
Stopping Criteria

cn: the midpoint of the interval at the nth iteration
(cn is usually used as the estimate of the root).
r: the zero of the function.

After n iterations:

    error = |r − cn| ≤ Ea^n = (b − a)/2ⁿ = Δx⁰/2ⁿ

50
Convergence Analysis

Given f ( x), a, b, and 


How many iterations are needed such that : x - r  
where r is the zero of f(x) and x is the
bisection estimate (i.e., x = ck ) ?

log(b − a ) − log( )
n
log( 2)
51
Convergence Analysis – Alternative Form

    n ≥ ( log(b − a) − log(ε) ) / log(2)

equivalently:

    n ≥ log2( width of initial interval / desired error ) = log2( (b − a)/ε )
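This bound can be checked directly with a short Python sketch (the helper name is illustrative):

```python
import math

def bisection_iterations(a, b, eps):
    """Smallest n with (b - a)/2**n <= eps, i.e. n >= log2((b - a)/eps)."""
    return math.ceil(math.log2((b - a) / eps))

# Values from the example that follows: a = 6, b = 7, eps = 0.0005
n = bisection_iterations(6, 7, 0.0005)   # log2(2000) ~ 10.97, so n = 11
```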

52
Example
a = 6, b = 7, ε = 0.0005.
How many iterations are needed such that |x − r| ≤ ε?

    n ≥ ( log(b − a) − log(ε) ) / log(2)
      = ( log(1) − log(0.0005) ) / log(2) = 10.9658

    → n ≥ 11

53
Example
▪ Use the Bisection method to find a root of the equation
x = cos(x) with absolute error < 0.02
(assume the initial interval [0.5, 0.9]).

Question 1: What is f(x)?
Question 2: Are the assumptions satisfied?
Question 3: How many iterations are needed?
Question 4: How to compute the new estimate?

54
55
Bisection Method
Initial Interval

a = 0.5        c = 0.7        b = 0.9
f(a) = −0.3776                f(b) = 0.2784
Error < 0.2

56
Bisection Method

-0.3776 -0.0648 0.2784


Error < 0.1
0.5 0.7 0.9

-0.0648 0.1033 0.2784


Error < 0.05
0.7 0.8 0.9

57
Bisection Method

-0.0648 0.0183 0.1033


Error < 0.025
0.7 0.75 0.8

-0.0648 -0.0235 0.0183


Error < .0125
0.70 0.725 0.75

58
Summary
▪ Initial interval containing the root: [0.5, 0.9]

▪ After 5 iterations:
  • Interval containing the root: [0.725, 0.75]
  • Best estimate of the root is 0.7375
  • |Error| < 0.0125

59
A Matlab Program of Bisection Method
a=.5; b=.9;
u=a-cos(a);
v=b-cos(b);
for i=1:5
    c=(a+b)/2
    fc=c-cos(c)
    if u*fc<0
        b=c; v=fc;
    else
        a=c; u=fc;
    end
end

Output:
c = 0.7000    fc = -0.0648
c = 0.8000    fc = 0.1033
c = 0.7500    fc = 0.0183
c = 0.7250    fc = -0.0235
60
Example
Find the root of:

f(x) = x³ − 3x + 1 in the interval [0, 1]

* f(x) is continuous
* f(0) = 1, f(1) = −1  →  f(a) f(b) < 0
→ The Bisection method can be used to find the root

61
Example

Iteration   a        b       c = (a+b)/2   f(c)        (b−a)/2
1           0        1       0.5           −0.375      0.5
2           0        0.5     0.25          0.266       0.25
3           0.25     0.5     0.375         −7.23E−2    0.125
4           0.25     0.375   0.3125        9.30E−2     0.0625
5           0.3125   0.375   0.34375       9.37E−3     0.03125
62
Bisection Method
Advantages
▪ Simple and easy to implement
▪ One function evaluation per iteration
▪ The size of the interval containing the zero is reduced by 50%
after each iteration
▪ The number of iterations can be determined a priori
▪ No knowledge of the derivative is needed
▪ The function does not have to be differentiable

Disadvantage
▪ Slow to converge
▪ Good intermediate approximations may be discarded

63
Bisection Method (as C function)
double Bisect(double xl, double xu, double es,
              int iter_max)
{
    double xr;      // Est. root
    double xr_old;  // Est. root in the previous step
    double ea;      // Est. relative error (%)
    double test;    // Sign test f(xl)*f(xr)
    int iter = 0;   // Keep track of # of iterations
    double fl, fr;  // Save values of f(xl) and f(xr)

    xr = xl;        // Initialize xr in order to
                    // calculate "ea". Can also be "xu".
    fl = f(xl);
    do {
        iter++;
        xr_old = xr;

        xr = (xl + xu) / 2;   // Estimate root
        fr = f(xr);

        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;

        test = fl * fr;

        if (test < 0)
            xu = xr;
        else if (test > 0) {
            xl = xr;
            fl = fr;
        }
        else
            ea = 0;   // f(xr) == 0: exact root found

    } while (ea > es && iter < iter_max);

    return xr;
}
65
Regula – Falsi Method

66
Regula Falsi Method
▪ Also known as the false-position method, or linear interpolation
method.

▪ Unlike the bisection method, which divides the search interval in
half, regula falsi interpolates f(xu) and f(xl) by a straight line, and
the intersection of this line with the x-axis is used as the new
search position.

▪ The slope of the line connecting f(xu) and f(xl) represents the
"average slope" (i.e., the value of f'(x)) of the points in [xl, xu].

67
    f(xu) / (xu − xr) = f(xl) / (xl − xr)   →   xr = xu − f(xu)(xl − xu) / ( f(xl) − f(xu) )
68
Regula Falsi Method
▪ The regula falsi method starts with two points, (a, f(a)) and (b, f(b)),
satisfying the condition that f(a) f(b) < 0.

▪ The straight line through the two points (a, f(a)), (b, f(b)) is

    y = f(a) + ( (f(b) − f(a)) / (b − a) ) (x − a)

▪ The next approximation to the zero is the value of x where the
straight line through the initial points crosses the x-axis:

    x = a − ( (b − a) / (f(b) − f(a)) ) f(a) = ( a f(b) − b f(a) ) / ( f(b) − f(a) )
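The update above can be sketched in Python (a minimal sketch; the helper name, stopping rule, and tolerance are illustrative assumptions):

```python
def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
    """False position: assumes f is continuous on [a, b] with f(a)*f(b) < 0."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x_old = x
        x = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
        fx = f(x)
        if abs(x - x_old) < tol or fx == 0:
            break
        if fa * fx < 0:                     # root lies in [a, x]
            b, fb = x, fx
        else:                               # root lies in [x, b]
            a, fa = x, fx
    return x

# Cube root of 2, as in the example that follows: f(x) = x^3 - 2 on [1, 2]
r = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0)
```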
69
Example
Finding the cube root of 2 using regula falsi, i.e. f(x) = x³ − 2:

▪ Since f(1) = −1 and f(2) = 6, we take as our
starting bounds on the zero a = 1 and b = 2.
▪ Our first approximation to the zero is

    x = b − ( (b − a)/(f(b) − f(a)) ) f(b) = 2 − ( (2 − 1)/(6 + 1) )(6)
      = 2 − 6/7 = 8/7 ≈ 1.1429

▪ We then find the value of the function:

    y = f(x) = (8/7)³ − 2 ≈ −0.5073

▪ Since f(a) and y are both negative, but y
and f(b) have opposite signs, the new interval is [8/7, 2].

70
Example (cont.)
[Figure: calculation of the cube root of 2 using regula falsi]

71
Open Methods

To find the root for f(x) = 0, we construct a
"magic formula"

    xi+1 = g(xi)

to predict the root iteratively until x
converges to a root.
However, x may diverge!

[Figures: (a) bisection method; (b) open method, diverging;
(c) open method, converging]
72
What you should know about Open Methods

▪ How to construct the magic formulae g(x)?

▪ How can we ensure convergence?

▪ What makes a method converge quickly or diverge?

▪ How fast does a method converge?

73
OPEN METHOD
Newton-Raphson Method
Assumptions
Interpretation
Examples
Convergence Analysis

74
Newton-Raphson Method
(Also known as Newton’s Method)
Given an initial guess of the root x0, Newton-Raphson method
uses information about the function and its derivative at that
point to find a better guess of the root.

Assumptions:
▪ f(x) is continuous and the first derivative is known
▪ An initial guess x0 such that f'(x0) ≠ 0 is given

75
Newton Raphson Method
- Graphical Depiction -
▪ If the initial guess at the
root is xi, a tangent
to the function at xi (with
slope f'(xi)) can be extended
down to the x-axis to
provide an estimate of the
root at xi+1.

76
Derivation of Newton’s Method
Given: xi, an initial guess of the root of f(x) = 0.
Question: How do we obtain a better estimate xi+1?
____________________________________
Taylor Theorem:  f(x + h) ≈ f(x) + f'(x) h
Find h such that f(x + h) = 0:

    h ≈ − f(x) / f'(x)

Newton−Raphson formula (a new guess of the root):

    xi+1 = xi − f(xi) / f'(xi)
77
Newton’s Method
Given f(x), f'(x), x0              C  FORTRAN PROGRAM
Assumption: f'(x0) ≠ 0                   F(X) = X**3 - 3*X**2 + 1
______________________                   FP(X) = 3*X**2 - 6*X
                                         X = 4
for i = 0 : n                            DO 10 I = 1, 5
    xi+1 = xi − f(xi)/f'(xi)                X = X - F(X)/FP(X)
end                                         PRINT *, X
                                      10 CONTINUE
                                         STOP
                                         END

78
Newton’s Method
Given f(x), f'(x), x0

Assumption: f'(x0) ≠ 0
______________________
for i = 0 : n
    xi+1 = xi − f(xi)/f'(xi)
end

% F.m
function [F] = F(X)
F = X^3 - 3*X^2 + 1

% FP.m
function [FP] = FP(X)
FP = 3*X^2 - 6*X

% MATLAB PROGRAM
X = 4
for i = 1:5
    X = X - F(X)/FP(X)
end
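The same iteration can be sketched in Python (a minimal sketch; the function and derivative mirror the listings above, and the fixed 5-iteration loop is illustrative):

```python
def newton(f, fp, x0, n_iter=5):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i)/f'(x_i); assumes f'(x_i) != 0."""
    x = x0
    for _ in range(n_iter):
        x = x - f(x) / fp(x)
    return x

# Same problem as the listings above: f(x) = x^3 - 3x^2 + 1, x0 = 4
x = newton(lambda x: x**3 - 3*x**2 + 1,
           lambda x: 3*x**2 - 6*x,
           4.0)
```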

79
Example
Find a zero of the function f(x) = x³ − 2x² + x − 3, starting at x0 = 4.

    f'(x) = 3x² − 4x + 1

Iteration 1:  x1 = x0 − f(x0)/f'(x0) = 4 − 33/33 = 3
Iteration 2:  x2 = x1 − f(x1)/f'(x1) = 3 − 9/16 = 2.4375
Iteration 3:  x3 = x2 − f(x2)/f'(x2) = 2.4375 − 2.0369/9.0742 = 2.2130

80
Example
k xk f(xk) f’(xk) xk+1 |xk+1 –xk|
(Iteration)
0 4 33 33 3 1

1 3 9 16 2.4375 0.5625

2 2.4375 2.0369 9.0742 2.2130 0.2245

3 2.2130 0.2564 6.8404 2.1756 0.0384

4 2.1756 0.0065 6.4969 2.1746 0.0010

81
Convergence Analysis
Theorem:
Let f(x), f'(x), and f''(x) be continuous near x = r,
where f(r) = 0. If f'(r) ≠ 0, then there exists δ > 0
such that, for |x0 − r| ≤ δ,

    |xk+1 − r| ≤ C |xk − r|²

with

    C = (1/2) · max_{|x−r| ≤ δ} |f''(x)| / min_{|x−r| ≤ δ} |f'(x)|

82
Convergence Analysis
Remarks:

▪ When the guess is close enough to a simple root of
the function, Newton's method is guaranteed
to converge quadratically.

▪ Quadratic convergence means that the number of
correct digits is nearly doubled at each iteration.

83
Error Analysis of Newton-Raphson Method
Using an iterative process we get xk+1 from xk and other info.

We have x0, x1, x2, …, xk+1 as the estimation for the root α.

Let δk = α – xk

84
Error Analysis of Newton-Raphson Method
By definition:
    δi   = α − xi        (1)
    δi+1 = α − xi+1      (2)

Newton-Raphson method:
    xi+1 = xi − f(xi)/f'(xi)
 →  f(xi) = f'(xi)(xi − xi+1)
 →  f(xi) + f'(xi)(−xi) = f'(xi)(−xi+1)
 →  f(xi) + f'(xi)(α − xi) = f'(xi)(α − xi+1)      (3)

85
Error Analysis of Newton-Raphson Method
Suppose α is the true value (i.e., f(α) = 0).
Using Taylor's series:

    f(α) = f(xi) + f'(xi)(α − xi) + (f''(c)/2)(α − xi)²
 →  0 = f(xi) + f'(xi)(α − xi) + (f''(c)/2)(α − xi)²
 →  0 = f'(xi)(α − xi+1) + (f''(c)/2)(α − xi)²        (from (3))
 →  0 = f'(xi) δi+1 + (f''(c)/2) δi²                  (from (1) and (2))

 →  δi+1 = ( −f''(c) / (2 f'(xi)) ) δi² ≈ ( −f''(α) / (2 f'(α)) ) δi²

(When xi and α are very close to each other, c is between xi and α.)
The iterative process is said to be of second order.
86
Problems with Newton’s Method
• If the initial guess of the root is far from
the root the method may not converge.
• Newton’s method converges linearly near
multiple zeros { f(r) = f’(r) =0 }. In such a
case, modified algorithms can be used to
regain the quadratic convergence.
87
Multiple Roots
f(x) = x³ has a zero of multiplicity three at x = 0;
f(x) = (x + 1)² has a zero of multiplicity two at x = −1.

[Figures: graphs of x³ and (x + 1)² touching the x-axis at their multiple zeros]
88
Problems with Newton’s Method
- Runaway -

[Figure: Newton iterates x0, x1 stepping away from the root]

The estimates of the root are moving away from the root.
89
Problems with Newton’s Method
- Flat Spot -

[Figure: initial guess x0 at a flat spot of f]

If f'(x) is zero, the algorithm fails.
If f'(x) is very small, then x1 will be very far from x0.
90
Problems with Newton’s Method
- Cycle -

[Figure: iterates cycling between two values: x0 = x2 = x4 and x1 = x3 = x5]

The algorithm cycles between the two values x0 and x1.

91
Secant Method
 Secant Method
 Examples
 Convergence Analysis

92
Newton’s Method (Review)
Assumptions: f(x), f'(x), x0 are available, and f'(x0) ≠ 0.

Newton's Method, new estimate:

    xi+1 = xi − f(xi) / f'(xi)

Problem:
f'(xi) is not available,
or is difficult to obtain analytically.
93
The Secant Method
    xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )

• Requires two initial
estimates x0, x1.
However, it is not a
"bracketing" method.
• The Secant Method has
the same properties as
Newton's method.
Convergence is not
guaranteed for all x0, f(x).

94
Secant Method
    f'(x) ≈ ( f(x + h) − f(x) ) / h

If xi and xi−1 are two initial points:

    f'(xi) ≈ ( f(xi) − f(xi−1) ) / ( xi − xi−1 )

    xi+1 = xi − f(xi) / [ ( f(xi) − f(xi−1) ) / ( xi − xi−1 ) ]
         = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )
95
▪ Notice that this is very similar to the false position
method in form.
▪ It still requires two initial estimates of x, but does
not require an analytical expression of the derivative.
▪ However, it doesn't bracket the root at all times: there is no
sign test.

96
Newton-Raphson:  xi+1 = xi − f(xi) / f'(xi)

Replace the derivative by a finite difference:

    f'(xi) ≈ ( f(xi) − f(xi−1) ) / ( xi − xi−1 )

Secant:  xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) ),   i = 1, 2, 3, …

97
Secant Method

Assumptions:
Two initial points xi and xi−1
such that f(xi) ≠ f(xi−1).

New estimate (Secant Method):

    xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )
98
Secant Method
Example: f(x) = x² − 2x + 0.5,
with initial points x0 = 0 and x1 = 1, and iteration

    xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )

99
Secant Method - Flowchart

Start: given x0, x1, ε; set i = 1

Repeat:
    xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )
    i = i + 1
until |xi+1 − xi| ≤ ε, then Stop
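The secant iteration can be sketched in Python (a minimal sketch; the helper name, tolerance, and iteration cap are illustrative assumptions):

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Secant: x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1          # shift the two most recent points
        x1, f1 = x2, f(x2)
    return x1

# f(x) = x^5 + x^3 + 3 with x0 = -1, x1 = -1.1 (the example that follows)
r = secant(lambda x: x**5 + x**3 + 3, -1.0, -1.1)
```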

100
Modified Secant Method
In this modified Secant method, only one initial guess is needed:

    f'(xi) ≈ ( f(xi + δ·xi) − f(xi) ) / (δ·xi)

    xi+1 = xi − f(xi) / f'(xi) = xi − δ·xi·f(xi) / ( f(xi + δ·xi) − f(xi) )

Problem: How to select δ?
If not selected properly, the method may diverge.
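A sketch in Python, assuming the perturbation δ·xi form above (the default δ, tolerance, and zero-guard are illustrative assumptions):

```python
def modified_secant(f, x0, delta=1e-6, tol=1e-10, max_iter=50):
    """One-guess secant: f'(x) ~ (f(x + delta*x) - f(x)) / (delta*x)."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta   # avoid a zero perturbation
        x_new = x - dx * f(x) / (f(x + dx) - f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Same function as the secant example: f(x) = x^5 + x^3 + 3, one guess x0 = -1
r = modified_secant(lambda x: x**5 + x**3 + 3, -1.0)
```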

101
Example
Find the roots of:

    f(x) = x⁵ + x³ + 3

Initial points: x0 = −1 and x1 = −1.1,
with error < 0.001.

[Figure: plot of f(x) = x⁵ + x³ + 3 on [−2, 2]; the curve crosses zero near x ≈ −1.1]

102
Example
x(i)       f(x(i))    x(i+1)     |x(i+1) − x(i)|
−1.0000    1.0000     −1.1000    0.1000
−1.1000    0.0585     −1.1062    0.0062
−1.1062    0.0102     −1.1052    0.0009
−1.1052    0.0001     −1.1052    0.0000

103
Convergence Analysis

▪ The rate of convergence of the Secant method is
superlinear:

    |xi+1 − r| / |xi − r|^α ≤ C,   α ≈ 1.62

r: the root; xi: the estimate of the root at the ith iteration.

▪ It is better than the Bisection method but not as good as
Newton's method.

104
OPEN METHOD

Fixed Point Iteration

105
Fixed Point Iteration
▪ Also known as one-point iteration or successive
substitution.

▪ To find the root for f(x) = 0, we reformulate f(x) = 0 so that
there is an x on one side of the equation:

    f(x) = 0  ⇔  g(x) = x

• If we can solve g(x) = x, we solve f(x) = 0.
  – x is known as the fixed point of g(x).
• We solve g(x) = x by computing

    xi+1 = g(xi),  with x0 given,

until xi+1 converges to x.
106
Fixed Point Iteration – Example
f(x) = x² + 2x − 3 = 0

    x² + 2x − 3 = 0  →  2x = 3 − x²  →  x = (3 − x²)/2

    →  xi+1 = g(xi) = (3 − xi²)/2

Reason: if x converges, i.e. xi+1 → xi, then

    xi+1 = (3 − xi²)/2  →  xi = (3 − xi²)/2  →  xi² + 2xi − 3 = 0
107
Example
Find the root of f(x) = e^(−x) − x = 0.
(Answer: α = 0.56714329)

We put xi+1 = e^(−xi):

 i    xi         εa (%)   εt (%)
 0    0                   100.0
 1    1.000000   100.0    76.3
 2    0.367879   171.8    35.1
 3    0.692201   46.9     22.1
 4    0.500473   38.3     11.8
 5    0.606244   17.4     6.89
 6    0.545396   11.2     3.83
 7    0.579612   5.90     2.20
 8    0.560115   3.48     1.24
 9    0.571143   1.93     0.705
10    0.564879   1.11     0.399
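The iteration in the table can be reproduced with a short Python sketch (the helper name, tolerance, and iteration cap are illustrative assumptions):

```python
import math

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{i+1} = g(x_i) until successive estimates agree."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = e^(-x) for f(x) = e^(-x) - x = 0, as in the table above
r = fixed_point(lambda x: math.exp(-x), 0.0)
```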

108
Fixed Point Iteration
▪ There are infinitely many ways to construct g(x) from f(x).

For example, f(x) = x² − 2x − 3 = 0  (ans: x = 3 or −1)

Case a:                   Case b:                  Case c:
x² − 2x − 3 = 0           x² − 2x − 3 = 0          x² − 2x − 3 = 0
→ x² = 2x + 3             → x(x − 2) − 3 = 0       → 2x = x² − 3
→ x = sqrt(2x + 3)        → x = 3/(x − 2)          → x = (x² − 3)/2
→ g(x) = sqrt(2x + 3)     → g(x) = 3/(x − 2)       → g(x) = (x² − 3)/2

So which one is better?

109
Case a:                  Case b:                  Case c:
xi+1 = sqrt(2xi + 3)     xi+1 = 3/(xi − 2)        xi+1 = (xi² − 3)/2

1. x0 = 4                1. x0 = 4                1. x0 = 4
2. x1 = 3.31662          2. x1 = 1.5              2. x1 = 6.5
3. x2 = 3.10375          3. x2 = -6               3. x2 = 19.625
4. x3 = 3.03439          4. x3 = -0.375           4. x3 = 191.070
5. x4 = 3.01144          5. x4 = -1.263158
6. x5 = 3.00381          6. x5 = -0.919355        Diverge!
                         7. x6 = -1.02762
Converge!                8. x7 = -0.990876
                         9. x8 = -1.00305
                         Converge, but slower

The outcome depends on |g'(x)| near the fixed point: the iteration converges when |g'(x)| < 1 there, and diverges when |g'(x)| > 1.
110
Fixed Point Iteration Impl. (as C function)
// x0: Initial guess of the root
// es: Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
double FixedPt(double x0, double es, int iter_max) {
    double xr = x0;   // Estimated root
    double xr_old;    // Keep xr from previous iteration
    double ea = 100;  // Estimated relative error (%)
    int iter = 0;     // Keep track of # of iterations

    do {
        xr_old = xr;
        xr = g(xr_old);  // g(x) has to be supplied
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;

        iter++;
    } while (ea > es && iter < iter_max);

    return xr;
}
111
Comparison of Root
Finding Methods
 Advantages/disadvantages
 Examples

112
Summary
Method Pros Cons
Bisection - Easy, Reliable, Convergent - Slow
- One function evaluation per - Needs an interval [a,b]
iteration containing the root, i.e.,
- No knowledge of derivative is f(a)f(b)<0
needed
Newton - Fast (if near the root) - May diverge
- Two function evaluations per - Needs derivative and an
iteration initial guess x0 such that
f’(x0) is nonzero
Secant - Fast (slower than Newton) - May diverge
- One function evaluation per - Needs two initial points
iteration guess x0, x1 such that
- No knowledge of derivative is f(x0)- f(x1) is nonzero
needed
113
Example

Use the Secant method to find the root of:

    f(x) = x⁶ − x − 1

Two initial points: x0 = 1 and x1 = 1.5.

    xi+1 = xi − f(xi) ( xi − xi−1 ) / ( f(xi) − f(xi−1) )

114
Solution
_______________________________
k xk f(xk)
_______________________________
0 1.0000 -1.0000
1 1.5000 8.8906
2 1.0506 -0.7062
3 1.0836 -0.4645
4 1.1472 0.1321
5 1.1331 -0.0165
6 1.1347 -0.0005

115
Example

Use Newton' s Method to find a root of :


f ( x) = x 3 − x − 1
Use the initial point : x0 = 1.
Stop after three iterations , or
if xk +1 − xk  0.001 , or
if f ( xk )  0.0001 .

116
Five Iterations of the Solution
k xk f(xk) f’(xk) ERROR
______________________________________
0 1.0000 -1.0000 2.0000
1 1.5000 0.8750 5.7500 0.1522
2 1.3478 0.1007 4.4499 0.0226
3 1.3252 0.0021 4.2685 0.0005
4 1.3247 0.0000 4.2646 0.0000
5 1.3247 0.0000 4.2646 0.0000

117
Example

Use Newton' s Method to find a root of :


f ( x) = e − x − x
Use the initial point : x0 = 1.
Stop after three iterations , or
if xk +1 − xk  0.001, or
if f ( xk )  0.0001 .

118
Example
Use Newton's Method to find a root of:
f(x) = e^(−x) − x,   f'(x) = −e^(−x) − 1

xk        f(xk)      f'(xk)     f(xk)/f'(xk)
1.0000    -0.6321    -1.3679    0.4621
0.5379    0.0461     -1.5840    -0.0291
0.5670    0.0002     -1.5672    -0.0002
0.5671    0.0000     -1.5671    -0.0000

119
Example
Estimates of the root of: x-cos(x)=0.

0.60000000000000 Initial guess


0.74401731944598 1 correct digit
0.73909047688624 4 correct digits
0.73908513322147 10 correct digits
0.73908513321516 14 correct digits

120
Example
In estimating the root of: x-cos(x)=0, to get more than 13 correct
digits:

 4 iterations of Newton (x0=0.8)


 43 iterations of Bisection method (initial
interval [0.6, 0.8])
 5 iterations of Secant method
( x0=0.6, x1=0.8)

121
Question
A floating ball has a specific gravity of 0.6 and a
radius of 5.5 cm. You are asked to find the depth to
which the ball is submerged when floating in water.

122
123
CSC 431
Interpolation
Tutorials

124
Interpolation Methods

Lagrange Polynomial
Newton’s Divided difference

125
126
127
128
129
130
Example 1
Compute ln 9.2 from ln 9.0 = 2.1972 and ln 9.5 = 2.2513 by
Lagrange interpolation, and determine the error from a = ln 9.2 = 2.2192.

x0 = 9.0, x1 = 9.5, f0 = 2.1972, and f1 = 2.2513.
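Example 1 can be checked with a short Python sketch of the Lagrange formula (the helper name is illustrative):

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], fs[i])."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        term = fi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # basis polynomial L_i(x)
        total += term
    return total

# Example 1: linear interpolation of ln 9.2 from ln 9.0 and ln 9.5
p = lagrange([9.0, 9.5], [2.1972, 2.2513], 9.2)
err = 2.2192 - p   # error relative to the true value ln 9.2 = 2.2192
```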
Example 2
Accuracy check: Note that f(x) in this case
is f(x) = x² + 5x + 7, and the exact value
of f(x) at x = 3 is 31. Thus the
interpolation error for this example is zero.
This is because the function to be
interpolated is itself a polynomial of degree
2, and the interpolating polynomial P3(x)
is of degree 3.
Example 3
Example 4
Compute ln 9.2 from ln 9.0 = 2.1972, ln 9.5 = 2.2513,
and ln 11.0 = 2.3979 by Lagrange interpolation, and
determine the error from ln 9.2 = 2.2192.

x0 = 9.0, x1 = 9.5, x2 = 11.0, f0 = 2.1972, f1 = 2.2513, f2 = 2.3979.
Exercise 1
Find the Lagrange Interpolating
polynomial P3 through the points (0, 3), (1, 2),
(2, 7), and (4, 59), and then approximate
value f (3) by P3(3).

p3(x) = x³ − 2x + 3
f(3) ≈ p3(3) = 27 − 6 + 3 = 24
Exercise 2

Given that f (-2) = 46, f (-1 ) = 4, f ( 1 ) =


4, f (3) = 156, and f (4) = 484, use Lagrange's
interpolation formula to estimate the value
of f(0).
Newton’s Divided difference
Newton's divided difference interpolation formula is
an interpolation technique used when the interval
difference is not the same for all sequences of values.

Suppose f(x0), f(x1), f(x2), ..., f(xn) are the (n+1) values of
the function y = f(x) corresponding to the arguments x = x0,
x1, x2, ..., xn, where the interval differences are not the same.

Then the first divided difference is given by:

    f[x0, x1] = ( f(x1) − f(x0) ) / ( x1 − x0 )
146
The second divided difference is given by:

    f[x0, x1, x2] = ( f[x1, x2] − f[x0, x1] ) / ( x2 − x0 )

and so on.

Divided difference table:

[Table image omitted]
147
Example 1
 Given the following table of values:

a) Compute the coefficients of the Newton interpolating polynomial


P2(x) using divided differences.
b) Interpolate f(1.9).
151
152
153
# Python3 program for implementing
# Newton's divided difference formula

# Function to find the product term
def proterm(i, value, x):
    pro = 1
    for j in range(i):
        pro = pro * (value - x[j])
    return pro

# Function for calculating the
# divided difference table
def dividedDiffTable(x, y, n):
    for i in range(1, n):
        for j in range(n - i):
            y[j][i] = ((y[j][i - 1] - y[j + 1][i - 1]) /
                       (x[j] - x[i + j]))
    return y

# Function for applying Newton's
# divided difference formula
def applyFormula(value, x, y, n):
    sum = y[0][0]
    for i in range(1, n):
        sum = sum + (proterm(i, value, x) * y[0][i])
    return sum

# Function for displaying the divided
# difference table
def printDiffTable(y, n):
    for i in range(n):
        for j in range(n - i):
            print(round(y[i][j], 4), "\t", end=" ")
        print("")

# Driver Code
# number of inputs given
n = 4
y = [[0 for i in range(10)]
     for j in range(10)]
x = [5, 6, 9, 11]

# y[][] is used for the divided difference
# table, where y[][0] holds the input values
y[0][0] = 12
y[1][0] = 13
y[2][0] = 14
y[3][0] = 16

# calculating divided difference table
y = dividedDiffTable(x, y, n)

# displaying divided difference table
printDiffTable(y, n)

# value to be interpolated
value = 7

# printing the value
print("\nValue at", value, "is",
      round(applyFormula(value, x, y, n), 2))

156
Exercises
(1) Determine the value of the velocity at t = 16 seconds using
second order polynomial interpolation with Newton's divided
difference method (question in slide 151).
(Hint: for quadratic interpolation, the velocity is given by:
V(t) = b0 + b1(t − t0) + b2(t − t0)(t − t1))

(2) Construct a forward difference table for the following data.

(3) Explain the advantages and disadvantages of Lagrange's
and Newton's finite difference interpolation methods.

157
