CSC 431 Lecture Note
Oyelade
1
Solution of Nonlinear Equations
( Root Finding Problems )
Definitions
Classification of Methods
Analytical Solutions
Graphical Methods
Numerical Methods
Bracketing Methods (bisection, regula-falsi)
Open Methods (secant, Newton-Raphson, fixed-point iteration)
Convergence Notations
2
Definitions
•Computational science is multidisciplinary, involving researchers in the physical sciences, computer scientists, software engineers, and even people in the social sciences. The methods usually involve numerical simulations.
3
Nonlinear Equation Solvers
All of these solvers are iterative.
11
Roots of Equations
Thermodynamics:
van der Waals equation; v = V/n (= volume / # moles)
Find the molecular volume v such that
f(v) = (p + a/v²)(v − b) − RT = 0
p = pressure,
T = temperature,
R = universal gas constant,
a & b = empirical constants
12
Roots of Equations
Civil Engineering:
Find the horizontal component of tension, H, in a
cable that passes through (0,y0) and (x, y)
f(H) = (H/w)·(cosh(wx/H) − 1) + y0 − y = 0
13
Roots of Equations
Electrical Engineering :
Find the resistance, R, of a circuit such that the
charge reaches q at specified time t
f(R) = e^(−Rt/(2L)) · cos( √(1/(LC) − (R/(2L))²) · t ) − q/q0 = 0
L = inductance,
C = capacitance,
q0 = initial charge
14
Roots of Equations
Mechanical Engineering:
Find the time t at which the damped oscillation
x(t) = x0 · e^(−(c/(2m)) t) · ( cos(ωt) + (c/(2mω)) sin(ωt) )
vanishes, in which
ω = √( k/m − c²/(4m²) )
15
Root Finding Problems
Many problems in science and engineering are expressed as: given a continuous function f(x), find the value r such that f(r) = 0.
16
Roots of Equations
A number r that satisfies an equation is called a root of
the equation.
17
Zeros of a Function
Let f(x) be a real-valued function of a real variable. Any number
r for which f(r)=0 is called a zero of the function.
Examples:
2 and 3 are zeros of the function f(x) = (x-2)(x-3).
18
Graphical Interpretation of Zeros
19
Simple and Multiple Zeros
f(x) = (x + 1)(x − 2) = x² − x − 2   (two simple zeros, x = −1 and x = 2)
f(x) = (x − 1)² = x² − 2x + 1   (a double zero at x = 1)
f(x) = x³   (a triple zero at x = 0)
23
Roots of Equations & Zeros of Function
Given the equation:
x⁴ − 3x³ − 7x² + 15x = −18
Move all terms to one side of the equation:
x⁴ − 3x³ − 7x² + 15x + 18 = 0
Define f(x) as:
f(x) = x⁴ − 3x³ − 7x² + 15x + 18
24
Solution Methods
Several ways to solve nonlinear equations are possible:
▪ Analytical Solutions
▪ Possible for special equations only
▪ Graphical Solutions
▪ Useful for providing initial guesses for other methods
▪ Numerical Solutions
▪ Open methods
▪ Bracketing methods
25
Analytical Methods
Analytical Solutions are available for special equations
only.
Analytical solution of: ax² + bx + c = 0
roots = ( −b ± √(b² − 4ac) ) / (2a)
26
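The quadratic formula above can be sketched in a few lines of Python (real-root case only; the test polynomial is illustrative, not from the notes):

```python
import math

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula (real roots only)."""
    disc = b * b - 4 * a * c          # discriminant b^2 - 4ac
    if disc < 0:
        raise ValueError("complex roots; no real solution")
    r = math.sqrt(disc)
    return (-b + r) / (2 * a), (-b - r) / (2 * a)

# Illustrative example: x^2 - 3x + 2 = (x - 1)(x - 2)
roots = quadratic_roots(1, -3, 2)
```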
Graphical Methods
Graphical methods are useful to provide an initial guess
to be used by other methods.
Example: solve e⁻ˣ = x.
Plot y = x and y = e⁻ˣ on the same axes; the curves intersect once.
(Figure: the two curves on [0, 2].) The root lies in [0, 1], with root ≈ 0.6.
27
Numerical Methods
Many methods are available to solve nonlinear equations:
▪ Bisection Method → bracketing method
▪ Newton's Method → open method
▪ Secant Method → open method
▪ False-Position Method → bracketing method
▪ Fixed-Point Iteration → open method
▪ Muller's Method
▪ Bairstow's Method
▪ ……….
28
Bracketing Methods
In bracketing methods, the method starts with an
interval that contains the root and a procedure is used
to obtain a smaller interval containing the root.
29
Open Methods
In the open methods, the method starts with one or
more initial guess points. In each iteration, a new
guess of the root is obtained.
Open methods are usually more efficient than
bracketing methods.
They may not converge to a root.
30
Convergence Notation
A sequence x1, x2, …, xn, … is said to converge to x if, for every ε > 0, there exists N such that |xn − x| < ε for all n > N.
31
Convergence Notation
Let x1, x2, … converge to x. The sequence is said to have convergence of order p if
|xn+1 − x| ≤ C |xn − x|^p
for some constant C > 0.
32
Speed of Convergence
We can compare different methods in terms of their
convergence rate.
Quadratic convergence is faster than linear
convergence.
A method with convergence order q converges faster
than a method with convergence order p if q>p.
Methods of convergence order p > 1 are said to have superlinear convergence.
33
Bisection Method
The Bisection Algorithm
Convergence Analysis of Bisection Method
Examples
34
Introduction
The Bisection method is one of the simplest methods to find a zero
of a nonlinear function.
It is also called the interval halving method.
To use the Bisection method, one needs an initial interval that is known
to contain a zero of the function.
The method systematically reduces the interval: it divides the interval into two equal halves, performs a simple sign test, and, based on the result of the test, throws away the half that cannot contain the zero.
The procedure is repeated until the desired interval size is obtained.
35
Intermediate Value Theorem
Let f(x) be defined on the interval [a, b].
Intermediate value theorem: if a function is continuous on [a, b] and f(a) and f(b) have different signs, then the function has at least one zero in the interval [a, b].
36
Examples
▪ If f(a) and f(b) have the same sign, the function may have an even number of real zeros or no real zeros in the interval [a, b].
▪ Bisection method cannot be used in these cases.
(Figures: one function with four real zeros in [a, b]; another with no real zeros.)
37
Two More Examples
If f(a) and f(b) have different signs, the function has at least one real zero.
Bisection method can be used to find one of the zeros.
(Figures: functions with an odd number of real zeros in [a, b].)
38
Bisection Method
▪ If the function is continuous on [a,b] and f(a) and f(b) have different signs, the Bisection method obtains a new interval that is half of the current interval, and the signs of the function at the endpoints of the new interval are again different.
39
Bisection Method
Assumptions:
Given an interval [a,b]
f(x) is continuous on [a,b]
f(a) and f(b) have opposite signs.
40
Bisection Algorithm
Assumptions:
f(x) is continuous on [a,b]
f(a) f(b) < 0
Algorithm:
Loop
1. Compute the midpoint c = (a+b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0, then the new interval is [a, c]
   If f(a) f(c) > 0, then the new interval is [c, b]
End loop
41
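The loop above translates directly into code. The following is a minimal Python sketch (the function, interval, and tolerance are illustrative; the equation x = cos(x) reappears later in these notes):

```python
import math

def bisect(f, a, b, eps):
    """Bisection: halve [a, b] until its half-width drops below eps."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 >= eps:
        c = (a + b) / 2          # step 1: midpoint
        if f(a) * f(c) < 0:      # step 3: root lies in [a, c]
            b = c
        else:                    # root lies in [c, b]
            a = c
    return (a + b) / 2           # best estimate: midpoint of final interval

# Illustrative use: x = cos(x) on [0.5, 0.9]
root = bisect(lambda x: x - math.cos(x), 0.5, 0.9, 1e-6)
```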
Bisection Method
(Figure: the intervals a0 < a1 < a2 < … < b0 produced by successive bisections.)
42
Example
(Figure: the sign pattern of f at the interval endpoints and midpoint over three successive iterations.)
43
Flow Chart of Bisection Method
Start: given a, b and ε
u = f(a); v = f(b)
Repeat:
  c = (a+b)/2; w = f(c)
  if (b−a)/2 < ε: stop
  if u·w < 0: b = c; v = w
  else:       a = c; u = w
44
Example
Can you use the Bisection method to find a zero of
f(x) = x³ − 3x + 1 in the interval [0,2]?
Answer:
f(x) is continuous on [0,2],
and f(0) · f(2) = (1)(3) = 3 > 0.
Assumptions are not satisfied;
the Bisection method cannot be used.
45
Example
Can you use the Bisection method to find a zero of
f(x) = x³ − 3x + 1 in the interval [0,1]?
Answer:
f(x) is continuous on [0,1],
and f(0) · f(1) = (1)(−1) = −1 < 0.
Assumptions are satisfied;
the Bisection method can be used.
46
Best Estimate and Error Level
Bisection method obtains an interval that is
guaranteed to contain a zero of the function.
Questions:
What is the best estimate of the zero of f(x)?
What is the error level in the obtained estimate?
47
Best Estimate and Error Level
The best estimate of the zero of the function f(x)
after the first iteration of the Bisection method is the
mid point of the initial interval:
Estimate of the zero: r = (a + b)/2
Error ≤ (b − a)/2
48
Stopping Criteria
Two common stopping criteria:
1. Stop after a fixed number of iterations.
2. Stop when the absolute error falls below a specified tolerance ε.
49
Stopping Criteria
After n iterations:
error = |r − cn| ≤ Ean = Δx0 / 2ⁿ = (b − a) / 2ⁿ
50
Convergence Analysis
To guarantee |r − cn| ≤ ε, the number of iterations must satisfy:
n ≥ ( log(b − a) − log(ε) ) / log(2)
51
Convergence Analysis – Alternative Form
n ≥ log₂( (b − a) / ε )
52
Example
a = 6, b = 7, ε = 0.0005
How many iterations are needed such that |r − cn| ≤ ε?
n ≥ ( log(1) − log(0.0005) ) / log(2) ≈ 10.97, so n ≥ 11
53
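This bound is easy to check in code. A small Python sketch for the example above:

```python
import math

def iterations_needed(a, b, eps):
    """Smallest n with (b - a) / 2**n <= eps, i.e. n >= log2((b - a) / eps)."""
    return math.ceil(math.log2((b - a) / eps))

# The example above: a = 6, b = 7, eps = 0.0005
n = iterations_needed(6, 7, 0.0005)
```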
Example
Use the Bisection method to find a root of the equation x = cos(x) with absolute error < 0.02
(assume the initial interval [0.5, 0.9]).
54
Bisection Method
(Figures: the initial interval [0.5, 0.9] and the intervals produced by the first iterations.)
56-58
Summary
Initial interval containing the root: [0.5,0.9]
After 5 iterations:
Interval containing the root: [0.725, 0.75]
Best estimate of the root is 0.7375
| Error | < 0.0125
59
A Matlab Program of Bisection Method
a = .5; b = .9;
u = a - cos(a);
v = b - cos(b);
for i = 1:5
    c = (a+b)/2
    fc = c - cos(c)
    if u*fc < 0
        b = c; v = fc;
    else
        a = c; u = fc;
    end
end

Output (c and fc at each iteration):
c = 0.7000, fc = -0.0648
c = 0.8000, fc = 0.1033
c = 0.7500, fc = 0.0183
c = 0.7250, fc = -0.0235
60
Example
Find the root of f(x) = x³ − 3x + 1 in [0, 1]:
* f(x) is continuous
* f(0) = 1, f(1) = −1, so f(a) f(b) < 0
Bisection method can be used to find the root.
61
Example
Iteration   a    b    c = (a+b)/2   f(c)     (b−a)/2
1           0    1    0.5           −0.375   0.5
62
Bisection Method
Advantages
▪ Simple and easy to implement
▪ One function evaluation per iteration
▪ The size of the interval containing the zero is reduced by 50%
after each iteration
▪ The number of iterations can be determined a priori
▪ No knowledge of the derivative is needed
▪ The function does not have to be differentiable
Disadvantage
▪ Slow to converge
▪ Good intermediate approximations may be discarded
63
Bisection Method (as C function)
// xl, xu  : interval endpoints with f(xl)*f(xu) < 0
// es      : acceptable relative percentage error
// iter_max: maximum number of iterations allowed
// f() has to be supplied
double Bisect(double xl, double xu, double es, int iter_max)
{
    double xr;             // Est. root
    double xr_old;         // Est. root in the previous step
    double ea = 100.0;     // Est. relative error (%)
    int iter = 0;          // Keep track of # of iterations
    double fl, fr;         // Save values of f(xl) and f(xr)
    double test;

    fl = f(xl);
    xr = xl;
    do {
        xr_old = xr;
        xr = (xl + xu) / 2;    // midpoint
        fr = f(xr);
        iter++;
        if (xr != 0)
            ea = fabs((xr - xr_old) / xr) * 100;
        test = fl * fr;
        if (test < 0)
            xu = xr;           // root lies in [xl, xr]
        else if (test > 0) {
            xl = xr;           // root lies in [xr, xu]
            fl = fr;
        }
        else
            ea = 0;            // f(xr) == 0: exact root found
    } while (ea > es && iter < iter_max);
    return xr;
}
65
Regula-Falsi Method
66
Regula Falsi Method
Also known as the false-position method, or linear interpolation method.
The slope of the line connecting the points (xl, f(xl)) and (xu, f(xu)) serves as an "average slope" (an approximation of f′(x)) over [xl, xu].
67
f(xu) / (xu − xr) = f(xl) / (xl − xr)   ⟹   xr = xu − f(xu)(xl − xu) / (f(xl) − f(xu))
68
Regula Falsi Method
The regula falsi method starts with two points, (a, f(a)) and (b, f(b)), satisfying the condition f(a) f(b) < 0.
The straight line through the two points (a, f(a)), (b, f(b)) is:
y = f(a) + ( (f(b) − f(a)) / (b − a) ) (x − a)
The next approximation to the zero is the value of x where this straight line crosses the x-axis:
x = a − f(a) (b − a) / (f(b) − f(a)) = ( a f(b) − b f(a) ) / ( f(b) − f(a) )
69
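The update above can be sketched in Python as follows (the tolerance, iteration cap, and the stopping test on |f(x)| are illustrative choices; the cube-root example from the next slide is used as a check):

```python
def regula_falsi(f, a, b, tol, max_iter=100):
    """False position: intersect the chord through (a, f(a)), (b, f(b))
    with the x-axis and keep the sign-changing subinterval."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # chord crosses the x-axis here
        fx = f(x)
        if abs(fx) < tol:
            break
        if fa * fx < 0:                     # root in [a, x]
            b, fb = x, fx
        else:                               # root in [x, b]
            a, fa = x, fx
    return x

# Cube root of 2: f(x) = x^3 - 2 on [1, 2]
root = regula_falsi(lambda x: x**3 - 2, 1.0, 2.0, 1e-10)
```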
Example
Finding the Cube Root of 2 Using Regula Falsi
f(x) = x³ − 2, a = 1, b = 2, so f(a) = −1 and f(b) = 6.
x = b − (b − a)/(f(b) − f(a)) · f(b) = 2 − (2 − 1)/(6 + 1) · 6
  = 2 − 6/7 = 8/7 ≈ 1.1429
y = f(x) = (8/7)³ − 2 ≈ −0.5073
Since f(a) and y are both negative, while y and f(b) have opposite signs, the root lies in [8/7, 2]: replace a by 8/7 and repeat.
70
Example (cont.)
(Figure: calculation of the cube root of 2 using regula falsi.)
71
Open Methods
73
OPEN METHOD
Newton-Raphson Method
Assumptions
Interpretation
Examples
Convergence Analysis
74
Newton-Raphson Method
(Also known as Newton’s Method)
Given an initial guess of the root x0, Newton-Raphson method
uses information about the function and its derivative at that
point to find a better guess of the root.
Assumptions:
f(x) is continuous and the first derivative is known
An initial guess x0 such that f’(x0)≠0 is given
75
Newton Raphson Method
- Graphical Depiction -
If the initial guess at the root is xi, then the tangent to the function at xi (with slope f′(xi)) is extended down to the x-axis to provide an estimate of the root at xi+1.
76
Derivation of Newton's Method
Given: xi, an initial guess of the root of f(x) = 0.
Question: how do we obtain a better estimate xi+1?
____________________________________
Taylor's theorem: f(x + h) ≈ f(x) + f′(x) h
Find h such that f(x + h) = 0:
h ≈ −f(x) / f′(x)
Newton-Raphson formula; a new guess of the root:
xi+1 = xi − f(xi) / f′(xi)
77
Newton's Method
Given f(x), f′(x), x0
Assumption: f′(x0) ≠ 0
for i = 0 : n
    xi+1 = xi − f(xi) / f′(xi)
end

C     FORTRAN PROGRAM
      F(X)  = X**3 - 3*X**2 + 1
      FP(X) = 3*X**2 - 6*X
      X = 4
      DO 10 I = 1, 5
        X = X - F(X)/FP(X)
        PRINT *, X
10    CONTINUE
      STOP
      END
78
Newton's Method
Given f(x), f′(x), x0
Assumption: f′(x0) ≠ 0
for i = 0 : n
    xi+1 = xi − f(xi) / f′(xi)
end

% MATLAB PROGRAM
% F.m
function [F] = F(X)
F = X^3 - 3*X^2 + 1;

% FP.m
function [FP] = FP(X)
FP = 3*X^2 - 6*X;

% driver
X = 4;
for i = 1:5
    X = X - F(X)/FP(X)
end
79
Example
Find a zero of the function f(x) = x³ − 2x² + x − 3, starting from x0 = 4.
f′(x) = 3x² − 4x + 1
Iteration 1: x1 = x0 − f(x0)/f′(x0) = 4 − 33/33 = 3
Iteration 2: x2 = x1 − f(x1)/f′(x1) = 3 − 9/16 = 2.4375
Iteration 3: x3 = x2 − f(x2)/f′(x2) = 2.4375 − 2.0369/9.0742 = 2.2130
80
Example
k (Iteration)   xk   f(xk)   f′(xk)   xk+1     |xk+1 − xk|
0               4    33      33       3        1
1               3    9       16       2.4375   0.5625
81
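The hand iterations above can be reproduced with a short Python sketch (the fixed iteration count is just for illustration):

```python
def newton(f, fprime, x0, n):
    """Newton-Raphson: repeat x <- x - f(x)/f'(x) for n iterations."""
    x = x0
    for _ in range(n):
        x = x - f(x) / fprime(x)
    return x

# The example above: f(x) = x^3 - 2x^2 + x - 3, f'(x) = 3x^2 - 4x + 1
f = lambda x: x**3 - 2 * x**2 + x - 3
fp = lambda x: 3 * x**2 - 4 * x + 1

x1 = newton(f, fp, 4.0, 1)   # iteration 1
x2 = newton(f, fp, 4.0, 2)   # iteration 2
```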
Convergence Analysis
Theorem:
Let f(x), f′(x) and f″(x) be continuous at x = r, where f(r) = 0. If f′(r) ≠ 0, then there exists δ > 0 such that, for any x0 with |x0 − r| ≤ δ:
|xk+1 − r| ≤ C |xk − r|²
where
C = (1/2) · max |f″(x)| / min |f′(x)|,
the max and min taken over |x − r| ≤ δ.
82
Convergence Analysis
Remarks
83
Error Analysis of Newton-Raphson Method
Using an iterative process, we get xk+1 from xk and other information.
We have x0, x1, x2, …, xk+1 as estimates of the root α.
Let δk = α − xk.
84
Error Analysis of Newton-Raphson Method
By definition:
δi = α − xi        (1)
δi+1 = α − xi+1    (2)
Newton-Raphson method:
xi+1 = xi − f(xi)/f′(xi)
⟹ f(xi) = f′(xi)(xi − xi+1)
⟹ f(xi) + f′(xi)(α − xi) = f′(xi)(α − xi+1)    (3)
85
Error Analysis of Newton-Raphson Method
Suppose α is the true value (i.e., f(α) = 0).
Using Taylor's series:
f(α) = f(xi) + f′(xi)(α − xi) + (f″(c)/2)(α − xi)²
0 = f(xi) + f′(xi)(α − xi) + (f″(c)/2)(α − xi)²
0 = f′(xi)(α − xi+1) + (f″(c)/2)(α − xi)²    (from (3))
0 = f′(xi) δi+1 + (f″(c)/2) δi²              (from (1) and (2))
δi+1 = ( −f″(c) / (2 f′(xi)) ) δi² ≈ ( −f″(α) / (2 f′(α)) ) δi²
When xi and α are very close to each other, c is between xi and α.
The iterative process is said to be of second order.
86
Problems with Newton's Method
• If the initial guess of the root is far from the root, the method may not converge.
• Newton's method converges only linearly near multiple zeros { f(r) = f′(r) = 0 }. In such a case, modified algorithms can be used to regain quadratic convergence.
87
Multiple Roots
f(x) = x³
f(x) = (x + 1)²
(Figures: Newton iterates near multiple roots, including a cycling case with x0 = x2 = x4 and x1 = x3 = x5.)
91
Secant Method
Secant Method
Examples
Convergence Analysis
92
Newton's Method (Review)
Assumptions: f(x), f′(x), x0 are available; f′(x0) ≠ 0.
Newton's method new estimate:
xi+1 = xi − f(xi)/f′(xi)
Problem:
f′(xi) is not available, or is difficult to obtain analytically.
93
The Secant Method
xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) )
• Requires two initial estimates x0, x1. However, it is not a "bracketing" method.
• The Secant Method has the same properties as Newton's method. Convergence is not guaranteed for all x0, f(x).
94
Secant Method
f′(x) ≈ ( f(x + h) − f(x) ) / h
If xi and xi−1 are two initial points:
f′(xi) ≈ ( f(xi) − f(xi−1) ) / ( xi − xi−1 )
xi+1 = xi − f(xi) / f′(xi) = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) )
95
▪ Notice that this is very similar in form to the false-position method.
▪ It still requires two initial estimates of x, but it does not require an analytical expression of the derivative.
▪ However, it does not bracket the root at all times: there is no sign test.
96
Newton-Raphson: xi+1 = xi − f(xi) / f′(xi)
Replace the derivative by a difference quotient:
f′(xi) ≈ df/dx ≈ ( f(xi) − f(xi−1) ) / ( xi − xi−1 )
Secant: xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) ),   i = 1, 2, 3, …
97
Secant Method
Assumptions:
Two initial points xi and xi−1 such that f(xi) ≠ f(xi−1).
New estimate (Secant Method):
xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) )
98
Secant Method
f(x) = x² − 2x + 0.5
x0 = 0
x1 = 1
xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) )
99
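A minimal Python sketch of the secant update, applied to the example above (the tolerance and iteration cap are illustrative; the guard against f(xi) = f(xi−1) reflects the assumption stated above):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Secant method: Newton's update with f'(x) replaced by a difference quotient."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                              # assumption violated; stop
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)      # secant update
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

# The example above: f(x) = x^2 - 2x + 0.5 with x0 = 0, x1 = 1
root = secant(lambda x: x * x - 2 * x + 0.5, 0.0, 1.0)
```

With these starting points the iterates settle on the smaller root, x = 1 − √0.5 ≈ 0.2929.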
Secant Method - Flowchart
Start: x0, x1, i = 1
Repeat:
  xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) );
  i = i + 1
until |xi+1 − xi| < ε, then stop.
100
Modified Secant Method
In the modified Secant method, only one initial guess is needed:
f′(xi) ≈ ( f(xi + δxi) − f(xi) ) / (δxi)
xi+1 = xi − f(xi) / f′(xi) ≈ xi − δxi f(xi) / ( f(xi + δxi) − f(xi) )
Problem: how to select δ?
If δ is not selected properly, the method may diverge.
101
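A minimal Python sketch of the modified secant update (the choice δ = 0.01 is illustrative, exactly the selection problem noted above; the test function is the secant example on the next slide):

```python
def modified_secant(f, x0, delta=0.01, tol=1e-10, max_iter=50):
    """Modified secant: one starting point; perturb x by a small fraction delta."""
    x = x0
    for _ in range(max_iter):
        dx = delta * x if x != 0 else delta       # step for the difference quotient
        fx = f(x)
        x_new = x - dx * fx / (f(x + dx) - fx)    # modified secant update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Illustrative use: f(x) = x^5 + x^3 + 3, starting from x0 = -1
root = modified_secant(lambda x: x**5 + x**3 + 3, -1.0)
```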
Example
Secant method: f(x) = x⁵ + x³ + 3, with initial points x0 = −1 and x1 = −1.1.
(Figure: plot of f(x) for x in [−2, 2].)
102
Example
x(i)       f(x(i))   x(i+1)     |x(i+1) − x(i)|
-1.0000    1.0000    -1.1000    0.1000
-1.1000    0.0585    -1.1062    0.0062
-1.1062    0.0102    -1.1052    0.0009
-1.1052    0.0001    -1.1052    0.0000
103
Convergence Analysis
The secant method converges superlinearly: its order of convergence is (1 + √5)/2 ≈ 1.62.
104
OPEN METHOD
105
Fixed Point Iteration
Also known as one-point iteration or successive substitution.
Rewrite f(x) = 0 in the form g(x) = x.
• If we can solve g(x) = x, we solve f(x) = 0.
  – x is known as the fixed point of g(x).
• We solve g(x) = x by computing
  xi+1 = g(xi), with x0 given,
until xi+1 converges to x.
106
Fixed Point Iteration – Example
f(x) = x² + 2x − 3 = 0
x² + 2x − 3 = 0  ⟹  2x = 3 − x²  ⟹  x = (3 − x²)/2
xi+1 = g(xi) = (3 − xi²)/2
Reason: if xi converges, i.e. xi+1 → xi, then
xi+1 = (3 − xi²)/2  →  xi = (3 − xi²)/2  ⟹  xi² + 2xi − 3 = 0
107
Example
Find the root of f(x) = e⁻ˣ − x = 0.  (Answer: α = 0.56714329)
We put xi+1 = e^(−xi).

i     xi         εa (%)   εt (%)
0     0                   100.0
1     1.000000   100.0    76.3
2     0.367879   171.8    35.1
3     0.692201    46.9    22.1
4     0.500473    38.3    11.8
5     0.606244    17.4     6.89
6     0.545396    11.2     3.83
7     0.579612     5.90    2.20
8     0.560115     3.48    1.24
9     0.571143     1.93    0.705
10    0.564879     1.11    0.399
108
Fixed Point Iteration
There are infinitely many ways to construct g(x) from f(x). For example, the three cases below all come from f(x) = x² − 2x − 3 = 0 (roots x = 3 and x = −1).
109
Three choices of g(x):
Case a: xi+1 = √(2xi + 3)
Case b: xi+1 = 3/(xi − 2)
Case c: xi+1 = (xi² − 3)/2

Case a (x0 = 4): 3.31662, 3.10375, 3.03439, 3.01144, 3.00381, … Converges (to the root x = 3).
Case b (x0 = 4): 1.5, −6, −0.375, −1.263158, −0.919355, −1.02762, −0.990876, −1.00305, … Converges (to the root x = −1), but more slowly.
Case c (x0 = 4): 6.5, 19.625, 191.070, … Diverges!
110
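The three cases can be reproduced with a few lines of Python (the iteration counts are illustrative):

```python
import math

# Three rearrangements of f(x) = x^2 - 2x - 3 = 0 as x = g(x)
g_a = lambda x: math.sqrt(2 * x + 3)   # case a
g_b = lambda x: 3 / (x - 2)            # case b
g_c = lambda x: (x * x - 3) / 2        # case c

def iterate(g, x0, n):
    """Apply the fixed-point update x <- g(x) n times."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

xa = iterate(g_a, 4.0, 30)   # heads toward the root x = 3
xb = iterate(g_b, 4.0, 60)   # heads toward the root x = -1, more slowly
xc = iterate(g_c, 4.0, 3)    # blows up: 6.5, 19.625, 191.07, ...
```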
Fixed Point Iteration Impl. (as C function)
// x0: Initial guess of the root
// es: Acceptable relative percentage error
// iter_max: Maximum number of iterations allowed
double FixedPt(double x0, double es, int iter_max) {
  double xr = x0;      // Estimated root
  double xr_old;       // Keep xr from previous iteration
  double ea = 100.0;   // Estimated relative error (%)
  int iter = 0;        // Keep track of # of iterations
  do {
    xr_old = xr;
    xr = g(xr_old);    // g(x) has to be supplied
    if (xr != 0)
      ea = fabs((xr - xr_old) / xr) * 100;
    iter++;
  } while (ea > es && iter < iter_max);
  return xr;
}
111
Comparison of Root
Finding Methods
Advantages/disadvantages
Examples
112
Summary
Method      Pros                                        Cons
Bisection   - Easy, reliable, convergent                - Slow
            - One function evaluation per iteration     - Needs an interval [a,b] containing
            - No knowledge of the derivative is needed    the root, i.e., f(a)f(b) < 0
Newton      - Fast (if near the root)                   - May diverge
            - Two function evaluations per iteration    - Needs the derivative and an initial
                                                          guess x0 such that f′(x0) is nonzero
Secant      - Fast (slower than Newton)                 - May diverge
            - One function evaluation per iteration     - Needs two initial guesses x0, x1
            - No knowledge of the derivative is needed    such that f(x0) − f(x1) is nonzero
113
Example
Use the secant method to find a root of f(x) = x⁶ − x − 1, starting from x0 = 1, x1 = 1.5:
xi+1 = xi − f(xi) (xi − xi−1) / ( f(xi) − f(xi−1) )
114
Solution
_______________________________
k xk f(xk)
_______________________________
0 1.0000 -1.0000
1 1.5000 8.8906
2 1.0506 -0.7062
3 1.0836 -0.4645
4 1.1472 0.1321
5 1.1331 -0.0165
6 1.1347 -0.0005
115
Example
116
Use Newton's method to find a root of f(x) = x³ − x − 1 (so f′(x) = 3x² − 1), starting from x0 = 1.
Five Iterations of the Solution
k   xk       f(xk)     f′(xk)   ERROR
0   1.0000   -1.0000   2.0000
1   1.5000    0.8750   5.7500   0.1522
2   1.3478    0.1007   4.4499   0.0226
3   1.3252    0.0021   4.2685   0.0005
4   1.3247    0.0000   4.2646   0.0000
5   1.3247    0.0000   4.2646   0.0000
117
Example
118
Example
Use Newton's Method to find a root of:
f(x) = e⁻ˣ − x,   f′(x) = −e⁻ˣ − 1

xk        f(xk)      f′(xk)     f(xk)/f′(xk)
1.0000   -0.6321    -1.3679     0.4621
0.5379    0.0461    -1.5840    -0.0291
0.5670    0.0002    -1.5672    -0.0002
0.5671    0.0000    -1.5671    -0.0000
119
Example
Estimates of the root of: x-cos(x)=0.
120
Example
In estimating the root of x − cos(x) = 0, how many iterations does each method need to obtain more than 13 correct digits?
121
Question
Given a floating ball with a specific gravity of 0.6 and a radius of 5.5 cm, find the depth to which the ball is submerged when floating in water.
122
123
CSC 431
Interpolation
Tutorials
124
Interpolation Methods
Lagrange Polynomial
Newton’s Divided difference
125
Example 1
Compute ln 9.2 from ln 9.0 = 2.1972 and ln 9.5 = 2.2513 by Lagrange interpolation, and determine the error from the exact value a = ln 9.2 = 2.2192.
x0 = 9.0, x1 = 9.5, f0 = 2.1972, and f1 = 2.2513.
p1(9.2) = 0.6·f0 + 0.4·f1 = 2.2188, so the error is 2.2192 − 2.2188 = 0.0004.
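The linear Lagrange interpolation above can be verified with a few lines of Python:

```python
# Linear Lagrange interpolation of ln 9.2 from ln 9.0 and ln 9.5
x0, x1 = 9.0, 9.5
f0, f1 = 2.1972, 2.2513

x = 9.2
L0 = (x - x1) / (x0 - x1)   # Lagrange basis L0(x), equals 0.6 here
L1 = (x - x0) / (x1 - x0)   # Lagrange basis L1(x), equals 0.4 here
p1 = L0 * f0 + L1 * f1      # interpolated value of ln 9.2

error = 2.2192 - p1         # compare with the exact value ln 9.2 = 2.2192
```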
Example 2
Accuracy check: note that f(x) in this case is f(x) = x² + 5x + 7, and the exact value of f(x) at x = 3 is 31. Thus the interpolation error for this example is zero. This is because the function being interpolated is itself a polynomial of degree 2, which the interpolating polynomial P3(x) (of degree ≤ 3) reproduces exactly.
Example 3
Example 4
Compute ln 9.2 from ln 9.0 = 2.1972, ln 9.5 = 2.2513, and ln 11.0 = 2.3979 by Lagrange interpolation, and determine the error from ln 9.2 = 2.2192.
x0 = 9.0, x1 = 9.5, x2 = 11.0, f0 = 2.1972, f1 = 2.2513, f2 = 2.3979.
Exercise 1
Find the Lagrange interpolating polynomial P3 through the points (0, 3), (1, 2), (2, 7), and (4, 59), and then approximate the value f(3) by P3(3).
p3(x) = x³ − 2x + 3
f(3) ≈ p3(3) = 27 − 6 + 3 = 24
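The exercise can be checked by evaluating the Lagrange form directly (a minimal sketch):

```python
# Check Exercise 1 by evaluating the Lagrange interpolating polynomial at x = 3
xs = [0.0, 1.0, 2.0, 4.0]
ys = [3.0, 2.0, 7.0, 59.0]

def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i in range(len(xs)):
        Li = 1.0
        for j in range(len(xs)):
            if j != i:
                Li *= (x - xs[j]) / (xs[i] - xs[j])   # Lagrange basis L_i(x)
        total += ys[i] * Li
    return total

p3_at_3 = lagrange(xs, ys, 3.0)   # should match p3(3) = 27 - 6 + 3 = 24
```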
Exercise 2
146
The next divided difference is given by
f[x1, x2] = ( f(x2) − f(x1) ) / ( x2 − x1 )
and so on; higher-order differences follow recursively, e.g.
f[x0, x1, x2] = ( f[x1, x2] − f[x0, x1] ) / ( x2 − x0 )
147
Example 1
Given the following table of values (the data used in the program below):
x:    5, 6, 9, 11
f(x): 12, 13, 14, 16
154
# Function for displaying the divided-difference table
def printDiffTable(y, n):
    for i in range(n):
        for j in range(n - i):
            print(round(y[i][j], 4), "\t", end=" ");
        print("");

# Function for filling the divided-difference table;
# y[][0] is used for the input values f(x)
def dividedDiffTable(x, y, n):
    for j in range(1, n):
        for i in range(n - j):
            y[i][j] = (y[i][j - 1] - y[i + 1][j - 1]) / (x[i] - x[i + j]);
    return y;

# Driver code
# number of inputs given
n = 4;
y = [[0 for i in range(10)]
        for j in range(10)];
x = [ 5, 6, 9, 11 ];
y[0][0] = 12;
y[1][0] = 13;
y[2][0] = 14;
y[3][0] = 16;

# calculating the divided-difference table
y = dividedDiffTable(x, y, n);
printDiffTable(y, n);

# value to be interpolated
value = 7;
156
Exercises
(1) Determine the value of the velocity at t = 16 seconds using second-order polynomial interpolation with Newton's divided-difference method (question in slide 151).
(Hint: for quadratic interpolation, the velocity is given by:
V(t) = b0 + b1(t − t0) + b2(t − t0)(t − t1))
157