
NUMETHS


Mathematical Model

An equation that approximates the actual performance of a device or system.


Diode Model:
1. Piecewise
2. Approximate
3. Exact

Numerical Methods
Techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations. Also known as "computer math" because it combines mathematics and computers.

Problem Solving (Engineering)


1. Analytical (Exact)
2. Numerical Method (Approximation)
- Reformulate the equation so that a known solution technique can be used.

Dependent variable = f(independent variables, parameters, forcing functions)

where the dependent variable reflects the state of the system, the independent variables are the dimensions along which the system is examined, the parameters are the system's properties, and the forcing functions are the external forces acting on the system.

Problems with Analytical Method


1. Only a finite set (family) of problems can be solved exactly.
2. Graphical solutions are very tedious without the aid of a computer.
3. Hand and calculator computations are slow and prone to errors (round-off / truncation).

Summary of Numerical Methods
1. Roots of equations and optimization
2. Linear algebraic equations
3. Curve fitting
4. Differentiation and integration
5. Differential equations

Newton’s Second Law of Motion


F = ma leads to the mathematical model (differential equation) for a falling object with drag:

dV/dt = g − (c/m)V

Solving the differential equation gives the exact (analytical) solution

V(t) = (gm/c)(1 − e^(−(c/m)t))

which a numerical method would otherwise approximate.
Conservation Laws and Engineering:
 Change = increases − decreases (when a change exists)
 If no change exists: increases = decreases
Roots of Equations and Optimization
Given a system whose roots are 2, 3, and 5, determine the system equation (f(x) = 0).
Let x be the variable:
(x − 2)(x − 3)(x − 5) = 0
x³ − 10x² + 31x − 30 = 0
 Accuracy
 Precision (reproducibility)
 Inaccuracy (or bias)
 Imprecision (uncertainty)

Accuracy and Precision
Errors:
 Computed errors (accuracy type)

Absolute error (Ea):
Ea = |True value − Computed value|

Relative error (ER):
ER = |(True value − Computed value) / True value| × 100%

Error definitions:
True fractional relative error = True error / True value
True percent relative error = |True error / True value| × 100%
True value = Approximation + Error
ET = True value − Approximation

Non-computed Errors
*Truncation error – digits are discarded (chopped)
*Round-off error – the value is rounded to the nearest representable digit

For Numerical Methods

Approximate relative error:
εa = |Approximate error / Approximation| × 100%

Iterative approach:
εa = |(Current approximation − Previous approximation) / Current approximation| × 100%

Stopping criterion: if
εa < εs = (0.5 × 10^(2−n)) %
then the result is correct to at least n significant figures.

TAYLOR AND MACLAURIN SERIES EXPANSION (TSE and MSE)

Taylor's Theorem
States that any smooth function may be approximated by a polynomial; the more derivative terms of the function are included, the more completely its curvature is captured.

General Form:
f(x) = Σ (n = 0 to ∞) Cn (x − a)^n

f(x) = C0 + C1(x − a) + C2(x − a)² + C3(x − a)³ + … + Cn(x − a)^n

Where:
C0, C1, …, Cn are the constant coefficients of the power series.

To determine the coefficients, differentiate repeatedly:

f'(x) = C1 + 2C2(x − a) + 3C3(x − a)² + 4C4(x − a)³ + … + nCn(x − a)^(n−1)
f''(x) = 2C2 + 2·3C3(x − a) + 3·4C4(x − a)² + … + n(n − 1)Cn(x − a)^(n−2)
f'''(x) = 3·2C3 + 4·3C4(x − a) + … + n(n − 1)(n − 2)Cn(x − a)^(n−3)
f^(n)(x) = n(n − 1)(n − 2)…[n − (n − 1)]Cn

Evaluating at x = a (every (x − a) term vanishes):

f(a) = C0
f'(a) = C1
f''(a) = 2C2
f'''(a) = 3·2C3

Thus

C0 = f(a)
C1 = f'(a)/1!
C2 = f''(a)/2!
⋮
Cn = f^(n)(a)/n!

Substituting into the general form:

f(x) = f(a) + f'(a)(x − a)/1! + f''(a)(x − a)²/2! + f'''(a)(x − a)³/3! + … + f^(n)(a)(x − a)^n/n!
Taylor Series Expansion
Let a = 0; then

C0 = f(0)
C1 = f'(0)/1!
C2 = f''(0)/2!
⋮
Cn = f^(n)(0)/n!

Substituting into the general form:

f(x) = f(0) + f'(0)x/1! + f''(0)x²/2! + f'''(0)x³/3! + … + f^(n)(0)x^n/n!

Maclaurin Series Expansion

Example:
1. Determine the Taylor series expansion of the function f(x) = ln(x + 1) near the value a = 0.5 and determine its first 4 non-zero terms.
Solution:
f(x) = ln(x + 1)
f'(x) = 1/(x + 1) = (x + 1)^(−1)
f''(x) = −1/(x + 1)² = −(x + 1)^(−2)
f'''(x) = 2/(x + 1)³ = 2(x + 1)^(−3)

a = 0.5:
f(a) = ln(0.5 + 1) = 0.4055
f'(a) = 1/(0.5 + 1) = 0.6667
f''(a) = −1/(0.5 + 1)² = −0.4444
f'''(a) = 2/(0.5 + 1)³ = 0.5926

f(x) ≈ 0.4055 + (0.6667/1!)(x − 0.5) − (0.4444/2!)(x − 0.5)² + (0.5926/3!)(x − 0.5)³

2. Use Taylor series expansions of order n = 0 to 6 to approximate f(x) = cos x at x(i+1) = π/3, given the value of f(x) and its derivatives at x(i) = π/4.

Solution:
cos(π/4) = 0.7071067812;  true value: cos(π/3) = 0.5
Step size: h = |x(i+1) − x(i)| = |π/3 − π/4| = π/12

Zero order (n = 0):
f(π/3) ≈ cos(π/4) = 0.7071067812
εt = |(0.5 − 0.7071067812)/0.5| × 100 = 41.42%

First order (add the f'(x) = −sin x term):
f(π/3) ≈ cos(π/4) − sin(π/4)·h/1! = 0.7071067812 − sin(π/4)(π/12) = 0.5219866588
εt = |(0.5 − 0.5219866588)/0.5| × 100 = 4.4%

Second order (add the f''(x) = −cos x term):
f(π/3) ≈ 0.5219866588 − cos(π/4)·h²/2! = 0.4977544914
εt = |(0.5 − 0.4977544914)/0.5| × 100 = 0.45%

Third order (add the f'''(x) = sin x term):
f(π/3) ≈ 0.4977544914 + sin(π/4)·h³/3! = 0.4998691469
εt = 0.026%

Fourth order (add the f⁗(x) = cos x term):
f(π/3) ≈ 0.4998691469 + cos(π/4)·h⁴/4! = 0.5000075508
εt = 0.0015%

Fifth order (add the f⁽⁵⁾(x) = −sin x term):
f(π/3) ≈ 0.5000075508 − sin(π/4)·h⁵/5! = 0.5000003040
εt = 0.0000608%

Sixth order (add the f⁽⁶⁾(x) = −cos x term):
f(π/3) ≈ 0.5000003040 − cos(π/4)·h⁶/6! = 0.4999999878
εt = 0.00000244%
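The order-by-order build-up in this example can be reproduced with a short script. The sketch below is a minimal Python illustration (the function name taylor_cos is ours, not from the notes); it expands cos x about a = π/4 and prints the true percent relative error at x = π/3 for orders 0 through 6.

```python
import math

def taylor_cos(x, a, order):
    """Taylor expansion of cos(x) about a; derivatives of cos cycle every 4 orders."""
    derivs = [math.cos(a), -math.sin(a), -math.cos(a), math.sin(a)]
    h = x - a
    return sum(derivs[k % 4] * h**k / math.factorial(k) for k in range(order + 1))

true_value = math.cos(math.pi / 3)
for n in range(7):                      # orders 0 through 6, as in the example
    approx = taylor_cos(math.pi / 3, math.pi / 4, n)
    et = abs((true_value - approx) / true_value) * 100
    print(f"order {n}: approx = {approx:.10f}, et = {et:.7f}%")
```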

3. Determine the Maclaurin expansion of the function f(x) = e^x and express the result as a power series formula.
Solution:

f(x) = e^x
f'(x) = e^x,  f''(x) = e^x,  f'''(x) = e^x, …
f(0) = f'(0) = f''(0) = f'''(0) = e⁰ = 1

f(x) = 1 + (1/1!)x + (1/2!)x² + (1/3!)x³ + ⋯ + (1/n!)x^n
f(x) = 1 + x + x²/2! + x³/3! + ⋯ + x^n/n!

f(x) = Σ (n = 0 to ∞) x^n/n!     Power Series Formula

ROOTS OF NON-LINEAR FUNCTIONS

Types of Equations
1. Polynomial Equation
a. Standard form
a0·x^n + a1·x^(n−1) + a2·x^(n−2) + … + an = 0
b. Roots
1.1 Even degree – roots may be real or complex
1.2 Odd degree – at least one root will be real
1.3 Total number of roots that can be obtained: n
2. Transcendental Equation
f(x) = cos x + e^x + x − 1
 Fibonacci series: 0 1 1 2 3 5 8 …
X(n) = X(n − 1) + X(n − 2)

Techniques in Evaluating the Roots of Non-Linear Equations

1. Graphical method
2. Empirical formula
3. Iterative method

Terminating criterion: εs = (0.5 × 10^(2−n)) %
where n = number of significant digits required.

BRACKETING TECHNIQUES
Interhalving (Bisection) Method
Also known as "binary search", it is a basic method for solving for a root of an equation. Applicable to both polynomial and transcendental equations.
STEPS:
1. Get two values of x, i.e. x0 and xi, that bracket the root of f(x) = 0:
x0 – initial (lower) limit
xi – final (upper) limit
x0 and xi must bracket the root.
2. Check the bracketing condition f(x0)·f(xi) < 0. If it is true, a root lies in the interval. The method then treats the midpoint of the interval as the new estimate: each successive step divides the interval in half and replaces one endpoint with the midpoint so that the root remains bracketed.
 If f(x0)·f(x2) > 0 (positive), the root lies between the midpoint and the final limit.
 If f(x0)·f(x2) < 0 (negative), the root lies between the initial limit and the midpoint.

Graphical depiction: the curve y = f(x) crosses zero at the root, which lies between x0 and xi; f(x0) and f(xi) have opposite signs.

Iterative Formula (midpoint)

x2 = (x0 + xi)/2,  equivalently  x2 = x0 + (xi − x0)/2

Substitute x2 into the given function f(x) to obtain f(x2). If f(x2) > 0, replace the limit whose function value is positive (the '+' limit) with x2; if f(x2) < 0 (negative), replace the limit whose function value is negative (the '−' limit) with x2. The process is repeated until the terminating condition is satisfied.
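A minimal Python sketch of the bisection loop described above follows; the parachutist drag-coefficient function used later in these notes is included as a usage example (the function and variable names are illustrative, not part of the notes).

```python
import math

def bisection(f, x0, xi, es=0.5, max_iter=100):
    """Bisection (interhalving): x0 and xi must bracket the root, f(x0)*f(xi) < 0."""
    if f(x0) * f(xi) > 0:
        raise ValueError("the initial values do not bracket a root")
    x2_old = x0
    for _ in range(max_iter):
        x2 = (x0 + xi) / 2                     # midpoint estimate
        ea = abs((x2 - x2_old) / x2) * 100     # approximate relative error, %
        if f(x0) * f(x2) < 0:
            xi = x2                            # root lies in [x0, x2]
        else:
            x0 = x2                            # root lies in [x2, xi]
        if ea <= es:
            break
        x2_old = x2
    return x2

# Drag-coefficient problem worked out later in these notes:
f = lambda c: (667.38 / c) * (1 - math.exp(-0.1468 * c)) - 40
print(bisection(f, 12, 16, es=0.6))            # converges near c = 14.8
```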
Regula Falsi (False Position) Method
Applicable to both polynomial and transcendental equations.
Also known as "linear interpolation".
A variant of the bisection method: it constructs approximating straight lines (chords) that bracket the root of the equation. It requires an initial interval x0 and x1 chosen so that f(x0) and f(x1) have opposite signs (lie on opposite sides of the axis).

Graphical depiction: the chord from (x0, f(x0)) to (x1, f(x1)) crosses the x-axis near the root.

Initial (bracketing) values:
x0 → +f(x0)
x1 → −f(x1)

Bracketing condition:
f(x0)·f(x1) < 0

By similar triangles:
(x1 − x2)/f(x1) = (x0 − x1)/[f(x0) − f(x1)]

and solving for the root estimate:
x2 = x1 − f(x1)(x0 − x1)/[f(x0) − f(x1)]
or
xR = x0 − f(x0)(x0 − x1)/[f(x0) − f(x1)]

Substitute x2 into the given function f(x) to obtain f(x2). If f(x2) > 0, replace the '+' limit with x2; if f(x2) < 0, replace the '−' limit with x2. Repeat until the terminating condition is satisfied.
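The false-position update can be sketched in Python much like the bisection loop; only the estimate formula changes. This is a minimal illustration (names are ours), again exercised on the drag-coefficient function used later.

```python
import math

def false_position(f, x0, x1, es=0.5, max_iter=100):
    """Regula falsi: f(x0) and f(x1) must have opposite signs."""
    if f(x0) * f(x1) > 0:
        raise ValueError("the initial values do not bracket a root")
    x2_old = x0
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))   # chord crosses the x-axis
        ea = abs((x2 - x2_old) / x2) * 100
        if f(x0) * f(x2) < 0:
            x1 = x2                             # keep the sub-interval [x0, x2]
        else:
            x0 = x2                             # keep the sub-interval [x2, x1]
        if ea <= es:
            break
        x2_old = x2
    return x2

f = lambda c: (667.38 / c) * (1 - math.exp(-0.1468 * c)) - 40
print(false_position(f, 12, 16, es=0.1))        # converges near c = 14.8
```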

OPEN METHODS
Fixed Point Iteration (MOSS)
Applicable to both polynomial and transcendental equations.
STEPS:
1. Tabulate values of x to obtain an initial value x0 for the iteration and the convergence test.
2. Formulate the iterative formula by rewriting the given function from f(x) = 0 to the form x = f(x); obtain all possible iterative formulas from the given function.
3. Test each possible iterative formula for convergence by substituting the initial value x0 into the derivative of the formula. If 0 < |f'(x0)| < 1 the iterative formula is convergent; if |f'(x0)| > 1, it is divergent.
4. Select, from among the convergent iterative formulas, the one with |f'(x0)| closest to 0 and rewrite the iterative formula using the variables Xn (previous value) and Xn+1 (new value).
5. Begin the iteration by substituting the initial value x0 into the iterative formula and compute the new value Xn+1. Substitute Xn+1 for Xn and recompute the new value.
6. Repeat until the terminating condition is satisfied.

Application of Newton's Second Law of Motion

V = (gm/c)(1 − e^(−(c/m)t))

Given: a parachutist jumps from a hot air balloon; at t = 10 s the velocity is 40 m/s. Determine the drag coefficient c.
m = 68.1 kg
v = 40 m/s
t = 10 s
g = 9.8 m/s²
c = implicit in the equation

Solve for the root of the equation
f(c) = (gm/c)(1 − e^(−(c/m)t)) − v

Substituting the given values:
f(c) = [(9.8)(68.1)/c](1 − e^(−(c/68.1)(10))) − 40
f(c) = (667.38/c)(1 − e^(−0.1468c)) − 40

Substituting values of c (the true value from the introduction is c ≈ 14.811), solve for the root using the bisection method by tabulating:

c      f(c)
4      34.099
8      17.644
12     6.602
16     −2.271
20     −8.402

∴ the root is between 12 and 16


(1) Xr = (XL + XU)/2 = (12 + 16)/2 = 14
f(14) = (667.38/14)(1 − e^(−0.1468(14))) − 40 = 1.565
f(12)·f(14) = (6.06)(1.565) = 9.487 > 0 ∴ the root is in the upper sub-interval; 14 replaces the lower limit
12   14   16

(2) Xr = (14 + 16)/2 = 15
f(15) = (667.38/15)(1 − e^(−0.1468(15))) − 40 = −0.428
f(14)·f(15) = (1.565)(−0.428) = −0.6698 < 0 ∴ the root is in the lower sub-interval; 15 replaces the upper limit
14   15   16

(3) Xr = (14 + 15)/2 = 14.5
f(14.5) = (667.38/14.5)(1 − e^(−0.1468(14.5))) − 40 = 0.549
f(14)·f(14.5) = (1.565)(0.549) = 0.859 > 0 ∴ 14.5 replaces the lower limit
14   14.5   15

(4) Xr = (14.5 + 15)/2 = 14.75
f(14.75) = (667.38/14.75)(1 − e^(−0.1468(14.75))) − 40 = 0.055
f(14.5)·f(14.75) = (0.549)(0.055) = 0.0302 > 0 ∴ 14.75 replaces the lower limit
14.5   14.75   15

(5) Xr = (14.75 + 15)/2 = 14.88
f(14.88) = (667.38/14.88)(1 − e^(−0.1468(14.88))) − 40 = −0.197
f(14.75)·f(14.88) = (0.055)(−0.197) = −0.011 < 0 ∴ 14.88 replaces the upper limit
14.75   14.88   15
Table
XL XU Xr f(L) f(r) f(L)f(r) Remarks
12 16 14 6.06 1.565 9.487 >0
14 16 15 1.565 -0.428 -0.6698 <0
14 15 14.5 1.565 0.549 0.859 >0
14.5 15 14.75 0.549 0.055 0.0302 >0
14.75 15 14.88 0.055 -0.197 -0.011 <0

εt = |(true value − estimated value)/true value| × 100
εa = |(Xr,new − Xr,old)/Xr,new| × 100

Iteration   εa%     εt%
(1)         100     5.476
(2)         6.67    1.276
(3)         3.45    2.099
(4)         1.69    0.411
(5)         0.874   0.466
Note: εad = 0.6 (set value); when εa ≤ 0.6%, stop.

Desired error (εad) – number of bisection iterations required:
n = log(ΔX0/Ead) / log 2
n = number of iterations
ΔX0 = initial interval width, XU − XL
From the sample, ΔX0 = 16 − 12 = 4. With the remaining absolute error after the iterations above,
Ea = |14.88 − 14.75|/2 = 0.065
n = log(4/0.065)/log 2 = 5.94,  n ≈ 6 (round up to the next integer)
False Position Method (Regula Falsi Method)

The method employs similar triangles on the chord through (Xl, f(Xl)) and (Xu, f(Xu)):
f(Xl)/(Xr − Xl) = f(Xu)/(Xr − Xu)
Xr = Xu − f(Xu)(Xl − Xu)/[f(Xl) − f(Xu)]

Same problem using the Regula Falsi method:
Xl = 12, f(Xl) = 6.06
Xu = 16, f(Xu) = −2.271

Xr = 16 − (−2.271)(12 − 16)/[6.06 − (−2.271)] = 14.91
f(14.91) = (667.38/14.91)(1 − e^(−0.1468(14.91))) − 40 = −0.255
f(12)·f(14.91) = (6.06)(−0.255) = −1.545 < 0          εa = 100%

Xu = 14.91, f(Xu) = −0.255
Xr = 14.91 − (−0.255)(12 − 14.91)/[6.06 − (−0.255)] = 14.79
f(14.79) = (667.38/14.79)(1 − e^(−0.1468(14.79))) − 40 = −0.022
f(14.79)·f(14.91) = (−0.022)(−0.255) = 0.0056 > 0

Xr = 14.91 − (−0.255)(14.79 − 14.91)/[(−0.022) − (−0.255)] = 14.78
εa = 0.068%

Method of Successive Substitution (MOSS)

Ex. Determine the root using MOSS:
f(x) = x³ − 4x² + x − 10,  terminate if εa ≤ 0.0001

Sol'n:
1. Assume values of x:
x      0    1    2    3    4    5
f(x)  −10  −12  −16  −16  −6   20
The root lies between 4 and 5, so take x0 = 4 (or x0 = 5).

2. Determine the possible iterative formulas x = f(x):
a) f(x) = −x³ + 4x² + 10
b) f(x) = (4x² − x + 10)^(1/3)           from x³ = 4x² − x + 10
c) f(x) = [(4x² − x + 10)/x]^(1/2)        from (x²)(x) = 4x² − x + 10
d) f(x) = (4x² − x + 10)/x²
e) f(x) = (x³ + x − 10)/(4x)              from (4x)(x) = x³ + x − 10
f) f(x) = 10/(x² − 4x + 1)                from x(x² − 4x + 1) = 10
g) f(x) = [(x³ + x − 10)/4]^(1/2)         from 4x² = x³ + x − 10

3. Convergence test: 0 < |f'(x0)| < 1 → convergent; |f'(x0)| > 1 → divergent. Substituting x0 = 4:
a) f'(x) = −3x² + 8x;  f'(4) = −16                                        divergent
b) f'(x) = (1/3)(4x² − x + 10)^(−2/3)(8x − 1);  f'(4) = 0.6084            convergent
c) f'(x) = (1/2)[(4x² − x + 10)/x]^(−1/2)·(4x² − 10)/x²;  f'(4) = 0.4034  convergent
d) f'(x) = [x²(8x − 1) − (4x² − x + 10)(2x)]/x⁴;  f'(4) = −0.25           convergent
e) f'(x) = [4x(3x² + 1) − (x³ + x − 10)(4)]/(4x)²;  f'(4) = 2.1536        divergent
f) f'(x) = −10(2x − 4)/(x² − 4x + 1)²;  f'(4) = −40                       divergent
g) f'(x) = (1/2)[(x³ + x − 10)/4]^(−1/2)·(3x² + 1)/4;  f'(4) = 1.6085     divergent

4. Select the convergent iterative formula with |f'(x0)| nearest 0: formula (d), |f'(x0)| = 0.25, the smallest value for x0 = 4.
With xn = previous value and xn+1 = new (iterative) value:
xn+1 = (4xn² − xn + 10)/xn²

5. Iterate, starting with x0 = 4:
                                                          εa
(1) x1 = [4(4)² − 4 + 10]/(4)² = 4.375                    8.57%
(2) x2 = [4(4.375)² − 4.375 + 10]/(4.375)² = 4.3095       1.52%
(3) x3 = [4(4.3095)² − 4.3095 + 10]/(4.3095)² = 4.3064    0.072%
(4) x4 = [4(4.3064)² − 4.3064 + 10]/(4.3064)² = 4.3070    0.014%
(5) x5 = [4(4.3070)² − 4.3070 + 10]/(4.3070)² = 4.3069    0.002%

Secant Method

From similar triangles through the points (x0, f(x0)) and (x1, f(x1)):

(x1 − x2)/f(x1) = (x1 − x0)/[f(x1) − f(x0)]
x2 = x1 − f(x1)(x1 − x0)/[f(x1) − f(x0)]      Secant method equation
or equivalently
x2 = x0 − f(x0)(x1 − x0)/[f(x1) − f(x0)]      Secant method equation

Using x = 3 and x = 4 for f(x) = x³ − 4x² + x − 10:

(1) x0 = 3, f(x0) = −16;  x1 = 4, f(x1) = −6
x2 = 4 − (−6)(4 − 3)/[−6 − (−16)] = 4.6
εa = |(4.6 − 4)/4.6| × 100 = 13.04%

(2) x0 = 4, f(x0) = −6;  x1 = 4.6, f(x1) = (4.6)³ − 4(4.6)² + 4.6 − 10 = 7.296
x2 = 4.6 − (7.296)(4.6 − 4)/[7.296 − (−6)] = 4.2708
εa = |(4.2708 − 4.6)/4.2708| × 100 = 7.702%

(3) x0 = 4.6, f(x0) = 7.296;  x1 = 4.2708, f(x1) = (4.2708)³ − 4(4.2708)² + 4.2708 − 10 = −0.7889
x2 = 4.2708 − (−0.7889)(4.2708 − 4.6)/[−0.7889 − 7.296] = 4.3030
εa = |(4.3030 − 4.2708)/4.3030| × 100 = 0.748%

(4) x0 = 4.2708, f(x0) = −0.7889;  x1 = 4.3030, f(x1) = (4.303)³ − 4(4.303)² + 4.303 − 10 = −0.0867
x2 = 4.3030 − (−0.0867)(4.3030 − 4.2708)/[−0.0867 − (−0.7889)] = 4.307
εa = |(4.307 − 4.3030)/4.307| × 100 = 0.092%

(5) x0 = 4.3030, f(x0) = −0.0867;  x1 = 4.307, f(x1) = (4.307)³ − 4(4.307)² + 4.307 − 10 ≈ 0.002
x2 ≈ 4.307
εa = |(4.307 − 4.307)/4.307| × 100 = 0%

Root = 4.307
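A minimal Python sketch of the secant update used above (names are illustrative):

```python
def secant(f, x0, x1, es=1e-4, max_iter=100):
    """Secant method: two starting guesses, no derivative required."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)     # secant update
        ea = abs((x2 - x1) / x2) * 100
        x0, x1 = x1, x2                          # slide the pair of points forward
        if ea <= es:
            break
    return x1

f = lambda x: x**3 - 4 * x**2 + x - 10
print(secant(f, 3.0, 4.0))                       # ≈ 4.307, as above
```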

Newton – Raphson Method

x1 = x0 − f(x0)/f'(x0)

Convergence test:
g(x) = |f(x0)·f''(x0)/[f'(x0)]²| < 1 ; if this holds, the initial value x0 converges on a root, otherwise it diverges.

Example:
Determine a root using Newton-Raphson:
f(x) = x³ − 4x² + x − 10,  εa ≤ 0.0001
Solution:

At x0 = 5:
f(x0) = (5)³ − 4(5)² + 5 − 10 = 20
f'(x0) = 3x² − 8x + 1 = 3(5)² − 8(5) + 1 = 36
f''(x0) = 6x − 8 = 6(5) − 8 = 22
g(x) = |(20)(22)/36²| = 0.3395 < 1 ∴ convergent

Iterative formula:
x(n+1) = x(n) − f(x(n))/f'(x(n));  x1 = 5 − 20/36 = 4.444

(1) x2 = 4.444 − f(4.444)/f'(4.444) = 4.3139
(2) x3 = 4.3139 − f(4.3139)/f'(4.3139) = 4.3069
(3) x4 = 4.3069 − [(4.3069)³ − 4(4.3069)² + 4.3069 − 10]/[3(4.3069)² − 8(4.3069) + 1] = 4.3069 ;  εa = 0.163%

ROOTS OF POLYNOMIAL FUNCTIONS
Muller's Method
General parabolic function:
f2(x) = a(x − x2)² + b(x − x2) + c

Graphical depiction: a parabola is fitted through three points (x0, f(x0)), (x1, f(x1)), (x2, f(x2)), and the root of the parabola is taken as the next estimate.

Initial values of x to be fitted to f2(x):
x0; f(x0)
x1; f(x1)
x2; f(x2)

Substituting the initial values into f2(x):
for x0:  f(x0) = a(x0 − x2)² + b(x0 − x2) + c   ------- Eq. 1
for x1:  f(x1) = a(x1 − x2)² + b(x1 − x2) + c   ------- Eq. 2
for x2:  f(x2) = a(x2 − x2)² + b(x2 − x2) + c   ------- Eq. 3

From equation 3: f(x2) = c. Substituting into equations 1 and 2:
f(x0) − f(x2) = a(x0 − x2)² + b(x0 − x2)        ------- Eq. 4
f(x1) − f(x2) = a(x1 − x2)² + b(x1 − x2)        ------- Eq. 5

To fit the initial values to f2(x), define
h0 = x1 − x0 ;  h1 = x2 − x1
δ0 = [f(x1) − f(x0)]/(x1 − x0) ;  δ1 = [f(x2) − f(x1)]/(x2 − x1)      Newton's divided differences

Hence,
b·h1 − a·h1² = δ1·h1                      since δ1·h1 = f(x2) − f(x1) and h1 = x2 − x1
b(h1 + h0) − a(h1 + h0)² = δ0·h0 + δ1·h1   since δ0·h0 + δ1·h1 = f(x2) − f(x0) and h1 + h0 = x2 − x0

The result is a system of two linear equations in the two unknowns a and b:
a = (δ1 − δ0)/(h1 + h0)      b = a·h1 + δ1      c = f(x2)

Referring to the coefficients of the parabolic function f2(x), the root estimate is
x3 = x2 − 2c/(b ± D)
where D is the discriminant value evaluated as
D = √(b² − 4ac)
Note:
 Use '+' if |b + D| > |b − D|
 Use '−' if |b + D| < |b − D|

Substitution Process
The technique uses a substitution process similar to the secant method:
x0 is discarded; for the next iteration,
previous x1 → x0
previous x2 → x1
previous x3 → x2

Terminating Condition
The process terminates if
εa = |(x3 − x2)/x3| × 100  ≤  εs

Bairstow's Method
General quadratic factor:
f2(x) = x² − rx − s
where r and s are the coefficients of the quadratic factor f2(x) of the given polynomial function, updated as:
r* = r + Δr
s* = s + Δs
r* and s* refer to the new values of r and s; r and s refer to the previous values; Δr and Δs refer to the change in the values of r and s.

The corresponding changes Δr and Δs are obtained by evaluating the system of linear equations whose coefficients are derived from applying synthetic division (twice) to the given function f(x).

Linear equations:
c2·Δr + c3·Δs = −b1
c1·Δr + c2·Δs = −b0

Thus, by Cramer's rule:
Δr = (b0·c3 − b1·c2)/(c2² − c1·c3)
Δs = (b1·c1 − b0·c2)/(c2² − c1·c3)

Terminating condition:
εr = |Δr/r*| × 100      εs = |Δs/s*| × 100

Note: since r and s are coefficients of the quadratic factor f2(x), both variables should satisfy the terminating condition before the iteration is terminated. The computed values of r and s are then used for the synthetic division of the next iteration.

Root formula:
x = [r ± √(r² + 4s)]/2

Possible outcomes:
1. If the given polynomial is of 5th order or higher, then to determine the remaining roots, deflate the given function by the quadratic factor f2(x). Obtain a new quadratic factor of the deflated function, using the previous values of r and s as initial values for the synthetic division of the deflated function.
2. If the given polynomial function is a 4th order polynomial, then to obtain the remaining roots, deflate the given function by the quadratic factor and apply the root formula to the (quadratic) deflated function.
3. If the given polynomial function is a 3rd order polynomial, then the remaining root is evaluated with the formula
x = −s/r
Example:
Determine a root of the function f(x) = x³ − 4x² + x − 10 using Muller's method; terminate if εa ≤ 0.0001%.
Solution:

x0 = 3; f(x0) = 3³ − 4(3)² + 3 − 10 = −16
x1 = 4; f(x1) = 4³ − 4(4)² + 4 − 10 = −6
x2 = 5; f(x2) = 5³ − 4(5)² + 5 − 10 = 20
h0 = 4 − 3 = 1;  h1 = 5 − 4 = 1
δ0 = [f(x1) − f(x0)]/(x1 − x0) = (−6 − (−16))/(4 − 3) = 10
δ1 = [f(x2) − f(x1)]/(x2 − x1) = (20 − (−6))/(5 − 4) = 26
a = (δ1 − δ0)/(h1 + h0) = (26 − 10)/(1 + 1) = 8
b = a·h1 + δ1 = (8)(1) + 26 = 34
c = f(x2) = 20
D = √(b² − 4ac) = √(34² − 4(8)(20)) = 22.7156
|b + D| > |b − D|: |34 + 22.7156| > |34 − 22.7156|, so use '+':
x3 = x2 + (−2c)/(b + D) = 5 + (−2(20))/(34 + 22.7156) = 4.2947
εa = |(4.2947 − 5)/4.2947| × 100 = 16.4219%

Second iteration:
x0 = 4; f(x0) = −6
x1 = 5; f(x1) = 20
x2 = 4.2947; f(x2) = (4.2947)³ − 4(4.2947)² + 4.2947 − 10 = −0.2697
h0 = 5 − 4 = 1;  h1 = 4.2947 − 5 = −0.7053
δ0 = 26;  δ1 = (−0.2697 − 20)/(4.2947 − 5) = 28.7391
a = (28.7391 − 26)/(−0.7053 + 1) = 9.2945
b = (9.2945)(−0.7053) + 28.7391 = 22.1837
c = −0.2697
D = √((22.1837)² − 4(9.2945)(−0.2697)) = 22.4086
|22.1837 + 22.4086| > |22.1837 − 22.4086|, use '+':
x3 = 4.2947 + (−2(−0.2697))/(22.1837 + 22.4086) = 4.3068
εa = |(4.3068 − 4.2947)/4.3068| × 100 = 0.281%

Third iteration:
x0 = 5; f(x0) = 20
x1 = 4.2947; f(x1) = −0.2697
x2 = 4.3068; f(x2) = −0.0025
δ0 = 28.7391;  δ1 = 22.0826
a = 9.6026;  b = 22.1988;  c = −0.0025
D = 22.201;  |22.201 + 22.1988| > |22.201 − 22.1988|, use '+':
x3 = 4.3069
εa = |(4.3069 − 4.3068)/4.3069| × 100 = 0.0023%

Fourth iteration:
x0 = 4.2947; f(x0) = −0.2697
x1 = 4.3068; f(x1) = −0.0025
x2 = 4.3069; f(x2) = −0.0003
h0 = 0.0121;  h1 = 0.0001
δ0 = 22.0826;  δ1 = 22
a = −6.7705;  b = 21.9993;  c = −0.0003
D = 21.9991
|21.9993 + 21.9991| > |21.9993 − 21.9991|, use '+':
x3 = 4.3069 + (−2(−0.0003))/(21.9993 + 21.9991) = 4.3069
εa = |(4.3069 − 4.3069)/4.3069| × 100 = 0%
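A compact Python sketch of Muller's method as derived above (function names are ours; cmath keeps the discriminant valid even when it becomes negative):

```python
import cmath

def muller(f, x0, x1, x2, es=1e-4, max_iter=50):
    """Muller's method: fit a parabola through three points, step to its root."""
    for _ in range(max_iter):
        h0, h1 = x1 - x0, x2 - x1
        d0 = (f(x1) - f(x0)) / h0
        d1 = (f(x2) - f(x1)) / h1
        a = (d1 - d0) / (h1 + h0)
        b = a * h1 + d1
        c = f(x2)
        D = cmath.sqrt(b * b - 4 * a * c)            # complex-safe discriminant
        denom = b + D if abs(b + D) > abs(b - D) else b - D
        x3 = x2 - 2 * c / denom
        ea = abs((x3 - x2) / x3) * 100
        x0, x1, x2 = x1, x2, x3                      # discard the oldest point
        if ea <= es:
            break
    return x2

f = lambda x: x**3 - 4 * x**2 + x - 10
print(muller(f, 3, 4, 5))        # ≈ 4.3069 (possibly with a negligible imaginary part)
```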
Example:
f(x) = x⁵ − 3.5x⁴ + 2.75x³ + 2.125x² − 3.815x + 1.25; use Bairstow's method with initial values r = s = −1 and terminate when εr and εs ≤ 0.05%.

Solution:
First iteration (r = −1, s = −1). Synthetic division of the a-coefficients gives the b-row; synthetic division of the b-row gives the c-row:

a:  1    −3.5    2.75    2.125    −3.815    1.25
b:  b0 = 1   b1 = −4.5   b2 = 6.25   b3 = 0.375   b4 = −10.44   b5 = 11.315
c:  c0 = 1   c1 = −5.5   c2 = 10.75  c3 = −4.875

c2Δr + c3Δs = −b1:   10.75Δr − 4.875Δs = 4.5
c1Δr + c2Δs = −b0:   −5.5Δr + 10.75Δs = −1

Δr = [(4.5)(10.75) − (−4.875)(−1)] / [(10.75)(10.75) − (−4.875)(−5.5)] = 43.5/88.75 = 0.4901
Δs = [(10.75)(−1) − (4.5)(−5.5)] / 88.75 = 14/88.75 = 0.1577
r* = −1 + 0.4901 = −0.5099
s* = −1 + 0.1577 = −0.8423
εr = |0.4901/−0.5099| × 100 = 96.117%
εs = |0.1577/−0.8423| × 100 = 18.723%

Second iteration (r = −0.5099, s = −0.8423):
b:  1   −4.0099   3.9523   3.4872   −8.9222   2.8621
c:  1   −4.5198   5.4146   4.5333

c2Δr + c3Δs = −b1:   5.4146Δr + 4.5333Δs = 4.0099
c1Δr + c2Δs = −b0:   −4.5198Δr + 5.4146Δs = −1
Δr = 0.5269 ;  Δs = 0.2551
r* = −0.5099 + 0.5269 = 0.0170
s* = −0.8423 + 0.2551 = −0.5871
εr = |0.5269/0.0170| × 100 = 3099%
εs = |0.2551/−0.5871| × 100 = 43.45%

Third iteration (r = 0.0170, s = −0.5871):
b:  1   −3.483   2.1037   4.2057   …
c:  1   −3.466   1.4577   6.2654

c2Δr + c3Δs = −b1:   1.4577Δr + 6.2654Δs = 3.483
c1Δr + c2Δs = −b0:   −3.466Δr + 1.4577Δs = −1
Δr = 0.4758 ;  Δs = 0.4453
r* = 0.0170 + 0.4758 = 0.4928
s* = −0.5871 + 0.4453 = −0.1418
εr = |0.4758/0.4928| × 100 = 96.55%
εs = |0.4453/−0.1418| × 100 = 314.03%

Continuing the iterations until both errors vanish gives
Δr = 0,  r* = −0.5,  εr = 0%
Δs = 0,  s* = 0.5,   εs = 0%
f2(x) = x² − rx − s = x² + 0.5x − 0.5
x = [−0.5 ± √((−0.5)² + 4(0.5))]/2
x1 = 0.5 ;  x2 = −1

Deflate the polynomial:
(x⁵ − 3.5x⁴ + 2.75x³ + 2.125x² − 3.815x + 1.25) ÷ (x² + 0.5x − 0.5) = x³ − 4x² + 5.25x − 2.5
and the remaining roots are obtained from the deflated cubic.
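The whole quadratic-factor extraction can be sketched in Python. The code below follows the standard textbook formulation of Bairstow's method, in which the correction equations use the remainder terms of the synthetic division (the last entries of the b-row); it is a minimal illustration with our own function names, not a transcription of the tabulation above.

```python
import cmath

def bairstow(coeffs, r=-1.0, s=-1.0, es=0.05, max_iter=100):
    """coeffs in descending powers, e.g. [1, -3.5, 2.75, 2.125, -3.815, 1.25].
    Repeatedly extracts quadratic factors x^2 - r*x - s and deflates."""
    a, roots = list(coeffs), []
    while len(a) > 3:
        for _ in range(max_iter):
            b = [a[0], a[1] + r * a[0]]              # first synthetic division
            for ai in a[2:]:
                b.append(ai + r * b[-1] + s * b[-2])
            c = [b[0], b[1] + r * b[0]]              # second synthetic division
            for bi in b[2:-1]:
                c.append(bi + r * c[-1] + s * c[-2])
            det = c[-2] * c[-2] - c[-3] * c[-1]      # Cramer's rule
            dr = (-b[-2] * c[-2] + b[-1] * c[-3]) / det
            ds = (-b[-1] * c[-2] + b[-2] * c[-1]) / det
            r, s = r + dr, s + ds
            if abs(dr / r) * 100 <= es and abs(ds / s) * 100 <= es:
                break
        D = cmath.sqrt(r * r + 4 * s)                # roots of x^2 - r*x - s
        roots += [(r + D) / 2, (r - D) / 2]
        a = b[:-2]                                   # deflated polynomial
    if len(a) == 3:                                  # quadratic remainder
        D = cmath.sqrt(a[1] * a[1] - 4 * a[0] * a[2])
        roots += [(-a[1] + D) / (2 * a[0]), (-a[1] - D) / (2 * a[0])]
    elif len(a) == 2:                                # linear remainder
        roots.append(-a[1] / a[0])
    return roots

print(bairstow([1, -3.5, 2.75, 2.125, -3.815, 1.25]))   # five roots of the quintic
```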

SYSTEM OF LINEAR EQUATION


Direct Methods

1. Determinants
2. Inverse Matrices
3. Gaussian Techniques
4. Matrix Decomposition

Iterative Methods
1. Gauss – Jacobi Method
2. Gauss – Seidel Method

Matrix Decomposition
1. Crout's / Cholesky's Method
Given a square matrix

[A] = [ a11  a12  a13 ]
      [ a21  a22  a23 ]
      [ a31  a32  a33 ]

Using Crout's Method:
[A] = [L]·[U]

[L] = [ L11   0    0  ]        [U] = [ 1  U12  U13 ]
      [ L21  L22   0  ]              [ 0   1   U23 ]
      [ L31  L32  L33 ]              [ 0   0    1  ]

(1) L11:  a11 = L11(1) + 0(0) + 0(0)        →  L11 = a11
    L21:  a21 = L21(1) + L22(0) + 0(0)      →  L21 = a21
    L31:  a31 = L31(1) + L32(0) + L33(0)    →  L31 = a31
(2) U12:  a12 = L11·U12 + 0(1) + 0(0)       →  U12 = a12/L11
    U13:  a13 = L11·U13 + 0·U23 + 0(1)      →  U13 = a13/L11
(3) L22:  a22 = L21·U12 + L22(1) + 0(0)     →  L22 = a22 − L21·U12
    L32:  a32 = L31·U12 + L32(1) + L33(0)   →  L32 = a32 − L31·U12
(4) U23:  a23 = L21·U13 + L22·U23 + 0(1)    →  U23 = (a23 − L21·U13)/L22
(5) L33:  a33 = L31·U13 + L32·U23 + L33(1)  →  L33 = a33 − L31·U13 − L32·U23

From [A][X] = [C], after decomposition:
[L][U][X] = [C]
let
[Y] = [U][X]      (1)
then
[L][Y] = [C]      (2)

From matrix equation (2), evaluate the variables of matrix [Y] by forward substitution. Substitute the values of matrix [Y] into the initial assumption, equation (1), and determine the values of the unknown variables in matrix [X] by back (reverse) substitution.
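The recurrences above can be written as two short loops: one builds L and U column by column, the other performs the forward and back substitutions. This is a minimal Python sketch (names are ours), checked against the 3×3 system solved in the example that follows.

```python
def crout_decompose(A):
    """Crout LU factorization: A = L*U with a unit diagonal on U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[float(i == j) for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j, n):                      # column j of L
            L[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(j))
        for i in range(j + 1, n):                  # row j of U
            U[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(j))) / L[j][j]
    return L, U

def solve_lu(L, U, C):
    n = len(C)
    Y = [0.0] * n
    for i in range(n):                             # forward substitution: L*Y = C
        Y[i] = (C[i] - sum(L[i][k] * Y[k] for k in range(i))) / L[i][i]
    X = [0.0] * n
    for i in reversed(range(n)):                   # back substitution: U*X = Y
        X[i] = Y[i] - sum(U[i][k] * X[k] for k in range(i + 1, n))
    return X

A = [[6, -2, -3], [-2, 5, -2], [-3, -2, 8]]
C = [-11, 12, 13]
L, U = crout_decompose(A)
print(solve_lu(L, U, C))                           # [1.0, 4.0, 3.0]
```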

Example:
Given
−11 = 6X1 − 2X2 − 3X3
12 = −2X1 + 5X2 − 2X3
13 = −3X1 − 2X2 + 8X3
Determine the unknown coefficients using Crout's Method.

Solution:
[C] = [A][X]:
[ −11 ]   [  6  −2  −3 ] [ X1 ]
[  12 ] = [ −2   5  −2 ] [ X2 ]
[  13 ]   [ −3  −2   8 ] [ X3 ]

L11 = a11 = 6 ;  L21 = a21 = −2 ;  L31 = a31 = −3
U12 = a12/L11 = −2/6 = −1/3
U13 = a13/L11 = −3/6 = −1/2
L22 = a22 − L21·U12 = 5 − (−2)(−1/3) = 13/3
L32 = a32 − L31·U12 = −2 − (−3)(−1/3) = −3
U23 = (a23 − L21·U13)/L22 = [−2 − (−2)(−1/2)]/(13/3) = −9/13
L33 = a33 − L31·U13 − L32·U23 = 8 − (−3)(−1/2) − (−3)(−9/13) = 115/26

[L] = [ 6     0      0    ]        [U] = [ 1  −1/3  −1/2  ]
      [ −2   13/3    0    ]              [ 0   1    −9/13 ]
      [ −3   −3    115/26 ]              [ 0   0     1    ]

Forward substitution, [L][Y] = [C]:
6Y1 = −11                                →  Y1 = −11/6
−2Y1 + (13/3)Y2 = 12                     →  Y2 = [12 + 2(−11/6)]/(13/3) = 25/13
−3Y1 − 3Y2 + (115/26)Y3 = 13             →  Y3 = [13 + 3(−11/6) + 3(25/13)]/(115/26) = 3

Substitute the values of Y into the initial assumption, [Y] = [U][X], and back-substitute:
(0)X1 + (0)X2 + (1)X3 = 3                →  X3 = 3
(0)X1 + (1)X2 − (9/13)(3) = 25/13        →  X2 = 4
(1)X1 − (1/3)(4) − (1/2)(3) = −11/6      →  X1 = 1

2. Doolittle Method
Given a square matrix

[A] = [ A11  A12  A13 ]        [L] = [ 1    0    0 ]        [U] = [ U11  U12  U13 ]
      [ A21  A22  A23 ]              [ L21  1    0 ]              [  0   U22  U23 ]
      [ A31  A32  A33 ]              [ L31  L32  1 ]              [  0    0   U33 ]

Derived formulas:
1) U11 = A11
2) U12 = A12
3) U13 = A13
4) L21 = A21/U11 ;  L31 = A31/U11
5) U22 = A22 − L21·U12 ;  U23 = A23 − L21·U13 ;  U33 = A33 − L31·U13 − L32·U23
6) L32 = (A32 − L31·U12)/U22

[L]·[y] = [C]    forward substitution
[y] = [U]·[X]    back (reverse) substitution

Example:
Given
−11 = 6X1 − 2X2 − 3X3
12 = −2X1 + 5X2 − 2X3
13 = −3X1 − 2X2 + 8X3
Determine the unknown coefficients using the Doolittle Method.
Solution:
[C] = [A]·[X]

U11 = 6 ;  U12 = −2 ;  U13 = −3
L21 = A21/U11 = −2/6 = −1/3
L31 = A31/U11 = −3/6 = −1/2
U22 = A22 − L21·U12 = 5 − (−1/3)(−2) = 13/3
U23 = A23 − L21·U13 = −2 − (−1/3)(−3) = −3
L32 = (A32 − L31·U12)/U22 = [−2 − (−1/2)(−2)]/(13/3) = −9/13
U33 = A33 − L31·U13 − L32·U23 = 8 − (−1/2)(−3) − (−9/13)(−3) = 115/26

[L] = [ 1      0     0 ]        [U] = [ 6  −2   −3    ]
      [ −1/3   1     0 ]              [ 0  13/3  −3   ]
      [ −1/2  −9/13  1 ]              [ 0   0   115/26 ]

Forward substitution, [L]·[y] = [C]:
(1)y1 = −11                                   →  y1 = −11
(−1/3)(−11) + y2 = 12                         →  y2 = 12 − 11/3 = 25/3
(−1/2)(−11) + (−9/13)(25/3) + y3 = 13         →  y3 = 345/26

Back substitution, [y] = [U]·[X]:
(115/26)X3 = 345/26      →  X3 = 3
(13/3)X2 − 3(3) = 25/3   →  X2 = 4
6X1 − 2(4) − 3(3) = −11  →  X1 = 1

ITERATIVE METHODS
Also called "approximate methods", they provide an alternative to elimination methods and consist of guessing a starting value and then using a systematic procedure to obtain successively better estimates of the solution.

Gauss-Jacobi Method
Steps:
a. Rearrange the order of the equations so that the dominant coefficient of each unknown variable lies on the main diagonal of the system of linear equations.
b. Create the iterative formula for each variable by transposing the dominant term and dividing the equation by the dominant coefficient.
c. Begin the iteration by substituting initial values (0's) for the variables to obtain the new values. The process is repeated by substituting the computed values into the variables of the next iteration until the terminating condition is satisfied.
NOTE:
Since all the variables belong to one linear system, all the variables must satisfy the terminating condition; otherwise, continue iterating.

Example:
Given: 25=−4 x 1−2 x 2+ 11 x 3
−11=7 x1 −3 x 2−4 x 3
3=−3 x1 +6 x 2−2 x 3

Solution:
Step 1: Rearrange so the dominant coefficients lie on the diagonal:
−11 = 7x1 − 3x2 − 4x3    → eq. 1
3 = −3x1 + 6x2 − 2x3     → eq. 2
25 = −4x1 − 2x2 + 11x3   → eq. 3

Step 2: Iterative equations:
x1 = (−11 + 3x2 + 4x3)/7
x2 = (3 + 3x1 + 2x3)/6
x3 = (25 + 4x1 + 2x2)/11

Step 3: Iterations (k):
k = 1: start with x1 = x2 = x3 = 0
x1 = (−11 + 3(0) + 4(0))/7 = −1.5714
x2 = (3 + 3(0) + 2(0))/6 = 0.5
x3 = (25 + 4(0) + 2(0))/11 = 2.2727

k = 2 (all values taken from k = 1):
x1 = (−11 + 3(0.5) + 4(2.2727))/7 = −0.0585
x2 = (3 + 3(−1.5714) + 2(2.2727))/6 = 0.4719
x3 = (25 + 4(−1.5714) + 2(0.5))/11 = 1.7922

εa,x1 = |(−0.0585 − (−1.5714))/(−0.0585)| × 100% = 2586%
εa,x2 = |(0.4719 − 0.5)/0.4719| × 100% = 5.955%
εa,x3 = |(1.7922 − 2.2727)/1.7922| × 100% = 26.81%

K X1 X2 X3 εa X1 εa X2 εa X3

1 -1.5714 0.5 2.2727 0 0 0
2 -0.0585 0.4719 1.7922 2586 5.955 26.81
3 -0.3451 1.0682 2.3373 83.05 55.8182 23.3218
4 0.222 1.1066 2.3415 83.14 3.4701 0.1794
5 0.2408 1.3915 2.5547 7.8073 20.4743 8.3454
6 0.4848 1.472 2.6133 50.33 5.4688 2.2424
7 -0.1939 1.6135 2.7167 350.0258 8.7698 3.7848
8 0.6725 1.3084 2.4956 128.8327 23.3186 8.8596
9 0.4154 1.6681 2.7552 61.8922 21.5635 9.4222
10 0.7179 1.6261 2.7271 42.1368 2.5829 1.0304

Gauss-Seidel Method
Similar to the Gauss-Jacobi method except that the new values are introduced into each iterative formula as soon as they become available, in order to accelerate convergence.

Comparison of Gauss-Jacobi with Gauss-Seidel: in Gauss-Jacobi every formula in iteration k uses only values from iteration k − 1, while in Gauss-Seidel each formula immediately uses the values already updated within iteration k.

Example:
Given: 25 = −4x1 − 2x2 + 11x3
−11 = 7x1 − 3x2 − 4x3
3 = −3x1 + 6x2 − 2x3

Solution (same iterative equations as before):
x1 = (−11 + 3x2 + 4x3)/7
x2 = (3 + 3x1 + 2x3)/6
x3 = (25 + 4x1 + 2x2)/11

k = 1: start with x1 = x2 = x3 = 0
x1 = (−11 + 3(0) + 4(0))/7 = −1.5714
x2 = (3 + 3(−1.5714) + 2(0))/6 = −0.2857
x3 = (25 + 4(−1.5714) + 2(−0.2857))/11 = 1.6494

k = 2:
x1 = (−11 + 3(−0.2857) + 4(1.6494))/7 = −0.7514
x2 = (3 + 3(−0.7514) + 2(1.6494))/6 = 0.6741
x3 = (25 + 4(−0.7514) + 2(0.6741))/11 = 2.1221

CURVE FITTING
Least Square Regression
Regression is an approximation strategy for deriving an approximating function that fits the shape or general trend of the data without necessarily matching the individual points. It is used where substantial error is associated with the data; there, polynomial interpolation is inappropriate and may yield unsatisfactory results when used to predict intermediate values. Least-squares regression devises a criterion that establishes a basis for the fit: a curve is derived that minimizes the discrepancy between the data points and the curve.

Linear Regression
The simplest example of least-squares approximation is fitting a straight line to a set of paired observations:
(x1, y1), (x2, y2), (x3, y3), …
The mathematical expression for the model is:
General form (nth-order polynomial):
fn(x) = a0 + a1x + a2x² + … + an·x^n

Linear regression:
f1(x) = a0 + a1x + e
Where:
a0 and a1 represent the intercept and slope, respectively, of a straight line
(y = mx + b slope-intercept form, with m = a1 and b = a0)
e is the error, or residual, between the model and the observations.

Rearranging for the residual:
e = y − a0 − a1x
Thus the error, or residual, is the discrepancy between the true value of y and the approximate value, a0 + a1x, predicted by the linear equation.


Criteria for Best Fit
In order to fit the set of data points with a linear function, the errors, or residuals, must be minimized. The possible criteria for a linear polynomial are:

a.) Summation of residuals/errors
εr = Σ(i=1 to n) ei = Σ(i=1 to n) (yi − a0 − a1·xi)
where n = total number of points.
However, this criterion is inadequate: when fitting a straight line to two points, the best fit is obviously the line connecting them, yet any straight line passing through the midpoint of the connecting line (except a perfectly vertical one) gives a minimum value of zero, because positive and negative errors cancel.

b.) Summation of absolute residuals/errors
Σ(i=1 to n) |ei|
Another logical criterion would be to minimize the sum of the absolute values of the discrepancies. However, this is also inadequate: for four points, any straight line falling within a band bounded by the data minimizes the sum of absolute values, so this criterion does not give a unique best fit either.

c.) Summation of squared residuals/errors
A third strategy, the minimax criterion, chooses the line that minimizes the maximum distance that an individual point falls from the line; it is ill-suited for regression because it gives undue influence to an outlier, i.e. a single point with a large error (the minimax principle is, however, sometimes well suited for fitting simple functions). A strategy that overcomes the shortcomings of the aforementioned approaches is to minimize the sum of the squares of the residuals between the measured y and the y calculated with the linear model:
Sr = Σ(i=1 to n) ei² = Σ(i=1 to n) (yi,measured − yi,model)² = Σ(i=1 to n) (yi − a0 − a1·xi)²

Minimize the errors (maxima–minima):
To determine a0 and a1, Sr is differentiated with respect to each coefficient and the derivatives are set equal to zero, which yields a minimum of Sr:
∂Sr/∂a0 = −2 Σ(yi − a0 − a1·xi) = 0
∂Sr/∂a1 = −2 Σ(yi − a0 − a1·xi)·xi = 0

0 = Σyi − Σa0 − a1Σxi
0 = Σxi·yi − a0Σxi − a1Σxi²

These can be expressed as a set of two simultaneous linear equations in the two unknowns a0 and a1, called the normal equations, and solved simultaneously. Letting Σa0 = n·a0:

Σyi = n·a0 + a1Σxi             (1)
Σxi·yi = a0Σxi + a1Σxi²         (2)

Thus
a1 = [nΣxi·yi − Σxi·Σyi] / [nΣxi² − (Σxi)²]
a0 = ȳ − a1·x̄
y = f(x) = a0 + a1·x
where x̄ and ȳ are the mean values of x and y, evaluated as:
x̄ = Σxi/n      ȳ = Σyi/n
n = number of data points
ei = y_actual − y_model,  y_model = a0 + a1·x
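The normal-equation formulas translate directly into a few lines of Python. This sketch (helper name is ours) reproduces the coefficients obtained in the worked example that follows.

```python
def linear_regression(x, y):
    """Least-squares straight line y = a0 + a1*x through paired observations."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    sxx = sum(xi * xi for xi in x)
    a1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a0 = sy / n - a1 * (sx / n)                  # a0 = ybar - a1*xbar
    return a0, a1

x = [0.05, 0.11, 0.15, 0.31, 0.46, 0.52, 0.7]
y = [0.956, 0.89, 0.832, 0.717, 0.571, 0.539, 0.378]
print(linear_regression(x, y))                   # ≈ (0.9834, -0.8699)
```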
Example:
Given
x 0.05 0.11 0.15 0.31 0.46 0.52 0.7
y 0.956 0.89 0.832 0.717 0.571 0.539 0.378

Fit the following data using Linear Regression.


Solution:
n    x      y_actual   x·y      x²       y_model    ei
1    0.05   0.956      0.0478   0.0025   0.9399     0.0161
2    0.11   0.89       0.0979   0.0121   0.8877     0.0023
3    0.15   0.832      0.1248   0.0225   0.8529    −0.0209
4    0.31   0.717      0.2222   0.0961   0.7137     0.0033
5    0.46   0.571      0.2627   0.2116   0.5832    −0.0122
6    0.52   0.539      0.2803   0.2704   0.5311     0.0079
7    0.7    0.378      0.2646   0.49     0.3744     0.0036
Σx = 2.3   Σy = 4.883   Σxy = 1.3004   Σx² = 1.1052

a1 = [nΣxi·yi − Σxi·Σyi]/[nΣxi² − (Σxi)²] = [(7)(1.3004) − (2.3)(4.883)]/[(7)(1.1052) − (2.3)²] = −0.8699
x̄ = Σxi/n = 2.3/7 = 0.3286
ȳ = Σyi/n = 4.883/7 = 0.6976
a0 = ȳ − a1·x̄ = 0.6976 − (−0.8699)(0.3286) = 0.9834
f1(x) = a0 + a1x = 0.9834 + (−0.8699)x      y_model

@ x = 0.05:  f1(x) = 0.9834 + (−0.8699)(0.05) = 0.9399

Criteria for Best Fit (summary)

(1) Summation of residual errors:
er = Σ(i=1 to n) ei = Σ(i=1 to n) (yi − a0 − a1xi)

(2) Sum of absolute residual errors:
er = Σ(i=1 to n) |ei| = Σ(i=1 to n) |yi − a0 − a1xi|

(3) Sum of squares of residuals:
Sr = Σ(i=1 to n) ei² = Σ(i=1 to n) (y_actual − y_model)² = Σ(i=1 to n) (yi − a0 − a1xi)²

To solve for the coefficients, minimize the errors (maxima–minima).

Standard error of the estimate, Sy/x:
Sy/x = √(Sr/(n − 2))
Sy/x = error of the estimate of y with respect to x

Correlation coefficient r and coefficient of determination r²:
r² = (St − Sr)/St
r = [nΣxi·yi − Σxi·Σyi] / [√(nΣxi² − (Σxi)²)·√(nΣyi² − (Σyi)²)]

Polynomial Regression
Another method of fitting the data points. The least-squares procedure can easily be extended to fit the data with a higher-order polynomial.

General form (2nd order):
y = a0 + a1x + a2x² + e
The residual, or error, in a second-order polynomial is then defined as:
e = y − a0 − a1x − a2x²

Applying the criterion for best fit, the sum of the squares of the residuals in this case is:
Sr = Σ(i=1 to n) ei² = Σ(i=1 to n) (yi − a0 − a1xi − a2xi²)²

Solving for the variables a0, a1, a2 by minimizing the error (maxima–minima):
∂Sr/∂a0 = −2 Σ(yi − a0 − a1xi − a2xi²) = 0
∂Sr/∂a1 = −2 Σ(yi − a0 − a1xi − a2xi²)·xi = 0
∂Sr/∂a2 = −2 Σ(yi − a0 − a1xi − a2xi²)·xi² = 0

Then,
0 = Σyi − Σa0 − a1Σxi − a2Σxi²
0 = Σxi·yi − a0Σxi − a1Σxi² − a2Σxi³
0 = Σxi²·yi − a0Σxi² − a1Σxi³ − a2Σxi⁴

Letting Σa0 = n·a0, the equations can be expressed as a set of 3 simultaneous linear equations with 3 unknown variables (a0, a1, a2):
Σyi      = n·a0    + a1Σxi   + a2Σxi²
Σxi·yi   = a0Σxi   + a1Σxi²  + a2Σxi³
Σxi²·yi  = a0Σxi²  + a1Σxi³  + a2Σxi⁴

Evaluate the coefficients a0, a1, a2 using a system of linear equations.

 2nd-order regression (normal equations in matrix form):
[ Σyi     ]   [ n      Σxi    Σxi²  ] [ a0 ]
[ Σxi·yi  ] = [ Σxi    Σxi²   Σxi³  ] [ a1 ]
[ Σxi²·yi ]   [ Σxi²   Σxi³   Σxi⁴  ] [ a2 ]
Substitute into the form f2(x) = a0 + a1x + a2x².

 3rd-order regression:
[ Σyi     ]   [ n      Σxi    Σxi²   Σxi³ ] [ a0 ]
[ Σxi·yi  ] = [ Σxi    Σxi²   Σxi³   Σxi⁴ ] [ a1 ]
[ Σxi²·yi ]   [ Σxi²   Σxi³   Σxi⁴   Σxi⁵ ] [ a2 ]
[ Σxi³·yi ]   [ Σxi³   Σxi⁴   Σxi⁵   Σxi⁶ ] [ a3 ]
Substitute into the form f3(x) = a0 + a1x + a2x² + a3x³.
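The normal equations above can be assembled and solved in a few lines. The sketch below assumes NumPy is available and uses our own helper name; it reproduces the coefficients of the 2nd-order example that follows.

```python
import numpy as np

def poly_regression(x, y, order):
    """Least-squares polynomial of the given order via the normal equations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    m = order + 1
    A = np.array([[np.sum(x ** (i + j)) for j in range(m)] for i in range(m)])
    b = np.array([np.sum(y * x ** i) for i in range(m)])
    return np.linalg.solve(A, b)                 # coefficients a0, a1, ..., a_order

x = [0, 1, 2, 3, 4, 5]
y = [2.1, 7.7, 13.6, 27.2, 40.9, 61.1]
print(poly_regression(x, y, 2))                  # ≈ [2.4786, 2.3593, 1.8607]
```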



Standard deviation, Sy:
Sy = √[ Σ(yi − ȳ)² / (n − 1) ]

Sum of the squares of the residuals between the measured y and the model:
Sr = Σ(yi − a0 − a1xi)²

Standard error of the estimate:
Sy/x = √(Sr/(n − 2))
The regression is an improvement if Sy/x < Sy.

You can also use r² (coefficient of determination):
St = Σ(yi − ȳ)²
St = the total sum of squares of the residuals between the data points and the mean.
variance = Sy² = Σ(yi − ȳ)²/(n − 1) = [Σyi² − (Σyi)²/n]/(n − 1)

Example:
Fit the following data using 2nd Order Regression
x 0 1 2 3 4 5
y 2.1 7.7 13.6 27.2 40.9 61.1
Solution:
N = 6 ;  ei = yi − y_model

x   y      x²   x³    x⁴    x⁵     x⁶      x·y     x²·y
0   2.1    0    0     0     0      0       0       0
1   7.7    1    1     1     1      1       7.7     7.7
2   13.6   4    8     16    32     64      27.2    54.4
3   27.2   9    27    81    243    729     81.6    244.8
4   40.9   16   64    256   1024   4096    163.6   654.4
5   61.1   25   125   625   3125   15625   305.5   1527.5
Σ   15  152.6   55    225   979    4425    20515   585.6   2488.8

Normal equations:
[ 152.6  ]   [ 6    15    55  ] [ a0 ]
[ 585.6  ] = [ 15   55    225 ] [ a1 ]
[ 2488.8 ]   [ 55   225   979 ] [ a2 ]

a0 = 2.4786 ;  a1 = 2.3593 ;  a2 = 1.8607

2nd-order equation using 2nd-order polynomial regression:
f2(x) = a0 + a1x + a2x²
f2(x) = 2.4786 + 2.3593x + 1.8607x²

x          0         1        2        3         4        5         Σ
y_model    2.4786    6.6986   14.64    26.3028   41.687   60.7926   152.5996
ei        −0.3786    1.0014  −1.04     0.8972   −0.787    0.3074    ≈ 0
ei²        0.1433    1.0028   1.0816   0.805     0.6194   0.0945    3.7466
(yi − ȳ)²  544.4289  314.4593 140.0199 3.1223    239.2281 1272.1349 2513.3934

ȳ = Σyi/n = 152.6/6 = 25.433
ei = yi − y_model
Sr = Σei² = 3.7466
St = Σ(yi − ȳ)² = 2513.3934
r² = (St − Sr)/St = (2513.3934 − 3.7466)/2513.3934 = 0.9985

INTERPOLATION
The most common method of estimating intermediate values between precise data points is polynomial interpolation with an nth-order polynomial:
fn(x) = a0 + a1x + a2x² + a3x³ + … + an·x^n
For n + 1 data points, there is one and only one polynomial of order n that passes through all of the data points; polynomial interpolation consists of determining that unique nth-order polynomial. Although there is only one nth-order polynomial that fits n + 1 points, there is a variety of mathematical formats in which this polynomial can be expressed.

Lagrange Interpolating Polynomials

Simply a reformulation of the Newton polynomial that avoids the computation of divided differences. Represented concisely as:
fn(x) = Σ(i=0 to n) Li(x)·f(xi)
where
Li(x) = Π(j=0, j≠i, to n) (x − xj)/(xi − xj)
Π denotes the "product of" (the Lagrange terms).

Graphical Depiction

 First-order interpolation (linear), through the points (x0, f(x0)) and (x1, f(x1)):
f1(x) = L0(x)·f(x0) + L1(x)·f(x1)
f1(x) = [(x − x1)/(x0 − x1)]·f(x0) + [(x − x0)/(x1 − x0)]·f(x1)

 Second-order interpolation (quadratic or parabolic), through (x0, f(x0)), (x1, f(x1)), (x2, f(x2)):
f2(x) = L0(x)·f(x0) + L1(x)·f(x1) + L2(x)·f(x2)
f2(x) = [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))]·f(x0) + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))]·f(x1) + [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))]·f(x2)

The summation of all the products designated by the Lagrange terms is the unique nth-order polynomial that passes exactly through all n + 1 data points.

 Third-order interpolation, through (x0, f(x0)), (x1, f(x1)), (x2, f(x2)), (x3, f(x3)):
f3(x) = L0(x)·f(x0) + L1(x)·f(x1) + L2(x)·f(x2) + L3(x)·f(x3)
f3(x) = [(x − x1)(x − x2)(x − x3)/((x0 − x1)(x0 − x2)(x0 − x3))]·f(x0)
      + [(x − x0)(x − x2)(x − x3)/((x1 − x0)(x1 − x2)(x1 − x3))]·f(x1)
      + [(x − x0)(x − x1)(x − x3)/((x2 − x0)(x2 − x1)(x2 − x3))]·f(x2)
      + [(x − x0)(x − x1)(x − x2)/((x3 − x0)(x3 − x1)(x3 − x2))]·f(x3)

Example:
Fit the following data points using Lagrange interpolation:
        x0    x1    x2    x3
x        5    −1    −3     2
f(x)   120    −6   −32     3
Determine the value of the function at x = 7.

Solution:
f3(x) = [(x + 1)(x + 3)(x − 2)/((5 + 1)(5 + 3)(5 − 2))](120)
      + [(x − 5)(x + 3)(x − 2)/((−1 − 5)(−1 + 3)(−1 − 2))](−6)
      + [(x − 5)(x + 1)(x − 2)/((−3 − 5)(−3 + 1)(−3 − 2))](−32)
      + [(x − 5)(x + 1)(x + 3)/((2 − 5)(2 + 1)(2 + 3))](3)

f3(x) = [(x + 1)(x + 3)(x − 2)/((6)(8)(3))](120) + [(x − 5)(x + 3)(x − 2)/((−6)(2)(−3))](−6)
      + [(x − 5)(x + 1)(x − 2)/((−8)(−2)(−5))](−32) + [(x − 5)(x + 1)(x + 3)/((−3)(3)(5))](3)

f3(x) = (120/144)(x³ + 2x² − 5x − 6) − (1/6)(x³ − 4x² − 11x + 30)
      + (32/80)(x³ − 6x² + 3x + 10) − (1/15)(x³ − x² − 17x − 15)

Collecting the coefficients of like powers of x:
x³:  120/144 − 1/6 + 32/80 − 1/15 = 1
x²:  240/144 + 4/6 − 192/80 + 1/15 = 0
x:   −600/144 + 11/6 + 96/80 + 17/15 = 0
1:   −720/144 − 30/6 + 320/80 + 15/15 = −5

f3(x) = x³ + 0x² + 0x − 5 = x³ − 5

@ x = 7:
f3(7) = 7³ − 5 = 338

Newton's Divided-Difference Interpolating Polynomials

The most popular and useful form.
1. Linear interpolation (first order)
The simplest form of interpolation is to connect two data points with a straight line, called linear interpolation, depicted graphically by similar triangles:
[f1(x) − f(x0)]/(x − x0) = [f(x1) − f(x0)]/(x1 − x0)
Rearranging gives:
f1(x) = f(x0) + [f(x1) − f(x0)]/(x1 − x0) · (x − x0)      Divided-difference form

The notation f1(x) designates a first-order interpolating polynomial. In general, the smaller the interval between data points, the better the approximation: decreasing the interval means a continuous function is better approximated by a straight line.

2. Quadratic interpolation (2nd order)

A strategy for improving the estimate is to introduce some curvature into the line connecting the points. If 3 data points are available, this can be accomplished with a second-order polynomial (also called a quadratic or parabola).
Form:
f2(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1)
Expanding the terms gives f2(x) = a0 + a1x + a2x², where:
a0 = b0 − b1x0 + b2x0x1
a1 = b1 − b2x0 − b2x1
a2 = b2

A simple procedure determines the values of the coefficients:
b0 = f(x0)
b1 = [f(x1) − f(x0)]/(x1 − x0)
b2 = { [f(x2) − f(x1)]/(x2 − x1) − [f(x1) − f(x0)]/(x1 − x0) } / (x2 − x0)

General form of Newton's interpolating polynomials: the preceding analysis can be generalized to fit an nth-order polynomial to n + 1 data points. The nth-order polynomial is:
fn(x) = b0 + b1(x − x0) + b2(x − x0)(x − x1) + b3(x − x0)(x − x1)(x − x2) + … + bn(x − x0)(x − x1)…(x − x(n−1))
where:
b0 = f(x0)
b1 = f[x0, x1]                first finite divided difference
b2 = f[x0, x1, x2]            second finite divided difference
⋮
bn = f[x0, x1, x2, …, xn]     nth finite divided difference

Note that the bracketed functions are finite divided differences, generally represented as:
Δ'f[x0] = f[x0, x1] = [f(x1) − f(x0)]/(x1 − x0)                                          first divided difference
Δ''f[x0] = f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0)                              second divided difference
Δⁿf[x0] = f[x0, x1, …, xn] = (f[x1, x2, …, xn] − f[x0, x1, …, x(n−1)])/(xn − x0)          nth divided difference

Tabulated divided differences (each column is built from the differences of the column to its left):
x    f(x)    Δ'f(x)       Δ''f(x)           Δ'''f(x)               Δ⁴f(x)
x0   f(x0)   f[x0, x1]    f[x0, x1, x2]     f[x0, x1, x2, x3]      f[x0, x1, x2, x3, x4]
x1   f(x1)   f[x1, x2]    f[x1, x2, x3]     f[x1, x2, x3, x4]
x2   f(x2)   f[x2, x3]    f[x2, x3, x4]
x3   f(x3)   f[x3, x4]
x4   f(x4)

Graphical depiction of Newton's divided differences (staggered layout):
x0   f(x0)
              f[x1, x0]
x1   f(x1)                 f[x2, x1, x0]
              f[x2, x1]                      f[x3, x2, x1, x0]
x2   f(x2)                 f[x3, x2, x1]
              f[x3, x2]
x3   f(x3)

These are used to evaluate the coefficients, which are in turn substituted to yield the interpolating polynomial, called Newton's interpolating polynomial.

Note: the data points need not be evenly spaced, nor do they need to be in ascending order of x.
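The divided-difference table can be built column by column in a short loop; the top entry of each column gives the coefficients b0, b1, … This is a minimal Python sketch (names are ours), using the data of the example that follows (points sorted in ascending x).

```python
def newton_divided_diff(xs, ys):
    """Return the coefficients b0, b1, ..., bn of Newton's interpolating polynomial."""
    n = len(xs)
    table = [list(ys)]                                   # zeroth-order column
    for k in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i]) for i in range(n - k)])
    return [col[0] for col in table]                     # top entry of each column

def newton_eval(xs, b, x):
    total, prod = 0.0, 1.0
    for i, bi in enumerate(b):
        total += bi * prod
        prod *= (x - xs[i])
    return total

xs = [0.3, 0.6, 0.7, 1.0, 1.9, 2.1]
ys = [-0.736, -0.328, -1.104, 1.0, 11.008, 15.212]
b = newton_divided_diff(xs, ys)
print(b)                          # b0 = -0.736, b1 = 1.36, b2 = -22.8, ...
print(newton_eval(xs, b, 1.5))    # interpolated value at x = 1.5
```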

Example:
Fit the following data using Newton's interpolation:
x   0.3     1     0.7      0.6      1.9      2.1
y  −0.736   1    −1.104   −0.328   11.008   15.212

Solution (the points are taken in ascending order of x: 0.3, 0.6, 0.7, 1, 1.9, 2.1):

x     f(x)     Δ'f(x)   Δ''f(x)   Δ'''f(x)   Δ⁴f(x)     Δ⁵f(x)
0.3   −0.736   1.36     −22.8     85.3214    −69.4316   49.592
0.6   −0.328   −7.76    36.925    −25.7692   19.8342
0.7   −1.104   7.01     3.45      3.982
1     1        11.12    9.0
1.9   11.008   21.02
2.1   15.212

Δ'f(x) = [f(x1) − f(x0)]/(x1 − x0):
(−0.328 − (−0.736))/(0.6 − 0.3) = 1.36
(−1.104 − (−0.328))/(0.7 − 0.6) = −7.76
(1 − (−1.104))/(1 − 0.7) = 7.01
(11.008 − 1)/(1.9 − 1) = 11.12
(15.212 − 11.008)/(2.1 − 1.9) = 21.02

Δ''f(x) = (f[x1, x2] − f[x0, x1])/(x2 − x0):
(−7.76 − 1.36)/(0.7 − 0.3) = −22.8
(7.01 − (−7.76))/(1 − 0.6) = 36.925
(11.12 − 7.01)/(1.9 − 0.7) = 3.45
(21.02 − 11.12)/(2.1 − 1) = 9.0

Δ'''f(x):
(36.925 − (−22.8))/(1 − 0.3) = 85.32
(3.45 − 36.925)/(1.9 − 0.6) = −25.7692
(9.0 − 3.45)/(2.1 − 0.7) = 3.982

Δ⁴f(x):
(−25.7692 − 85.32)/(1.9 − 0.3) = −69.4316
(3.982 − (−25.7692))/(2.1 − 0.6) = 19.8342

Δ⁵f(x):
(19.8342 − (−69.4316))/(2.1 − 0.3) = 49.592

f5(x) = −0.736 + 1.36(x − 0.3) − 22.8(x − 0.3)(x − 0.6) + 85.3214(x − 0.3)(x − 0.6)(x − 0.7)
        − 69.4316(x − 0.3)(x − 0.6)(x − 0.7)(x − 1) + 49.592(x − 0.3)(x − 0.6)(x − 0.7)(x − 1)(x − 1.9)

NUMERICAL INTEGRATION
Newton-Cotes Integration Formulas
Integration is the inverse operation of differentiation. The word means "to bring together, as parts, into a whole; to unite; to indicate the total amount of." Mathematically it is represented by

I = ∫[a to b] f(x) dx

which stands for the integral of the function f(x) with respect to the independent variable x, evaluated between the limits x = a and x = b. The function f(x) is referred to as the integrand. As the meaning implies, integration is the total value, or summation, of f(x) dx over the range x = a to x = b. In fact, the symbol ∫ is actually a stylized capital S intended to signify the close connection between integration and summation.

The function to be integrated typically appears in one of the following forms:

a. A simple continuous function such as a polynomial, an exponential, or a trigonometric function.
b. A complicated continuous function that is difficult or nearly impossible to differentiate or integrate directly.
c. A tabulated form of the function, where values of x and f(x) are given at a number of discrete points, as is often the case with experimental or field data.
e.g.
I = ∫[0 to 2] [(2 + cos(1 + x^(3/2)))/√(1 + 0.5 sin x)]·e^(0.5x) dx

The Newton-Cotes formulas are the most common numerical integration schemes. The basic strategy is to replace a complicated function, or tabulated data, with an approximating polynomial function that is easy to integrate:

I = ∫[a to b] f(x) dx ≅ ∫[a to b] fn(x) dx

where fn(x) is an nth-order polynomial of the form

fn(x) = a0 + a1x + a2x² + a3x³ + … + a(n−1)x^(n−1) + an·x^n

The Trapezoidal Rule
The first of the Newton-Cotes closed integration formulas. It corresponds to the case where the polynomial is first order:

I = ∫[a to b] f(x) dx ≅ ∫[a to b] f1(x) dx

where f1(x) is a polynomial function of the first order.

Recall Newton's first-order interpolation:
f1(x) = f(a) + [(f(b) − f(a))/(b − a)](x − a)

The area under this straight line is an estimate of the integral of f(x) between the limits a and b:
I = ∫[a to b] { f(a) + [(f(b) − f(a))/(b − a)](x − a) } dx
Hence,
I = (b − a)·[f(a) + f(b)]/2
(Width)(Average height)

Geometrically, the trapezoidal rule is equivalent to approximating the area of the trapezoid under the straight line connecting f(a) and f(b). Thus the integral estimate can be represented as:
I ≅ (width)(average height)
Trapezoidal rule approximation formula using a single trapezoid:
I = (h/2)[f(a) + f(b)]
or
I = (h/2)[f(xa) + f(xb)],  where h = b − a

Multiple application of the trapezoidal rule
To increase the accuracy, divide the integration interval from a to b into several segments and apply the approximation to each segment. The areas of the individual segments are then added to yield the integral for the entire interval. The resulting equations are called multiple-application, or composite, integration formulas. For n + 1 equally spaced base points there are n segments of equal width:
h = (b − a)/n
where: h – equal spacing between segments and discrete points
n – number of segments (trapezoids), with n + 1 discrete data points
b – upper limit
a – lower limit
If a and b are designated x0 and xn respectively, the total integral can be represented as:
I = ∫[x0 to x1] f(x)dx + ∫[x1 to x2] f(x)dx + ∫[x2 to x3] f(x)dx + … + ∫[x(n−1) to xn] f(x)dx

Substituting the trapezoidal rule for each integral:
I = h·[f(x0) + f(x1)]/2 + h·[f(x1) + f(x2)]/2 + h·[f(x2) + f(x3)]/2 + … + h·[f(x(n−1)) + f(xn)]/2

Hence (grouping the terms),
I = (b − a)·[ f(x0) + 2Σ(i=1 to n−1) f(xi) + f(xn) ] / (2n)
(Width)(Average height)
In general,
I = (h/2)·[ f(x0) + 2Σ(i=1 to n−1) f(xi) + f(xn) ]

Truncation error
For the trapezoidal rule:
ET = −(1/12)·f''(ξ)·(b − a)³
where ξ lies somewhere in the interval a to b. For the multiple-application trapezoidal rule, the truncation error formula becomes
ET = −[(b − a)³/(12n³)]·Σ(i=1 to n) f''(ξi)
Thus,
Ea = −[(b − a)³/(12n²)]·f̄''
where f̄'' is the average second derivative over the interval.
Estimate the integral of the function
f(x) = 0.2 + 25x − 200x² + 675x³ − 900x⁴ + 400x⁵
from x = 0 to x = 0.8 using the trapezoidal rule.

Solution:
Classical (analytical) approach:
I = ∫[0 to 0.8] (0.2 + 25x − 200x² + 675x³ − 900x⁴ + 400x⁵) dx = 1.6405

Using the trapezoidal rule:

at n = 1:
x0 = a = 0 ;  x1 = b = 0.8
f(x0) = f(a) = 0.2
f(x1) = f(b) = 0.2 + 25(0.8) − 200(0.8)² + 675(0.8)³ − 900(0.8)⁴ + 400(0.8)⁵ = 0.232
Recall:
I = (b − a)·[f(x0) + f(x1)]/2 = (0.8 − 0)(0.2 + 0.232)/2 = 0.1728 sq. units

at n = 4:
h = (0.8 − 0)/4 = 0.2
x     f(x)     trapezoidal pattern
0     0.2      1(0.2) = 0.2
0.2   1.288    2(1.288) = 2.576
0.4   2.456    2(2.456) = 4.912
0.6   3.464    2(3.464) = 6.928
0.8   0.232    1(0.232) = 0.232
Σf(x)'s = 14.848
I = (0.2/2)(14.848) = 1.4848 sq. units

at n = 8:
h = (0.8 − 0)/8 = 0.1
x     f(x)     trapezoidal pattern
0     0.2      1(0.2) = 0.2
0.1   1.289    2(1.289) = 2.578
0.2   1.288    2(1.288) = 2.576
0.3   1.607    2(1.607) = 3.214
0.4   2.456    2(2.456) = 4.912
0.5   3.325    2(3.325) = 6.65
0.6   3.464    2(3.464) = 6.928
0.7   2.363    2(2.363) = 4.726
0.8   0.232    1(0.232) = 0.232
Σf(x)'s = 32.016
I = (0.1/2)(32.016) = 1.6008 sq. units

Truncation error:
E(h) = −[(b − a)³/(12n²)]·f̄''
f̄'' = ∫[0 to 0.8] (−400 + 4050x − 10800x² + 8000x³) dx / (0.8 − 0) = −60
E(h) = −[(0.8 − 0)³/(12(8)²)](−60) = 0.04
I = I(h) + E(h) = 1.6008 + 0.04 = 1.6408 sq. units

SIMPSON'S RULES
Higher-order polynomials are used to connect the discrete points. To approximate the integral of a function more accurately with the Newton-Cotes integration formulas, simply increase the order of the approximating polynomial.

Simpson's 1/3 Rule

1. Transforms the complicated function into a second-order polynomial to further improve the approximation technique:
I = ∫[a to b] f(x) dx = ∫[a to b] f2(x) dx
where f2(x) is a polynomial function of the 2nd order and a and b are designated x0 and x2, respectively. Using the 2nd-order Lagrange interpolating polynomial:
f2(x) = [(x − x1)(x − x2)/((x0 − x1)(x0 − x2))]f(x0) + [(x − x0)(x − x2)/((x1 − x0)(x1 − x2))]f(x1) + [(x − x0)(x − x1)/((x2 − x0)(x2 − x1))]f(x2)

Substituting, and after integration and algebraic manipulation:
I = (b − a)·[f(x0) + 4f(x1) + f(x2)]/6
= width × average height
Using a parabola, the Simpson's 1/3 rule approximation is
I = (h/3)[f(x0) + 4f(x1) + f(x2)]
where h = (b − a)/2.
The "1/3" comes from h being divided by 3 in the formula.

Multiple application of Simpson's 1/3 rule

The rule can be improved by dividing the integration interval into a number of segments of equal width:
h = (b − a)/n
The total integral can be represented as:
I = ∫[x0 to x2] f(x)dx + ∫[x2 to x4] f(x)dx + ∫[x4 to x6] f(x)dx + … + ∫[x(n−2) to xn] f(x)dx

Applying Simpson's 1/3 rule to each individual integral:
I = (h/3)[f(x0) + 4f(x1) + f(x2)] + (h/3)[f(x2) + 4f(x3) + f(x4)] + (h/3)[f(x4) + 4f(x5) + f(x6)] + … + (h/3)[f(x(n−2)) + 4f(x(n−1)) + f(xn)]
Combining terms:
I = (b − a)·[ f(x0) + 4Σ(i = 1, 3, 5, …, n−1) f(xi) + 2Σ(j = 2, 4, 6, …, n−2) f(xj) + f(xn) ] / (3n)
In general:
I = (h/3)[ f(x0) + 4Σ(i odd) f(xi) + 2Σ(j even) f(xj) + f(xn) ]

Note: since the technique of Simpson's 1/3 rule utilizes a parabola to approximate the integral of a function, a minimum of 3 equally spaced discrete points is required; in this case, a minimum of n = 2 segments. An even number of segments must be used to implement the method. The "4" and "2" in the equation follow naturally for Simpson's 1/3 rule: the odd-indexed points are the middle term of each application and therefore carry a weight of 4, while the even-indexed points are the common end points adjacent to two applications and are therefore counted twice.

Truncation error
Et = −(1/90)·h⁵·f⁗(ξ)
For the multiple-application rule, with h = (b − a)/n,
Ea = −[(b − a)⁵/(180n⁴)]·f̄⁗
where f̄⁗ is the average fourth derivative over the interval.

Simpson's 3/8 Rule

The integral approximation is improved by using a third-order polynomial fitted to four discrete points:
I = ∫[a to b] f(x) dx = ∫[a to b] f3(x) dx
where f3(x) is a third-order polynomial function:
f3(x) = f(x0) + Δ'f(x0)(x − x0) + Δ''f(x0)(x − x0)(x − x1) + Δ'''f(x0)(x − x0)(x − x1)(x − x2)
(a 3rd-order Newton interpolating polynomial).
Substituting and integrating yields
I = (b − a)·[f(x0) + 3f(x1) + 3f(x2) + f(x3)]/8
Width × average height

Simpson's 3/8 Rule:
I = (3h/8)·[f(x0) + 3f(x1) + 3f(x2) + f(x3)]
where h = (b − a)/3.
This is the third Newton-Cotes closed integration formula (called the 3/8 rule because h is multiplied by 3/8; the end points are weighted 1/8 of the width).

Multiple application of Simpson's 3/8 Rule

To improve the accuracy, the integral is divided into a number of segments of equal width, fitting multiple cubic polynomial segments:
h = (b − a)/n
The total integral can be represented as:
I = ∫[x0 to x3] f(x)dx + ∫[x3 to x6] f(x)dx + ∫[x6 to x9] f(x)dx + … + ∫[x(n−3) to xn] f(x)dx

Applying Simpson's 3/8 rule to each individual integral yields
I = (3h/8)(f(x0) + 3f(x1) + 3f(x2) + f(x3)) + (3h/8)(f(x3) + 3f(x4) + 3f(x5) + f(x6)) + (3h/8)(f(x6) + 3f(x7) + 3f(x8) + f(x9)) + …

Grouping the terms:
I = (3h/8)·[ f(x0) + 3Σ f(xi) (over interior points whose index is not a multiple of 3) + 2Σ f(xj) (over interior points whose index is a multiple of 3) + f(xn) ]

The technique uses a cubic polynomial to express the integral of the function, therefore a minimum of 4 equally spaced discrete points is required, i.e. a minimum of 3 segments. The number of segments must always be a multiple of 3.
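For completeness, a matching Python sketch of the composite 3/8 rule (names are ours); the printed values are the n = 3 and n = 6 estimates, close to those in the worked example below (small differences come only from rounding h).

```python
def simpson38(f, a, b, n):
    """Composite Simpson's 3/8 rule; n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)  # weights: 2 at multiples of 3, else 3
    return (3 * h / 8) * total

f = lambda x: 0.2 + 25*x - 200*x**2 + 675*x**3 - 900*x**4 + 400*x**5
print(simpson38(f, 0, 0.8, 3), simpson38(f, 0, 0.8, 6))   # ≈ 1.5191, 1.6329
```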

Example:
Estimate the integral of the function from x = 0 to x = 0.8 using Simpson's 1/3 rule:
f(x) = 0.2 + 25x − 200x² + 675x³ − 900x⁴ + 400x⁵

Solution:
at n = 2:  h = (0.8 − 0)/2 = 0.4
x     f(x)     Simpson's 1/3 pattern
0     0.2      1(0.2) = 0.2
0.4   2.456    4(2.456) = 9.824
0.8   0.232    1(0.232) = 0.232
Σf(x)'s = 10.256
I = (0.4/3)(10.256) = 1.3675 square units

at n = 4:  h = (0.8 − 0)/4 = 0.2
x     f(x)     Simpson's 1/3 pattern
0     0.2      1(0.2) = 0.2
0.2   1.288    4(1.288) = 5.152
0.4   2.456    2(2.456) = 4.912
0.6   3.464    4(3.464) = 13.856
0.8   0.232    1(0.232) = 0.232
Σf(x)'s = 24.352
I = (0.2/3)(24.352) = 1.6235 square units

Truncation error:
E(h) = −[(b − a)⁵/(180n⁴)]·f̄⁗
f̄⁗ = average fourth derivative over the interval
f'(x) = 25 − 400x + 2025x² − 3600x³ + 2000x⁴
f''(x) = −400 + 4050x − 10800x² + 8000x³
f'''(x) = 4050 − 21600x + 24000x²
f⁗(x) = −21600 + 48000x
f̄⁗ = ∫[0 to 0.8] (−21600 + 48000x) dx / (0.8 − 0) = −2400
at n = 4:
E(h) = −[(0.8 − 0)⁵/(180(4)⁴)](−2400) = 0.0171
I = I(h) + E(h) = 1.6235 + 0.0171 = 1.6406 square units

Simpson's 3/8 Rule

at n = 3:  h = (0.8 − 0)/3 = 0.2667
x        f(x)     Simpson's 3/8 pattern
0        0.2      1(0.2) = 0.2
0.2667   1.4329   3(1.4329) = 4.2987
0.5333   3.4871   3(3.4871) = 10.4613
0.8      0.232    1(0.232) = 0.232
Σf(x)'s = 15.192
I = (3(0.2667)/8)(15.192) = 1.5194 square units

at n = 6:  h = (0.8 − 0)/6 = 0.1333
x        f(x)     Simpson's 3/8 pattern
0        0.2      1(0.2) = 0.2
0.1333   1.3102   3(1.3102) = 3.9306
0.2667   1.4329   3(1.4329) = 4.2987
0.4      2.456    2(2.456) = 4.912
0.5333   3.4871   3(3.4871) = 10.4613
0.6667   2.8745   3(2.8745) = 8.6235
0.8      0.232    1(0.232) = 0.232
Σf(x)'s = 32.6581
I = (3(0.1333)/8)(32.6581) = 1.6325 square units

Truncation error (composite 3/8 rule):
E(h) = −[(b − a)⁵/(80n⁴)]·f̄⁗ = −[(0.8 − 0)⁵/(80(6)⁴)](−2400) = 0.0076
I = I(h) + E(h) = 1.6325 + 0.0076 = 1.6401 square units
Romberg Integration
Romberg integration is an extrapolation formula built on the trapezoidal rule. It provides a better approximation of the integral by reducing the true error.
The true error in a multiple-segment trapezoidal rule with n segments for an integral
∫[a to b] f(x) dx
is given by
Et = −[(b − a)³/(12n²)]·[Σ(i=1 to n) f''(ξi)]/n
where, for each i, ξi is a point somewhere in the sub-interval [a + (i − 1)h, a + ih].
The term [Σ(i=1 to n) f''(ξi)]/n can be viewed as an approximate average value of f''(x) in [a, b]. This leads us to say that the true error Et can be approximated as
Et ≈ α·(1/n²)
that is, Et is approximately proportional to 1/n².

Richardson's Extrapolation
Richardson's extrapolation for the trapezoidal rule: the true error Et in the n-segment trapezoidal rule is estimated as
Et ≈ C/n²
where C is an approximate constant of proportionality. Since
Et = TV − In
where TV = true value and In = approximate value with n segments, it can be shown that
C/(2n)² ≈ TV − I2n
when the segment size is halved (the number of segments is doubled), and therefore
TV ≈ I2n + (I2n − In)/3
which is Richardson's extrapolation.
Romberg integration is the same as the Richardson extrapolation formula given above; however, Romberg used a recursive algorithm for the extrapolation. Recall
TV ≈ I2n + (I2n − In)/3

This can alternately be written as

(I2n)R = I2n + (I2n − In)/3 = I2n + (I2n − In)/(4^(2−1) − 1)
Note that the variable TV is replaced by (I2n)R, the value obtained using Richardson's extrapolation formula, and the '≈' sign is replaced by an '=' sign. Hence the estimate of the true value is now
TV ≈ (I2n)R + C·h⁴
where C·h⁴ is an approximation of the remaining true error.
Determine another integral value by further halving the step size (doubling the number of segments):
(I4n)R = I4n + (I4n − I2n)/3
It follows from the two previous expressions that the true value TV can be written as
TV ≈ (I4n)R + [(I4n)R − (I2n)R]/15 = (I4n)R + [(I4n)R − (I2n)R]/(4^(3−1) − 1)

A general expression for Romberg integration can be written as
I(k, j) = I(k−1, j+1) + [I(k−1, j+1) − I(k−1, j)]/(4^(k−1) − 1),   k ≥ 2

The index k represents the order of extrapolation: k = 1 represents the values obtained from the regular trapezoidal rule, k = 2 represents the values obtained using the true-error estimate of O(h²), and so on. The index j distinguishes the more accurate (j + 1) and less accurate (j) estimates of the integral.

Romberg Integration (applied)

I = I(h) + E(h)

Richardson's extrapolation (for trapezoidal estimates, whose error is O(h²)):
I = I(h2) + [1/(2² − 1)]·[I(h2) − I(h1)]
where:
I(h2) = integral estimate with step size h2 (more accurate)
I(h1) = integral estimate with step size h1 (less accurate)
h2 = h1/2   (extrapolation condition), or equivalently n2 = 2n1 (the number of segments is doubled)

For this first level of extrapolation:
I = I(h2) + (1/3)[I(h2) − I(h1)]
I = (4/3)I(h2) − (1/3)I(h1)
Each further level of extrapolation (error O(h⁴), O(h⁶), …) uses the divisors 4² − 1 = 15, 4³ − 1 = 63, … as in the general Romberg formula above.

Extrapolation table layout:
Segments (n)   h     I(h)     level k = 2   level k = 3
n3             h3    I(h3)    I'(h)         I''(h)
n2             h2    I(h2)    I'(h)
n1             h1    I(h1)

Improve the trapezoidal-rule solution using Romberg integration.

Using the trapezoidal rule:
n    h     I(h)
1    0.8   0.1728
2    0.4   1.0688
4    0.2   1.4848
8    0.1   1.6008

Extrapolation table:
h     I(h)     k = 2     k = 3     k = 4
0.1   1.6008   1.6395    1.6406    1.6406
0.2   1.4848   1.6235    1.6406
0.4   1.0688   1.3675
0.8   0.1728

at k = 2 (divisor 2² − 1 = 3):
I'(h) = (4/3)(1.6008) − (1/3)(1.4848) = 1.6395 square units
I'(h) = (4/3)(1.4848) − (1/3)(1.0688) = 1.6235 square units
I'(h) = (4/3)(1.0688) − (1/3)(0.1728) = 1.3675 square units

at k = 3 (divisor 4² − 1 = 15):
I''(h) = 1.6395 + (1/15)(1.6395 − 1.6235) = 1.6406 square units
I''(h) = 1.6235 + (1/15)(1.6235 − 1.3675) = 1.6406 square units

at k = 4 (divisor 4³ − 1 = 63):
I'''(h) = 1.6406 + (1/63)(1.6406 − 1.6406) = 1.6406 square units
