Chapter 14
Differential Equations
Leonhard Paul Euler (1707-1783)    Johann Bernoulli (1667-1748)
14.1 Differential Equations: Definitions
We start with a continuous time series {x(t)}.
Ordinary Differential Equation (ODE): It relates the values of
variables at a given point in time and the changes in these values
over time.
Example: G(t, x(t), x'(t), x''(t), ...) = 0 for all t. (t: scalar, usually time)
An ODE depends on a single independent variable. A partial
differential equation (PDE) depends on many independent variables.
ODEs are classified according to the highest order of derivative
involved.
- First-Order ODE: x'(t) = F(t, x(t)) for all t.
- Nth-Order ODE: G(t, x(t), x'(t), ..., x^(n)(t)) = 0 for all t.
Examples: First-order ODE: x'(t) = a x(t) + ε(t)
Second-order ODE: x''(t) = a₁ x'(t) + b x(t) + ε(t)
14.1 Differential Equations: Definitions
If G(.) is linear, we have a linear ODE. If G(.) is anything but linear,
then we have a non-linear ODE.
A differential equation not depending directly on t is called
autonomous.
Example: x'(t) = a x(t) + b is autonomous.
A differential equation is homogeneous if ε(t) = 0.
Example: x'(t) = a x(t) is homogeneous.
If starting values, say x(0), are given, we have an initial value problem.
Example: x'(t) + 2 x(t) = 3, with x(0) = 2.
If values of the function and/or derivatives at different points are
given, we have a boundary value problem.
Example: x''(t) + 4 x(t) = 0, with x(0) = -2, x(π/4) = 10.
14.1 Differential Equations: Definitions
A solution of an ODE is a function x(t) that satisfies the equation for all
values of t. Many ODEs have no solutions.
Analytic solutions (i.e., a closed expression of x in terms of t) can be
found by different methods. Example: conjectures, integration.
Most ODEs do not have analytic solutions. Numerical solutions will be
needed.
If for some initial conditions a differential equation has a solution
that is a constant function (independent of t), then the value of the
constant, x*, is called an equilibrium state or stationary state.
If, for all initial conditions, the solution of the differential equation
converges to x* as t → ∞, then the equilibrium is globally stable.
14.1 ODE: Classic Problem
Problem: The rate of growth of the population is proportional to the
size of the population.
Quantities: t = time, P(t) = population, k = proportionality constant
(growth-rate coefficient)
The differential equation representing this problem:
dP(t)/dt = kP(t)
Note that P₀ = 0 is a solution because dP(t)/dt = 0 forever (trivial!).
If P₀ ≠ 0, how does the behavior of the model depend on P₀ and k?
In particular, how does it depend on the signs of P₀ and k?
Guessing a solution: The first derivative should be similar to the
function. Let's try an exponential: P(t) = c e^{kt}
dP(t)/dt = c k e^{kt} = kP(t), so it works (and, in fact, c = P₀).
14.2 First-order differential equations: Notation and Steady State
A first-order ODE:
x'(t) = F(t, x(t)) or x'(t) = f(t, x(t)) for all t.
Notation: x'(t) = ẋ(t) = dx/dt.
The steady state represents an equilibrium where the system does
not change anymore. When x(t) does not change anymore, we
call its value x*. That is,
x'(t) = 0.
Example: x'(t) = a x(t) + b, with a ≠ 0.
When x'(t) = 0, x* = -b/a.
14.2 Separable first-order differential equations
A first-order ODE that may be written in the form x'(t) = f(t)g(x) for
all t is called separable. They are easier to solve (case discussed first by
Leibniz and Bernoulli in 1694).
x'(t) = [e^{x(t)+t}/x(t)](1 + t²) is separable. We can write it as:
x'(t) = [e^{x(t)}/x(t)][e^t (1 + t²)].
x'(t) = f(t) + g(x(t)) is not separable unless either f or g is identically 0:
it cannot be written in the form x'(t) = f(t)g(x).
If g is a constant, then the general solution of the equation is simply
the indefinite integral of f.
If g is not constant, the equation may be easily solved. Assuming g(x)
≠ 0 for all values that x assumes in a solution, we may write:
dx/g(x) = f(t)dt.
Then, we may integrate both sides:
∫ (1/g(x))dx = ∫ f(t)dt.
14.2 Separable first-order differential equations
Example: x'(t) = x(t) t.
First write the equation as: dx/x = t dt.
Integrate both sides: ln x = t²/2 + C. (C always consolidates the
constants of integration).
Finally, isolate x: x(t) = C e^{t²/2} for all t. (The new constant C equals e^C.)
(Note: if x(t) ≠ 0 for all t, in all the solutions we need C ≠ 0).
With an initial condition x(t₀) = x₀, the value of C is determined:
x₀ = C e^{t₀²/2}.
14.2 Linear first-order differential equations
A linear first-order differential equation takes the form
x'(t) + a(t)x(t) = b(t) for all t, for some functions a and b.
Case I. a(t) = a ≠ 0 for all t.
- Then, x'(t) + ax(t) = b(t) for all t.
- The LHS looks like the derivative of a product, but it is not exactly
the derivative of f(t)x(t): we would need f(t) = 1 and f'(t) = a
for all t, which is not possible.
- Trick: Multiply both sides by g(t) for each t:
g(t) x'(t) + a g(t) x(t) = g(t) b(t) for all t.
- Now, we need f(t) = g(t) and f'(t) = ag(t).
If f(t) = e^{at} => f'(t) = a e^{at} = a f(t).
14.2 Linear first-order differential equations
- Set g(t) = e^{at} => e^{at} x'(t) + a e^{at} x(t) = e^{at} b(t)
- The integral of the LHS is e^{at} x(t).
- Solution:
e^{at} x(t) = C + ∫^t e^{as} b(s)ds, or
x(t) = e^{-at} [C + ∫^t e^{as} b(s)ds]. (∫^t f(s)ds is the indefinite
integral of f(s) evaluated at t).
Proposition
The general solution of the differential equation
x'(t) + a x(t) = b(t) for all t,
where a is a constant and b is a continuous function, is given by
x(t) = e^{-at} [C + ∫^t e^{as} b(s)ds] for all t.
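The proposition can be checked numerically. The sketch below, assuming numpy and scipy are available, compares the formula x(t) = e^{-at}[C + ∫₀ᵗ e^{as} b(s)ds] with a direct numerical solve; the choices a = 0.7, b(t) = sin(t), and x(0) = 2 are illustrative only.

```python
# Sketch: numerically check x(t) = e^{-at}[C + integral_0^t e^{as} b(s) ds]
# against a direct ODE solve. numpy/scipy assumed available.
import numpy as np
from scipy.integrate import solve_ivp, quad

a, x0 = 0.7, 2.0
b = np.sin

def formula(t):
    # C = x(0) when the integral starts at 0
    integral, _ = quad(lambda s: np.exp(a * s) * b(s), 0.0, t)
    return np.exp(-a * t) * (x0 + integral)

# Direct numerical solution of x'(t) = -a x(t) + b(t)
sol = solve_ivp(lambda t, x: -a * x + b(t), (0.0, 5.0), [x0], dense_output=True)

for t in (1.0, 3.0, 5.0):
    print(t, formula(t), sol.sol(t)[0])   # the two columns should agree closely
```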
14.2 Linear first-order differential equations
Special Case: b(s) = b
The differential equation is x'(t) + ax(t) = b
Solution:
x(t) = e^{-at} [C + b ∫^t e^{as} ds]
     = e^{-at} [C + (b/a) e^{as}|₀^t] = e^{-at} [C + (b/a)(e^{at} - 1)]
     = e^{-at} (C - b/a) + b/a.
Note: If x(0) = x₀, then x₀ = C.
Stability: If a > 0 => x(t) is stable (and x* = b/a)
If a < 0 => x(t) is unstable
14.2 Linear first-order differential equations: Phase Diagram
A phase diagram graphs the first-order ODE. That is, it plots x'(t)
against x(t):
Example: x'(t) + ax(t) = b
[Two phase-diagram panels: x'(t) vs. x(t) for a > 0 and a < 0; in both, the line
crosses the x(t)-axis at the steady state x* = b/a.]
14.2 Linear first-order differential equations
Example: u'(t) + 0.5 u(t) = 2.
Solution:
u(t) = Ce^{-0.5t} + 4. (Solution is stable, since a = 0.5 > 0)
Steady state: u* = b/a = 2/0.5 = 4
If u(0) = 20 => C = 16 => Definite solution: u(t) = 16 e^{-0.5t} + 4.
Example: v'(t) - 2 v(t) = -4.
Solution:
v(t) = Ce^{2t} + 2. (Solution is unstable, since a = -2 < 0)
Steady state: v* = b/a = -4/(-2) = 2
If v(0) = 3 => C = 1 => Definite solution: v(t) = e^{2t} + 2.
Figure 14.1 Phase Diagrams for Equations (14.6) and (14.7)
14.2 Linear first-order differential equations: Price Dynamics
Let p be the price of a good.
Total demand: D(p) = a - bp
Total supply: S(p) = α + βp,
where a, b, α, and β are positive constants.
Price dynamics: p'(t) = λ[D(p) - S(p)], with λ > 0.
Replacing supply and demand:
p'(t) + λ(b + β)p(t) = λ(a - α). (a first-order linear ODE)
Solution:
p(t) = Ce^{-λ(b+β)t} + (a - α)/(b + β).
p* = (a - α)/(b + β).
Given λ(b + β) > 0, this equilibrium is globally stable.
14.2 Linear first-order differential equations
Case II. a(t) ≠ a (a is a function!)
- Then, x'(t) + a(t) x(t) = b(t) for all t.
- Recall we need to recreate f(t)x(t) to apply the product rule:
- We need f(t) = g(t) and f'(t) = a(t) g(t) for all t.
- Try: g(t) = e^{∫^t a(s)ds} (the derivative of ∫^t a(s)ds is a(t)).
- Multiplying the ODE by g(t):
e^{∫^t a(s)ds} x'(t) + a(t) e^{∫^t a(s)ds} x(t) = e^{∫^t a(s)ds} b(t), or
(d/dt)[x(t) e^{∫^t a(s)ds}] = e^{∫^t a(s)ds} b(t).
- Thus x(t) e^{∫^t a(s)ds} = C + ∫^t e^{∫^u a(s)ds} b(u)du, or
x(t) = e^{-∫^t a(s)ds} [C + ∫^t e^{∫^u a(s)ds} b(u)du].
17
Example: x'(t) + (1/t)x(t) = e
t
.
We have
t
(1/s)ds = ln t => e
t (1/t)dt
= t.
Solution:
x(t) = (1/t)(C +
t
ue
u
du)
= (1/t)(C + te
t

t
e
u
du) (use integration by parts.)
= (1/t)(C + te
t
e
t
)= C/t + e
t
e
t
/t.
We can check that this solution is correct by differentiating:
x'(t) + x(t)/t = C/t
2
+ e
t
e
t
/t + e
t
/t
2
+ C/t
2
+ e
t
/t e
t
/t
2
= e
t
.
As usual, an initial condition determines the value of C.
14.2 Linear first-order differential equations
18
14.2 Linear differential equations: Analytic
Solution Revisited - Proof
Suppose we have the following form:
x"(t) + ax'(t) + bx(t) = f(t) (a and b are constants)
Let x₁ be a solution of the equation. For any other solution x of this
equation, define z = x - x₁.
Then z is a solution of the homogeneous equation:
x"(t) + ax'(t) + bx(t) = 0.
=> z"(t) + az'(t) + bz(t) = [x"(t) + ax'(t) + bx(t)] - [x₁"(t) + ax₁'(t) + bx₁(t)]
   = f(t) - f(t) = 0.
Further, for every solution z of the homogeneous equation, x₁ + z
is clearly a solution of the original equation.
That is, the set of all solutions of the original equation may be
found by finding one solution of this equation and adding to it the
general solution of the homogeneous equation.
14.2 Linear differential equations: Analytic
Solution Revisited
Thus, we can follow the same strategy used for difference equations
to generate an analytic general solution:
Steps:
1) Solve the homogeneous equation (constant term equal to zero).
2) Find a particular solution, for example x*.
3) Add the homogeneous solution to the particular solution.
Example: x'(t) + 2x(t) = 8.
Step 1: Guess a solution to the homogeneous equation x'(t) + 2x(t) = 0:
x(t) = Ce^{-2t}.
Step 2: Find a particular solution, say x* = 8/2 = 4.
Step 3: Add both solutions: x(t) = Ce^{-2t} + 4.
14.3 Non-linear ODE: Back to Population Model
The population model presented before was very simple. Let's
complicate the model:
1. If the population is small, growth is proportional to size.
2. If the population is too large for its environment to support, it will
decrease.
We now have quantities: t = time, P = population, k = growth-rate
coefficient for small populations, N = carrying capacity
Let's restate 1. and 2. in terms of derivatives:
1. dP/dt is approximately kP when P is small.
2. dP/dt is negative when P > N.
Logistic Model (Pierre-François Verhulst):
dP/dt = k (1 - P/N) P
14.3 Non-linear ODE: Back to Population Model
Let's divide both sides of the equation by N:
d(P/N)/dt = k (1 - P/N)(P/N)
Let x(t) = P/N => x'(t) = k[1 - x(t)] x(t)
Solution:
P(t) = N P₀ e^{kt} / [N + P₀ (e^{kt} - 1)];   lim_{t→∞} P(t) = N.
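A small numerical check of the logistic solution, assuming numpy and scipy are available; the values of k, N, and P₀ below are illustrative.

```python
# Sketch: compare the closed-form logistic solution with a numerical solve.
import numpy as np
from scipy.integrate import solve_ivp

k, N, P0 = 0.8, 100.0, 5.0

def logistic_exact(t):
    return N * P0 * np.exp(k * t) / (N + P0 * (np.exp(k * t) - 1.0))

# dP/dt = k (1 - P/N) P
sol = solve_ivp(lambda t, P: k * (1 - P / N) * P, (0.0, 10.0), [P0], dense_output=True)

for t in (1.0, 5.0, 10.0):
    print(t, logistic_exact(t), sol.sol(t)[0])  # both approach N as t grows
```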
14.4 Second Order Differential Equations
A second-order ordinary differential equation is a differential
equation of the form:
G(t, x(t), x'(t), x"(t)) = 0 for all t,
involving only t, x(t), and the first and second derivatives of x.
We can write such an equation in the form:
x"(t) = F (t, x(t), x'(t)).
Note that equations of the form x"(t) = F (t, x'(t)) can be
reduced to a first-order equation by making the substitution
z(t) = x'(t).
14.4 Second Order Differential Equations: Risk
Aversion Application
The function ρ(w) = -w u"(w)/u'(w) is the Arrow-Pratt measure of
relative risk aversion, where u(w) is the utility function for wealth w.
Question: What u(w) has a degree of risk aversion that is
independent of the level of wealth? Or, for what u do we have
a = -w u"(w)/u'(w) for all w?
This is a second-order ODE in which the term u(w) does not
appear. (The variable is w, rather than t.)
Let z(w) = u'(w) => a = -w z'(w)/z(w)
=> -a z(w) = w z'(w), a separable equation
=> -a dw/w = dz/z.
Solution: -a ln w = ln z(w) + C, or
z(w) = C* w^{-a} (C* = exp(-C))
14.4 Second Order Differential Equations: Risk
Aversion Application
Solution: z(w) = C* w^{-a} (C* = exp(-C))
Now, z(w) = u'(w), so to get u we need to integrate:
=> u(w) = C* ln w + B             if a = 1
        = C* w^{1-a}/(1 - a) + B  if a ≠ 1
That is, a utility function with a constant degree of relative risk
aversion equal to a takes this form.
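A short symbolic check, assuming the sympy library, that the utility functions above indeed have constant relative risk aversion equal to a:

```python
# Sketch: confirm that CRRA utility has constant relative risk aversion a.
import sympy as sp

w, a = sp.symbols('w a', positive=True)

u = w**(1 - a) / (1 - a)                      # the a != 1 branch
rra = sp.simplify(-w * sp.diff(u, w, 2) / sp.diff(u, w))
print(rra)                                     # prints: a

u_log = sp.log(w)                              # the a = 1 branch
print(sp.simplify(-w * sp.diff(u_log, w, 2) / sp.diff(u_log, w)))  # prints: 1
```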
14.4 Linear second-order equations with constant
coefficients: Finding a Solution
Based on the solutions for first-order ODEs, we guess that the
homogeneous equation has a solution of the form x(t) = Ae^{rt}.
Check: x(t) = Ae^{rt}, x'(t) = rAe^{rt}, x"(t) = r²Ae^{rt},
=> x"(t) + ax'(t) + bx(t) = r²Ae^{rt} + arAe^{rt} + bAe^{rt} = 0
=> Ae^{rt} (r² + ar + b) = 0.
For x(t) to be a solution of the equation we need r² + ar + b = 0.
This equation is the characteristic equation of the ODE.
Similar to second-order difference equations, we have 3 cases:
If a² > 4b => 2 distinct real roots
If a² = 4b => 1 real root
If a² < 4b => 2 distinct complex roots.
14.4 Linear second-order equations with constant
coefficients: Finding a Solution
If a² > 4b => Two distinct real roots: r and s.
=> x₁(t) = Ae^{rt} and x₂(t) = Be^{st}, for any values of A and B,
are solutions.
=> also x(t) = Ae^{rt} + Be^{st} is a solution. (It can be shown that
every solution of the equation takes this form.)
If a² = 4b => One single real root: r
=> (A + Bt)e^{rt} is a solution (r = -(1/2)a is the root).
If a² < 4b => Two complex roots: r_j = α ± βi, j = 1, 2.
=> x₁(t) = e^{(α+βi)t} and x₂(t) = e^{(α-βi)t}   (α = -a/2, β = √(b - a²/4))
Use Euler's formula to eliminate complex numbers: e^{iθ} = cos(θ) + i sin(θ).
Adding both solutions and after some algebra:
=> x(t) = A e^{(α+βi)t} + B e^{(α-βi)t} = A e^{αt} cos(βt) + B e^{αt} sin(βt).
14.4 Linear second-order equations with constant
coefficients: Finding a Solution
Example: x"(t) + x'(t) 2x(t) = 0. (a
2
> 4b = 1>4*(-2)=8)
Characteristic equation: r
2
+ r 2 = 0 => eigenvalues are 1 and 2.
Solution: x(t) = Ae
t
+ Be
2t
.
Example: x"(t) + 6x'(t) + 9x(t) = 0. (a
2
= 4b = 6
2
= 4*9
Characteristic equation: r
2
+ 6r + 9 = 0 => eigenvalue is 3.
Solution: x(t) = (A + Bt)e
3t
.
Example: x"(t) + 2x'(t) + 17x(t) = 0. (a
2
< 4b = 4<4*(17)=68)
Characteristic equation: r
2
+ 2r + 17 = 0 =>roots are complex
with = a/2 = -1 and = (b a
2
/4) = 4.
Solution: [A cos(4t) + B sin(4t)]e
t
.
28
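The three cases can be reproduced by computing the characteristic roots numerically; a sketch assuming numpy, using the three (a, b) pairs from the examples above:

```python
# Sketch: characteristic roots of x" + a x' + b x = 0 via numpy.
import numpy as np

for a, b in [(1, -2), (6, 9), (2, 17)]:
    roots = np.roots([1, a, b])          # roots of r^2 + a r + b = 0
    print(f"a={a}, b={b}: roots = {roots}")
# a=1,  b=-2: roots = [-2, 1]            -> x(t) = A e^t + B e^{-2t}
# a=6,  b=9 : roots = [-3, -3]           -> x(t) = (A + B t) e^{-3t}
# a=2,  b=17: roots = [-1+4j, -1-4j]     -> x(t) = e^{-t}[A cos 4t + B sin 4t]
```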
14.4 Linear second-order equations with constant
coefficients: Stability
Consider the homogeneous equation x"(t) + ax'(t) + bx(t) = 0.
If b ≠ 0, there is a single equilibrium, namely 0; i.e., the only
constant function that is a solution is equal to 0 for all t.
Three cases:
Characteristic equation with two real roots: r and s.
Solution: x(t) = Ae^{rt} + Be^{st}
=> equilibrium is stable iff r < 0 and s < 0.
Characteristic equation with one single real root: r
Solution: (A + Bt)e^{rt} => equilibrium is stable iff r < 0.
Characteristic equation with complex roots
Solution: (A cos(βt) + B sin(βt))e^{αt}, where α = -a/2, the real part of
each root. => equilibrium is stable iff α < 0 (or a > 0).
14.4 Linear second-order equations with constant
coefficients: Stability
The real part of a real root is simply the root. We can combine the
three cases:
The equilibrium is stable if and only if the real parts of both roots of
the characteristic equation are negative. A bit of algebra shows that
this condition is equivalent to a > 0 and b > 0.
Proposition
An equilibrium of the homogeneous linear second-order
differential equation x"(t) + ax'(t) + bx(t) = 0 is stable if and only if
the real parts of both roots of the characteristic equation r² + ar + b = 0
are negative, or, equivalently, if and only if a > 0 and b > 0.
If b = 0, then every number is an equilibrium (and none is stable).
14.4 Linear second-order equations with constant coefficients: Example
Stability of a macroeconomic model.
Let Q be aggregate supply, p be the price level, and π be the
expected rate of inflation.
Q(t) = a - bp + cπ, where a > 0, b > 0, and c > 0.
Let Q* be the long-run sustainable level of output.
Assume that prices adjust according to the equation:
p'(t) = h(Q(t) - Q*) + π(t), where h > 0.
Finally, suppose that expectations are adaptive:
π'(t) = k(p'(t) - π(t)) for some k > 0.
Question: Is this system stable?
14.4 Linear second-order equations with constant coefficients: Example
Question: Is this system stable?
Reduce the system to a second-order ODE:
1) Differentiate the equation for p'(t) => get p"(t)
2) Substitute in for π'(t) and π(t).
We obtain: p"(t) - h(kc - b) p'(t) + khb p(t) = kh(a - Q*)
=> System is stable iff kc < b. (khb > 0 as required.)
Note:
If c = 0 (i.e., expectations are ignored) => system is stable.
If c ≠ 0 and k is large (inflation expectations respond rapidly to
changes in the rate of inflation) => system may be unstable.
14.5 System of Equations: First-Order Linear
Differential Equations - Substitution
Consider the 2x2 system of linear homogeneous differential
equations (with constant coefficients)
x'(t) = ax(t) + by(t)
y'(t) = cx(t) + dy(t)
We can solve this system using what we know:
1. Isolate y(t) in the first equation => y(t) = x'(t)/b - ax(t)/b.
2. Differentiate this y(t) equation => y'(t) = x"(t)/b - ax'(t)/b.
3. Substitute for y(t) and y'(t) in the second equation of our system:
x"(t)/b - ax'(t)/b = cx(t) + d[x'(t)/b - ax(t)/b],
=> x"(t) - (a + d)x'(t) + (ad - bc)x(t) = 0.
This is a linear second-order ODE in x(t). We know how to solve it.
4. Go back to step 1. Solve for y(t) in terms of x'(t) and x(t).
14.5 System of Equations: First-Order Linear
Differential Equations - Substitution
Example:
x'(t) = 2x(t) + y(t)
y'(t) = -4x(t) - 3y(t).
1. Isolate y(t) in the first equation: => y(t) = x'(t) - 2x(t),
2. Differentiate in 1. => y'(t) = x"(t) - 2x'(t).
3. Substitute these expressions into the second equation:
x"(t) - 2x'(t) = -4x(t) - 3x'(t) + 6x(t), or
x"(t) + x'(t) - 2x(t) = 0.
Solution:
x(t) = Ae^{t} + Be^{-2t}.
4. Using the expression y(t) = x'(t) - 2x(t) we get
y(t) = Ae^{t} - 2Be^{-2t} - 2Ae^{t} - 2Be^{-2t} = -Ae^{t} - 4Be^{-2t}.
14.5 System of Equations: First-Order Linear Differential Equations - Diagonalization
Consider the 2x2 system of linear differential equations (with
constant coefficients)
x'(t) = ax(t) + by(t) + m
y'(t) = cx(t) + dy(t) + n
Let's rewrite the system using linear algebra:
z'(t) = [x'(t); y'(t)] = [a b; c d][x(t); y(t)] + [m; n] = A z(t) + θ
Diagonalize the system (A must have independent eigenvectors):
H⁻¹ z'(t) = H⁻¹ A (H H⁻¹) z(t) + H⁻¹ θ,   with H⁻¹ A H = Λ.
Let H⁻¹ z(t) = u(t) and H⁻¹ θ = s:
u'(t) = Λ u(t) + s   =>  u₁'(t) = λ₁ u₁(t) + s₁
                         u₂'(t) = λ₂ u₂(t) + s₂
14.5 System of Equations: First-Order Linear Differential Equations - Diagonalization
Now, we have u'(t) = Λ u(t) + s
=> u₁'(t) = λ₁ u₁(t) + s₁
   u₂'(t) = λ₂ u₂(t) + s₂
Solution:
u₁(t) = e^{λ₁t} [u₁(0) + s₁/λ₁] - s₁/λ₁
u₂(t) = e^{λ₂t} [u₂(0) + s₂/λ₂] - s₂/λ₂
14.5 System of Equations: First-Order Linear
Differential Equations General Approach
We start with an nxn system z'(t) = Az(t) + b(t).
First, we solve the homogeneous system:
Theorem: Let z' = Az be a homogeneous linear first-order system. If z =
v e^{λt} is a solution to this system (where v = [v₁, v₂, ..., vₙ]'), then λ is an
eigenvalue of A and v is the corresponding eigenvector.
Proof: Start with z = v e^{λt} => z' = λ v e^{λt}
Substitute for z and z' in z' = Az => λ v e^{λt} = A v e^{λt}
Divide both sides by e^{λt} => λv = Av, or (A - λI)v = 0.
Thus, for a non-trivial solution, it must be that |A - λI| = 0, which is
the characteristic equation of matrix A. Thus, λ is an eigenvalue of A
and v is its associated eigenvector.
14.5 System of Equations: First-Order Linear
Differential Equations General Approach
A has n eigenvalues, λ₁, ..., λₙ, and n eigenvectors, v₁, v₂, ..., vₙ
=> each term vᵢ e^{λᵢt} is a solution to z' = Az.
Any linear combination of these terms is also a solution to z' = Az.
Thus, the general solution to the homogeneous system z' = Az is:
z(t) = Σᵢ₌₁ⁿ cᵢ vᵢ e^{λᵢt}
where c₁, ..., cₙ are arbitrary, possibly complex, constants.
If the eigenvalues are not distinct, things get a bit complicated, but
nonetheless, as repeated roots are not robust, or "structurally unstable"
(i.e., do not survive small changes in the coefficients of A), they
can generally be ignored for practical purposes.
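The general solution above turns into a short numerical routine; a sketch assuming numpy, using the A and z(0) of the example that follows:

```python
# Sketch: general solution of z'(t) = A z(t) via the eigen-decomposition above.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 2.0]])
z0 = np.array([0.0, -4.0])

lam, V = np.linalg.eig(A)          # columns of V are the eigenvectors v_i
c = np.linalg.solve(V, z0)         # constants c_i from z(0) = sum_i c_i v_i

def z(t):
    # z(t) = sum_i c_i v_i e^{lam_i t}
    return (V * (c * np.exp(lam * t))).sum(axis=1)

print(z(0.0))                      # recovers z0
print(z(1.0))                      # value of the solution at t = 1
```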
14.5 System of Equations: First-Order Linear
Differential Equations General Approach
Example: x'(t) = x(t) + 2 y(t)
         y'(t) = 3 x(t) + 2 y(t),    x(0) = 0, y(0) = -4
Rewrite the system:
z'(t) = [x'(t); y'(t)] = [1 2; 3 2][x(t); y(t)] = A z(t)
Eigenvalue equation: λ² - 3λ - 4 = 0 => λ₁, λ₂ = (-1, 4)
Find eigenvectors:
λ₁ = -1 => v₁ = (v₁,₁, v₁,₂) with v₁,₁ = -v₁,₂. Let v₁,₂ = 1 => v₁ = (-1, 1)
λ₂ = 4  => v₂ = (v₂,₁, v₂,₂) with v₂,₁ = (2/3)v₂,₂. Let v₂,₂ = 3 => v₂ = (2, 3)
Solution:
z(t) = Σᵢ cᵢ vᵢ e^{λᵢt} = c₁ [-1; 1] e^{-t} + c₂ [2; 3] e^{4t}
14.5 System of Equations: First-Order Linear
Differential Equations General Approach
Find constants from the initial conditions:
z(0) = [0; -4] = c₁ [-1; 1] + c₂ [2; 3]
=> 2x2 system: c₁ = -(8/5); c₂ = -(4/5)
Definite solution:
z(t) = -(8/5) [-1; 1] e^{-t} - (4/5) [2; 3] e^{4t}
[x(t); y(t)] = [(8/5)e^{-t} - (8/5)e^{4t}; -(8/5)e^{-t} - (12/5)e^{4t}]
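A quick numerical check of the definite solution above, assuming numpy and scipy:

```python
# Sketch: check the closed-form solution against a direct numerical solve.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[1.0, 2.0], [3.0, 2.0]])
z0 = [0.0, -4.0]

def closed_form(t):
    x = (8/5) * np.exp(-t) - (8/5) * np.exp(4*t)
    y = -(8/5) * np.exp(-t) - (12/5) * np.exp(4*t)
    return np.array([x, y])

sol = solve_ivp(lambda t, z: A @ z, (0.0, 1.0), z0, dense_output=True, rtol=1e-10)

for t in (0.0, 0.5, 1.0):
    print(t, closed_form(t), sol.sol(t))   # the two vectors should agree
```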
14.5 System of Equations: First-Order Linear
Differential Equations Phase Plane
In the single-ODE case we sketch the solution, x(t), in the x-t plane. This
will be difficult here, since our solutions are actually vectors.
Think of the solutions as points in the x-y plane. Plot the points. The
steady state corresponds to (x*, y*). The x-y plane is called the phase plane.
Phase diagrams are particularly useful for non-linear systems, where an
analytic solution may not be possible. Phase diagrams provide qualitative
information about the solution paths of nonlinear systems.
For the linear case, plot points in the x-y plane where z'(t) = 0.
Trajectories of z(t) are easy to deduce from the parameters a, b, c, and d.
For the non-linear case, we need to be more creative.
14.5 System of Equations: First-Order Linear Differential Equations - Phase Plane
First, we start with the non-linear system:
x'(t) = f(x(t), y(t))
y'(t) = g(x(t), y(t))
Second, we establish the slopes of the singular curves (ẋ = 0 and ẏ = 0) by
totally differentiating them:
f_x(x, y)dx + f_y(x, y)dy = 0
g_x(x, y)dx + g_y(x, y)dy = 0
=> dy/dx |_{ẋ=0} = -f_x/f_y (say > 0);   dy/dx |_{ẏ=0} = -g_x/g_y (say < 0)
14.5 System of Equations: First-Order Linear Differential Equations - Phase Plane
[Phase-plane diagram: the curves ẋ = 0 and ẏ = 0 drawn in the (x, y) plane, with
the steady state at x*.]
Now, establish the directions of motion. Suppose that
∂ẋ/∂x = f_x < 0 and ∂ẏ/∂y = g_y < 0.
[Phase-plane diagrams: directions of motion around the ẋ = 0 and ẏ = 0 curves,
with the steady state at (x*, y*).]
[Phase-plane diagrams: a Saddlepath and a Focus.]
[Phase-plane diagram: a Limit Cycle.]
14.5 System of Equations: First-Order Linear Differential Equations - Phase Plane
Example:
x'(t) = x(t) + 2 y(t)
y'(t) = 3 x(t) + 2 y(t),    x(0) = 0, y(0) = -4
Plot some points in the x-y plane: (-2, 4); (1, 0); (2, -2); (-3, -1).
At each point, z'(t) = A z(t) gives the direction of motion:
z' = [1 2; 3 2](-2, 4)' = (6, 2)
z' = [1 2; 3 2](1, 0)' = (1, 3)
z' = [1 2; 3 2](2, -2)' = (-2, 2)
z' = [1 2; 3 2](-3, -1)' = (-5, -11)
14.5 System of Equations: First-Order Linear
Differential Equations Phase Plane
Plot the trajectories of the solutions; the lines that follow the direction of
the eigenvectors are drawn separately.
With the exception of the two eigenvector trajectories, the trajectories move
away from the equilibrium solution (0,0).
This type of equilibrium point is called a saddle point, which is unstable.
14.5 System of Equations: First-Order Linear
Differential Equations Stability
The general solution of the homogeneous equation:
z(t) = Σᵢ₌₁ⁿ cᵢ vᵢ e^{λᵢt}
The stability depends on the eigenvalues. Recall the eigenvalue equation:
λ² - tr(A) λ + |A| = 0
Three cases:
1. [tr(A)]² > 4|A| => 2 real distinct roots
- signs of λ₁, λ₂:
1) λ₁ < 0, λ₂ < 0 if tr(A) < 0, |A| > 0
2) λ₁ > 0, λ₂ > 0 if tr(A) > 0, |A| > 0
3) λᵢ > 0, λⱼ < 0 if |A| < 0
Under Situation 1, the system is globally stable. There is
convergence towards (x*, y*), which is called a tangent node.
14.5 System of Equations: First-Order Linear
Differential Equations Stability
Example: x'(t) = -5 x(t) + y(t)
         y'(t) = 4 x(t) - 2 y(t),    x(0) = 1, y(0) = 2
Eigenvalue equation: λ² + 7λ + 6 = 0 => λ₁, λ₂ = (-6, -1)
Eigenvectors:
λ₁ = -6 => v₁ = (v₁,₁, v₁,₂) with v₁,₁ = -v₁,₂. Let v₁,₁ = 1 => v₁ = (1, -1)
λ₂ = -1 => v₂ = (v₂,₁, v₂,₂) with v₂,₁ = (1/4)v₂,₂. Let v₂,₂ = 4 => v₂ = (1, 4)
Both eigenvalues are negative => the system is globally stable
(tr(A) < 0, |A| > 0).
14.5 System of Equations: First-Order Linear
Differential Equations Stability
Under Situation 2, the system is globally unstable. There is no
convergence towards (x*, y*). A shock will move the system away from
the tangent node, unless we are lucky and the system jumps to the new
tangent node.
Under Situation 3, the system is saddle-path unstable. We need cᵢ = 0
when λᵢ > 0.
14.5 System of Equations: First-Order Linear Differential Equations - Stability - Application
In economics, it is common to assume that the economy is in a
stable situation. If a model determines an equilibrium with a saddle
path, the saddle-path trajectory is assumed. If the equilibrium is
perturbed, the economy jumps to the new saddle path.
[Phase diagram: ẋ = 0 and ẏ = 0 curves with old and new steady states y₀ and y₁.
This model displays overshooting in y(t): the economy jumps from y₀ to y_J
immediately, then it converges to y₁.]
14.5 System of Equations: First-Order Linear
Differential Equations Stability
2. [tr(A)]² = 4|A| => 1 real root, equal to λ = tr(A)/2 = (a+d)/2
The system cannot be diagonalized (the eigenvectors are the same!).
x(t) = C₁ e^{λt} + C₂ t e^{λt} + x*
y(t) = [((λ - a)/b)(C₁ + C₂ t) + C₂/b] e^{λt} + y*
The stability of the system depends on λ. If λ < 0, the system is
globally stable.
14.5 System of Equations: First-Order Linear
Differential Equations Stability
3. [tr(A)]² < 4|A| => 2 complex roots rᵢ = α ± βi
Two solutions: z₁(t) = v₁ e^{(α+βi)t} and z₂(t) = v₂ e^{(α-βi)t}.
Similar to what we did for second-order DE, we can use Euler's
formula to transform the e^{iβt} part and eliminate the complex part:
e^{iθ} = cos(θ) + i sin(θ).
Example: x'(t) = 3 x(t) - 9 y(t)
         y'(t) = 4 x(t) - 3 y(t),    x(0) = 2, y(0) = -4
Eigenvalue equation: λ² + 27 = 0 => λ₁, λ₂ = (3√3 i, -3√3 i)
Eigenvectors:
λ₁ = 3√3 i => v₁,₂ = (1/3)(1 - √3 i) v₁,₁. Let v₁,₁ = 3 => v₁ = (3, 1 - √3 i)
λ₂ = -3√3 i => v₂ is the complex conjugate: v₂ = (3, 1 + √3 i)
The solution from the first eigenvalue λ₁ = 3√3 i: z₁(t) = v₁ e^{3√3 i t}
14.5 System of Equations: First-Order Linear Differential Equations - Stability
Using Euler's formula:
z₁(t) = v₁ e^{3√3 i t} = [3; 1 - √3 i][cos(3√3 t) + i sin(3√3 t)]
      = [3 cos(3√3 t); cos(3√3 t) + √3 sin(3√3 t)]
        + i [3 sin(3√3 t); sin(3√3 t) - √3 cos(3√3 t)]
      = u(t) + i v(t)
It can be shown that both u(t) and v(t) are independent solutions. We
can use them to get a general solution to the homogeneous system:
z(t) = c₁ u(t) + c₂ v(t)
14.5 System of Equations: First-Order Linear Differential Equations - Example
Now, we have a system
x'(t) = 4 x(t) + 5 y(t) + 2
y'(t) = 5 x(t) + 4 y(t) + 4
Let's rewrite the system using linear algebra:
z'(t) = [4 5; 5 4][x(t); y(t)] + [2; 4]
Eigenvalue equation: λ² - 8λ - 9 = 0 => λ₁, λ₂ = (9, -1)
u₁'(t) = 9 u₁(t) + s₁    (unstable equation)
u₂'(t) = -1 u₂(t) + s₂   (stable equation)
Solution:
u₁(t) = e^{9t} [u₁(0) + s₁/9] - s₁/9
u₂(t) = e^{-t} [u₂(0) - s₂] + s₂
14.5 System of Equations: First-Order Linear Differential Equations - Example
Use the eigenvector matrix, H, to transform the system:
H = [1 1; 1 -1]   (columns are the eigenvectors of A),   H⁻¹ = (1/2)[1 1; 1 -1]
s = H⁻¹ [2; 4] = [3; -1]   =>   s₁ = 3, s₂ = -1
u₁(t) = e^{9t} [u₁(0) + 3/9] - 3/9
u₂(t) = e^{-t} [u₂(0) + 1] - 1
Transform back: z(t) = H u(t)   =>   x(t) = u₁(t) + u₂(t),  y(t) = u₁(t) - u₂(t).
We need [x(0), y(0)] = (x₀, y₀) to obtain u₁(0) and u₂(0), via u(0) = H⁻¹ z(0).
14.6 Analytical Solutions
A function y is called a solution in the extended sense of the differential
equation y'(t) = f(t,y) with y(t₀) = y₀ if y is absolutely continuous, y
satisfies the differential equation a.e., and y satisfies the initial condition.
Theorem: Carathéodory's existence theorem
Consider the differential equation y'(t) = f(t,y), y(t₀) = y₀,
with f(t,y) defined on the rectangular domain
R = {(t,y) : |t - t₀| ≤ a, |y - y₀| ≤ b}.
If the function f(t,y) satisfies the following three conditions:
- f(t,y) is continuous in y for each fixed t,
- f(t,y) is measurable in t for each fixed y,
- there is a Lebesgue-integrable function m(t), |t - t₀| ≤ a, such that
|f(t,y)| ≤ m(t) for all (t,y) ∈ R,
then the differential equation has a solution in the extended sense in a
neighborhood of the initial condition.
14.6 Analytical Solutions
Carathéodory's existence theorem states that an ODE has a
solution under some mild conditions.
It is a generalization of Peano's existence theorem, which requires the
right-hand side of the first-order ODE to be continuous. Peano's
theorem also applies to higher dimensions, when the domain of f(.) is
an open subset of R × Rⁿ.
These theorems are general, imposing mild restrictions on f(.). The
Picard-Lindelöf theorem (or Cauchy-Lipschitz theorem) establishes conditions
for the existence and uniqueness of solutions to first-order equations
with given initial conditions. Under this theorem, f(.) is Lipschitz
continuous (with bounded derivatives) in y and continuous in t.
14.6 Numerical Solutions
As the previous theorems show, it is common to find ODEs that
cannot be solved analytically, in which case we have to satisfy ourselves
with an approximation to the solution.
Numerical ordinary differential equations is the part of numerical analysis
which studies the numerical solution of ODEs. This field is also known
under the name numerical integration, but some people reserve this term
for the computation of integrals.
There are several algorithms to compute an approximate solution to
an ODE.
A simple method is to use techniques from calculus to obtain a series
expansion of the solution. An example is the Taylor Series Method.
14.6 Numerical Solutions: Taylor Series Method
The Taylor series method is a straightforward adaptation of classic
calculus to develop the solution as an infinite series.
The method is not strictly a numerical method, but it is used in
conjunction with numerical schemes.
Problem: Computers usually cannot be programmed to construct the
terms, and the order of the expansion is a priori unknown.
From the Taylor series expansion:
y(x) = y(x₀) + h y'(x₀) + (h²/2!) y''(x₀) + (h³/3!) y'''(x₀) + (h⁴/4!) y⁽⁴⁾(x₀) + ...
The step size is defined as:
h = x - x₀
Using the ODE to get all the derivatives and the initial conditions, a
solution to the ODE can be approximated.
14.6 Numerical Solutions: Taylor Series Method
Example: ODE y'(x) = x + y,   y₀ = 1.
Analytical solution: y(x) = 2 e^x - x - 1
Let's try to approximate y(x) using a Taylor series expansion.
- First, we need the jth order derivatives for j = 1, 2, 3, ...
y'(x) = x + y(x)          y'(0) = 0 + 1 = 1
y''(x) = 1 + y'(x)        y''(0) = 1 + 1 = 2
y'''(x) = y''(x)          y'''(0) = 2
y⁽⁴⁾(x) = y'''(x)         y⁽⁴⁾(0) = 2
14.6 Numerical Solutions: Taylor Series Method
- Second, replace in the Taylor series expansion:
y(x) = y(x₀) + h y'(x₀) + (h²/2!) y''(x₀) + (h³/3!) y'''(x₀) + (h⁴/4!) y⁽⁴⁾(x₀) + ...
Note: The Taylor series is a function of x₀ and h. Plug in the
initial conditions (n = 4):
y(h) = 1 + h + (h²/2!)·2 + (h³/3!)·2 + (h⁴/4!)·2 + Error
Resulting in the equation:
y(h) = 1 + h + h² + h³/3 + h⁴/12 + Error
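The fourth-order approximation above is easy to tabulate; a sketch assuming numpy:

```python
# Sketch: fourth-order Taylor approximation for y' = x + y, y(0) = 1 (x0 = 0),
# compared with the exact solution y(x) = 2 e^x - x - 1.
import numpy as np

def taylor4(h):
    # y(h) = 1 + h + h^2 + h^3/3 + h^4/12  (from y(0)=1, y'(0)=1, y''=y'''=y''''=2 at 0)
    return 1 + h + h**2 + h**3 / 3 + h**4 / 12

def exact(h):
    return 2 * np.exp(h) - h - 1

for h in (0.1, 0.5, 1.0, 2.0):
    print(h, taylor4(h), exact(h))   # accuracy deteriorates as h grows
```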
14.6 Numerical Solutions: Taylor Series Method
The results (x₀ = 0):
h      Second y(h)   Third y(h)   Fourth y(h)   Exact Solution
0 1.00000 1.00000 1.00000 1.00000
0.1 1.11000 1.11033 1.11034 1.11034
0.2 1.24000 1.24267 1.24280 1.24281
0.3 1.39000 1.39900 1.39968 1.39972
0.4 1.56000 1.58133 1.58347 1.58365
0.5 1.75000 1.79167 1.79688 1.79744
0.6 1.96000 2.03200 2.04280 2.04424
0.7 2.19000 2.30433 2.32434 2.32751
0.8 2.44000 2.61067 2.64480 2.65108
0.9 2.71000 2.95300 3.00768 3.01921
1 3.00000 3.33333 3.41667 3.43656
1.1 3.31000 3.75367 3.87568 3.90833
1.2 3.64000 4.21600 4.38880 4.44023
1.3 3.99000 4.72233 4.96034 5.03859
1.4 4.36000 5.27467 5.59480 5.71040
1.5 4.75000 5.87500 6.29688 6.46338
1.6 5.16000 6.52533 7.07147 7.30606
1.7 5.59000 7.22767 7.92368 8.24789
1.8 6.04000 7.98400 8.85880 9.29929
1.9 6.51000 8.79633 9.88234 10.47179
2 7.00000 9.66667 11.00000 11.77811
[Chart: Taylor Series Example - second-, third-, and fourth-order approximations
vs. the exact solution as functions of h.]
14.6 Numerical Solutions: Taylor Series Method
Note that in the last set of terms we start to lose accuracy for
the 4th order with big h:
Error = (h⁵/5!) y⁽⁵⁾(ξ),   0 < ξ < h
The error is difficult to estimate: all we know is that ξ is in the range
0 < ξ < h.
[Chart: Taylor Series Example - the approximations vs. the exact solution, repeated.]
14.6 Numerical Solutions: Taylor Series Method
Numerical analysis is an art. The number of terms we choose is a
matter of judgment and experience.
We usually truncate the Taylor series when the contribution of the
last term is negligible to the number of decimal places to which we
are working.
Things can get complicated for higher-order ODEs.
Example: y''(x) = 3 + x - y²,   y(0) = 1, y'(0) = -2
y'''(x) = 1 - 2y y'
y⁽⁴⁾(x) = -2y y'' - 2(y')²
y⁽⁵⁾(x) = -2y y''' - 6y' y''
The higher-order terms can be calculated from previous values, but
they are difficult to calculate. The Euler method can be used in these cases.
14.6 Numerical Solutions: Euler Method
One thing about the Taylor series is that the error is small when h
is small, and only a few terms are needed for good accuracy.
The Euler method may be thought of as an extreme of the idea of a
Taylor series having a small error when h is extremely small. The
Euler method is a first-order Taylor series, with each step having an
update of the derivative and the y term:
y(x₀ + h) = y(x₀) + h y'(x₀) + Error,   Error = (h²/2!) y''(ξ),  x₀ < ξ < x₀ + h
The Euler method can have the algorithm, where the coefficients
are updated each time step:
y_{n+1} = y_n + h y'_n + O(h²) error
The first derivative and the initial y values are updated each
iteration.
14.6 Numerical Solutions: Euler Method
dy/dx = f(x, y);   y(x₀) = y₀
[Figure: straight-line (Euler) approximation to y over successive steps of size h
at x₀, x₁, x₂, x₃, ...]
14.6 Numerical Solutions: Euler Method
Consider: y'(x) = x + y
The initial condition is: y(0) = 1
The step size is: h = 0.02
The analytical solution is: y(x) = 2 e^x - x - 1
The algorithm has a loop using the initial conditions and the
definition of the derivative:
The derivative is calculated as: y'ᵢ = xᵢ + yᵢ
The next y value is calculated: y_{i+1} = yᵢ + h y'ᵢ
Take the next step: x_{i+1} = xᵢ + h
The results
xn       yn        y'n       hy'n      Exact Solution   Error
0 1.00000 1.00000 0.02000 1.00000 0.00000
0.02 1.02000 1.04000 0.02080 1.02040 -0.00040
0.04 1.04080 1.08080 0.02162 1.04162 -0.00082
0.06 1.06242 1.12242 0.02245 1.06367 -0.00126
0.08 1.08486 1.16486 0.02330 1.08657 -0.00171
0.1 1.10816 1.20816 0.02416 1.11034 -0.00218
0.12 1.13232 1.25232 0.02505 1.13499 -0.00267
0.14 1.15737 1.29737 0.02595 1.16055 -0.00318
0.16 1.18332 1.34332 0.02687 1.18702 -0.00370
0.18 1.21019 1.39019 0.02780 1.21443 -0.00425
0.2 1.23799 1.43799 0.02876 1.24281 -0.00482
14.6 Numerical Solutions: Euler Method
[Chart: Euler Example Problem - Euler approximation y vs. the exact solution
over 0 ≤ x ≤ 0.4.]
Compare the error at y(0.1) with h = 0.02:
Error = 1.1103 - 1.1081 = 0.0022
If we want the error to be smaller than 0.0001, we need to reduce the
step size by a factor of
Reduction = 0.0022 / 0.0001 = 22
to get the desired error.
14.6 Numerical Solutions: Euler Method
The trouble with this method is:
- Lack of accuracy
- Small step size required
The Euler method only uses the previously computed value yₙ to
determine y_{n+1}. This can be generalized to include more past values.
These methods are called multi-step methods.
Note: For the simple Euler method, we use the slope at the
beginning of the interval, y'ₙ, to determine the increment to the
function, but this is wrong unless the slope is constant (i.e., the
solution is linear).
Extra
Introduction to Stochastic Processes
and Calculus
Preliminaries: Sigma-algebra
Definition: A sigma-algebra F is a set of subsets of Ω s.t.:
- ∅ ∈ F.
- If ω ∈ F, then its complement Ω\ω ∈ F.
- If ω₁, ω₂, ..., ωₙ, ... ∈ F, then ∪_{i≥1} ωᵢ ∈ F.
The pair (Ω, F) is called a measurable space.
There may be certain elements in Ω that are not in F.
The smallest sigma-algebra generated by the open sets of the real space
(Ω = Rⁿ) is called the Borel sigma-algebra.
Preliminaries: Probability Measure
Definition: Probability measure
A probability space is the triplet (Ω, F, P), where P: F → [0,1] is a
function from F to [0,1].
P(∅) = 0 and P(Ω) = 1 always.
The elements in Ω that are not in F have no probability.
We can extend the probability definition by assigning a probability
of zero to such elements.
Preliminaries: Stochastic Process
Definition: Random variable x wrt (Ω, F, P)
x : Ω → Rⁿ is a measurable function (i.e., x⁻¹(z) ∈ F for all (Borel) z in Rⁿ).
Hence, P: F → [0,1] is translated to an equivalent function
μ_x: Rⁿ → [0,1], which is the distribution of x.
Definition: Stochastic Process X(t, ω)
A stochastic process is a parameterized collection of random variables x(t),
or X(t, ω) = {x(t)}_t.
Normally, t is taken as time.
Think of ω as one outcome from a set of possible outcomes of an
experiment. Then, X(t, ω) is the state of an outcome of the
experiment at time t.
Stochastic Process - Illustration
[Figure: three sample paths X(t, ω₁), X(t, ω₂), X(t, ω₃) over time. Fixing a time
gives a random variable: Y₁ = X(t₁, ω) and Y₂ = X(t₂, ω) are two different random
variables. The stochastic process X(t, ω) is the collection of these Yᵢ's.]
Stochastic Process: A few considerations
A stochastic process is a function of a continuous variable (most
often: time).
The question now becomes how to determine the continuity and
differentiability of a stochastic process.
It is not simple, since a stochastic process is not deterministic.
We use the same definitions of continuity, but now look at
expectations and probabilities.
A deterministic function f(t) is continuous if:
f(t₁) → f(t₂) as t₁ → t₂.
To determine if a stochastic process X(t, ω) is continuous, we
need to determine:
P(X(t₁, ω) → X(t₂, ω)) as t₁ → t₂, or
E(X(t₁, ω) - X(t₂, ω)) → 0 as t₁ → t₂.
Stochastic Process: Kolmogorov Continuity Theorem
If for all T > 0, there exist a, b, δ > 0 such that:
E(|X(t₁, ω) - X(t₂, ω)|^a) ≤ δ |t₁ - t₂|^{1 + b}
then X(t, ω) can be considered a continuous stochastic process.
Brownian motion is a continuous stochastic process.
Brownian motion (Wiener process): X(t, ω) is almost surely
continuous, has independent normally distributed (N(0, t - s))
increments, and X(t = 0, ω) = 0. (A continuous random walk.)
Robert Brown (1773-1858)
Andrey Kolmogorov (1903-1987)
Stochastic Process: Wiener process
Let a variable z(t) be almost surely continuous, with z(t=0) = 0.
Define N(μ, v) as a normal distribution with mean μ and variance v.
z(t) changes in a small interval of time Δt by Δz.
Definition: The variable z(t) follows a Wiener process if
Δz = ε √Δt, where ε ~ i.i.d. N(0,1)
The values of Δz for any 2 different (non-overlapping) periods of
time are independent.
Norbert Wiener (1894-1964, USA)
Stochastic Process: Wiener process - Properties
Properties of Wiener processes:
Mean of Δz is 0 and variance of Δz is Δt (standard deviation is √Δt).
Let N = T/Δt, then
z(T) - z(0) = Σᵢ₌₁ᴺ εᵢ √Δt
Thus, z(t) has independent increments, Δz, following a N(0, Δt).
Typical notation: W_t
Theorem (Levy): Quadratic variation.
As the partition of [0, T] becomes finer (a smaller norm), say ||P|| → 0,
lim_{||P||→0} Σₜ₌₁ᴺ (W_t - W_{t-1})² = T
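A small simulation (assuming numpy) illustrates both properties: the terminal value is N(0, T) and the quadratic variation is close to T.

```python
# Sketch: simulate a Wiener process on [0, T] and check its quadratic variation.
import numpy as np

rng = np.random.default_rng(0)
T, N = 1.0, 100_000
dt = T / N

dW = rng.normal(0.0, np.sqrt(dt), size=N)   # increments ~ N(0, dt)
W = np.concatenate(([0.0], np.cumsum(dW)))  # W_0 = 0, W_t built from increments

print("W(T), one draw from N(0, T):", W[-1])
print("quadratic variation:", np.sum(np.diff(W) ** 2))   # close to T = 1.0
```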
Stochastic Process: Generalized Wiener process
A Wiener process has a drift rate (i.e., average change per unit time)
of 0 and a variance rate of 1.
In a generalized Wiener process, the drift rate and the variance rate
can be set equal to any chosen constants.
The variable x follows a Generalized Wiener process with a drift rate
of μ and a variance rate of σ² if
dx = μ dt + σ dz
- The change in the value of x in any time interval T is normally
distributed with:
- Mean change in x in time T equal to μT
- Variance of the change in x in time T equal to σ²T
Stochastic Process: Generalized Wiener process
In an Itô process the drift rate and the variance rate are functions of
time:
dx = a(x,t) dt + b(x,t) dz
The discrete time equivalent, Δx = a(x,t) Δt + b(x,t) ε √Δt, is only true
in the limit as Δt tends to 0.
Example: Ito process for stock prices (S)
dS = μ S dt + σ S dz
where μ is the expected return and σ is the volatility.
The discrete time equivalent is ΔS = μ S Δt + σ S ε √Δt,
where ΔS/S ~ N(μ Δt, σ² Δt).
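The discrete-time equivalent can be simulated directly; a sketch assuming numpy, with illustrative values of μ, σ, and S₀:

```python
# Sketch: discrete-time simulation of the stock-price Ito process
# dS = mu S dt + sigma S dz.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, S0 = 0.08, 0.20, 100.0
T, N = 1.0, 252
dt = T / N

S = np.empty(N + 1)
S[0] = S0
for i in range(N):
    eps = rng.standard_normal()
    S[i + 1] = S[i] + mu * S[i] * dt + sigma * S[i] * eps * np.sqrt(dt)  # dS = mu S dt + sigma S dz

print(S[-1])   # terminal price of one simulated path
```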
Stochastic Process and Calculus: Motivation
Consider a process which is the square of a Brownian motion:
Y(t) = W(t)²
This process is always non-negative, Y(0) = 0, Y(t) has infinitely many
zeroes on t > 0, and E[Y(t)] = E[W(t)²] = t.
Question: What is the stochastic differential of Y(t)?
Using standard calculus: dY(t) = 2W(t) dW(t)
=> Y(t) = ∫₀ᵗ dY = ∫₀ᵗ 2W(s) dW(s)
Consider ∫₀ᵗ 2W(s) dW(s):
By definition, the increments of W(t) are independent, with zero
mean.
Stochastic Process and Calculus: Motivation
Therefore, the expected value, or mean, of the summation will be
zero.
But the mean of Y(t) = W(t)² is t, which is definitely not zero! The
two stochastic processes don't agree even in the mean, so
something is not right. If we want to keep the integral definition
and limit processes, then the rules of calculus will have to change.
Stochastic Processes: Applications (1)
We saw several systems expressed as differential equations.
Example: Population growth, say dN/dt = a(t)N(t).
There is no stochastic component to N(t): given initial conditions, we
can derive without error the evolution of N(t) over time.
However, in real-world applications, several factors introduce a
random element into such models:
a(t) = b(t) + σ(t) × Noise = b(t) + σ(t) W(t),
where W(t) is a stochastic process that represents the source of
randomness (for example, white noise).
A simple differential equation becomes a stochastic differential
equation.
Stochastic Processes: Applications (2)
Other applications where stochastic processes are used:
Filtering problems (Kalman filter)
- Minimize the expected estimation error for a system state.
Optimal Stopping Theorem
Financial Mathematics
- The theory of option pricing uses the differential heat equation applied
to a geometric Brownian motion (e^{μt + σW(t)}).
Stochastic calculus: Introduction (1)
Let us consider:
dx/dt = b(t,x) + σ(t,x) W(t)
- White noise assumptions on W(t) would make W(t) discontinuous.
This is bad news.
Hence, we consider the discrete version of the equation:
x_{k+1} - x_k = b(t_k, x_k) Δt_k + σ(t_k, x_k) W(t_k) Δt_k    (x_k = x(t_k, ω))
- We can make white noise assumptions on ΔB_k, where
ΔB_k = W(t_k) Δt_k.
- It turns out that B_k can only be a Brownian motion.
Stochastic calculus: Introduction (2)
Now we have another problem:
x(t) = Σ b(t_k, x_k) Δt_k + Σ σ(t_k, x_k) ΔB_k
As Δt_k → 0, Σ b(t_k, x_k) Δt_k → time integral of b(t,x).
What about Σ σ(t_k, x_k) ΔB_k?
Hence, we need to find expressions for the integral and
differentiation of a function of a stochastic process.
Again, we have a problem:
Brownian motion is continuous, but not differentiable
(Riemann integrals will not work!)
Stochastic Calculus provides us a means to calculate the integral
of a stochastic process, but not differentiation.
It makes sense, as most stochastic processes are not differentiable.
Stochastic calculus: Introduction (3)
We use the definition of the integral of deterministic functions as a
base:
∫ φ(t, ω) dB = Σ φ(t_k*, ω) ΔB_k,   where t_k* ∈ [t_k, t_{k+1}), as
t_{k+1} - t_k → 0.
But, we cannot choose just any t_k*; t_k* ∈ [t_k, t_{k+1}].
Example: if t_k* = t_k, then E(Σ B_{k*} ΔB_k) = 0.
Example: if t_k* = t_{k+1}, then E(Σ B_{k*} ΔB_k) = t.
We need to be careful in choosing t_k*.
Stochastic calculus: Ito and Stratonovich
Two choices for t_k* are popular:
- If t_k* = t_k, then it is called the Ito integral.
- If t_k* = (t_k + t_{k+1})/2, then it is called the Stratonovich integral.
We will concentrate on the Ito integral, as it provides computational
and conceptual simplicity.
The Ito and Stratonovich integrals differ by a simple time
integral only.
Kiyoshi Ito (1915-2008, Japan)
Stochastic calculus: Ito's Theorem (1)
For a given f(t, ω), if:
1. f(t, ω) is F_t adapted (a process that cannot look into the future):
- f(t, ω) can be determined by t and the values of B_t(ω) up to t.
- B_{t/2}(ω) is F_t adapted, but B_{2t}(ω) is not F_t adapted.
2. E(∫ f²(t, ω) dt) < ∞ (expected energy is bounded)
Then
∫ f(t, ω) dB_t(ω) = Σ φ(t_k, ω)(B_{k+1} - B_k), and
E(|∫ f(t, ω) dB_t(ω)|²) = E(∫ f²(t, ω) dt)   (Ito isometry)
=> the integral ∫ f(t, ω) dB can be defined. f(t, ω) is said to be B-integrable
(integrable = bounded integral).
Stochastic calculus: Ito's Theorem (1)
Then
∫ f(t, ω) dB_t(ω) = Σ φ(t_k, ω)(B_{k+1} - B_k), and
E(|∫ f(t, ω) dB_t(ω)|²) = E(∫ f²(t, ω) dt)   (Ito isometry)
=> the integral ∫ f(t, ω) dB can be defined. f(t, ω) is said to be B-integrable
(integrable = bounded integral).
The φ(t, ω) are called elementary functions.
- Their values are constant in each interval [t_k, t_{k+1}].
- E(∫ |f(t, ω) - φ(t, ω)|² dt) → 0 (the difference in expected energy is
insignificant).
Stochastic calculus: Ito's Theorem (2)
If f(t, ω) = B(t, ω) => select φ(t, ω) = B(t_k, ω) when t ∈ [t_k, t_{k+1}).
Then, we have: ∫ B(t, ω) dB(t, ω) = lim Σ B(t_k, ω)(B(t_{k+1}, ω) - B(t_k, ω))
Some algebra (recalling 2b(a - b) = a² - b² - (a - b)²):
(1) Σ B(t_k, ω)(B(t_{k+1}, ω) - B(t_k, ω))
    = (1/2) Σ [B²(t_{k+1}, ω) - B²(t_k, ω) - (B(t_{k+1}, ω) - B(t_k, ω))²]
(2) Σ [B²(t_{k+1}, ω) - B²(t_k, ω)] = B²(t) - B²(0)
(3) lim_{Δt→0} Σ (B(t_{k+1}, ω) - B(t_k, ω))² = t (quadratic variation property of B(t))
=> ∫₀ᵗ B(s, ω) dB(s, ω) = lim_{Δt→0} (1/2) Σ [B²(t_{k+1}, ω) - B²(t_k, ω)
    - (B(t_{k+1}, ω) - B(t_k, ω))²] = B²(t, ω)/2 - t/2.
Note: Ito's integral gives us more than the expected B²(t, ω)/2: an extra
-t/2 term. This is due to the time-variance of the Brownian motion.
Stochastic calculus: Ito's Theorem (3)
Simple properties of Ito integrals:
- ∫ [a X(t,ω) + b Y(t,ω)] dB(t) = a ∫ X(t,ω) dB(t) + b ∫ Y(t,ω) dB(t)
- E[∫ X(t,ω) dB(t)] = 0
- ∫ X(t,ω) dB(t) is F_t measurable
Stochastic calculus: Ito's Process (1)
For a general process x(t,ω), how do we define the integral ∫ f(t,x) dx?
If x can be expressed by a stochastic differential equation, we
can calculate ∫ f(t,x) dx.
Definition:
An Ito process is a stochastic process on (Ω, F, P) which can be
represented in the form:
x(t,ω) = x(0) + ∫₀ᵗ μ(s) ds + ∫₀ᵗ σ(s) dB(s)
where μ and σ may be functions of x and other variables. Both are
processes with finite (square) Riemann integrals.
Alternatively, we can say x(t,ω) is called an Ito process if
dx(t) = μ(t) dt + σ(t) dB(t).
Stochastic calculus: Ito's Process (1)
Ito's Formula
Let x(t,ω) be an Ito process: dx(t) = μ(t) dt + σ(t) dB(t).
Let f(t,x) be a twice continuously differentiable function (in
particular, all second partial derivatives are continuous functions).
Then, f(t,x) is also an Ito process and
df(t,x) = (∂f/∂t) dt + (∂f/∂x) dx(t) + ½ (∂²f/∂x²)(dx(t))²
        = [(∂f/∂t) + (∂f/∂x) μ(t) + ½ (∂²f/∂x²) σ²(t)] dt + (∂f/∂x) σ(t) dB(t)
Note: Ito processes are closed under twice continuously differentiable
transformations.
Useful rules: dt·dt = dt·dB(t) = dB(t)·dt = 0
              dB(t)·dB(t) = (dB(t))² = dt
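Ito's formula as stated above can be wrapped in a tiny symbolic helper; a sketch assuming sympy, where mu and sigma stand for the drift and diffusion of x(t):

```python
# Sketch: apply Ito's formula symbolically for dx = mu dt + sigma dB.
import sympy as sp

t, x, mu, sigma = sp.symbols('t x mu sigma')

def ito_drift_diffusion(f):
    """Return (drift, diffusion) of df for dx = mu dt + sigma dB."""
    drift = sp.diff(f, t) + sp.diff(f, x) * mu + sp.Rational(1, 2) * sp.diff(f, x, 2) * sigma**2
    diffusion = sp.diff(f, x) * sigma
    return sp.simplify(drift), sp.simplify(diffusion)

# Example: f = x^2 with x = B(t) (mu = 0, sigma = 1) gives df = dt + 2B dB.
print(ito_drift_diffusion(x**2))                                         # general mu, sigma
print([e.subs({mu: 0, sigma: 1}) for e in ito_drift_diffusion(x**2)])    # [1, 2*x]
```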
Stochastic calculus: Ito's Process (2)
Check:
Let B(t,ω) = X(t) (think of μ = 0, σ = 1), then define:
f(t,ω) = B²(t,ω)/2.
Now,
d(B²(t,ω)/2) = 0·dt + B(t,ω) dB(t) + ½ (∂²f/∂x²)(dB(t))²
             = B(t,ω) dB(t) + ½ dt
=> B²(t,ω)/2 = ∫ B(t,ω) dB_t + ∫ ½ dt
or ∫ B(t,ω) dB_t = B²(t,ω)/2 - t/2
Stochastic calculus: Ito's Process (3)
Example:
Let f(t,ω) = e^{tB(t)}.
Now,
df(t,ω) = e^{tB(t)} B(t) dt + t e^{tB(t)} dB(t) + ½ t² e^{tB(t)} (dB(t))²
        = e^{tB(t)} (B(t) + ½ t²) dt + t e^{tB(t)} dB(t)
Example: Z(t) = f(t,ω) = e^{rt + σB(t)}.
Now,
dZ(t) = Z(t) r dt + σ Z(t) dB(t) + ½ σ² Z(t) dt
      = (r + ½ σ²) Z(t) dt + σ Z(t) dB(t)
Stochastic calculus: Application
Let S(t), a stock price, follow a geometric Brownian motion:
dS(t) = μ S(t) dt + σ S(t) dB(t).
The payoff of an option f(S,T) is known at T.
Applying Ito's formula:
df(S,t) = (∂f/∂t) dt + (∂f/∂S) dS(t) + ½ (∂²f/∂S²)(dS(t))²
        = (∂f/∂t) dt + (∂f/∂S)[μ S(t) dt + σ S(t) dB(t)] + ½ (∂²f/∂S²) σ² S(t)² dt
        = [(∂f/∂t) + μ S(t)(∂f/∂S) + ½ (∂²f/∂S²) σ² S(t)²] dt + (∂f/∂S) σ S(t) dB(t)
Form a (delta-hedge) portfolio: hold one option and continuously trade
in the stock in order to hold -(∂f/∂S) shares. At t, the value of the
portfolio is:
Π(t) = f(S,t) - S(t) ∂f/∂S
Stochastic calculus: Application
Let R be the accumulated profits from the portfolio. Then, over the
time period [t, t + dt], the instantaneous profit or loss is:
dR = df(S,t) - (∂f/∂S) dS(t)
Substituting using Ito's formula for df(S,t) and for dS(t), we get:
dR = [(∂f/∂t) + ½ (∂²f/∂S²) σ² S(t)²] dt
Note: This is not an SDE (dB(t) has disappeared: riskless portfolio!)
Since there is no risk, the rate of return of the portfolio should be r,
the rate on a riskless asset.
Stochastic calculus: Application
That is,
dR = r Π(t) dt = r [f(S,t) - S(t) ∂f/∂S] dt
=> r [f(S,t) - S(t) ∂f/∂S] dt = [(∂f/∂t) + ½ (∂²f/∂S²) σ² S(t)²] dt
=> (∂f/∂t) + ½ (∂²f/∂S²) σ² S(t)² + r S(t) (∂f/∂S) - r f(S,t) = 0
This is the Black-Scholes PDE. Given the boundary conditions for a
call option, C(S,t), it can be solved using the standard methods.
Boundary conditions:
C(0,t) = 0 for all t
C(S,t) → S, as S → ∞
C(S,T) = max(S - K, 0),   K = strike price
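For reference, the well-known closed-form solution of this PDE for a European call (not derived in the slides) can be coded directly; a sketch assuming scipy, with illustrative inputs:

```python
# Sketch: the standard closed-form Black-Scholes price for a European call.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

print(bs_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2))  # roughly 10.45
```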
Stochastic calculus: Solving a stochastic DE
Make a guess (hope you are lucky!)
Example: We are asked to solve the stochastic DE:
dZ(t) = σ Z(t) dB(t).
We need an inspired guess, so we try: Y(t) = e^{rt + σB(t)},
with SDE: dY(t) = (r + ½ σ²) Y(t) dt + σ Y(t) dB(t).
Replace in the given SDE:
=> (r + ½ σ²) Y(t) dt + σ Y(t) dB(t) = σ Y(t) dB(t)
=> r = -½ σ²
Solution: Y(t) = exp(-½ σ² t + σ B(t)). (This solution is
called the Doléans exponential of Brownian motion.)
Note: SDEs with closed-form solutions are rare.
For Man U fans: The Black Scholes