Chapter 15
Dynamic Optimization
[Portraits: Pierre de Fermat (1601?-1665), L. Euler, Lev Pontryagin (1908-1988)]
15.1 Dynamic Optimization
In this chapter, we will have a dynamic system, i.e., one evolving over time (in either discrete or continuous time). Our goal: optimize the system.
We will have a particular class of optimization problems with the following features:
1) Aggregation over time in the objective function.
2) Variables linked (constrained) across time.
Example: Planner's problem - maximization of consumption over time:
\max \int_0^T e^{-rt} C(t) dt
subject to F = F(K), with \partial F/\partial K > 0 and \partial^2 F/\partial K^2 < 0.
Change in capital stock: \dot{K} = F(K) - C - \delta K
Notation:
K(t) = capital (the only factor of production)
F(K) = well-behaved output function, i.e., F'(K) > 0 and F''(K) < 0 for K > 0
C(t) = consumption
I(t) = investment = F(K) - C(t)
\delta = constant rate of depreciation of capital
We are not looking for a single optimal value C*, but for a path of values C(t) that produces an optimal value for the integral (the aggregate discounted consumption over time).
15.1 Calculus of Variations - Classic Example
The calculus of variations involves finding an extremum (maximum or minimum) of a quantity that is expressible as an integral.
Question: What is the shortest path between two points in a plane?
You know the answer (a straight line), but you probably have not seen a proof of this; the calculus of variations provides such a proof.
Consider two points in the x-y plane, as shown in the figure. An arbitrary path joining the points follows the general curve y = y(x), and an element of length along the path is
ds = \sqrt{dx^2 + dy^2}
[Figure: a curve y = y(x) joining points 1 = (x_1, y_1) and 2 = (x_2, y_2), with length element ds = \sqrt{dx^2 + dy^2}.]
We can rewrite this as
ds = \sqrt{1 + y'(x)^2} dx,
which is valid because dy = (dy/dx) dx = y'(x) dx. Thus, the length is
L = \int ds = \int_{x_1}^{x_2} \sqrt{1 + y'(x)^2} dx.
Note that we have converted the problem from an integral along a path to an integral over x.
We have thus succeeded in writing the problem down, but we need some additional mathematical machinery to find the path for which L is an extremum (a minimum in this case).
In our usual minimizing or maximizing of a function f(x), we would take the derivative and find its zeroes. These points of zero slope are stationary points, i.e., the function is stationary at those points: for values of x near such a point, the value of the function does not change (due to the zero slope).
Similarly, we want to be able to find solutions to these integrals that are stationary for infinitesimal variations in the path. This is called the calculus of variations.
The methods we will develop are called variational methods. These principles are common, and of great importance, in many areas of physics (such as quantum mechanics and general relativity) and economics.
15.1 Euler-Lagrange Equations
We will try to find an extremum (to be definite, a minimum) for an as yet unknown curve joining two points x_1 and x_2, satisfying the integral relation:
S = \int_{x_1}^{x_2} f[y(x), y'(x), x] dx.
The function f is a function of three variables, but because the path of integration is y = y(x), the integrand can be reduced to a function of just one variable, x.
To start, let's consider two curves joining points 1 and 2: the right curve y(x), and a wrong curve Y(x) that is a small displacement from the right curve, as shown in the figure. We call the difference between these curves some function \eta(x):
Y(x) = y(x) + \eta(x)   (wrong)
y = y(x)   (right)
with \eta(x_1) = \eta(x_2) = 0.
[Figure: the right curve y(x) and a wrong curve Y(x) joining points 1 = (x_1, y_1) and 2 = (x_2, y_2).]
There are infinitely many functions \eta(x) that can be wrong. We require that each wrong path be longer than the right path. To quantify how close the wrong path can be to the right one, let's write Y = y + \alpha\eta, so that
S(\alpha) = \int_{x_1}^{x_2} f[Y(x), Y'(x), x] dx = \int_{x_1}^{x_2} f[y + \alpha\eta, y' + \alpha\eta', x] dx.
Now, we can characterize the shortest path as the one for which the derivative dS/d\alpha = 0 when \alpha = 0. To differentiate the above equation with respect to \alpha, we need the partial derivative \partial f/\partial \alpha via the chain rule:
\partial f/\partial \alpha = \eta (\partial f/\partial y) + \eta' (\partial f/\partial y'),
so dS/d\alpha = 0 gives
dS/d\alpha = \int_{x_1}^{x_2} [\eta (\partial f/\partial y) + \eta' (\partial f/\partial y')] dx = 0.
The second term in the equation can be integrated by parts:
\int_{x_1}^{x_2} \eta'(x) (\partial f/\partial y') dx = [\eta(x) \partial f/\partial y']_{x_1}^{x_2} - \int_{x_1}^{x_2} \eta(x) (d/dx)(\partial f/\partial y') dx,
but the first term of this relation (the end-point term) is zero because \eta(x) is zero at the endpoints.
Our modified equation is:
dS/d\alpha = \int_{x_1}^{x_2} \eta(x) [\partial f/\partial y - (d/dx)(\partial f/\partial y')] dx = 0.
Key: since the modified equation has to be zero for any \eta(x), it leads us to the Euler-Lagrange equation:
\partial f/\partial y - (d/dx)(\partial f/\partial y') = 0.
Let's go over what we have shown. We can find a minimum (more generally, a stationary point) for the path S if we can find a function for the path that satisfies:
\partial f/\partial y - (d/dx)(\partial f/\partial y') = 0.
The procedure for using this is to set up the problem so that the quantity whose stationary path you seek is expressed as
S = \int_{x_1}^{x_2} f[y(x), y'(x), x] dx,
where f[y(x), y'(x), x] is the function appropriate to your problem. Then, write down the Euler-Lagrange equation, and solve for the function y(x) that defines the required stationary path.
15.1 Euler-Lagrange Equations - Example I
Find the shortest path between two points:
L = \int_{x_1}^{x_2} \sqrt{1 + y'(x)^2} dx.
The integrand contains our function
f(y, y', x) = \sqrt{1 + y'(x)^2}.
The two partial derivatives in the Euler-Lagrange equation are:
\partial f/\partial y = 0   and   \partial f/\partial y' = y' / \sqrt{1 + y'^2}.
The Euler-Lagrange equation gives us
(d/dx) [ y' / \sqrt{1 + y'^2} ] = 0.
This says that
y' / \sqrt{1 + y'^2} = C,   or   y'^2 = C^2 (1 + y'^2)
=> y'^2 = constant (call it m^2), so y(x) = mx + b
=> A straight line is the shortest path.
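A quick numerical cross-check of this result (a sketch, assuming numpy/scipy; the grid size and endpoint values are illustrative): discretize the length functional L and minimize it over the interior points. The minimizer lands on the straight line through the endpoints.

import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 41)              # grid on [x_1, x_2] = [0, 1] (assumed)
y1, y2 = 0.0, 2.0                          # illustrative endpoint values

def length(y_interior):
    # total polyline length sum(sqrt(dx^2 + dy^2)), endpoints held fixed
    y = np.concatenate(([y1], y_interior, [y2]))
    return np.sum(np.sqrt(np.diff(x)**2 + np.diff(y)**2))

res = minimize(length, np.zeros(len(x) - 2), method="BFGS")
y_opt = np.concatenate(([y1], res.x, [y2]))

# maximum deviation from the straight line y = y1 + (y2 - y1)*x: ~0
print(np.max(np.abs(y_opt - (y1 + (y2 - y1) * x))))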
15.1 Euler-Lagrange Equations - Example II
Intertemporal utility maximization problem:
\min \int_{t_1}^{t_2} [B - u(c(t))] dt   s.t.   c(t) = f(k(t)) - \dot{k}(t)
B: bliss level of utility; minimizing the shortfall from bliss is equivalent to maximizing utility.
Substitute the constraint into the integrand:
F[k(t), \dot{k}(t), t] = B - u(c(k(t), \dot{k}(t))) = V(k(t), \dot{k}(t))
Euler-Lagrange equation:
V_k = (d/dt) V_{\dot{k}},
with
V_k = -u'(c) f'(k)   and   V_{\dot{k}} = V_c (dc/d\dot{k}) = -u'(c)(-1) = u'(c),
so the Euler-Lagrange equation becomes
-u'(c) f'(k) = (d/dt) u'(c).
Repeating the Euler-Lagrange equation:
-u'(c) f'(k) = (d/dt) u'(c)
If we are given functional forms:
f(k(t)) = k(t)^\alpha
u(c(t)) = \ln(c(t))
then u'(c(t)) = 1/c(t), and the Euler-Lagrange equation becomes
(1/c(t)) \alpha k(t)^{\alpha-1} = -(d/dt)(1/c(t)) = \dot{c}(t)/c(t)^2
=>  \dot{c}(t)/c(t) = \alpha k(t)^{\alpha-1}.
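To visualize the dynamics implied by Example II, here is a minimal simulation sketch of \dot{c}/c = \alpha k^{\alpha-1} together with the resource constraint \dot{k} = k^\alpha - c, using simple Euler steps; \alpha, the step size, and the initial values are illustrative assumptions, not from the slides.

alpha, dt, n_steps = 0.3, 0.01, 500        # illustrative values
k, c = 1.0, 0.5                            # assumed initial capital and consumption
for _ in range(n_steps):
    dk = k**alpha - c                      # resource constraint: k' = f(k) - c
    dc = c * alpha * k**(alpha - 1)        # Euler equation: c'/c = alpha*k^(alpha-1)
    k, c = k + dt * dk, c + dt * dc
print(k, c)                                # path traced by the optimality conditions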
15.1 Dynamic Optimization - Calculus of Variations: Summary
In the general economic framework, F(.) will be the objective function, the variable x will become time, and y will be the variable to choose over time to optimize F(.). Changing notation (t for time, x(t) for the chosen path):
\max_x \int_{t_0}^{t_1} F(t, x(t), \dot{x}(t)) dt   s.t.   x(t_0) = x_0,  x(t_1) = x_1
Necessary condition (the Euler-Lagrange equation):
\partial F/\partial x - (d/dt)(\partial F/\partial \dot{x}) = 0
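The Euler-Lagrange condition can also be formed mechanically. Below is a small symbolic sketch (assuming sympy is available) that builds F_x - (d/dt) F_{\dot{x}} for a given integrand; applied to the shortest-path integrand of Example I, it returns an expression proportional to \ddot{x}, so stationary paths are straight lines.

import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')

def euler_lagrange(F):
    # F_x - d/dt F_x' for a path x(t)
    xdot = x(t).diff(t)
    return sp.simplify(F.diff(x(t)) - sp.diff(F.diff(xdot), t))

F = sp.sqrt(1 + x(t).diff(t)**2)           # shortest-path integrand from Example I
print(euler_lagrange(F))                   # -x''(t)/(1 + x'(t)**2)**(3/2): zero iff x'' = 0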
15.1 Dynamic Optimization - Limitations
The method gives extremals, but it does not tell us whether they are maxima or minima:
- Distinguishing mathematically between max and min is more difficult.
- Usually we have to use geometry to set up the problem.
The solution curve x(t) must have continuous second-order derivatives:
- This requirement comes from the integration by parts.
We find stationary states, which vary only in space, not in time:
- There are very few cases in which systems varying in time can be solved.
- Even problems involving time (e.g., brachistochrones) do not change in time.
15.2 Optimal Control Theory
Optimal control: find a control law for a given system such that a certain optimality criterion is achieved.
Typical example: minimization of a cost functional that is a function of state and control variables.
More general than the calculus of variations:
- Handles inequality restrictions on instruments and state variables.
Similar to static methods (the Lagrange method and KKT theory).
Long history in economics: Shell (1967), Arrow (1968) and Shell (1969).
The dynamic system is described by a state equation:
\dot{x}(t) = g(t, x(t), u(t)),
where x(t) = state variable, u(t) = instrument or control variable.
The control aim is to maximize the objective functional:
\int_{t_0}^{t_1} f(t, x(t), u(t)) dt
Usually the control variable u(t) will be constrained as follows:
u(t) \in \Omega(t),   t \in [t_0, t_1]
Boundary conditions: t_0, t_1, x(t_0) = x_0 fixed. Sometimes, x(t_1) = x_T.
Sometimes, we will also have additional constraints. For example,
h(x(t), u(t), t) \ge 0
Functional form assumption: f(.), g(.) are continuously differentiable.

We can convert a calculus of variations problem into a control theory problem.
Calculus of variations problem:
\max \int_{t_0}^{t_1} F(t, x(t), \dot{x}(t)) dt
Define: \dot{x}(t) = u(t), x(t_0) = x_0, x(t_1) = x_1.
Replace in the calculus of variations problem:
\max \int_{t_0}^{t_1} F(t, x(t), u(t)) dt
Notation: we use x(t) and x_t interchangeably (similarly \dot{x}(t) = \dot{x}_t).
Form a Lagrangian-type expression, the Hamiltonian:
H(t, x(t), u(t), \lambda(t)) = f(t, x(t), u(t)) + \lambda(t) g(t, x(t), u(t))
Necessary conditions (f.o.c.), called the Maximum Principle:
dH/du = df/du + \lambda dg/du = 0
\dot{\lambda} = -dH/dx   (adjoint equation)
\dot{x} = dH/d\lambda = g(t, x(t), u(t))
Boundary (transversality) conditions: x(t_0) = x_0, \lambda(t_1) = 0.
The Hamiltonian multiplier, \lambda(t), is called the co-state or adjoint variable: it measures the imputed value of stock (state variable) accumulation.
15.2 Optimal Control Theory - Pontryagin's Maximum Principle
Conditions for the necessary conditions to deliver u(t) and x(t) that maximize
\int_{t_0}^{t_1} f(t, x(t), u(t)) dt   s.t.   \dot{x}(t) = g(t, x(t), u(t)):
- The control variable must be piecewise continuous (some jumps, discontinuities OK).
- The state variable must be continuous and piecewise differentiable.
- f(.) and g(.) must be first-order differentiable w.r.t. the state variable and t, but not necessarily w.r.t. the control variable.
- The initial condition for the state variable must be finite.
- If there is no finite terminal value for the state variable, then \lambda(t_1) = 0.
Sufficiency: if f(.) and g(.) are strictly concave, then the necessary conditions are sufficient, meaning that any path satisfying these conditions does in fact solve the problem posed.
Lev Pontryagin (1908-1988, Russia/USSR)
15.2 Optimal Control Theory - Example
Let's go back to our first example:
\max \int_0^T e^{-rt} C_t dt   s.t.   \dot{K} = Q - C - \delta K,
where Q = Q(K), with \partial Q/\partial K > 0 and \partial^2 Q/\partial K^2 < 0.
Form the Hamiltonian:
H = e^{-rt} C_t + \lambda (Q - C - \delta K)   (1)   (boundaries: K_0 and K_T)
F.o.c.:
\partial H/\partial C = e^{-rt} - \lambda = 0   =>   \lambda = e^{-rt}   (2)
\partial H/\partial K = \lambda (\partial Q/\partial K - \delta) = -\dot{\lambda}   (3)
\partial H/\partial \lambda = Q - C - \delta K = \dot{K}   (4)
From (3) and (2): since \lambda = e^{-rt}, \dot{\lambda} = -r e^{-rt} = -r\lambda, and (3) gives
\partial Q/\partial K (K^*) = r + \delta   (5)
=> At the optimum (K^*), the marginal productivity of capital should equal the total user cost of capital, which is the sum of the interest and the depreciation rates.
The capital stock does not change when it reaches K^*.
We have the following dynamics:
C_t should be reduced if K_t < K^* (dQ/dK > r + \delta: capital is too productive to consume)
C_t should be increased if K_t > K^* (dQ/dK < r + \delta)
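A quick numerical sketch of condition (5), assuming the functional form Q(K) = K^\alpha and illustrative values of \alpha, r and \delta: solve Q'(K^*) = r + \delta with a root finder and compare with the closed form K^* = (\alpha/(r + \delta))^{1/(1-\alpha)}.

from scipy.optimize import brentq

alpha, r, delta = 0.3, 0.05, 0.10          # illustrative parameters
Qprime = lambda K: alpha * K**(alpha - 1)  # Q(K) = K**alpha (assumed form)

K_star = brentq(lambda K: Qprime(K) - (r + delta), 1e-6, 100.0)
print(K_star, (alpha / (r + delta))**(1 / (1 - alpha)))   # numeric vs closed form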
[Figure: dynamics of the system in (K, C) space, showing the steady state (K^*, C^*) and trajectories starting from K_1 < K^* and K_2 > K^*.]
15.2 OCT - More State and Control Variables
We can add more state and control variables to the problem. The analogy is to multivariate calculus.
Objective functional:
\int_{t_0}^{t_1} f(t, x_1(t), x_2(t), u_1(t), u_2(t)) dt
Constraints:
\dot{x}_1(t) = g^1(t, x_1(t), x_2(t), u_1(t), u_2(t))
\dot{x}_2(t) = g^2(t, x_1(t), x_2(t), u_1(t), u_2(t))
Boundary conditions: t_0, t_1, x_1(t_0) = x_{10}, x_2(t_0) = x_{20} fixed.
Free endpoints for x_1 and x_2.
Form the Hamiltonian:
H(t, x_1, x_2, u_1, u_2, \lambda_1, \lambda_2) = f(t, x_1, x_2, u_1, u_2) + \lambda_1 g^1(t, x_1, x_2, u_1, u_2) + \lambda_2 g^2(t, x_1, x_2, u_1, u_2)
F.o.c. (i = 1, 2):
u_i:   \partial f/\partial u_i + \lambda_1 \partial g^1/\partial u_i + \lambda_2 \partial g^2/\partial u_i = 0
\dot{\lambda}_i = -\partial H/\partial x_i = -(\partial f/\partial x_i + \lambda_1 \partial g^1/\partial x_i + \lambda_2 \partial g^2/\partial x_i)
\dot{x}_i = \partial H/\partial \lambda_i = g^i
Boundary (transversality) conditions: x_1(t_0) = x_{10}, x_2(t_0) = x_{20}, \lambda_1(t_1) = \lambda_2(t_1) = 0.
Fixed endpoint problem: add the boundary condition x(t_1) = x^*.
15.2 Optimal Control Theory - General Problem
More general problem: let's add a terminal-value function \phi and inequality restrictions.
Objective functional (x = n-dimensional vector, u = m-dimensional vector):
\max \int_{t_0}^{t_1} f(t, x(t), u(t)) dt + \phi(t_1, x(t_1))
subject to:
\dot{x}_i(t) = g^i(t, x(t), u(t)),   x_i(t_0) = x_{i0},   i = 1, ..., n
x_i(t_1) = x_{i1},   i = 1, ..., q
x_i(t_1) free,   i = q+1, ..., r
x_i(t_1) \ge 0,   i = r+1, ..., s
K(x_q(t_1), ..., x_n(t_1)) \ge 0 at t_1,   with q \le r \le s \le n
F.o.c.:
\dot{x}_i = g^i(t, x, u),   i = 1, ..., n
\dot{\lambda}_j = -\partial H/\partial x_j = -(\partial f/\partial x_j + \sum_{k=1}^n \lambda_k \partial g^k/\partial x_j),   j = 1, ..., n
\partial f/\partial u_j + \sum_{k=1}^n \lambda_k \partial g^k/\partial u_j = 0,   j = 1, ..., m
Note: H(t, x^*, u, \lambda) is maximized by u = u^*.
Transversality conditions:
(i) x_i(t_1) free:
\lambda_i(t_1) = \partial\phi/\partial x_i
(ii) x_i(t_1) \ge 0:
\lambda_i(t_1) \ge \partial\phi/\partial x_i,   x_i(t_1) [\lambda_i(t_1) - \partial\phi/\partial x_i] = 0
(iii) K(x_q(t_1), ..., x_n(t_1)) \ge 0:
\lambda_i(t_1) = \partial\phi/\partial x_i + p \partial K/\partial x_i,   i = q, ..., n;   p \ge 0,   pK = 0
(iv) K(x_q(t_1), ..., x_n(t_1)) = 0:
\lambda_i(t_1) = \partial\phi/\partial x_i + p \partial K/\partial x_i,   i = q, ..., n
(v) t_1 free: at t_1,
f + \sum_{i=1}^n \lambda_i g^i + \phi_t = 0
(vi) T \ge t_1:
f + \sum_{i=1}^n \lambda_i g^i + \phi_t \ge 0 at t_1, with equality if T > t_1 (this applies when the constraint T - t_1 \ge 0 is required)
(vii) K(x_q(t_1), ..., x_n(t_1), t_1) \ge 0:
\lambda_i(t_1) = \partial\phi/\partial x_i + p \partial K/\partial x_i,   i = q, ..., n
f + \sum_{i=1}^n \lambda_i g^i + \phi_t + p \partial K/\partial t = 0 at t = t_1
p \ge 0,   K \ge 0,   pK = 0
With state variable restrictions k(t, x) \ge 0, the problem is:
\max \int_{t_0}^{t_1} f(t, x(t), u(t)) dt + \phi(x(t_1))
s.t. \dot{x}(t) = g(t, x(t), u(t)),   x(t_0) = x_0,   k(t, x) \ge 0
Form the Hamiltonian:
H = f(t, x, u) + \lambda g(t, x, u) + \mu k(t, x)
Optimality conditions:
u:   H_u = f_u + \lambda g_u = 0
\dot{\lambda} = -H_x = -(f_x + \lambda g_x + \mu k_x)
\lambda(t_1) = \phi_x(x(t_1)),   \mu \ge 0,   \mu k = 0
15.2 OCT - Current Value Hamiltonian
Often, in economics, we have to optimize discounted sums, subject to our usual dynamic constraint:
\max \int_0^T e^{-rt} f(t, x, u) dt
s.t.   \dot{x} = g(t, x, u),   x(0) = x_0
Form the Hamiltonian:
H = e^{-rt} f(t, x, u) + \lambda g(t, x, u)
Optimality conditions:
u:   H_u = e^{-rt} f_u + \lambda g_u = 0
\dot{\lambda} = -H_x = -(e^{-rt} f_x + \lambda g_x)
\lambda(T) = 0
Sometimes, it is convenient to eliminate the discount factor. The resulting system involves current, rather than discounted, values of the various magnitudes.
Hamiltonian: H = e^{-rt} f(t, x, u) + \lambda(t) g(t, x, u)
Define m(t) = e^{rt} \lambda(t), so \lambda = e^{-rt} m and
\dot{\lambda} = -r e^{-rt} m + e^{-rt} \dot{m}
Current value Hamiltonian:
\tilde{H} = e^{rt} H = f(t, x, u) + m g(t, x, u)
Optimality conditions:
\partial\tilde{H}/\partial u = f_u + m g_u = 0,   \dot{m} = rm - \tilde{H}_x = rm - (f_x + m g_x)
Note: If f(.) and g(.) are autonomous, this substitution leads to autonomous transition equations describing the dynamics of the system.
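To see the substitution work mechanically, the sketch below (assuming sympy; Hx stands in for \tilde{H}_x = f_x + m g_x) plugs \lambda(t) = e^{-rt} m(t) into the adjoint equation \dot{\lambda} = -H_x = -e^{-rt} \tilde{H}_x and solves for \dot{m}, recovering \dot{m} = r m - \tilde{H}_x.

import sympy as sp

t, r = sp.symbols('t r')
m = sp.Function('m')
Hx = sp.Function('Hx')                     # placeholder for Htilde_x = f_x + m*g_x

lam = sp.exp(-r * t) * m(t)                # lambda(t) = e^{-rt} m(t)
adjoint = sp.Eq(lam.diff(t), -sp.exp(-r * t) * Hx(t))   # lambda' = -e^{-rt} Htilde_x
print(sp.solve(adjoint, m(t).diff(t))[0])  # r*m(t) - Hx(t), i.e. m' = r*m - Htilde_x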
15.2 OCT - Current Value Hamiltonian - Example
Using the usual notation, we want to maximize utility over time:
\max J = \int_0^T e^{-\rho t} U(C(t)) dt
s.t.   \dot{K}(t) = F(K) - C,   K(0) = K_0,   K(T) = K_T,   C(t) \ge 0
Form the current value Hamiltonian:
\tilde{H} = e^{\rho t} H = U(C(t)) + m (F(K) - C)
Optimality conditions:
d\tilde{H}/dC = U'(C) - m = 0   =>   U'(C) = m
\dot{m} = \rho m - d\tilde{H}/dK = \rho m - m F'(K) = m (\rho - F'(K))
Note: The current value Hamiltonian consists of two terms: 1) the utility of current consumption, and 2) net investment evaluated at price m, which reflects the marginal utility of consumption.
The static efficiency condition, U'(C(t)) = m(t), maximizes the value of the Hamiltonian at each instant of time myopically, provided m(t) is known.
The dynamic efficiency condition,
dm = m (\rho - F'(K)) dt,
forces the price m of capital to change over time in such a way that the capital stock always yields a net rate of return (marginal product plus capital gain) equal to the social discount rate \rho. That is,
F'(K) + \dot{m}/m = \rho.
There is a long-run foresight condition, which establishes the terminal price m(T) of capital in such a way that exactly the terminal capital stock K(T) is obtained at T.
15.2 Optimal Control Theory - Application
Equilibria in infinite horizon problems:
\max \int_0^\infty e^{-rt} f(x(t), u(t)) dt   s.t.   \dot{x}(t) = g(x(t), u(t)),   x(0) = x_0
Current value Hamiltonian: \tilde{H} = f(x, u) + m g(x, u)
Optimality conditions:
\tilde{H}_u = 0,   \dot{m} = rm - \tilde{H}_x
Transversality conditions:
\lim_{t \to \infty} \lambda(t) x(t) = \lim_{t \to \infty} e^{-rt} m(t) x(t) = 0
Problems of this sort, if they are of two dimensions, lead to phase-plane analysis.
[Phase diagram in (x, m) space: the loci \dot{x} = 0 and \dot{m} = 0, whose intersection is the steady state.]
15.2 Optimal Control Theory - Sufficiency
Sufficiency (Arrow and Kurz (1970)): if the maximized Hamiltonian is strictly concave in the state variables, any path satisfying the conditions above will be sufficient to solve the problem posed.
Current value Hamiltonian: \tilde{H} = f(x, u) + m g(x, u)
Optimality conditions: \tilde{H}_u = 0, \dot{m} = rm - \tilde{H}_x   =>   u = u(m, x)
Maximized Hamiltonian:
H^* = f(x, u(m, x)) + m g(x, u(m, x))
15.2 OCT - Applications
Example I: Nerlove-Arrow Advertising Model
Let G(t) \ge 0 denote the stock of goodwill at time t:
\dot{G} = u - \delta G,   G(0) = G_0,
where u = u(t) \ge 0 is the advertising effort at time t, measured in dollars per unit time. Sales S are given by
S = S(p, G, Z),
where p is the price level and Z are other exogenous variables.
Let c(S) be the rate of total production costs; then total revenue net of production costs is:
R(p, G, Z) = p S(p, G, Z) - c(S)
Revenue net of advertising expenditure is: R(p, G, Z) - u.
Kenneth Joseph Arrow (1921-2017, USA)
The firm wants to maximize the present value of the net revenue stream discounted at a fixed rate \rho:
\max J = \int_0^\infty e^{-\rho t} [R(p, G, Z) - u] dt
subject to \dot{G} = u - \delta G, G(0) = G_0.
Note that the only place p occurs is in the integrand, which we can maximize by first maximizing R w.r.t. p holding G fixed, and then maximizing the result with respect to u. Thus,
implicitly, we get p*(t) = p(G(t), Z(t)).
Define \pi(G, Z) = R(p*, G, Z). Now, J is a function of G and Z only. For convenience, assume Z is fixed.
Solution by the Maximum Principle:
The adjoint variable \lambda(t) is the shadow price associated with the goodwill at time t. The Hamiltonian,
H = \pi(G) - u + \lambda (u - \delta G),
can be interpreted as the dynamic profit rate, which consists of two terms:
(i) the current net profit rate \pi(G) - u;
(ii) the value \lambda (u - \delta G) = \lambda \dot{G} of the new goodwill created by advertising at rate u.
The f.o.c. for the control is dH/du = -1 + \lambda = 0, and the adjoint equation is \dot{\lambda} = (\rho + \delta) \lambda - \pi_G.
The second equation corresponds to the usual equilibrium relation for investment in capital goods:
(\rho + \delta) \lambda dt = \pi_G dt + d\lambda.
It states that the marginal opportunity cost of investment in goodwill, (\rho + \delta) \lambda dt, should equal the sum of the marginal profit \pi_G dt from increased goodwill and the capital gain d\lambda.
Define \eta as the elasticity of demand with respect to goodwill; after some algebra, we can derive an expression for the optimal stock of goodwill.
We also can obtain the optimal long-run stationary equilibrium \bar{G}: again, after some simple algebra, it is characterized by \pi_G(\bar{G}) = \rho + \delta.
The property of the solution is that the optimal policy is to go to \bar{G} as fast as possible.
If G_0 < \bar{G}, it is optimal to jump instantaneously to \bar{G} by applying an appropriate impulse of advertising at t = 0, and then set u*(t) = \delta \bar{G} for t > 0.
If G_0 > \bar{G}, the optimal control is u*(t) = 0 until the stock of goodwill depreciates to the level \bar{G}, at which time the control switches to u*(t) = \delta \bar{G} and stays at this level to maintain the stock of goodwill at \bar{G}. See the figure below.
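A minimal simulation sketch of this bang-bang policy (with illustrative \delta, \bar{G} and a starting stock G_0 > \bar{G}; these are not parameters from the model): advertising is zero while goodwill exceeds \bar{G}, then switches to the maintenance level \delta \bar{G}.

delta, Gbar, G0, dt, n_steps = 0.2, 1.0, 2.5, 0.01, 3000   # illustrative values
G, path = G0, []
for _ in range(n_steps):
    u = 0.0 if G > Gbar else delta * Gbar  # u* = 0 above Gbar, maintenance level at Gbar
    G = G + dt * (u - delta * G)           # goodwill dynamics: G' = u - delta*G
    path.append(G)
print(min(path), path[-1])                 # G decays to Gbar, then is held there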
[Figure: time paths of goodwill G(t) and optimal advertising u*(t) for initial stocks above and below \bar{G}.]
Example II: Neoclassical Growth Model (Robert Solow)
Preferences:
\max \int_0^\infty e^{-\rho t} \frac{C_t^{1-\theta}}{1-\theta} dt
Capital accumulation:
\dot{K}_t = Y_t - N_t C_t - \delta K_t
Technology:
Y_t = A_t K_t^\alpha N_t^{1-\alpha},   assume A_t = 1, N_t = 1
Form the current value Hamiltonian:
\tilde{H}(C_t, K_t, \lambda_t) = \frac{C_t^{1-\theta}}{1-\theta} + \lambda_t [K_t^\alpha - C_t - \delta K_t]   (1)
C is the control, K is the state variable, \lambda is the adjoint variable.
First order conditions:
\partial \tilde{H}/\partial C_t = 0   =>   C_t^{-\theta} = \lambda_t   (2)
\dot{\lambda}_t = \rho \lambda_t - \partial \tilde{H}/\partial K_t   =>   \dot{\lambda}_t = \lambda_t [\rho + \delta - \alpha K_t^{\alpha-1}]   (3)
\dot{K}_t = K_t^\alpha - C_t - \delta K_t   (4)
Transversality condition:
\lim_{t \to \infty} e^{-\rho t} \lambda_t K_t = 0   (5)
Characterization of the balanced growth path: the capital stock, consumption, and the shadow price of capital grow at constant rates on the balanced growth path:
\dot{C}/C = g_c,   \dot{K}/K = g_K,   \dot{\lambda}_t/\lambda_t = g_\lambda.
From (3):
\dot{\lambda}_t/\lambda_t = \rho + \delta - \alpha K_t^{\alpha-1}   (6)
Since the LHS is constant, the RHS also should be constant, which requires \dot{K}/K = 0. If the capital stock is not growing, output is not growing (\dot{Y}/Y = 0) and consumption is not growing (\dot{C}/C = 0).
From (2):
\dot{\lambda}_t/\lambda_t = -\theta \dot{C}_t/C_t   =>   \dot{\lambda}_t/\lambda_t = 0.
Recall from (2): \lambda_t = C_t^{-\theta}   =>   C_t = \lambda_t^{-1/\theta}
Then, we have a 2x2 non-linear system of differential equations:
\dot{K}_t = K_t^\alpha - \lambda_t^{-1/\theta} - \delta K_t
\dot{\lambda}_t = \lambda_t [\rho + \delta - \alpha K_t^{\alpha-1}]
This system can be shown as a phase diagram in (K_t, \lambda_t) space, to analyze the transition dynamics of the shadow price \lambda_t and capital.
From (6), we can calculate the steady state value for K_t:
K^* = [(\rho + \delta)/\alpha]^{1/(\alpha - 1)}   (with \alpha < 1)
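A small numerical sketch of the steady state, assuming illustrative parameter values: compute K^* from (6), then back out C^* from the \dot{K} = 0 locus and \lambda^* from (2).

alpha, rho, delta, theta = 0.3, 0.04, 0.06, 1.0         # illustrative parameters

K_star = ((rho + delta) / alpha) ** (1 / (alpha - 1))   # from (6)
C_star = K_star**alpha - delta * K_star                 # K' = 0 locus
lam_star = C_star ** (-theta)                           # from (2)
print(K_star, C_star, lam_star)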
In (K_t, \lambda_t) space, the transition dynamics of the shadow price \lambda_t and capital:
[Phase diagram: the vertical locus \dot{\lambda}_t = 0 at K = K^*, with \dot{\lambda}_t of opposite signs on either side, and the \dot{K} = 0 locus separating \dot{K} > 0 from \dot{K} < 0.]
Putting all these things together, the convergence to the steady state can be summarised in the following diagram.
[Figure: the loci \dot{K} = 0 and \dot{\lambda} = 0 divide the (K, \lambda) space into regions I-IV.]
Convergence to the steady state lies in regions I and III, as shown by the double-arrow red line (the saddle path).
15.3 Discrete Time Optimal Control
We change from continuously measured variables to discretely measured variables.
State variable dynamics:
\Delta x_t = x_{t+1} - x_t = f(x_t, u_t, t),   x_0 given
Objective functional:
\max J = \sum_{t=0}^{T-1} u(x_t, u_t, t) + S(x_T)
Form a Lagrangean:
L = \sum_{t=0}^{T-1} u(x_t, u_t, t) + S(x_T) + \sum_{t=0}^{T-1} \lambda_{t+1} [f(x_t, u_t, t) - (x_{t+1} - x_t)]
Define the Hamiltonian H_t:
H_t = H(x_t, u_t, \lambda_{t+1}, t) = u(x_t, u_t, t) + \lambda_{t+1} f(x_t, u_t, t)
Now, re-write the Lagrangean:
L = S(x_T) + \sum_{t=0}^{T-1} {H_t - \lambda_{t+1} (x_{t+1} - x_t)}
F.o.c.:
dL/du_t = dH_t/du_t = 0
dL/d\lambda_{t+1} = 0   =>   \Delta x_t = x_{t+1} - x_t = dH_t/d\lambda_{t+1} = f(x_t, u_t, t)
dL/dx_t = 0   =>   \Delta\lambda_t = \lambda_{t+1} - \lambda_t = -dH_t/dx_t = -[du/dx_t + \lambda_{t+1} df/dx_t]
Boundary conditions: x_0 given and \lambda_T = dS/dx_T.
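As a sketch of how these conditions are used, consider an assumed linear-quadratic example (not from the slides): maximize \sum_{t=0}^{T-1} -(x_t^2 + u_t^2)/2 with \Delta x_t = u_t and S = 0. Then dH/du = -u_t + \lambda_{t+1} = 0, \Delta\lambda_t = x_t, and \lambda_T = 0, which can be solved by shooting on \lambda_0.

from scipy.optimize import brentq

T, x0 = 20, 1.0                   # illustrative horizon and initial state

def lambda_T(lam0):
    # iterate the f.o.c. system forward from a guessed lambda_0
    x, lam = x0, lam0
    for _ in range(T):
        lam_next = lam + x        # Delta lambda_t = -dH/dx_t = x_t
        u = lam_next              # dH/du_t = -u_t + lambda_{t+1} = 0
        x = x + u                 # Delta x_t = f(x_t, u_t, t) = u_t
        lam = lam_next
    return lam                    # should equal dS/dx_T = 0

lam0 = brentq(lambda_T, -10.0, 10.0)
print(lam0, lambda_T(lam0))       # terminal condition lambda_T = 0 holds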
15.3 Discrete Time Optimal Control - Example
Consider a production-inventory discrete-time problem.
Let I_k, P_k and S_k be the inventory, production, and demand at time k, respectively. Let I_0 be the initial inventory, let \hat{I} and \hat{P} be the goal levels of inventory and production, and let h and c be the inventory and production cost coefficients.
The problem is to choose production to minimize the total cost of deviations of inventory and production from their goal levels (with cost coefficients h and c), subject to the inventory balance equation
I_{k+1} = I_k + P_k - S_k,   I_0 given,   P_k \ge 0.
Form the Hamiltonian accordingly; the adjoint variable then satisfies a backward difference equation. To maximize the Hamiltonian, differentiate it w.r.t. production; since production must be nonnegative, the optimal production is the unconstrained maximizer truncated at zero.
These expressions determine a two-point boundary value problem. For a given set of data, it can easily be solved numerically. If the constraint P_k \ge 0 is dropped, it can be solved analytically.
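The slide's formulas did not survive; as a sketch, assume the standard quadratic specification (minimize \sum_k [h (I_k - \hat{I})^2 + c (P_k - \hat{P})^2]/2 subject to I_{k+1} = I_k + P_k - S_k), with the P_k \ge 0 constraint dropped, as the slide notes that case is easy. All data below are illustrative.

import numpy as np
from scipy.optimize import minimize

T, I0, Ihat, Phat, h, c = 8, 5.0, 10.0, 3.0, 1.0, 2.0      # illustrative data
S = np.array([2.0, 4.0, 3.0, 5.0, 2.0, 3.0, 4.0, 3.0])     # assumed demand path

def cost(P):
    I = I0 + np.cumsum(P - S)     # inventory balance: I_{k+1} = I_k + P_k - S_k
    return 0.5 * (h * np.sum((I - Ihat)**2) + c * np.sum((P - Phat)**2))

res = minimize(cost, np.full(T, Phat), method="BFGS")
print(np.round(res.x, 3))         # optimal production plan P_0 ... P_{T-1}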
Discrete Calculus of Variations
We have an intertemporal problem:
\max \sum_{t=0}^{\infty} \beta^t g(x_{t+1}, x_t)
Trick: differentiate the objective function with respect to x_t, which appears at times t and t-1.
Objective function at time t: \beta^t g(x_{t+1}, x_t)
Optimality condition contribution at time t: \beta^t g_{x_t}(x_{t+1}, x_t)
Optimality condition contribution at time t-1: \beta^{t-1} g_{x_t}(x_t, x_{t-1})
Complete condition:
\beta^t g_{x_t}(x_{t+1}, x_t) + \beta^{t-1} g_{x_t}(x_t, x_{t-1}) = 0
or, dividing by \beta^{t-1}:
\beta g_{x_t}(x_{t+1}, x_t) + g_{x_t}(x_t, x_{t-1}) = 0
Transversality condition:
\lim_{T \to \infty} \beta^T (\partial g/\partial x_T) x_T = 0
Example I: Cash Flow Maximization by a Firm
\max \sum_{t=0}^{\infty} \beta^t {P [f(L_t) - C(L_{t+1} - L_t)] - W L_t},   0 < \beta < 1
P: profit (output price), L_t: labor at time t, W: wage rate, C(.): cost of adjusting labor.
Euler equation (f.o.c. with respect to L_t):
\beta^t {P [f'(L_t) + C'(L_{t+1} - L_t)] - W} - \beta^{t-1} P C'(L_t - L_{t-1}) = 0
or, dividing by \beta^{t-1}:
\beta {P [f'(L_t) + C'(L_{t+1} - L_t)] - W} = P C'(L_t - L_{t-1})
15.3 Dynamic Programming
An alternative way to solve intertemporal problems; equivalent in many contexts to the methods already seen.
Typical problem:
\max \sum_{t=0}^{T} \beta^t u(c_t) + \beta^{T+1} V_0(k_{T+1})
s.t.   c_t + k_{t+1} = (1 - \delta) k_t + f(k_t)
Lagrangean formulation:
L = \sum_{t=0}^{T} \beta^t u(c_t) + \beta^{T+1} V_0(k_{T+1}) + \sum_{t=0}^{T} \tilde{\lambda}_t [f(k_t) + (1 - \delta) k_t - c_t - k_{t+1}]
Optimality conditions:
c_t:   \beta^t u'(c_t) - \tilde{\lambda}_t = 0
k_{t+1}:   \tilde{\lambda}_{t+1} [f'(k_{t+1}) + (1 - \delta)] - \tilde{\lambda}_t = 0,   0 \le t < T
k_{T+1}:   \beta^{T+1} V_0'(k_{T+1}) - \tilde{\lambda}_T \le 0,   [\beta^{T+1} V_0'(k_{T+1}) - \tilde{\lambda}_T] k_{T+1} = 0
c_t + k_{t+1} = (1 - \delta) k_t + f(k_t)
Eliminating the multiplier:
u'(c_t) = \beta u'(c_{t+1}) [f'(k_{t+1}) + (1 - \delta)]
\beta^{T+1} V_0'(k_{T+1}) - \beta^T u'(c_T) \le 0,   with equality if k_{T+1} > 0
c_t + k_{t+1} = (1 - \delta) k_t + f(k_t)
The problem can be solved recursively. First solve the problem at t = T: choose c_T and k_{T+1} to maximize
u(c_T) + \beta V_0(k_{T+1})
subject to
c_T + k_{T+1} = (1 - \delta) k_T + f(k_T),   k_T given.
The solution gives c_T = c_T(k_T), k_{T+1} = k_{T+1}(k_T).
Now solve the period T-1 problem: choose c_{T-1} and k_T to maximize
u(c_{T-1}) + \beta [u(c_T(k_T)) + \beta V_0(k_{T+1}(k_T))]
subject to
c_{T-1} + k_T = (1 - \delta) k_{T-1} + f(k_{T-1}),   k_{T-1} given.
Continue solving backwards to time 0. The same optimality conditions arise from the problem:
V_1(k_T) = \max_{c_T, k_{T+1}} u(c_T) + \beta V_0(k_{T+1})
subject to
c_T + k_{T+1} = (1 - \delta) k_T + f(k_T),
where
V_0(k_{T+1}) = \tilde{V}_0(k_{T+1}) / \beta^{T+1}
Optimality conditions:
u'(c_T) = \mu_T
\beta V_0'(k_{T+1}) - \mu_T \le 0,   [\beta V_0'(k_{T+1}) - \mu_T] k_{T+1} = 0
c_T + k_{T+1} = (1 - \delta) k_T + f(k_T)
These are the same conditions as before if we define \mu_T = \tilde{\lambda}_T / \beta^T.
Given the constraint and k_T, V_1(k_T) is the maximized value of
u(c_T) + \beta V_0(k_{T+1})
The period T-1 problem is then equivalent to maximizing
u(c_{T-1}) + \beta V_1(k_T)
with the same constraint at T-1 and k_{T-1} given:
V_2(k_{T-1}) = \max_{c_{T-1}, k_T} u(c_{T-1}) + \beta V_1(k_T)
subject to
c_{T-1} + k_T = (1 - \delta) k_{T-1} + f(k_{T-1})
The Envelope Theorem implies
V_1'(k_T) = \mu_T [f'(k_T) + (1 - \delta)]
Optimality conditions:
u'(c_{T-1}) = \mu_{T-1},   \mu_{T-1} = \beta V_1'(k_T)
c_{T-1} + k_T = (1 - \delta) k_{T-1} + f(k_{T-1})
The envelope theorem can be used to eliminate V_1':
u'(c_{T-1}) = \beta \mu_T [f'(k_T) + (1 - \delta)] = \beta u'(c_T) [f'(k_T) + (1 - \delta)]
The period T-1 envelope condition is
V_2'(k_{T-1}) = \mu_{T-1} [f'(k_{T-1}) + (1 - \delta)]
This process can be continued, giving the following Bellman equation:
V_{j+1}(k_{T-j}) = \max_{c_{T-j}, k_{T-j+1}} u(c_{T-j}) + \beta V_j(k_{T-j+1})
subject to
c_{T-j} + k_{T-j+1} = (1 - \delta) k_{T-j} + f(k_{T-j}),   k_{T-j} given.
Bellman's Principle of Optimality
The fact that the original problem can be written in this recursive way leads us to Bellman's Principle of Optimality: an optimal policy has the property that whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with regard to the state resulting from the first decision.
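A minimal value-function-iteration sketch of this Bellman equation, assuming f(k) = k^\alpha, u(c) = \ln c, and illustrative parameters (the grid and tolerance are arbitrary choices):

import numpy as np

alpha, beta, delta = 0.3, 0.95, 0.1       # illustrative parameters
k = np.linspace(0.1, 10.0, 200)           # capital grid
V = np.zeros(k.size)

resources = k**alpha + (1 - delta) * k    # f(k) + (1 - delta) k
c = resources[:, None] - k[None, :]       # consumption for each (k, k') pair
util = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

for _ in range(1000):                     # V_{j+1}(k) = max_{k'} u(c) + beta V_j(k')
    V_new = np.max(util + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

k_next = k[np.argmax(util + beta * V[None, :], axis=1)]   # policy k'(k)
print(k_next[:5])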
[Photo: Ken Arrow's office]
