Lecture 2: Predictability of Asset Returns
Fischer
SS 2008
today’s price reflects all the information contained in past prices. Under this
old definition, it is not possible to make profits based on information from
past asset prices. Newer definitions recognize that there is a tradeoff between
risk and expected returns; hence, the information about risk is captured in
the higher moments of the return distribution.
• ME Example #2.1: Hall’s Martingale with Consumption
Let zt be a vector containing a set of macroeconomic variables (such
as money supply or GDP), including aggregate consumption ct for
period t. Hall’s (1978) martingale hypothesis is that consumption is a
martingale with respect to zt :
E(ct | zt−1 , zt−2 , · · · , z1 ) = ct−1 .
This formalizes the notion in consumption theory called “consumption
smoothing”: the consumer, wishing to avoid fluctuations in the stan-
dard of living, adjusts consumption in t − 1 to the level such that no
change in subsequent consumption is anticipated. See Hayashi page
101.
• ME Example #2.2: Naive Inflation Forecasts
Let us define a simple naive forecast using the martingale model for
inflation. The naive forecast says that the best inflation forecast for
t + h, defined as πt+h|t , is the current observed inflation rate, πt :
E(πt+h |It ) = πt .
The naive forecast is a martingale if the information set It = {pt , pt−1 , · · ·}.
The naive forecast is frequently used as the simplest forecasting bench-
mark against ARIMA models or the Phillips curve model. Most studies
find that these models have difficulty beating the naive inflation fore-
cast; see Atkeson and Ohanian (2001).
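As a sketch of how such a benchmark comparison works, the snippet below evaluates the naive h-step forecast on a synthetic persistent series (the AR(1) parameters and the sample-mean benchmark are illustrative assumptions, not estimates from actual inflation data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "inflation" series: a persistent AR(1); parameters are
# illustrative assumptions, not estimates from real data.
T, phi, sd = 1_000, 0.95, 0.5
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = phi * pi[t - 1] + rng.normal(scale=sd)

h = 4  # forecast horizon

# Naive (martingale) forecast: pi_{t+h|t} = pi_t
rmse_naive = np.sqrt(np.mean((pi[h:] - pi[:-h]) ** 2))

# A simple alternative benchmark: forecast with the sample mean
rmse_mean = np.sqrt(np.mean((pi[h:] - pi[:-h].mean()) ** 2))

print(rmse_naive, rmse_mean)
```

For a highly persistent series the naive forecast is hard to beat, which is the empirical point of the benchmark.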
• RW1 Specification
the drift term, µ, is interpreted as the expected change in prices, and
the error term, εt , is independently and identically distributed (e.g.,
Normal, Weibull, Exponential). Independence ⇒ cov(εt , εt−k ) = 0;
the converse, however, does not hold: cov(εt , εt−k ) = 0 does not imply
independence. If X1 and X2 are independent random variables, then
for every pair of Borel functions h1 (·) and h2 (·) (a Borel function g(·)
guarantees that yt = g(xt ) is again a random variable whenever xt is),
E(h1 (X1 )h2 (X2 )) = E(h1 (X1 )) · E(h2 (X2 )). This factorization is both
necessary and sufficient for independence. One particular case of interest
is h1 (X1 ) = X1 and h2 (X2 ) = X2 , which gives E(X1 X2 ) = E(X1 ) · E(X2 );
this weaker property is sometimes called linear independence. Since
cov(X1 , X2 ) = E(X1 X2 ) − E(X1 ) · E(X2 ), linear independence is
equivalent to uncorrelatedness: it implies cov(X1 , X2 ) = 0.
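A quick numerical illustration of this distinction (a minimal sketch; Y = X² for a standard normal X is uncorrelated with X but clearly not independent of it):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
y = x ** 2  # a deterministic function of x, so certainly not independent of x

# Linear measure: the sample covariance of x and x^2 is near zero,
# because E(x^3) = 0 for a symmetric distribution.
cov_xy = np.cov(x, y)[0, 1]

# A nonlinear choice of h1 reveals the dependence: with h1(x) = x^2 and
# h2(y) = y, E(h1(x) h2(y)) = E(x^4) = 3 while E(h1(x)) E(h2(y)) = 1.
lhs = np.mean(x ** 2 * y)           # estimates E(x^2 * x^2) = E(x^4) = 3
rhs = np.mean(x ** 2) * np.mean(y)  # estimates E(x^2) * E(x^2) = 1

print(cov_xy, lhs, rhs)
```

The covariance is close to zero even though the factorization fails badly for nonlinear h1, h2 — exactly the gap between uncorrelatedness and independence.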
• RW1 model is stronger than the Martingale Model
Random Walk, Time Dependency and Stationarity
pt = µ + pt−1 + εt ,   εt ∼ IID(0, σ 2 ).
This can be rewritten as E(pt |p0 ) = p0 + µt and var(pt |p0 ) = σ 2 t. To show
this, consider the following steps:
pt = µ + pt−1 + εt
pt−1 = µ + pt−2 + εt−1
..
.
p1 = µ + p0 + ε1
Through repeated substitution,
pt = µt + p0 + εt + εt−1 + · · · + ε1 ,
and with E(εt ) = 0 one obtains
E(pt |p0 ) = µt + p0 .
Next, let us consider var(pt |p0 ):
var(pt |p0 ) = E[(pt − E(pt |p0 ))2 | p0 ]
= E[(εt + · · · + ε1 )2 ],
with E(εt εt−k ) = cov(εt , εt−k ) = 0 for k ≠ 0 and E(εt 2 ) = σ 2 . This
gives
var(pt |p0 ) = Σ_{i=1}^{t} σ 2 = σ 2 t.
The simple random walk model has a time-dependent mean and variance.
The time dependency of the moments makes it a non-stationary process. To
obtain stationarity, or in other words time independence, we can first-
difference the RW process:
(1 − L)pt = ∆pt = εt ,   with µ = 0.
This yields E(∆pt ) = 0 and var(∆pt ) = σ 2 . Note that there are other
forms of non-stationarity that have nothing to do with random walk models,
e.g. the linear trend model
pt = µ + αt + εt ,   εt ∼ IID(0, σ 2 ).
In this simple model the process is non-stationary in the mean, but
stationary in the variance. Note: random walk models are non-stationary,
but not all non-stationary processes are random walk models.
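A simulation sketch of the differencing argument: across many simulated driftless random walks, the cross-sectional variance of the level grows linearly in t, while the variance of the first difference stays flat at σ² (parameter values are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, T, sigma = 20_000, 100, 1.0

eps = rng.normal(0.0, sigma, size=(n_paths, T))
p = np.cumsum(eps, axis=1)  # driftless random walk, p0 = 0
dp = np.diff(p, axis=1)     # first difference: (1 - L) p_t = eps_t

# Cross-path variance of the level grows linearly in t ...
print(p[:, 9].var(), p[:, 99].var())    # roughly sigma^2*10 vs sigma^2*100
# ... while the variance of the first difference is flat at sigma^2
print(dp[:, 9].var(), dp[:, 98].var())  # both roughly 1
```

The level's variance explodes with t, so no amount of data pins down its distribution; the differenced series has the same moments at every date, which is what stationarity buys.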
Let It be the indicator that the period-t return is positive. Then
Ns ≡ Σ_{t=1}^{n} Yt ,   Yt ≡ It It+1 + (1 − It )(1 − It+1 ),
Nr ≡ n − Ns .
Yt equals 1 when two consecutive returns have the same sign (a sequence)
and 0 when the sign reverses, so Ns counts sequences and Nr counts
reversals.
Example 1: Defining the Number of Sequences Ns

t    rt      It    Yt
1     0.1    1     0
2    −0.5    0     1
3    −0.7    0     0
4     0.3    1     0
5      ?
ĈJ ≡ Ns /Nr = (Ns /N )/(Nr /N ) = π̂s /(1 − π̂s ) →pr πs /(1 − πs ) = (1/2)/(1/2) = 1,   (3)
where →pr denotes convergence in probability.
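The counts from Example 1 can be computed directly (a small sketch; the function name is ours, and only the four known returns are used since r5 is left unspecified in the notes):

```python
def cowles_jones(returns):
    """Return (Ns, Nr, CJ) for a list of returns.

    It = 1 if the period-t return is positive, else 0;
    Yt = It*It+1 + (1 - It)*(1 - It+1) flags a sign continuation.
    """
    I = [1 if r > 0 else 0 for r in returns]
    Y = [I[t] * I[t + 1] + (1 - I[t]) * (1 - I[t + 1])
         for t in range(len(I) - 1)]
    ns = sum(Y)
    nr = len(Y) - ns
    return ns, nr, ns / nr if nr else float("inf")

# Returns r1..r4 from Example 1 (r5 is left unspecified in the notes)
print(cowles_jones([0.1, -0.5, -0.7, 0.3]))  # -> (1, 2, 0.5)
```

With only one sequence and two reversals in this tiny sample, the ratio is 0.5; under the random walk null it converges to 1 as in equation (3).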
Binomial Distribution:
E(Ns ) = np = nπ
var(Ns ) = npq = nπ(1 − π)
(θ̂ − θ0 ) ∼ N (0, V ) ⟹ (1/T )(θ̂ − θ0 ) ∼ N (0, V /T 2 )
or
F (x) = Σ_{u≤x} f (u),   ∀x ∈ R   (discrete case)
F (x) = 0 for x < a,
F (x) = (x − a)/(b − a) for a ≤ x ≤ b,
F (x) = 1 for x > b.
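The uniform CDF above translates directly into code (a minimal sketch; the function name is ours):

```python
def uniform_cdf(x, a, b):
    """CDF of the continuous uniform distribution on [a, b]."""
    if x < a:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)  # linear between the endpoints
    return 1.0

print(uniform_cdf(0.25, 0.0, 1.0))  # -> 0.25
```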
where x = µ/σ and the normal density function is φ(x) = (1/√(2π)) e^(−x²/2) .