M5A42 Applied Stochastic Processes Problem Sheet 1 Solutions Term 1 2010-2011
1. Calculate the mean, variance and characteristic function of the following probability density functions.
SOLUTION
(a) For the exponential density $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$:
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\,dx = \lambda \int_0^{+\infty} x e^{-\lambda x}\,dx = \frac{1}{\lambda}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\,dx = \lambda \int_0^{+\infty} x^2 e^{-\lambda x}\,dx = \frac{2}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{1}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \lambda \int_0^{\infty} e^{itx} e^{-\lambda x}\,dx = \frac{\lambda}{\lambda - it}.
\]
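These results can be sanity-checked by Monte Carlo sampling; the sketch below is not part of the original sheet, and the values $\lambda = 2$ and $t = 0.7$ are arbitrary illustration choices (note that NumPy parameterises the exponential by the scale $1/\lambda$).

```python
import numpy as np

# Monte Carlo check of the exponential mean, variance and characteristic
# function; lam = 2 and t = 0.7 are arbitrary illustration values.
rng = np.random.default_rng(0)
lam = 2.0
x = rng.exponential(scale=1.0 / lam, size=1_000_000)  # numpy uses scale = 1/lambda

mean_mc, var_mc = x.mean(), x.var()
t = 0.7
phi_mc = np.exp(1j * t * x).mean()      # empirical E[exp(itX)]
phi_exact = lam / (lam - 1j * t)        # lambda / (lambda - it)

print(mean_mc, var_mc, abs(phi_mc - phi_exact))
```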
(b)
(b) For the uniform density $f(x) = \frac{1}{b-a}$, $x \in [a,b]$:
\[
E(X) = \int_{-\infty}^{+\infty} x f(x)\,dx = \int_a^b \frac{x}{b-a}\,dx = \frac{a+b}{2}.
\]
\[
E(X^2) = \int_{-\infty}^{+\infty} x^2 f(x)\,dx = \int_a^b \frac{x^2}{b-a}\,dx = \frac{b^2 + ab + a^2}{3}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{(b-a)^2}{12}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \int_a^b \frac{e^{itx}}{b-a}\,dx = \frac{e^{itb} - e^{ita}}{it(b-a)}.
\]
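The last formula can be compared with a direct numerical average of $e^{itx}$ over a fine grid on $[a,b]$; this check is an addition to the sheet, and $a = -1$, $b = 3$, $t = 0.9$ are arbitrary choices.

```python
import numpy as np

# Compare the closed-form uniform characteristic function with a direct
# numerical average of exp(itx) over [a, b]; a, b, t are arbitrary choices.
a, b, t = -1.0, 3.0, 0.9
xs = np.linspace(a, b, 400_001)
phi_num = np.exp(1j * t * xs).mean()    # approximates (1/(b-a)) * integral of exp(itx)
phi_formula = (np.exp(1j * t * b) - np.exp(1j * t * a)) / (1j * t * (b - a))
print(abs(phi_num - phi_formula))
```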
(c) For the Gamma density $f(x) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-\lambda x}$, $x \ge 0$:
\[
E(X) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{+\infty} x^{\alpha} e^{-\lambda x}\,dx = \frac{\Gamma(\alpha+1)}{\lambda\,\Gamma(\alpha)} = \frac{\alpha}{\lambda}.
\]
\[
E(X^2) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{+\infty} x^{\alpha+1} e^{-\lambda x}\,dx = \frac{\Gamma(\alpha+2)}{\lambda^2\,\Gamma(\alpha)} = \frac{\alpha(\alpha+1)}{\lambda^2}.
\]
Consequently,
\[
\operatorname{var}(X) = E(X^2) - (EX)^2 = \frac{\alpha}{\lambda^2}.
\]
The characteristic function is
\[
\phi(t) = E(e^{itX}) = \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \int_0^{\infty} e^{itx} x^{\alpha-1} e^{-\lambda x}\,dx
= \frac{\lambda^{\alpha}}{\Gamma(\alpha)} \frac{1}{(\lambda - it)^{\alpha}} \int_0^{\infty} e^{-y} y^{\alpha-1}\,dy
= \frac{\lambda^{\alpha}}{(\lambda - it)^{\alpha}},
\]
using the substitution $y = (\lambda - it)x$.
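The Gamma moments and characteristic function can likewise be checked by simulation; this sketch is an addition, with arbitrary parameters $\alpha = 3$, $\lambda = 2$, $t = 0.5$ (NumPy's gamma sampler takes shape $\alpha$ and scale $1/\lambda$).

```python
import numpy as np

# Monte Carlo check of the Gamma mean, variance and characteristic function;
# alpha = 3 and lam = 2 are arbitrary illustration values.
rng = np.random.default_rng(1)
alpha, lam = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=1.0 / lam, size=1_000_000)

t = 0.5
phi_mc = np.exp(1j * t * x).mean()
phi_exact = (lam / (lam - 1j * t)) ** alpha

print(x.mean(), x.var(), abs(phi_mc - phi_exact))
```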
2. (a) Let $X$ be a continuous random variable with characteristic function $\phi(t)$. Show that
\[
E X^k = \frac{1}{i^k}\,\phi^{(k)}(0),
\]
where $\phi^{(k)}(t)$ denotes the $k$-th derivative of $\phi$ evaluated at $t$.
(b) Let $X$ be a nonnegative random variable with distribution function $F(x)$. Show that
\[
E(X) = \int_0^{+\infty} (1 - F(x))\,dx.
\]
(c) Let $X$ be a continuous random variable with probability density function $f(x)$ and characteristic function $\phi(t)$. Find the probability density and characteristic function of the random variable $Y = aX + b$ with $a, b \in \mathbb{R}$.
(d) Let $X$ be a random variable with uniform distribution on $[0, 2\pi]$. Find the probability density of the random variable $Y = \sin(X)$.
SOLUTION
(a) We have
\[
\phi(t) = E(e^{itX}) = \int_{\mathbb{R}} e^{itx} f(x)\,dx.
\]
Consequently,
\[
\phi^{(k)}(t) = \int_{\mathbb{R}} (ix)^k e^{itx} f(x)\,dx.
\]
Thus
\[
\phi^{(k)}(0) = \int_{\mathbb{R}} (ix)^k f(x)\,dx = i^k\, E X^k,
\]
and $E X^k = \frac{1}{i^k}\phi^{(k)}(0)$.
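The formula can be illustrated numerically by differentiating a known characteristic function at $0$ with finite differences; the sketch below (an addition to the sheet) uses the exponential characteristic function $\phi(t) = \lambda/(\lambda - it)$ from question 1(a) with $\lambda = 2$ as a test case.

```python
# Recover E[X^k] = phi^(k)(0) / i^k by finite differences, using the
# exponential characteristic function phi(t) = lam/(lam - it) as a test case.
lam = 2.0

def phi(t):
    return lam / (lam - 1j * t)

h = 1e-3
d1 = (phi(h) - phi(-h)) / (2 * h)              # central difference for phi'(0)
d2 = (phi(h) - 2 * phi(0.0) + phi(-h)) / h**2  # central difference for phi''(0)
m1 = (d1 / 1j).real     # first moment, should be 1/lam
m2 = (d2 / 1j**2).real  # second moment, should be 2/lam**2
print(m1, m2)
```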
(b) Let $R > 0$ and consider
\[
\int_0^R x f(x)\,dx = \int_0^R x \frac{dF}{dx}\,dx = x F(x)\Big|_0^R - \int_0^R F(x)\,dx = \int_0^R (F(R) - F(x))\,dx.
\]
Thus, letting $R \to \infty$,
\[
E(X) = \int_0^{\infty} (1 - F(x))\,dx,
\]
where the fact $\lim_{x \to \infty} F(x) = 1$ was used.
Alternatively, interchanging the order of integration,
\[
\int_0^{\infty} (1 - F(x))\,dx = \int_0^{\infty} \int_x^{\infty} f(y)\,dy\,dx = \int_0^{\infty} \int_0^y f(y)\,dx\,dy = \int_0^{\infty} y f(y)\,dy = EX.
\]
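As a quick numerical illustration (not part of the original sheet), take the exponential distribution with the arbitrary choice $\lambda = 2$, so that $1 - F(x) = e^{-\lambda x}$ and the tail integral should equal $1/\lambda$:

```python
import numpy as np

# Check E(X) = integral of (1 - F(x)) for the exponential distribution,
# where 1 - F(x) = exp(-lam*x); lam = 2 is an arbitrary choice.
lam = 2.0
xs = np.linspace(0.0, 20.0, 200_001)   # exp(-40) makes the truncated tail negligible
tail = np.exp(-lam * xs)
dx = xs[1] - xs[0]
integral = (tail[:-1] + tail[1:]).sum() * dx / 2   # trapezoid rule
print(integral, 1 / lam)
```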
(c) Assume first that $a > 0$. We have
\[
P(Y \le y) = P(aX + b \le y) = P\!\left(X \le \frac{y-b}{a}\right) = \int_{-\infty}^{\frac{y-b}{a}} f(x)\,dx.
\]
Consequently,
\[
f_Y(y) = \frac{\partial}{\partial y} P(Y \le y) = \frac{1}{a} f\!\left(\frac{y-b}{a}\right).
\]
For $a < 0$ the inequality is reversed and the same calculation gives $f_Y(y) = -\frac{1}{a} f\!\left(\frac{y-b}{a}\right)$; in both cases
\[
f_Y(y) = \frac{1}{|a|} f\!\left(\frac{y-b}{a}\right).
\]
The characteristic function is
\[
\phi_Y(t) = E(e^{it(aX+b)}) = e^{itb}\, E(e^{i(at)X}) = e^{itb}\,\phi(at).
\]
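The relation $\phi_Y(t) = e^{itb}\phi(at)$ can be checked by simulation; in the sketch below (an addition to the sheet) $X$ is exponential with $\lambda = 2$, and $a = 3$, $b = -1$, $t = 0.4$ are arbitrary choices.

```python
import numpy as np

# Monte Carlo check of phi_Y(t) = exp(itb) * phi_X(at) for Y = aX + b,
# with X exponential(lam); all parameter values are arbitrary choices.
rng = np.random.default_rng(2)
lam, a, b, t = 2.0, 3.0, -1.0, 0.4
x = rng.exponential(scale=1.0 / lam, size=1_000_000)
y = a * x + b

phi_mc = np.exp(1j * t * y).mean()
phi_exact = np.exp(1j * t * b) * lam / (lam - 1j * a * t)
print(abs(phi_mc - phi_exact))
```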
(d) Similarly, the distribution function of $X$ is
\[
F_X(x) = \begin{cases} 0, & x < 0, \\ \dfrac{x}{2\pi}, & x \in [0, 2\pi], \\ 1, & x > 2\pi. \end{cases}
\]
The random variable $Y$ takes values on $[-1, 1]$. Hence, $P(Y \le y) = 0$ for $y \le -1$ and $P(Y \le y) = 1$ for $y \ge 1$. Let now $y \in (-1, 1)$. The equation $\sin(x) = y$ has two solutions in the interval $[0, 2\pi]$: $x = \arcsin(y),\ \pi - \arcsin(y)$ for $y > 0$ and $x = \pi - \arcsin(y),\ 2\pi + \arcsin(y)$ for $y < 0$. In either case the set $\{x \in [0, 2\pi] : \sin(x) \le y\}$ has total length $\pi + 2\arcsin(y)$. Hence,
\[
F_Y(y) = \frac{\pi + 2\arcsin(y)}{2\pi}, \qquad y \in (-1, 1).
\]
The distribution function of $Y$ is therefore
\[
F_Y(y) = \begin{cases} 0, & y \le -1, \\ \dfrac{\pi + 2\arcsin(y)}{2\pi}, & y \in (-1, 1), \\ 1, & y \ge 1. \end{cases}
\]
We differentiate the above expression to obtain the probability density:
\[
f_Y(y) = \begin{cases} \dfrac{1}{\pi}\dfrac{1}{\sqrt{1 - y^2}}, & y \in (-1, 1), \\ 0, & y \notin (-1, 1). \end{cases}
\]
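The arcsine-type distribution function can be checked by sampling $X$ uniformly on $[0, 2\pi]$; this check is an addition to the sheet, and $y_0 = 0.3$ is an arbitrary test point.

```python
import numpy as np

# Check F_Y(y) = (pi + 2*arcsin(y)) / (2*pi) for Y = sin(X),
# X uniform on [0, 2*pi]; y0 = 0.3 is an arbitrary test point.
rng = np.random.default_rng(3)
u = rng.uniform(0.0, 2 * np.pi, size=1_000_000)
y = np.sin(u)

y0 = 0.3
F_emp = (y <= y0).mean()
F_exact = (np.pi + 2 * np.arcsin(y0)) / (2 * np.pi)
print(F_emp, F_exact)
```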
3. Let $X$ be a discrete random variable taking values on the set of nonnegative integers with probability mass function $p_k = P(X = k)$, with $p_k \ge 0$, $\sum_{k=0}^{+\infty} p_k = 1$. The generating function is defined as
\[
g(s) = E(s^X) = \sum_{k=0}^{+\infty} p_k s^k.
\]
(a) We have
\[
g'(s) = \sum_{k=0}^{+\infty} k p_k s^{k-1} \quad \text{and} \quad g''(s) = \sum_{k=0}^{+\infty} k(k-1) p_k s^{k-2}.
\]
Hence
\[
g'(1) = \sum_{k=0}^{+\infty} k p_k = EX
\]
and
\[
g''(1) = \sum_{k=0}^{+\infty} k^2 p_k - \sum_{k=0}^{+\infty} k p_k = EX^2 - g'(1),
\]
from which it follows that
\[
EX^2 = g''(1) + g'(1).
\]
(b) For the Poisson distribution, $p_k = e^{-\lambda}\lambda^k / k!$, we calculate
\[
g(s) = \sum_{k=0}^{+\infty} \frac{e^{-\lambda}\lambda^k}{k!} s^k = e^{-\lambda} e^{\lambda s} = e^{\lambda(s-1)}.
\]
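The closed form can be compared against a truncated version of the defining series; this check is an addition, with arbitrary values $\lambda = 3$, $s = 0.6$ (truncating at $k = 100$ leaves a negligible remainder).

```python
from math import exp, factorial

# Compare the Poisson generating function exp(lam*(s-1)) with a truncated
# partial sum of its defining series; lam and s are arbitrary choices.
lam, s = 3.0, 0.6
g_series = sum(exp(-lam) * lam**k / factorial(k) * s**k for k in range(100))
g_closed = exp(lam * (s - 1))
print(g_series, g_closed)
```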
(c) Consider the independent nonnegative integer-valued random variables $X_i$, $i = 1, \dots, d$. Since the $X_i$'s are independent, so are the random variables $s^{X_i}$, $i = 1, \dots, d$. Consequently,
\[
g_{\sum_{i=1}^d X_i}(s) = E\!\left(s^{\sum_{i=1}^d X_i}\right) = \prod_{i=1}^d E(s^{X_i}) = \prod_{i=1}^d g_{X_i}(s).
\]
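As a concrete illustration (an addition to the sheet), take two independent Poisson variables: by parts (b) and (c), the generating function of their sum should be $e^{(\lambda_1 + \lambda_2)(s-1)}$; the parameter values below are arbitrary.

```python
import numpy as np

# Monte Carlo check that g_{X1+X2}(s) = g_{X1}(s) * g_{X2}(s) for independent
# Poisson variables; lam1, lam2 and s are arbitrary illustration values.
rng = np.random.default_rng(4)
lam1, lam2, s = 1.5, 2.5, 0.7
x1 = rng.poisson(lam1, size=1_000_000)
x2 = rng.poisson(lam2, size=1_000_000)

g_sum_mc = (s ** (x1 + x2)).mean()
g_product = np.exp(lam1 * (s - 1)) * np.exp(lam2 * (s - 1))
print(g_sum_mc, g_product)
```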
4. Let $b \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$ a symmetric and positive definite matrix. Let $X$ be the multivariate Gaussian random variable with probability density function
\[
\gamma(x) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Sigma)}} \exp\!\left(-\frac{1}{2}\langle \Sigma^{-1}(x-b), x-b\rangle\right).
\]
(a) From the spectral theorem for symmetric positive definite matrices we have that there exists a diagonal matrix $\Lambda$ with positive entries and an orthogonal matrix $B$ such that
\[
\Sigma^{-1} = B^T \Lambda B.
\]
Setting $y = Bz$,
\[
\langle \Sigma^{-1} z, z\rangle = \langle B^T \Lambda B z, z\rangle = \langle \Lambda B z, B z\rangle = \langle \Lambda y, y\rangle = \sum_{i=1}^n \lambda_i y_i^2.
\]
(b) From the above calculation we have that
\[
\gamma(x)\,dx = \gamma(B^T y + b)\,dy = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\Sigma)}} \prod_{i=1}^n \exp\!\left(-\frac{1}{2}\lambda_i y_i^2\right) dy_i.
\]
Consequently,
\[
EX = \int_{\mathbb{R}^n} x\,\gamma(x)\,dx = \int_{\mathbb{R}^n} (B^T y + b)\,\gamma(B^T y + b)\,dy = b \int_{\mathbb{R}^n} \gamma(B^T y + b)\,dy = b,
\]
since the term $\int_{\mathbb{R}^n} B^T y\,\gamma(B^T y + b)\,dy$ vanishes by the symmetry $y \mapsto -y$.
(c) Let $Y$ be a multivariate Gaussian random variable with mean $0$ and covariance $I$. Let also $C = B^T \Lambda^{-1/2} B$ be the symmetric square root of $\Sigma$, so that $\Sigma = C C^T = C^T C$ (recall $\Sigma = B^T \Lambda^{-1} B$). We have that
\[
X = CY + b.
\]
To see this, we first note that $X$ is Gaussian since it is given through a linear transformation of a Gaussian random variable. Furthermore,
\[
EX = b \quad \text{and} \quad E((X_i - b_i)(X_j - b_j)) = \Sigma_{ij}.
\]
Now we have, using $E e^{i\langle Y, s\rangle} = e^{-\frac{1}{2}|s|^2}$ for a standard Gaussian $Y$:
\begin{align*}
\phi(t) = E e^{i\langle X, t\rangle} &= e^{i\langle b, t\rangle} E e^{i\langle CY, t\rangle} \\
&= e^{i\langle b, t\rangle} E e^{i\langle Y, C^T t\rangle} \\
&= e^{i\langle b, t\rangle} E e^{i \sum_j \left(\sum_k C_{jk} t_k\right) y_j} \\
&= e^{i\langle b, t\rangle} e^{-\frac{1}{2} \sum_j \left|\sum_k C_{jk} t_k\right|^2} \\
&= e^{i\langle b, t\rangle} e^{-\frac{1}{2}\langle Ct, Ct\rangle} \\
&= e^{i\langle b, t\rangle} e^{-\frac{1}{2}\langle t, C^T C t\rangle} \\
&= e^{i\langle b, t\rangle} e^{-\frac{1}{2}\langle t, \Sigma t\rangle}.
\end{align*}
Consequently,
\[
\phi(t) = e^{i\langle b, t\rangle - \frac{1}{2}\langle t, \Sigma t\rangle}.
\]
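The construction can be checked by simulation, sampling $X = CY + b$ with any $C$ satisfying $C C^T = \Sigma$; the sketch below is an addition to the sheet, uses a Cholesky factor instead of the symmetric square root, and the values of $\Sigma$, $b$, $t$ are arbitrary.

```python
import numpy as np

# Monte Carlo check of X = C Y + b with C C^T = Sigma: mean, covariance and
# the characteristic function exp(i<b,t> - <t, Sigma t>/2). Sigma, b, t are
# arbitrary choices; a Cholesky factor is used instead of the symmetric root.
rng = np.random.default_rng(5)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
b = np.array([1.0, -1.0])
C = np.linalg.cholesky(Sigma)             # C C^T = Sigma
Y = rng.standard_normal((1_000_000, 2))   # standard Gaussian samples
X = Y @ C.T + b

t = np.array([0.3, -0.2])
phi_mc = np.exp(1j * (X @ t)).mean()
phi_exact = np.exp(1j * (b @ t) - 0.5 * t @ Sigma @ t)
print(X.mean(axis=0), abs(phi_mc - phi_exact))
```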