
QF314800: Mathematical Statistics I

Assignment 06: Multiple Random Variables


Instructor: Chung-Han Hsieh (ch.hsieh@mx.nthu.edu.tw)
Teaching Assistant: Zong Gan (GanZong@gmail.com)
Reading Task: Before you attempt the problems below, we encourage you to review Sections 4.1
and 4.2 of the Chapter 4 handout and to read Sections 3.1 and 3.2 of the Chapter 3 handout.
Caution: If you use a large language model such as ChatGPT to help you prepare the assignment,
you are responsible for (i) indicating that you have used it, (ii) ensuring that the answers it
produces are correct and justifiable, and (iii) making sure that your final write-up is entirely
your own.

Problem 6.1 (Continuous Bivariate Random Variables). Let X and Y be two continuous ran-
dom variables having the joint density function:
fX,Y (x, y) := 1/x, 0 ≤ y ≤ x ≤ 1.
(i) Show that the marginal density satisfies fX (x) = 1 for 0 ≤ x ≤ 1.
(ii) Show that the conditional density satisfies fY |X (y|x) = 1/x for 0 ≤ y ≤ x ≤ 1.
(iii) Compute the conditional probability P(X^2 + Y^2 ≤ 1 | X = x).
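Optional numerical sanity check (not part of the graded solution): the sketch below, assuming
SciPy is installed, verifies by quadrature that the joint density integrates to 1 over the
triangle and that the marginal density equals 1 at a few test points, consistent with part (i).

    from scipy.integrate import dblquad, quad

    # Joint density f(x, y) = 1/x on the triangle 0 <= y <= x <= 1.
    # Integrate y from 0 to x (inner) and x from 0 to 1 (outer); total mass should be 1.
    total, _ = dblquad(lambda y, x: 1.0 / x, 0.0, 1.0, lambda x: 0.0, lambda x: x)
    print(total)  # expect approximately 1.0

    # Marginal f_X(x0): integrate 1/x0 over y in [0, x0] at a few test points.
    for x0 in (0.2, 0.5, 0.9):
        val, _ = quad(lambda y: 1.0 / x0, 0.0, x0)
        print(x0, val)  # expect approximately 1.0 each time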
Problem 6.2 (Joint CDF). Let FX,Y (x, y) be the joint distribution function of X and Y . For
all real constants a < b and c < d, show that
P (a < X ≤ b, c < Y ≤ d) = FX,Y (b, d) − FX,Y (b, c) − FX,Y (a, d) + FX,Y (a, c).
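As an optional check of this rectangle identity, one can compare both sides against an empirical
joint CDF built from simulated data. A minimal Python sketch (NumPy assumed; the independent
uniform choice for X and Y and the constants a, b, c, d are arbitrary illustrations):

    import numpy as np

    rng = np.random.default_rng(0)
    X, Y = rng.random(10**6), rng.random(10**6)   # any joint sample works here
    a, b, c, d = 0.2, 0.7, 0.1, 0.6

    def F(x, y):
        # empirical estimate of the joint CDF F_{X,Y}(x, y)
        return np.mean((X <= x) & (Y <= y))

    lhs = np.mean((a < X) & (X <= b) & (c < Y) & (Y <= d))
    rhs = F(b, d) - F(b, c) - F(a, d) + F(a, c)
    print(lhs, rhs)  # should agree up to Monte Carlo error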
Definition 6.1 (Independent Random Variables). Random variables X and Y are called inde-
pendent if {X ≤ x} and {Y ≤ y} are independent events for all x, y ∈ R.
Problem 6.3 (Conditional Expectation Properties). Let X, Y, Z be discrete random variables.
Show the following:
(i) E[aY + bZ | X] = aE[Y | X] + bE[Z | X] for a, b ∈ R.
(ii) E[Y | X] ≥ 0 if Y ≥ 0.
(iii) E[1 | X] = 1.
(iv) If X and Y are independent then E[Y | X] = E[Y ].
(v) E[Y g(X) | X] = g(X)E[Y | X] for any suitable function g.
(vi) E [E[Y | X, Z] | X] = E[Y | X] = E [E[Y | X] | X, Z].
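Before proving these identities, it can help to see one numerically. The sketch below (Python
with NumPy; the joint law of X and Y is an arbitrary choice for illustration) estimates E[Y | X]
by conditional averaging and checks E[E[Y | X]] = E[Y], the special case of (vi) in which the
outer conditioning is trivial:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=10**6)        # X uniform on {0, 1, 2}
    Y = X + rng.integers(0, 2, size=10**6)    # Y depends on X

    # Estimate E[Y | X = x] by averaging Y over the samples with X == x.
    lookup = np.array([Y[X == x].mean() for x in range(3)])
    EY_given_X = lookup[X]                    # the random variable E[Y | X]

    print(EY_given_X.mean(), Y.mean())        # both approximate E[Y]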
Definition 6.2 (Family of Distributions). A family of distributions is a collection of probability
distributions {f (x | θ) : θ ∈ Θ} where f (x | θ) is the pdf or pmf, θ is a parameter or vector
of parameters, and Θ is the parameter space, the set of all possible values that the parameters
can take. For example, the family of normal distributions can be denoted as
{f(x | µ, σ^2) : µ ∈ R, σ^2 > 0}.
Problem 6.4 (Properties of Normal Random Variable). Let X be a normal random variable
with pdf
fX(x | µ, σ^2) := (1/(σ√(2π))) exp(−(x − µ)^2/(2σ^2)), x ∈ R,
for parameters µ ∈ R and σ > 0.
(i) Find the moment generating function of X. Hint: You may find the integral
∫_R e^(−x^2) dx = √π useful.
(ii) Use part (i) to find the mean and variance of X.
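An optional way to check the closed form you obtain in part (i): estimate E[e^(tX)] by Monte
Carlo at a few values of t and compare with your formula. A minimal Python sketch (NumPy
assumed; µ and σ chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(2)
    mu, sigma = 0.5, 1.2
    X = rng.normal(mu, sigma, size=10**6)

    for t in (-0.5, 0.5, 1.0):
        # Monte Carlo estimate of the mgf E[exp(tX)]; compare with your M_X(t).
        print(t, np.mean(np.exp(t * X)))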

Problem 6.5 (Lognormal Random Variable). Let Z be a normal random variable N(µ, σ^2) with
pdf:
fZ(z) := (1/√(2πσ^2)) exp(−(z − µ)^2/(2σ^2)), z ∈ R.
Then the lognormal random variable, call it X, is defined by the transformation: X := exp(Z).
(i) Show that E[X] = e^(µ + σ^2/2).
(ii) Show that var(X) = e^(2(µ + σ^2)) − e^(2µ + σ^2).
(iii) Suppose Z ∼ N (0, 1). Find the density function fX of X.
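Since the moment formulas are stated in parts (i) and (ii), an optional simulation check is
straightforward. A minimal Python sketch (NumPy assumed; µ and σ chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma = 0.1, 0.4
    X = np.exp(rng.normal(mu, sigma, size=10**6))   # lognormal samples X = exp(Z)

    print(X.mean(), np.exp(mu + sigma**2 / 2))                               # part (i)
    print(X.var(), np.exp(2 * (mu + sigma**2)) - np.exp(2 * mu + sigma**2))  # part (ii)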

Problem 6.6 (Binomial Random Variable). Let X be a binomial random variable with pmf:
P(X = x | n, p) = C(n, x) p^x (1 − p)^(n−x), x = 0, 1, . . . , n,
where C(n, x) = n!/(x!(n − x)!) denotes the binomial coefficient.
(i) Show that the mgf of X is given by MX(t) = [(1 − p) + pe^t]^n. Hint: You may use the
binomial theorem: for x, y ∈ R and integer n ≥ 0, (x + y)^n = Σ_{i=0}^{n} C(n, i) x^i y^(n−i).
(ii) Show that E[X] = np.
(iii) Show that var(X) = np(1 − p).
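All three claims can be sanity-checked by simulation before proving them. A minimal Python
sketch (NumPy assumed; n, p, and t chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(4)
    n, p = 20, 0.3
    X = rng.binomial(n, p, size=10**6)

    t = 0.5
    print(np.mean(np.exp(t * X)), ((1 - p) + p * np.exp(t))**n)  # part (i)
    print(X.mean(), n * p)                                       # part (ii)
    print(X.var(), n * p * (1 - p))                              # part (iii)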
Problem 6.7 (Negative Binomial Distribution). The binomial distribution counts the number
of successes in a fixed number of Bernoulli trials. Suppose that, instead, we run Bernoulli
trials until a fixed number of successes occurs and count the failures along the way. This
leads to the negative binomial distribution, which is defined formally below.
Definition 6.3 (Negative Binomial Distribution). Let X be the number of failures before the
rth success occurs, where r is a fixed positive integer. Then
P(X = x | r, p) = C(x + r − 1, r − 1) p^r (1 − p)^x, x = 0, 1, 2, . . . ,
where p ∈ (0, 1) is the success probability of each trial. We say that X has a negative
binomial(r, p) distribution.


Let X be a negative binomial random variable with parameters (r, p).
(i) Show that the mgf of X is MX(t) = p^r (1 − (1 − p)e^t)^(−r) for t < −log(1 − p).
(ii) Find the mean E[X].
(iii) Find the variance var(X).
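A simulation check of part (i) is sketched below; the sample mean and variance it prints can be
compared with your answers to parts (ii) and (iii). A minimal Python sketch (NumPy assumed;
note that NumPy's negative_binomial counts failures before the rth success, matching
Definition 6.3; r, p, and t are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(5)
    r, p = 4, 0.35
    X = rng.negative_binomial(r, p, size=10**6)   # failures before the r-th success

    t = 0.15
    assert t < -np.log(1 - p)                     # mgf exists only for t < -log(1 - p)
    print(np.mean(np.exp(t * X)), p**r * (1 - (1 - p) * np.exp(t))**(-r))  # part (i)
    print(X.mean(), X.var())   # compare with your answers to (ii) and (iii)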
Problem 6.8 (Geometric Random Variable). A special case of the negative binomial distribution
is the so-called geometric distribution. It models the number of failures before the first
success, which is useful because an experiment may run for an indefinite number of trials
before succeeding.
(i) Let r = 1. Show that the negative binomial random variable has pmf
P(X = x | p) = p(1 − p)^x, x = 0, 1, 2, . . . .
This is the so-called geometric random variable with success probability p.


(ii) Let X be a geometric random variable. Show that P(X ≥ k) = (1 − p)^k for every integer
k ≥ 0.
(iii) (Memorylessness) Use part (ii) to show that P (X ≥ k + j | X ≥ k) = P (X ≥ j) where k
and j are nonnegative integers.
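Both the tail formula in (ii) and the memorylessness property in (iii) are easy to check by
simulation. A minimal Python sketch (NumPy assumed; NumPy's geometric counts trials up to and
including the first success, so we subtract 1 to count failures as in part (i)):

    import numpy as np

    rng = np.random.default_rng(6)
    p = 0.3
    X = rng.geometric(p, size=10**6) - 1   # failures before the first success

    k, j = 3, 2
    print(np.mean(X >= k), (1 - p)**k)     # part (ii): P(X >= k) = (1 - p)^k
    print(np.mean(X[X >= k] >= k + j),     # part (iii): P(X >= k + j | X >= k)
          np.mean(X >= j))                 #             should match P(X >= j)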

Problem 6.9 (Optional: A Buy-and-Hold Strategy). Generate 252 normal random variables
{X_k}_{k=0}^{251} with mean µ = 0.1% and standard deviation σ = 0.15% that model the returns
of a specific risky asset over 252 trading days. With V_0 = 1 being the initial account value,
consider the stochastic recursion
V_{k+1} = V_k + w X_k V_k, k = 0, 1, . . . , 251,
where w ∈ (0, 1) is a weight corresponding to a so-called long-only trade.
(i) Pick a w you like, then write code to generate one sample path
{V_k}_{k=0}^{252} := {V_0, V_1, . . . , V_252}.
(ii) Repeat part (i) to generate 10 sample paths of V_k. Plot them on the same figure, with the
x-axis being k and the y-axis being the V_k values.
(iii) Report the average terminal account value V_252 over the 10 sample paths you generated.
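One possible implementation is sketched below (Python with NumPy and Matplotlib; the choice
w = 0.5, the random seed, and the plotting details are arbitrary, and this is only one way to
organize the simulation):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(7)
    mu, sigma = 0.001, 0.0015     # mu = 0.1%, sigma = 0.15% per trading day
    w, n_days, n_paths = 0.5, 252, 10

    terminal = []
    for _ in range(n_paths):
        X = rng.normal(mu, sigma, size=n_days)   # daily returns X_0, ..., X_251
        V = np.empty(n_days + 1)
        V[0] = 1.0                               # initial account value V_0
        for k in range(n_days):
            V[k + 1] = V[k] + w * X[k] * V[k]    # the stochastic recursion
        terminal.append(V[-1])
        plt.plot(V)                              # one sample path of {V_k}

    plt.xlabel("k")
    plt.ylabel("V_k")
    plt.show()
    print(np.mean(terminal))                     # average terminal value V_252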
