391J Statistics for Engineers and Scientists
MIT, Spring 2006
Apr 11, Handout #13

Solution 5

Problem 1: Let a > 0 be a known constant, and let θ > 0 be a parameter.
(a) The beta, β(θ, 1), density: f_X(x | θ) = θ x^(θ−1), for 0 < x < 1.
(b) The Weibull density: f_X(x | θ) = θ a x^(a−1) e^(−θ x^a), for x > 0.
(c) The Pareto density: f_X(x | θ) = θ a^θ / x^(θ+1), for x > a.
(a) The joint density of X1, . . . , Xn is
f_X(x | θ) = θ^n (x1 x2 · · · xn)^(θ−1) I_(0,1)(x1) · · · I_(0,1)(xn),
so the factorization theorem gives the sufficient statistic T(x) = x1 x2 · · · xn.

(b) The joint density factors as
f_X(x | θ) = θ^n e^(−θ ∑_{i=1}^n x_i^a) × [ a^n (x1 x2 · · · xn)^(a−1) I_(0,∞)(x1) I_(0,∞)(x2) · · · I_(0,∞)(xn) ],
where the bracketed factor is h(x). The factorization theorem implies that
T(x) = ∑_{i=1}^n x_i^a
is a sufficient statistic.

(c) The joint density is
f_X(x | θ) = θ^n a^(nθ) (x1 x2 · · · xn)^(−(θ+1)) I_(a,∞)(x1) · · · I_(a,∞)(xn),
so the factorization theorem gives T(x) = x1 x2 · · · xn.
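Note: The factorization can also be checked numerically. The sketch below (illustrative Python, not part of the original solution; the helper name weibull_joint is hypothetical) verifies for the Weibull case that the ratio of joint densities at two samples sharing the same T(x) = ∑_{i=1}^n x_i^a does not depend on θ, which is exactly what sufficiency requires.

```python
import math

def weibull_joint(x, theta, a):
    # Joint density of i.i.d. Weibull samples:
    # prod_i theta * a * x_i**(a - 1) * exp(-theta * x_i**a)
    p = 1.0
    for xi in x:
        p *= theta * a * xi ** (a - 1) * math.exp(-theta * xi ** a)
    return p

a = 2.0
x1 = [0.5, 1.2, 0.9]
# Construct a different sample with the same T = sum(x_i**a).
T = sum(xi ** a for xi in x1)
x2 = [0.3, 1.0, (T - 0.3 ** a - 1.0 ** a) ** (1 / a)]

# If T is sufficient, this ratio equals h(x1)/h(x2) for every theta.
ratios = [weibull_joint(x1, t, a) / weibull_joint(x2, t, a)
          for t in (0.5, 1.0, 2.0)]
print(ratios)
```

The three printed ratios agree, reflecting that θ enters the joint density only through T(x).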
Problem 2:

Solution
The geometric pmf is
P_θ{X = k} = (1 − θ)^(k−1) θ,  k = 1, 2, 3, . . . .
Rewriting in exponential form,
P_θ{X = x} = e^((x−1) ln(1−θ) + ln θ)
           = e^(x ln(1−θ) − (ln(1−θ) − ln θ)),
where x ∈ {1, 2, 3, . . . }. Thus, the geometric distribution is a one-parameter exponential family with
η(θ) = ln(1 − θ),  T(x) = x,  B(θ) = ln(1 − θ) − ln θ,  h(x) = 1.
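Note: The algebra above can be spot-checked numerically. The sketch below (illustrative Python, not part of the handout) compares the geometric pmf with its exponential-family form for θ = 0.3.

```python
import math

theta = 0.3
ks = range(1, 6)
# Direct form: (1 - theta)**(k - 1) * theta
pmf_direct = [(1 - theta) ** (k - 1) * theta for k in ks]
# Exponential-family form: exp(k*ln(1 - theta) - (ln(1 - theta) - ln(theta)))
pmf_expfam = [math.exp(k * math.log(1 - theta)
                       - (math.log(1 - theta) - math.log(theta)))
              for k in ks]
print(pmf_direct)
print(pmf_expfam)
```

The two lists coincide, confirming the rewriting of the pmf.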
Solution
The joint pmf is
P{X = x | p} = e^((ln p) ∑_{i=1}^n x_i) e^((ln(1−p))(n − ∑_{i=1}^n x_i))
             = e^((ln p − ln(1−p)) ∑_{i=1}^n x_i + n ln(1−p)),
for x ∈ {0, 1}^n. Therefore, the joint pmf is a member of the exponential
family, with the mappings:
θ = p,  h(x) = 1,
η(p) = ln p − ln(1 − p),  T(x) = ∑_{i=1}^n x_i.

For any two samples x and y, the likelihood ratio is
P{X = x | p} / P{X = y | p} = e^((ln p − ln(1−p))(∑_{i=1}^n x_i − ∑_{i=1}^n y_i)).
Define a function k(x, y) = h(x)/h(y) = 1, which is bounded and non-zero
for any x ∈ X and y ∈ X. Note that x and y such that ∑_{i=1}^n x_i = ∑_{i=1}^n y_i
are equivalent, because the function k(x, y) satisfies the requirement of the
likelihood ratio partition. Therefore, T(x) = ∑_{i=1}^n x_i is a sufficient statistic.
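Note: The likelihood ratio partition can be illustrated numerically. In the sketch below (illustrative Python, not from the handout), two binary samples with the same ∑ x_i give a likelihood ratio that is constant in p, while samples with different sums do not.

```python
def bern_joint(x, p):
    # Joint pmf of independent Bernoulli(p) bits.
    prob = 1.0
    for xi in x:
        prob *= p if xi == 1 else (1 - p)
    return prob

x = (1, 0, 1, 0, 0)  # sum = 2
y = (0, 1, 0, 0, 1)  # sum = 2 -> same T(x)
z = (1, 1, 1, 0, 0)  # sum = 3 -> different T(x)

same_T = [bern_joint(x, p) / bern_joint(y, p) for p in (0.2, 0.5, 0.8)]
diff_T = [bern_joint(x, p) / bern_joint(z, p) for p in (0.2, 0.5, 0.8)]
print(same_T)  # constant in p: x and y fall in the same partition cell
print(diff_T)  # varies with p: x and z are not equivalent
```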
Note: One should not be surprised that the joint pdf belongs to the exponential
family of distributions. Recall that the Gaussian distribution is a member of the
exponential family and that the random variables, the Xi's and Yj's, are
mutually independent. Thus, their joint pdf belongs to the exponential family
as well.
Note: To derive the minimal sufficient statistic, one may alternatively consider
the likelihood ratio partition.
The set D0 is defined to be
D0 = { (x, y) ∈ R^(m+n) : f_{X,Y}(x, y | µ, σ², τ²) = 0 for all µ, for all σ² > 0, for all τ² > 0 }
   = ∅ (the empty set).
Let (x, y) ∉ D0 and (v, w) ∉ D0 be given. Their likelihood ratio is

f_{X,Y}(x, y | θ) / f_{X,Y}(v, w | θ)
  = exp{ −(1/(2σ²)) (∑_{j=1}^m x_j² − ∑_{j=1}^m v_j²) − (1/(2τ²)) (∑_{i=1}^n y_i² − ∑_{i=1}^n w_i²)
         + (µ/σ²) (∑_{j=1}^m x_j − ∑_{j=1}^m v_j) + (µ/τ²) (∑_{i=1}^n y_i − ∑_{i=1}^n w_i) }.

This ratio is free of θ = (µ, σ², τ²), i.e.,

f_{X,Y}(x, y | θ) / f_{X,Y}(v, w | θ) = k(x, y, v, w),

exactly when each parenthesized difference in the exponent vanishes: ∑_{j=1}^m x_j = ∑_{j=1}^m v_j, ∑_{j=1}^m x_j² = ∑_{j=1}^m v_j², ∑_{i=1}^n y_i = ∑_{i=1}^n w_i, and ∑_{i=1}^n y_i² = ∑_{i=1}^n w_i². Hence
T(x, y) = ( ∑_{j=1}^m x_j, ∑_{j=1}^m x_j², ∑_{i=1}^n y_i, ∑_{i=1}^n y_i² )
is a minimal sufficient statistic.
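Note: This criterion can be illustrated numerically. In the sketch below (illustrative Python, not from the handout), two x-samples that share the same sum and sum of squares, paired with identical y-samples, give a likelihood ratio that does not change as (µ, σ², τ²) varies.

```python
import math

def gauss_joint(x, y, mu, s2, t2):
    # x_j ~ N(mu, s2), y_i ~ N(mu, t2), all mutually independent.
    p = 1.0
    for xj in x:
        p *= math.exp(-(xj - mu) ** 2 / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    for yi in y:
        p *= math.exp(-(yi - mu) ** 2 / (2 * t2)) / math.sqrt(2 * math.pi * t2)
    return p

# (1, 5, 6) and (2, 3, 7) share the same sum (12) and sum of squares (62).
x, v = (1.0, 5.0, 6.0), (2.0, 3.0, 7.0)
y, w = (0.0, 2.0), (0.0, 2.0)

thetas = [(0.0, 1.0, 2.0), (1.5, 0.5, 3.0), (-2.0, 4.0, 1.0)]
ratios = [gauss_joint(x, y, mu, s2, t2) / gauss_joint(v, w, mu, s2, t2)
          for (mu, s2, t2) in thetas]
print(ratios)  # the same value for every parameter choice
```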
(a) Find the likelihood ratio Λ(x).
Solution

Λ(x) = ( (1/2) e^(−|x|) ) / ( (1/√(2π)) e^(−x²/2) )
     = √(π/2) e^(x²/2 − |x|).
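Note: The closed form can be checked against the ratio of the two densities directly; the sketch below (illustrative Python, not part of the handout) evaluates both at a few points.

```python
import math

def laplace_pdf(x):
    # (1/2) * exp(-|x|)
    return 0.5 * math.exp(-abs(x))

def gauss_pdf(x):
    # (1/sqrt(2*pi)) * exp(-x**2/2)
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def lam(x):
    # Closed form: sqrt(pi/2) * exp(x**2/2 - |x|)
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2 - abs(x))

pts = [-2.0, -0.5, 0.0, 1.0, 3.0]
direct = [laplace_pdf(t) / gauss_pdf(t) for t in pts]
closed = [lam(t) for t in pts]
print(direct)
print(closed)
```

The two lists agree at every point.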
(b) The decision region for hypothesis H1, R1, is the set of points x that
give rise to the output decision H1:
R1 = { x : Λ(x) > η }.
Taking the natural log of both sides of the inequality and writing x² as |x|² yields

R1 = { x : (1/2)|x|² − |x| − ln(η√2/√π) > 0 }.

Viewed as a quadratic in |x|, the left side has roots |x| = 1 ± √(1 + 2 ln(η√2/√π)).

When 1 + 2 ln(η√2/√π) < 0, equivalently η < √(π/2) e^(−1/2) = min_x Λ(x), the term
1 + 2 ln(η√2/√π) under the square root of the quadratic formula is negative, so the
quadratic has no real roots; since it opens upward, it is strictly positive for every x.
Hence R1 = R: every observation leads to deciding H1. (The same holds trivially for
η ≤ 0, since Λ(x) > 0.)

When 1 + 2 ln(η√2/√π) ≥ 0, or equivalently η ≥ √(π/2) e^(−1/2), we have

R1 = { x : |x| > 1 + √(1 + 2 ln(η√2/√π))  or  |x| < 1 − √(1 + 2 ln(η√2/√π)) }.

If, in addition, η ≥ √(π/2), then 1 − √(1 + 2 ln(η√2/√π)) ≤ 0 and the second branch is
empty (an absolute value cannot be negative), so

R1 = { x : x > 1 + √(1 + 2 ln(η√2/√π))  or  x < −1 − √(1 + 2 ln(η√2/√π)) }.

Altogether,

R0 = R \ R1
   = ∅,  for η < √(π/2) e^(−1/2);
   = { x : 1 − √(1 + 2 ln(η√2/√π)) ≤ |x| ≤ 1 + √(1 + 2 ln(η√2/√π)) },  for √(π/2) e^(−1/2) ≤ η < √(π/2);
   = [ −1 − √(1 + 2 ln(η√2/√π)),  1 + √(1 + 2 ln(η√2/√π)) ],  for η ≥ √(π/2).
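Note: The region boundary can be verified by brute force; the sketch below (illustrative Python, not part of the handout) compares the direct test Λ(x) > η against the closed-form boundary for one threshold η = 2 > √(π/2), where R1 = { x : |x| > 1 + √(1 + 2 ln(η√2/√π)) }.

```python
import math

def lam(x):
    # Likelihood ratio from part (a): sqrt(pi/2) * exp(x**2/2 - |x|)
    return math.sqrt(math.pi / 2) * math.exp(x * x / 2 - abs(x))

eta = 2.0  # above sqrt(pi/2), so R1 has the two-ray form
c = math.log(eta * math.sqrt(2.0) / math.sqrt(math.pi))
t_plus = 1 + math.sqrt(1 + 2 * c)  # closed-form boundary of R1

mismatches = 0
for k in range(-4000, 4001):
    x = k / 1000.0
    if abs(abs(x) - t_plus) < 1e-3:
        continue  # skip points too close to the boundary
    if (lam(x) > eta) != (abs(x) > t_plus):
        mismatches += 1
print(mismatches)  # 0
```

A zero count means the direct likelihood ratio test and the closed-form region agree on the whole grid.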