Estimation 2nd Sem 19-20
Gauranga C. Samanta
P. G. Department of Mathematics
F. M. University, Odisha
POPULATION:
A population is the collection of all observations about which
conclusions are to be drawn.
SAMPLE:
A sample is a portion of the population.
PARAMETERS:
Parameters are statistical measures of a population.
STATISTICS:
Statistics are statistical measures of a sample.
INFERENCE THEORY
POINT ESTIMATE:
An estimate of a population parameter given by a single number is
called a point estimate.
POINT ESTIMATOR:
A point estimator is a statistic used for estimating the population
parameter θ; it will be denoted by θ̂.
Definition
Let X ∼ f(x; θ) and X₁, X₂, ⋯, Xₙ be a random sample from the
population X. Any statistic that can be used to guess the
parameter θ is called an estimator of θ. The numerical value of
this statistic is called an estimate of θ. The estimator of the
parameter θ is denoted by θ̂.
Definition
Let X be a population with the density function f(x; θ), where θ is
an unknown parameter. The set of all admissible values of θ is
called the parameter space and is denoted by Ω, that is,
Ω = {θ ∈ Rⁿ | f(x; θ) is a pdf} for some natural number n.
Moment Method Continued
In the moment method, we find the estimators for the parameters
θ₁, θ₂, ⋯, θₘ by equating the first m population moments (if they
exist) to the first m sample moments, that is

E[X] = M₁ = (1/n) Σᵢ₌₁ⁿ Xᵢ = X̄
E[X²] = M₂ = (1/n) Σᵢ₌₁ⁿ Xᵢ²
E[X³] = M₃ = (1/n) Σᵢ₌₁ⁿ Xᵢ³

and so forth up to

E[Xᵐ] = Mₘ = (1/n) Σᵢ₌₁ⁿ Xᵢᵐ
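As a quick numerical illustration (a sketch, not part of the original slides; the function name `sample_moment` is my own choice), the k-th sample moment Mₖ can be computed directly:

```python
def sample_moment(xs, k):
    """k-th sample moment M_k = (1/n) * sum(x_i ** k)."""
    n = len(xs)
    return sum(x ** k for x in xs) / n

# The first sample moment is just the sample mean X-bar.
data = [1.0, 2.0, 3.0, 4.0]
m1 = sample_moment(data, 1)  # sample mean = 2.5
m2 = sample_moment(data, 2)  # (1 + 4 + 9 + 16)/4 = 7.5
```

In the method of moments these Mₖ would be set equal to the corresponding population moments and the resulting equations solved for the parameters.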
Moment Method Continued
(1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)² = (1/n) Σᵢ₌₁ⁿ (Xᵢ² − 2XᵢX̄ + X̄²)
= (1/n) Σᵢ₌₁ⁿ Xᵢ² − 2X̄ · (1/n) Σᵢ₌₁ⁿ Xᵢ + X̄²
= (1/n) Σᵢ₌₁ⁿ Xᵢ² − 2X̄·X̄ + X̄²
= (1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄²
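This algebraic identity is easy to check numerically; a minimal sketch (the sample values are illustrative):

```python
def mean(xs):
    return sum(xs) / len(xs)

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
xbar = mean(xs)

# (1/n) Σ (x_i − x̄)²  versus  (1/n) Σ x_i² − x̄²
lhs = mean([(x - xbar) ** 2 for x in xs])
rhs = mean([x ** 2 for x in xs]) - xbar ** 2

assert abs(lhs - rhs) < 1e-12  # the two forms agree
```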
Example
Let X₁, X₂, ⋯, Xₙ be a random sample of size n from a
population X with probability density function

f(x; θ) = θx^(θ−1) for 0 < x < 1, and 0 otherwise.
Example
Suppose X₁, X₂, ⋯, X₇ is a random sample from a population X
with density function

f(x; β) = x⁶ e^(−x/β) / (Γ(7) β⁷) for 0 < x < ∞, and 0 otherwise.
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; θ) = 1/θ for 0 < x < θ, and 0 otherwise.

ANS: θ̂ = 2X̄
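Since E[X] = θ/2 for this uniform distribution, equating it to X̄ gives the moment estimator θ̂ = 2X̄. A minimal sketch (function name and sample values are my own, for illustration):

```python
def mom_uniform_theta(xs):
    """Method-of-moments estimate for U(0, θ): since E[X] = θ/2,
    equating to the sample mean gives θ̂ = 2 * X̄."""
    return 2 * sum(xs) / len(xs)

xs = [0.2, 0.5, 0.9, 1.4]
theta_hat = mom_uniform_theta(xs)  # 2 * 0.75 = 1.5
```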
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; α, β) = 1/(β − α) for α < x < β, and 0 otherwise.
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; θ) = 1/(2θ) for −θ < x < θ, and 0 otherwise.
Solution:
E[X] = M₁ = (1/n) Σᵢ Xᵢ = X̄
Here E[X] = 0, so we have to use the second-order moment:
E[X²] = M₂ = (1/n) Σᵢ Xᵢ²
We know V(X) = E[X²] − (E[X])², and for this distribution
V(X) = (2θ)²/12, so E[X²] = (2θ)²/12. This implies
(2θ)²/12 = (1/n) Σᵢ Xᵢ²
θ̂ = √( (3/n) Σᵢ₌₁ⁿ Xᵢ² )
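The estimator for the symmetric uniform case can be coded directly from the last line; a minimal sketch (sample values are illustrative):

```python
import math

def mom_symmetric_uniform_theta(xs):
    """MoM for U(−θ, θ): equating θ²/3 = (1/n) Σ x_i² gives
    θ̂ = sqrt(3 Σ x_i² / n)."""
    n = len(xs)
    return math.sqrt(3 * sum(x * x for x in xs) / n)

xs = [-1.0, 0.5, 1.0, -0.5]
theta_hat = mom_symmetric_uniform_theta(xs)  # sqrt(3 * 2.5 / 4)
```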
Example
If X₁, X₂, ⋯, Xₙ is a random sample from a distribution with
density function

f(x; θ) = (1 − θ)x^(−θ) for 0 < x < 1, and 0 otherwise.
The likelihood function is defined as L(θ) = ∏ᵢ₌₁ⁿ f(xᵢ; θ).
ln L(θ) = ln( ∏ᵢ₌₁ⁿ f(xᵢ; θ) ) = Σᵢ₌₁ⁿ ln f(xᵢ; θ)
= Σᵢ₌₁ⁿ ln[(1 − θ)xᵢ^(−θ)]
= n ln(1 − θ) − θ Σᵢ₌₁ⁿ ln xᵢ
d ln L(θ)/dθ = −n/(1 − θ) − Σᵢ₌₁ⁿ ln xᵢ = 0
θ̂ = 1 + n / Σᵢ₌₁ⁿ ln Xᵢ
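The closed-form MLE from this derivation is a one-liner; a minimal sketch (function name and data are illustrative):

```python
import math

def mle_theta(xs):
    """MLE from the derivation above: θ̂ = 1 + n / Σ ln x_i.
    Since each 0 < x_i < 1, the sum of logs is negative, so θ̂ < 1
    (as the density requires)."""
    n = len(xs)
    return 1 + n / sum(math.log(x) for x in xs)

xs = [0.2, 0.4, 0.5, 0.8]
theta_hat = mle_theta(xs)
assert theta_hat < 1
```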
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; β) = x⁶ e^(−x/β) / (Γ(7) β⁷) for 0 < x < ∞, and 0 otherwise.
The likelihood function is defined as L(β) = ∏ᵢ₌₁ⁿ f(xᵢ; β).
ln L(β) = ln( ∏ᵢ₌₁ⁿ f(xᵢ; β) )
= 6 Σᵢ₌₁ⁿ ln xᵢ − (1/β) Σᵢ₌₁ⁿ xᵢ − n ln(6!) − 7n ln β
d ln L(β)/dβ = (1/β²) Σᵢ₌₁ⁿ xᵢ − 7n/β = 0
β̂ = X̄/7
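A minimal sketch of the resulting estimator (data values are illustrative):

```python
def mle_beta(xs):
    """MLE from the derivation above: β̂ = X̄ / 7."""
    return sum(xs) / len(xs) / 7

xs = [10.5, 14.0, 7.0, 12.0, 9.5, 11.0, 6.0]
beta_hat = mle_beta(xs)  # X̄ = 10.0, so β̂ = 10/7
```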
Hint:
L(θ) = 1/θⁿ
d ln L(θ)/dθ = −n/θ
Note:
If we estimate the parameter θ of a distribution with uniform
density on the interval (0, θ), then the maximum likelihood
estimator is given by
θ̂ = X₍ₙ₎
whereas
θ̂ = 2X̄
is the estimator obtained by the method of moments.
Hence, in general these two methods do not provide the same
estimator of an unknown parameter.
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a normal
population with mean µ and variance σ². What are the maximum
likelihood estimators of µ and σ²?
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; α, β) = 1/(β − α) for α < x < β, and 0 otherwise.
Example
Suppose X₁, X₂, ⋯, Xₙ is a random sample from a population X
with density function

f(x; θ) = √(2/π) e^(−(x−θ)²/2) for x ≥ θ, and 0 otherwise.
Solution: L(θ) = (√(2/π))ⁿ ∏ᵢ₌₁ⁿ e^(−(xᵢ−θ)²/2)
Then d ln L(θ)/dθ = Σᵢ (xᵢ − θ) > 0 whenever θ ≤ min xᵢ, so L(θ)
increases over the admissible range θ ≤ x₍₁₎, and the maximum is
attained at the boundary: θ̂ = X₍₁₎ = min{Xᵢ}.
Definition
An estimator θ̂ of θ is said to be an unbiased estimator of θ if and
only if
E (θ̂) = θ
Example
Let X₁, X₂, ⋯, Xₙ be a random sample from a normal population
with mean µ and variance σ² > 0. Is the sample mean X̄ an
unbiased estimator of the parameter µ?
ANS: Yes
E[X̄] = E[(X₁ + X₂ + ⋯ + Xₙ)/n] = (1/n) E[X₁ + ⋯ + Xₙ] = (1/n) nµ = µ
Example
Let X1 , X2 , · · · , Xn be a random sample from a normal population
with mean µ and variance σ 2 > 0. What is the maximum
likelihood estimator of σ 2 ? Is this maximum likelihood estimator
an unbiased estimator of the parameter σ 2 ?
Example
Let X₁, X₂, ⋯, Xₙ be a random sample from a population with
mean µ and variance σ² > 0. Is the sample variance S² an
unbiased estimator of the population variance σ²?
ANS: Yes
E[S²] = E[ (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)² ]
For a normal population, (n−1)S²/σ² ∼ χ²ₙ₋₁, so
E[S²] = (σ²/(n−1)) E[(n−1)S²/σ²] = (σ²/(n−1)) E[χ²ₙ₋₁]
= (σ²/(n−1)) (n − 1) = σ²
Therefore E[S²] = σ², and S² is an unbiased estimator for σ².
More generally (without assuming normality):
E[S²] = E[ (1/(n−1)) Σᵢ₌₁ⁿ (Xᵢ − X̄)² ]
= (1/(n−1)) E[ Σᵢ (Xᵢ² − 2XᵢX̄ + X̄²) ]
= (1/(n−1)) E[ Σᵢ Xᵢ² − nX̄² ]
= (1/(n−1)) [ n(σ² + µ²) − n(µ² + σ²/n) ]
(since V(X̄) = σ²/n and V(X̄) = E[X̄²] − (E[X̄])², this implies
E[X̄²] = σ²/n + µ²)
= ((n−1)/(n−1)) σ² = σ²
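The unbiasedness of the (n−1)-divisor sample variance can also be seen numerically; a minimal Monte Carlo sketch (all parameters are illustrative, not from the slides):

```python
import random
import statistics

# Average the sample variance S² (n−1 divisor, as computed by
# statistics.variance) over many samples and compare with σ².
random.seed(0)
mu, sigma2, n, trials = 0.0, 4.0, 5, 20000

avg_s2 = sum(
    statistics.variance([random.gauss(mu, sigma2 ** 0.5) for _ in range(n)])
    for _ in range(trials)
) / trials

# avg_s2 should be close to σ² = 4 (up to Monte Carlo error).
```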
Example
Let X be a random variable with mean 2. Let θ̂₁ and θ̂₂ be
unbiased estimators of the second and third moments, respectively,
of X about the origin. Find an unbiased estimator of the third
moment of X about its mean in terms of θ̂₁ and θ̂₂.
Solution:
Given that θ̂₁ and θ̂₂ are unbiased estimators of the second and
third moments, that is,
E[θ̂₁] = E[X²] and E[θ̂₂] = E[X³]
The third moment of X about its mean is
E[(X − 2)³] = E[X³ − 6X² + 12X − 8] = E[X³] − 6E[X²] + 12E[X] − 8
= E[θ̂₂] − 6E[θ̂₁] + 24 − 8 = E[θ̂₂ − 6θ̂₁ + 16]
Hence θ̂₂ − 6θ̂₁ + 16 is an unbiased estimator of the third moment
of X about its mean.
Unbiased Estimator Continued
Example
Let X₁, X₂, ⋯, Xₙ be a sample of size n from a distribution with
unknown mean −∞ < µ < ∞ and unknown variance σ² > 0.
Show that the statistics X̄ and Y = (X₁ + 2X₂ + ⋯ + nXₙ) / (n(n+1)/2)
are both unbiased estimators of µ.
Example
Let X₁, X₂, ⋯, X₅ be a sample of size 5 from a uniform
distribution on the interval (0, θ), where θ is unknown. Let the
estimator of θ be kX_max, where k is some constant and X_max is the
largest observation. In order for kX_max to be an unbiased estimator,
what should be the value of the constant k?
ANS: k = 6/5
Solution:
F_θ̂(t) = P(θ̂ ≤ t)
= P(max{Xᵢ} ≤ t)
= ∏ᵢ₌₁⁵ P(Xᵢ ≤ t)
= ( ∫₀ᵗ (1/θ) dx )⁵
= t⁵/θ⁵
The pdf of θ̂ is f_θ̂(t) = 5t⁴/θ⁵, 0 < t < θ.
E[θ̂] = E[kX_max] = k E[X_max] = k ∫₀^θ t · (5t⁴/θ⁵) dt = (5/6) kθ
Setting (5/6)kθ = θ gives k = 6/5.
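This result is easy to confirm by simulation; a minimal Monte Carlo sketch (θ and the trial count are illustrative):

```python
import random

random.seed(1)
theta, trials = 2.0, 50000

# E[X_max] for 5 iid U(0, θ) draws is (5/6)θ, so (6/5)·X_max
# should have mean θ.
avg_max = sum(
    max(random.uniform(0, theta) for _ in range(5)) for _ in range(trials)
) / trials
estimate_mean = 1.2 * avg_max  # should be close to θ = 2
```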
Definition
Let θ̂₁ and θ̂₂ be two unbiased estimators for θ. The estimator θ̂₁
is said to be more efficient than θ̂₂ if Var(θ̂₁) < Var(θ̂₂).
Remark: The ratio η(θ̂₁, θ̂₂) given by Var(θ̂₂)/Var(θ̂₁) is called the
relative efficiency of θ̂₁ with respect to θ̂₂.
Example
Let X₁, X₂, X₃ be a random sample of size 3 from the population
with mean µ and variance σ². If the statistics X̄ and Y given by
Y = (X₁ + 2X₂ + 3X₃)/6 are two unbiased estimators of the population
mean µ, then which one is more efficient?
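Both estimators are weighted averages of the Xᵢ, so their variances are σ² · Σ wᵢ² and can be compared exactly; a minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction

# Weights of the two linear estimators of µ (on X1, X2, X3).
w_xbar = [Fraction(1, 3)] * 3
w_y = [Fraction(1, 6), Fraction(2, 6), Fraction(3, 6)]

# Both are unbiased: the weights sum to 1.
assert sum(w_xbar) == 1 and sum(w_y) == 1

# Variance of a weighted average of iid variables is σ² · Σ w_i².
var_xbar = sum(w * w for w in w_xbar)  # 1/3   (in units of σ²)
var_y = sum(w * w for w in w_y)        # 7/18  (in units of σ²)
assert var_xbar < var_y                # so X̄ is more efficient
```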
Example
Let X ∼ U(0, 1/θ) and X₁, X₂, ⋯, Xₙ be a random sample of size
n. Find the MLE and verify whether the estimator is biased or
unbiased.
ANS: E[θ̂] = (n/(n−1)) θ, so the MLE is biased.
Example
Let X ∼ U(0, θ) and X₁, X₂, ⋯, X₅ be the random sample. Find the
MLE and verify whether the estimator is biased or unbiased.
ANS: E[θ̂] = (5/6) θ, so the MLE is biased.
Example
Let X ∼ U(2θ, 3θ) and X₁, X₂, ⋯, Xₙ be a random sample of
size n. Find the MLE.
Definition
Let X₁, X₂, ⋯, Xₙ be a random sample of size n from a
population X with density f(x; θ), where θ is an unknown
parameter. An interval estimator [L, U] of θ is called a 100(1 − α)%
confidence interval for θ if
P(L ≤ θ ≤ U) = 1 − α.
Example
Suppose that when a signal having value µ is transmitted from
location A the value received at location B is normally distributed
with mean µ and variance 4. That is, if µ is sent, then the value
received is µ + N where N , representing noise, is normal with
mean 0 and variance 4. To reduce error, suppose the same value is
sent 9 times. If the successive values received are
5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5, let us construct a 95 percent
confidence interval for µ.
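With known σ = 2 and n = 9, the interval is X̄ ± 1.96 σ/√n; a minimal sketch using the data from this example:

```python
import math

data = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
sigma, z = 2.0, 1.96  # σ = √4, z-value for 95% confidence

xbar = sum(data) / len(data)                   # 81/9 = 9.0
half_width = z * sigma / math.sqrt(len(data))  # 1.96 · 2/3 ≈ 1.31
ci = (xbar - half_width, xbar + half_width)    # ≈ (7.69, 10.31)
```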
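For the signal example, the 95% interval is X̄ ± 1.96 · σ/√n with σ = 2 (variance 4) and n = 9. A short sketch of the computation:

```python
import math

data = [5, 8.5, 12, 15, 7, 9, 7.5, 6.5, 10.5]
sigma = 2.0                      # known sd: the noise variance is 4
z = 1.96                         # z_{0.025} for a 95% interval
n = len(data)
xbar = sum(data) / n             # sample mean = 9.0
half = z * sigma / math.sqrt(n)  # half-width = 1.96 * 2 / 3
print(round(xbar - half, 2), round(xbar + half, 2))  # 7.69 10.31
```

So with 95% confidence the transmitted value µ lies in roughly (7.69, 10.31).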
Example
From past experience it is known that the weights of salmon grown
at a commercial hatchery are normal with a mean that varies from
season to season but with a standard deviation that remains fixed
at 0.3 pounds. If we want to be 95 percent certain that our
estimate of the present season’s mean weight of a salmon is
correct to within ±0.1 pounds, how large a sample is needed?
ANS: n ≥ 35
Example
Extensive monitoring of a computer time-sharing system has
suggested that response time to a particular editing command is
normally distributed with standard deviation 25 millisec. A new
operating system has been installed, and we wish to estimate the
true average response time µ for the new environment. Assuming
that response times are still normally distributed with σ = 25,
what sample size is necessary to ensure that the resulting 95% CI
has a width of (at most) 10?
ANS: n = 97
The sample size n must satisfy 10 = 2 × 1.96 × 25/√n,
which gives √n = 9.80, i.e. n = 96.04.
Since n must be an integer, a sample size of 97 is required.
Remarks:
1. A general formula for the sample size n necessary to ensure an
interval width w is obtained by equating w to 2 × z_{α/2} × σ/√n
and solving for n.
2. The sample size necessary for the two-sided CI to have
width w is n = (2 z_{α/2} σ/w)².
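Remark 2 translates directly into code; a minimal sketch, rounding up because n must be an integer:

```python
import math

def sample_size_for_width(sigma, width, z):
    # n = (2 * z_{alpha/2} * sigma / w)^2, rounded up to the next integer
    return math.ceil((2 * z * sigma / width) ** 2)

# Response-time example: sigma = 25, desired width 10, 95% confidence
print(sample_size_for_width(sigma=25, width=10, z=1.96))  # 97
```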
which gives P(µ < X̄ + z_α · σ/√n) = 1 − α.
Theorem
If X̄ is the mean of a random sample of size n from a population
with variance σ², the one-sided 100(1 − α)% confidence bounds
for µ are given by the upper bound X̄ + z_α · σ/√n and the lower
bound X̄ − z_α · σ/√n.
Example
Let X1 , X2 , · · · , X11 be a random sample of size 11 from a normal
distribution with unknown mean µ and variance σ² = 9.9. If
∑_{i=1}^{11} x_i = 132, then what is the 95% confidence interval for µ?
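The interval in the example above follows from x̄ = 132/11 = 12 and standard error √(9.9/11) = √0.9; a sketch of the computation:

```python
import math

n, total, var = 11, 132.0, 9.9
xbar = total / n                  # sample mean = 12.0
se = math.sqrt(var / n)           # sigma / sqrt(n) = sqrt(0.9)
z = 1.96                          # z_{0.025} for a 95% interval
half = z * se
print(round(xbar - half, 2), round(xbar + half, 2))  # 10.14 13.86
```

The length of this interval, 2 × 1.96 × √0.9 ≈ 3.718, is the figure quoted in the remark that follows.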
Remark:
Notice that the length of the 90% confidence interval for µ is
3.112, while the length of the 95% confidence interval is 3.718.
Thus, the higher the confidence level, the longer the confidence
interval: the length of the interval increases with the confidence
level.
In view of this fact, if the confidence level is zero, then the
length is also zero; that is, when the confidence level is zero,
the confidence interval for µ degenerates into the single point X̄.
Example
Let X1 , X2 , · · · , X40 be a random sample of size 40 from a
distribution with unknown mean µ and variance σ² = 10. If
∑_{i=1}^{40} x_i = 286.56, then what is the 90% confidence interval for the
population mean µ?
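For a 90% interval the critical value is z_{0.05} ≈ 1.645; a sketch of the computation for this example:

```python
import math

n, total, var = 40, 286.56, 10.0
xbar = total / n                  # sample mean = 7.164
se = math.sqrt(var / n)           # sqrt(10/40) = 0.5
z = 1.645                         # z_{0.05} for a 90% interval
print(round(xbar - z * se, 2), round(xbar + z * se, 2))  # 6.34 7.99
```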
The T Distribution
Definition
Let Z be a standard normal random variable and let χ²_γ be an
independent chi-squared random variable with γ degrees of freedom.
The random variable
T = Z / √(χ²_γ / γ)
has a t-distribution with γ degrees of freedom.
Remarks:
The T distribution is a generalization of the Cauchy
distribution and the normal distribution.
That is, if γ = 1, then the probability density function of T
becomes
f(t) = 1/(π(1 + t²)), −∞ < t < ∞,
which is the Cauchy distribution.
Further, as γ → ∞,
f(t) = (1/√(2π)) e^(−t²/2), −∞ < t < ∞,
which is the probability density function of the standard
normal distribution.
Theorem
If the random variable X has a t-distribution with γ degrees of
freedom, then
E[X] = 0 if γ ≥ 2, and does not exist if γ = 1,
and
V[X] = γ/(γ − 2) if γ ≥ 3, and does not exist if γ = 1, 2.
Theorem
If X ∼ N(µ, σ²) and X1 , X2 , · · · , Xn is a random sample from the
population X , then
(X̄ − µ)/(S/√n) ∼ t_{n−1}.
Example
Let X1 , X2 , X3 , X4 be a random sample of size 4 from a standard
normal distribution. If the statistic W is given by
W = (X1 − X2 + X3)/√(X1² + X2² + X3² + X4²),
find E(W).
ANS: E(W) = 0
Therefore, the random variable defined by the ratio of
(X̄ − µ)/(σ/√n) to √((n − 1)S²/((n − 1)σ²)) has a t-distribution
with (n − 1) degrees of freedom.
That is,
[(X̄ − µ)/(σ/√n)] / √((n − 1)S²/((n − 1)σ²)) = (X̄ − µ)/(S/√n) ∼ t_{n−1}.
Example
A random sample of 9 observations from a normal population
yields the observed statistics x̄ = 5 and (1/8)∑_{i=1}^{9} (x_i − x̄)² = 36.
What is the 95% confidence interval for µ?
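Since σ² is unknown, the interval uses the sample standard deviation s = 6 and the t critical value with 8 degrees of freedom; a sketch of the computation (the table value 2.306 for t_{0.025, 8} is taken from a standard t-table):

```python
import math

n, xbar = 9, 5.0
s = math.sqrt(36.0)               # sample sd, since (1/8) * sum (x_i - xbar)^2 = 36
t = 2.306                         # t_{0.025, 8} from a t-table
half = t * s / math.sqrt(n)       # 2.306 * 6 / 3 = 4.612
print(round(xbar - half, 2), round(xbar + half, 2))  # 0.39 9.61
```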
They are the upper and lower 100(1 − α)% bounds, respectively.
Here t_α is the t-value having an area of α to the right.
or, for σ,
( √(∑_{i=1}^{n} (X_i − µ)² / χ²_{α/2,n}) , √(∑_{i=1}^{n} (X_i − µ)² / χ²_{1−α/2,n}) ).
That is,
( (n − 1)S²/χ²_{α/2,n−1} , (n − 1)S²/χ²_{1−α/2,n−1} ),
or, for σ,
( √((n − 1)S²/χ²_{α/2,n−1}) , √((n − 1)S²/χ²_{1−α/2,n−1}) ).
Example
A random sample of 9 observations from a normal population with
µ = 5 yields the observed statistics (1/8)∑_{i=1}^{9} x_i² = 39.125 and
∑_{i=1}^{9} x_i = 45. What is the 95% confidence interval for σ²?
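For this example, with µ known, ∑(x_i − µ)² = ∑x_i² − 2µ∑x_i + nµ² = 313 − 450 + 225 = 88, and the interval for σ² divides this sum by chi-squared critical values with n = 9 degrees of freedom. A sketch (the table values 19.023 and 2.700 for χ²_{0.025,9} and χ²_{0.975,9} are taken from a standard chi-squared table):

```python
n, mu = 9, 5.0
sum_sq = 8 * 39.125               # sum of x_i^2 = 313, from (1/8) sum x_i^2 = 39.125
sum_x = 45.0
ss = sum_sq - 2 * mu * sum_x + n * mu ** 2   # sum (x_i - mu)^2 = 88
chi_hi, chi_lo = 19.023, 2.700    # chi^2_{0.025,9} and chi^2_{0.975,9} (table values)
print(round(ss / chi_hi, 2), round(ss / chi_lo, 2))  # 4.63 32.59
```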
Error of Estimate
When we use a sample mean to estimate the population
mean, we know that although we are using a method of
estimation which has certain desirable properties, the chances
are slim, virtually nonexistent, that the estimate will actually
equal the population mean µ.
Definition
The error of estimate is the difference between the estimator and
the quantity it is supposed to estimate; X̄ − µ is the error of
estimate for the population mean.
To examine this error, let us make use of the fact that for
large n
(X̄ − µ)/(σ/√n)
is a random variable having approximately the standard
normal distribution.
Gauranga C. Samanta F. M. University, Odisha
Error of Estimate Continued
Example
An industrial engineer intends to use the mean of a random sample
of size n = 150 to estimate the average mechanical aptitude (as
measured by a certain test) of assembly-line workers in a large
industry. If, on the basis of experience, the engineer can assume
that σ = 6.2 for such data, what can he assert with probability
0.99 about the maximum size of his error?
ANS: the maximum error is z_{α/2} σ/√n with α/2 = 0.005 and
z_{0.005} = 2.58, giving 1.30.
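The maximum error asserted with probability 0.99 is z_{0.005} · σ/√n; a sketch of the computation:

```python
import math

sigma, n = 6.2, 150
z = 2.58                          # z_{0.005} (table value)
max_error = z * sigma / math.sqrt(n)
print(round(max_error, 1))  # 1.3
```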
Example
A research worker wants to determine the average time it takes a
mechanic to rotate the tires of a car, and he wants to be able to
assert with 95% confidence that the mean of his sample is off by
at most 0.50 minute. If he can presume from past experience that
σ = 1.6 minutes, how large a sample will he have to take?
ANS: n = 39.3 ≈ 40
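Here the stated 0.50 minute is the maximum error (half-width), so the required size is n = (z_{α/2} σ/E)², rounded up; a sketch:

```python
import math

sigma, E = 1.6, 0.50
z = 1.96                          # z_{0.025} for 95% confidence
n = (z * sigma / E) ** 2          # (1.96 * 1.6 / 0.5)^2
print(round(n, 1), math.ceil(n))  # 39.3 40
```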