
Probability and Random Variables

6.1 INTRODUCTION
Thus far we have discussed the transmission of deterministic signals over a channel, and we have not
emphasized the central role played by the concept of "randomness" in communication. The word random
means unpredictable. If the receiver at the end of a channel knew in advance the message output from the
originating source, there would be no need for communication. So there is a randomness in the message
source. Moreover, transmitted signals are always accompanied by noise introduced in the system. These
noise waveforms are also unpredictable. The objective of this chapter is to present the mathematical
background essential for further study of communication.

6.2 PROBABILITY
A. Random Experiments
In the study of probability, any process of observation is referred to as an experiment. The results of an
observation are called the outcomes of the experiment. An experiment is called a random experiment if its
outcome cannot be predicted. Typical examples of a random experiment are the roll of a die, the toss of a
coin, drawing a card from a deck, or selecting a message signal for transmission from several messages.

B. Sample Space and Events


The set of all possible outcomes of a random experiment is called the sample space $S$. An element in $S$ is
called a sample point. Each outcome of a random experiment corresponds to a sample point.
A set $A$ is called a subset of $B$, denoted by $A \subset B$, if every element of $A$ is also an element of $B$. Any
subset of the sample space $S$ is called an event. A sample point of $S$ is often referred to as an elementary
event. Note that the sample space $S$ is a subset of itself, that is, $S \subset S$. Since $S$ is the set of all possible
outcomes, it is often called the certain event.

C. Algebra of Events
1. The complement of event $A$, denoted $\bar{A}$, is the event containing all sample points in $S$ but not in $A$.
2. The union of events $A$ and $B$, denoted $A \cup B$, is the event containing all sample points in either $A$ or $B$ or both.
3. The intersection of events $A$ and $B$, denoted $A \cap B$, is the event containing all sample points in both $A$ and $B$.
4. The event containing no sample point is called the null event, denoted $\emptyset$. Thus $\emptyset$ corresponds to an impossible event.
5. Two events $A$ and $B$ are called mutually exclusive or disjoint if they contain no common sample point, that is, $A \cap B = \emptyset$.
By the preceding set of definitions, we obtain the following identities:
$$\bar{S} = \emptyset \qquad \bar{\emptyset} = S$$
$$S \cup A = S \qquad S \cap A = A$$
$$A \cup \bar{A} = S \qquad A \cap \bar{A} = \emptyset \qquad \bar{\bar{A}} = A$$
D. Probabilities of Events
An assignment of real numbers to the events defined on $S$ is known as the probability measure. In the
axiomatic definition, the probability $P(A)$ of the event $A$ is a real number assigned to $A$ that satisfies the
following three axioms:

Axiom 1: $P(A) \ge 0$ (6.1)
Axiom 2: $P(S) = 1$ (6.2)
Axiom 3: $P(A \cup B) = P(A) + P(B)$ if $A \cap B = \emptyset$ (6.3)

With the preceding axioms, the following useful properties of probability can be obtained (Solved
Problems 6.1-6.4):

1. $P(\bar{A}) = 1 - P(A)$ (6.4)
2. $P(\emptyset) = 0$ (6.5)
3. $P(A) \le P(B)$ if $A \subset B$ (6.6)
4. $P(A) \le 1$ (6.7)
5. $P(A \cup B) = P(A) + P(B) - P(A \cap B)$ (6.8)

Note that property 4 can be easily derived from axiom 2 and property 3. Since $A \subset S$, we have
$$P(A) \le P(S) = 1$$
Thus, combining with axiom 1, we obtain
$$0 \le P(A) \le 1 \qquad (6.9)$$
Property 5 implies that
$$P(A \cup B) \le P(A) + P(B) \qquad (6.10)$$
since $P(A \cap B) \ge 0$ by axiom 1.
One can also define $P(A)$ intuitively, in terms of relative frequency. Suppose that a random experiment
is repeated $n$ times. If an event $A$ occurs $n_A$ times, then its probability $P(A)$ is defined as
$$P(A) = \lim_{n \to \infty} \frac{n_A}{n} \qquad (6.11)$$
Note that this limit may not exist.

E. Equally Likely Events

Consider a finite sample space $S$ with finite elements
$$S = \{\lambda_1, \lambda_2, \ldots, \lambda_n\}$$
where the $\lambda_i$'s are elementary events. Let $P(\lambda_i) = p_i$. Then
1. $0 \le p_i \le 1$, $i = 1, 2, \ldots, n$
2. $\sum_{i=1}^{n} p_i = p_1 + p_2 + \cdots + p_n = 1$ (6.12)
3. If $A = \bigcup_{i \in I} \lambda_i$, where $I$ is a collection of subscripts, then
$$P(A) = \sum_{\lambda_i \in A} P(\lambda_i) = \sum_{i \in I} p_i \qquad (6.13)$$

When all elementary events $\lambda_i$ ($i = 1, 2, \ldots, n$) are equally likely events, that is,
$$p_1 = p_2 = \cdots = p_n$$
then from Eq. (6.12) we have
$$p_i = \frac{1}{n} \qquad i = 1, 2, \ldots, n \qquad (6.14)$$
and
$$P(A) = \frac{n(A)}{n} \qquad (6.15)$$
where $n(A)$ is the number of outcomes belonging to event $A$ and $n$ is the number of sample points in $S$.
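As a quick numerical check of Eq. (6.15), the short Python sketch below counts outcomes for a fair die; the die and the event chosen are illustrative assumptions made here, not examples from the text.

```python
from fractions import Fraction

# Sample space for one roll of a fair die: six equally likely sample points.
S = [1, 2, 3, 4, 5, 6]
A = [x for x in S if x % 2 == 0]      # event "even outcome"

# Eq. (6.15): P(A) = n(A)/n for equally likely outcomes.
P_A = Fraction(len(A), len(S))
print(P_A)   # 1/2
```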

F. Conditional Probability
The conditional probability of an event $A$ given the event $B$, denoted by $P(A|B)$, is defined as
$$P(A|B) = \frac{P(A \cap B)}{P(B)} \qquad P(B) > 0 \qquad (6.16)$$
where $P(A \cap B)$ is the joint probability of $A$ and $B$. Similarly,
$$P(B|A) = \frac{P(A \cap B)}{P(A)} \qquad P(A) > 0 \qquad (6.17)$$
is the conditional probability of an event $B$ given event $A$. From Eqs (6.16) and (6.17) we have
$$P(A \cap B) = P(A|B)P(B) = P(B|A)P(A) \qquad (6.18)$$



Equation (6.18) is often quite useful in computing the joint probability of events.
From Eq. (6.18) we can obtain the following Bayes' rule:
$$P(A|B) = \frac{P(B|A)P(A)}{P(B)} \qquad (6.19)$$

G. Independent Events
Two events $A$ and $B$ are said to be (statistically) independent if
$$P(A|B) = P(A) \quad \text{and} \quad P(B|A) = P(B) \qquad (6.20)$$
This, together with Eq. (6.18), implies that for two statistically independent events
$$P(A \cap B) = P(A)P(B) \qquad (6.21)$$
We may also extend the definition of independence to more than two events. The events $A_1, A_2, \ldots, A_n$ are
independent if and only if for every subset $\{A_{i_1}, A_{i_2}, \ldots, A_{i_k}\}$ ($2 \le k \le n$) of these events,
$$P(A_{i_1} \cap A_{i_2} \cap \cdots \cap A_{i_k}) = P(A_{i_1})P(A_{i_2}) \cdots P(A_{i_k}) \qquad (6.22)$$

H. Total Probability
The events $A_1, A_2, \ldots, A_n$ are called mutually exclusive and exhaustive if
$$\bigcup_{i=1}^{n} A_i = A_1 \cup A_2 \cup \cdots \cup A_n = S \quad \text{and} \quad A_i \cap A_j = \emptyset \quad i \ne j \qquad (6.23)$$

Let $B$ be any event in $S$. Then
$$P(B) = \sum_{i=1}^{n} P(B \cap A_i) = \sum_{i=1}^{n} P(B|A_i)P(A_i) \qquad (6.24)$$
which is known as the total probability of event $B$ (Solved Problem 6.13). Let $A = A_i$ in Eq. (6.19); then, using Eq. (6.24),
we obtain
$$P(A_i|B) = \frac{P(B|A_i)P(A_i)}{\sum_{i=1}^{n} P(B|A_i)P(A_i)} \qquad (6.25)$$
Note that the terms on the right-hand side are all conditioned on events $A_i$, while that on the left is condi-
tioned on $B$. Equation (6.25) is sometimes referred to as Bayes' theorem.
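The following minimal Python sketch evaluates Eqs (6.24) and (6.25) for a three-event partition; the prior and conditional probabilities are illustrative values assumed here, not taken from the text.

```python
# Partition A1, A2, A3 of S with prior probabilities P(Ai), plus the
# conditional probabilities P(B|Ai). The numbers are illustrative only.
P_A = [0.5, 0.3, 0.2]          # P(A1), P(A2), P(A3); must sum to 1
P_B_given_A = [0.1, 0.4, 0.8]  # P(B|A1), P(B|A2), P(B|A3)

# Eq. (6.24): total probability of B.
P_B = sum(pb * pa for pb, pa in zip(P_B_given_A, P_A))

# Eq. (6.25): Bayes' theorem, posterior P(Ai|B) for each i.
posterior = [pb * pa / P_B for pb, pa in zip(P_B_given_A, P_A)]
print(P_B, posterior, sum(posterior))  # the posteriors sum to 1
```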

6.3 RANDOM VARIABLES


A. Random Variables
Consider a random experiment with sample space $S$. A random variable $X(\lambda)$ is a single-valued real func-
tion that assigns a real number, called the value of $X(\lambda)$, to each sample point $\lambda$ of $S$. Often we use a single
letter $X$ for this function in place of $X(\lambda)$ and use r.v. to denote the random variable. A schematic diagram
representing a r.v. is given in Fig. 6.1.
[Fig. 6.1 Random variable X as a function]
The sample space $S$ is termed the domain of the r.v. $X$, and the collection of all numbers [values of
$X(\lambda)$] is termed the range of the r.v. $X$. Thus, the range of $X$ is a certain subset of the set of all real
numbers, and it is usually denoted by $R_X$. Note that two or more different sample points might give the
same value of $X(\lambda)$, but two different numbers in the range cannot be assigned to the same sample point.
The r.v. $X$ induces a probability measure on the real line as follows:
$$P(X = x) = P\{\lambda : X(\lambda) = x\}$$
$$P(X \le x) = P\{\lambda : X(\lambda) \le x\}$$
$$P(x_1 < X \le x_2) = P\{\lambda : x_1 < X(\lambda) \le x_2\}$$
If $X$ can take on only a countable number of distinct values, then $X$ is called a discrete random variable.
If $X$ can assume any values within one or more intervals on the real line, then $X$ is called a continuous
random variable. The number of telephone calls arriving at an office in a finite time is an example of a
discrete random variable, and the exact time of arrival of a telephone call is an example of a continuous
random variable.

B. Distribution Function
The distribution function [or cumulative distribution function (cdf)] of $X$ is the function defined by
$$F_X(x) = P(X \le x) \qquad -\infty < x < \infty \qquad (6.26)$$
Properties of $F_X(x)$:
1. $0 \le F_X(x) \le 1$ (6.27a)
2. $F_X(x_1) \le F_X(x_2)$ if $x_1 \le x_2$ (6.27b)
3. $F_X(\infty) = 1$ (6.27c)
4. $F_X(-\infty) = 0$ (6.27d)
5. $F_X(a^+) = F_X(a)$, where $a^+ = \lim_{\varepsilon \to 0^+}(a + \varepsilon)$ (6.27e)
From definition (6.26) we can compute other probabilities:
$$P(a < X \le b) = F_X(b) - F_X(a) \qquad (6.28)$$
$$P(X > a) = 1 - F_X(a) \qquad (6.29)$$
$$P(X < b) = F_X(b^-), \quad \text{where } b^- = \lim_{\varepsilon \to 0^+}(b - \varepsilon) \qquad (6.30)$$

C. Discrete Random Variables and Probability Mass Functions

Let $X$ be a discrete r.v. with cdf $F_X(x)$. Then $F_X(x)$ is a staircase function (see Fig. 6.2) that changes
values only in jumps (at most a countable number of them) and is constant between jumps.
[Fig. 6.2 The cdf of a discrete r.v.]
Suppose that the jumps in $F_X(x)$ of a discrete r.v. $X$ occur at the points $x_1, x_2, \ldots$, where the sequence
may be either finite or countably infinite, and we assume $x_i < x_j$ if $i < j$. Then
$$F_X(x_i) - F_X(x_{i-1}) = P(X \le x_i) - P(X \le x_{i-1}) = P(X = x_i) \qquad (6.31)$$
Let
$$p_X(x) = P(X = x) \qquad (6.32)$$
Then the function $p_X(x)$ is called the probability mass function (pmf) of the discrete r.v. $X$.
Properties of $p_X(x)$:
1. $0 \le p_X(x_i) \le 1$, $i = 1, 2, \ldots$ (6.33a)
2. $p_X(x) = 0$ if $x \ne x_i$ ($i = 1, 2, \ldots$) (6.33b)
3. $\sum_i p_X(x_i) = 1$ (6.33c)
The cdf $F_X(x)$ of a discrete r.v. $X$ can be obtained by
$$F_X(x) = P(X \le x) = \sum_{x_i \le x} p_X(x_i) \qquad (6.34)$$
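Eq. (6.34) translates directly into code. The sketch below builds the staircase cdf from an assumed illustrative pmf (the probability values are ours, chosen only to satisfy Eq. (6.33c)).

```python
# A discrete r.v. X with pmf given on points x1 < x2 < ...; the values
# below are illustrative. cdf() is the staircase function of Eq. (6.34).
pmf = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}   # satisfies Eq. (6.33c)

def cdf(x, pmf=pmf):
    """F_X(x) = sum of p_X(x_i) over all x_i <= x, Eq. (6.34)."""
    return sum(p for xi, p in pmf.items() if xi <= x)

print(cdf(1.5))   # 0.4 = p_X(0) + p_X(1)
print(cdf(3))     # 1.0, consistent with Eq. (6.27c)
```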

D. Continuous Random Variables and Probability Density Functions

Let $X$ be a r.v. with cdf $F_X(x)$ such that $F_X(x)$ is continuous and also has a derivative $dF_X(x)/dx$ that exists
everywhere except at possibly a finite number of points and is piecewise continuous. Then $X$ is a con-
tinuous r.v., and (Solved Problem 6.22)
$$P(X = x) = 0 \qquad (6.35)$$
In most applications, the r.v. is either discrete or continuous. But if the cdf $F_X(x)$ of a r.v. $X$ possesses
features of both discrete and continuous r.v.'s, then the r.v. $X$ is called a mixed r.v. Let
$$f_X(x) = \frac{dF_X(x)}{dx} \qquad (6.36)$$
The function $f_X(x)$ is called the probability density function (pdf) of the continuous r.v. $X$.
Properties of $f_X(x)$:
1. $f_X(x) \ge 0$ (6.37a)
2. $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$ (6.37b)
3. $f_X(x)$ is piecewise continuous.
4. $P(a < X \le b) = \int_a^b f_X(x)\,dx$ (6.37c)
The cdf $F_X(x)$ of a continuous r.v. $X$ can be obtained by
$$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(\xi)\,d\xi \qquad (6.38)$$
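A numerical check of Eq. (6.38) is straightforward: integrate an assumed pdf (here the exponential pdf of Eq. (6.106), introduced in Solved Problem 6.24; the rate value is our choice) and compare with the closed-form cdf.

```python
import math

a = 2.0   # rate parameter; illustrative

def f_X(x):
    """Exponential pdf f_X(x) = a*exp(-a*x) for x >= 0, cf. Eq. (6.106)."""
    return a * math.exp(-a * x) if x >= 0 else 0.0

def F_X(x, n=10_000):
    """Eq. (6.38): F_X(x) as a numerical (left Riemann) integral of the pdf."""
    if x <= 0:
        return 0.0
    dx = x / n
    return sum(f_X(k * dx) * dx for k in range(n))

print(F_X(1.0), 1 - math.exp(-a))   # numeric vs closed form, agree to ~1e-4
```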

6.4 TWO-DIMENSIONAL RANDOM VARIABLES


A. Joint Distribution Function
Let $S$ be the sample space of a random experiment. Let $X$ and $Y$ be two r.v.'s defined on $S$. Then
the pair $(X, Y)$ is called a two-dimensional r.v. if each of $X$ and $Y$ associates a real number with
every element of $S$. The joint cumulative distribution function (or joint cdf) of $X$ and $Y$, denoted by
$F_{XY}(x, y)$, is the function defined by
$$F_{XY}(x, y) = P(X \le x, Y \le y) \qquad (6.39)$$
Two r.v.'s $X$ and $Y$ will be called independent if
$$F_{XY}(x, y) = F_X(x)F_Y(y) \qquad (6.40)$$
for every pair of values of $x$ and $y$.

B. Marginal Distribution Function

Since $\{X \le \infty\}$ and $\{Y \le \infty\}$ are certain events, we have
$$\{X \le x, Y \le \infty\} = \{X \le x\} \qquad \{X \le \infty, Y \le y\} = \{Y \le y\}$$
so that
$$F_{XY}(x, \infty) = F_X(x) \qquad (6.41a)$$
$$F_{XY}(\infty, y) = F_Y(y) \qquad (6.41b)$$
The cdf's $F_X(x)$ and $F_Y(y)$, when obtained by Eqs (6.41a) and (6.41b), are referred to as the marginal
cdf's of $X$ and $Y$, respectively.

C. Joint Probability Mass Functions


Let $(X, Y)$ be a discrete two-dimensional r.v. such that $(X, Y)$ takes on the values $(x_i, y_j)$ for a certain allowable
set of integers $i$ and $j$. Let
$$p_{XY}(x_i, y_j) = P(X = x_i, Y = y_j) \qquad (6.42)$$
The function $p_{XY}(x_i, y_j)$ is called the joint probability mass function (joint pmf) of $(X, Y)$.
Properties of $p_{XY}(x_i, y_j)$:
1. $0 \le p_{XY}(x_i, y_j) \le 1$ (6.43a)
2. $\sum_{x_i}\sum_{y_j} p_{XY}(x_i, y_j) = 1$ (6.43b)
The joint cdf of a discrete two-dimensional r.v. $(X, Y)$ is given by
$$F_{XY}(x, y) = \sum_{x_i \le x}\sum_{y_j \le y} p_{XY}(x_i, y_j) \qquad (6.44)$$

D. Marginal Probability Mass Functions


Suppose that for a fixed value $X = x_i$, the r.v. $Y$ can take on only the possible values $y_j$ ($j = 1, 2, \ldots, n$).
Then
$$p_X(x_i) = \sum_{y_j} p_{XY}(x_i, y_j) \qquad (6.45a)$$
Similarly,
$$p_Y(y_j) = \sum_{x_i} p_{XY}(x_i, y_j) \qquad (6.45b)$$
The pmf's $p_X(x_i)$ and $p_Y(y_j)$, when obtained by Eqs (6.45a) and (6.45b), are referred to as the marginal
pmf's of $X$ and $Y$, respectively. If $X$ and $Y$ are independent r.v.'s, then
$$p_{XY}(x_i, y_j) = p_X(x_i)p_Y(y_j) \qquad (6.46)$$

E. Joint Probability Density Functions


Let $(X, Y)$ be a continuous two-dimensional r.v. with cdf $F_{XY}(x, y)$ and let
$$f_{XY}(x, y) = \frac{\partial^2 F_{XY}(x, y)}{\partial x\,\partial y} \qquad (6.47)$$
The function $f_{XY}(x, y)$ is called the joint probability density function (joint pdf) of $(X, Y)$. By integrating
Eq. (6.47), we have
$$F_{XY}(x, y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f_{XY}(\xi, \eta)\,d\eta\,d\xi \qquad (6.48)$$
Properties of $f_{XY}(x, y)$:
1. $f_{XY}(x, y) \ge 0$ (6.49a)
2. $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\,dx\,dy = 1$ (6.49b)

F. Marginal Probability Density Functions

By Eqs (6.41a), (6.41b), and definition (6.36), we obtain
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy \qquad (6.50a)$$
$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx \qquad (6.50b)$$
The pdf's $f_X(x)$ and $f_Y(y)$, when obtained by Eqs (6.50a) and (6.50b), are referred to as the marginal pdf's
of $X$ and $Y$, respectively. If $X$ and $Y$ are independent r.v.'s, then
$$f_{XY}(x, y) = f_X(x)f_Y(y) \qquad (6.51)$$
The conditional pdf of $X$ given the event $\{Y = y\}$ is
$$f_{X|Y}(x|y) = \frac{f_{XY}(x, y)}{f_Y(y)} \qquad f_Y(y) > 0 \qquad (6.52)$$
where $f_Y(y)$ is the marginal pdf of $Y$.

6.5 FUNCTIONS OF RANDOM VARIABLES


A. Random Variable g(X)
Given a r.v. $X$ and a function $g(x)$, the expression
$$Y = g(X) \qquad (6.53)$$
defines a new r.v. $Y$. With $y$ a given number, we denote by $D_y$ the subset of $R_X$ (range of $X$) such that $g(x) \le y$.
Then
$$(Y \le y) = \{g(X) \le y\} = (X \in D_y)$$
where $(X \in D_y)$ is the event consisting of all outcomes $\lambda$ such that the point $X(\lambda) \in D_y$. Hence,
$$F_Y(y) = P(Y \le y) = P[g(X) \le y] = P(X \in D_y) \qquad (6.54)$$
If $X$ is a continuous r.v. with pdf $f_X(x)$, then
$$F_Y(y) = \int_{D_y} f_X(x)\,dx \qquad (6.55)$$

Determination of $f_Y(y)$ from $f_X(x)$:

Let $X$ be a continuous r.v. with pdf $f_X(x)$. If the transformation $y = g(x)$ is one-to-one and has the
inverse transformation
$$x = g^{-1}(y) = h(y) \qquad (6.56)$$
then the pdf of $Y$ is given by (Solved Problem 6.30)
$$f_Y(y) = f_X(x)\left|\frac{dx}{dy}\right| = f_X[h(y)]\left|\frac{dh(y)}{dy}\right| \qquad (6.57)$$
Note that if $g(x)$ is a continuous monotonic increasing or decreasing function, then the transformation
$y = g(x)$ is one-to-one. If the transformation $y = g(x)$ is not one-to-one, $f_Y(y)$ is obtained as follows:
Denoting the real roots of $y = g(x)$ by $x_k$, that is,
$$y = g(x_1) = \cdots = g(x_k) = \cdots$$
we have
$$f_Y(y) = \sum_k \frac{f_X(x_k)}{|g'(x_k)|} \qquad (6.58)$$
where $g'(x)$ is the derivative of $g(x)$.
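Eq. (6.57) can be verified by simulation. In the sketch below, the map $y = e^x$ with $X$ uniform on $(0, 1)$ is a convenient one-to-one choice of ours; the formula then gives $f_Y(y) = f_X(\ln y)\cdot|1/y| = 1/y$ on $(1, e)$.

```python
import math, random

random.seed(1)

# X uniform on (0, 1); Y = g(X) = exp(X) is one-to-one with inverse
# h(y) = ln(y), so Eq. (6.57) gives f_Y(y) = 1/y on (1, e).
def f_Y(y):
    return 1.0 / y if 1.0 < y < math.e else 0.0

# Monte Carlo check: fraction of samples in a small bin vs f_Y * bin width.
N, y0, dy = 200_000, 1.5, 0.01
count = sum(1 for _ in range(N) if y0 <= math.exp(random.random()) < y0 + dy)
print(count / N, f_Y(y0) * dy)   # both close to 0.00667
```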

B. One Function of Two Random Variables


Given two random variables $X$ and $Y$ and a function $g(x, y)$, the expression
$$Z = g(X, Y) \qquad (6.59)$$
is a new random variable. With $z$ a given number, we denote by $D_z$ the region of the $xy$ plane such that
$g(x, y) \le z$. Then
$$(Z \le z) = \{g(X, Y) \le z\} = \{(X, Y) \in D_z\}$$
where $\{(X, Y) \in D_z\}$ is the event consisting of all outcomes $\lambda$ such that the point $\{X(\lambda), Y(\lambda)\}$ is in $D_z$.
Hence,
$$F_Z(z) = P(Z \le z) = P\{(X, Y) \in D_z\} \qquad (6.60)$$
If $X$ and $Y$ are continuous r.v.'s with joint pdf $f_{XY}(x, y)$, then
$$F_Z(z) = \iint_{D_z} f_{XY}(x, y)\,dx\,dy \qquad (6.61)$$
C. Two Functions of Two Random Variables
Given two r.v.'s $X$ and $Y$ and two functions $g(x, y)$ and $h(x, y)$, the expression
$$Z = g(X, Y) \qquad W = h(X, Y) \qquad (6.62)$$
defines two new r.v.'s $Z$ and $W$. With $z$ and $w$ two given numbers, we denote by $D_{zw}$ the subset of $R_{XY}$ [range
of $(X, Y)$] such that $g(x, y) \le z$ and $h(x, y) \le w$. Then
$$(Z \le z, W \le w) = \{g(X, Y) \le z, h(X, Y) \le w\} = \{(X, Y) \in D_{zw}\}$$
where $\{(X, Y) \in D_{zw}\}$ is the event consisting of all outcomes $\lambda$ such that the point $\{X(\lambda), Y(\lambda)\} \in D_{zw}$.
Hence,
$$F_{ZW}(z, w) = P(Z \le z, W \le w) = P\{(X, Y) \in D_{zw}\} \qquad (6.63)$$

In the continuous case we have
$$F_{ZW}(z, w) = \iint_{D_{zw}} f_{XY}(x, y)\,dx\,dy \qquad (6.64)$$

Determination of $f_{ZW}(z, w)$ from $f_{XY}(x, y)$:

Let $X$ and $Y$ be two continuous r.v.'s with joint pdf $f_{XY}(x, y)$. If the transformation
$$z = g(x, y) \qquad w = h(x, y) \qquad (6.65)$$
is one-to-one and has the inverse transformation
$$x = q(z, w) \qquad y = r(z, w) \qquad (6.66)$$
then the joint pdf of $Z$ and $W$ is given by
$$f_{ZW}(z, w) = f_{XY}(x, y)\,|J(x, y)|^{-1} \qquad (6.67)$$
where $x = q(z, w)$, $y = r(z, w)$, and
$$J(x, y) = \begin{vmatrix} \dfrac{\partial z}{\partial x} & \dfrac{\partial z}{\partial y} \\[2mm] \dfrac{\partial w}{\partial x} & \dfrac{\partial w}{\partial y} \end{vmatrix} \qquad (6.68)$$
which is the Jacobian of the transformation (6.65).

6.6 STATISTICAL AVERAGES


A. Expectation
The expectation (or mean) of a r.v. $X$, denoted by $E(X)$ or $\mu_X$, is defined by
$$\mu_X = E(X) = \begin{cases} \sum_i x_i\,p_X(x_i) & X \text{ discrete} \\ \int_{-\infty}^{\infty} x\,f_X(x)\,dx & X \text{ continuous} \end{cases} \qquad (6.69)$$
The expectation of $Y = g(X)$ is given by
$$E(Y) = E[g(X)] = \begin{cases} \sum_i g(x_i)\,p_X(x_i) & \text{(discrete case)} \\ \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx & \text{(continuous case)} \end{cases} \qquad (6.70)$$
The expectation of $Z = g(X, Y)$ is given by
$$E(Z) = E[g(X, Y)] = \begin{cases} \sum_i\sum_j g(x_i, y_j)\,p_{XY}(x_i, y_j) & \text{(discrete case)} \\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x, y)\,f_{XY}(x, y)\,dx\,dy & \text{(continuous case)} \end{cases} \qquad (6.71)$$
Note that the expectation operation is linear, that is,
$$E[X + Y] = E[X] + E[Y] \qquad (6.72)$$
$$E[cX] = cE[X] \qquad (6.73)$$
where $c$ is a constant (Solved Problem 6.45).

B. Moment
The $n$th moment of a r.v. $X$ is defined by
$$E(X^n) = \begin{cases} \sum_i x_i^n\,p_X(x_i) & X \text{ discrete} \\ \int_{-\infty}^{\infty} x^n f_X(x)\,dx & X \text{ continuous} \end{cases} \qquad (6.74)$$

C. Variance
The variance of a r.v. $X$, denoted by $\sigma_X^2$ or $\mathrm{Var}(X)$, is defined by
$$\mathrm{Var}(X) = \sigma_X^2 = E[(X - \mu_X)^2] \qquad (6.75)$$
Thus,
$$\sigma_X^2 = \begin{cases} \sum_i (x_i - \mu_X)^2\,p_X(x_i) & X \text{ discrete} \\ \int_{-\infty}^{\infty} (x - \mu_X)^2 f_X(x)\,dx & X \text{ continuous} \end{cases} \qquad (6.76)$$

The positive square root of the variance, $\sigma_X$, is called the standard deviation of $X$. The variance or
standard deviation is a measure of the "spread" of the values of $X$ from its mean $\mu_X$. By using Eqs (6.72)
and (6.73), the expression in Eq. (6.75) can be simplified to
$$\sigma_X^2 = E[X^2] - \mu_X^2 = E[X^2] - (E[X])^2 \qquad (6.77)$$
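The sketch below checks that Eqs (6.76) and (6.77) give the same variance for an assumed illustrative pmf (the values are ours).

```python
# Check Eq. (6.77), var = E[X^2] - (E[X])^2, against the defining Eq. (6.76).
pmf = {-1: 0.2, 0: 0.3, 2: 0.5}   # illustrative pmf

mean  = sum(x * p for x, p in pmf.items())                 # Eq. (6.69)
var_a = sum((x - mean) ** 2 * p for x, p in pmf.items())   # Eq. (6.76)
var_b = sum(x * x * p for x, p in pmf.items()) - mean ** 2 # Eq. (6.77)
print(mean, var_a, var_b)   # the two variance formulas agree (1.56)
```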

D. Covariance and Correlation Coefficient


The $(k, n)$th moment of a two-dimensional r.v. $(X, Y)$ is defined by
$$m_{kn} = E(X^k Y^n) = \begin{cases} \sum_i\sum_j x_i^k y_j^n\,p_{XY}(x_i, y_j) & \text{discrete} \\ \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^k y^n f_{XY}(x, y)\,dx\,dy & \text{continuous} \end{cases} \qquad (6.78)$$

The $(1, 1)$th joint moment of $(X, Y)$,
$$m_{11} = E(XY) \qquad (6.79)$$
is called the correlation of $X$ and $Y$. If $E(XY) = 0$, then we say that $X$ and $Y$ are orthogonal. The covari-
ance of $X$ and $Y$, denoted by $\mathrm{Cov}(X, Y)$ or $\sigma_{XY}$, is defined by
$$\mathrm{Cov}(X, Y) = \sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] \qquad (6.80)$$
Expanding Eq. (6.80), we obtain
$$\mathrm{Cov}(X, Y) = E(XY) - E(X)E(Y) \qquad (6.81)$$
If $\mathrm{Cov}(X, Y) = 0$, then we say that $X$ and $Y$ are uncorrelated. From Eq. (6.81) we see that $X$ and $Y$ are
uncorrelated if
$$E(XY) = E(X)E(Y) \qquad (6.82)$$
Note that if $X$ and $Y$ are independent, then it can be shown that they are uncorrelated (Supplementary
Problem 6.13). However, the converse is not true in general; that is, the fact that $X$ and $Y$ are uncorrelated
does not, in general, imply that they are independent (Supplementary Problem 6.14). The correlation
coefficient, denoted by $\rho(X, Y)$ or $\rho_{XY}$, is defined by
$$\rho(X, Y) = \rho_{XY} = \frac{\sigma_{XY}}{\sigma_X\sigma_Y} \qquad (6.83)$$
It can be shown that (Solved Problem 6.53)
$$|\rho_{XY}| \le 1 \quad \text{or} \quad -1 \le \rho_{XY} \le 1 \qquad (6.84)$$

6.7 SPECIAL DISTRIBUTIONS


There are several distributions that arise very often in communication problems. These include the binomial
distribution, the Poisson distribution, and the normal, or gaussian, distribution.

A. Binomial Distribution
A r.v. $X$ is called a binomial r.v. with parameters $(n, p)$ if its pmf is given by
$$p_X(k) = P(X = k) = \binom{n}{k} p^k (1 - p)^{n-k} \qquad k = 0, 1, \ldots, n \qquad (6.85)$$
where $0 \le p \le 1$ and
$$\binom{n}{k} = \frac{n!}{k!(n - k)!}$$
which is known as the binomial coefficient. The corresponding cdf of $X$ is
$$F_X(x) = \sum_{k=0}^{m} \binom{n}{k} p^k (1 - p)^{n-k} \qquad m \le x < m + 1 \qquad (6.86)$$

The mean and variance of the binomial r.v. $X$ are
$$\mu_X = np \qquad \sigma_X^2 = np(1 - p) \qquad (6.87)$$

The binomial random variable $X$ is an integer-valued discrete random variable associated with repeated
trials of an experiment. Consider performing some experiment and observing only whether event $A$
occurs. If $A$ occurs, we call the experiment a success; if it does not occur ($\bar{A}$ occurs), we call it a failure.
Suppose that the probability that $A$ occurs is $P(A) = p$; hence, $P(\bar{A}) = q = 1 - p$. We repeat this experi-
ment $n$ times (trials) under the following assumptions:

1. $P(A)$ is constant on each trial.

2. The $n$ trials are independent.

A point in the sample space is a sequence of $n$ $A$'s and $\bar{A}$'s. A point with $k$ $A$'s and $n - k$ $\bar{A}$'s will be assigned
a probability of $p^k q^{n-k}$. Thus, if $X$ is the random variable associated with the number of times that
$A$ occurs in $n$ trials, then the values of $X$ are the integers $k = 0, 1, \ldots, n$.
In the study of communications, the binomial distribution applies to digital transmission when
$X$ stands for the number of errors in a message of $n$ digits. (See Solved Problems 6.17 and 6.41.)
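A short sketch evaluating Eq. (6.85) and confirming Eq. (6.87) by direct summation; the parameters $n = 5$, $p = 0.6$ are illustrative choices of ours (they match Solved Problem 6.17 below).

```python
from math import comb

n, p = 5, 0.6   # illustrative parameters

def binom_pmf(k):
    """Eq. (6.85): p_X(k) = C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

mean = sum(k * binom_pmf(k) for k in range(n + 1))
var  = sum(k * k * binom_pmf(k) for k in range(n + 1)) - mean**2
print(mean, n * p)            # both 3.0, matching Eq. (6.87)
print(var, n * p * (1 - p))   # both 1.2, matching Eq. (6.87)
```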

B. Poisson Distribution
A r.v. $X$ is called a Poisson r.v. with parameter $\alpha$ ($> 0$) if its pmf is given by
$$p_X(k) = P(X = k) = e^{-\alpha}\frac{\alpha^k}{k!} \qquad k = 0, 1, \ldots \qquad (6.88)$$
The corresponding cdf of $X$ is
$$F_X(x) = e^{-\alpha}\sum_{k=0}^{n}\frac{\alpha^k}{k!} \qquad n \le x < n + 1 \qquad (6.89)$$

The mean and variance of the Poisson r.v. $X$ are (Solved Problem 6.42)
$$\mu_X = \alpha \qquad \sigma_X^2 = \alpha \qquad (6.90)$$
The Poisson distribution arises in some problems involving counting, for example, monitoring the
number of telephone calls arriving at a switching center during various intervals of time. In digital
communication, the Poisson distribution is pertinent to the problem of the transmission of many data
bits when the error rates are low. The binomial distribution becomes awkward to handle in such cases.
However, if the mean value of the error rate remains finite and equal to $\alpha$, we can approximate the
binomial distribution by the Poisson distribution. (See Solved Problems 6.19 and 6.20.)

C. Normal (or Gaussian) Distribution


A r.v. $X$ is called a normal (or gaussian) r.v. if its pdf is of the form
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-(x - \mu)^2/(2\sigma^2)} \qquad (6.91)$$
The corresponding cdf of $X$ is
$$F_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{x} e^{-(\xi - \mu)^2/(2\sigma^2)}\,d\xi \qquad (6.92)$$

This integral cannot be evaluated in closed form and must be evaluated numerically. It is convenient to
use the function $Q(z)$ defined as
$$Q(z) = \frac{1}{\sqrt{2\pi}}\int_{z}^{\infty} e^{-\xi^2/2}\,d\xi \qquad (6.93)$$
Then Eq. (6.92) can be written as
$$F_X(x) = 1 - Q\!\left(\frac{x - \mu}{\sigma}\right) \qquad (6.94)$$

The function $Q(z)$ is known as the complementary error function, or simply the $Q$ function. The function
$Q(z)$ is tabulated in Table C-1 (App. C). Figure 6.3 illustrates a normal distribution. The mean and vari-
ance of $X$ are (Solved Problem 6.43)
$$\mu_X = \mu \qquad \sigma_X^2 = \sigma^2 \qquad (6.95)$$
We shall use the notation $N(\mu; \sigma^2)$ to denote that $X$ is normal with mean $\mu$ and variance $\sigma^2$. In particular,
$X = N(0; 1)$, that is, $X$ with zero mean and unit variance, is defined as a standard normal r.v.
[Fig. 6.3 Normal distribution]

The normal (or gaussian) distribution has played a significant role in the study of random phenomena
in nature. Many naturally occurring random phenomena are approximately normal. Another reason for
the importance of the normal distribution is a remarkable theorem called the central-limit theorem. This
theorem states that the sum of a large number of independent random variables, under certain conditions,
can be approximated by a normal distribution.
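In code, $Q(z)$ is conveniently expressed through the standard complementary error function, since $Q(z) = \tfrac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})$; this identity is standard, though the text itself only tabulates $Q(z)$. A minimal sketch:

```python
import math

def Q(z):
    """Eq. (6.93) via the complementary error function:
    Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def F_X(x, mu=0.0, sigma=1.0):
    """Eq. (6.94): cdf of N(mu; sigma^2) as F_X(x) = 1 - Q((x - mu)/sigma)."""
    return 1.0 - Q((x - mu) / sigma)

print(Q(0.0))   # 0.5
print(Q(1.0))   # ~0.1587, the value tabulated for Q(1)
print(F_X(2.0, mu=1.0, sigma=2.0))
```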

Probability

6.1 Using the axioms of probability, prove Eq. (6.4).

$$S = A \cup \bar{A} \quad \text{and} \quad A \cap \bar{A} = \emptyset$$
Then the use of axioms 2 and 3 yields
$$P(S) = 1 = P(A) + P(\bar{A})$$
Thus
$$P(\bar{A}) = 1 - P(A)$$

6.2 Verify Eq. (6.5).

$$A = A \cup \emptyset \quad \text{and} \quad A \cap \emptyset = \emptyset$$
Therefore, by axiom 3,
$$P(A) = P(A \cup \emptyset) = P(A) + P(\emptyset)$$
and we conclude that
$$P(\emptyset) = 0$$

6.3 Verify Eq. (6.6).

Let $A \subset B$. Then from the Venn diagram shown in Fig. 6.4, we see that
$$B = A \cup (B \cap \bar{A}) \quad \text{and} \quad A \cap (B \cap \bar{A}) = \emptyset$$
Hence, from axiom 3,
$$P(B) = P(A) + P(B \cap \bar{A}) \ge P(A)$$
because by axiom 1, $P(B \cap \bar{A}) \ge 0$.
[Fig. 6.4 Shaded region: $B \cap \bar{A}$]

6.4 Verify Eq. (6.8).

From the Venn diagram of Fig. 6.5, each of the sets $A \cup B$ and $B$ can be expressed, respectively, as
a union of mutually exclusive sets as follows:
$$A \cup B = A \cup (\bar{A} \cap B) \quad \text{and} \quad B = (A \cap B) \cup (\bar{A} \cap B)$$
Thus, by axiom 3,
$$P(A \cup B) = P(A) + P(\bar{A} \cap B) \qquad (6.96)$$
and
$$P(B) = P(A \cap B) + P(\bar{A} \cap B) \qquad (6.97)$$
From Eq. (6.97) we have
$$P(\bar{A} \cap B) = P(B) - P(A \cap B) \qquad (6.98)$$
Substituting Eq. (6.98) into Eq. (6.96), we obtain
$$P(A \cup B) = P(A) + P(B) - P(A \cap B)$$
[Fig. 6.5 Shaded regions: $\bar{A} \cap B$ and $A \cap B$]

6.5 Let $P(A) = 0.9$ and $P(B) = 0.8$. Show that $P(A \cap B) \ge 0.7$.

From Eq. (6.8) we have
$$P(A \cap B) = P(A) + P(B) - P(A \cup B)$$
By Eq. (6.9), $0 \le P(A \cup B) \le 1$. Hence,
$$P(A \cap B) \ge P(A) + P(B) - 1 \qquad (6.99)$$
Substituting the given values of $P(A)$ and $P(B)$ in Eq. (6.99), we get
$$P(A \cap B) \ge 0.9 + 0.8 - 1 = 0.7$$
Equation (6.99) is known as Bonferroni's inequality.

6.6 Show that
$$P(A) = P(A \cap B) + P(A \cap \bar{B}) \qquad (6.100)$$

From the Venn diagram of Fig. 6.6, we see that
$$A = (A \cap B) \cup (A \cap \bar{B}) \quad \text{and} \quad (A \cap B) \cap (A \cap \bar{B}) = \emptyset \qquad (6.101)$$
Thus, by axiom 3 we have
$$P(A) = P(A \cap B) + P(A \cap \bar{B})$$
[Fig. 6.6 Regions: $A \cap B$ and $A \cap \bar{B}$]
6.7 Consider a telegraph source generating two symbols: dot and dash. We observed that the dots
were twice as likely to occur as the dashes. Find the probabilities of the dot's occurring and the
dash's occurring.

From the observation, we have
$$P(\text{dot}) = 2P(\text{dash})$$
Then, by Eq. (6.12),
$$P(\text{dot}) + P(\text{dash}) = 3P(\text{dash}) = 1$$
Thus, $P(\text{dash}) = 1/3$ and $P(\text{dot}) = 2/3$.

6.8 Show that $P(A|B)$ defined by Eq. (6.16) satisfies the three axioms of probability, that is,
(a) $P(A|B) \ge 0$; (b) $P(S|B) = 1$; and (c) $P(A \cup C|B) = P(A|B) + P(C|B)$ if $A \cap C = \emptyset$.

(a) By axiom 1, $P(A \cap B) \ge 0$. Thus,
$$P(A|B) \ge 0$$
(b) Since $S \cap B = B$, we have
$$P(S|B) = \frac{P(S \cap B)}{P(B)} = \frac{P(B)}{P(B)} = 1$$
(c) Now $(A \cup C) \cap B = (A \cap B) \cup (C \cap B)$. If $A \cap C = \emptyset$, then (Fig. 6.7)
$$(A \cap B) \cap (C \cap B) = \emptyset$$
Hence, by axiom 3,
$$P(A \cup C|B) = \frac{P[(A \cup C) \cap B]}{P(B)} = \frac{P(A \cap B) + P(C \cap B)}{P(B)} = P(A|B) + P(C|B)$$

6.9 Find $P(A|B)$ if (a) $A \cap B = \emptyset$; (b) $A \subset B$; and (c) $B \subset A$.

(a) If $A \cap B = \emptyset$, then $P(A \cap B) = P(\emptyset) = 0$. Thus,
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(\emptyset)}{P(B)} = 0$$
(b) If $A \subset B$, then $A \cap B = A$ and
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(A)}{P(B)} \ge P(A)$$
(c) If $B \subset A$, then $A \cap B = B$ and
$$P(A|B) = \frac{P(A \cap B)}{P(B)} = \frac{P(B)}{P(B)} = 1$$

6.10 Show that if $P(A|B) > P(A)$, then $P(B|A) > P(B)$.

If $P(A|B) = \dfrac{P(A \cap B)}{P(B)} > P(A)$, then $P(A \cap B) > P(A)P(B)$. Thus,
$$P(B|A) = \frac{P(A \cap B)}{P(A)} > \frac{P(A)P(B)}{P(A)} = P(B)$$

,,].,
6-I[,Let,r' fi , a, tr ppace ,S. Show tlat if/ aiA g are independent, then
sCI are "
(a) l and .B and (b) 7. erCA.

(a) From Eq. (6.100) (Solved Problem 6.6), we have

P('4): P(A n B) + ePa a B1


Since A and B are independent, using Eqs (6.21) and (6.4), we obtain
P(,4. E) : P(A) - P(A n B) - P(A) - P(A)P(B) 6.142)

- P(A)U - P(B)) - P(A)P(E)


Thus, by Eq. (6.21), A and E are independent.
(b) Interchanging A and^B in Eq. (6.102), we obtain

P(B.z)-P(DPG)
which indicates that Z and B are independent.

6.12 Let $A$ and $B$ be events defined in a sample space $S$. Show that if both $P(A)$ and $P(B)$ are nonzero,
then events $A$ and $B$ cannot be both mutually exclusive and independent.

Let $A$ and $B$ be mutually exclusive events with $P(A) \ne 0$ and $P(B) \ne 0$. Then $P(A \cap B) = P(\emptyset) = 0$,
but $P(A)P(B) \ne 0$. Therefore,
$$P(A \cap B) \ne P(A)P(B)$$
That is, $A$ and $B$ are not independent.

6.13 Verify Eq. (6.24).

Since $B \cap S = B$ [and using Eq. (6.23)], we have
$$B = B \cap S = B \cap (A_1 \cup A_2 \cup \cdots \cup A_N) = (B \cap A_1) \cup (B \cap A_2) \cup \cdots \cup (B \cap A_N)$$
Now the events $B \cap A_k$ ($k = 1, 2, \ldots, N$) are mutually exclusive, as seen from the Venn
diagram of Fig. 6.8. Then by axiom 3 of the probability definition and Eq. (6.18), we obtain
$$P(B) = P(B \cap S) = \sum_{k=1}^{N} P(B \cap A_k) = \sum_{k=1}^{N} P(B|A_k)P(A_k)$$
[Fig. 6.8 Partition of B by the events $A_k$]
6.14 Consider the binary communication channel shown in Fig. 6.9. The source sends one of two
messages, $m_0$ or $m_1$, with equal probability, and the receiver observes one of two symbols, $r_0$ or $r_1$.
Because of channel noise, the crossover probabilities are $P(r_1|m_0) = p = 0.1$ and $P(r_0|m_1) = q = 0.2$.
(a) Find $P(r_0)$ and $P(r_1)$, the probabilities that $r_0$ and $r_1$ are received.
(b) If $r_0$ was received, find the probability that $m_0$ was transmitted.
[Fig. 6.9 The binary communication channel]

(a) From Fig. 6.9, we have
$$P(m_1) = 1 - P(m_0) = 1 - 0.5 = 0.5$$
$$P(r_0|m_0) = 1 - P(r_1|m_0) = 1 - p = 1 - 0.1 = 0.9$$
$$P(r_1|m_1) = 1 - P(r_0|m_1) = 1 - q = 1 - 0.2 = 0.8$$
Using Eq. (6.24), we obtain
$$P(r_0) = P(r_0|m_0)P(m_0) + P(r_0|m_1)P(m_1) = 0.9(0.5) + 0.2(0.5) = 0.55$$
$$P(r_1) = P(r_1|m_0)P(m_0) + P(r_1|m_1)P(m_1) = 0.1(0.5) + 0.8(0.5) = 0.45$$
(b) Using Bayes' rule (6.19), we have
$$P(m_0|r_0) = \frac{P(m_0)P(r_0|m_0)}{P(r_0)} = \frac{(0.5)(0.9)}{0.55} = 0.818$$
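The arithmetic of this problem is easy to confirm numerically; a minimal Python sketch follows.

```python
# Numerical check of Solved Problem 6.14 using Eqs (6.24) and (6.19).
P_m0, P_m1 = 0.5, 0.5
p, q = 0.1, 0.2                       # P(r1|m0), P(r0|m1)
P_r0_m0, P_r1_m1 = 1 - p, 1 - q

P_r0 = P_r0_m0 * P_m0 + q * P_m1      # total probability, Eq. (6.24)
P_r1 = p * P_m0 + P_r1_m1 * P_m1
P_m0_r0 = P_r0_m0 * P_m0 / P_r0       # Bayes' rule, Eq. (6.19)
print(P_r0, P_r1, round(P_m0_r0, 3))  # 0.55, 0.45, 0.818
```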

6.15 For the binary communication channel of Solved Problem 6.14, suppose the channel transition
probabilities are $P(r_1|m_0) = 0.1$ and $P(r_0|m_1) = 0.4$, and the receiver uses the maximum a
posteriori (MAP) decision rule (see Solved Problem 9.5).
(a) Find the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_0$ if $r_0$ is
received.
(b) Find the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_1$ if $r_1$ is
received.
(c) Find the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_0$ no matter
what is received.
(d) Find the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_1$ no matter
what is received.

(a) Since $P(r_0|m_0) = 1 - P(r_1|m_0)$ and $P(r_1|m_1) = 1 - P(r_0|m_1)$, we have
$$P(r_0|m_0) = 0.9 \qquad P(r_1|m_0) = 0.1 \qquad P(r_1|m_1) = 0.6 \qquad P(r_0|m_1) = 0.4$$
By Eq. (6.24), we obtain
$$P(r_0) = P(r_0|m_0)P(m_0) + P(r_0|m_1)P(m_1) = 0.9P(m_0) + 0.4[1 - P(m_0)] = 0.5P(m_0) + 0.4$$
Using Bayes' rule (6.19), we have
$$P(m_0|r_0) = \frac{P(r_0|m_0)P(m_0)}{P(r_0)} = \frac{0.9P(m_0)}{0.5P(m_0) + 0.4}$$
$$P(m_1|r_0) = \frac{P(r_0|m_1)P(m_1)}{P(r_0)} = \frac{0.4[1 - P(m_0)]}{0.5P(m_0) + 0.4} = \frac{0.4 - 0.4P(m_0)}{0.5P(m_0) + 0.4}$$
Now by the MAP decision rule, we decide $m_0$ if $r_0$ is received when $P(m_0|r_0) > P(m_1|r_0)$, that is,
$$\frac{0.9P(m_0)}{0.5P(m_0) + 0.4} > \frac{0.4 - 0.4P(m_0)}{0.5P(m_0) + 0.4}$$
or $0.9P(m_0) > 0.4 - 0.4P(m_0)$, or $1.3P(m_0) > 0.4$, or $P(m_0) > \dfrac{0.4}{1.3} = 0.31$.
Thus, the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_0$ if $r_0$ is received is
$$0.31 < P(m_0) \le 1$$
(b) Similarly, we have
$$P(r_1) = P(r_1|m_0)P(m_0) + P(r_1|m_1)P(m_1) = 0.1P(m_0) + 0.6[1 - P(m_0)] = -0.5P(m_0) + 0.6$$
$$P(m_0|r_1) = \frac{P(r_1|m_0)P(m_0)}{P(r_1)} = \frac{0.1P(m_0)}{-0.5P(m_0) + 0.6}$$
$$P(m_1|r_1) = \frac{P(r_1|m_1)P(m_1)}{P(r_1)} = \frac{0.6[1 - P(m_0)]}{-0.5P(m_0) + 0.6} = \frac{0.6 - 0.6P(m_0)}{-0.5P(m_0) + 0.6}$$
Now by the MAP decision rule, we decide $m_1$ if $r_1$ is received when $P(m_1|r_1) > P(m_0|r_1)$, that is,
$$\frac{0.6 - 0.6P(m_0)}{-0.5P(m_0) + 0.6} > \frac{0.1P(m_0)}{-0.5P(m_0) + 0.6}$$
or $0.6 - 0.6P(m_0) > 0.1P(m_0)$, or $0.6 > 0.7P(m_0)$, or $P(m_0) < \dfrac{0.6}{0.7} = 0.86$.
Thus, the range of $P(m_0)$ for which the MAP criterion prescribes that we decide $m_1$ if $r_1$ is received is
$$0 \le P(m_0) < 0.86$$
(c) From the result of (b) we see that the range of $P(m_0)$ for which we decide $m_0$ if $r_1$ is received is
$P(m_0) \ge 0.86$. Combining this with the result of (a), the range of $P(m_0)$ for which we decide $m_0$ no
matter what is received is given by
$$0.86 \le P(m_0) \le 1$$
(d) Similarly, from the result of (a) we see that the range of $P(m_0)$ for which we decide $m_1$ if $r_0$ is
received is $P(m_0) \le 0.31$. Combining this with the result of (b), the range of $P(m_0)$ for which we
decide $m_1$ no matter what is received is given by
$$0 \le P(m_0) \le 0.31$$

6.16 Consider an experiment consisting of the observation of six successive pulse positions on a com-
munication link. Suppose that at each of the six possible pulse positions there can be a positive
pulse, a negative pulse, or no pulse. Suppose also that the individual experiments that determine
the kind of pulse at each possible position are independent. Let us denote the event that the $i$th
pulse is positive by $\{x_i = +1\}$, that it is negative by $\{x_i = -1\}$, and that it is zero by $\{x_i = 0\}$.
Assume that
$$P(x_i = +1) = p = 0.4 \qquad P(x_i = -1) = q = 0.3 \qquad \text{for } i = 1, 2, \ldots, 6$$

(a) Find the probability that all pulses are positive.
(b) Find the probability that the first three pulses are positive, the next two are zero, and the last
is negative.

(a) Since the individual experiments are independent, by Eq. (6.22) the probability that all pulses
are positive is given by
$$P[(x_1 = +1) \cap (x_2 = +1) \cap \cdots \cap (x_6 = +1)]$$
$$= P(x_1 = +1)P(x_2 = +1) \cdots P(x_6 = +1) = p^6 = (0.4)^6 = 0.0041$$
(b) From the given assumptions, we have
$$P(x_i = 0) = 1 - p - q = 0.3$$
Thus, the probability that the first three pulses are positive, the next two are zero, and the last is
negative is given by
$$P[(x_1 = +1) \cap (x_2 = +1) \cap (x_3 = +1) \cap (x_4 = 0) \cap (x_5 = 0) \cap (x_6 = -1)]$$
$$= P(x_1 = +1)P(x_2 = +1)P(x_3 = +1)P(x_4 = 0)P(x_5 = 0)P(x_6 = -1)$$
$$= p^3(1 - p - q)^2 q = (0.4)^3(0.3)^2(0.3) = 0.0017$$

Random Variables

6.17 A binary source generates digits 1 and 0 randomly with probabilities 0.6 and 0.4, respectively.
(a) What is the probability that two 1s and three 0s will occur in a five-digit sequence?
(b) What is the probability that at least three 1s will occur in a five-digit sequence?

(a) Let $X$ be the random variable denoting the number of 1s generated in a five-digit sequence.
Since there are only two possible outcomes (1 or 0), the probability of generating 1 is
constant, and there are five digits, it is clear that $X$ has a binomial distribution described by
Eq. (6.85) with $n = 5$ and $k = 2$. Hence, the probability that two 1s and three 0s will occur in
a five-digit sequence is
$$P(X = 2) = \binom{5}{2}(0.6)^2(0.4)^3 = 0.23$$
(b) The probability that at least three 1s will occur in a five-digit sequence is
$$P(X \ge 3) = 1 - P(X \le 2)$$
where
$$P(X \le 2) = \sum_{k=0}^{2}\binom{5}{k}(0.6)^k(0.4)^{5-k} = 0.317$$
Hence,
$$P(X \ge 3) = 1 - 0.317 = 0.683$$
,.
6.18 Let $X$ be a binomial r.v. with parameters $(n, p)$. Show that $p_X(k)$ given by Eq. (6.85) satisfies
Eq. (6.33c).

Recall that the binomial expansion formula is given by
$$(a + b)^n = \sum_{k=0}^{n}\binom{n}{k} a^k b^{n-k}$$
Thus, by Eq. (6.85),
$$\sum_{k=0}^{n} p_X(k) = \sum_{k=0}^{n}\binom{n}{k} p^k (1 - p)^{n-k} = (p + 1 - p)^n = 1$$

6.19 Show that when $n$ is very large ($n \gg k$) and $p$ very small ($p \ll 1$), the binomial distribution
[Eq. (6.85)] can be approximated by the following Poisson distribution [Eq. (6.88)]:
$$P(X = k) \approx e^{-np}\frac{(np)^k}{k!} \qquad (6.103)$$

From Eq. (6.85),
$$P(X = k) = \binom{n}{k} p^k q^{n-k} = \frac{n(n - 1)\cdots(n - k + 1)}{k!}\,p^k q^{n-k} \qquad (6.104)$$
When $n \gg k$ and $p \ll 1$, then
$$n(n - 1)\cdots(n - k + 1) \approx n \cdot n \cdots n = n^k$$
$$q = 1 - p \approx e^{-p} \qquad q^{n-k} \approx e^{-(n-k)p} \approx e^{-np}$$
Substituting these relations into Eq. (6.104), we obtain
$$P(X = k) \approx e^{-np}\frac{(np)^k}{k!}$$

6.20 A noisy transmission channel has a per-digit error probability $p = 0.01$.
(a) Calculate the probability of more than one error in 10 received digits.
(b) Repeat (a), using the Poisson approximation, Eq. (6.103).

(a) Let $X$ be a binomial random variable denoting the number of errors in 10 received digits.
Then, using Eq. (6.85), we obtain
$$P(X > 1) = 1 - P(X = 0) - P(X = 1)$$
$$= 1 - \binom{10}{0}(0.01)^0(0.99)^{10} - \binom{10}{1}(0.01)^1(0.99)^9 = 0.0042$$
(b) Using Eq. (6.103) with $np = 10(0.01) = 0.1$, we have
$$P(X > 1) \approx 1 - e^{-0.1}\frac{(0.1)^0}{0!} - e^{-0.1}\frac{(0.1)^1}{1!} = 0.0047$$

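The two answers can be reproduced directly; the sketch below compares the exact binomial tail of part (a) with the Poisson approximation of part (b). (The exact value is approximately 0.00427, which rounds to the 0.0042 quoted above.)

```python
from math import comb, exp, factorial

n, p = 10, 0.01   # parameters from Solved Problem 6.20

exact = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in (0, 1))
lam = n * p       # alpha = np = 0.1
approx = 1 - sum(exp(-lam) * lam**k / factorial(k) for k in (0, 1))
print(round(exact, 4), round(approx, 4))   # ~0.0043 (exact) vs ~0.0047
```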
6.21 Let $X$ be a Poisson r.v. with parameter $\alpha$. Show that $p_X(k)$ given by Eq. (6.88) satisfies
Eq. (6.33c).

By Eq. (6.88),
$$\sum_{k=0}^{\infty} p_X(k) = e^{-\alpha}\sum_{k=0}^{\infty}\frac{\alpha^k}{k!} = e^{-\alpha}e^{\alpha} = 1$$
6.22 Verify Eq. (6.35).

From Eqs (6.6) and (6.28), we have
$$P(X = x) \le P(x - \varepsilon < X \le x) = F_X(x) - F_X(x - \varepsilon)$$
for any $\varepsilon \ge 0$. As $F_X(x)$ is continuous, the right-hand side of the preceding expression approaches
0 as $\varepsilon \to 0$. Thus, $P(X = x) = 0$.

6.23 The pdf of a random variable $X$ is given by
$$f_X(x) = \begin{cases} k & a < x < b \\ 0 & \text{otherwise} \end{cases}$$
where $k$ is a constant.
(a) Determine the value of $k$.
(b) Let $a = -1$ and $b = 2$. Calculate $P(|X| \le c)$ for $c = \dfrac{1}{2}$.

(a) From property 1 of $f_X(x)$ [Eq. (6.37a)], $k$ must be a positive constant. From property 2 of $f_X(x)$
[Eq. (6.37b)],
$$\int_{-\infty}^{\infty} f_X(x)\,dx = \int_a^b k\,dx = k(b - a) = 1$$
from which we obtain $k = 1/(b - a)$. Thus,
$$f_X(x) = \begin{cases} \dfrac{1}{b - a} & a < x < b \\ 0 & \text{otherwise} \end{cases} \qquad (6.105)$$

A random variable $X$ having the preceding pdf is called a uniform random variable.

(b) With $a = -1$ and $b = 2$ we have
$$f_X(x) = \begin{cases} \dfrac{1}{3} & -1 < x < 2 \\ 0 & \text{otherwise} \end{cases}$$
From Eq. (6.37c),
$$P\!\left(|X| \le \frac{1}{2}\right) = P\!\left(-\frac{1}{2} \le X \le \frac{1}{2}\right) = \int_{-1/2}^{1/2} f_X(x)\,dx = \int_{-1/2}^{1/2}\frac{1}{3}\,dx = \frac{1}{3}$$

6.24 The pdf of $X$ is given by
$$f_X(x) = k e^{-ax}u(x)$$
where $a$ is a positive constant. Determine the value of the constant $k$.

From property 1 of $f_X(x)$ [Eq. (6.37a)], we must have $k \ge 0$. From property 2 of $f_X(x)$
[Eq. (6.37b)],
$$\int_{-\infty}^{\infty} f_X(x)\,dx = k\int_0^{\infty} e^{-ax}\,dx = \frac{k}{a} = 1$$
from which we obtain $k = a$. Thus,
$$f_X(x) = a e^{-ax}u(x) \qquad a > 0 \qquad (6.106)$$
A random variable $X$ with the pdf given by Eq. (6.106) is called an exponential random variable
with parameter $a$.
,]
6.25 All manufactured devices and machines fail to work sooner or later. If the failure rate is
constant, the time to failure $T$ is modeled as an exponential random variable. Suppose that a
particular class of computer memory chips has been found to have the exponential failure law of
Eq. (6.106) in hours.
(a) Measurements show that the probability that the time to failure exceeds $10^4$ hours (h) for
chips in the given class is $e^{-1}$ ($\approx 0.368$). Calculate the value of parameter $a$ for this case.
(b) Using the value of parameter $a$ determined in part (a), calculate the time $t_0$ such that the
probability is 0.05 that the time to failure is less than $t_0$.

(a) Using Eqs (6.38) and (6.106), we see that the distribution function of $T$ is given by
$$F_T(t) = \int_{-\infty}^{t} f_T(\tau)\,d\tau = (1 - e^{-at})u(t)$$
Now
$$P(T > 10^4) = 1 - P(T \le 10^4) = 1 - F_T(10^4) = 1 - [1 - e^{-a(10^4)}] = e^{-a(10^4)} = e^{-1}$$
from which we obtain $a = 10^{-4}$.
(b) We want
$$F_T(t_0) = P(T \le t_0) = 0.05$$
Hence,
$$1 - e^{-at_0} = 1 - e^{-(10^{-4})t_0} = 0.05 \quad \text{or} \quad e^{-(10^{-4})t_0} = 0.95$$
from which we obtain $t_0 = -10^4\ln 0.95 = 513\ \text{h}$.

6.26 The joint pdf of $X$ and $Y$ is given by
$$f_{XY}(x, y) = k e^{-(ax + by)}u(x)u(y)$$
where $a$ and $b$ are positive constants. Determine the value of constant $k$.

The value of $k$ is determined by Eq. (6.49b), that is,
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(\xi, \eta)\,d\xi\,d\eta = k\int_0^{\infty} e^{-a\xi}\,d\xi\int_0^{\infty} e^{-b\eta}\,d\eta = \frac{k}{ab} = 1$$
Hence, $k = ab$.
6.27 The joint pdf of $X$ and $Y$ is given by
$$f_{XY}(x, y) = xy\,e^{-(x^2 + y^2)/2}\,u(x)u(y)$$
(a) Find the marginal pdf's $f_X(x)$ and $f_Y(y)$.
(b) Are $X$ and $Y$ independent?

(a) By Eqs (6.50a) and (6.50b), we have
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy = x e^{-x^2/2}u(x)\int_0^{\infty} y e^{-y^2/2}\,dy = x e^{-x^2/2}u(x)$$
Since $f_{XY}(x, y)$ is symmetric with respect to $x$ and $y$, interchanging $x$ and $y$, we obtain
$$f_Y(y) = y e^{-y^2/2}u(y)$$
(b) Since $f_{XY}(x, y) = f_X(x)f_Y(y)$, we conclude that $X$ and $Y$ are independent.
6.28 Random variables $X$ and $Y$ are said to be jointly normal random variables if their joint pdf is
given by
$$f_{XY}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}}\exp\left\{-\frac{1}{2(1 - \rho^2)}\left[\left(\frac{x - \mu_X}{\sigma_X}\right)^2 - 2\rho\left(\frac{x - \mu_X}{\sigma_X}\right)\left(\frac{y - \mu_Y}{\sigma_Y}\right) + \left(\frac{y - \mu_Y}{\sigma_Y}\right)^2\right]\right\} \qquad (6.107)$$
(a) Find the marginal pdf's of $X$ and $Y$.
(b) Show that $X$ and $Y$ are independent when $\rho = 0$.

(a) By Eq. (6.50a) the marginal pdf of $X$ is
$$f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy$$
By completing the square in the exponent of Eq. (6.107), we obtain
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left[-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right]\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\,\sigma_Y\sqrt{1 - \rho^2}}\exp\left\{-\frac{\left[y - \mu_Y - \rho(\sigma_Y/\sigma_X)(x - \mu_X)\right]^2}{2\sigma_Y^2(1 - \rho^2)}\right\}dy$$
Comparing the integrand with Eq. (6.91), we see that the integrand is a normal pdf with mean
$$\mu_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - \mu_X)$$
and variance
$$\sigma_Y^2(1 - \rho^2)$$
Thus, the integral must be unity, and we obtain
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left[-\frac{(x - \mu_X)^2}{2\sigma_X^2}\right] \qquad (6.108)$$
In a similar manner, the marginal pdf of $Y$ is
$$f_Y(y) = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\left[-\frac{(y - \mu_Y)^2}{2\sigma_Y^2}\right] \qquad (6.109)$$
(b) When $\rho = 0$, Eq. (6.107) reduces to
$$f_{XY}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y}\exp\left\{-\left[\frac{(x - \mu_X)^2}{2\sigma_X^2} + \frac{(y - \mu_Y)^2}{2\sigma_Y^2}\right]\right\} = f_X(x)f_Y(y)$$
Hence, $X$ and $Y$ are independent.

Functions of Random Variables

6.29 If $X = N(\mu; \sigma^2)$, then show that $Z = (X - \mu)/\sigma$ is a standard normal r.v., that is, $N(0; 1)$.

The cdf of $Z$ is
$$F_Z(z) = P(Z \le z) = P\!\left(\frac{X - \mu}{\sigma} \le z\right) = P(X \le z\sigma + \mu) = \int_{-\infty}^{z\sigma + \mu}\frac{1}{\sqrt{2\pi}\,\sigma}e^{-(x - \mu)^2/(2\sigma^2)}\,dx$$
By the change of variable $y = (x - \mu)/\sigma$ (that is, $x = \sigma y + \mu$), we obtain
$$F_Z(z) = P(Z \le z) = \int_{-\infty}^{z}\frac{1}{\sqrt{2\pi}}e^{-y^2/2}\,dy$$
and
$$f_Z(z) = \frac{dF_Z(z)}{dz} = \frac{1}{\sqrt{2\pi}}e^{-z^2/2}$$
which indicates that $Z = N(0; 1)$.

6.30 Verify Eq. (6.57).

Assume that $y = g(x)$ is a continuous monotonically increasing function [Fig. 6.10(a)]. Then it has
an inverse that we denote by $x = g^{-1}(y) = h(y)$. Then
$$F_Y(y) = P(Y \le y) = P[X \le h(y)] = F_X[h(y)] \qquad (6.110)$$
and
$$f_Y(y) = \frac{d}{dy}F_Y(y) = \frac{d}{dy}\{F_X[h(y)]\}$$
Applying the chain rule of differentiation to this expression yields
$$f_Y(y) = f_X[h(y)]\frac{dh(y)}{dy}$$
which can be written as
$$f_Y(y) = f_X(x)\frac{dx}{dy} \qquad x = h(y) \qquad (6.111)$$
If $y = g(x)$ is monotonically decreasing [Fig. 6.10(b)], then
$$F_Y(y) = P(Y \le y) = P[X \ge h(y)] = 1 - F_X[h(y)] \qquad (6.112)$$
and
$$f_Y(y) = \frac{d}{dy}F_Y(y) = -f_X(x)\frac{dx}{dy} \qquad x = h(y) \qquad (6.113)$$
Combining Eqs (6.111) and (6.113), we obtain
$$f_Y(y) = f_X(x)\left|\frac{dx}{dy}\right| = f_X[h(y)]\left|\frac{dh(y)}{dy}\right|$$
which is valid for any continuous monotonic (increasing or decreasing) function $y = g(x)$.
[Fig. 6.10 (a) Monotonically increasing g(x); (b) monotonically decreasing g(x)]

6.31 Let $Y = 2X + 3$. If a random variable $X$ is uniformly distributed over $[-1, 2]$, find $f_Y(y)$.

From Eq. (6.105) (Solved Problem 6.23), we have
$$f_X(x) = \begin{cases} \dfrac{1}{3} & -1 < x < 2 \\ 0 & \text{otherwise} \end{cases}$$
The equation $y = g(x) = 2x + 3$ has a single solution $x_1 = (y - 3)/2$, the range of $y$ is $(1, 7)$, and
$g'(x) = 2$. Thus, by Eq. (6.58),
$$f_Y(y) = \frac{1}{2}f_X\!\left(\frac{y - 3}{2}\right) = \begin{cases} \dfrac{1}{6} & 1 < y < 7 \\ 0 & \text{otherwise} \end{cases}$$

6.32 Let $Y = aX + b$. Show that if $X = N(\mu; \sigma^2)$, then $Y = N(a\mu + b; a^2\sigma^2)$.

The equation $y = g(x) = ax + b$ has a single solution $x_1 = (y - b)/a$, and $g'(x) = a$. The range of
$y$ is $(-\infty, \infty)$. Hence, by Eq. (6.58),
$$f_Y(y) = \frac{1}{|a|}f_X\!\left(\frac{y - b}{a}\right) \qquad (6.114)$$
Since $X = N(\mu; \sigma^2)$, by Eq. (6.91),
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right] \qquad (6.115)$$
Hence, by Eq. (6.114),
$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,|a|\sigma}\exp\left\{-\frac{1}{2\sigma^2}\left[\frac{y - b}{a} - \mu\right]^2\right\} = \frac{1}{\sqrt{2\pi}\,|a|\sigma}\exp\left[-\frac{(y - b - a\mu)^2}{2a^2\sigma^2}\right] \qquad (6.116)$$
which is the pdf of $N(a\mu + b; a^2\sigma^2)$. Thus, if $X = N(\mu; \sigma^2)$, then $Y = N(a\mu + b; a^2\sigma^2)$.

6.33 Let $Y = X^2$. Find $f_Y(y)$ if $X = N(0; 1)$.

If $y < 0$, then the equation $y = x^2$ has no real solutions; hence, $f_Y(y) = 0$.

If $y > 0$, then $y = x^2$ has two solutions
$$x_1 = \sqrt{y} \qquad x_2 = -\sqrt{y}$$
Now, $y = g(x) = x^2$ and $g'(x) = 2x$. Hence, by Eq. (6.58),
$$f_Y(y) = \frac{1}{2\sqrt{y}}\left[f_X(\sqrt{y}) + f_X(-\sqrt{y})\right]u(y) \qquad (6.117)$$
Since $X = N(0; 1)$, from Eq. (6.91) we have
$$f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} \qquad (6.118)$$
Since $f_X(x)$ is an even function, from Eq. (6.117) we have
$$f_Y(y) = \frac{1}{\sqrt{y}}f_X(\sqrt{y})u(y) = \frac{1}{\sqrt{2\pi y}}e^{-y/2}u(y) \qquad (6.119)$$

6.34 The input to a noisy communication channel is a binary random variable $X$ with $P(X = 0) =
P(X = 1) = \frac{1}{2}$. The output of the channel is $Z = X + Y$, where $Y$ is the additive noise
introduced by the channel. Assuming that $X$ and $Y$ are independent and $Y = N(0; 1)$, find the
density function of $Z$.

Using Eqs (6.24) and (6.26), we have
$$F_Z(z) = P(Z \le z) = P(Z \le z|X = 0)P(X = 0) + P(Z \le z|X = 1)P(X = 1)$$
Since $Z = X + Y$, and $X$ and $Y$ are independent,
$$P(Z \le z|X = 0) = P(X + Y \le z|X = 0) = P(Y \le z) = F_Y(z)$$
Similarly,
$$P(Z \le z|X = 1) = P(X + Y \le z|X = 1) = P(Y \le z - 1) = F_Y(z - 1)$$
Hence,
$$F_Z(z) = \frac{1}{2}F_Y(z) + \frac{1}{2}F_Y(z - 1)$$
Since $Y = N(0; 1)$,
$$f_Y(y) = \frac{1}{\sqrt{2\pi}}e^{-y^2/2}$$
and
$$f_Z(z) = \frac{dF_Z(z)}{dz} = \frac{1}{2}f_Y(z) + \frac{1}{2}f_Y(z - 1) = \frac{1}{2}\left[\frac{1}{\sqrt{2\pi}}e^{-z^2/2} + \frac{1}{\sqrt{2\pi}}e^{-(z-1)^2/2}\right] \qquad (6.120)$$

6.35 Consider the transformation
$$z = ax + by \qquad w = cx + dy \qquad (6.121)$$
Find the joint density function $f_{ZW}(z, w)$ in terms of $f_{XY}(x, y)$.

If $ad - bc \ne 0$, then the system
$$ax + by = z \qquad cx + dy = w$$
has one and only one solution:
$$x = \alpha z + \beta w \qquad y = \gamma z + \eta w$$
where
$$\alpha = \frac{d}{ad - bc} \qquad \beta = \frac{-b}{ad - bc} \qquad \gamma = \frac{-c}{ad - bc} \qquad \eta = \frac{a}{ad - bc}$$
Since [Eq. (6.68)]
$$J(x, y) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$$
Eq. (6.67) yields
$$f_{ZW}(z, w) = \frac{1}{|ad - bc|}f_{XY}(\alpha z + \beta w, \gamma z + \eta w) \qquad (6.122)$$

6.36 Let $Z = X + Y$. Find the pdf of $Z$ if $X$ and $Y$ are independent random variables.

We introduce an auxiliary random variable $W$, defined by
$$W = Y$$
The system $z = x + y$, $w = y$ has a single solution:
$$x = z - w \qquad y = w$$
Since
$$J(x, y) = \begin{vmatrix} 1 & 1 \\ 0 & 1 \end{vmatrix} = 1$$
Eq. (6.67) yields [or, by setting $a = b = d = 1$ and $c = 0$ in Eq. (6.122)]
$$f_{ZW}(z, w) = f_{XY}(z - w, w) \qquad (6.123)$$
Hence, by Eq. (6.50a), we obtain
$$f_Z(z) = \int_{-\infty}^{\infty} f_{ZW}(z, w)\,dw = \int_{-\infty}^{\infty} f_{XY}(z - w, w)\,dw \qquad (6.124)$$
If $X$ and $Y$ are independent, then
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - w)f_Y(w)\,dw \qquad (6.125)$$
which is the convolution of the functions $f_X(x)$ and $f_Y(y)$.
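Eq. (6.125) can be evaluated numerically. The sketch below convolves two exponential pdf's with equal parameter $a$ (an assumption made here because it yields the clean closed form $a^2 z e^{-az}$, cf. Supplementary Problem 6.11) and compares against that closed form.

```python
import math

# Numerical convolution, Eq. (6.125), for two independent exponential
# r.v.'s with the same parameter a; the closed form is a^2 z e^{-az}.
a, dz = 1.0, 0.001

def f(x):                       # common pdf f_X = f_Y, Eq. (6.106)
    return a * math.exp(-a * x) if x >= 0 else 0.0

def f_Z(z):
    """f_Z(z) = integral of f_X(z - w) f_Y(w) dw, evaluated as a sum."""
    n = int(z / dz)
    return sum(f(z - k * dz) * f(k * dz) * dz for k in range(n))

z = 2.0
print(f_Z(z), a * a * z * math.exp(-a * z))   # both ~0.2707
```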
6.37 Suppose that $X$ and $Y$ are independent normalized normal random variables. Find the pdf of
$Z = X + Y$.

The pdf's of $X$ and $Y$ are
$$f_X(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2} \qquad f_Y(y) = \frac{1}{\sqrt{2\pi}}e^{-y^2/2}$$
Then, by Eq. (6.125), we have
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z - w)f_Y(w)\,dw = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-(z - w)^2/2}e^{-w^2/2}\,dw$$
$$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\exp\left[-\left(w^2 - zw + \frac{z^2}{2}\right)\right]dw = \frac{1}{2\pi}e^{-z^2/4}\int_{-\infty}^{\infty}\exp\left[-\left(w - \frac{z}{2}\right)^2\right]dw$$
Let $u = \sqrt{2}\,(w - z/2)$. Then
$$f_Z(z) = \frac{1}{2\pi}e^{-z^2/4}\frac{1}{\sqrt{2}}\int_{-\infty}^{\infty} e^{-u^2/2}\,du$$
Since the last integrand is $\sqrt{2\pi}$ times the pdf of $N(0; 1)$, the integral equals $\sqrt{2\pi}$, and we obtain
$$f_Z(z) = \frac{1}{\sqrt{2\pi}\,\sqrt{2}}e^{-z^2/4} \qquad (6.126)$$
which is the pdf of $N(0; 2)$.
Thus, $Z$ is a normal random variable with zero mean and variance 2.
6.38 Consider the transformation
$$R = \sqrt{X^2 + Y^2} \qquad \Theta = \tan^{-1}\frac{Y}{X} \qquad (6.127)$$
Find $f_{R\Theta}(r, \theta)$ in terms of $f_{XY}(x, y)$.

We assume that $r \ge 0$ and $0 \le \theta < 2\pi$. With this assumption, the system
$$\sqrt{x^2 + y^2} = r \qquad \tan^{-1}\frac{y}{x} = \theta$$
has a single solution:
$$x = r\cos\theta \qquad y = r\sin\theta$$
Since [Eq. (6.68)] the Jacobian of the inverse transformation is
$$\frac{\partial(x, y)}{\partial(r, \theta)} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r$$
we have $|J(x, y)|^{-1} = r$, and Eq. (6.67) yields
$$f_{R\Theta}(r, \theta) = r\,f_{XY}(r\cos\theta, r\sin\theta) \qquad (6.128)$$

6.39 Let $X$ and $Y$ be independent normal r.v.'s, each $N(0; \sigma^2)$, and consider the quantity
$X\cos\omega t + Y\sin\omega t$.
(a) Show that
$$X\cos\omega t + Y\sin\omega t = R\cos(\omega t - \Theta)$$
where
$$R = \sqrt{X^2 + Y^2} \qquad (6.129)$$
$$\Theta = \tan^{-1}\frac{Y}{X} \qquad (6.130)$$
(b) Find $f_{R\Theta}(r, \theta)$ and the marginal pdf's $f_R(r)$ and $f_\Theta(\theta)$, and show that $R$ and $\Theta$ are
independent.

(a) We have
$$X\cos\omega t + Y\sin\omega t = \sqrt{X^2 + Y^2}\,(\cos\Theta\cos\omega t + \sin\Theta\sin\omega t) = R\cos(\omega t - \Theta)$$
where $R = \sqrt{X^2 + Y^2}$ and $\Theta = \tan^{-1}(Y/X)$.
(b) Since $X$ and $Y$ are $N(0; \sigma^2)$ and independent, from Eqs (6.51) and (6.91),
$$f_{XY}(x, y) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-x^2/(2\sigma^2)}\,\frac{1}{\sqrt{2\pi}\,\sigma}e^{-y^2/(2\sigma^2)} = \frac{1}{2\pi\sigma^2}e^{-(x^2 + y^2)/(2\sigma^2)} \qquad (6.131)$$
Thus, using the result of Solved Problem 6.38 [Eq. (6.128)], we have
$$f_{R\Theta}(r, \theta) = r\,f_{XY}(r\cos\theta, r\sin\theta) = \frac{r}{2\pi\sigma^2}e^{-r^2/(2\sigma^2)} \qquad (6.132)$$
Using Eqs (6.50a) and (6.50b), we obtain
$$f_R(r) = \int_0^{2\pi} f_{R\Theta}(r, \theta)\,d\theta = \frac{r}{2\pi\sigma^2}e^{-r^2/(2\sigma^2)}\int_0^{2\pi} d\theta = \frac{r}{\sigma^2}e^{-r^2/(2\sigma^2)} \qquad (6.133)$$
$$f_\Theta(\theta) = \int_0^{\infty} f_{R\Theta}(r, \theta)\,dr = \frac{1}{2\pi\sigma^2}\int_0^{\infty} r e^{-r^2/(2\sigma^2)}\,dr = \frac{1}{2\pi} \qquad (6.134)$$
Since
$$f_{R\Theta}(r, \theta) = f_R(r)f_\Theta(\theta) \qquad (6.135)$$
$R$ and $\Theta$ are independent.
Note that $\Theta$ is a uniform random variable, and $R$ is called a Rayleigh random variable.
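A simulation check of Eqs (6.133) and (6.134), written here as an illustrative sketch of ours: drawing independent $N(0; \sigma^2)$ pairs and converting to polar form should reproduce the Rayleigh probability $P(R \le \sigma) = 1 - e^{-1/2}$ (from integrating Eq. (6.133)) and the uniform-phase probability $P(\Theta \le \pi) = 1/2$.

```python
import math, random

random.seed(0)
sigma, N = 1.0, 100_000

# X and Y independent N(0; sigma^2); R = sqrt(X^2 + Y^2) should follow the
# Rayleigh pdf of Eq. (6.133) and Theta the uniform pdf of Eq. (6.134).
count_r = count_th = 0
for _ in range(N):
    x, y = random.gauss(0, sigma), random.gauss(0, sigma)
    r, th = math.hypot(x, y), math.atan2(y, x) % (2 * math.pi)
    count_r += (r <= sigma)                 # P(R <= sigma) = 1 - e^{-1/2}
    count_th += (th <= math.pi)             # P(Theta <= pi) = 1/2

print(count_r / N, 1 - math.exp(-0.5))      # both ~0.3935
print(count_th / N, 0.5)
```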

Statistical Averages

6.40 A r.v. $X$ takes the values 0 and 1 with probabilities $\alpha$ and $\beta = 1 - \alpha$, respectively. Find the
mean and the variance of $X$.

Using Eqs (6.69) and (6.70), we have
$$\mu_X = E[X] = 0(\alpha) + 1(\beta) = \beta$$
$$E[X^2] = 0^2(\alpha) + 1^2(\beta) = \beta$$
From Eq. (6.77),
$$\sigma_X^2 = E[X^2] - (E[X])^2 = \beta - \beta^2 = \beta(1 - \beta) = \alpha\beta$$

6.41 Binary data are transmitted over a noisy communication channel in blocks of 16 binary digits.
The probability that a received digit is in error due to channel noise is 0.01. Assume that the
errors occurring in various digit positions within a block are independent.
(a) Find the mean (average number of) errors per block.
(b) Find the variance of the number of errors per block.
(c) Find the probability that the number of errors per block is greater than or equal to 4.

(a) Let $X$ be the random variable representing the number of errors per block. Then $X$ has a
binomial distribution with $n = 16$ and $p = 0.01$. By Eq. (6.87) the average number of errors
per block is
$$E(X) = np = (16)(0.01) = 0.16$$
(b) By Eq. (6.87),
$$\sigma_X^2 = np(1 - p) = (16)(0.01)(0.99) = 0.158$$
(c) $$P(X \ge 4) = 1 - P(X \le 3)$$
Using Eq. (6.86), we have
$$P(X \le 3) = \sum_{k=0}^{3}\binom{16}{k}(0.01)^k(0.99)^{16-k} \approx 0.999983$$
Hence,
$$P(X \ge 4) \approx 1 - 0.999983 = 0.000017$$
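These three numbers can be checked by direct computation; the sketch below reproduces the mean, the variance, and the tail probability of part (c) (about $1.7 \times 10^{-5}$, supporting the value given above).

```python
from math import comb

n, p = 16, 0.01   # parameters of Solved Problem 6.41

mean = n * p                                   # Eq. (6.87)
var = n * p * (1 - p)
P_le_3 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(4))
print(mean, round(var, 4))                     # 0.16, 0.1584
print(round(1 - P_le_3, 6))                    # ~1.7e-05 = P(X >= 4)
```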

6.42 Verify Eq. (6.90).

By Eq. (6.69),
$$\mu_X = E[X] = \sum_{k=0}^{\infty} k\,P(X = k) = 0 + \sum_{k=1}^{\infty} k\,e^{-\alpha}\frac{\alpha^k}{k!} = \alpha e^{-\alpha}\sum_{k=1}^{\infty}\frac{\alpha^{k-1}}{(k - 1)!}$$
$$= \alpha e^{-\alpha}\sum_{m=0}^{\infty}\frac{\alpha^m}{m!} = \alpha e^{-\alpha}e^{\alpha} = \alpha$$
Similarly,
$$E[X(X - 1)] = \sum_{k=0}^{\infty} k(k - 1)P(X = k) = \alpha^2 e^{-\alpha}\sum_{k=2}^{\infty}\frac{\alpha^{k-2}}{(k - 2)!} = \alpha^2 e^{-\alpha}\sum_{m=0}^{\infty}\frac{\alpha^m}{m!} = \alpha^2$$
or
$$E[X^2 - X] = E[X^2] - E[X] = E[X^2] - \alpha = \alpha^2$$
Thus,
$$E[X^2] = \alpha^2 + \alpha \qquad (6.136)$$
Then using Eq. (6.77) gives
$$\sigma_X^2 = E[X^2] - (E[X])^2 = (\alpha^2 + \alpha) - \alpha^2 = \alpha$$


6.43 Verify Eq. (6.95).

Substituting Eq. (6.91) into Eq. (6.69), we have
$$\mu_X = E[X] = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} x\,e^{-(x - \mu)^2/(2\sigma^2)}\,dx$$
Changing the variable of integration to $y = (x - \mu)/\sigma$, we have
$$E[X] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}(\sigma y + \mu)e^{-y^2/2}\,dy = \sigma\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} y\,e^{-y^2/2}\,dy + \mu\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-y^2/2}\,dy$$
The first integral is zero, since its integrand is an odd function. The second integral is unity, since
its integrand is the pdf of $N(0; 1)$. Thus,
$$\mu_X = E[X] = \mu$$
From property 2 of $f_X(x)$ [Eq. (6.37b)], we have
$$\int_{-\infty}^{\infty} e^{-(x - \mu)^2/(2\sigma^2)}\,dx = \sqrt{2\pi}\,\sigma \qquad (6.137)$$
Differentiating with respect to $\sigma$, we obtain
$$\int_{-\infty}^{\infty}\frac{(x - \mu)^2}{\sigma^3}e^{-(x - \mu)^2/(2\sigma^2)}\,dx = \sqrt{2\pi}$$
Multiplying both sides by $\sigma^2/\sqrt{2\pi}$, we have
$$\frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty}(x - \mu)^2 e^{-(x - \mu)^2/(2\sigma^2)}\,dx = E[(X - \mu)^2] = \sigma_X^2 = \sigma^2$$

6.44 Let $X = N(0; \sigma^2)$. Show that
$$m_n = E[X^n] = \begin{cases} 0 & n = 2k + 1 \\ 1 \cdot 3 \cdot 5 \cdots (n - 1)\,\sigma^n & n = 2k \end{cases} \qquad (6.138)$$

Since $X = N(0; \sigma^2)$,
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-x^2/(2\sigma^2)}$$
The odd moments $m_{2k+1}$ of $X$ are 0 because $f_X(-x) = f_X(x)$. Differentiating the identity
$$\int_{-\infty}^{\infty} e^{-ax^2}\,dx = \sqrt{\frac{\pi}{a}} \qquad (6.139)$$
$k$ times with respect to $a$, we obtain
$$\int_{-\infty}^{\infty} x^{2k}e^{-ax^2}\,dx = \frac{1 \cdot 3 \cdots (2k - 1)}{2^k}\sqrt{\frac{\pi}{a^{2k+1}}}$$
Setting $a = 1/(2\sigma^2)$, we have
$$m_{2k} = E[X^{2k}] = \frac{1}{\sqrt{2\pi}\,\sigma}\int_{-\infty}^{\infty} x^{2k}e^{-x^2/(2\sigma^2)}\,dx = 1 \cdot 3 \cdots (2k - 1)\,\sigma^{2k}$$
6.45 Verify Eq. (6.72).

Let $f_{XY}(x, y)$ be the joint density function of $X$ and $Y$. Then, using Eq. (6.71), we have
$$E[X + Y] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x + y)f_{XY}(x, y)\,dx\,dy$$
$$= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,f_{XY}(x, y)\,dx\,dy + \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\,f_{XY}(x, y)\,dx\,dy$$
Using Eqs (6.50a) and (6.69), we have
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,f_{XY}(x, y)\,dx\,dy = \int_{-\infty}^{\infty} x\left[\int_{-\infty}^{\infty} f_{XY}(x, y)\,dy\right]dx = \int_{-\infty}^{\infty} x\,f_X(x)\,dx = E[X]$$
In a similar manner, we have
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} y\,f_{XY}(x, y)\,dx\,dy = \int_{-\infty}^{\infty} y\,f_Y(y)\,dy = E[Y]$$
Thus,
$$E[X + Y] = E[X] + E[Y]$$

6.46 If $X$ and $Y$ are independent, then show that
$$E[XY] = E[X]E[Y] \qquad (6.140)$$
and
$$E[g_1(X)g_2(Y)] = E[g_1(X)]E[g_2(Y)] \qquad (6.141)$$

If $X$ and $Y$ are independent, then by Eqs (6.51) and (6.71) we have
$$E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} xy\,f_X(x)f_Y(y)\,dx\,dy = \int_{-\infty}^{\infty} x\,f_X(x)\,dx\int_{-\infty}^{\infty} y\,f_Y(y)\,dy = E[X]E[Y]$$
Similarly,
$$E[g_1(X)g_2(Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g_1(x)g_2(y)f_X(x)f_Y(y)\,dx\,dy$$
$$= \int_{-\infty}^{\infty} g_1(x)f_X(x)\,dx\int_{-\infty}^{\infty} g_2(y)f_Y(y)\,dy = E[g_1(X)]E[g_2(Y)]$$

6.47 Find the covariance of $X$ and $Y$ if (a) they are independent and (b) $Y$ is related to $X$ by
$Y = aX + b$.

(a) If $X$ and $Y$ are independent, then by Eqs (6.81) and (6.140),
$$\sigma_{XY} = E[XY] - E[X]E[Y] = E[X]E[Y] - E[X]E[Y] = 0 \qquad (6.142)$$
(b) We have
$$E[XY] = E[X(aX + b)] = aE[X^2] + bE[X] = aE[X^2] + b\mu_X$$
$$\mu_Y = E[Y] = E[aX + b] = aE[X] + b = a\mu_X + b$$
Thus,
$$\sigma_{XY} = E[XY] - \mu_X\mu_Y = aE[X^2] + b\mu_X - \mu_X(a\mu_X + b) = a\left(E[X^2] - \mu_X^2\right) = a\sigma_X^2 \qquad (6.143)$$
Note that the result of (a) states that if $X$ and $Y$ are independent, then they are uncorrelated. But
the converse is not necessarily true. (See Solved Problem 6.49.)

6.48 Let $Z = aX + bY$, where $a$ and $b$ are arbitrary constants. Show that if $X$ and $Y$ are independent,
then
$$\sigma_Z^2 = a^2\sigma_X^2 + b^2\sigma_Y^2 \qquad (6.144)$$

By Eqs (6.72) and (6.73),
$$\mu_Z = E[Z] = E[aX + bY] = aE[X] + bE[Y] = a\mu_X + b\mu_Y$$
By Eq. (6.75),
$$\sigma_Z^2 = E[(Z - \mu_Z)^2] = E\{[(aX + bY) - (a\mu_X + b\mu_Y)]^2\} = E\{[a(X - \mu_X) + b(Y - \mu_Y)]^2\}$$
$$= a^2E[(X - \mu_X)^2] + 2abE[(X - \mu_X)(Y - \mu_Y)] + b^2E[(Y - \mu_Y)^2]$$
$$= a^2\sigma_X^2 + 2abE[(X - \mu_X)(Y - \mu_Y)] + b^2\sigma_Y^2 \qquad (6.145)$$
Since $X$ and $Y$ are independent, by Eq. (6.141),
$$E[(X - \mu_X)(Y - \mu_Y)] = E[X - \mu_X]E[Y - \mu_Y] = 0$$
Hence,
$$\sigma_Z^2 = a^2\sigma_X^2 + b^2\sigma_Y^2$$


6.49 Let $X$ and $Y$ be defined by
$$X = \cos\Theta \qquad Y = \sin\Theta$$
where $\Theta$ is a random variable uniformly distributed over $[0, 2\pi]$.

(a) Show that $X$ and $Y$ are uncorrelated.
(b) Show that $X$ and $Y$ are not independent.

(a) From Eq. (6.105),
$$f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi} & 0 < \theta < 2\pi \\ 0 & \text{otherwise} \end{cases}$$
Using Eqs (6.69) and (6.70), we have
$$E[X] = \int_{-\infty}^{\infty}(\cos\theta)f_\Theta(\theta)\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}\cos\theta\,d\theta = 0$$
Similarly,
$$E[Y] = \frac{1}{2\pi}\int_0^{2\pi}\sin\theta\,d\theta = 0$$
and
$$E[XY] = \frac{1}{2\pi}\int_0^{2\pi}\cos\theta\sin\theta\,d\theta = \frac{1}{4\pi}\int_0^{2\pi}\sin 2\theta\,d\theta = 0 = E[X]E[Y]$$
Thus, from Eq. (6.82), $X$ and $Y$ are uncorrelated.

(b) We have
$$E[X^2] = \frac{1}{2\pi}\int_0^{2\pi}\cos^2\theta\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}\frac{1}{2}(1 + \cos 2\theta)\,d\theta = \frac{1}{2}$$
$$E[Y^2] = \frac{1}{2\pi}\int_0^{2\pi}\sin^2\theta\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}\frac{1}{2}(1 - \cos 2\theta)\,d\theta = \frac{1}{2}$$
$$E[X^2Y^2] = \frac{1}{2\pi}\int_0^{2\pi}\cos^2\theta\sin^2\theta\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}\frac{1}{8}(1 - \cos 4\theta)\,d\theta = \frac{1}{8}$$
Hence,
$$E[X^2Y^2] = \frac{1}{8} \ne \frac{1}{4} = E[X^2]E[Y^2]$$
If $X$ and $Y$ were independent, then by Eq. (6.141) we would have $E[X^2Y^2] = E[X^2]E[Y^2]$. There-
fore, $X$ and $Y$ are not independent.
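A Monte Carlo sketch of this result (an illustration of ours, not part of the text): sampling $\Theta$ uniformly shows $E[XY] \approx 0$ (uncorrelated) while $E[X^2Y^2] \approx 1/8 \ne 1/4 = E[X^2]E[Y^2]$ (not independent).

```python
import math, random

random.seed(0)
N = 200_000
samples = [(math.cos(t), math.sin(t))
           for t in (random.uniform(0, 2 * math.pi) for _ in range(N))]

exy   = sum(x * y for x, y in samples) / N          # ~0: uncorrelated
ex2y2 = sum(x*x * y*y for x, y in samples) / N      # ~1/8
ex2   = sum(x*x for x, _ in samples) / N            # ~1/2
print(exy, ex2y2, ex2 * ex2)   # E[X^2 Y^2] = 1/8 != 1/4 = E[X^2]E[Y^2]
```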

6.50 If $f_X(x) = 0$ for $x < 0$, then show that, for any $a > 0$,
$$P(X \ge a) \le \frac{\mu_X}{a} \qquad (6.146)$$
where $\mu_X = E[X]$. This is known as the Markov inequality.

From Eq. (6.37c),
$$P(X \ge a) = \int_a^{\infty} f_X(x)\,dx$$
Since $f_X(x) = 0$ for $x < 0$,
$$\mu_X = E[X] = \int_0^{\infty} x\,f_X(x)\,dx \ge \int_a^{\infty} x\,f_X(x)\,dx \ge a\int_a^{\infty} f_X(x)\,dx$$
Hence,
$$\int_a^{\infty} f_X(x)\,dx = P(X \ge a) \le \frac{\mu_X}{a}$$
6.51 For any $\varepsilon > 0$, show that
$$P(|X - \mu_X| \ge \varepsilon) \le \frac{\sigma_X^2}{\varepsilon^2} \qquad (6.147)$$
where $\mu_X = E[X]$ and $\sigma_X^2$ is the variance of $X$. This is known as the Chebyshev inequality.

From Eq. (6.37c),
$$P(|X - \mu_X| \ge \varepsilon) = \int_{-\infty}^{\mu_X - \varepsilon} f_X(x)\,dx + \int_{\mu_X + \varepsilon}^{\infty} f_X(x)\,dx = \int_{|x - \mu_X| \ge \varepsilon} f_X(x)\,dx$$
By Eq. (6.75),
$$\sigma_X^2 = \int_{-\infty}^{\infty}(x - \mu_X)^2 f_X(x)\,dx \ge \int_{|x - \mu_X| \ge \varepsilon}(x - \mu_X)^2 f_X(x)\,dx \ge \varepsilon^2\int_{|x - \mu_X| \ge \varepsilon} f_X(x)\,dx$$
Hence,
$$\int_{|x - \mu_X| \ge \varepsilon} f_X(x)\,dx \le \frac{\sigma_X^2}{\varepsilon^2} \quad \text{or} \quad P(|X - \mu_X| \ge \varepsilon) \le \frac{\sigma_X^2}{\varepsilon^2}$$
6.52 Let $X$ and $Y$ be real random variables with finite second moments. Show that
$$(E[XY])^2 \le E[X^2]E[Y^2] \qquad (6.148)$$
This is known as the Cauchy-Schwarz inequality.

Because the mean-square value of a random variable can never be negative,
$$E[(X - aY)^2] \ge 0$$
for any value of $a$. Expanding this, we obtain
$$E[X^2] - 2aE[XY] + a^2E[Y^2] \ge 0$$
Choose the value of $a$ for which the left-hand side of this inequality is minimum:
$$a = \frac{E[XY]}{E[Y^2]}$$
which results in the inequality
$$E[X^2] - \frac{(E[XY])^2}{E[Y^2]} \ge 0 \quad \text{or} \quad (E[XY])^2 \le E[X^2]E[Y^2]$$

6.53 Verify Eq. (6.84).

From the Cauchy-Schwarz inequality, Eq. (6.148), we have
$$\{E[(X - \mu_X)(Y - \mu_Y)]\}^2 \le E[(X - \mu_X)^2]E[(Y - \mu_Y)^2]$$
or
$$\sigma_{XY}^2 \le \sigma_X^2\sigma_Y^2$$
Then
$$\rho_{XY}^2 = \frac{\sigma_{XY}^2}{\sigma_X^2\sigma_Y^2} \le 1$$
from which it follows that $|\rho_{XY}| \le 1$.

Supplementary Problems

6.1 For any three events $A$, $B$, and $C$, show that
$$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(B \cap C) - P(C \cap A) + P(A \cap B \cap C)$$
[Hint: Write $A \cup B \cup C = A \cup (B \cup C)$ and then apply Eq. (6.8).]

6.2 Given that $P(A) = 0.9$, $P(B) = 0.8$, $P(A \cap B) = 0.75$, find (a) $P(A \cup B)$; (b) $P(A \cap \bar{B})$; and (c) $P(\bar{A} \cap \bar{B})$.
[Ans. (a) 0.95; (b) 0.15; (c) 0.05]

6.3 Show that if events $A$ and $B$ are independent, then $P(\bar{A} \cap \bar{B}) = P(\bar{A})P(\bar{B})$.
[Hint: Use Eq. (6.102) and the relation $\bar{A} = (\bar{A} \cap B) \cup (\bar{A} \cap \bar{B})$.]

6.4 Let $A$ and $B$ be events defined in a sample space $S$. Show that if both $P(A)$ and $P(B)$ are nonzero, then the events $A$ and $B$ cannot be both mutually exclusive and independent.
[Hint: Show that condition (6.21) will not hold.]

6.5 A certain computer becomes inoperable if two components $A$ and $B$ both fail. The probability that $A$ fails is 0.01, and the probability that $B$ fails is 0.005. However, the probability that $B$ fails increases by a factor of 3 if $A$ has failed.
(a) Calculate the probability that the computer becomes inoperable.
(b) Find the probability that $A$ will fail if $B$ has failed.
[Ans. (a) 0.00015; (b) 0.03]

6.6 A certain binary PCM system transmits the two binary states $X = +1$, $X = -1$ with equal probability. However, because of channel noise, the receiver makes recognition errors. Also, as a result of path distortion, the receiver may lose necessary signal strength to make any decision. Thus, there are three possible receiver states: $Y = +1$, $Y = 0$, and $Y = -1$, where $Y = 0$ corresponds to "loss of signal". Assume that $P(Y = -1|X = +1) = 0.1$, $P(Y = +1|X = -1) = 0.2$, and $P(Y = 0|X = +1) = P(Y = 0|X = -1) = 0.05$.
(a) Find the probabilities $P(Y = +1)$, $P(Y = -1)$, and $P(Y = 0)$.
(b) Find the probabilities $P(X = +1|Y = +1)$ and $P(X = -1|Y = -1)$.
[Ans. (a) $P(Y = +1) = 0.525$, $P(Y = -1) = 0.425$, $P(Y = 0) = 0.05$;
(b) $P(X = +1|Y = +1) = 0.81$, $P(X = -1|Y = -1) = 0.88$]

6.7 Suppose 10000 digits are transmitted over a noisy channel having per-digit error probability $p = 5 \times 10^{-5}$. Find the probability that there will be no more than two-digit errors.
[Ans. 0.9856]

6.8 Show that Eq. (6.91) does in fact define a true probability density; in particular, show that
$$\int_{-\infty}^{\infty} f_X(x)\,dx = 1$$
[Hint: Make a change of variable $y = (x - \mu)/\sigma$ and show that $I = \int_{-\infty}^{\infty} e^{-y^2/2}\,dy = \sqrt{2\pi}$, which can be proved by evaluating $I^2$ by using polar coordinates.]

6.9 A noisy resistor produces a voltage $V_n(t)$. At $t = t_1$, the noise level $X = V_n(t_1)$ is known to be a gaussian random variable with density
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}e^{-x^2/(2\sigma^2)}$$
Compute the probability that $|X| > k\sigma$ for $k = 1, 2, 3$.
[Ans. $P(|X| > \sigma) = 0.3173$, $P(|X| > 2\sigma) = 0.0455$, $P(|X| > 3\sigma) = 0.0027$]

6.10 Consider the transformation $Y = 1/X$.
(a) Find $f_Y(y)$ in terms of $f_X(x)$.
(b) If $f_X(x) = \dfrac{a/\pi}{a^2 + x^2}$, find $f_Y(y)$.
[Ans. (a) $f_Y(y) = \dfrac{1}{y^2}f_X\!\left(\dfrac{1}{y}\right)$; (b) $f_Y(y) = \dfrac{(1/a)/\pi}{(1/a)^2 + y^2}$.
Note that $X$ and $Y$ are known as Cauchy random variables.]

6.11 Let $X$ and $Y$ be two independent random variables with
$$f_X(x) = \alpha e^{-\alpha x}u(x) \qquad f_Y(y) = \beta e^{-\beta y}u(y)$$
Find the density function of $Z = X + Y$.
[Ans. $f_Z(z) = \dfrac{\alpha\beta}{\beta - \alpha}\left(e^{-\alpha z} - e^{-\beta z}\right)u(z)$ for $\alpha \ne \beta$; $f_Z(z) = \alpha^2 z e^{-\alpha z}u(z)$ for $\alpha = \beta$]

6.12 Let $X$ be a random variable uniformly distributed over $[a, b]$. Find the mean and the variance of $X$.
[Ans. $\mu_X = \dfrac{b + a}{2}$, $\sigma_X^2 = \dfrac{(b - a)^2}{12}$]

6.13 Let $(X, Y)$ be a bivariate r.v. If $X$ and $Y$ are independent, show that $X$ and $Y$ are uncorrelated.
[Hint: Use Eqs (6.78) and (6.51).]

6.14 Let $(X, Y)$ be a bivariate r.v. with the joint pdf
$$f_{XY}(x, y) = \frac{x^2 + y^2}{4\pi}e^{-(x^2 + y^2)/2} \qquad -\infty < x, y < \infty$$
Show that $X$ and $Y$ are not independent but are uncorrelated.
[Hint: Use Eqs (6.50) and (6.69).]

6.15 Given the random variable $X$ with mean $\mu_X$ and variance $\sigma_X^2$, find the linear transformation $Y = aX + b$ such that $\mu_Y = 0$ and $\sigma_Y^2 = 1$.
[Ans. $a = \dfrac{1}{\sigma_X}$, $b = -\dfrac{\mu_X}{\sigma_X}$]

6.16 Define random variables $Z$ and $W$ by
$$Z = X + aY \qquad W = X - aY$$
where $a$ is a real number. Determine $a$ such that $Z$ and $W$ are orthogonal.
[Ans. $a = \pm\left(E[X^2]/E[Y^2]\right)^{1/2}$]

6.17 The moment generating function of $X$ is defined by
$$M_X(\lambda) = E[e^{\lambda X}] = \int_{-\infty}^{\infty} f_X(x)e^{\lambda x}\,dx$$
where $\lambda$ is a real variable. Then
$$m_k = E[X^k] = M_X^{(k)}(0) \qquad k = 1, 2, \ldots$$
where $M_X^{(k)}(0) = \dfrac{d^k}{d\lambda^k}M_X(\lambda)\Big|_{\lambda = 0}$.
(a) Find the moment generating function of $X$ uniformly distributed over $(a, b)$.
(b) Using the result of (a), find $E[X]$, $E[X^2]$, and $E[X^3]$.
[Ans. (a) $M_X(\lambda) = \dfrac{e^{\lambda b} - e^{\lambda a}}{\lambda(b - a)}$;
(b) $E[X] = \dfrac{1}{2}(a + b)$, $E[X^2] = \dfrac{1}{3}(a^2 + ab + b^2)$, $E[X^3] = \dfrac{1}{4}(a^3 + a^2b + ab^2 + b^3)$]

6.18 Show that if $X$ and $Y$ are zero-mean jointly normal random variables, then
$$E[X^2Y^2] = E[X^2]E[Y^2] + 2(E[XY])^2$$
[Hint: Use the moment generating function of $X$ and $Y$ given by
$$M_{XY}(\lambda_1, \lambda_2) = E[e^{\lambda_1 X + \lambda_2 Y}] = \exp\left[\frac{1}{2}\left(\lambda_1^2\sigma_X^2 + 2\lambda_1\lambda_2\sigma_{XY} + \lambda_2^2\sigma_Y^2\right)\right]$$]