The variance of a random variable $X$ is defined as
\[
V[X] = E[(X - m_X)^2] = E[X^2] - m_X^2 =
\begin{cases}
\sum_k (k - m_X)^2\, p_X(k), & \text{in the discrete case}, \\[4pt]
\int_{-\infty}^{\infty} (x - m_X)^2 f_X(x)\, dx, & \text{in the continuous case}.
\end{cases}
\]
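As a quick numerical check (a minimal sketch; the discrete distribution below is an arbitrary example, not taken from the text), the two expressions for the variance agree:
\begin{verbatim}
import numpy as np

# Hypothetical discrete distribution: values k with probabilities p_X(k)
k = np.array([0, 1, 2, 3])
p = np.array([0.1, 0.4, 0.3, 0.2])

m = np.sum(k * p)                       # mean m_X = E[X]
var_centered = np.sum((k - m)**2 * p)   # E[(X - m_X)^2]
var_moments = np.sum(k**2 * p) - m**2   # E[X^2] - m_X^2

print(var_centered, var_moments)        # both give the same value
\end{verbatim}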
\[
\boldsymbol{\mu} = (m_1, \ldots, m_n)', \qquad
\boldsymbol{\Sigma} = (\sigma_{jk}) = \text{the covariance matrix for } X_1, \ldots, X_n.
\]
With $Q(x_1, x_2) = (\mathbf{x} - \boldsymbol{\mu})' \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})$,
\begin{align*}
Q(x_1, x_2)
&= \frac{1}{\sigma_{11}\sigma_{22} - \sigma_{12}^2}
\Big( (x_1 - m_1)^2 \sigma_{22} - 2(x_1 - m_1)(x_2 - m_2)\sigma_{12} + (x_2 - m_2)^2 \sigma_{11} \Big) \\
&= \frac{1}{1 - \rho^2}
\left( \Big( \frac{x_1 - m_1}{\sqrt{\sigma_{11}}} \Big)^2
- 2\rho\, \frac{x_1 - m_1}{\sqrt{\sigma_{11}}} \cdot \frac{x_2 - m_2}{\sqrt{\sigma_{22}}}
+ \Big( \frac{x_2 - m_2}{\sqrt{\sigma_{22}}} \Big)^2 \right),
\end{align*}
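The equality of the two forms of $Q$ is easy to confirm numerically; the following sketch uses an arbitrary example covariance matrix (not from the text) and compares the matrix form with the standardized form:
\begin{verbatim}
import numpy as np

m = np.array([1.0, -0.5])               # example mean vector
S = np.array([[2.0, 0.6],
              [0.6, 1.0]])              # example covariance matrix
rho = S[0, 1] / np.sqrt(S[0, 0] * S[1, 1])   # correlation coefficient

x = np.array([0.3, 0.8])
d = x - m

Q_matrix = d @ np.linalg.inv(S) @ d     # (x - mu)' Sigma^{-1} (x - mu)

u = d[0] / np.sqrt(S[0, 0])             # standardized deviations
v = d[1] / np.sqrt(S[1, 1])
Q_rho = (u**2 - 2*rho*u*v + v**2) / (1 - rho**2)

print(Q_matrix, Q_rho)                  # the two expressions coincide
\end{verbatim}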
Figure A.1 Two-dimensional normal density. Left: density function; right: elliptic level curves at levels 0.01, 0.02, 0.05, 0.1, 0.15.
Remark A.2. Formula (A.1) implies that every covariance matrix $\boldsymbol{\Sigma}$ is positive definite or positive semi-definite, i.e., $\sum_{j,k} a_j a_k \sigma_{jk} \ge 0$ for all $a_1, \ldots, a_n$. Conversely, if $\boldsymbol{\Sigma}$ is a symmetric, positive definite matrix of size $n \times n$, i.e., if $\sum_{j,k} a_j a_k \sigma_{jk} > 0$ for all $(a_1, \ldots, a_n) \ne (0, \ldots, 0)$, then (A.2) defines the density function of an $n$-dimensional normal distribution with expectations $m_k$ and covariances $\sigma_{jk}$. Thus, every symmetric, positive definite matrix is the covariance matrix of a non-singular normal distribution.
Furthermore, for every symmetric, positive semi-definite matrix $\boldsymbol{\Sigma}$, i.e., such that
\[
\sum_{j,k} a_j a_k \sigma_{jk} \ge 0
\]
for all $a_1, \ldots, a_n$, there exists an $n$-dimensional (possibly singular) normal distribution that has $\boldsymbol{\Sigma}$ as its covariance matrix.
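Both directions of the remark can be illustrated numerically. The sketch below (with an arbitrary example matrix; the eigenvalue test is one of several equivalent criteria) checks positive semi-definiteness and then uses the Cholesky factor to construct a normal vector with the given covariance matrix:
\begin{verbatim}
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])        # example symmetric matrix

# Positive semi-definite iff all eigenvalues are >= 0, which is
# equivalent to sum_{j,k} a_j a_k sigma_jk >= 0 for all a
print(np.linalg.eigvalsh(S))           # all positive here

# Converse: build X = L Z with L the Cholesky factor of Sigma and
# Z standard normal, so that C[X] = L L' = Sigma
L = np.linalg.cholesky(S)              # requires positive definiteness
Z = np.random.default_rng(0).standard_normal((3, 100000))
X = L @ Z
print(np.cov(X))                       # approximately reproduces S
\end{verbatim}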
Proof. We prove the theorem only for non-singular variables, which have a density function; the statement holds also for singular normal variables.
If $X_1, \ldots, X_n$ are uncorrelated, i.e., $\sigma_{jk} = 0$ for $j \ne k$, then $\boldsymbol{\Sigma}$, and also $\boldsymbol{\Sigma}^{-1}$, are diagonal matrices (note that $\sigma_{jj} = V[X_j]$):
\[
\det \boldsymbol{\Sigma} = \prod_j \sigma_{jj}, \qquad
\boldsymbol{\Sigma}^{-1} =
\begin{pmatrix}
\sigma_{11}^{-1} & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \sigma_{nn}^{-1}
\end{pmatrix}.
\]
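Both facts are immediate to verify numerically for a diagonal matrix (a short sketch with example variances only):
\begin{verbatim}
import numpy as np

s = np.array([1.0, 2.0, 0.5])          # variances sigma_jj = V[X_j]
S = np.diag(s)

print(np.linalg.det(S), np.prod(s))    # det Sigma = prod_j sigma_jj
print(np.linalg.inv(S))                # diagonal with entries 1/sigma_jj
\end{verbatim}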
\[
E[\mathbf{X}] = \mathbf{m}_X, \qquad E[\mathbf{Y}] = \mathbf{m}_Y,
\]
Then
\[
\mathbf{A} =
\begin{pmatrix}
(\mathbf{B}_{11} - \mathbf{B}_{12}\mathbf{B}_{22}^{-1}\mathbf{B}_{21})^{-1} &
-(\mathbf{B}_{11} - \mathbf{B}_{12}\mathbf{B}_{22}^{-1}\mathbf{B}_{21})^{-1}\mathbf{B}_{12}\mathbf{B}_{22}^{-1} \\[4pt]
-(\mathbf{B}_{22} - \mathbf{B}_{21}\mathbf{B}_{11}^{-1}\mathbf{B}_{12})^{-1}\mathbf{B}_{21}\mathbf{B}_{11}^{-1} &
(\mathbf{B}_{22} - \mathbf{B}_{21}\mathbf{B}_{11}^{-1}\mathbf{B}_{12})^{-1}
\end{pmatrix}.
\]
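The block formula can be checked against a direct numerical inverse; in this sketch, $\mathbf{B}$ is an arbitrary symmetric positive definite example:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
B = M @ M.T + 5*np.eye(5)              # symmetric positive definite example

B11, B12 = B[:2, :2], B[:2, 2:]        # partition into blocks
B21, B22 = B[2:, :2], B[2:, 2:]

S1 = np.linalg.inv(B11 - B12 @ np.linalg.inv(B22) @ B21)
S2 = np.linalg.inv(B22 - B21 @ np.linalg.inv(B11) @ B12)

A = np.block([[S1,                             -S1 @ B12 @ np.linalg.inv(B22)],
              [-S2 @ B21 @ np.linalg.inv(B11),  S2]])

print(np.allclose(A, np.linalg.inv(B)))   # True: A equals B^{-1}
\end{verbatim}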
Proof. See a matrix theory textbook, for example [22].
\begin{align}
E[\mathbf{X} \mid \mathbf{Y} = \mathbf{y}] &= \mathbf{m}_{X|Y=y}
= \mathbf{m}_X + \boldsymbol{\Sigma}_{XY}\boldsymbol{\Sigma}_{YY}^{-1}(\mathbf{y} - \mathbf{m}_Y), \tag{A.6} \\
C[\mathbf{X} \mid \mathbf{Y} = \mathbf{y}] &= \boldsymbol{\Sigma}_{XX|Y}
= \boldsymbol{\Sigma}_{XX} - \boldsymbol{\Sigma}_{XY}\boldsymbol{\Sigma}_{YY}^{-1}\boldsymbol{\Sigma}_{YX}. \tag{A.7}
\end{align}
For fixed $\mathbf{y}$, the density can therefore be written
\[
c \exp\{-\tfrac{1}{2} Q(\mathbf{x}, \mathbf{y})\}
= c \exp\{-\tfrac{1}{2}(\mathbf{x}' - \mathbf{m}'_{X|Y=y})\, \boldsymbol{\Sigma}_{XX|Y}^{-1}\, (\mathbf{x} - \mathbf{m}_{X|Y=y})\},
\]
i.e., the conditional distribution of $\mathbf{X}$ given $\mathbf{Y} = \mathbf{y}$ is normal with mean (A.6) and covariance matrix (A.7).
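Formulas (A.6) and (A.7) translate directly into a few lines of numerical code; the partitioned means and covariances below are arbitrary illustrative values:
\begin{verbatim}
import numpy as np

m_X, m_Y = np.array([0.0, 1.0]), np.array([2.0])   # example means
S_XX = np.array([[2.0, 0.3],
                 [0.3, 1.0]])
S_XY = np.array([[0.5],
                 [0.2]])                 # Sigma_YX = Sigma_XY'
S_YY = np.array([[1.5]])

y = np.array([2.8])                      # observed value of Y

# (A.6): conditional mean of X given Y = y
m_cond = m_X + S_XY @ np.linalg.inv(S_YY) @ (y - m_Y)

# (A.7): conditional covariance; note it does not depend on y
S_cond = S_XX - S_XY @ np.linalg.inv(S_YY) @ S_XY.T

print(m_cond, S_cond)
\end{verbatim}
Note that the conditional covariance in (A.7) does not depend on the observed value $\mathbf{y}$; this is a special property of the normal distribution.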
In most of the book, we have assumed all random variables to be real valued. In many applications, and also in the mathematical background, it is advantageous to consider complex variables, simply defined as $Z = X + iY$, where $X$ and $Y$ have a bivariate distribution. The mean value of a complex random variable is simply
\[
E[Z] = E[X] + i\,E[Y],
\]
while the variance and covariances are defined with the complex conjugate on the second variable,
\[
C[Z_1, Z_2] = E\big[(Z_1 - m_{Z_1})\,\overline{(Z_2 - m_{Z_2})}\big], \qquad
V[Z] = C[Z, Z] = E\big[|Z - m_Z|^2\big].
\]
Hence, if the real and imaginary parts are uncorrelated with the same variance, then the complex variable $Z$ is uncorrelated with its own complex conjugate $\overline{Z}$: indeed, $C[Z, \overline{Z}] = E[(Z - m_Z)^2] = V[X] - V[Y] + 2i\,C[X, Y]$, which vanishes precisely when $V[X] = V[Y]$ and $C[X, Y] = 0$. For complex variables, one often uses the term orthogonal instead of uncorrelated.
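The conjugation convention and the orthogonality claim are easy to illustrate by simulation (sample size and seed below are arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n = 200000

# Real and imaginary parts: uncorrelated with equal variance
X = rng.standard_normal(n)
Y = rng.standard_normal(n)
Z = X + 1j*Y

def cov(u, v):
    # covariance with complex conjugate on the second variable
    return np.mean((u - u.mean()) * np.conj(v - v.mean()))

print(cov(Z, Z))            # V[Z] = V[X] + V[Y], approximately 2
print(cov(Z, np.conj(Z)))   # approximately 0: Z orthogonal to conj(Z)
\end{verbatim}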