Grundlehren der

mathematischen Wissenschaften 249


A Series of Comprehensive Studies in Mathematics

Editors

M. Artin S. S. Chern J. L. Doob A. Grothendieck


E. Heinz F. Hirzebruch L. Hörmander S. Mac Lane
W. Magnus C. C. Moore J. K. Moser M. Nagata
W. Schmidt D. S. Scott J. Tits B. L. van der Waerden

Managing Editors

B. Eckmann S. R. S. Varadhan
Kai Lai Chung

Lectures from
Markov Processes
to Brownian Motion

With 3 Figures

Springer Science+Business Media, LLC


Kai Lai Chung
Department of Mathematics
Stanford University
Stanford, CA 94305

AMS Subject Classifications (1980): 60Jxx

Library of Congress Cataloging in Publication Data


Chung, Kai Lai, 1917-
Lectures from Markov processes to Brownian
motion.

(Grundlehren der mathematischen Wissenschaften;


249)
Bibliography: p.
Includes index.
1. Markov processes. 2. Brownian motion pro-
cesses. I. Title. II. Series.
QA274.7.C48 519.2'33 81-14413
AACR2

© 1982 by Springer Science+Business Media New York


Originally published by Springer-Verlag New York Inc. in 1982.
Softcover reprint of the hardcover 1st edition 1982
All rights reserved. No part of this book may be translated or reproduced in any form
without written permission from Springer Science+Business Media, LLC
9 8 7 6 5 4 3 2 1

ISBN 978-1-4757-1778-5 ISBN 978-1-4757-1776-1 (eBook)


DOI 10.1007/978-1-4757-1776-1
Contents

Preface VII

Chapter 1
Markov Process
1.1. Markov Property 1
1.2. Transition Function 6
1.3. Optional Times 12
1.4. Martingale Theorems 24
1.5. Progressive Measurability and the Projection Theorem 37
Notes 44

Chapter 2
Basic Properties
2.1. Martingale Connection 45
2.2. Feller Process 48
2.3. Strong Markov Property and Right Continuity of Fields 56
2.4. Moderate Markov Property and Quasi Left Continuity 66
Notes 73

Chapter 3
Hunt Process
3.1. Defining Properties 75
3.2. Analysis of Excessive Functions 80
3.3. Hitting Times 87
3.4. Balayage and Fundamental Structure 96
3.5. Fine Properties 106
3.6. Decreasing Limits 116
3.7. Recurrence and Transience 122
3.8. Hypothesis (B) 130
Notes 135

Chapter 4
Brownian Motion
4.1. Spatial Homogeneity 137
4.2. Preliminary Properties of Brownian Motion 144

4.3. Harmonic Function 154
4.4. Dirichlet Problem 162
4.5. Superharmonic Function and Supermartingale 174
4.6. The Role of the Laplacian 189
4.7. The Feynman-Kac Functional and the Schrödinger Equation 199
Notes 206

Chapter 5
Potential Developments
5.1. Quitting Time and Equilibrium Measure 208
5.2. Some Principles of Potential Theory 218
Notes 232

Bibliography 233
Index 237
Preface

This book evolved from several stacks of lecture notes written over a decade
and given in classes at slightly varying levels. In transforming the over-
lapping material into a book, I aimed at presenting some of the best features
of the subject with a minimum of prerequisites and technicalities. (Needless
to say, one man's technicality is another's professionalism.) But a text frozen
in print does not allow for the latitude of the classroom; and the tendency
to expand becomes harder to curb without the constraints of time and
audience. The result is that this volume contains more topics and details
than I had intended, but I hope the forest is still visible with the trees.
The book begins at the beginning with the Markov property, followed
quickly by the introduction of optional times and martingales. These three
topics in the discrete parameter setting are fully discussed in my book A
Course In Probability Theory (second edition, Academic Press, 1974). The
latter will be referred to throughout this book as the Course, and may be
considered as a general background; its specific use is limited to the mate-
rial on discrete parameter martingale theory cited in §1.4. Apart from this
and some dispensable references to Markov chains as examples, the book
is self-contained. However, there are a very few results which are explained
and used, but not proved here, the first instance being the theorem on pro-
jection in §1.5. The fundamental regularity properties of a Markov process
having a Feller transition semigroup are established in Chapter 2, together
with certain measurability questions which must be faced. Chapter 3 con-
tains the basic theory as formulated by Hunt, including some special topics
in the last three sections. Elements of a potential theory accompany the
development, but a proper treatment would require the setting up of dual
structures. Instead, the relevant circle of ideas is given a new departure in
Chapter 5. Chapter 4 grew out of a short compendium as a particularly
telling example, and Chapter 5 is a splinter from unincorporated sections
of Chapter 4. The venerable theory of Brownian motion is so well embel-
lished and ramified that once begun it is hard to know where to stop. In
the end I have let my own propensity and capability make the choice. Thus
the last three sections of the book treat several recent developments which
have engaged me lately. They are included here with the hope of inducing
further work in such fascinating old-and-new themes as equilibrium,
energy, and reversibility.

I used both the Notes and Exercises as proper non-trivial extensions of


the text. In the Notes a number of regrettably omitted topics are mentioned,
and related to the text as a sort of guide to supplementary reading. In the
Exercises there are many alternative proofs, important corollaries and
examples that the reader will do well not to overlook.
The manuscript was prepared over a span of time apparently too long
for me to maintain a uniform style and consistent notation. For instance,
who knows whether "semipolar" should be spelled with or without a hyphen?
And if both |x| and ||x|| are used to denote the same thing, does it really
matter? Certain casual remarks and repetitions are also left in place, as they
are permissible, indeed desirable, in lectures. Despite considerable pains on
the part of several readers, it is perhaps too much to hope that no blunders
remain undetected, especially among the exercises. I have often made a
point, when assigning homework problems in class, to say that the correc-
tion of any inaccurate statement should be regarded as part of the exercise.
This is of course not a defense for mistakes but merely offered as prior
consolation.
Many people helped me with the task. To begin with, my first formal
set of notes, contained in five folio-size, lined, students' copybooks, was
prepared for a semester course given at the Eidgenössische Technische
Hochschule in the spring of 1970. My family has kept fond memories of
a pleasant sojourn in a Swiss house in the great city of Zürich, and I
should like to take this belated occasion to thank our hospitable hosts. An-
other set of notes (including the lectures given by Doob mentioned in §4.5)
was taken during 1971-2 by Harry Guess, who was kind enough to send
me a copy. Wu Rong, a visiting scholar from China, read the draft and the
galley proofs, and checked out many exercises. The comments by R. Getoor,
N. Falkner, and Liao Ming led to some final alterations. Most of the manu-
script was typed by Mrs. Gail Stein, who also typed some of my other
books. Mrs. Charlotte Crabtree, Mrs. Priscilla Feigen, and my daughter
Marilda did some of the revisions. I am grateful to the National Science
Foundation for its support of my research, some of which went into this
book.

August 1981 Kai Lai Chung


Chapter 1

Markov Process

1.1. Markov Property


We begin by describing a general Markov process running on continuous
time and living in a topological space. The time parameter is the set of
positive numbers, considered at first as just a linearly ordered set of indices.
In the discrete case this is the set of positive integers and the corresponding
discussion is given in Chapter 9 of the Course. Thus some of the proofs below
are the same as for the discrete case. Only later when properties of sample
functions are introduced will the continuity of time play an essential role.
As for the living space we deal with a general one because topological
properties of sets such as "open" and "compact" will be much used while
specific Euclidean notions such as "interval" and "sphere" do not come into
question until much later.
We must introduce some new terminology and notation, but we will do
this gradually as the need arises. Mathematical terms which have been defined
in the Course will be taken for granted, together with the usual symbols to
denote them. The reader can locate these through the Index of the Course.
But we will repeat certain basic definitions with perhaps slight modifications.
Let (Ω, ℱ, P) be a probability space. Let

T = [0, ∞).

Let E be a locally compact separable metric space; and let ℰ be the minimal
Borel field in E containing all the open sets. The reader is referred to any
standard text on real analysis for simple topological notions. Since the
Euclidean space R^d of any dimension d is a well known particular case of
an E, the reader may content himself with thinking of R^d while reading
about E, which is not a bad practice in the learning process.
For each t ∈ T, let

X_t(·)

be a function from Ω to E such that

X_t^{-1}(ℰ) ⊂ ℱ.

This will be written as

X_t ∈ ℱ/ℰ,

and we say that X_t is a random variable taking values in (E, ℰ). For E = R^1,
ℰ = ℬ^1, this reduces to the familiar notion of a real random variable. Now
any family {X_t, t ∈ T} is called a stochastic process. In this generality the
notion is of course not very interesting. Special classes of stochastic pro-
cesses are defined by imposing certain conditions on the random variables
X_t, through their joint or conditional distributions. Such conditions have
been formulated by pure and applied mathematicians on a variety of grounds.
By far the most important and developed is the class of Markov processes
that we are going to study.
Borel field is also called σ-field or σ-algebra. As a general notation, for
any family of random variables {Z_α, α ∈ A}, we will denote the σ-field
generated by it by σ(Z_α, α ∈ A). Now we put specifically

ℱ_t^0 = σ(X_s, s ∈ [0, t]);   ℱ_t' = σ(X_s, s ∈ [t, ∞)).

Intuitively, an event in ℱ_t^0 is determined by the behavior of the process
{X_s} up to the time t; an event in ℱ_t' by its behavior after t. Thus they repre-
sent respectively the "past" and "future" relative to the "present" instant t.
For technical reasons, it is convenient to enlarge the past, as follows.
Let {ℱ_t, t ∈ T} be a family of σ-fields of sets in ℱ, such that
(a) if s < t, then ℱ_s ⊂ ℱ_t;
(b) for each t, X_t ∈ ℱ_t.
Property (a) is expressed by saying that "{ℱ_t} is increasing"; property
(b) by saying that "{X_t} is adapted to {ℱ_t}". Clearly the family {ℱ_t^0} satisfies
both conditions and is the minimal such family in the obvious sense. Other
instances of {ℱ_t} will appear soon. The general definition of a Markov
process involves {ℱ_t} as well as {X_t}.

Definition. {X_t, ℱ_t, t ∈ T} is a Markov process iff one of the following equiv-
alent conditions is satisfied:

(i) ∀t ∈ T, A ∈ ℱ_t, B ∈ ℱ_t':
P(A ∩ B | X_t) = P(A | X_t)P(B | X_t);
(ii) ∀t ∈ T, B ∈ ℱ_t':
P(B | ℱ_t) = P(B | X_t);
(iii) ∀t ∈ T, A ∈ ℱ_t:
P(A | ℱ_t') = P(A | X_t).

The reader is reminded that a conditional probability or expectation is an
equivalence class of random variables with respect to the measure P. The
equations above are all to be taken in this sense.

We shall use the two basic properties of conditional expectations, for
arbitrary σ-fields 𝒢, 𝒢_1, 𝒢_2 and integrable random variables Y and Z:
(a) if Y ∈ 𝒢, then E{YZ | 𝒢} = Y E{Z | 𝒢};
(b) if 𝒢_1 ⊂ 𝒢_2, then

E{E[Y | 𝒢_2] | 𝒢_1} = E{Y | 𝒢_1}.

See Chapter 9 of the Course.


Let us prove the equivalence of (i), (ii) and (iii). Assume that (i) holds; we
will deduce (ii) in the following form. For each A ∈ ℱ_t and B ∈ ℱ_t' we have

E{1_A P(B | X_t)} = P(A ∩ B). (1)

Now the left member of (1) is equal to

E{E[1_A P(B | X_t) | X_t]} = E{P(A | X_t)P(B | X_t)}
= E{P(A ∩ B | X_t)} = P(A ∩ B).

Symmetrically, we have

E{1_B P(A | X_t)} = P(A ∩ B),

which implies (iii).


Conversely, to show for instance that (ii) implies (i), we have

P(A ∩ B | X_t) = E{E(1_A · 1_B | ℱ_t) | X_t}
= E{1_A P(B | ℱ_t) | X_t} = E{1_A P(B | X_t) | X_t}
= P(B | X_t)E{1_A | X_t} = P(B | X_t)P(A | X_t).

From here on we shall often omit such qualifying phrases as "∀t ∈ T". As
a general notation, we denote by b𝒢 the class of bounded real-valued
𝒢-measurable functions; by C_c the class of continuous functions on E with
compact supports.
Form (ii) of the Markov property is the most useful one and it is equivalent
to any of the following:

(iia) ∀Y ∈ bℱ_t':
E{Y | ℱ_t} = E{Y | X_t};
(iib) ∀u ≥ t, f ∈ bℰ:
E{f(X_u) | ℱ_t} = E{f(X_u) | X_t};
(iic) ∀u ≥ t, f ∈ C_c(E):
E{f(X_u) | ℱ_t} = E{f(X_u) | X_t}.

It is obvious that each of these conditions is weaker than the preceding
one. To show the reverse implications we state two lemmas. As a rule, a
self-evident qualifier "nonempty" for a set will be omitted as in the following
proposition.

Lemma 1. For each open set G, there exists a sequence of functions {f_n} in C_c
such that

lim_n ↑ f_n = 1_G.

This is an easy consequence of our topological assumption on E, and gives
the reader a good opportunity to review his knowledge of such things.

Lemma 2. Let S be an arbitrary space and 𝔻 a class of subsets of S. 𝔻 is
closed under finite intersections. Let ℂ be a class of subsets of S such that
S ∈ ℂ and 𝔻 ⊂ ℂ. Furthermore suppose that ℂ has the following closure
properties:

(a) if A_n ∈ ℂ and A_n ⊂ A_{n+1} for n ≥ 1, then ∪_{n=1}^∞ A_n ∈ ℂ;
(b) if A ⊂ B and A ∈ ℂ, B ∈ ℂ, then B − A ∈ ℂ.

Then ℂ ⊃ σ(𝔻).
Here as a general notation σ(𝔻) is the σ-field generated by the class of
sets 𝔻. Lemma 2 is Dynkin's form of the "monotone class theorem"; the
proof is similar to Theorem 2.1.2 of the Course, and is given as an exercise there.
The reader should also figure out why the cited theorem cannot be applied
directly in what follows.
Let us now prove that (iic) implies (iib). Using the notation of Lemma 1,
we have by (iic)

E{f_n(X_u) | ℱ_t} = E{f_n(X_u) | X_t}.

Letting n → ∞ we obtain by monotone convergence

E{1_G(X_u) | ℱ_t} = E{1_G(X_u) | X_t}. (2)

Now apply Lemma 2 to the space E. Let 𝔻 be the class of open sets, ℂ the
class of sets A satisfying

E{1_A(X_u) | ℱ_t} = E{1_A(X_u) | X_t}. (3)

Of course 𝔻 is closed under finite intersections, and 𝔻 ⊂ ℂ by (2). The other
properties required of ℂ are simple consequences of the fact that each member
of (3), as function of A, acts like a probability measure; see p. 301 of the Course
for a discussion. Hence we have ℂ ⊃ ℰ by Lemma 2, which means that (3)
is true for each A in ℰ, or again that (iib) is true for f = 1_A, A ∈ ℰ. The class

of f for which (iib) is true is closed under addition, multiplication by a
constant, and monotone convergence. Hence it includes bℰ by a standard
approximation. [We may invoke here Problem 11 in §2.1 of the Course.]
To prove that (iib) implies (iia), we consider first

Y = ∏_{j=1}^n f_j(X_{u_j}),

where t ≤ u_1 < ··· < u_n, and f_j ∈ bℰ for 1 ≤ j ≤ n. For such a Y with n = 1,
(iia) is just (iib). To make induction from n − 1 to n, we write

E{∏_{j=1}^n f_j(X_{u_j}) | ℱ_t} = E{E[∏_{j=1}^n f_j(X_{u_j}) | ℱ_{u_{n−1}}] | ℱ_t}
= E{∏_{j=1}^{n−1} f_j(X_{u_j}) E[f_n(X_{u_n}) | ℱ_{u_{n−1}}] | ℱ_t}. (4)

Now we have by (iib)

E[f_n(X_{u_n}) | ℱ_{u_{n−1}}] = g(X_{u_{n−1}})

for some g ∈ bℰ. Substituting this into the above and using the induction
hypothesis with f_{n−1}·g taking the place of f_{n−1}, we see that the last term in
(4) is equal to

E{∏_{j=1}^{n−1} f_j(X_{u_j}) E[f_n(X_{u_n}) | ℱ_{u_{n−1}}] | X_t} = E{E[∏_{j=1}^n f_j(X_{u_j}) | ℱ_{u_{n−1}}] | X_t}
= E{∏_{j=1}^n f_j(X_{u_j}) | X_t},

since X_t ∈ ℱ_{u_{n−1}}. This completes the induction.

Now let 𝔻 be the class of subsets of Ω of the form ∩_{j=1}^n {X_{u_j} ∈ B_j} with
the u_j's as before and B_j ∈ ℰ. Then 𝔻 is closed under finite intersections.
Let ℂ be the class of subsets A of Ω such that

E{1_A | ℱ_t} = E{1_A | X_t}.

Then Ω ∈ ℂ, 𝔻 ⊂ ℂ and ℂ has the properties (a) and (b) in Lemma 2. Hence
by Lemma 2, ℂ ⊃ σ(𝔻), which is just ℱ_t'. Thus (iia) is true for any indicator
Y ∈ ℱ_t' [that is (ii)], and so also for any Y ∈ bℱ_t' by approximations. The
equivalence of (iia), (iib), (iic), and (ii), is completely proved.
Finally, (iic) is equivalent to the following: for arbitrary integers n ≥ 1
and 0 ≤ t_1 < ··· < t_n < t < u, and f ∈ C_c(E) we have

E{f(X_u) | X_{t_1}, …, X_{t_n}, X_t} = E{f(X_u) | X_t}. (5)

This is the oldest form of the Markov property. It will not be needed below
and its proof is left as an exercise.
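The equivalence just proved can be checked mechanically in the discrete setting by exact enumeration. The following sketch is not part of the text; the three-state transition matrix and initial law are made up for the illustration. It verifies form (i), P(A ∩ B | X_t) = P(A | X_t)P(B | X_t), at t = 1, with a past event A and a future event B:

```python
from itertools import product

# A made-up three-state transition matrix (rows sum to 1) and initial law.
M = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
mu = [0.2, 0.5, 0.3]

# Joint law of (X_0, X_1, X_2) for the chain.
def p(x0, x1, x2):
    return mu[x0] * M[x0][x1] * M[x1][x2]

# Condition on the "present" X_1 = x and check form (i) with
# A = {X_0 = 0} (past) and B = {X_2 = 2} (future).
for x in range(3):
    px = sum(p(a, x, b) for a, b in product(range(3), repeat=2))
    pA = sum(p(0, x, b) for b in range(3)) / px
    pB = sum(p(a, x, 2) for a in range(3)) / px
    pAB = p(0, x, 2) / px
    assert abs(pAB - pA * pB) < 1e-12
```

Replacing A and B by other events measurable on {X_0} and {X_2} respectively yields the same identity, while events mixing past and future generally violate it.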

1.2. Transition Function

The probability structure of a Markov process will now be specified.

Definition. The collection {P_{s,t}(·,·), 0 ≤ s < t < ∞} is a Markov transition
function on (E, ℰ) iff ∀s < t < u we have
(a) ∀x ∈ E: A → P_{s,t}(x, A) is a probability measure on ℰ;
(b) ∀A ∈ ℰ: x → P_{s,t}(x, A) is ℰ-measurable;
(c) ∀x ∈ E, ∀A ∈ ℰ:

P_{s,u}(x, A) = ∫_E P_{s,t}(x, dy)P_{t,u}(y, A).

This function is called [temporally] homogeneous iff there exists a collection
{P_t(·,·), 0 < t} such that ∀s < t, x ∈ E, A ∈ ℰ we have

P_{s,t}(x, A) = P_{t−s}(x, A).

In this case (a) and (b) hold with P_{s,t} replaced by P_t, and (c) may be rewritten
as follows (Chapman-Kolmogorov equation):

P_{s+t}(x, A) = ∫_E P_s(x, dy)P_t(y, A). (1)

For f ∈ bℰ, we shall write

P_t f(x) = P_t(x, f) = ∫_E P_t(x, dy) f(y).

Then (b) implies that P_t f ∈ bℰ. For each t, the operator P_t maps bℰ into bℰ,
also ℰ_+ into ℰ_+, where ℰ_+ denotes the class of positive (extended-valued)
ℰ-measurable functions. The family {P_t, t > 0} forms a semigroup by (1),
which is expressed symbolically by

P_s P_t = P_{s+t}.

As a function of x and A, P_t(x, A) is also called a "kernel" on (E, ℰ).

Definition. {X_t, ℱ_t, t ∈ T} is a homogeneous Markov process with (P_t) as its
transition function (or semigroup) iff for t ≥ 0, s > 0 and f ∈ bℰ we have

E{f(X_{t+s}) | ℱ_t} = P_s f(X_t). (2)

Observe that the left side in (2) is defined as a function of ω (not shown!)
only up to a set of P-measure zero, whereas the right side is a completely
determined function of ω since X_t is such a function. Such a relation should
be understood to mean that one version of the conditional expectation on
the left side is given by the right side.
Henceforth a homogeneous Markov process will simply be called a
Markov process.
The distribution μ of X_0 is called the initial distribution of the process. If
0 ≤ t_1 < ··· < t_n and f ∈ bℰ^n, we have

E{f(X_{t_1}, …, X_{t_n})}
= ∫ μ(dx_0) ∫ P_{t_1}(x_0, dx_1) ··· ∫ P_{t_n − t_{n−1}}(x_{n−1}, dx_n) f(x_1, …, x_n) (3)

where the integrations are over E. In particular if f is the indicator of A_1 ×
··· × A_n, where n ≥ 1 and each A_j ∈ ℰ, this gives the finite-dimensional joint
distributions of the process.
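For a countable E the integrals in (3) become sums, and the formula can be tested directly. In the sketch below (an illustration, not from the text) the semigroup is given by powers of a made-up two-state stochastic matrix, so the times are restricted to integers:

```python
import numpy as np

# A made-up two-state semigroup: P_t = t-th power of a one-step matrix.
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])
mu = np.array([0.5, 0.5])

def P(t):                      # P_t for integer t >= 0
    return np.linalg.matrix_power(M, t)

# Formula (3) with n = 2 and f = indicator of (x1, x2) == (0, 1):
t1, t2 = 1, 3
lhs = sum(mu[x0] * P(t1)[x0, 0] * P(t2 - t1)[0, 1] for x0 in range(2))

# Compare with the factorized form
# P{X_{t1} = 0, X_{t2} = 1} = (mu P_{t1})_0 (P_{t2 - t1})_{01}.
rhs = (mu @ P(t1))[0] * P(t2 - t1)[0, 1]
assert abs(lhs - rhs) < 1e-12
```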
If Ω is the space of all functions from T to E, it is possible by Kolmogorov's
extension theorem (see e.g. Doob [1]) to construct a process with the joint
distributions given in (3). We shall always assume that such a process exists in
our probability space. If x is any point in E, and μ = ε_x (the point mass at x),
the corresponding process is said to start at x. The probability measure on
the σ-field ℱ^0 generated by this process will be denoted by P^x, and the
corresponding expectation by E^x. For example if Y ∈ bℱ^0:

E^x{Y} = ∫_Ω Y dP^x;

and if Y = 1_A(X_t), where A ∈ ℰ, then the quantity above reduces to

E^x{1_A(X_t)} = P_t(x, A). (4)

Furthermore for each Λ ∈ ℱ^0, the function x → P^x(Λ) is ℰ-measurable. For
Λ = X_t^{-1}(A) this follows from (4) and property (b) of the transition function.
The general case then follows by a monotone class argument as in the proof
that (iib) implies (iia) in §1.1.
The Markov property (2) can now be written as

P{X_{s+t} ∈ A | ℱ_t} = P^{X_t}{X_s ∈ A} = P_s(X_t, A) (5)

where t ≥ 0, s > 0, A ∈ ℰ. Beware of the peculiar symbolism which allows
the substitution of X_t for the generic x in P^x(Λ). For instance, if s = t in the
second member of (5), the two occurrences of X_t do not have the same
significance. [There is, of course, no such confusion in the third member of
(5).] Nevertheless the system of notation using the superscript will be found
workable and efficient.

We want to extend the equation to sets more general than {X_{s+t} ∈ A} =
X_{s+t}^{-1}(A). This can be done expeditiously by introducing a "shift" {θ_t, t ≥ 0}
in the following manner. For each t, let θ_t map Ω into Ω such that

∀s ≥ 0: X_s ∘ θ_t = X_{s+t}. (6)

With this notation we have

{X_{s+t} ∈ A} = θ_t^{-1}{X_s ∈ A},

so that (5) becomes

P{θ_t^{-1}{X_s ∈ A} | ℱ_t} = P^{X_t}{X_s ∈ A}. (7)

In general if Λ ∈ ℱ^0, then θ_t^{-1}Λ ∈ ℱ_t' (proof?), and we have

P{θ_t^{-1}Λ | ℱ_t} = P^{X_t}(Λ). (8)

More generally, if Y ∈ bℱ^0, we have

E{Y ∘ θ_t | ℱ_t} = E^{X_t}{Y}. (9)

The relations (8) and (9) follow from (7) by Lemma 2 of §1.1.
Does a shift exist as defined by (6)? If Ω is the space of all functions on T
to E: Ω = E^T, as in the construction by Kolmogorov's theorem mentioned
above, then an obvious shift exists. In fact, in this case each ω in Ω is just the
sample function X(·, ω) with domain T, and we may set

θ_t ω = X(t + ·, ω),

which is another such function. Since X_s(ω) = X(s, ω), the equation (6) is a
triviality. The same is true if Ω is the space of all right continuous (or contin-
uous) functions, and such a space will serve for our later developments. For
an arbitrary Ω, a shift need not exist but it is always possible to construct a
shift by enlarging Ω without affecting the probability structure. We will not
detail this but rather postulate the existence of a shift as part of our basic
machinery for a Markov process.
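On the canonical space Ω = E^T the shift is transparent. A minimal sketch (an illustration, not from the text) for a discrete time set: a path is a tuple, and θ_t simply drops the first t coordinates, so equation (6) holds by construction.

```python
# Coordinate maps on the canonical path space: X_s(w) = w(s).
def X(s, w):
    return w[s]

# Shift: (theta_t w)(s) = w(t + s), i.e. drop the first t coordinates.
def theta(t, w):
    return w[t:]

w = (3, 1, 4, 1, 5, 9, 2, 6)   # a made-up path
for t in range(4):
    for s in range(len(w) - t):
        assert X(s, theta(t, w)) == X(s + t, w)   # equation (6)
```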
For an arbitrary probability measure μ on ℰ, we put

P^μ(Λ) = ∫_E P^x(Λ)μ(dx), Λ ∈ ℱ^0. (10)

This is the probability measure determined by the process with initial
distribution μ. For instance, equation (9) remains true if the E there is re-
placed by E^μ. Note that P^μ (in particular P^x) is defined so far only on ℱ^0, in

contrast to P which is given on ℱ ⊃ ℱ^0. Later we shall extend P^μ to a larger
σ-field by completion.
The transition function P_t(·,·) has been assumed to be a strict probability
kernel, namely P_t(x, E) = 1 for every t ∈ T and x ∈ E. We will extend this by
allowing

P_t(x, E) ≤ 1, ∀t ∈ T, x ∈ E. (11)

Such a transition function is called submarkovian, and the case where equality
holds in (11) [strictly] Markovian. A simple device converts the former to the
latter as follows. We introduce a new point ∂ ∉ E and put

E_∂ = E ∪ {∂};   ℰ_∂ = σ{ℰ, {∂}}.

The new point ∂ may be considered as the "point at infinity" in the one-point
compactification of E. If E is itself compact, ∂ is nevertheless adjoined as an
isolated point. We now define P_t' as follows for t > 0 and A ∈ ℰ:

P_t'(x, A) = P_t(x, A),
P_t'(x, {∂}) = 1 − P_t(x, E),   if x ≠ ∂; (12)
P_t'(∂, E) = 0,   P_t'(∂, {∂}) = 1.
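In discrete time the device of (12) amounts to bordering a substochastic matrix with an extra absorbing row and column. A sketch with a made-up kernel (not from the text):

```python
import numpy as np

# A made-up substochastic one-step kernel on E = {0, 1}: row sums < 1,
# the deficit being the probability of being "killed".
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Adjoin the cemetery point, index 2, following (12).
Qp = np.zeros((3, 3))
Qp[:2, :2] = Q
Qp[:2, 2] = 1.0 - Q.sum(axis=1)   # P'(x, {cemetery}) = 1 - P(x, E)
Qp[2, 2] = 1.0                    # the cemetery is absorbing

# The enlarged kernel is Markovian: every row sums to 1 ...
assert np.allclose(Qp.sum(axis=1), 1.0)
# ... and restricted to E it reproduces the original kernel after any
# number of steps, since mass sent to the cemetery never returns.
n = 5
assert np.allclose(np.linalg.matrix_power(Qp, n)[:2, :2],
                   np.linalg.matrix_power(Q, n))
```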

It is clear that P_t'(·,·) on (E_∂, ℰ_∂) is Markovian. Let (X_t, ℱ_t) be a Markov
process on (E_∂, ℰ_∂) with (P_t') as transition function. Notwithstanding the last
two relations in (12), it does not follow that ∂ will behave like an "absorbing
state" (or "trap"). However it can be shown that this may be arranged by an
unessential modification of the probability space. We shall assume that this
has already been done, so that ∂ is absorbing in the sense that

∀ω, ∀s ≥ 0: {X_s(ω) = ∂} ⊂ {X_t(ω) = ∂ for all t ≥ s}. (13)

Now we define the function ζ from Ω to [0, ∞] as follows:

ζ(ω) = inf{t ∈ T: X_t(ω) = ∂} (14)

where, as a standard convention, inf ∅ = ∞ for the empty set ∅. Thus
ζ(ω) = ∞ if and only if X_t(ω) ≠ ∂ for all t ∈ T, in other words X_t(ω) ∈ E for
all t ∈ T. The random variable ζ is called the lifetime of the process X.
The observant reader may remark that so far we have not defined P_0(·,·).
There are interesting cases where P_0(x,·) need not be the point mass ε_x(·);
then x is called a "branching point". There are also cases where P_0(·,·) should
be left undefined. However, we shall assume until further notice that we are
in the "normal" case where

P_0 f = f,   ∀f ∈ bℰ,

namely P_0 is the identity operator. Equivalently, we assume

∀x ∈ E: P_0(x, ·) = ε_x(·). (15)

Before proceeding further let us give a few simple examples of (homo-
geneous) Markov processes.

EXAMPLE 1 (Markov chain).

E = any countable set, for example the set of positive integers;
ℰ = the σ-field of all subsets of E.

We may write p_{ij}(t) = P_t(i, {j}) for i ∈ E, j ∈ E. Then for any A ⊂ E, we have

P_t(i, A) = Σ_{j∈A} p_{ij}(t).

The conditions (a) and (c) in the definition of transition function become in
this case:

(a) ∀i ∈ E: Σ_{j∈E} p_{ij}(t) = 1;

(c) ∀i ∈ E, k ∈ E: p_{ik}(s + t) = Σ_{j∈E} p_{ij}(s)p_{jk}(t);

while (b) is trivially true. For the submarkovian case, the "=" in (a) is replaced
by "≤". If we add the condition

(d) ∀i ∈ E, j ∈ E: lim_{t↓0} p_{ij}(t) = δ_{ij};

then the matrix of transition functions

(p_{ij}(t)),   i ∈ E, j ∈ E, t ≥ 0,

is called a "standard transition matrix". In this case each p_{ij}(·) is a continuous
function on T. See Chung [2] for the theory of this special case of Markov
process.
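For a two-state chain the standard transition matrix is available in closed form, and conditions (a), (c), (d) can be checked numerically. The jump rates a (for 0 → 1) and b (for 1 → 0) below are made up for the illustration:

```python
import math

# Closed-form transition matrix of the two-state chain with rates a, b.
a, b = 2.0, 3.0

def P(t):
    e = math.exp(-(a + b) * t)
    return [[(b + a * e) / (a + b), a * (1 - e) / (a + b)],
            [b * (1 - e) / (a + b), (a + b * e) / (a + b)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s, t = 0.4, 1.1
lhs, rhs = matmul(P(s), P(t)), P(s + t)
for i in range(2):
    assert abs(sum(P(t)[i][j] for j in range(2)) - 1.0) < 1e-12   # (a)
    for j in range(2):
        assert abs(lhs[i][j] - rhs[i][j]) < 1e-12                 # (c)
        limit = 1.0 if i == j else 0.0
        assert abs(P(1e-9)[i][j] - limit) < 1e-6                  # (d)
```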

EXAMPLE 2 (uniform motion).

E = R^1 = (−∞, +∞); ℰ = the classical Borel field on R^1.

For x ∈ R^1, t ≥ 0, we put

P_t(x, ·) = ε_{x+t}(·).

Starting from any point x, the process moves deterministically to the right
with uniform speed. This trivial example turns out to be the source of many
counterexamples to facile generalities. A slight modification yields an
example for which (15) is false. Let E = {0} ∪ (−∞, −1] ∪ [1, ∞),

P_0(0, ·) = ½{ε_1(·) + ε_{−1}(·)};
P_t(x, ·) = ε_{x+t}(·),   if x ≥ 1, t ≥ 0;
P_t(x, ·) = ε_{x−t}(·),   if x ≤ −1, t ≥ 0;
P_t(0, ·) = ½{ε_{1+t}(·) + ε_{−1−t}(·)},   if t > 0.

Note that although P_0 is not the identity, we have P_0 P_t = P_t P_0 = P_t for
t ≥ 0.

EXAMPLE 3 (Poisson process). E = N = the set of positive (≥ 0) integers or
the set of all integers. For n ∈ N, m ∈ N, t ≥ 0:

P_t(n, {m}) = 0,   if m < n;
P_t(n, {m}) = e^{−t} t^{m−n}/(m − n)!,   if m ≥ n.

Note that in this case there is spatial homogeneity, namely: the function of
the pair (n, m) exhibited above is a function of m − n only.
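The Chapman-Kolmogorov equation (1) for this kernel is the binomial theorem in disguise; the following sketch (not part of the text, with sample values chosen arbitrarily) checks it, writing the unit-rate kernel as in the display above:

```python
import math

# Unit-rate Poisson kernel of Example 3 (n, m nonnegative integers).
def p(t, n, m):
    if m < n:
        return 0.0
    return math.exp(-t) * t ** (m - n) / math.factorial(m - n)

# Chapman-Kolmogorov: sum over the intermediate state; the sum is finite
# because p(s, n, k) = 0 for k < n and p(t, k, m) = 0 for k > m.
s, t, n, m = 0.7, 1.9, 2, 9
conv = sum(p(s, n, k) * p(t, k, m) for k in range(n, m + 1))
assert abs(conv - p(s + t, n, m)) < 1e-12

# Spatial homogeneity: the kernel depends only on m - n.
assert abs(p(1.3, 4, 7) - p(1.3, 0, 3)) < 1e-15
```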

EXAMPLE 4 (Brownian motion in R^1).

For real x and y and t > 0, put

p_t(x, y) = (1/√(2πt)) exp[−(y − x)²/2t]

and define the transition function as follows:

P_t(x, A) = ∫_A p_t(x, y) dy,   t > 0;
P_0(x, A) = ε_x(A).

The function p_t(·,·) is a transition probability density. In this case it is the
Gaussian density function with mean zero and variance t. As in Example 3
there is again spatial homogeneity; indeed p_t is a function of |x − y| only.
This example, and its extension to R^d, will be the subject matter of Chapter 4.
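Equation (1) for this density says that the convolution of Gaussian kernels with variances s and t is the Gaussian kernel with variance s + t. The sketch below (not from the text, with made-up sample points) checks this by numerical integration over the intermediate point:

```python
import math

# The Gaussian transition density of Example 4.
def p(t, x, y):
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

# Chapman-Kolmogorov by numerical integration over the intermediate
# point y (uniform grid wide enough that the truncated tails are
# negligible).
s, t, x, z = 0.5, 1.5, 0.2, -0.7
h, L = 0.001, 12.0
ys = [-L + i * h for i in range(int(2 * L / h) + 1)]
integral = h * sum(p(s, x, y) * p(t, y, z) for y in ys)
assert abs(integral - p(s + t, x, z)) < 1e-6
```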

1.3. Optional Times

A major device to "tame" the continuum of time is the use of random times.
The idea is very useful even for discrete time problems and has its origin in
considering "the first time when a given event occurs". In continuous time
it becomes necessary to formalize the intuitive notions in terms of various
σ-fields.
We complete the increasing family {ℱ_t, t ∈ T} by setting

ℱ_∞ = ⋁_{t∈T} ℱ_t.

Recall that the notation on the right side above means the minimal σ-field
including all ℱ_t, which is not the same as ∪_{t∈T} ℱ_t. Although we shall apply
the considerations below to a Markov process, we need not specify the
family {ℱ_t} to begin with.

Definition. The function T: Ω → [0, ∞] is called optional relative to {ℱ_t} iff

∀t ∈ [0, ∞): {T ≤ t} ∈ ℱ_t.

The preceding relation then holds also for t = ∞ by letting t ↑ ∞ through a
sequence. Define

∀t ∈ (0, ∞): ℱ_{t−} = ⋁_{s∈[0,t)} ℱ_s;
∀t ∈ [0, ∞): ℱ_{t+} = ⋀_{s∈(t,∞)} ℱ_s.

We have clearly for each t: ℱ_{t−} ⊂ ℱ_t ⊂ ℱ_{t+}.

Definition. The family {ℱ_t} is called right continuous iff

∀t ∈ [0, ∞): ℱ_t = ℱ_{t+}.

It follows from the definition that the family {ℱ_{t+}} is right continuous. Note
the analogy with a real-valued increasing function t → f(t).

Proposition 1. T is optional relative to {ℱ_{t+}} if and only if

∀t ∈ [0, ∞): {T < t} ∈ ℱ_t. (1)

Proof. If T is optional relative to {ℱ_{t+}}, then by definition

∀t ∈ [0, ∞): {T ≤ t} ∈ ℱ_{t+}.


Hence we have for 1/n < t:

{T ≤ t − 1/n} ∈ ℱ_{(t−1/n)+} ⊂ ℱ_t,

and consequently

{T < t} = ∪_{n=1}^∞ {T ≤ t − 1/n} ∈ ℱ_t.

Conversely, if (1) is true, then for t ∈ [0, ∞):

{T < t + 1/n} ∈ ℱ_{t+1/n},

and consequently

{T ≤ t} = ∩_{n=1}^∞ {T < t + 1/n} ∈ ⋀_{n=1}^∞ ℱ_{t+1/n} = ℱ_{t+}. □

EXAMPLES.

1. The lifetime ζ defined in (14) of §1.2 is optional relative to {ℱ_{t+}^0}, because
∂ is absorbing, which implies (Q is the set of rational numbers)

{ζ < t} = ∪_{r∈Q∩[0,t)} {X_r = ∂} ∈ ℱ_t^0.

2. Suppose ℱ_0 contains all P-null sets and ∀t: P{T = t} = 0. Then we have

∀t: {T ≤ t} − {T < t} ∈ ℱ_t,

so that optionality relative to {ℱ_t} is the same as relative to {ℱ_{t+}}.

3. Consider a Poisson process with left continuous paths and T = first jump
time. Then T is optional relative to {ℱ_{t+}^0} but not to {ℱ_t^0}, because

{T = t} ∉ ℱ_t^0 [= ℱ_{t−}^0].

The reader is supposed to be familiar with elementary properties of the
Poisson process which are "intuitively obvious", albeit sometimes tedious to
establish rigorously.
From now on we fix the family {ℱ_t, t ∈ [0, ∞]} and call T an "optional
time" or "strictly optional time" according as it is optional relative to {ℱ_{t+}}
or {ℱ_t}. We shall prove most results for the former case but some of them
extend to the latter.

Proposition 2. If S and T are optional, then so are

S ∧ T,   S ∨ T,   S + T.

Proof. The first two are easy; to treat the third we consider for each t:

{S + T < t} = ∪_{r∈Q_t} {S < r; T < t − r},

where Q_t = Q ∩ (0, t). Clearly each member of the union is in ℱ_t; hence so
is the union. □

Proposition 3. If T_n is optional for each n ≥ 1, then so are

sup_n T_n,   inf_n T_n,   lim sup_n T_n,   lim inf_n T_n. (2)

Proof. This follows from Proposition 1 and

{sup_n T_n ≤ t} = ∩_n {T_n ≤ t} ∈ ℱ_{t+};
{inf_n T_n < t} = ∪_n {T_n < t} ∈ ℱ_t;
lim sup_n T_n = inf_{m≥1} sup_{n≥m} T_n;
lim inf_n T_n = sup_{m≥1} inf_{n≥m} T_n. □

For strictly optional T_n only the first item in (2) is strictly optional. In
Example 3 above, if we set T_n = T + 1/n then each T_n is strictly optional
but T = inf_n T_n is not.
Associated with an optional time T, there is a σ-field defined as follows.

Definition. If T is strictly optional, ℱ_T is the class of sets A in ℱ such that

∀t ∈ [0, ∞): A ∩ {T ≤ t} ∈ ℱ_t.

If T is optional, ℱ_{T+} is the class of sets A in ℱ such that

∀t ∈ [0, ∞): A ∩ {T ≤ t} ∈ ℱ_{t+}. (3)

It follows from the proof of Proposition 1 that (3) is equivalent to

∀t ∈ (0, ∞]: A ∩ {T < t} ∈ ℱ_t.

Let us verify, for example, that ℱ_{T+} is a σ-field. We see from (3) that
Ω ∈ ℱ_{T+} because T is optional. It then follows that ℱ_{T+} is closed under
complementation. It is also closed under countable union; hence it is a

σ-field. For T ≡ a constant t, ℱ_T reduces to ℱ_t, and ℱ_{T+} reduces to ℱ_{t+}.
Thus our definition and notation make sense and it is a leading idea below
to extend properties concerning constant times to their analogues for op-
tional times.
It is easy to see that if {ℱ_t} is right continuous, then any optional T is
strictly optional, and furthermore we have ℱ_T = ℱ_{T+}.

Proposition 4. If T is optional and A ∈ ℱ_{T+}, we define

T_A = T on A,   T_A = ∞ on Ω − A. (4)

Then T_A is also optional.

Proof. For t < ∞, we have

{T_A < t} = A ∩ {T < t} ∈ ℱ_t.

This implies {T_A = ∞} ∈ ℱ_∞ by the remark after Proposition 1. □

Theorem 5.
(a) If T is optional, then T ∈ ℱ_{T+};
(b) If S and T are both optional and S ≤ T, then ℱ_{S+} ⊂ ℱ_{T+};
(c) If T_n is optional, T_{n+1} ≤ T_n for each n ≥ 1, and lim_n T_n = T, then

ℱ_{T+} = ⋀_{n=1}^∞ ℱ_{T_n+}. (5)

In case each T_n is strictly optional, we may replace ℱ_{T_n+} by ℱ_{T_n} in (5).

Proof. We leave (a) and (b) as exercises. To prove (c), we remark that T
is optional by Proposition 3. Hence we have by (b):

ℱ_{T+} ⊂ ⋀_{n=1}^∞ ℱ_{T_n+}. (6)

Now let A ∈ ⋀_{n=1}^∞ ℱ_{T_n+}; then for each t:

A ∩ {T < t} = ∪_{n=1}^∞ [A ∩ {T_n < t}] ∈ ℱ_t,

so that A ∈ ℱ_{T+}. Hence the inclusion in (6) may be reversed. □



Theorem 6. If S and T are both optional, then the sets

{S < T},   {S ≤ T},   {S = T}

belong to ℱ_{S+} ∧ ℱ_{T+}.

Proof. We have for each t:

{S < T} ∩ {T < t} = ∪_{r∈Q_t} {S < r < T < t},

where Q_t = Q ∩ (0, t). For each r, {r < T < t} ∈ ℱ_t and {S < r} ∈ ℱ_r ⊂ ℱ_t.
Hence the union above belongs to ℱ_t and this shows {S < T} ∈ ℱ_{T+}.
Next we have

{S < T} ∩ {S < t} = {S < T ∧ t} = ∪_{r∈Q_t} {S < r < T ∧ t}
= ∪_{r∈Q_t} {S < r < T} ∈ ℱ_t,

since for each r < t, {r < T} = {T ≤ r}^c ∈ ℱ_{r+} ⊂ ℱ_t. Hence {S < T} ∈ ℱ_{S+}.
Combining the two results we obtain {S < T} ∈ ℱ_{S+} ∧ ℱ_{T+}. Since

{S ≤ T} = {T < S}^c,   {S = T} = {T ≤ S} − {T < S},

the rest of the theorem follows. □


We will now extend the notion of the σ-field ℱ_{t−} to a random time T.
It turns out that we can do so with an arbitrary positive function T, but
ℱ_{T−} will have nice properties when T is an optional time.

Definition. For any function T from Ω to [0, ∞], ℱ_{T−} is the σ-field generated
by ℱ_{0+} and the class of sets:

{t < T} ∩ A,   where t ∈ [0, ∞) and A ∈ ℱ_t. (7)

It is easy to see that ℱ_{T−} contains also sets of the form

{T = ∞} ∩ A,   where A ∈ ℱ_∞. (8)

As defined above, ℱ_{T−} need not be included in ℱ; but this is the case if
T ∈ ℱ, as we shall assume in what follows. It is obvious that T ∈ ℱ_{T−}.
If T is optional, then ℱ_{0+} ⊂ ℱ_{T+} by Theorem 5(b). Next, for each
u ≥ 0, the set
1.3. Optional Times 17

is empty for u ~ t, and belongs to ff'u if t < u and A E !Fr. Hence each gen-
erating set of ff'T- in (7) belongs to ff'T+, by definition of the latter. In other
words, we have for an optional T:

(9)

Similarly if T is strictly optional, then ff'T- c ff'T. The next proposition is


sharper; note that it reduces to ff'o+ c ff'T- for S == 0, which is true by
definition of ff'T-.

Proposition 7. If $S$ and $T$ are both optional and $S \le T$, then $\mathscr{F}_{S-} \subset \mathscr{F}_{T-}$. If moreover $S < T$ on $\{S < \infty\} \cap \{T > 0\}$, then

$$\mathscr{F}_{S+} \subset \mathscr{F}_{T-}. \tag{10}$$

Proof. The first assertion is easy. Under the hypothesis of the second assertion, we have for any $\Lambda$:

$$\Lambda = \bigcup_{r \in Q} \left[\Lambda \cap \{S < r < T\}\right] \cup \left[\Lambda \cap \{S = \infty\}\right] \cup \left[\Lambda \cap \{T = 0\}\right].$$

If $\Lambda \in \mathscr{F}_{S+}$, then $\Lambda \cap \{S < r\} \in \mathscr{F}_r$ for each $r$, and so

$$[\Lambda \cap \{S < r\}] \cap \{r < T\} \in \mathscr{F}_{T-}.$$

Next, $\Lambda \cap \{S = \infty\} \in \mathscr{F}_\infty$ and so

$$\Lambda \cap \{S = \infty\} = [\Lambda \cap \{S = \infty\}] \cap \{T = \infty\} \in \mathscr{F}_{T-}.$$

Finally,

$$\Lambda \cap \{T = 0\} = [\Lambda \cap \{S = 0\}] \cap \{T = 0\} \in \mathscr{F}_{T-},$$

because $\Lambda \cap \{S = 0\} \in \mathscr{F}_{0+} \subset \mathscr{F}_{T-}$ and $\{T = 0\} \in \mathscr{F}_{T-}$. Combining these results we obtain $\Lambda \in \mathscr{F}_{T-}$. □

Next, we introduce a special class of optional times which plays an
essential role in the general theory of stochastic processes. Although its use
will be limited in this introductory text, the discussion below will serve the
purpose of further strengthening the analogy between optional and constant
times.

Definition. $T$ is a predictable time iff there exists a sequence of optional times $\{T_n\}$ such that

$$\forall n: T_n < T \text{ on } \{T > 0\}; \quad \text{and} \quad T_n \uparrow T. \tag{11}$$



Here "$T_n \uparrow T$" is a standard notation which means: "$\forall n: T_n \le T_{n+1}$ and $\lim_n T_n = T$." The limit $T$ is optional by Proposition 3. Observe also that $T_n < \infty$ for each $n$. The sequence $\{T_n\}$ is said to announce $T$.
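A standard illustration, immediate from the definition (the example is ours, not the text's): every constant time is predictable.

```latex
% A constant time T \equiv t with t > 0 is predictable: it is announced by
% the constant (hence optional) times
T_n = t\Bigl(1 - \frac{1}{n+1}\Bigr),
% since T_n < t = T for every n, and T_n \uparrow T.
```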

Theorem 8. If $T_n$ is optional and $T_n \uparrow T$, then

$$\mathscr{F}_{T-} = \bigvee_{n=1}^{\infty} \mathscr{F}_{T_n-}. \tag{12}$$

If $T$ is predictable and $\{T_n\}$ announces $T$, then we have also

$$\mathscr{F}_{T-} = \bigvee_{n=1}^{\infty} \mathscr{F}_{T_n+}. \tag{13}$$

Proof. We have "$\supset$" in (12) by Proposition 7. To prove the reverse inclusion, we have if $\Lambda \in \mathscr{F}_t$:

$$\{t < T\} \cap \Lambda = \bigcup_{n=1}^{\infty} \left[\{t < T_n\} \cap \Lambda\right] \in \bigvee_{n=1}^{\infty} \mathscr{F}_{T_n-}.$$

Also $\mathscr{F}_{0+} \subset \mathscr{F}_{T_n-}$ for each $n$. Hence each generating set of $\mathscr{F}_{T-}$ belongs to the right member of (12), and (12) is proved. If $\{T_n\}$ announces the predictable $T$, then $\mathscr{F}_{T-} \supset \mathscr{F}_{T_n+}$ by Proposition 7, and so we have "$\supset$" in (13). Since $\mathscr{F}_{T_n+} \supset \mathscr{F}_{T_n-}$ for each $n$, the reverse inclusion follows from (12). □

We now return to the Markov process $X = \{X_t, \mathscr{F}_t, t \in \mathbf{T}\}$.

Definition. $X$ is said to be Borel measurable iff

$$(t, \omega) \to X(t, \omega)$$

is in $\mathscr{B} \times \mathscr{F}$, where $\mathscr{B}$ is the Euclidean Borel field on $\mathbf{T}$. $X$ is said to have right limits iff for each $\omega$, the sample function $t \to X(t, \omega)$ has right limits everywhere in $\mathbf{T}$; $X$ is said to be right continuous iff for each $\omega$, the sample function $t \to X(t, \omega)$ is right continuous in $\mathbf{T}$. Similarly, "left" versions of these properties are defined in $(0, \infty]$.

Proposition 9. If $X$ is right [or left] continuous, then it is Borel measurable.

Proof. For each $n \ge 0$, define

$$X_n(t, \omega) = X\left(\frac{k+1}{2^n}, \omega\right) \quad \text{for } \frac{k}{2^n} \le t < \frac{k+1}{2^n}, \; k \ge 0. \tag{14}$$

It is clear that

$$(t, \omega) \to X_n(t, \omega) = \sum_{k=0}^{\infty} 1_{[k/2^n,\,(k+1)/2^n)}(t)\, X\left(\frac{k+1}{2^n}, \omega\right)$$

is Borel measurable. If $X$ is right continuous, then

$$X(t, \omega) = \lim_n X_n(t, \omega)$$

for $(t, \omega) \in \mathbf{T} \times \Omega$. Hence $X$ is Borel measurable; see the Lemma in §1.5 below for a proof. If $X$ is left continuous we need only modify the definition in (14) by omitting the "$+1$" in the right member. □

Definition. Let $X$ be Borel measurable and $T$ an $\mathscr{F}$-measurable function from $\Omega$ to $[0, \infty]$ (namely an extended-valued random variable). We define $X_T$, or $X(T)$, on the set $\{T < \infty\}$ by

$$X_T(\omega) = X(T(\omega), \omega). \tag{15}$$

If $X_\infty(\omega)$ is defined for all $\omega \in \Omega$, then $X_T$ is defined on $\Omega$ by (15).

The next result extends the hypothesis "$X_t \in \mathscr{F}_t$".

Theorem 10. Let $X$ have right limits. If $T$ is optional, then

$$X_{T+}\, 1_{\{T < \infty\}} \in \mathscr{F}_{T+}. \tag{16}$$

If $X$ has left limits as well, and $T$ is predictable, then

$$X_{T-}\, 1_{\{T < \infty\}} \in \mathscr{F}_{T-}, \tag{17}$$

where $X_{0-} = X_0$.

Note. If $T(\omega) = \infty$, the left members of (16) and (17) are defined to be zero by convention, even when $X_{T+}$ or $X_{\infty-}$ are not defined.

Proof. Define for $n \ge 0$:

$$T_n = \frac{[2^n T] + 1}{2^n}. \tag{18}$$

This is just the usual dyadic approximation of $T$ from above. For each $n$, $T_n$ is strictly optional (proof?) and $T_n > T$, $\lim_n \downarrow T_n = T$. For convenience let us introduce the "dyadic set"

$$D = \left\{\frac{k}{2^n} \,\middle|\, k \in N, \, n \in N\right\}. \tag{19}$$

For each $d \in D$ and $B \in \mathscr{E}$ we have

$$\{T_n = d;\, X(T_n) \in B\} = \{T_n = d;\, X(d) \in B\} \in \mathscr{F}_d$$

because $T_n$ is strictly optional. Hence for each $t$,

$$\{X(T_n) \in B\} \cap \{T_n < t\} = \bigcup_{d \in D_t} \{T_n = d;\, X(T_n) \in B\} \in \mathscr{F}_t,$$

where $D_t = D \cap [0, t)$. This shows that $\{X(T_n) \in B;\, T_n < \infty\}$ belongs to $\mathscr{F}_{T_n+}$ (actually to $\mathscr{F}_{T_n}$); since $B$ is arbitrary, we have $X(T_n) 1_{\{T_n < \infty\}} \in \mathscr{F}_{T_n+}$. Since $X(T+) = \lim_n X(T_n)$ on $\{T < \infty\}$, which belongs to $\mathscr{F}_{T+}$, and $\mathscr{F}_{T_n+} \supset \mathscr{F}_{T+}$, it follows that

$$X(T+)\, 1_{\{T < \infty\}} \in \bigwedge_{n=1}^{\infty} \mathscr{F}_{T_n+} = \mathscr{F}_{T+}$$

by Theorem 5.

If $T$ is predictable, let $\{T_n\}$ announce $T$. Then

$$\lim_n X(T_n) = X(T-)$$

on $\{0 < T < \infty\}$, provided $X$ has left limits in $(0, \infty)$, and trivially on $\{T = 0\}$, provided we define $X(0-) = X(0)$. Now for each $n$, we have just shown that $X(T_n) \in \mathscr{F}_{T_n+}$; hence

$$X(T-)\, 1_{\{T < \infty\}} \in \bigvee_{n=1}^{\infty} \mathscr{F}_{T_n+} = \mathscr{F}_{T-}$$

by Theorem 8, if we note that $T \in \mathscr{F}_{T-}$. If perchance $X$ has left limits in $(0, \infty]$ so that $X_{\infty-}$ is defined, then we have $X(T-) \in \mathscr{F}_{T-}$. □
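The dyadic approximation in (18) is easy to experiment with numerically. The following sketch (our illustration; the function name is ours, not the text's) checks the two properties used in the proof: $T_n > T$ and $T_n \downarrow T$.

```python
import math

def dyadic_above(t, n):
    # the approximation of (18): T_n = ([2^n T] + 1) / 2^n,
    # i.e. the smallest dyadic of order n strictly greater than t
    return (math.floor(2 ** n * t) + 1) / 2 ** n

t = 0.3
approx = [dyadic_above(t, n) for n in range(1, 30)]
```

Each entry of `approx` is a dyadic rational strictly above `t`, and the sequence decreases to `t` as the order grows.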

We can extend the shift $\theta_t$ to $\theta_T$ for any function $T$ from $\Omega$ to $[0, \infty]$. Observe that the defining equation for the shift, (6) of §1.2, is an identity in $s$ and $t$ for each $\omega$. It follows that on the set $\{T < \infty\}$ we have

$$\theta_t \circ \theta_T = \theta_{T+t}. \tag{20}$$

Note that $T + t$ is another function of the same type as $T$. Thus we have

$$X_t \circ \theta_T = X_{T+t}$$

as functions on $\{T < \infty\}$. If $T$ is optional, then so is $T + t$; indeed $T + t$ is strictly optional for $t > 0$. Hence by Theorem 10, if $X$ is right continuous:

$$X_{T+t}\, 1_{\{T < \infty\}} \in \mathscr{F}_{(T+t)+}. \tag{21}$$

So far so good, but the next step is to consider the inverse mapping $\theta_T^{-1}$. This would be awkward if $\theta_T$ were to be defined on $\{T < \infty\}$ only. Here is the device to extend it to $\Omega$. Put

$$\forall \omega \in \Omega: X_\infty(\omega) = \partial.$$

Postulate the existence of a point $\omega_\partial$ in $\Omega$ for which

$$\forall t \in [0, \infty): X_t(\omega_\partial) = \partial;$$

this then holds for $t \in [0, \infty]$. Finally, define

$$\forall \omega \in \Omega: \theta_\infty(\omega) = \omega_\partial.$$

After this fixing, all the terms in (20) reduce to $\omega_\partial$ when $t = \infty$ or $T = \infty$ or both. Hence (20) now holds for each $t \in [0, \infty]$ and any $T$ from $\Omega$ to $[0, \infty]$. The existence of $\omega_\partial$ is of course trivial in the case where $\Omega = E_\partial^{[0,\infty]}$, and can in general be arranged by enlarging the given $\Omega$.
With this amendment, we can rewrite (16) simply as

$$X_{T+} \in \mathscr{F}_{T+}.$$

In equation (21) we can now omit the factor $1_{\{T < \infty\}}$, and the result is equivalent to

$$\theta_T^{-1}\bigl(\sigma(X_t)\bigr) \subset \mathscr{F}_{(T+t)+},$$

where $\sigma(X_t)$ is the $\sigma$-field generated by $X_t$. Since $\mathscr{F}_{(T+t)+}$ increases with $t$, it follows by the usual extension that

$$\theta_T^{-1}(\mathscr{F}_t^0) \subset \mathscr{F}_{(T+t)+}. \tag{22}$$

Let us give a useful application at once.
Let us give a useful application at once.

Theorem 11. Let $S$ be optional relative to $\{\mathscr{F}_t^0\}$ and $T$ be optional relative to $\{\mathscr{F}_t\}$; then

$$T + S \circ \theta_T \tag{23}$$

is optional relative to $\{\mathscr{F}_t\}$.

Proof. For each $t$, we have by (22)

$$\{S \circ \theta_T < t\} = \theta_T^{-1}\{S < t\} \in \mathscr{F}_{(T+t)+}. \tag{24}$$

Hence for each $r < t$:

$$\{T < r\} \cap \{S \circ \theta_T < t - r\} \in \mathscr{F}_t,$$

by (24) with $t - r$ replacing $t$ and the definition of $\mathscr{F}_{(T+t-r)+}$. Now we have

$$\{T + S \circ \theta_T < t\} = \bigcup_{r \in Q_t} \{T < r;\, S \circ \theta_T < t - r\},$$

and each member of the union above belongs to $\mathscr{F}_t$. Hence the set above belongs to $\mathscr{F}_t$, and this proves the theorem. □

Remark. Later we will extend the result to an $S$ which is optional relative to $\{\mathscr{F}_t^\mu\}$ or $\{\mathscr{F}_t\}$; see §2.3.

What does the random variable in (23) mean? Let $A$ be a set in $\mathscr{E}$ and define

$$D_A(\omega) = \inf\{t \ge 0 \mid X(t, \omega) \in A\}. \tag{25}$$

This is called the "first entrance time into $A$", and is a prime example of an optional time. Indeed, it was also historically the first such time to be considered. The reader may well recall instances of its use in the classical theory of probability. However, for a continuous time Markov process it is not a trivial matter to verify that $D_A$ is optional for a general $A$. This is easy only for an open set $A$, and we shall treat other sets in §3.3. Suppose this has been shown; we can identify the random variable in (23) when $S = D_A$ as follows:

$$T(\omega) + D_A \circ \theta_T(\omega) = \inf\{t \ge T(\omega) \mid X(t, \omega) \in A\}. \tag{26}$$

In particular, if $T(\omega) \equiv s$, then $s + D_A \circ \theta_s$ is "the first entrance time into $A$ on or after time $s$"; if $T(\omega) = D_B(\omega)$ where $B \in \mathscr{E}$, then $D_B + D_A \circ \theta_{D_B}$ is "the first entrance time into $A$ on or after the first entrance time into $B$". Of course, a similar interpretation is meaningful for any optional time $T$. We shall see many applications to Markov processes involving such random times.
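In discrete time the composition is transparent; the sketch below (an illustration of ours, with made-up data, not from the text) computes a first entrance time and the iterated time $D_B + D_A \circ \theta_{D_B}$ on a finite path.

```python
def first_entrance(path, A, start=0):
    # discrete analogue of D_A: first index t >= start with path[t] in A;
    # None plays the role of +infinity (the path never enters A)
    for t in range(start, len(path)):
        if path[t] in A:
            return t
    return None

path = [5, 3, 7, 2, 7, 1]
d_B = first_entrance(path, {7})                 # first entrance into B = {7}
d_BA = first_entrance(path, {1, 2}, start=d_B)  # entrance into A on or after d_B
```

Here `d_BA` is "the first entrance into A on or after the first entrance into B", exactly the interpretation of (26) with $T = D_B$.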
The definitions and results of this section have been given without mention of probability. It is usually obvious how to modify them when exceptional sets are allowed. For instance, if each $T_n$ is defined only $P$-a.e., and $\lim_n T_n$ exists only $P$-a.e., then there is a set $\Omega_0$ with $P(\Omega_0) = 1$ on which all $T_n$'s are defined and $\lim_n T_n$ exists for all $\omega$ in $\Omega_0$. Furthermore, if $\{T_n \le t\}$ differs from a set in $\mathscr{F}_t$ by a $P$-null set, then $\Omega_0 \cap \{T_n \le t\} \in \mathscr{F}_t$ provided that $(\mathscr{F}, P)$ is complete and $\mathscr{F}_0$ ($\subset \mathscr{F}_t$) is augmented (by all $P$-null sets). Thus $T_n$ is optional considered on the trace of $(\Omega, \mathscr{F}_t)$ on $\Omega_0$. In this way we can apply the preceding discussions to a reduced probability space, and we shall often do so tacitly. On the other hand, there are deeper results concerning optionality in which probability considerations are expedient; see for example Problem 12 below.

Exercises
1. If $T$ is optional and $S = \varphi(T) > T$ where $\varphi$ is a Borel function, then $S$ is strictly optional, indeed predictable. In general a strictly positive predictable time is strictly optional.
2. $\mathscr{F}_{T-}$ is also generated by $\mathscr{F}_{0+}$ and one of the two classes of sets below:
   (i) $\{t \le T\} \cap \Lambda$, where $0 \le t < \infty$ and $\Lambda \in \mathscr{F}_{t-}$;
   (ii) $\{t < T\} \cap \Lambda$, where $0 \le t < \infty$ and $\Lambda \in \mathscr{F}_{t+}$.
3. If $S$ and $T$ are functions from $\Omega$ to $[0, \infty]$ such that $S \le T$, then $\mathscr{F}_{S-} \subset \mathscr{F}_{T-}$ if and only if $S \in \mathscr{F}_{T-}$. This is the case if $S$ is optional. On the other hand, if $S$ is arbitrary and $T$ is optional, $\mathscr{F}_{S-} \subset \mathscr{F}_{T+}$ may be false.
4. Define for any function $T$ from $\Omega$ to $[0, \infty]$ the $\sigma$-field

$$\mathscr{F}_{T+} = \bigwedge_{n=1}^{\infty} \mathscr{F}_{(T+1/n)-}.$$

Prove that if $T$ is optional, this coincides with the definition given in (3).
5. If $T$ is optional then

$$\mathscr{F}_{T+} = \bigvee_{n=1}^{\infty} \mathscr{F}_{(T \wedge n)+}.$$

6. Let $S$ be optional relative to $\{\mathscr{F}_t\}$ and define $\mathscr{G}_t = \mathscr{F}_{(S+t)+}$, so that $\mathscr{G}_0 = \mathscr{F}_{S+}$. Prove that if $T \ge S$, then $T$ is optional relative to $\{\mathscr{F}_t\}$ if and only if $T - S$ is optional relative to $\{\mathscr{G}_t\}$.
7. If $T$ is predictable and $\Lambda \in \mathscr{F}_{T-}$, then the $T_\Lambda$ defined in (4) is predictable.
8. Let $\{T^{(k)}\}$ be a sequence of predictable times. If $T^{(k)}$ increases to $T$, then $T$ is predictable. If $T^{(k)}$ "settles down" to $T$, namely $T^{(k)}$ decreases and for each $\omega$ there exists an integer $N(\omega)$ for which $T^{(N(\omega))}(\omega) = T(\omega)$, then $T$ is "almost surely predictable". [Hint: for the second assertion let $\{T_n^{(k)}\}$ announce $T^{(k)}$ such that $P(T_n^{(k)} + 2^{-n} < T) < 2^{-n-k}$ for all $k$ and $n$. Let $S_n = \inf_k T_n^{(k)}$; then $\{S_n\}$ announces $T$ almost surely.]
9. If $S$ is optional and $T$ is arbitrary positive, then

$$\mathscr{F}_{S+} \cap \{S < T\} \subset \mathscr{F}_{T-};$$

namely, for each $\Lambda \in \mathscr{F}_{S+}$, $\Lambda \cap \{S < T\} \in \mathscr{F}_{T-}$.
10. If $S$ and $T$ are both optional, then exactly one of the following four relations is in general false:

$$\mathscr{F}_{(S \wedge T)+} = \mathscr{F}_{S+} \wedge \mathscr{F}_{T+}; \qquad \mathscr{F}_{(S \wedge T)-} = \mathscr{F}_{S-} \wedge \mathscr{F}_{T-};$$
$$\mathscr{F}_{(S \vee T)+} = \mathscr{F}_{S+} \vee \mathscr{F}_{T+}; \qquad \mathscr{F}_{(S \vee T)-} = \mathscr{F}_{S-} \vee \mathscr{F}_{T-}.$$

11. Let $\{\mathscr{F}_t\}$ be an increasing family of $\sigma$-fields, and $\mathscr{G}_t$ be the $\sigma$-field generated by $\mathscr{F}_t$ and all $P$-null sets. Suppose $S$ is optional relative to $\{\mathscr{G}_t\}$. Prove that there exists $T$ which is optional relative to $\{\mathscr{F}_t\}$ and $P(S = T) = 1$. Furthermore, if $\Lambda \in \mathscr{G}_{S+}$ then there exists $M \in \mathscr{F}_{T+}$ such that $P(\Lambda \,\triangle\, M) = 0$. [Hint: approximate $S$ as in (18); for the second assertion, consider $\{S_\Lambda = T < \infty\}$ where $S_\Lambda$ is defined as in (4).]
12. If $T$ is predictable, there exists an "almost surely" announcing sequence $\{T_n\}$ such that each $T_n$ is countably-valued with values in the dyadic set. Namely, we have for each $n$, $T_n \le T_{n+1} \le T$, $T_n < T$ on $\{T > 0\}$, and $P\{\lim_n T_n = T\} = 1$. Furthermore each $T_n$ is predictable. [Hint: let $T_n^{(k)} = ([2^k T_n] + 1)/2^k$. Show that there exists $\{k_n\}$ such that $P\{T_n^{(k_n)} \ge T \text{ i.o.}\} = 0$. Put $S_m = \inf_{n \ge m} T_n^{(k_n)}$ and show that $S_m$ is dyadic-valued. For the last assertion use Problems 1 and 8.]

1.4. Martingale Theorems

Martingale theory is an essential tool in the analysis of Markov processes. In this section we give a brief account of some of the basic results in continuous time, with a view to applications in later sections. It is assumed that the reader is acquainted with the discrete time analogues of these results, such as discussed in Chapter 9 of Course. A theorem stated there for a submartingale will be cited for a supermartingale without comment, since it amounts only to a change of signs for the random variables involved.

Let $(\Omega, \mathscr{F}, P)$ be a probability space; $\{\mathscr{F}_t\}$ an increasing family of $\sigma$-fields (of sets in $\mathscr{F}$) to which a stochastic process $\{X_t\}$ is adapted, as in §1.1. The index set for $t$ is $\mathbf{T} = [0, \infty)$ when this is not specified. We shall use the symbol $N$ to denote the set of positive ($\ge 0$) integers, and $N_m$ the subset of $N$ not exceeding $m$, where $m \in N$. As usual, indices such as $m$, $n$ and $k$ denote members of $N$ without explicit mention.

Definition. $\{X_t, \mathscr{F}_t, t \in \mathbf{T}\}$ is a martingale iff

(i) $\forall t$: $X_t$ is an integrable real-valued random variable;
(ii) if $s < t$, then

$$X_s = E[X_t \mid \mathscr{F}_s]. \tag{1}$$

$X$ is a supermartingale iff the "$=$" in (1) is replaced by "$\ge$"; and a submartingale iff it is replaced by "$\le$".

We write $s \downarrow\downarrow t$ to mean "$s > t$, $s \to t$", and $s \uparrow\uparrow t$ to mean "$s < t$, $s \to t$". Let $S$ be a countable dense subset of $\mathbf{T}$; for convenience we suppose $S \supset N$.
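The defining identity (1) can be verified exactly on a finite space. In the sketch below (our own toy example; the numbers are arbitrary and not from the text), $\Omega$ has eight equally likely points, the fields are generated by successively finer dyadic partitions, and the process $X_n = E(Y \mid \mathscr{F}_n)$ then satisfies (1) by the tower property.

```python
def cond_exp(y, blocks):
    # E(Y | F) on a uniform finite space: F is generated by the partition
    # `blocks`; conditioning means averaging y over each block
    out = [0.0] * len(y)
    for block in blocks:
        avg = sum(y[i] for i in block) / len(block)
        for i in block:
            out[i] = avg
    return out

y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
partitions = [
    [list(range(8))],                  # trivial field F_0
    [[0, 1, 2, 3], [4, 5, 6, 7]],      # F_1
    [[0, 1], [2, 3], [4, 5], [6, 7]],  # F_2
]
X = [cond_exp(y, p) for p in partitions]  # X_n = E(y | F_n)
```

Conditioning a finer level back down reproduces the coarser levels, which is exactly (1) for this finite filtration.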

Theorem 1. Let $\{X_t, \mathscr{F}_t\}$ be a supermartingale. For $P$-a.e. $\omega$, the sample function $X(\cdot, \omega)$ restricted to the set $S$ has a right limit at every $t \in [0, \infty)$, and a left limit at every $t \in (0, \infty)$. Thus we have:

$$X(t+, \omega) = \lim_{s \in S,\, s \downarrow\downarrow t} X(s, \omega) \quad \text{exists for } t \in [0, \infty);$$
$$X(t-, \omega) = \lim_{s \in S,\, s \uparrow\uparrow t} X(s, \omega) \quad \text{exists for } t \in (0, \infty). \tag{2}$$

Furthermore, for each finite interval $I \subset \mathbf{T}$, the set of numbers

$$\{X(s, \omega), \, s \in S \cap I\} \tag{3}$$

is bounded. Hence the limits in (2) are finite numbers.

ProoJ. Let {X m .'#'n, n E Nm} be a discrete parameter supermartingale, and let

U(w; Nm; [a, b]), where a < b,

denote thc number of upcrossings from strictly below a to strictly above


b by the sampIe sequence [Xn(w), n E Nm}. We recall that the upcrossing
inequality states that

(4)

See Theorem 9.4.2 of Course. Observe that this inequality really applies
to a supermartingale indexed by any finite linearly ordered set with m as
the last index. In other words, the bound given on the right side of (4) de-
pends on the last random variable of the supermartingale sequence, not
on the number of terms in it (as long as the number is finite). Hence if we
consider the supermartingale {X t } with the index t restricted to [0, m] n S',
where S' is any finite subset of S containing m, and denote the corresponding
upcrossing number by U([O,m] n S'; [a,b]), then exactly the same bound
as in (4) applies to it. Now let S' increase to S, then the upcrossing number
increases to U([O,m] n S; [a,b]). Hence by the monotone convergence
theorem, we have

f []
E1U(O,m nS; a,b)
[]}
s b-a+ a <00.
E(X;;;) (5)

It follows that for P-a.e. w.

U(w; [O,m] n S; [a,b]) <00. (6)

This is true for each mE N and each pair of rational numbers (a, b) such
that a < b. Therefore (6) holds simultaneously far all mE N and all such
26 1. Markov Proccss

pairs in a set D o with P(D o) = 1. We claim that for each W E D o, the limit
in (2) must exist. lt is sufficient to consider the right limit. Suppose that for
some t, !imsES . .,w X(s, w) does not exist. Then we have firstly

lim X(s,w) <!im X(s,w),

and hence secondly two rational numbers a and h such that

lim X(s, w) < (/ < b < !im X(s, w).

But this means that the sam pie function X(·, w) crosses from (/ to h infinitely
many times on the set S in any right neighborhood of t. Since wEDa this
behavior is ruled out by definition of Da.
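The upcrossing number can be computed mechanically for a finite sample sequence; the sketch below (our illustration, not from Course) counts completed passages from strictly below $a$ to strictly above $b$.

```python
def upcrossings(xs, a, b):
    # count upcrossings of [a, b]: each completed passage from a value
    # strictly below a to a later value strictly above b
    count, below = 0, False
    for x in xs:
        if not below and x < a:
            below = True
        elif below and x > b:
            count += 1
            below = False
    return count

u = upcrossings([0.5, 2.5, 0.0, 3.0, 1.5, 2.6, 0.9, 2.2], 1.0, 2.0)
```

Restricting the sequence to a subset can only lower the count, which is the monotonicity used above to pass from $S'$ to $S$.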
To prove the second assertion of the theorem, we recall the following inequality for a discrete parameter supermartingale (Theorem 9.4.1 of Course). For any $\lambda > 0$:

$$\lambda P\Bigl\{\max_{n \in N_m} |X_n| \ge \lambda\Bigr\} \le E(X_0) + 2E(X_m^-).$$

This leads by the same arguments as before to

$$\lambda P\Bigl\{\sup_{t \in [0, m] \cap S} |X_t| \ge \lambda\Bigr\} \le E(X_0) + 2E(X_m^-). \tag{7}$$

Dividing by $\lambda$ and letting $\lambda \to \infty$, we conclude that

$$P\Bigl\{\sup_{t \in [0, m] \cap S} |X_t| = \infty\Bigr\} = 0, \quad \text{or} \quad P\Bigl\{\sup_{t \in [0, m] \cap S} |X_t| < \infty\Bigr\} = 1.$$

This means that the sample function $X(\cdot, \omega)$ restricted to $[0, m] \cap S$ is bounded $P$-a.e., say in a set $\Delta_m$ with $P(\Delta_m) = 1$. Then $P(\bigcap_{m=1}^{\infty} \Delta_m) = 1$, and if $\omega \in \bigcap_{m=1}^{\infty} \Delta_m$, $X(\cdot, \omega)$ is bounded in $I \cap S$ for each finite interval $I$. □
We shall say "almost surely" as a paraphrase for "for $P$-a.e. $\omega$".

Corollary 1. If $X$ is right continuous, then it has left limits everywhere in $(0, \infty)$ and is bounded on each finite interval, almost surely.

Proof. The boundedness assertion in Theorem 1 and the hypothesis of right continuity imply that $X$ is bounded on each finite interval. Next, if a sample function $X(\cdot, \omega)$ is right continuous but does not have a left limit at some $t$, then it must oscillate between two numbers $a$ and $b$ infinitely many times in a left neighborhood of $t$. Because the function is right continuous at the points where it takes the values $a$ and $b$, these oscillations also occur on the countable dense set $S$. This is almost surely impossible by the theorem. □

Corollary 2. If $X$ is right continuous and either (a) $X_t \ge 0$ for each $t$, or (b) $\sup_t E(|X_t|) < \infty$, then $\lim_{t \to \infty} X_t$ exists almost surely and is an integrable random variable.

Proof. Under either condition we can let $m \to \infty$ in (5) to obtain

$$E\{U(S; [a, b])\} \le \frac{A + |a|}{b - a}, \tag{8}$$

where $A = 0$ under (a), and $A$ denotes the sup under (b). Now let $U(\omega; [a, b])$ denote the total number of upcrossings of $[a, b]$ by the unrestricted sample function $X(\cdot, \omega)$. We will show that if $X(\cdot, \omega)$ is right continuous, then

$$U(\omega; S; [a, b]) = U(\omega; [a, b]). \tag{9}$$

For if the right member above be $\ge k$, then there exist $s_1 < t_1 < \cdots < s_k < t_k$ such that $X(s_j, \omega) < a$ and $X(t_j, \omega) > b$ for $1 \le j \le k$. Hence by right continuity at these $2k$ points, there exist $s_1' < t_1' < \cdots < s_k' < t_k'$, all members of $S$, with $s_1 < s_1' < t_1 < t_1' < \cdots < s_k < s_k' < t_k < t_k'$, such that $X(s_j', \omega) < a$ and $X(t_j', \omega) > b$ for $1 \le j \le k$. Thus the left member of (9) must also be $\ge k$. Of course it cannot exceed the right member. Hence (9) is proved. We have therefore

$$E\{U(\omega; [a, b])\} \le \frac{A + |a|}{b - a} < \infty. \tag{10}$$

It then follows as before that almost surely no sample function oscillates between any two distinct values infinitely many times in $\mathbf{T}$, and so it must tend to a limit as $t \to \infty$, possibly $\pm\infty$. Let the limit be $X_\infty$. Under condition (a), we have by Fatou's lemma and the supermartingale property:

$$E(X_\infty) \le \liminf_n E(X_n) \le E(X_0) < \infty.$$

Under condition (b) we have similarly:

$$E(|X_\infty|) \le \liminf_n E(|X_n|) \le \sup_t E(|X_t|).$$

Hence $X_\infty$ is integrable in either case. □

Theorem 2. Let $\{X_t, \mathscr{F}_t\}$ be a supermartingale, and $X_{t+}$ and $X_{t-}$ be as in (2). We have

$$\forall t \in [0, \infty): X_t \ge E[X_{t+} \mid \mathscr{F}_t], \tag{11}$$
$$\forall t \in (0, \infty): X_{t-} \ge E[X_t \mid \mathscr{F}_{t-}]. \tag{12}$$

Furthermore $\{X_{t+}, \mathscr{F}_{t+}\}$ is a supermartingale; it is a martingale if $\{X_t, \mathscr{F}_t\}$ is.

Proof. Let $t_n \in S$, $t_n \downarrow\downarrow t$; then

$$X_t \ge E\{X_{t_n} \mid \mathscr{F}_t\}. \tag{13}$$

Now the supermartingale $\{X_s\}$ with $s$ in the index set $\{t, \ldots, t_n, \ldots, t_1\}$ is uniformly integrable; see Theorem 9.4.7(c) of Course. Hence we obtain (11) by letting $n \to \infty$ in (13). Next, let $s_n \in S$, $s_n \uparrow\uparrow t$; then

$$X_{s_n} \ge E\{X_t \mid \mathscr{F}_{s_n}\}.$$

Letting $n \to \infty$ we obtain (12), since $\mathscr{F}_{s_n} \uparrow \mathscr{F}_{t-}$; here we use Theorem 9.4.8, (15a), of Course. Finally, let $u_n > u > t_n > t$, $u_n \in S$, $t_n \in S$, $u_n \downarrow\downarrow u$, $t_n \downarrow\downarrow t$, and $\Lambda \in \mathscr{F}_{t+}$. Then we have

$$\int_\Lambda X_{t_n} \, dP \ge \int_\Lambda X_{u_n} \, dP.$$

Letting $n \to \infty$ and using uniform integrability as before, we obtain

$$\int_\Lambda X_{t+} \, dP \ge \int_\Lambda X_{u+} \, dP.$$

This proves that $\{X_{t+}, \mathscr{F}_{t+}\}$ is a supermartingale. The case of a martingale is similar. □

Two stochastic processes $X = \{X_t\}$ and $Y = \{Y_t\}$ are said to be versions of each other iff we have

$$\forall t: P\{X_t = Y_t\} = 1. \tag{14}$$

It then follows that for any countable subset $S$ of $\mathbf{T}$, we have also

$$P\{X_t = Y_t \text{ for all } t \in S\} = 1.$$

In particular, for any $(t_1, \ldots, t_n)$, the distributions of $(X_{t_1}, \ldots, X_{t_n})$ and $(Y_{t_1}, \ldots, Y_{t_n})$ are the same. Thus the two processes $X$ and $Y$ have identical finite-dimensional joint distributions. In spite of this they may have quite different sample functions, because the properties of a sample function $X(\cdot, \omega)$ or $Y(\cdot, \omega)$ are far from being determined by its values on a countable

set. We shall not delve into this question but proceed to find a good version for a supermartingale, under certain conditions.

From now on we shall suppose that $(\Omega, \mathscr{F}, P)$ is a complete probability space. A sub-$\sigma$-field of $\mathscr{F}$ is called augmented iff it contains all $P$-null sets in $\mathscr{F}$. We shall assume for the increasing family $\{\mathscr{F}_t\}$ that $\mathscr{F}_0$ is augmented; then so is $\mathscr{F}_t$ for each $t$. Now if $\{Y_t\}$ is a version of $\{X_t\}$, and $X_t \in \mathscr{F}_t$, then also $Y_t \in \mathscr{F}_t$. Moreover, if $\{X_t, \mathscr{F}_t\}$ is a (super)martingale, then so is $\{Y_t, \mathscr{F}_t\}$, as can be verified trivially.

Theorem 3. Suppose that $\{X_t, \mathscr{F}_t\}$ is a supermartingale and $\{\mathscr{F}_t\}$ is right continuous, namely $\mathscr{F}_t = \mathscr{F}_{t+}$ for each $t \in \mathbf{T}$. Then the process $\{X_t\}$ has a right continuous version if and only if the function

$$t \to E(X_t) \text{ is right continuous in } \mathbf{T}. \tag{15}$$

Proof. Suppose $\{Y_t\}$ is a right continuous version of $\{X_t\}$. Then if $t_n \downarrow\downarrow t$ we have

$$\lim_n E(X_{t_n}) = \lim_n E(Y_{t_n}) = E(Y_t) = E(X_t),$$

where the second equation follows from the uniform integrability of $\{Y_{t_n}\}$ and the right continuity of $Y$. Hence (15) is true.

Conversely, suppose (15) is true; let $S$ be a countable dense subset of $\mathbf{T}$ and define $X_{t+}$ as in (2). Since $X_{t+} \in \mathscr{F}_{t+} = \mathscr{F}_t$, we have by (11)

$$X_t \ge X_{t+}, \quad P\text{-a.e.} \tag{16}$$

On the other hand, let $t_n \in S$ and $t_n \downarrow\downarrow t$; then

$$E(X_{t+}) = \lim_n E(X_{t_n}) = E(X_t), \tag{17}$$

where the first equation follows from uniform integrability and the second from (15). Combining (16) and (17) we obtain

$$\forall t: P(X_t = X_{t+}) = 1;$$

namely, $\{X_{t+}\}$ is a version of $\{X_t\}$. Now for each $\omega$, the function $t \to X(t+, \omega)$ as defined in (2) is right continuous in $\mathbf{T}$, by elementary analysis. Hence $\{X_{t+}\}$ is a right continuous version of $\{X_t\}$, and $\{X_{t+}, \mathscr{F}_t\}$ is a supermartingale by a previous remark. This is also given by Theorem 2, but the point is that there we did not know whether $\{X_{t+}\}$ is a version of $\{X_t\}$. □

We shall say that the supermartingale $\{X_t, \mathscr{F}_t\}$ is right continuous iff both $\{X_t\}$ and $\{\mathscr{F}_t\}$ are right continuous. An immediate consequence of Theorem 3 is the following.

Corollary. Suppose $\{\mathscr{F}_t\}$ is right continuous and $\{X_t, \mathscr{F}_t\}$ is a martingale. Then $\{X_t\}$ has a right continuous version.

A particularly important class of martingales is given by

$$\{E(Y \mid \mathscr{F}_t), \, t \in \mathbf{T}\}, \tag{18}$$

where $Y$ is an integrable random variable. According to the Corollary above, the process $\{E(Y \mid \mathscr{F}_t)\}$ has a right continuous version provided that $\{\mathscr{F}_t\}$ is right continuous. In this case we shall always use this version of the process.

The next theorem is a basic tool in martingale theory, called Doob's Stopping Theorem.

Theorem 4. Let $\{X_t, \mathscr{F}_t\}$ be a right continuous supermartingale satisfying the following condition: there exists an integrable random variable $Y$ such that

$$\forall t: X_t \ge E(Y \mid \mathscr{F}_t). \tag{19}$$

Let $S$ and $T$ be both optional and $S \le T$. Then we have

(a) $\lim_{t \to \infty} X_t = X_\infty$ exists almost surely; $X_S$ and $X_T$ are integrable, where $X_S = X_\infty$ on $\{S = \infty\}$, $X_T = X_\infty$ on $\{T = \infty\}$;
(b) $X_S \ge E(X_T \mid \mathscr{F}_{S+})$.

In case $\{X_t, \mathscr{F}_t\}$ is a martingale, there is equality in (b).

Proof. Let us first observe that condition (19) is satisfied with $Y \equiv 0$ if $X_t \ge 0$, $\forall t$. Next, it is also satisfied when there is equality in (19), namely for the class of martingales exhibited in (18). The general case of the theorem amounts to a combination of these two cases, as will be apparent below. We put

$$Z_t = X_t - E(Y \mid \mathscr{F}_t).$$

Then $\{Z_t, \mathscr{F}_t\}$ is a right continuous positive supermartingale by (19) and the fact that the martingale in (18) is right continuous by choice. Hence we have the decomposition

$$X_t = Z_t + E(Y \mid \mathscr{F}_t) \tag{20}$$

into a positive supermartingale and a martingale of the special kind, both right continuous. Corollary 2 to Theorem 1 applies to both terms on the right side of (20), since $Z_t \ge 0$ and $E\{|E(Y \mid \mathscr{F}_t)|\} \le E(|Y|) < \infty$. Hence $\lim_{t \to \infty} X_t = X_\infty$ exists and is a finite random variable almost surely. Moreover, it is easily verified that

$$\forall t: X_t \ge E(X_\infty \mid \mathscr{F}_t),$$

so that $\{X_t, \mathscr{F}_t, t \in [0, \infty]\}$ is a supermartingale. Hence for each $n$,

$$\left\{X\left(\frac{k}{2^n}\right), \, \mathscr{F}\left(\frac{k}{2^n}\right), \, k \in N_\infty\right\}$$

is a discrete parameter supermartingale, where $N_\infty = \{0, 1, 2, \ldots, \infty\}$. Let

$$S_n = \frac{[2^n S] + 1}{2^n}, \qquad T_n = \frac{[2^n T] + 1}{2^n}.$$

Then $S_n$ and $T_n$ are optional relative to the family $\{\mathscr{F}(k/2^n), k \in N\}$, indeed strictly so, and $S_n \le T_n$. Note that $X(S_n) = X(S) = X_\infty$ on $\{S = \infty\}$; $X(T_n) = X(T) = X_\infty$ on $\{T = \infty\}$. We now invoke Theorem 9.3.5 of Course to obtain

$$X(S_n) \ge E\{X(T_n) \mid \mathscr{F}(S_n)\}. \tag{21}$$

Since $\mathscr{F}_{S+} = \bigwedge_{n=1}^{\infty} \mathscr{F}_{S_n+}$ by Theorem 5 of §1.3, this implies: for any $\Lambda \in \mathscr{F}_{S+}$ we have

$$\int_\Lambda X(S_n) \, dP \ge \int_\Lambda X(T_n) \, dP. \tag{22}$$

Letting $n \to \infty$, then $X(S_n) \to X(S)$ and $X(T_n) \to X(T)$ by right continuity. Furthermore, $\{X(S_n), \mathscr{F}(S_n)\}$ and $\{X(T_n), \mathscr{F}(T_n)\}$ are both supermartingales on the index set $N = \{\ldots, n, \ldots, 1, 0\}$ which are uniformly integrable. To see this, e.g., for the second sequence, note that (21) holds when $S_n$ and $T_n$ are replaced by $T_n$ and $T_{n-1}$ respectively, since $T_{n-1} \ge T_n$; next, if we put $S_n \equiv 0$ in (21) we deduce

$$\infty > E\{X(0)\} \ge E\{X(T_n)\},$$

and consequently

$$\infty > \lim_n \uparrow E\{X(T_n)\}.$$

This condition is equivalent to the asserted uniform integrability; see Theorem 9.4.7 of Course. The latter of course implies the integrability of the limits $X_S$ and $X_T$, proving assertion (a). Furthermore, we can now take the limits under the integrals in (22) to obtain

$$\int_\Lambda X(S) \, dP \ge \int_\Lambda X(T) \, dP. \tag{23}$$

The truth of this relation for each $\Lambda \in \mathscr{F}_{S+}$ is equivalent to the assertion (b). The final sentence of the theorem is proved by considering $\{-X_t\}$ as well as $\{X_t\}$. Theorem 4 is completely proved. □

Let us consider the special case where

$$X_t = E(Y \mid \mathscr{F}_t). \tag{24}$$

We know $X_\infty = \lim_{t \to \infty} X_t$ exists and it can be easily identified as

$$X_\infty = E(Y \mid \mathscr{F}_\infty).$$

Now apply Theorem 4(b) with equality and with $T$ for $S$, $\infty$ for $T$ there. The result is, for any optional $T$:

$$X_T = E(Y \mid \mathscr{F}_{T+}), \tag{25}$$

since $\mathscr{F}_{T+} \subset \mathscr{F}_\infty$. We may replace $\mathscr{F}_{T+}$ by $\mathscr{F}_T$ above, since $\{\mathscr{F}_t\}$ is right continuous by hypothesis.

Next, let $T$ be predictable, and $\{T_n\}$ announce $T$. Recalling that $\{X_t\}$ has left limits by Corollary 1 to Theorem 1, we have

$$X_{T-} = \lim_n X(T_n) = \lim_n E(Y \mid \mathscr{F}_{T_n+}), \tag{26}$$

where $X_{0-} = X_0$. As $n$ increases, $\mathscr{F}_{T_n+}$ increases, and by Theorem 9.4.8 of Course the limit is equal to $E(Y \mid \bigvee_{n=1}^{\infty} \mathscr{F}_{T_n+})$. Using Theorem 8 of §1.3, we conclude that

$$X_{T-} = E(Y \mid \mathscr{F}_{T-}). \tag{27}$$

Since $\mathscr{F}_{T-} \subset \mathscr{F}_{T+}$, we have furthermore by (25)

$$X_{T-} = E(X_T \mid \mathscr{F}_{T-}). \tag{28}$$
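To spell out the step behind (28): since $\mathscr{F}_{T-} \subset \mathscr{F}_{T+}$, the tower property of conditional expectations gives, using (25) and (27),

```latex
E(X_T \mid \mathscr{F}_{T-})
  = E\bigl(E(Y \mid \mathscr{F}_{T+}) \,\big|\, \mathscr{F}_{T-}\bigr)
  = E(Y \mid \mathscr{F}_{T-})
  = X_{T-}.
```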

It is easy to show that these last relations do not hold for an arbitrary optional $T$ (see Example 3 of §1.3). In fact, Dellacherie [2] has given an example in which $X_{T-}$ is not integrable for an optional $T$.

The next theorem, due to P. A. Meyer [1], may be used in the study of excessive functions. Its proof shows the need for deeper notions of measurability and serves as an excellent introduction to the more advanced theory.

Theorem 5. Let $\{X_t^{(n)}, \mathscr{F}_t\}$ be a right continuous positive supermartingale for each $n$, and suppose that almost surely we have

$$\forall t: X_t^{(n)} \uparrow X_t. \tag{29}$$

If $X_t$ is integrable for each $t$, then $\{X_t, \mathscr{F}_t\}$ is a right continuous supermartingale.


Proof. The fact that $\{X_t, \mathscr{F}_t\}$ is a positive supermartingale is trivial; only the right continuity of $t \to X_t$ is in question. Let $D$ be the dyadic set, and put for each $t \ge 0$:

$$Y_t(\omega) = \lim_{s \in D,\, s \downarrow\downarrow t} X_s(\omega).$$

[Note that $Y_t$ is denoted by $X_{t+}$ in (2) above when $S$ is $D$.] By Theorem 1, the limit above exists for all $t \ge 0$ for $\omega \in \Omega_0$, where $P(\Omega_0) = 1$. From here on we shall confine ourselves to $\Omega_0$.

Let $t_k = ([2^k t] + 1)/2^k$ for each $t \ge 0$ and $k \ge 0$. Then for each $\omega$ and $t$ we have, by (29) and the right continuity of $t \to X^{(n)}(t, \omega)$:

$$X(t, \omega) = \sup_n X^{(n)}(t, \omega) = \sup_n \lim_{k \to \infty} X^{(n)}(t_k, \omega) \le \lim_{k \to \infty} X(t_k, \omega) = Y(t, \omega). \tag{30}$$

Next let $T$ be any optional time and $T_k = ([2^k T] + 1)/2^k$. Then we have by Theorem 4,

$$E\{X^{(n)}(T)\} \ge E\{X^{(n)}(T_k)\}$$

for each $k \ge 1$. Letting $n \to \infty$ we obtain by monotone convergence:

$$E\{X(T)\} \ge E\{X(T_k)\}. \tag{31}$$

Since $X(T_k)$ converges to $Y(T)$ by the definition of the latter, it follows from (31) and Fatou's lemma that

$$E\{X(T)\} \ge \liminf_k E\{X(T_k)\} \ge E\{Y(T)\}. \tag{32}$$

Now for any $\varepsilon > 0$ define

$$T(\omega) = \inf\{t \ge 0 \mid Y_t(\omega) - X_t(\omega) \ge \varepsilon\}, \tag{33}$$

where $\inf \emptyset = +\infty$ as usual. The verification that $T$ is an optional time is a delicate matter and will be postponed until the next section. Suppose this is so; then if $T(\omega) < \infty$ we have $\delta_j \ge 0$, $\delta_j \to 0$ such that

$$Y(T(\omega) + \delta_j, \omega) - X(T(\omega) + \delta_j, \omega) \ge \varepsilon.$$

Hence we have for all $n \ge 1$:

$$Y(T(\omega) + \delta_j, \omega) - X^{(n)}(T(\omega) + \delta_j, \omega) \ge \varepsilon,$$


because $X \ge X^{(n)}$. Since both $t \to Y(t, \omega)$ and $t \to X^{(n)}(t, \omega)$ are right continuous, the inequality above implies that

$$Y(T(\omega), \omega) - X^{(n)}(T(\omega), \omega) \ge \varepsilon$$

as $\delta_j \to 0$. Note that the case where all $\delta_j = 0$ is included in this argument. Letting $n \to \infty$ we obtain

$$Y(T(\omega), \omega) - X(T(\omega), \omega) \ge \varepsilon. \tag{34}$$

Therefore (34) holds on the set $\{T < \infty\}$, and consequently (32) is possible only if $P\{T < \infty\} = 0$. Since $\varepsilon$ is arbitrary, this means we have almost surely:

$$\forall t \ge 0: Y(t, \omega) \le X(t, \omega). \tag{35}$$

Together with (30) we conclude that equality must hold in (35), namely that $P\{\forall t \ge 0: Y_t = X_t\} = 1$. The theorem is proved since $t \to Y(t, \omega)$ is right continuous for $\omega \in \Omega_0$. □

Remark. If we do not assume in the theorem that $X_t$ is integrable for each $t$, of course $\{X_t\}$ cannot be a supermartingale. But it is still true that $t \to X(t, \omega)$ is right continuous as an extended-valued function, for $P$-a.e. $\omega$. The assumption that each $X^{(n)}$ is positive may also be dropped. Both these extensions are achieved by the following reduction. Fix two positive integers $u$ and $k$ and put

$$Y_t^{(n)} = \left(X_t^{(n)} - E\{X_u^{(1)} \mid \mathscr{F}_t\}\right) \wedge k,$$

where the conditional expectation is taken in its right continuous version. Then $\{Y_t^{(n)}\}$ satisfies the conditions of the theorem; hence

$$t \to \left(X_t - E\{X_u^{(1)} \mid \mathscr{F}_t\}\right) \wedge k$$

is right continuous almost surely in $[0, u]$. Now let first $k \uparrow \infty$, then $u \uparrow \infty$.

An important complement to Theorem 5 under a stronger hypothesis will next be given.

Theorem 6. Under the hypotheses of Theorem 5, suppose further that the index set is $[0, \infty]$ and (29) holds for $t \in [0, \infty]$. Suppose also that for any sequence of optional times $\{T_n\}$ increasing to $T$ almost surely, we have

$$\lim_n E\{X(T_n)\} = E\{X(T)\}. \tag{36}$$

Then almost surely the convergence of $X_t^{(n)}$ to $X_t$ is uniform in $t \in [0, \infty]$.

Proof. For any $\varepsilon > 0$, define

$$T_n^\varepsilon(\omega) = \inf\{t \ge 0 \mid X(t, \omega) - X^{(n)}(t, \omega) \ge \varepsilon\}.$$

Since $X$ as well as $X^{(n)}$ is right continuous by Theorem 5, this time it is easy to verify that $T_n^\varepsilon$ is optional directly, without recourse to the next section. Since $X_t^{(n)}$ increases with $n$ for each $t$, $T_n^\varepsilon$ increases with $n$, say to the limit $T_\infty^\varepsilon$ ($\le \infty$). Now observe that:

$X^{(n)} \uparrow X$ uniformly in $[0, \infty]$, $P$-a.e.
$\Leftrightarrow \forall \varepsilon > 0$: for $P$-a.e. $\omega$, $\exists n(\omega)$ such that $T_{n(\omega)}^\varepsilon(\omega) = \infty$
$\Leftrightarrow \forall \varepsilon > 0$: $P\{\exists n: T_n^\varepsilon = \infty\} = 1$
$\Leftrightarrow \forall \varepsilon > 0$: $P\{\forall n: T_n^\varepsilon < \infty\} = 0$
$\Leftrightarrow \forall \varepsilon > 0$: $\lim_n P\{T_n^\varepsilon < \infty\} = 0$.

By right continuity, we have:

$$X(T_n^\varepsilon) - X^{(n)}(T_n^\varepsilon) \ge \varepsilon \quad \text{on } \{T_n^\varepsilon < \infty\}.$$

Hence for $m \le n$:

$$\varepsilon P\{T_n^\varepsilon < \infty\} \le E\{X(T_n^\varepsilon) - X^{(n)}(T_n^\varepsilon);\, T_n^\varepsilon < \infty\} \le E\{X(T_n^\varepsilon) - X^{(m)}(T_n^\varepsilon)\}.$$

Note that for any optional $T$, $E\{X(T)\} \le E\{X(0)\} < \infty$ by Theorem 4. By the same theorem,

$$E\{X^{(m)}(T_n^\varepsilon)\} \ge E\{X^{(m)}(T_\infty^\varepsilon)\}.$$

Thus

$$\varepsilon P\{T_n^\varepsilon < \infty\} \le E\{X(T_n^\varepsilon)\} - E\{X^{(m)}(T_\infty^\varepsilon)\}.$$

Letting $n \to \infty$ and using (36), we obtain:

$$\varepsilon \lim_n P\{T_n^\varepsilon < \infty\} \le E\{X(T_\infty^\varepsilon) - X^{(m)}(T_\infty^\varepsilon)\}. \tag{37}$$

Under the hypothesis (29) with $t = \infty$ included, $X^{(m)}(T_\infty^\varepsilon) \uparrow X(T_\infty^\varepsilon)$, $P$-a.e. Since $X(T_\infty^\varepsilon)$ is integrable, the right member of (37) converges to zero by dominated convergence. Hence so does the left member, and this is equivalent to the assertion of the theorem, as analyzed above. □

The condition (36) is satisfied for instance when all sample functions of $X$ are left continuous, therefore continuous, since they are assumed to be right continuous in the theorem. In general, the condition supplies a kind of left continuity, not for general approach but for "optional approaches". A similar condition will be an essential feature of the Markov processes we are going to study, to be called "quasi left continuity" of Hunt processes. We will now study the case where $T_n \to \infty$ in (36), and for this discussion we introduce a new term. The reader should be alerted to the abuse of the word "potential", used by different authors to mean different things. For this reason we give it a prefix to indicate that it is a special case of supermartingales. [We do not define a subpotential!]

Definition. A superpotential is a right continuous positive supermartingale {X_t} satisfying the condition

lim_{t→∞} E(X_t) = 0. (38)

Since X_∞ = lim_{t→∞} X_t exists almost surely by Corollary 2 to Theorem 1, the condition (38) implies that X_∞ = 0 almost surely. Furthermore, it follows that {X_t} is uniformly integrable (Theorem 4.5.4 of the Course). Nonetheless this does not imply that for any sequence of optional times T_n → ∞ we have

lim_n E(X_{T_n}) = 0. (39)

When this is the case, the superpotential is said to be "of class D", a nomenclature whose origin is obscure. This class plays a fundamental role in the advanced theory, so let us prove one little result about it just to make the acquaintance.

Theorem 7. A superpotential is of class D if and only if the class of random variables {X_T : T ∈ O} is uniformly integrable, where O is the class of all optional times.

Proof. If T_n → ∞, then X(T_n) → 0 almost surely; hence uniform integrability of {X(T_n)} implies (39). To prove the converse, let n ∈ N and

T_n = inf{t ≥ 0 | X_t ≥ n}.

Since almost surely X(t) is bounded in each finite t-interval by Corollary 1 to Theorem 1, and converges to zero as t → ∞, we must have T_n ↑ ∞. Hence (39) holds for this particular sequence by hypothesis. Now for any optional T, define

S = T on {T ≥ T_n};  S = ∞ on {T < T_n}.

Then S ≥ T_n and we have

∫_{{X_T ≥ n}} X_T dP ≤ ∫_{{T ≥ T_n}} X_T dP = ∫_Ω X_S dP ≤ ∫_Ω X_{T_n} dP,

where the middle equation is due to X_∞ ≡ 0 and the last inequality is due to Doob's stopping theorem. Since the last term above converges to zero as n → ∞, {X_T : T ∈ O} is uniformly integrable by definition. □

1.5. Progressive Measurability and the Projection Theorem

For t ≥ 0 let T_t = [0, t] and ℬ_t be the Euclidean (classical) Borel field on T_t. Let {ℱ_t} be an increasing family of σ-fields.

Definition. X = {X_t, t ∈ T} is said to be progressively measurable relative to {ℱ_t, t ∈ T} iff the mapping

(s, ω) → X(s, ω)

restricted to T_t × Ω is measurable ℬ_t × ℱ_t for each t ≥ 0. This implies that X_t ∈ ℱ_t for each t, namely {X_t} is adapted to {ℱ_t} (exercise).

This new concept of measurability brings to the fore the fundamental nature of X(·,·) as a function of the pair (s, ω). For each s, X(s,·) is a random variable; for each ω, X(·,ω) is a sample function. But the deeper properties of the process X really concern it as a function of two variables on the domain T × Ω. The structure of this function is complicated by the fact that Ω is not generally supposed to be a topological space. But particular cases of Ω such as (−∞, +∞) or [0, 1] should make the comparison with a topological product space meaningful. For instance, from this point of view a random time such as an optional time represents a "curve" as illustrated below:

[Figure: the graph t = T(ω) of an optional time T, drawn as a curve in the product space T × Ω.]

Thus the introduction of such times is analogous to the study of plane geometry by means of curves, a venerable tradition. This apparent analogy has been made into a pervasive concept in the "general theory of stochastic processes"; see Dellacherie [2] and Dellacherie-Meyer [1].
According to the new point of view, a function X(·,·) on T × Ω which is in ℬ × ℱ is precisely a Borel measurable process as defined in §1.3. In particular, for any subset H of T × Ω, its indicator function 1_H may be identified

as a Borel measurable process {1_H(t), t ∈ T} in the usual notation, where 1_H(t) is just the abbreviation of ω → 1_H(t, ω). Turning this idea around, we call the set H progressively measurable iff the associated process {1_H(t)} is so. This amounts to the condition that H ∩ ([0, t] × Ω) ∈ ℬ_t × ℱ_t for each t ≥ 0. Now it is easy to verify that the class of progressively measurable sets forms a σ-field 𝒢, and that X is progressively measurable if and only if X ∈ 𝒢/ℰ in the notation of §1.1. It follows that progressive measurability is preserved by certain operations such as sequential pointwise convergence. The same is not true for, e.g., right continuous processes.
Recall that ℱ_t^0 = σ(X_s, 0 ≤ s ≤ t). If X is progressively measurable relative to {ℱ_t^0}, then it is so relative to any {ℱ_t} to which it is adapted.

Theorem 1. If {X_t} is right [or left] continuous, then it is progressively measurable relative to {ℱ_t^0}.

Proof. We will prove the result for right continuity, the left case being similar. Fix t and define for n ≥ 1:

X^{(n)}(s, ω) = X((k+1)t/2^n, ω)  if s ∈ [kt/2^n, (k+1)t/2^n),  0 ≤ k ≤ 2^n − 1;
X^{(n)}(t, ω) = X(t, ω).

Then X^{(n)} is defined on T_t × Ω and converges pointwise to X by right continuity. For each B ∈ ℰ, we have

{(s, ω) | X^{(n)}(s, ω) ∈ B} = ⋃_{k=0}^{2^n−1} ([kt/2^n, (k+1)t/2^n) × {ω | X((k+1)t/2^n, ω) ∈ B}) ∪ ({t} × {ω | X(t, ω) ∈ B}) ∈ ℬ_t × ℱ_t^0.

Hence the limit, which is X restricted to T_t × Ω, is likewise measurable by the following lemma, which is spelled out here since it requires a little care in a general state space. The latter is assumed to be separable metric and locally compact, so that each open set is the union of a countable family of closed sets.

Lemma. Let 𝒜 be any σ-field of subsets of T × Ω. Suppose that X^{(n)} ∈ 𝒜 for each n and X^{(n)} → X everywhere. Then X ∈ 𝒜.

Proof. Let 𝒞 be the class of sets B in ℰ such that X^{−1}(B) ∈ 𝒜. Let G be open and G = ⋃_{k=1}^∞ F_k, where F_k = {x | d(x, G^c) ≥ 1/k} and d is a metric. Then we have by pointwise convergence of X^{(n)} to X:

X^{−1}(G) = ⋃_{k=1}^∞ ⋃_{n=1}^∞ ⋂_{m=n}^∞ (X^{(m)})^{−1}(F_k).

This belongs to 𝒜, and so the class 𝒞 contains all open sets. Since it is a σ-field by properties of X^{−1}, it contains the minimal σ-field containing all open sets, namely ℰ. Thus X ∈ 𝒜/ℰ. □
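The dyadic scheme in the preceding proof is easy to experiment with. The following sketch (an illustration only, not part of the text; the step path and the grid depth are arbitrary choices) builds X^{(n)} from a right continuous path and checks the pointwise convergence X^{(n)}(s) → X(s) used above.

```python
# Dyadic approximation from the proof of Theorem 1: on [0, t], X^(n) takes
# the value of X at the right endpoint of each interval [k t/2^n, (k+1) t/2^n).

def X(s):
    # an arbitrary right continuous step path with jumps at 0.3 and 0.7
    if s < 0.3:
        return 1.0
    elif s < 0.7:
        return 2.0
    return 0.5

def X_n(s, n, t=1.0):
    if s >= t:
        return X(t)
    k = int(s * 2**n / t)           # s lies in [k t/2^n, (k+1) t/2^n)
    return X((k + 1) * t / 2**n)    # sample X at the right endpoint

# right continuity gives X_n(s) -> X(s) for every s as n grows
for s in [0.0, 0.29, 0.3, 0.5, 0.7, 0.95]:
    assert abs(X_n(s, 24) - X(s)) < 1e-9
print("pointwise convergence checked at the sample points")
```

The convergence is only pointwise in general; it is the right continuity of X that makes the right-endpoint sampling work.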
The next result is an extension of Theorem 10 of §1.3.

Theorem 2. If X is progressively measurable and T is optional, then

X_T 1_{{T<∞}} ∈ ℱ_{T+}.

Proof. Consider the two mappings:

φ: (s, ω) → X(s, ω),
ψ: ω → (T(ω), ω).

Their composition is

φ ∘ ψ: ω → X(T(ω), ω) = X_T(ω).

It follows from the definition of optionality that for each s < t:

ψ^{−1}(ℬ_s × ℱ_s) ⊂ ℱ_t. (1)

For if A ∈ ℬ_s and Λ ∈ ℱ_s, then

{ω | T(ω) ∈ A; ω ∈ Λ} = {T ∈ A} ∩ Λ ∈ ℱ_{s+} ⊂ ℱ_t.

Next, if we write for a fixed s the restriction of φ to T_s × Ω as Φ, we have by progressive measurability

Φ^{−1}(ℰ) ⊂ ℬ_s × ℱ_s. (2)

Combining (1) and (2), we obtain

ψ^{−1}(Φ^{−1}(ℰ)) ⊂ ℱ_t.

In particular if B ∈ ℰ, then for each s < t:

{ω | T(ω) ≤ s; X_T(ω) ∈ B} ∈ ℱ_t.

Taking a sequence of s increasing strictly to t, we deduce that for each t:

{X_T ∈ B} ∩ {T < t} ∈ ℱ_t.

This is equivalent to the assertion of the theorem. □

Remark. A similar argument shows that if T is strictly optional, then

X_T 1_{{T<∞}} ∈ ℱ_T.

Let H be a subset of T × Ω. We define the debut of H as follows:

D_H(ω) = inf{t ≥ 0 | (t, ω) ∈ H} (3)

where inf ∅ = ∞ as usual.

Theorem 3 (The Projection Theorem). If for each t, (Ω, ℱ_t, P) is a complete measure space, then D_H as defined in (3) is an optional time for each progressively measurable set H.

Proof. Let T_{t−} = [0, t) and consider the set

H_t = H ∩ (T_{t−} × Ω).

Then H_t ∈ ℬ_t × ℱ_t since H is progressively measurable. Let π_Ω denote the projection mapping of T × Ω onto Ω, namely for any H ⊂ T × Ω:

π_Ω(H) = {ω ∈ Ω | ∃t ∈ T such that (t, ω) ∈ H}.

[Figure: a set H in the product space T × Ω and its projection π_Ω(H) on the ω-axis.]

It is clear that

π_Ω(H_t) = {ω | ∃s ∈ [0, t) such that (s, ω) ∈ H} = {ω | D_H(ω) < t}. (4)

[Observe that if we replace [0, t) by [0, t] in the second term above we cannot conclude that it is equal to the third term with "<" replaced by "≤".] By the theory of analytic sets (see Dellacherie-Meyer [1], Chapter 3) the projection of each set in ℬ_t × ℱ_t on Ω is an "ℱ_t-analytic" set, hence ℱ_t-measurable when (Ω, ℱ_t, P) is complete. Thus for each t we have {D_H < t} ∈ ℱ_t, namely D_H is optional. □

The following "converse" to Theorem 3 is instructive and is a simple illustration of the general methodology. Let T be optional and consider the set

H = {(t, ω) | T(ω) < t}.

Then

1_H(t, ω) = 1 on {(t, ω) | T(ω) < t};  1_H(t, ω) = 0 on {(t, ω) | T(ω) ≥ t}.

Since T is optional, we have 1_H(t) ∈ ℱ_t for each t. Furthermore, it is trivial that for each ω, the sample function 1_H(·, ω) is left continuous. Hence 1_H is progressively measurable by Theorem 1 above. Now a picture shows that

D_H = T.

Thus each optional time is the debut of a progressively measurable set.
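On a finite grid the projection identity (4) can be checked by brute force. The sketch below (an illustration with an arbitrarily chosen finite set H, not part of the text) computes the debut and verifies that the projection of H ∩ ([0, t) × Ω) is exactly {D_H < t}.

```python
# Brute-force check of (4): pi_Omega(H ∩ ([0,t) x Omega)) = {omega : D_H(omega) < t}.

INF = float("inf")

times = [0.0, 0.5, 1.0, 1.5, 2.0]          # a finite "grid" model of T
omegas = ["w1", "w2", "w3"]                # a finite Omega

# an arbitrary subset H of T x Omega; w3 never enters H
H = {(0.5, "w1"), (1.5, "w1"), (1.0, "w2")}

def debut(w):
    entries = [s for (s, w2) in H if w2 == w]
    return min(entries) if entries else INF   # inf of the empty set is +infinity

for t in times:
    projection = {w for (s, w) in H if s < t}            # pi_Omega(H_t)
    below_debut = {w for w in omegas if debut(w) < t}    # {D_H < t}
    assert projection == below_debut
print("projection identity verified on the grid")
```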


We now return to the T in (33) of §1.4 and show that it is optional. Since each X^{(n)} is right continuous, it is progressively measurable relative to {ℱ_t} by Theorem 1. Therefore X is progressively measurable as the limit (or supremum) of X^{(n)}, by the Lemma above in which we take 𝒜 to be the progressively measurable σ-field. On the other hand, the process (t, ω) → Y(t, ω), being right continuous, is progressively measurable by Theorem 1. Hence the set

{(t, ω) | Y(t, ω) − X(t, ω) ≥ ε}

is progressively measurable, and T is optional by Theorem 3.


As an illustration of the fine analysis of sam pie function behavior, we
prove the following theorem which is needed in Chapter 2. Recall that for
a discrete time positive supermartingale, almost surely every sampie se-
quence remains at the value zero if it is ever taken. In continuous time the
result has a somewhat delicate ramification.

Theorem 4. Let {X_t, ℱ_t} be a positive supermartingale having right continuous paths. Let

T_1(ω) = inf{t ≥ 0 | X_t(ω) = 0},
T_2(ω) = inf{t ≥ 0 | X_{t−}(ω) = 0},
T = T_1 ∧ T_2.

Then we have almost surely X(T + t) = 0 for all t ≥ 0 on the set {T < ∞}.

Proof. Since {X_t} is right continuous, it has left limits by Corollary 1 to Theorem 1 of §1.4. By Theorem 1, both {X_t} and {X_{t−}} are progressively measurable relative to {ℱ_t}. If we put

H_1 = {(t, ω) | X(t, ω) = 0},  H_2 = {(t, ω) | X(t−, ω) = 0},

then T_1 = D_{H_1}, T_2 = D_{H_2}; hence both are optional. It follows from Doob's stopping theorem that for each t ≥ 0:

E{X(T_1); T_1 < ∞} ≥ E{X(T_1 + t); T_1 < ∞}. (5)

But X(T_1) = 0 on {T_1 < ∞} by the definition of T_1 and right continuity of paths. Hence the right member of (5) is also equal to zero. Since X ≥ 0 this implies that

P{X(T_1 + t) = 0 for all t ∈ Q ∩ [0, ∞); T_1 < ∞} = P{T_1 < ∞}.

We may omit Q in the above by right continuity, and the result is equivalent to the assertion of the theorem when T is replaced by T_1.
The situation is different for T_2 because we can no longer say at once that X(T_2) = 0, although this is part of the desired conclusion. It is conceivable that X(t) → 0 as t ↑↑ T_2 but jumps to a value different from 0 at T_2. To see that this does not happen, we must make a more detailed analysis of sample functions, of the kind frequently needed in the study of Markov processes later. We introduce the approximating optional times as follows:

S_n(ω) = inf{t ≥ 0 | X_t(ω) < 1/n}.

It is easy to show S_n is optional, because (−∞, 1/n) is an open set, without using progressive measurability. Clearly S_n increases with n, and S_n ↑ S ≤ T where S is optional. Now we must consider two cases.

Case 1. ∀n: S_n < S. In this case X(S−) = lim_n X(S_n) = 0. Since X(t) ≥ 1/n for t ∈ [0, S_n), so also X(t−) ≥ 1/n for t ∈ (0, S_n), it is clear that S = T_2 = T.

Case 2. ∃n_0: S_{n_0} = S_{n_0+1} = ··· = S. In this case X(S) = 0 because X(S_n) ≤ 1/n for all n, and we have S = T_1 = T. Unless S_{n_0} = 0, X jumps to 0 at S_{n_0}.

We have therefore proved that T = S is optional (without proving that T_1 and T_2 are). Now apply Doob's stopping theorem as follows:

E{X(S_n); S_n < ∞} ≥ E{X(S); S < ∞}. (6)

Since the left member does not exceed 1/n, letting n → ∞ we obtain

E{X(S); S < ∞} = 0.

The rest is as before. □

We used the fact that {S < ∞} ⊂ ⋂_n {S_n < ∞}. The reverse inclusion is not true!

Exercises
1. Let {X_t, ℱ_t} be a martingale with right continuous paths. Show that there exists an integrable Y such that X_t = E(Y | ℱ_t) for each t, if and only if {X_t} is uniformly integrable.
2. Show that if T is the first jump in a Poisson process (see Example 3 of §1.3), then E{X_T | ℱ_{T−}} ≠ X_{T−}.
3. In the notation of the proof of Theorem 6 of §1.4, show that X_t^{(n)} converges to X_t uniformly in each finite t-interval, P-a.e., if and only if for each ε > 0 we have P{lim_n T_n^ε = ∞} = 1.
4. If (t, ω) → X(t, ω) is in ℬ × ℱ^0, then for each A ∈ ℰ the function (t, x) → P_t(x, A) is in ℬ × ℰ. Hence this is the case if {X_t} is right [or left] continuous. [Hint: consider the class of functions φ on T × Ω such that (t, x) → E^x{φ(t, ·)} belongs to ℬ × ℰ. It contains functions of the form 1_B(t)1_Λ(ω) where B ∈ ℬ and Λ ∈ ℱ^0.]
5. If {X_t} is adapted to {ℱ_t}, and progressively measurable relative to {ℱ_{t+ε}} for each ε > 0, then {X_t} is progressively measurable relative to {ℱ_t}.

6. Give an example of a process {X_t} which is adapted to {ℱ_t} but not progressively measurable relative to {ℱ_t}.
7. Suppose that {ℱ_t} is right continuous and S and T are optional relative to {ℱ_t}, with S ≤ T. Put

[[S, T)) = {(t, ω) ∈ T × Ω | S(ω) ≤ t < T(ω)};

similarly for [[S, T]], ((S, T)), ((S, T]]. Show that all these four sets are progressively measurable relative to {ℱ_t}.

The σ-field generated by [[S, T)) when S and T range over all optional times such that S ≤ T is called the optional field; the σ-field generated by [[S, T)) when S and T range over all predictable times such that S ≤ T is called the predictable field. Thus we have

predictable field ⊂ optional field ⊂ progressively measurable field.

These are the fundamental σ-fields for the general theory of stochastic processes.

NOTES ON CHAPTER 1

§1.1. The basic notions of the Markov property, as well as optionality, in the discrete parameter case are treated in Chapters 8 and 9 of the Course. A number of proofs carry over to the continuous parameter case without change.
§1.2. Among the examples of Markov processes given here, only the case of Brownian motion will be developed in Chapter 4. But for dimension d = 1 the theory is somewhat special and will not be treated on its own merits. The case of Markov chains is historically the oldest, but its modern development is not covered by the general theory. It will only be mentioned here occasionally for peripheral illustrations. The class of spatially homogeneous Markov processes, sometimes referred to as Lévy or additive processes, will be briefly described in §4.1.
§1.3. Most of the material on optionality may be found in Chung and Doob [1] in a more general form. For deeper properties, which are sparingly used in this book, see Meyer [1]. The latter is somewhat dated but in certain respects more readable than the comprehensive new edition which is Dellacherie and Meyer [1].
§1.4. It is no longer necessary to attribute the foundation of martingale theory to Doob, but his book [1] is obsolete especially for the treatment of the continuous parameter case. The review here borrows much from Meyer [1] and is confined to later needs, except for Theorems 6 and 7 which are given for the sake of illustration and general knowledge. Meyer's proof of Theorem 5 initiated the method of projection in order to establish the optionality of the random time T defined in (33). This idea was later developed into a powerful methodology based on the two σ-fields mentioned at the end of the section. A very clear exposition of this "general theory" is given in Dellacherie [2].
A curious incident happened in the airplane from Zürich to Beijing in May of 1979. At the prodding of the author, Doob produced a new proof of Theorem 5 without using projection. Unfortunately it is not quite simple enough to be included here, so the interested reader must await its appearance in Doob's forthcoming book.
Chapter 2

Basic Properties

2.1. Martingale Connection

Let a homogeneous Markov process {X_t, ℱ_t, t ∈ T} with transition semigroup (P_t) be given. We seek a class of functions f on E such that {f(X_t), ℱ_t} is a supermartingale.

Definition. Let f ∈ ℰ, 0 ≤ f ≤ ∞; f is α-superaveraging relative to (P_t) iff

∀t ≥ 0: f ≥ e^{−αt} P_t f. (1)

f is α-excessive iff in addition we have

f = lim_{t↓0} e^{−αt} P_t f. (2)

It follows from (1) and the semigroup property that its right member is a decreasing function of t, hence the limit in (2) always exists.

Excessive functions will play a major role in Chapter 3. The connection with martingale theory is given by the proposition below.

Proposition 1. If f is α-superaveraging and f(X_t) is integrable for each t, then {e^{−αt} f(X_t), ℱ_t} is a supermartingale.

Proof. Let s > 0, t ≥ 0. We have by the Markov property:

E{f(X_{t+s}) | ℱ_t} = P_s f(X_t).

Hence

E{e^{−α(t+s)} f(X_{t+s}) | ℱ_t} = e^{−αt} e^{−αs} P_s f(X_t) ≤ e^{−αt} f(X_t),

and this establishes the proposition. □

Next we present a subclass of α-superaveraging functions which plays a dominant role in the theory. We begin by stating a further condition on the transition function.

Definition. (P_t) is said to be Borelian iff for each A ∈ ℰ:

(t, x) → P_t(x, A)

is measurable ℬ × ℰ. This is equivalent to: for each f ∈ bℰ:

(t, x) → P_t f(x)

is measurable ℬ × ℰ. According to Exercise 4 of §1.5, if e.g. {X_t} is right continuous, then (P_t) is Borelian. The next definition applies to a Borelian (P_t) in general.

Definition. Let f ∈ bℰ, α > 0; then the α-potential of f is the function U^α f given by

U^α f(x) = ∫_0^∞ e^{−αt} P_t f(x) dt = E^x{∫_0^∞ e^{−αt} f(X_t) dt}. (3)

The integration with respect to t is possible because of the Borelian assumption above, and the equality of the two expressions in (3) is due to Fubini's theorem. In particular U^α f ∈ ℰ. If we denote the sup-norm of f by ‖f‖, then we have

‖U^α f‖ ≤ (1/α)‖f‖. (4)

Taking f ≡ 1 we see that the "operator norm" satisfies

‖U^α‖ ≤ 1/α. (5)

U^α is also defined for f ∈ ℰ_+ but may take the value +∞. The family of operators {U^α, α > 0} is also known as the "resolvent" of the semigroup (P_t), or by abuse of language, of the process (X_t). We postpone a discussion of the relevant facts. For the present the next two propositions are important.
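For a concrete check of (3)-(5) one may take a two-state Markov chain, for which both the semigroup and the resolvent are available in closed form: with generator Q = [[−a, a], [b, −b]] one has P_t = Π + e^{−(a+b)t}(I − Π), where Π is the limiting matrix, and U^α = (αI − Q)^{−1}. The sketch below (arbitrarily chosen rates; an illustration, not part of the text) compares the time integral in (3) with the resolvent and verifies the bound (4).

```python
import math

# Two-state chain with generator Q = [[-a, a], [b, -b]]; the rates are arbitrary.
a, b = 2.0, 3.0
lam = a + b
Pi = [[b/lam, a/lam], [b/lam, a/lam]]      # limiting matrix
I2 = [[1.0, 0.0], [0.0, 1.0]]

def P(t):
    # closed form of the semigroup: P_t = Pi + e^{-lam t} (I - Pi)
    e = math.exp(-lam * t)
    return [[Pi[i][j] + e * (I2[i][j] - Pi[i][j]) for j in range(2)]
            for i in range(2)]

f = [1.0, -0.5]
alpha = 1.7

# first expression in (3): U^alpha f = integral of e^{-alpha t} P_t f dt,
# computed by midpoint quadrature
h, T = 1e-3, 30.0
Uf_quad = [0.0, 0.0]
t = h / 2
while t < T:
    Pt = P(t)
    w = h * math.exp(-alpha * t)
    for i in range(2):
        Uf_quad[i] += w * (Pt[i][0] * f[0] + Pt[i][1] * f[1])
    t += h

# resolvent form: U^alpha f = (alpha I - Q)^{-1} f
A = [[alpha + a, -a], [-b, alpha + b]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Uf_res = [(A[1][1] * f[0] - A[0][1] * f[1]) / det,
          (A[0][0] * f[1] - A[1][0] * f[0]) / det]

assert all(abs(x - y) < 1e-4 for x, y in zip(Uf_quad, Uf_res))
# norm bound (4): ||U^alpha f|| <= ||f|| / alpha
assert max(abs(v) for v in Uf_res) <= max(abs(v) for v in f) / alpha
print("U^alpha f =", Uf_res)
```

The identity U^α = (αI − Q)^{−1} for a finite-state chain is the Laplace-transform relation between semigroup and resolvent mentioned later in §2.2.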

Proposition 2. If f ∈ bℰ_+, then for each α > 0, U^α f is α-excessive; and {e^{−αt} U^α f(X_t), ℱ_t} is a supermartingale.

Proof. We have

e^{−αt} P_t U^α f(x) = e^{−αt} ∫_0^∞ e^{−αs} P_{t+s} f(x) ds = ∫_t^∞ e^{−αu} P_u f(x) du ≤ U^α f(x).

This shows that U^α f is α-superaveraging. As t ↓ 0, the third term above converges to the fourth term, hence U^α f is α-excessive, a fact we store for later use. Since U^α f is bounded, the last assertion of the proposition follows from Proposition 1. □

Proposition 3. Suppose {X_t} is progressively measurable relative to {ℱ_t}. For f ∈ bℰ_+ and α > 0, define

Y_t = ∫_0^t e^{−αu} f(X_u) du + e^{−αt} U^α f(X_t). (6)

Then {Y_t, ℱ_t} is a uniformly integrable martingale which is progressively measurable relative to {ℱ_t}.

Proof. Let

Y_∞ = ∫_0^∞ e^{−αu} f(X_u) du,

which is a bounded random variable; then we have for each x:

E^x{Y_∞} = U^α f(x).

We have

E{Y_∞ | ℱ_t} = ∫_0^t e^{−αu} f(X_u) du + E{∫_t^∞ e^{−αu} f(X_u) du | ℱ_t}. (7)

The first term on the right belongs to ℱ_t, because {X_t} is progressively measurable relative to {ℱ_t} (exercise). The second term on the right side of (7) is equal to

E{∫_0^∞ e^{−α(t+u)} f(X_{t+u}) du | ℱ_t} = E{e^{−αt} ∫_0^∞ e^{−αu} f(X_u ∘ θ_t) du | ℱ_t}
= E{e^{−αt} Y_∞ ∘ θ_t | ℱ_t} = e^{−αt} E^{X_t}{Y_∞} = e^{−αt} U^α f(X_t).

Hence we have

E{Y_∞ | ℱ_t} = Y_t. (8)

Since Y_∞ is bounded, {Y_t, ℱ_t} is a martingale. It is an easy proposition that for any integrable Y, the family E{Y | 𝒢}, where 𝒢 ranges over all σ-fields of

(Ω, ℱ, P), is uniformly integrable. Here is the proof:

∫_{{|E(Y|𝒢)| ≥ n}} |E(Y|𝒢)| dP ≤ ∫_{{|E(Y|𝒢)| ≥ n}} |Y| dP;

P{|E(Y|𝒢)| ≥ n} ≤ (1/n) E(|Y|) → 0.

Hence {Y_t} is uniformly integrable by (8). Finally, the first term on the right side of (7) is continuous in t, hence it is progressively measurable as a process. On the other hand, since U^α f ∈ ℰ, it is clear that {U^α f(X_t)} as a process is progressively measurable if {X_t} is, relative to {ℱ_t}. □

Remark. Let us change notation and put

A_t = ∫_0^t e^{−αu} f(X_u) du.

Then {A_t} is an increasing process, namely for each ω, A_t(ω) increases with t; and A_∞ = Y_∞ by definition of the latter. The decomposition given in (6), together with (8), may be rewritten as:

e^{−αt} U^α f(X_t) = E{A_∞ | ℱ_t} − A_t. (9)

This is a simple case of the Doob-Meyer decomposition of a supermartingale into a uniformly integrable martingale minus an increasing process. If we assume that the family {ℱ_t} is right continuous, then by the corollary to Theorem 3 of §1.4, the supermartingale in (9) has a right continuous version. It is in fact a superpotential of class D (exercise).
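A degenerate but instructive check of the martingale property (8): for the deterministic uniform motion X_t = x + t one has P_t f(x) = f(x + t), and for f = cos an elementary integration gives U^α f(x) = (α cos x − sin x)/(α² + 1); the process Y_t of (6) is then constant in t pathwise, hence trivially a martingale. A numerical sketch (the choice of f, α and the starting point is arbitrary; an illustration, not part of the text):

```python
import math

# Deterministic uniform motion: X_t = x + t, so P_t f(x) = f(x + t).
# For f = cos the alpha-potential has the closed form
#   U^alpha f(x) = (alpha cos x - sin x) / (alpha^2 + 1).

alpha = 0.8

def Uf(x):
    return (alpha * math.cos(x) - math.sin(x)) / (alpha**2 + 1)

def Y(t, x, h=1e-4):
    # Y_t of (6): integral of e^{-alpha u} f(X_u) du over [0,t]
    # plus e^{-alpha t} U^alpha f(X_t), by midpoint quadrature
    integral, u = 0.0, h / 2
    while u < t:
        integral += h * math.exp(-alpha * u) * math.cos(x + u)
        u += h
    return integral + math.exp(-alpha * t) * Uf(x + t)

x = 0.3
vals = [Y(t, x) for t in (0.0, 0.5, 1.0, 2.0, 5.0)]
# for this deterministic process Y_t is constant along the path:
# Y_t = Y_0 = U^alpha f(x) for all t
assert all(abs(v - Uf(x)) < 1e-5 for v in vals)
print("Y_t is constant:", vals)
```

For a genuinely random process the constancy holds only in conditional expectation, which is exactly the content of (8).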

2.2. Feller Process

Before further development of the machinery in §2.1, we turn our attention to a class of transition semigroups having nice analytic properties. We show that the sample functions of the associated Markov process have desirable regularity properties, if a proper version is taken. This will serve as a model for a more general setting in Chapter 3.
Let E be the state space as in §1.1, and let E_∂ = E ∪ {∂} (where ∂ ∉ E) be the Alexandroff one-point compactification of E. Namely, if E is not compact, then a neighborhood system for the "point at infinity" ∂ is the class of the complements of all compact subsets of E; if E is compact, then ∂ is an isolated point in E_∂. E_∂ is a compact separable metric space. Let ℂ denote the class of all continuous functions on E_∂. Since E_∂ is compact, each f in ℂ is bounded. We define the usual sup-norm of f as follows:

‖f‖ = sup_{x ∈ E_∂} |f(x)|.

The constant function x → f(∂) on E_∂ is of course in ℂ, and if we put

∀x ∈ E_∂: f_0(x) = f(x) − f(∂),

then lim_{x→∂} f_0(x) = 0. Let ℂ_0 denote the subclass of ℂ vanishing at ∂ ("vanishing at infinity"); ℂ_c denotes the subclass of ℂ_0 having compact supports. Recall that ℂ and ℂ_0 are both Banach spaces with the norm ‖·‖, and that ℂ_0 is the uniform closure (or completion) of ℂ_c.
Let (P_t) be a submarkovian transition semigroup on (E, ℰ). It is extended to be Markovian on (E_∂, ℰ_∂), as shown in §1.2. The following definition applies to this extension.

Definition. (P_t) is said to have the "Feller property", or to be "Fellerian", iff P_0 = identity mapping, and

(i) ∀f ∈ ℂ, t ≥ 0: P_t f ∈ ℂ;
(ii) ∀f ∈ ℂ:

lim_{t→0} ‖P_t f − f‖ = 0. (1)

It turns out that the condition in (1), which requires convergence in norm, is equivalent to the apparently weaker condition below, which requires only pointwise convergence:

(ii') ∀f ∈ ℂ, x ∈ E_∂:

lim_{t→0} P_t f(x) = f(x). (2)

The proof is sketched in Exercise 4 below.


Since each member of ℂ is the sum of a member of ℂ_0 plus a constant, it is easy to see that in the conditions (i), (ii) and (ii') above we may replace ℂ by ℂ_0 without affecting their strength.

Theorem 1. The function

(t, x, f) → P_t f(x)

on T × E_∂ × ℂ is continuous.

Proof. Consider (t, x, f) fixed and (s, y, g) variable in the inequality below:

|P_t f(x) − P_s g(y)| ≤ |P_t f(x) − P_t f(y)| + |P_t f(y) − P_s f(y)| + |P_s f(y) − P_s g(y)|.

Since P_t f ∈ ℂ, the first term converges to zero as y → x; since (P_t) is Markovian, ‖P_u‖ = 1 for each u, and the second term is bounded by

‖P_{|t−s|} f − f‖,

which converges to zero as |t − s| → 0 by (1); the third term is bounded by

‖P_s f − P_s g‖ ≤ ‖P_s‖ ‖f − g‖ = ‖f − g‖,

which converges to zero as g → f in ℂ. Theorem 1 follows. □
A (homogeneous) Markov process (X_t, ℱ_t, t ∈ T) on (E_∂, ℰ_∂) whose semigroup (P_t) has the Feller property is called a Feller process. We now begin the study of its sample function properties.

Proposition 2. {X_t, t ∈ T} is stochastically continuous, namely for each t ∈ T, X_s → X_t in probability as s → t, s ∈ T.

Proof. Let f ∈ ℂ, g ∈ ℂ; then if t > 0 and δ > 0 we have

E{f(X_t)g(X_{t+δ})} = E{f(X_t)P_δ g(X_t)}

by the Markov property. Since P_δ g ∈ ℂ and P_δ g → g, we have by bounded convergence, as δ ↓ 0:

lim_{δ↓0} E{f(X_t)g(X_{t+δ})} = E{f(X_t)g(X_t)}. (3)

Now if h is a continuous function on E_∂ × E_∂, then there exists a sequence of functions {h_n}, each of the form Σ_i f_{n_i}(x) g_{n_i}(y) where f_{n_i} ∈ ℂ, g_{n_i} ∈ ℂ, such that h_n → h uniformly on E_∂ × E_∂. This is a consequence of the Stone-Weierstrass theorem; see e.g. Royden [1]. It follows from this and (3) that

lim_{δ↓0} E{h(X_t, X_{t+δ})} = E{h(X_t, X_t)}.

Take h to be a metric of the space E_∂. Then the limit above is equal to zero, and the result asserts that X_{t+δ} converges to X_t in probability. Next if 0 < δ < t, then we have for each x:

E^x{f(X_{t−δ})g(X_t)} = E^x{f(X_{t−δ}) E^{X_{t−δ}}[g(X_δ)]} = E^x{f(X_{t−δ}) P_δ g(X_{t−δ})} = P_{t−δ}(f P_δ g)(x). (4)

By Theorem 1, the last term converges as δ ↓ 0 to

P_t(fg)(x) = E^x{f(X_t)g(X_t)}.

It follows as before that ∀x:

lim_{δ↓0} E^x{h(X_{t−δ}, X_t)} = E^x{h(X_t, X_t)};

hence also

lim_{δ↓0} E{h(X_{t−δ}, X_t)} = E{h(X_t, X_t)},

and X_{t−δ} converges to X_t in probability. □
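Stochastic continuity is concrete for the Poisson process (Example 3 of §1.3): P{X_{t+δ} ≠ X_t} = 1 − e^{−λδ} → 0 as δ ↓ 0, even though almost every sample path has jumps. A numerical sketch (arbitrary rate λ; an illustration, not part of the text):

```python
import math

lam = 2.0   # Poisson rate (an arbitrary choice)

def prob_changed(delta):
    # P{X_{t+delta} != X_t} = P{at least one jump in (t, t+delta]}
    return 1.0 - math.exp(-lam * delta)

deltas = [1.0, 0.1, 0.01, 0.001]
probs = [prob_changed(d) for d in deltas]
# the probabilities decrease to 0 with delta: stochastic continuity,
# although each sample path is discontinuous at its jump times
assert all(p > q for p, q in zip(probs, probs[1:]))
assert probs[-1] < 0.01
print(probs)
```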

The preceding proof affords an occasion to clarify certain obscurities. When the symbol P or E is used without a superscript, the distribution of X_0 is unspecified. Let it be denoted by μ; thus

μ(A) = P(X_0 ∈ A), A ∈ ℰ.

For any Λ ∈ ℱ^0 (not ℱ!), we have

P(Λ) = E{P(Λ | ℱ_0)} = E{P^{X_0}(Λ)} = ∫_E μ(dx) P^x(Λ) = P^μ(Λ);

see (10) of §1.2. More generally if Y ∈ L^1(ℱ^0, P), then

E(Y) = E^μ(Y).

Given any probability measure μ on ℰ, and the transition function {P_t(·,·), t ∈ T}, we can construct a Markov process with μ as its initial distribution, as reviewed in §1.2. It is assumed that such a process exists in the probability space (Ω, ℱ, P). Any statement concerning the process given in terms of P and E is therefore ipso facto true for any initial μ, without an explicit claim to this effect. One may regard this usage as an editorial license to save print! For instance, with this understanding, formula (3) above contains its duplicate when E there is replaced by E^x, for each x. The resulting relation may then be written as

lim_{δ↓0} E^x{f(X_t)g(X_{t+δ})} = E^x{f(X_t)g(X_t)},

which may be easier to recognize than (3) itself. This is what is done in (4), since the corresponding argument for convergence as δ ↓ 0, under E instead of E^x, may be less obvious.
On the other hand, suppose that we have proved a result under P^x and E^x, for each x. Then integrating with respect to μ we obtain the result under P^μ and E^μ, for any μ. Now the identification of P(Λ) with P^μ(Λ) above shows that the result is true under P and E. This is exactly what we did at the end of the proof of Proposition 2, but we have artfully concealed μ from the view there.
To sum up, for any Λ ∈ ℱ^0, P(Λ) = 1 is just a cryptic way of writing

∀x ∈ E: P^x(Λ) = 1.

We shall say in this case that Λ is almost sure. The apparently stronger statement also follows: for any probability measure μ on ℰ, we have P^μ(Λ) = 1. After Theorem 5 of §2.3, we can extend this to any Λ ∈ ℱ̄.

The next proposition concerns the α-potential U^α. Let us record the following basic facts. If (P_t) is Fellerian, then

∀f ∈ ℂ: U^α f ∈ ℂ; (5)

∀f ∈ ℂ: lim_{α→∞} ‖αU^α f − f‖ = 0. (6)

Since P_t f ∈ ℂ for each t, (5) follows by dominated convergence as x tends to a limit in the first expression for U^α f in (3) of §2.1. To show (6), we have by a change of variables:

sup_x |αU^α f(x) − f(x)| ≤ sup_x ∫_0^∞ αe^{−αt} |P_t f(x) − f(x)| dt ≤ ∫_0^∞ e^{−u} ‖P_{u/α} f − f‖ du,

which converges to zero as α → ∞ by (1) and dominated convergence. In particular, we have for each x:

lim_{α→∞} αU^α f(x) = f(x); (7)

this should be compared with (2). As a general guide, the properties of αU^α as α → ∞ and as α → 0 are respectively reflected in those of P_t as t → 0 and t → ∞. This is known folklore in the theory of Laplace transforms, and U^α is nothing but the Laplace transform of P_t in the operator sense.
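The limit (7) can be watched in closed form for the translation semigroup P_t f(x) = f(x + t) on the line, which is Fellerian; for f = cos an elementary integration gives αU^α f(x) = α(α cos x − sin x)/(α² + 1), which tends to cos x as α → ∞. A numerical sketch (the choices of f and x are arbitrary; an illustration, not part of the text):

```python
import math

# Translation semigroup P_t f(x) = f(x+t); for f = cos,
#   alpha U^alpha f(x) = alpha (alpha cos x - sin x) / (alpha^2 + 1),
# and (7) says this tends to f(x) = cos x as alpha grows.

def alpha_U_f(alpha, x):
    return alpha * (alpha * math.cos(x) - math.sin(x)) / (alpha**2 + 1)

x = 1.2
errors = [abs(alpha_U_f(a, x) - math.cos(x)) for a in (1.0, 10.0, 100.0, 1000.0)]
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))   # the error decreases
assert errors[-1] < 1e-2
print(errors)
```

The error here is of order 1/α, in line with the Laplace-transform heuristic: large α probes the behavior of P_t for small t.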
A class of functions defined on E_∂ is said to "separate points" iff for any two distinct points x and y in E_∂, there exists a member f of the class such that f(x) ≠ f(y). This concept is used in the Stone-Weierstrass theorem. Let {O_n, n ∈ N} be a countable base of the open sets of E_∂, and put

φ_n(x) = d(x, O_n^c),

where d is a metric on E_∂. We know φ_n ∈ ℂ.

Proposition 3. The countable subset D of ℂ below separates points:

D = {U^α φ_n : α ∈ N, n ∈ N}. (8)

Proof. For any x ≠ y, there exists O_n such that y ∈ O_n and x ∉ O_n. Hence φ_n(x) = 0 < φ_n(y), namely φ_n separates x and y. We have by (7), for sufficiently large α ∈ N:

|αU^α φ_n(x) − φ_n(x)| < ½φ_n(y),
|αU^α φ_n(y) − φ_n(y)| < ½φ_n(y).

Hence αU^α φ_n(x) ≠ αU^α φ_n(y), namely U^α φ_n separates x and y. [Of course we can use (6) to get ‖αU^α φ_n − φ_n‖ < ½φ_n(y).] □

The next is an analytical lemma, separated out for the sake of clarity. We use g ∘ h to denote the composition of g and h, and (g ∘ h)|_S to denote its restriction to S.

Proposition 4. Let D be a class of continuous functions from E_∂ to R = (−∞, +∞) which separates points. Let h be any function on R to E_∂. Suppose that S is a dense subset of R such that for each g ∈ D,

(g ∘ h)|_S has right and left limits in R. (9)

Then h|_S has right and left limits in R.

Proof. Suppose for some t, h|_S does not have a right limit at t. Since E_∂ is compact, this means that there exist t_n ∈ S, t_n ↓↓ t, and t'_n ∈ S, t'_n ↓↓ t, such that

lim_n h(t_n) = x ≠ y = lim_n h(t'_n).

There exists g ∈ D such that g(x) ≠ g(y). Since g is continuous, we have

(g ∘ h)(t_n) → g(x) ≠ g(y) ← (g ∘ h)(t'_n).

This contradicts (9) and proves the proposition, since the case of the left limit is similar. □

We are ready to regulate the sample functions of a Feller process. The trick, going back to Doob, is to relate the latter with those of supermartingales obtained by composition with potentials of functions.

Proposition 5. Let {X_t, ℱ_t, t ∈ T} be a Feller process, and S be any countable dense subset of T. Then for almost every ω, the sample function X(·, ω) restricted to S has right limits in [0, ∞) and left limits in (0, ∞).

Proof. Let g be a member of the class D in (8), thus g = U^k φ where k ∈ N, φ ∈ ℂ. By Proposition 2 of §2.1,

{e^{−kt} g(X_t), ℱ_t}

is a supermartingale. Hence by Theorem 1 of §1.4, there exists Ω_g with P(Ω_g) = 1, such that if ω ∈ Ω_g, then the sample function

t → e^{−kt} g(X_t(ω))

restricted to S has right and left limits as asserted. Clearly the factor e^{−kt} may be omitted in the above. Let Ω* = ⋂_{g∈D} Ω_g; then P(Ω*) = 1. If ω ∈ Ω*, then the preceding statement is true for each g ∈ D. Since D separates points by Proposition 3, it now follows from Proposition 4 that t → X_t(ω) restricted to S has the same property.
Now we define, for each ω ∈ Ω*:

∀t ≥ 0: X̃_t(ω) = lim_{s∈S, s↓↓t} X_s(ω);  ∀t > 0: X̂_t(ω) = lim_{s∈S, s↑↑t} X_s(ω). (10)

Elementary analysis shows that for each ω, X̃_t(ω) is right continuous in t and has left limit X̂_t at each t > 0. Similarly X̂_t is left continuous in t and has right limit X̃_t at each t ≥ 0. □

Theorem 6. Suppose that each ℱ_t is augmented. Then each of the processes {X̃_t} and {X̂_t} is a version of {X_t}; hence it is a Feller process with the same transition semigroup (P_t) as {X_t}.

Proof. By Proposition 2, for each fixed t there exist s_n ∈ S, s_n ↓↓ t, such that

P{lim_n X_{s_n} = X_t} = 1. (11)

This is because convergence in probability implies almost sure convergence along a subsequence. But by Proposition 5, the limit in (11) is equal to X̃_t. Hence P{X̃_t = X_t} = 1 for each t, namely {X̃_t} is a version of {X_t}. It follows that for each f ∈ ℂ:

E{f(X̃_{t+s}) | ℱ_t} = P_s f(X̃_t)

almost surely. Augmentation of ℱ_t is needed to ensure that X̃_t ∈ ℱ_t for each t. This yields the assertion concerning {X̃_t}; the case of {X̂_t} is similar. □

Let us concentrate on the right continuous version X̃ and write it simply as X. Its sample functions may take ∂, namely infinity, either as a value or as a left limiting value. Even if (P_t) is strictly Markovian, so that P^x{X_t ∈ E} = P_t(x, E) = 1 for each x and t, it does not follow that we have X(t, ω) ∈ E for all t ∈ T, for any ω. In other words, the path may not be bounded in each finite interval of t. The next theorem settles this question for a general submarkovian (P_t).

Theorem 7. Let {X_t, ℱ_t} be a Feller process with right continuous paths having left limits; and let

ζ(ω) = inf{t ≥ 0 | X_{t−}(ω) = ∂ or X_t(ω) = ∂}. (12)

Then we have almost surely X(ζ + t) = ∂ for all t ≥ 0, on the set {ζ < ∞}.

Proof. As before let d be a metric for E_∂ and put

φ(x) = d(x, ∂), x ∈ E_∂.

Then φ vanishes only at ∂. Since φ ∈ ℂ, we have P_t φ → φ as t ↓ 0, and consequently

U^1 φ(x) = ∫_0^∞ e^{−t} P_t φ(x) dt

vanishes only at ∂, like φ. Write

Z_t = e^{−t} U^1 φ(X_t);

then {Z_t, ℱ_t} is a positive supermartingale by Proposition 2 of §2.1, and {Z_t} has right continuous paths with left limits, because {X_t} does and U^1 φ is continuous. Furthermore Z_{t−} = 0 if and only if X_{t−} = ∂; Z_t = 0 if and only if X_t = ∂. Therefore we have

ζ(ω) = inf{t ≥ 0 | Z_{t−}(ω) = 0 or Z_t(ω) = 0},

and the theorem follows from Theorem 4 of §1.5. □

Corollary. If (P_t) is strictly Markovian, then almost surely the sample function is bounded in each finite t-interval. Namely not only X(t, ω) ∈ E for all t ≥ 0, but X(t−, ω) ∈ E for all t > 0 as well.

Proof. Under the hypothesis we have

P{X_t ∈ E for all t ∈ T ∩ Q} = 1.

It follows from the theorem that for each t ∈ Q:

P{ζ < t} = P{ζ < t; X_t ∈ E} = 0.

Hence P{ζ < ∞} = 0, which is equivalent to the assertion of the Corollary. □

Theorem 7 is a case of our discussion in §1.2 regarding ∂ as an absorbing state: cf. (13) there.

Exercises
1. Consider the Markov chain in Example 1 of §1.2, with E the set of positive integers, no ∂, and the discrete topology on E. Show that if lim_{t↓0} p_{ii}(t) = 1 uniformly in all i, then (P_t) is a Feller semigroup. Take a version which is right continuous and has left limits. Show that for each sample function X(·, ω), there is a discrete set of t at which X(t−, ω) ≠ X(t, ω), and X(·, ω) is constant between two consecutive values of t in this set.
2. Show that the semigroups of Examples 2, 3 and 4 in §1.2 are all Fellerian.
3. For a Markov chain suppose that there exists an i such that lim_{t↓0} [1 − p_{ii}(t)]/t = +∞. (Such a state is called instantaneous. It is not particularly easy to construct such an example; see Chung [2], p. 285.) Show that for any δ > 0 we have P^i{X(t) = i for all t ∈ [0, δ]} = 0. Thus under P^i, no version of the process can be right continuous at t = 0. Hence (P_t) cannot be Fellerian. Prove the last assertion analytically.
4. Prove that in the definition of the Feller property, condition (2) implies condition (1). [Hint: by the Riesz representation theorem the dual space of C is the space of finite measures on E_∂. Use this and the Hahn-Banach theorem to show that the set of functions of the form {U^α φ}, where α > 0 and φ ∈ C, is dense in C. If f belongs to this set, (1) is true.]

2.3. Strong Markov Property and Right Continuity of Fields

We proceed to derive several fundamental properties of a Feller process. It is assumed in the following theorems that the sample functions are right continuous. The existence of left limits will not be needed until the next section. We continue to adopt the convention in §1.3 that X_∞ = ∂, which makes it possible to write certain formulas without restriction. But such conventions should be watched carefully.

Theorem 1. For each optional T, we have for each f ∈ C and u ≥ 0:

E{f(X_{T+u}) | ℱ_{T+}} = P_u f(X_T).  (1)

Proof. Observe first that on {T = ∞}, both sides of (1) reduce to f(∂). Let

T_n = ([2ⁿT] + 1)/2ⁿ,

so that T_n ↓↓ T, each T_n is strictly optional, and takes values in the dyadic set D (see (19) of §1.3). Recall from Theorem 5 of §1.3 that

ℱ_{T+} = ⋀_{n=1}^∞ ℱ_{T_n}.

Hence for any Λ ∈ ℱ_{T+}, if we write Λ_d = Λ ∩ {T_n = d} for d ∈ D, then Λ_d ∈ ℱ_d. Now the Markov property applied at t = d yields

E{f(X_{d+u}) | ℱ_d} = P_u f(X_d);

see (2) of §1.2. Since T_n = d on Λ_d, we have by enumerating the possible values of T_n:

∫_Λ f(X_{T_n+u}) dP = Σ_{d∈D} ∫_{Λ_d} f(X_{d+u}) dP = Σ_{d∈D} ∫_{Λ_d} P_u f(X_d) dP = ∫_Λ P_u f(X_{T_n}) dP.  (2)

Since both f and P_u f are bounded continuous, and X(·) is right continuous, we obtain by letting n → ∞ in the first and last members of (2):

∫_Λ f(X_{T+u}) dP = ∫_Λ P_u f(X_T) dP.

The truth of this for each Λ ∈ ℱ_{T+} is equivalent to the assertion in (1). □

When T is a constant t, equation (1) reduces to the form of the Markov property given as (iic) in §1.1, with the small but important improvement that the ℱ_t there has now become ℱ_{t+}. The same arguments used in §1.1 to establish the equivalence of (iic) with (iib) now show that (1) remains true for f ∈ bℰ. More generally, let us define the "post-T field" ℱ'_T as follows:

ℱ'_T = σ(X_{T+t}, t ∈ T),

which reduces to ℱ'_t when T = t. Then we have for each integrable Y in ℱ'_T:

E{Y | ℱ_{T+}} = E{Y | X_T}.

This is the extension of (iia) in §1.1. Alternatively, we have for each integrable Y in ℱ^0:

E{Y ∘ θ_T | ℱ_{T+}} = E^{X_T}{Y}.  (3)

Definition. The Markov process {X_t, ℱ_t} is said to have the strong Markov property iff (3) is true for each optional time T.

Thus Theorem 1 is equivalent to the assertion that:

A Feller process with right continuous paths has the strong Markov property.

An immediate consequence will be stated as a separate theorem, which contains the preceding statement when T ≡ 0.
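The "fresh start" expressed by (3) can be probed empirically in the simplest example, a symmetric simple random walk on the integers. The sketch below is our own illustration, not from the text; the level b, the lag, and the sample sizes are arbitrary choices.

```python
import random

# Empirical illustration of the strong Markov property (a sketch, not from the
# text): for a symmetric simple random walk started at 0, the path observed
# after the hitting time T of level b behaves like a fresh walk started at b.
random.seed(7)

def post_hit_increment(b, lag, max_steps=10_000):
    """Run until the walk first hits b, then return X_{T+lag} - b
    (None if the hitting time exceeds our finite horizon)."""
    x, t = 0, 0
    while x != b and t < max_steps:
        x += random.choice((-1, 1))
        t += 1
    if x != b:
        return None
    return sum(random.choice((-1, 1)) for _ in range(lag))

samples = [s for s in (post_hit_increment(3, 5) for _ in range(4000))
           if s is not None]
p_up = sum(s > 0 for s in samples) / len(samples)
p_down = sum(s < 0 for s in samples) / len(samples)
# The restarted walk is symmetric about b, so upward and downward excursions
# are equally likely; with an odd lag the increment is never 0.
print(round(p_up, 2), round(p_down, 2))
```

Both printed frequencies hover near 1/2: conditioning on the past up to the (random) time T has not biased the post-T increments, which is exactly what (3) asserts for this walk.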

Theorem 2. For each optional T, the process {X_{T+t}, ℱ_{T+t+}, t ∈ T} is a Markov process with (P_t) as transition semigroup. Furthermore, it has the strong Markov property.

Proof. Recall that T + t is optional, in fact strictly so for t > 0, and X_{T+t} ∈ ℱ_{T+t+} by Theorem 10 of §1.3. Now apply (1) with T replaced by T + t:

E{f(X_{T+t+u}) | ℱ_{T+t+}} = P_u f(X_{T+t}).  (4)

This proves the first assertion of the theorem. As for the second, it is a consequence of the re-statement of Theorem 1, because {X_{T+t}, ℱ_{T+t+}} is a Feller process with right continuous paths. □

There is a further extension of Theorem 1 which is useful in many applications. Indeed, the oldest case of the strong Markov property, known as Désiré André's reflection principle, requires such an extension (see Exercise 12 of §4.2).

Theorem 3. Let S ≥ T and S ∈ ℱ_{T+}. Then we have for each f ∈ C:

E{f(X_S) | ℱ_{T+}} = E^{X_T}{f(X_{S−T})},  (5)

where S − T is defined to be ∞ if S = ∞ (even if T = ∞); and X_∞ = ∂, f(∂) = 0.

The quantity in the right member of (5) is the value of the function (t, x) → E^x{f(X_t)} = P_t f(x) at (S(ω) − T(ω), X_T(ω)), with ω omitted. It may be written out more explicitly as

∫_{E_∂} P_{S(ω)−T(ω)}(X_T(ω), dy) f(y).  (6)

It is tempting to denote the right member of (5) by P_{S−T} f(X_T) in analogy with (1), but such a notation would conflict with our later, and established, usage of the balayage operator P_T in §3.4.

Proof of Theorem 3. Observe first that (5) reduces to f(∂) = f(∂) on {S = ∞} by the various conventions, so we may suppose S < ∞ in what follows. Let

U_n = ([2ⁿ(S − T)] + 1)/2ⁿ;

then U_n ↓↓ (S − T). Since S and T belong to ℱ_{T+}, so does U_n for each n. Consequently, for each d ∈ D (the dyadic set) we have {U_n = d} ∈ ℱ_{T+}. Now let Λ be any set in ℱ_{T+}, and put Λ_d = Λ ∩ {U_n = d} ∈ ℱ_{T+}. We have for each f ∈ C:

∫_Λ f(X_{T+U_n}) dP = Σ_{d∈D} ∫_{Λ_d} f(X_{T+d}) dP = Σ_{d∈D} ∫_{Λ_d} P_d f(X_T) dP = ∫_Λ P_{U_n} f(X_T) dP,  (7)

where the second equation is by (1). Now t → P_t f(x) is continuous for each x and f ∈ C by Theorem 1 of §2.2. Letting n → ∞ in the first and last members of (7), we obtain by right continuity:

∫_Λ f(X_S) dP = ∫_Λ P_{S−T} f(X_T) dP.

This is equivalent to (5). □


The next theorem demands a thorough understanding of the concepts of
"completion" and "augmentation". For the first notion see Theorem 2.2.5
of the Course. The second notion will now be discussed in detail.
Let (Q, .'#', P) be a complete probability space. The dass of sets C in :?
with P(C) = 0 will be denoted by.;f/ and called the P-null sets. Completeness
means: a subset of a P-null set is a P-null set. Let '§ be a sub-O'-field of :?
Put
'§* = O'('§ v .AI),

namely '§* is the minimal O'-field induding '§ and .H. '§* is called the aug-
mentation of,§ with respect to (Q,:?, P). It is characterized in the following
proposition, in which a "function" is from Q into [ - 00, + 00]. Recall that
for a function Yon E, Y E '§* is an abbreviation of Y E '§* /t! where t! is the
Borel field of E.

Lemma. A subset Λ of Ω belongs to 𝒢* if and only if there exists B ∈ 𝒢 such that Λ △ B ∈ 𝒩. A function Y belongs to 𝒢* if and only if there exists a function Z ∈ 𝒢 such that {Y ≠ Z} ∈ 𝒩.

Proof. Let 𝒜 denote the class of sets Λ described in the lemma. Observe that Λ △ B = C is equivalent to Λ = B △ C for arbitrary sets Λ, B and C. Since 𝒢* contains all sets of the form B △ C where B ∈ 𝒢 and C ∈ 𝒩, it is clear that 𝒢* ⊃ 𝒜. To show that 𝒜 ⊃ 𝒢* it is sufficient to verify that 𝒜 is a σ-field, since it obviously includes both 𝒢 and 𝒩. Since Λᶜ △ Bᶜ = Λ △ B, 𝒜 is closed under complementation. It is also closed under countable union by the inclusion relation

(⋃_n Λ_n) △ (⋃_n B_n) ⊂ ⋃_n (Λ_n △ B_n)

and the remark that if the right member above is in 𝒩, then so is the left member. Thus 𝒜 is a σ-field and we have proved the first sentence of the Lemma.

Next, let Z ∈ 𝒢 and {Y ≠ Z} ∈ 𝒩. Then for any subset S of [−∞, +∞], we have the trivial inclusion

{Y ∈ S} △ {Z ∈ S} ⊂ {Y ≠ Z}.

For each Borelian S, we have {Z ∈ S} ∈ 𝒢; hence {Y ∈ S} ∈ 𝒢* by what we have proved, and so Y ∈ 𝒢*. Conversely, let Y ∈ 𝒢*, and let Y be countably valued. Namely Y is of the form Σ_{j=1}^∞ c_j 1_{Λ_j}, where the c_j's are points in [−∞, +∞], and the Λ_j's are disjoint sets in 𝒢*. Then for each j there exists B_j ∈ 𝒢 such that Λ_j △ B_j ∈ 𝒩. Since the Λ_j's are disjoint, B_i ∩ B_j ∈ 𝒩 if i ≠ j. Put B'_1 = B_1, and B'_j = B_1^c ⋯ B_{j−1}^c B_j for j ≥ 2; then B_j △ B'_j ∈ 𝒩 and so Λ_j △ B'_j ∈ 𝒩. The B'_j's are disjoint. Put Z = Σ_{j=1}^∞ c_j 1_{B'_j}. Then Z ∈ 𝒢 and

{Y ≠ Z} ⊂ ⋃_{j=1}^∞ (Λ_j △ B'_j) ∈ 𝒩.

Thus Y is of the form described in the second sentence of the lemma. If Y ∈ 𝒢*, there exist countably valued Y^(k) ∈ 𝒢* such that Y^(k) → Y. Let Z^(k) ∈ 𝒢 be such that {Y^(k) ≠ Z^(k)} ∈ 𝒩, and Z = lim_{k→∞} Z^(k). Then Z ∈ 𝒢 and

{Y ≠ Z} ⊂ ⋃_{k=1}^∞ {Y^(k) ≠ Z^(k)} ∈ 𝒩. □

Remark. If (Ω, ℱ, P) is not complete, the Lemma is true provided that we assume Y ∈ ℱ in the second assertion. We leave this to the interested reader.

Corollary. (𝒢*)* = 𝒢*; (Ω, 𝒢*, P) is complete.
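The Lemma's set criterion can be checked exhaustively on a toy space (our own four-point example, not from the text): with two null points, the augmentation 𝒢* consists exactly of the sets differing from a set of 𝒢 by a null set.

```python
from itertools import combinations

# Exhaustive check of the Lemma's set criterion on a toy space (our own
# example): Omega = {1,2,3,4}, P(1) = P(2) = 1/2, P(3) = P(4) = 0, and
# G generated by the partition {{1,2},{3,4}}.
omega = {1, 2, 3, 4}

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

G = [frozenset(), frozenset({1, 2}), frozenset({3, 4}), frozenset(omega)]
null_sets = subsets({3, 4})            # the class N of P-null sets

# A belongs to G* iff its symmetric difference with some B in G is null.
G_star = [A for A in subsets(omega) if any((A ^ B) in null_sets for B in G)]

print(len(G_star))                     # 8: exactly half of the 16 subsets
print(frozenset({1, 2, 3}) in G_star)  # True: differs from {1,2} by the null {3}
print(frozenset({1}) in G_star)        # False: {1} carries probability 1/2
```

One can also confirm directly that this G* is closed under complements and countable (here finite) unions, i.e. that it is a σ-field, as the proof of the Lemma asserts.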

We now review the definition of a conditional expectation, although this has been used hundreds of times above. Let Y ∈ ℱ and E(|Y|) < ∞. Then the conditional expectation E(Y | 𝒢) is the class of functions Z having the following properties:

(a) Z ∈ 𝒢;
(b) for each W ∈ b𝒢, we have

E(WZ) = E(WY).  (8)

Note that (8) is then also true for W ∈ b𝒢*. It follows that if Z is a member of the class E(Y | 𝒢), and Z* is a member of the class E(Y | 𝒢*), then {Z ≠ Z*} ∈ 𝒩. We state this as follows:

E(Y | 𝒢) = E(Y | 𝒢*), mod 𝒩.  (9)

We are ready to return to (1) above. First put T = t, then shrink ℱ_t to ℱ_{t+}^0 as we may:

E{f(X_{t+u}) | ℱ_{t+}^0} = P_u f(X_t).  (10)

We have seen that there is advantage in augmenting the σ-fields ℱ_{t+}^0 appearing in (10). Since ℱ_{t+}^0 ⊃ ℱ_t^0 ⊃ ℱ_0^0, it is sufficient to augment ℱ_0^0 to achieve this, for then 𝒩 will be included in it, and a fortiori in the others.

Theorem 4. Suppose that ℱ_0^0 is augmented. Then the family {ℱ_t^0, t ∈ T} is right continuous.

Proof. According to (10), its right member, say Z₁, is a representative of its left member. If Z₂ is any other representative of the latter, then {Z₁ ≠ Z₂} ∈ 𝒩. This is a consequence of the definition of the conditional expectation, which implies its uniqueness mod 𝒩. Hence Z₂ ∈ ℱ_t^0 by the Lemma. Thus the entire class in the left member of (10) belongs to ℱ_t^0. Now the same argument as used in §1.1 to show that (iic) implies (iia) there yields, for any Y ∈ bℱ'_t:

E{Y | ℱ_{t+}^0} ⊂ ℱ_t^0.  (11)

Let Y₀ ∈ bℱ_t^0; then a representative of E{Y₀ | ℱ_{t+}^0} is Y₀ itself, because ℱ_t^0 ⊂ ℱ_{t+}^0. It follows as before that

E{Y₀ | ℱ_{t+}^0} ⊂ ℱ_t^0.  (12)

Lemma 2 of §1.1 may be used to show that the class of sets Y satisfying (11) constitutes a σ-field. We have just seen that this class includes ℱ_t^0 and ℱ'_t. Since ℱ^0 = σ(ℱ_t^0, ℱ'_t) for each t, the class must also include ℱ^0. Hence (11) is true for each Y ∈ ℱ^0. For any Y ∈ bℱ_{t+}^0 (⊂ bℱ^0), a representative of E{Y | ℱ_{t+}^0} is Y itself. Hence Y ∈ ℱ_t^0 by (11), and this means ℱ_{t+}^0 ⊂ ℱ_t^0. Therefore ℱ_{t+}^0 = ℱ_t^0 as asserted. □

According to the discussion in §2.2, Theorem 4 may be read as follows. For each probability measure μ on ℰ, let (Ω, ℱ^μ, P^μ) be the completion of (Ω, ℱ^0, P^μ); and let ℱ_t^μ denote the augmentation of ℱ_t^0 with respect to (Ω, ℱ^μ, P^μ). [Note that if Y ∈ ℱ^μ, then E^μ(Y) is defined as E^μ(Z), where Z ∈ ℱ^0 and P^μ{Y ≠ Z} = 0.] Then {ℱ_t^μ, t ∈ T} is right continuous. Furthermore, we have as a duplicate of (10):

E^μ{f(X_{t+u}) | ℱ_t^μ} = P_u f(X_t),

and more generally for Y ∈ bℱ^0, and each T optional relative to {ℱ_t^μ}:

E^μ{Y ∘ θ_T | ℱ_T^μ} = E^{X_T}{Y};  (13)

cf. (3) above. The notation E^μ indicates that the conditional expectation is with respect to (Ω, ℱ^μ, P^μ); and any two representatives of the left member in

(13) differ by a P^μ-null function. It will be shown below (in the proof of Theorem 5) that (13) is also true for Y ∈ bℱ^μ. Now if Y ∈ ℱ^0, then Y ∘ θ_t ∈ ℱ^0. But if Y ∈ ℱ^μ, we cannot conclude that Y ∘ θ_t ∈ ℱ^μ, even if Y = f(X_s). This difficulty will be resolved by the next step.

Namely, we introduce a smaller σ-field

ℱ^- = ⋀_μ ℱ^μ  (14)

and correspondingly for each t ≥ 0:

ℱ_t^- = ⋀_μ ℱ_t^μ.  (15)

In both (14) and (15), μ ranges over all finite measures on ℰ. It is easy to see that the result is the same if μ ranges over all σ-finite measures on ℰ. Leaving aside a string of questions regarding these fields (see Exercises), we state the useful result as follows.

Corollary to Theorem 4. The family {ℱ_t^-, t ∈ T} is right continuous, as well as the family {ℱ_t^μ, t ∈ T} for each μ.

Proof. We have already proved the right continuity of {ℱ_t^μ} and will use this to prove that of {ℱ_t^-}. Let S(μ, s) be a class of sets in any space, for each μ and s in any index sets. Then the following is a trivial set-theoretic identity:

⋀_μ ⋀_s S(μ, s) = ⋀_s ⋀_μ S(μ, s).  (16)

Applying this to S(μ, s) = ℱ_s^μ, for all finite measures μ on ℰ, and all s ∈ (t, ∞), we obtain

⋀_μ (ℱ_t^μ)₊ = (ℱ_t^-)₊,  (17)

where

(ℱ_t^μ)₊ = ⋀_{s>t} ℱ_s^μ,  (ℱ_t^-)₊ = ⋀_{s>t} ℱ_s^-.

These are to be distinguished from (ℱ_{t+})^μ and (ℱ_{t+})^- = ⋀_μ (ℱ_{t+})^μ, but see Exercise 6 below. Since (ℱ_t^μ)₊ = ℱ_t^μ, (17) reduces to ℱ_t^- = (ℱ_t^-)₊ as asserted. □
After this travail, we can now extend (13) to Y in bℱ^-. But first we must consider the function x → E^x{Y}. We know this is a Borel function if Y ∈ bℱ^0; what can we say if Y ∈ bℱ^-? The reader will do well to hark back to the theory of Lebesgue measure in Euclidean spaces for a clue. There we may begin with the σ-field ℬ of classical Borel sets, and then complete ℬ with respect to the Lebesgue measure. But we can also complete ℬ with respect to other measures defined on ℬ. In an analogous manner, beginning with ℱ^0, we have completed it with respect to P^μ and called the result ℱ^μ. Then we have taken the intersection of all these completions and called it ℱ^-. Exactly the same procedure can be used to complete ℰ_∂ (the Borel field of E_∂) with respect to any finite measure μ defined on ℰ_∂. The result will be denoted by ℰ^μ; we then put

ℰ^- = ⋀_μ ℰ^μ.  (18)

As before, we may regard ℰ^μ and ℰ^- as classes of functions defined on E, as well as subsets of E which will be identified with their indicators. A function in ℰ^μ is called μ-measurable; a function in ℰ^- is called universally measurable. A function f is μ-null iff μ({f ≠ 0}) = 0. See Exercise 3 below for an instructive example.
The next theorem puts together the various extensions we had in mind. Note that ℱ_T^μ = ℱ_{T+}^μ as a consequence of the Corollary to Theorem 4.

Theorem 5. If Y ∈ bℱ^-, then the function φ defined on E by

φ(x) = E^x{Y}  (19)

is in ℰ^-. For each μ and each T optional relative to {ℱ_t^μ}, we have

Y ∘ θ_T ∈ ℱ^μ,  (20)

E^μ{Y ∘ θ_T | ℱ_T^μ} = φ(X_T).  (21)

Proof. Observe first that for each x, since Y ∈ ℱ^{ε_x}, E^x(Y) is defined. Next, it follows easily from the definition of the completion ℱ^μ that Y ∈ bℱ^μ if and only if there exist Y₁ and Y₂ in bℱ^0 such that P^μ{Y₁ ≠ Y₂} = 0 and Y₁ ≤ Y ≤ Y₂. Hence if we put

φ_i(x) = E^x(Y_i),  i = 1, 2;

then φ₁ and φ₂ are in ℰ, φ₁ ≤ φ ≤ φ₂ and μ({φ₁ ≠ φ₂}) = 0. Thus φ ∈ ℰ^μ by definition. This being true for each μ, we have φ ∈ ℰ^-. Next, define a measure ν on ℰ_∂ as follows, for each f ∈ bℰ:

ν(f) = ∫ μ(dx) E^x{f(X_T)} = E^μ{f(X_T)}.

This is the distribution of X_T when μ is the distribution of X₀. Thus for any Z ∈ bℱ^0, we have

E^μ{Z ∘ θ_T} = E^ν{Z}.  (22)

If Y ∈ ℱ^-, then there exist Z₁ and Z₂ in ℱ^0 such that P^ν{Z₁ ≠ Z₂} = 0 and Z₁ ≤ Y ≤ Z₂. We have Z_i ∘ θ_T ∈ ℱ^μ by (22) of §1.3, with {ℱ_t^μ} for {ℱ_t} (recall ℱ^μ = ℱ_∞^μ), and

P^μ{Z₁ ∘ θ_T ≠ Z₂ ∘ θ_T} = 0  (23)

by (22) above. This and

Z₁ ∘ θ_T ≤ Y ∘ θ_T ≤ Z₂ ∘ θ_T  (24)

imply (20). Furthermore, we have by (24):

E^μ{Z₁ ∘ θ_T | ℱ_T^μ} ≤ E^μ{Y ∘ θ_T | ℱ_T^μ} ≤ E^μ{Z₂ ∘ θ_T | ℱ_T^μ}.  (25)

Now put ψ_i(x) = E^x(Z_i), i = 1, 2. Since Z_i ∈ ℱ^0, ψ_i(X_T) is a representative of E^μ{Z_i ∘ θ_T | ℱ_T^μ} by (13). On the other hand, we have ψ₁ ≤ φ ≤ ψ₂, hence

ψ₁(X_T) ≤ φ(X_T) ≤ ψ₂(X_T)  (26)

and

P^μ{ψ₁(X_T) ≠ ψ₂(X_T)} = ν({ψ₁ ≠ ψ₂}) = 0.

Comparing (25) and (26) we conclude that φ(X_T) is a representative of E^μ{Y ∘ θ_T | ℱ_T^μ}. This is the assertion (21). □

We end this section with a simple but important result which is often called Blumenthal's zero-or-one law.

Theorem 6. Let Λ ∈ ℱ_0^-; then for each x we have P^x(Λ) = 0 or P^x(Λ) = 1.

Proof. Suppose first that Λ ∈ ℱ_0^0. Then Λ = X_0^{-1}(A) for some A ∈ ℰ. Since P^x{X_0 = x} = 1, we have

P^x(Λ) = 1_A(x),

which can only take the value 0 or 1. If Λ ∈ ℱ_0^-, then for each x there exists Λ^x ∈ ℱ_0^0 such that P^x(Λ △ Λ^x) = 0, so that P^x(Λ) = P^x(Λ^x), which is either 0 or 1 as just proved. □

If we think that ℱ_0^0 is a small σ-field, and ℱ_0^- is only "trivially" larger, the result appears to be innocuous. Actually it receives its strength from the fact that ℱ_0^- = (ℱ^-)_{0+}, as part of the Corollary to Theorem 4. Whether an event will occur "instantly" may be portentous. For instance, let T be optional relative to {ℱ_t^-, t ∈ T}; then {T = 0} ∈ ℱ_0^- and consequently the event {T = 0} has probability 0 or 1 under each P^x. When T is the hitting time of a set, this dichotomy leads to a basic notion in potential theory (see §3.4): the regularity of a point for a set, or the thinness of a set at a point. For a Brownian motion on the line, it yields an instant proof that almost every sample function starting at 0 must change sign infinitely many times in any small time interval.
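The closing claim about Brownian motion can be made plausible by simulation (our own sketch, not from the text): a long symmetric random walk stands in for a Brownian path observed on ever finer grids near t = 0.

```python
import random

# Sign changes near time 0 (a sketch, not from the text): the fraction of
# symmetric random walks from 0 that take both signs within n steps tends
# to 1 as n grows, in line with the zero-or-one dichotomy for {T = 0}.
random.seed(11)

def takes_both_signs(n_steps):
    x, seen_pos, seen_neg = 0, False, False
    for _ in range(n_steps):
        x += random.choice((-1, 1))
        seen_pos = seen_pos or x > 0
        seen_neg = seen_neg or x < 0
    return seen_pos and seen_neg

fracs = {}
for n in (10, 100, 1000):
    fracs[n] = sum(takes_both_signs(n) for _ in range(2000)) / 2000
    print(n, round(fracs[n], 2))
```

The printed fractions increase with n toward 1; under Brownian scaling, refining the observation grid corresponds to larger n, matching the zero-one law's verdict that the sign-change time is 0 almost surely.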

Exercises
1. Prove that if ∀t ≥ 0: ℱ_t = ℱ_{t+}, then for any T which is optional relative to {ℱ_t}, we have ∀t ≥ 0: ℱ_{T+t} = ℱ_{T+t+}.
2. Let 𝒩* denote the class of all sets Λ in ℱ^0 such that P^x(Λ) = 0 for every x ∈ E. Let ℱ* = σ(ℱ^0 ∨ 𝒩*). Show that ℱ* ⊂ ℱ^-. The σ-field ℱ* is not to be confused with ℱ^-; see Problem 3.
3. Let Ω = R¹ and E = R¹. Define X_t(ω) = ω + t; then {X_t, t ≥ 0} is the uniform motion (Example 2 of §1.2). Show that ℱ^0 = ℬ¹, the usual Borel field on R¹, and P^x = ε_x, the point mass at x. Show that

⋀_{x∈R¹} (ℱ^0)^{ε_x} is the class of all subsets of R¹;

⋀_μ (ℱ^0)^μ is the class of universally measurable sets of R¹,

where μ ranges over all probability measures on ℬ¹. The class 𝒩* defined in Problem 2 is empty, so that ℱ* = ℬ¹. (This example is due to Getoor.)
4. Show that

⋁_{t∈T} ℱ_t^μ = ℱ^μ.

Is it true that

⋁_{t∈T} ℱ_t^- = ℱ^-?

5. For each n let 𝒢_n be a class of positive (≥ 0) functions closed under the operation of taking the lim sup of a sequence; let ℋ be a class of positive functions closed under the operation of taking the lim inf of a sequence. Denote by 𝒢_n + ℋ the class of functions of the form g_n + h where g_n ∈ 𝒢_n and h ∈ ℋ. Suppose that 𝒢_n ⊃ 𝒢_{n+1} for every n and 𝒢 = ⋀_{n=1}^∞ 𝒢_n. Prove that

⋀_{n=1}^∞ (𝒢_n + ℋ) = 𝒢 + ℋ.

[Hint: if f belongs to the class in the left member, show that f = lim sup_n g_n + lim inf_n h_n, where g_n ∈ 𝒢_n and h_n ∈ ℋ; show that lim sup_n g_n ∈ 𝒢.]
6. Apply Problem 5 to prove that

⋀_{s>t} (ℱ_s^0)^μ = (⋀_{s>t} ℱ_s^0)^μ;

namely that the two operations of "taking the intersection over (t, ∞)" and "augmentation with respect to P^μ", performed on {ℱ_s^0, s ∈ T}, are commutative.
7. If both S and T are optional relative to {ℱ_t^-}, then so is T + S ∘ θ_T. [Cf. Theorem 11 of §1.3.]

2.4. Moderate Markov Property and Quasi Left Continuity

From now on we write ℱ_t for ℱ_t^- and ℱ for ℱ^-, and consider the Feller process (X_t, ℱ_t, P^μ) in the probability space (Ω, ℱ, P^μ) for an arbitrary fixed probability measure μ on ℰ. We shall omit the superscript μ in what follows unless the occasion requires its explicit use. For each ω in Ω, the sample function X(·, ω) is right continuous in T = [0, ∞) and has left limits in (0, ∞). The family {ℱ_t, t ∈ T} is right continuous; hence each optional time T is strictly optional and the pre-T field is ℱ_T (= ℱ_{T+}). The process has the strong Markov property. Recall that the latter is a consequence of the right continuity of paths, without the intervention of left limits. We now proceed to investigate the left limits. We begin with a lemma separated out for its general utility, and valid in any space (Ω, ℱ, P).

Lemma 1. Let 𝒢 be a sub-σ-field of ℱ, and X ∈ 𝒢, Y ∈ ℱ. Suppose that for each f ∈ C₀ we have

E{f(Y) | 𝒢} = f(X).  (1)

Then P{X = Y} = 1.

Proof. It follows from (1) that for each open set U:

P{Y ∈ U | 𝒢} = 1_U(X);  (2)

see Lemma 1 of §1.1. Take Λ = {X ∉ U}, and integrate (2) over Λ:

P{Y ∈ U; X ∉ U} = P{X ∈ U; X ∉ U} = 0.  (3)

Let {U_n} be a countable base of the topology; then we have by (3):

P{Y ≠ X} ≤ Σ_n P{Y ∈ U_n; X ∉ U_n} = 0. □
Corollary. Let X and Y be two random variables such that for any f ∈ C_c, g ∈ C_c we have

E{f(X)g(X)} = E{f(Y)g(X)}.  (4)

Then P{X = Y} = 1.

Proof. As before, (4) is true if g = 1_A, where A is any open set. Then it is also true if A is any Borel set, by Lemma 2 of §1.1. Thus we have

∫_{{X∈A}} f(X) dP = ∫_{{X∈A}} f(Y) dP,

and consequently

f(X) = E{f(X) | 𝒢} = E{f(Y) | 𝒢},

where 𝒢 = σ(X), the σ-field generated by X. Therefore the corollary follows from Lemma 1. □

The next lemma is also general, and will acquire further significance shortly.

Lemma 2. Let {X_t, t ∈ R} be an arbitrary stochastic process which is stochastically continuous. If almost every sample function has right and left limits everywhere in R, then for each t ∈ R, X is continuous at t almost surely.

Proof. The assertion means that for each t,

P{lim_{s→t} X_s = X_t} = 1.

Since X is stochastically continuous at t, there exist s_n ↑↑ t and t_n ↓↓ t such that almost surely we have

lim_n X_{s_n} = X_t = lim_n X_{t_n}.

But the limits above are respectively X_{t-} and X_{t+}, since the latter are assumed to exist almost surely. Hence P{X_{t-} = X_t = X_{t+}} = 1, which is the assertion. □
Remark. It is sufficient to assume that X_{t-} and X_{t+} exist almost surely, for each t.

The property we have just proved is sometimes referred to as follows: "the process has no fixed time of discontinuity." This implies stochastic continuity but is not implied by it. For example, the Markov chain in Example 1 of §1.2, satisfying condition (d) and with a finite state space E, has this property. But it may not if E is infinite (when there are "instantaneous states"). On the other hand, the simple Poisson process (Example 3 of §1.2) has this property even though it is a Markov chain on an infinite E. Of course, the said property is weaker than the "almost sure continuity" of sample functions, which means that almost every sample function is a continuous function. The latter is a rather special situation in the theory of Markov processes, but is exemplified by the Brownian motion (Example 4 of §1.2) and will be discussed in §3.1.
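The Poisson case can be tested numerically (our own sketch; the rate, horizon, and window are arbitrary choices): almost every path jumps somewhere, yet a jump inside a fixed small window around a given t is rare.

```python
import random

# No fixed time of discontinuity (a sketch, not from the text): a rate-1
# Poisson process on [0, 2] has jumps in most runs, but a jump within the
# fixed window (t0 - eps, t0 + eps) occurs with probability about 2*eps.
random.seed(3)
RATE, T0, EPS, N = 1.0, 1.0, 0.01, 4000

def jump_times(horizon):
    t, times = 0.0, []
    while True:
        t += random.expovariate(RATE)   # i.i.d. exponential gaps between jumps
        if t > horizon:
            return times
        times.append(t)

near_t0 = jumps_anywhere = 0
for _ in range(N):
    ts = jump_times(2.0)
    jumps_anywhere += bool(ts)
    near_t0 += any(abs(s - T0) < EPS for s in ts)

print(round(jumps_anywhere / N, 2))   # close to 1 - exp(-2), about 0.86
print(near_t0 / N)                    # close to 2 * EPS = 0.02
```

Shrinking EPS drives the second frequency to 0: at the fixed time t₀ the process is continuous almost surely, even though its paths are everywhere full of jumps.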
The next result is the left-handed companion to Theorem 1 of §2.3. The reader should observe the careful handling of the values 0 and ∞ for the T below, often a nuisance which cannot be shrugged off.

Theorem 3. For each predictable T, we have for each f ∈ C and u ≥ 0:

E{1_{{T<∞}} f(X_{T+u}) | ℱ_{T-}} = 1_{{T<∞}} P_u f(X_{T-}),  (5)

where X_{0-} = X_0.

Proof. Remember that lim_{t→∞} X_t need not exist at all, so that X_{T-} is undefined on {T = ∞}. Since T ∈ ℱ_{T-}, we may erase the two appearances of 1_{{T<∞}} in (5) and state the result "on the set {T < ∞}". However, since T does not necessarily belong to ℱ_{T_n}, we are not allowed to argue blithely below "as if T is finite".

Let {T_n} announce T; then we have by Theorem 1 of §2.3:

E{f(X_{T_n+u}) | ℱ_{T_n}} = P_u f(X_{T_n}).  (6)

Since T_n ∈ ℱ_{T_n}, we may multiply both members of (6) by 1_{{T_n ≤ M}}, where M is a positive number, to obtain

1_{{T_n ≤ M}} E{f(X_{T_n+u}) | ℱ_{T_n}} = 1_{{T_n ≤ M}} P_u f(X_{T_n}).  (7)

Since T_n = 0 on {T = 0}, T_n ↑↑ T on {T > 0}, and f is continuous, we have

lim_{n→∞} f(X_{T_n+u}) = f(X_u) on {T = 0};  = f(X_{T+u-}) on {T > 0}.

Fortunately, X_u = X_{u-} almost surely in view of Lemma 2, and so we can unify the result as f(X_{T+u-}) on Ω. On the right side of (7), we have similarly

lim_{n→∞} P_u f(X_{T_n}) = P_u f(X_{T-}),

because X_0 = X_{0-} by convention on {T = 0}, and P_u f is continuous. Since also ℱ_{T_n} increases to ℱ_{T-} by Theorem 8 of §1.3, and f is bounded, we can take the limit in n simultaneously in the integrand and in the conditioning field in the left side of (7), by a martingale convergence theorem (see Theorem 9.4.8 of Course). Thus in the limit (7) becomes

E{1_{{T≤M}} f(X_{T+u-}) | ℱ_{T-}} = 1_{{T≤M}} P_u f(X_{T-}).  (8)

Letting M → ∞ we obtain (5) with X_{T+u-} replacing X_{T+u} in the left member.

Next, letting u ↓↓ 0: since lim_{u↓↓0} X_{T+u-} = X_T by right continuity, and P_u f → f by the Feller property, we obtain

E{1_{{T<∞}} f(X_T) | ℱ_{T-}} = 1_{{T<∞}} f(X_{T-}).

Now define X_{∞-} (as well as X_∞) to be ∂, even where lim_{t→∞} X_t exists and is not equal to ∂. Then the factor 1_{{T<∞}} in the above may be cancelled. Since X_{T-} ∈ ℱ_{T-} by Theorem 10 of §1.3, it now follows from Lemma 1 that we have almost surely:

X_T = X_{T-}.  (9)

This is then true on {T < ∞} without the arbitrary fixing of the value of X_{∞-}. For each optional T and u > 0, T + u is predictable. Hence we have just proved that X_{T+u} = X_{T+u-} almost surely on {T < ∞}. This remark enables us to replace X_{T+u-} by X_{T+u} in the left member of (8), thus completing the proof of Theorem 3. □

Corollary. For Y ∈ bℱ^μ,

E^μ{Y ∘ θ_T | ℱ_{T-}} = E^{X_{T-}}{Y} on {T < ∞}.  (10)

The details of the proof are left as an exercise. The similarity between (10) above and (13) of §2.3 prompts the following definition.

Definition. The Markov process {X_t, ℱ_t} is said to have the moderate Markov property iff almost all sample functions have left limits and (10) is true for each predictable time T.

Thus, this is the case for a Feller process whose sample functions are right continuous and have left limits. The adjective "moderate" is used here in default of a better one; it does not connote "weaker than the strong".
There is an important supplement to Theorem 3 which is very useful in applications because it does not require T to be predictable. Let {T_n} be a sequence of optional times, increasing (loosely) to T. Then T is optional but not necessarily predictable. Now for each ω in Ω, the convergence of T_n(ω) to T(ω) may happen in two different ways, as described below.

Case (i). ∀n: T_n < T. In this case X_{T_n} → X_{T-} if T < ∞.

Case (ii). ∃n₀: T_{n₀} = T. In this case X_{T_n} = X_T for n ≥ n₀.

Heuristically speaking, T has the predictable character on the set of ω for which case (i) is true. Hence the result (9) above seems to imply that in both cases we have X_{T_n} → X_T on {T < ∞}. The difficulty lies in that we do not know whether we can apply the character of predictability on a part of Ω without regard to the other part. This problem is resolved in the following proof by constructing a truly predictable time which coincides with the given optional time on the part of Ω in question.

Theorem 4. Let T_n be optional and increase to T. Then we have almost surely

lim_n X_{T_n} = X_T on {T < ∞}.  (11)

Proof. Put

Λ = {∀n: T_n < T},

and define

T'_n = { T_n on {T_n < T}; ∞ on {T_n = T} },    T' = { T on Λ; ∞ on Ω − Λ }.

Since {T_n < T} ∈ ℱ_{T_n} and Λ ∈ ℱ_T by Proposition 6 of §1.3, T'_n and T' are optional by Proposition 4 of §1.3. We have

Λ ∩ {T < ∞} ⊂ {T' < ∞} ⊂ Λ.  (12)

Hence we have

∀n: T'_n = T_n < T = T' on Λ.  (13)

It follows that T'_n ∧ n < T' and T'_n ∧ n increases to T' on Ω. Hence T' is predictable. (Note: {T = 0} ⊂ Ω − Λ, hence T' > 0 on Ω.) Applying the result (9) to T', we obtain

X_{T'} = X_{T'-} almost surely on {T' < ∞},

and consequently

lim_n X_{T'_n} = X_{T'-} = X_{T'} on {T' < ∞}.  (14)

In view of (12), (13) and (14), equation (11) is true on Λ ∩ {T < ∞}. But (11) is trivially true on Ω − Λ. Hence Theorem 4 is proved. □

The property expressed in Theorem 4 is called the "quasi left continuity" of the process. The preceding proof shows that it is a consequence of the moderate Markov property.

We are now ready to prove the optionality of the hitting time for a closed set as well as an open set. For any set A in ℰ_∂, define

T_A(ω) = inf{t > 0 | X(t, ω) ∈ A}.  (15)

This is called the (first) hitting time of A. Compare with the first entrance time D_A defined in (25) of §1.3. The difference between them can be crucial.

They are related as follows:

T_A = lim_{s↓↓0} (s + D_A ∘ θ_s).  (16)

This follows from the observation that

s + D_A ∘ θ_s = s + inf{t ≥ 0 | X(s + t, ω) ∈ A} = inf{t ≥ s | X(t, ω) ∈ A}.
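The distinction between D_A and T_A can be made concrete on a discrete time grid (our own sketch; `entrance_time` and `hitting_time` are hypothetical helper names, and on a grid the two infima become minima over t ≥ 0 and t > 0). The uniform motion X_t = x + t of Example 2 of §1.2 serves as the test path.

```python
# Grid approximations of the entrance time D_A and the hitting time T_A
# (a sketch, not from the text).
def entrance_time(path, in_A, dt):
    """D_A: first grid time t >= 0 with X_t in A, or infinity."""
    for k, x in enumerate(path):
        if in_A(x):
            return k * dt
    return float("inf")

def hitting_time(path, in_A, dt):
    """T_A: first grid time t > 0 with X_t in A, or infinity."""
    for k, x in enumerate(path):
        if k > 0 and in_A(x):
            return k * dt
    return float("inf")

# Uniform motion X_t = x + t sampled on a grid of mesh dt.
dt = 0.001
def path_from(x):
    return [x + k * dt for k in range(5001)]

half_line = lambda y: y >= 0.0    # A = [0, infinity)
origin = lambda y: y == 0.0       # A = {0}

# Started at -1 the two times agree: the path first meets [0, oo) at t = 1.
print(entrance_time(path_from(-1.0), half_line, dt),
      hitting_time(path_from(-1.0), half_line, dt))

# Started at 0 with A = {0}: D_A = 0, but the path never returns, so T_A is
# infinite; this is where the difference between the two can be crucial.
print(entrance_time(path_from(0.0), origin, dt),
      hitting_time(path_from(0.0), origin, dt))
```

The second pair of printed values, 0 versus infinity, is the grid analogue of D_A < T_A at a point x ∈ A that the path leaves immediately and never revisits.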

It follows from Theorem 11 of §1.3 that if D_A is optional relative to {ℱ_t^-}, then so is s + D_A ∘ θ_s for each s > 0, and consequently so is T_A by Proposition 3 of §1.3. It is easy to see that the assertion remains true when {ℱ_t^-} is replaced by {ℱ_t^μ}, for each μ.

Now it is obvious that D_A is the debut of the set

H_A = {(t, ω) | X(t, ω) ∈ A},  (17)

which is progressively measurable relative to {ℱ_t^0} ⊂ {ℱ_t^μ}, since X is right continuous. For each t, (Ω, ℱ_t^μ, P^μ) is a complete probability space as a consequence of the definition of ℱ_t^μ. Hence D_A is optional relative to {ℱ_t^μ} by Theorem 3 of §1.5. However, that theorem relies on another which is not given in this book. The following proof is more direct and also shows why augmentation of ℱ_t^0 is needed.

Theorem 5. If the Markov process {X_t} is right continuous, then for each open set A, D_A and T_A are both optional relative to {ℱ_t^0}. If the process is right continuous almost surely and is also quasi left continuous, then for each closed set as well as each open set A, D_A and T_A are both optional relative to {ℱ_t^-}.

Proof. By a remark above it is sufficient to prove the results for D_A. If A is open, then for each t > 0, we have the identity

{D_A < t} = ⋃_{r∈Q_t} {X_r ∈ A},  (18)

where Q_t = Q ∩ [0, t), provided all sample functions are right continuous. To see this, suppose D_A(ω) < t; then X(s, ω) ∈ A for some s ∈ [D_A(ω), t), and by right continuity X(r, ω) ∈ A for some rational r in (D_A(ω), t). Thus the left member of (18) is a subset of the right. The converse is trivial and so (18) is true. Since the right member clearly belongs to ℱ_t^0, we conclude that {D_A < t} ∈ ℱ_t^0.

Now suppose that only P-almost all sample functions are right continuous. Then the argument above shows that the two sets in (18) differ by a P-null set. Hence {D_A < t} is in the augmentation of ℱ_t^0 with respect to (Ω, ℱ, P). Translating this to P^μ, we have {D_A < t} ∈ ℱ_t^μ for each μ, hence {D_A < t} ∈ ℱ_t^-. Since {ℱ_t^-} is right continuous, we have {D_A ≤ t} ∈ ℱ_t^-.

Next, suppose that A is closed. Then there is a sequence of open sets A_n such that A_n ⊃ Ā_{n+1} for each n, and

⋂_{n=1}^∞ A_n = A.  (19)

We shall indicate this by A_n ↓↓ A. For instance, we may take

A_n = {x | d(x, A) < 1/n}.

Clearly D_{A_n} increases and is not greater than D_A; let

S = lim_n ↑ D_{A_n} ≤ D_A.  (20)

We now make the important observation that for any Borel set B, we have almost surely

X(D_B) ∈ B̄ on {D_B < ∞}.  (21)

For if D_B(ω) < ∞, then for each δ > 0, there exists t ∈ [D_B(ω), D_B(ω) + δ) such that X(t, ω) ∈ B. Hence (21) follows by right continuity. Thus we have X(D_{A_n}) ∈ Ā_n for all n, and therefore by quasi left continuity and (19):

X(S) = lim_n X(D_{A_n}) ∈ ⋂_n Ā_n = A

almost surely on {0 ≤ S < ∞}. The case S = 0 is of course trivial. This implies S ≥ D_A; together with (20) we conclude that D_A = S a.s. on {S < ∞}. But D_A ≥ S = ∞ on {S = ∞}. Hence we have proved a.s.

D_A = lim_n ↑ D_{A_n}.  (22)

Since each D_{A_n} is optional relative to {ℱ_t^-}, so is D_A by (22). □


If we replace the D's by T's above, the result (22) is no longer true in general! The reader should find out why. However, since for each Borel set B, D_B = T_B on {X_0 ∉ B}, we have the following consequence of (22), which is so important that we state it explicitly and more generally as follows.

Corollary. Let A_n be closed sets, A_n ⊃ A_{n+1} for all n, and ⋂_n A_n = A. If x ∉ A_1, then P^x-a.s. we have

T_A = lim_n ↑ T_{A_n}.  (23)

Here A may be empty, in which case T_A = +∞.


Exercises

1. Here is a shorter proof of the quasi left continuity of a Feller process (Theorem 4). For $\alpha > 0$ and $f \in bC^+$, $\{e^{-\alpha t} U^\alpha f(X_t)\}$ is a right continuous positive supermartingale because $U^\alpha f$ is continuous. Hence if $T_n \uparrow T$, and $Y = \lim_n X(T_n)$ (which exists on $\{T < \infty\}$), we have a.s.

$$U^\alpha f(Y) = \lim_n U^\alpha f(X(T_n)) = E\Big\{U^\alpha f(X(T)) \,\Big|\, \bigvee_n \mathscr{F}_{T_n}\Big\}.$$

Now multiply by $\alpha$, let $\alpha \to \infty$, and use Lemma 1.

2. Prove the following "separation" property for a Feller process. Let $K$ be compact, $G$ be open, such that $K \subset G$. Then for any $\delta > 0$ there exists $t_0 > 0$ such that

$$\inf_{x \in K} P^x\{T_{G^c} \ge t_0\} \ge 1 - \delta.$$

[Hint: let $0 \le f \le 1$, $f = 1$ on $K$, $f = 0$ in $G^c$. There exists $t_0 > 0$ such that $\sup_{0 \le t \le t_0} \|P_t f - f\| < \delta/2$. Now consider ... and apply Theorem 3 of §2.3.]


3. If $Y \in \overline{\mathscr{F}}/\overline{\mathscr{E}}$ and $f \in \overline{\mathscr{E}}/\mathscr{B}$, then $f(Y) \in \overline{\mathscr{F}}/\mathscr{B}$. Hence $x \to E^x\{f(Y)\}$ is in $\overline{\mathscr{E}}$ if the expectation exists. [This will be needed for the Lebesgue measurability of important functions associated with the Brownian motion.]

To appreciate the preceding result it is worthwhile to mention that a Lebesgue measurable function of a Borel measurable function need not be Lebesgue measurable (see Exercise 15 on p. 14 of Course). Thus if we take $\mathscr{F}^0 = \mathscr{B}$ and $Y \in \mathscr{F}^0$; if $f \in \mathscr{F}^\mu$ we cannot infer that $f(Y) \in \mathscr{F}^\mu$, where $\mu$ is the Lebesgue measure on $\mathscr{F}^0$.

NOTES ON CHAPTER 2

§2.1. This chapter serves as an interregnum between the more concrete Feller processes and Hunt's axiomatic theory. It is advantageous to introduce some of the basic tools at an early stage.

§2.2. The Feller process is named after William Feller, who wrote a series of pioneering papers in the 1950's. His approach is essentially analytic and now rarely cited. The sample function properties of his processes were proved by Kinney, Dynkin, Ray, Knight, among others. Dynkin [1] developed Feller's theory by probabilistic methods. His book is rich in content but difficult to consult owing to excessive codification. Hunt [1] and Meyer [2] both discuss Feller processes before generalizations.

§2.3. It may be difficult for the novice to appreciate the fact that twenty-five years ago a formal proof of the strong Markov property was a major event. Who is now interested in an example in which it does not hold?

A full discussion of augmentation is given in Blumenthal and Getoor [1]. This is dry and semi-trivial stuff but inevitable for a rigorous treatment of the fundamental concepts. Instead of beginning the book with these questions it seems advisable to postpone them until their relevance becomes more apparent.

§2.4. There is some novelty in introducing the moderate Markov property before quasi left continuity; see Chung [5]. It serves as an illustration of the general methodology alluded to in §1.3, where both $\mathscr{F}_{T+}$ and $\mathscr{F}_{T-}$ are considered. Historically, a moderate Markov property was first observed at the "first infinity" of a simple kind of Markov chain, see Chung [3]. It turns out that a strong Markov process becomes moderate when the paths are reversed in time, see Chung and Walsh [1].

A more complete discussion of the measurability of hitting times will be given in §3.3. Hunt practically began his great memoir [1] with this question, fully realizing that the use of hitting times is the principal method in Markov processes.
Chapter 3

Hunt Process

3.1. Defining Properties

Let $\{X_t, \mathscr{F}_t, t \in \mathbf{T}\}$ be a (homogeneous) Markov process with state space $(\mathbf{E}_\partial, \mathscr{E}_\partial)$ and transition function $(P_t)$, as specified in §1.1 and §1.2. Here $\mathscr{F}_t$ is the $\mathscr{F}_t$ defined in §2.3. Such a process is called a Hunt process iff

(i) it is right continuous;
(ii) it has the strong Markov property (embodied in Theorem 1 of §2.3);
(iii) it is quasi left continuous (as described in Theorem 4 of §2.4).

Among the basic consequences of these hypotheses are the following:

(iv) $\{\mathscr{F}_t\}$ is right continuous (Corollary to Theorem 4 of §2.3);
(v) $\{X_t\}$ is progressively measurable relative to $\{\mathscr{F}_t\}$ (Theorem 1 of §1.5);
(vi) $(P_t)$ is Borelian (Exercise 4 of §1.5).

We have shown in Chapter 2 that given a Feller semigroup $(P_t)$, a Feller process can be constructed having all the preceding properties (and others not implied by the conditions above). Roundly stated: a Feller process is a Hunt process. Whereas a Feller process is constructed from a specific kind of transition function, a Hunt process is prescribed by certain hypotheses regarding the behavior of its sample functions. Thus in the study of a Hunt process we are pursuing a deductive development of several fundamental features of a Feller process.

To begin with, we can add the following result to the above list of properties of a Hunt process.

Theorem 1. Almost surely the sample paths have left limits in $(0, \infty)$.

Proof. For a fixed $\varepsilon > 0$, define

$$T = \inf\{t > 0 : d(X_t, X_0) > \varepsilon\} \tag{1}$$

where $d$ denotes a metric of the space $\mathbf{E}_\partial$. Our first task is to show that $T$ is optional, indeed relative to $\{\mathscr{F}^0_t\}$. For this purpose let $\{z_k\}$ be a countable

dense set in $\mathbf{E}_\partial$, and $B_{kn}$ be the closed ball with center $z_k$ and radius $n^{-1}$: $B_{kn} = \{x \mid d(x, z_k) \le n^{-1}\}$. Then $\{B_{kn}\}$ forms a countable base of the topology. Put

$$T_{kn} = \inf\{t > 0 : d(X_t, B_{kn}) > \varepsilon\}.$$

Then $T_{kn}$ is optional relative to $\{\mathscr{F}^0_t\}$ by Theorem 6 of §2.4, since the set $\{x \mid d(x, B_{kn}) > \varepsilon\}$ is open. We claim that

$$\{T < t\} = \bigcup_{k,n} \{X_0 \in B_{kn};\ T_{kn} < t\}. \tag{2}$$

It is clear that the right member of (2) is a subset of the left member. To see the converse, suppose $T(\omega) < t$. Then there is $s(\omega) < t$ such that $d(X_s(\omega), X_0(\omega)) > \varepsilon$; hence there are $n$ and $k$ (both depending on $\omega$) such that $d(X_s(\omega), X_0(\omega)) > \varepsilon + 2n^{-1}$ and $X_0(\omega) \in B_{kn}$. Thus $d(X_s(\omega), B_{kn}) > \varepsilon$ and $T_{kn}(\omega) < t$; namely $\omega$ belongs to the right member of (2), establishing the identity. Since the set in the right member of (2) belongs to $\mathscr{F}^0_t$, $T$ is optional as claimed.

Next, we define $T_0 \equiv 0$, $T_1 \equiv T$ and inductively for $n \ge 1$:

$$T_{n+1} = T_n + T \circ \theta_{T_n}.$$

Each $T_n$ is optional relative to $\{\mathscr{F}^0_t\}$ by Theorem 11 of §1.3. Since $T_n$ increases with $n$, the limit $S = \lim_n T_n$ exists and $S$ is optional. On the set $\{S < \infty\}$, we have $\lim_n X(T_n) = X(S)$ by quasi left continuity. On the other hand, right continuity of paths implies that $d(X(T_{n+1}), X(T_n)) \ge \varepsilon$ almost surely for all $n$, which precludes the existence of $\lim_n X(T_n)$. There would be a contradiction unless $S = \infty$ almost surely. In the latter event, we have $[0, \infty) = \bigcup_{n=0}^\infty [T_n, T_{n+1})$. Note that if $T_n = \infty$ then $[T_n, T_{n+1}) = \emptyset$. In each interval $[T_n, T_{n+1})$ the oscillation of $X(\cdot)$ does not exceed $2\varepsilon$, by the definition of $T_{n+1}$. We have therefore proved that for each $\varepsilon$, there exists $\Omega_\varepsilon$ with $P(\Omega_\varepsilon) = 1$ such that $X(\cdot)$ does not oscillate by more than $2\varepsilon$ in $[T^\varepsilon_n, T^\varepsilon_{n+1})$, where

$$[0, \infty) = \bigcup_{n=0}^\infty [T^\varepsilon_n, T^\varepsilon_{n+1}). \tag{3}$$

Let $\Omega^* = \bigcap_{m=1}^\infty \Omega_{1/m}$; then $P(\Omega^*) = 1$. We assert that if $\omega \in \Omega^*$, then $X(\cdot, \omega)$ must have left limits in $(0, \infty)$. For otherwise there exist $t \in (0, \infty)$ and $m$ such that $X(\cdot, \omega)$ has oscillation $> 2/m$ in $(t - \delta, t)$ for every $\delta > 0$. Thus $t \notin [T^{1/m}_n, T^{1/m}_{n+1})$ for all $n \ge 0$, which is impossible by (3) with $\varepsilon = 1/m$. $\square$

Remark. We prove later in §3.3 that on $\{t < \zeta\}$ we have $X_{t-} \ne \partial$, namely $X_{t-} \in \mathbf{E}$. For a Feller process this is implied by Theorem 7 of §2.2.

Let us observe that quasi left continuity implies that $X(\cdot)$ is left continuous at each fixed $t \in (0, \infty)$, almost surely. For if the $t_n$'s are constants such that $0 \le t_n \uparrow t$, then each $t_n$ is optional and so $X(t_n) \to X(t)$. Coupled with right continuity, this implies the continuity of almost all paths at each fixed $t$. In other words, the process has no fixed time of discontinuity. For a Feller process, this was remarked in §2.4.

Much stronger conditions are needed to ensure that almost all paths are continuous. One such condition is given below, which is particularly adapted to a Hunt process. Another is given in Exercise 1 below.

Theorem 2. Let $\{X_t\}$ be a Markov process with right continuous paths having left limits in $(0, \infty)$. Suppose that the transition function satisfies the following condition: for each $\varepsilon > 0$ and each compact $K \subset \mathbf{E}$ we have

$$\lim_{t \downarrow 0} \frac{1}{t} \sup_{x \in K} \big[1 - P_t(x, B(x, \varepsilon))\big] = 0 \tag{4}$$

where $B(x, \varepsilon) = \{y \in \mathbf{E}_\partial \mid d(x, y) \le \varepsilon\}$. Then almost all paths are continuous.

Proof. The proof depends on the following elementary lemma.


Lemma. Let $f$ be a function from $[0,1]$ to $\mathbf{E}_\partial$ which is right continuous in $[0,1)$ and has left limits in $(0,1]$. Then $f$ is not continuous in $[0,1]$ (continuity at the endpoints being defined unilaterally) if and only if there exists $\varepsilon > 0$ such that for all $n \ge n_0(\varepsilon)$ we have

$$\max_{0 \le k \le n-1} d\left(f\left(\frac{k}{n}\right), f\left(\frac{k+1}{n}\right)\right) > \varepsilon. \tag{5}$$

Proof of the Lemma. If $f$ is not continuous in $[0,1]$, then there exist $t \in (0,1]$ and $\varepsilon > 0$ such that $d(f(t-), f(t)) > 2\varepsilon$. For each $n \ge 1$ define $k$ by $kn^{-1} < t \le (k+1)n^{-1}$. Then for $n \ge n_0(\varepsilon)$ we have $d(f(kn^{-1}), f(t-)) < \varepsilon/2$ and $d(f((k+1)n^{-1}), f(t)) < \varepsilon/2$; hence $d(f(kn^{-1}), f((k+1)n^{-1})) > \varepsilon$ as asserted. Conversely, if $f$ is continuous in $[0,1]$, then $f$ is uniformly continuous there and so for each $\varepsilon > 0$ (5) is false for all sufficiently large $n$. This is a stronger conclusion than necessary for the occasion.
To prove the theorem, we put for a fixed compact $K$:

$$M = \{\omega \mid X(\cdot, \omega) \text{ is not continuous in } [0,1];\ X(s, \omega) \in K \text{ for all } s \in [0,1]\},$$

$$M_n = \Big\{\omega \,\Big|\, \sup_{0 \le k \le n-1} d\Big(X\Big(\frac{k}{n}, \omega\Big), X\Big(\frac{k+1}{n}, \omega\Big)\Big) > \varepsilon;\ X(s, \omega) \in K \text{ for all } s \in [0,1]\Big\}.$$

It follows from the lemma that $M \in \mathscr{F}^0$, and, writing $M_n^{(\varepsilon)}$ for the set $M_n$ corresponding to a given $\varepsilon$,

$$M \subset \bigcup_{m=1}^\infty \liminf_n M_n^{(1/m)}.$$

If we apply the Markov property at all $kn^{-1}$ for $0 \le k \le n - 1$, we obtain

$$P(M_n) \le n \sup_{x \in K} P_{1/n}(x, B(x, \varepsilon)^c).$$

Using the condition (4) with $t = n^{-1}$, we see that the last quantity above tends to zero as $n \to \infty$. Hence $P(\liminf_n M_n) \le \lim_n P(M_n) = 0$ for each $\varepsilon$, and $P(M) = 0$. Now replace the interval $[0,1]$ by $[l/2, l/2 + 1]$ for integer $l \ge 1$, and $K$ by $K_m \cup \{\partial\}$ where $K_m$ is a compact subset of $\mathbf{E}$ and $K_m \uparrow \mathbf{E}$. Denote the resulting $M$ by $M(l, m)$ and observe that we may replace $K$ by $K \cup \{\partial\}$ in (4) because $\partial$ is an absorbing state. Thus we obtain

$$P\Big(\bigcup_l \bigcup_m M(l, m)\Big) = 0,$$

which is seen to be equivalent to the assertion of the theorem. $\square$


EXAMPLE. As we have seen in §2.2, the Brownian motion in $R^1$ is a Feller process. We have

$$P_t(x, B(x, \varepsilon)^c) = \frac{1}{\sqrt{2\pi t}} \int_{|y-x| > \varepsilon} \exp\left[-\frac{(y-x)^2}{2t}\right] dy = \frac{2}{\sqrt{2\pi t}} \int_\varepsilon^\infty \exp\left[-\frac{u^2}{2t}\right] du$$

$$\le \frac{2}{\sqrt{2\pi t}} \int_\varepsilon^\infty \frac{u}{\varepsilon} \exp\left[-\frac{u^2}{2t}\right] du = \sqrt{\frac{2t}{\pi}}\, \frac{1}{\varepsilon} \exp\left[-\frac{\varepsilon^2}{2t}\right].$$

Hence (4) is satisfied even if we replace the $K$ there by $\mathbf{E} = R^1$. It follows that almost all paths are continuous. Recall that we are using the fact that the sample paths of a [version of] Feller process are right continuous in $[0, \infty)$ and have left limits in $(0, \infty)$, which was proved in §2.2. Thus the preceding proof is not quite as short as it appears. The result was first proved by Wiener in 1923, in a totally different setting. Indeed it pre-dated the founding of the theory of stochastic processes.
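The estimate above is easy to check numerically. In the sketch below (an illustration, not part of the text), the tail probability $P_t(x, B(x,\varepsilon)^c) = \operatorname{erfc}(\varepsilon/\sqrt{2t})$ for one-dimensional Brownian motion is evaluated with the standard library's complementary error function, and the ratio appearing in condition (4) is seen to decrease rapidly to zero as $t \downarrow 0$.

```python
from math import erfc, sqrt

def tail_prob(t, eps):
    # P_t(x, B(x, eps)^c) for Brownian motion in R^1: the increment is
    # N(0, t), so the two-sided tail is 2(1 - Phi(eps/sqrt(t))) = erfc(eps/sqrt(2t)).
    return erfc(eps / sqrt(2.0 * t))

eps = 0.5
ts = (0.1, 0.05, 0.01, 0.005, 0.001)
vals = [tail_prob(t, eps) / t for t in ts]

# (1/t) P_t(x, B(x, eps)^c) decreases rapidly to 0 as t -> 0, which is
# condition (4), here with the compact K replaced by all of R^1.
assert all(a > b for a, b in zip(vals, vals[1:]))
assert vals[-1] < 1e-20
```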

Exercises

1. A stochastic process $\{X(t), t \ge 0\}$ is said to be separable iff there exists a countable dense set $S$ in $[0, \infty)$ such that for almost every $\omega$, the sample function $X(\cdot, \omega)$ has the following property. For each $t \ge 0$, there exist $s_n \in S$ such that $s_n \to t$ and $X(s_n, \omega) \to X(t, \omega)$. [The sequence $\{s_n\}$ depends on $\omega$ as well as $t$!] It is an old theorem of Doob's that every process has a version which is separable; see Doob [1]. If $\{X(t)\}$ is stochastically continuous the proof is easy and the set $S$ may be taken to be any countable dense set in $[0, \infty)$. Now take a sample function $X(\cdot, \omega)$ which has the separability property above. Show that if $X(s, \omega)$ with $s$ restricted to $S$ is uniformly continuous in $S$, then $X(t, \omega)$ is continuous for all $t \ge 0$. Next, suppose that there exist strictly positive numbers $\delta$, $\alpha$, $\beta$ and $C$ such that for $t \ge 0$ and $0 < h < \delta$ the following condition is satisfied:

$$E\{|X(t+h) - X(t)|^\alpha\} \le C h^{1+\beta}.$$

Take $S$ to be the dyadics and prove that $X(s, \omega)$ with $s \in S$ is continuous on $S$ for almost every $\omega$. Finally, verify the condition above for the Brownian motion in $R^1$. [Hint: for the last assertion estimate $\sum_{k=0}^{2^n-1} P\{|X((k+1)2^{-n}) - X(k2^{-n})| > n^{-2}\}$ and use the Borel-Cantelli lemma. For two dyadics $s$ and $s'$, $X(s) - X(s')$ is a finite sum of terms of the form $X((k+1)2^{-n}) - X(k2^{-n})$. The stated criterion for continuity is due to Kolmogorov.]
2. Give an example to show that the Lemma in this section becomes false if the condition "$f$ has left limits in $(0,1]$" is dropped. This does not seem easy; see M. Steele [1].

3. Let $X$ be a homogeneous Markov process. A point $x$ in $\mathbf{E}$ is called a "holding point" iff for some $\delta > 0$ we have $P^x\{X(t) = x \text{ for all } t \in [0, \delta]\} > 0$. Prove that in this case there exists $\lambda \ge 0$ such that $P^x\{T_{\{x\}^c} > t\} = e^{-\lambda t}$ for all $t > 0$. When $\lambda = 0$, $x$ is called an "absorbing point". Prove that if $X$ has the strong Markov property and continuous sample functions, then each holding point must be absorbing.

4. For a homogeneous Markov chain (Example 1 of §1.2), the state $i$ is holding (also called stable) if and only if

$$\lim_{t \downarrow 0} \frac{1 - p_{ii}(t)}{t} < \infty.$$

The limit above always exists but may be $+\infty$; when finite it is equal to the $\lambda$ in Problem 3. In general, a Markov chain does not have a version which is right continuous (even if all states are stable), hence it is not a Hunt process. But we may suppose that it is separable as in Exercise 1 above.
5. For a Hunt process: for each $x$ there exists a countable collection of optional times $\{T_n\}$ such that for $P^x$-a.e. $\omega$, the set of discontinuities of $X(\cdot, \omega)$ is the union $\bigcup_n T_n(\omega)$. [Hint: for each $\varepsilon > 0$ define $S^{(\varepsilon)} = \inf\{t > 0 \mid d(X_{t-}, X_t) > \varepsilon\}$. Show that each $S^{(\varepsilon)}$ is optional. Let $S^{(\varepsilon)}_1 = S^{(\varepsilon)}$, $S^{(\varepsilon)}_{n+1} = S^{(\varepsilon)}_n + S^{(\varepsilon)} \circ \theta(S^{(\varepsilon)}_n)$ for $n \ge 1$. The collection $\{S^{(1/m)}_n\}$, $m \ge 1$, $n \ge 1$, is the desired one.]



6. Let $T$ be one of the $T_n$'s in Exercise 5. Suppose that $\{R_n\}$ is an increasing sequence of optional times such that $P^x\{\lim_n R_n = T < \infty\} = 1$; then $P^x\{\bigcup_n (R_n = T)\} = 1$. This means intuitively that $T$ cannot be predicted. A stronger property (called "total inaccessibility") holds, to the effect that for each $x$ the two probabilities above are equal for any increasing sequence of optional times $\{R_n\}$ such that $R_n \le T$ for all $n$; see Dellacherie and Meyer [1].

7. Prove that for a Hunt process, we have $\mathscr{F}_T = \sigma(\mathscr{F}_{T-}, X_T)$. (See Chung [5].)
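The limit in Exercise 4 can be illustrated numerically. The sketch below (an illustration with a made-up generator, not part of the text) uses a hypothetical two-state chain with holding rates $q_0 = 2$, $q_1 = 3$; the spectral decomposition of its generator gives the transition function in closed form, and $(1 - p_{00}(t))/t$ is seen to increase to $q_0$ as $t \downarrow 0$.

```python
from math import exp

# Hypothetical two-state chain with generator Q = [[-2, 2], [3, -3]]
# (holding rates q_0 = 2, q_1 = 3).  Q has eigenvalues 0 and -5, and the
# spectral decomposition of exp(tQ) gives the explicit transition function
#   p_00(t) = 3/5 + (2/5) exp(-5 t).
def p00(t):
    return 0.6 + 0.4 * exp(-5.0 * t)

# Exercise 4: (1 - p_00(t))/t increases, as t decreases to 0, to the
# holding rate q_0 = 2 (the lambda of Problem 3).
ratios = [(1.0 - p00(t)) / t for t in (1.0, 0.1, 0.01, 0.001)]
assert all(a < b for a, b in zip(ratios, ratios[1:]))
assert abs(ratios[-1] - 2.0) < 0.01
```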

3.2. Analysis of Excessive Functions

A basic tool in the study of Hunt processes is a class of functions called by Hunt "excessive". This is a far-reaching extension of the class of superharmonic functions in classical potential theory. In this section simple basic properties of these functions will be studied which depend only on the semigroup and not on the associated Markov process. Deeper properties derived from the process will be given in the following sections.

Let a Borelian transition semigroup $(P_t, t \ge 0)$ be given where $P_0$ is the identity. For $\alpha \ge 0$ we write

$$P^\alpha_t = e^{-\alpha t} P_t; \tag{1}$$

thus $P^0_t = P_t$. For each $\alpha$, $(P^\alpha_t)$ is also a Borelian semigroup. It is necessarily submarkovian for $\alpha > 0$. The corresponding potential kernel is $U^\alpha$ where

$$U^\alpha f = \int_0^\infty P^\alpha_t f\, dt. \tag{2}$$

Here $f \in b\mathscr{E}^+$ or $f \in \mathscr{E}^+$. If $f \in b\mathscr{E}^+$ and $\alpha > 0$, then the function $U^\alpha f$ is finite everywhere, whereas this need not be true if $f \in \mathscr{E}^+$. This is one reason why we have to deal with $U^\alpha$ sometimes even when we are interested in $U^0 = U$. But the analogy is so nearly complete that we can save a lot of writing by treating the case $\alpha = 0$ and claiming the result for $\alpha \ge 0$. Only the finiteness of the involved quantities should be watched, and the single forbidden operation is "$\infty - \infty$".

The $\alpha$-potential has been introduced in §2.1, as are $\alpha$-superaveraging and $\alpha$-excessive functions. The class of $\alpha$-excessive functions will be denoted by $S^\alpha$, and $S^0 = S$.

We begin with a fundamental lemma from elementary analysis, of which the proof is left to the reader as an essential exercise.

Lemma 1. Let $\{u_{mn}\}$ be a double array of positive numbers, where $m$ and $n$ are positive integer indices. If $u_{mn}$ increases with $n$ for each fixed $m$, and increases with $m$ for each fixed $n$, then we have

$$\lim_m \lim_n u_{mn} = \lim_n \lim_m u_{mn} = \lim_n u_{nn}.$$

Of course these limits may be $\infty$.
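Lemma 1 is easy to test on a concrete array. In the sketch below (an illustration with a made-up array, not part of the text), $u_{mn} = 1 - 2^{-m} - 3^{-n}$ increases in each index separately; since the iterated limits of a monotone array are suprema, the supremum over a finite grid coincides with the supremum along the diagonal.

```python
# A made-up double array u_{mn} = 1 - 2^{-m} - 3^{-n}, increasing in m for
# each fixed n and in n for each fixed m, with sup (= all three limits in
# Lemma 1) equal to 1.
def u(m, n):
    return 1.0 - 2.0 ** (-m) - 3.0 ** (-n)

N = 40
grid_sup = max(u(m, n) for m in range(1, N) for n in range(1, N))
diag_sup = max(u(n, n) for n in range(1, N))

# For a monotone array the iterated limits are sups, so the sup over the
# finite grid coincides with the sup along the diagonal:
assert grid_sup == diag_sup
assert abs(diag_sup - 1.0) < 1e-9
```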

Proposition 2. For each $\alpha \ge 0$, $S^\alpha$ is a cone which is closed under increasing sequential limits. The class of $\alpha$-superaveraging functions is also such a cone, which is furthermore closed under the minimum operation "$\wedge$".

Proof. It is sufficient to treat the case $\alpha = 0$. To say that $S$ is a cone means: if $f_i \in S$ and $c_i$ are positive constants for $i = 1, 2$, then $c_1 f_1 + c_2 f_2 \in S$. This is trivial. Next let $f_n \in S$ and $f_n \uparrow f$. Then for each $n$ and $t$

$$f_n \ge P_t f_n;$$

letting $n \to \infty$ we obtain $f \ge P_t f$ by monotone convergence. If $f$ is superaveraging, then $P_t f$ increases as $t$ decreases. Hence if $f_n \in S$ and $f_n \uparrow f$, we have

$$f = \lim_n f_n = \lim_n \lim_{t \downarrow 0} P_t f_n = \lim_{t \downarrow 0} \lim_n P_t f_n = \lim_{t \downarrow 0} P_t f,$$

by Lemma 1 applied to $n$ and a sequence of $t\ (>0)$ decreasing to $0$. This proves $f \in S$. Finally, if $f_i$ is superaveraging for $i = 1, 2$, then $f_i \ge P_t f_i \ge P_t(f_1 \wedge f_2)$; hence $f_1 \wedge f_2 \ge P_t(f_1 \wedge f_2)$; namely $f_1 \wedge f_2$ is superaveraging. $\square$

It is remarkable that no simple argument exists to show that $S$ is closed under "$\wedge$". A deep proof will be given in §3.4. The following special situation is very useful.

Proposition 3. Suppose $P_t$ converges vaguely to $P_0$ as $t \downarrow 0$. If $f$ is superaveraging and also lower semi-continuous, then $f$ is excessive.

Proof. By a standard result on vague convergence, we have under the stated hypotheses:

$$f = P_0 f \le \liminf_{t \downarrow 0} P_t f \le f. \qquad \square$$

If $(P_t)$ is Fellerian, then the condition on $(P_t)$ in Proposition 3 is satisfied. More generally, it is satisfied if the sample paths of the associated Markov process are right continuous at $t = 0$. For then if $f \in bC$, as $t \downarrow 0$:

$$P_t f(x) = E^x\{f(X_t)\} \to f(x). \tag{3}$$

Here we have a glimpse of the interplay between the semigroup and the process. Incidentally, speaking logically, we should have said "an associated Markov process" in the above.

Proposition 4. If $\alpha < \beta$, then $S^\alpha \subset S^\beta$. For each $\alpha \ge 0$, $S^\alpha = \bigcap_{\beta > \alpha} S^\beta$.

Proof. It is trivial that if $\alpha < \beta$, then $P^\alpha_t f \ge P^\beta_t f$ for $f \in \mathscr{E}^+$. Now if $f = \lim_{t \downarrow 0} P^\beta_t f$ for any one value of $\beta \ge 0$, then the same limit relation holds for all values of $\beta \ge 0$, simply because $\lim_{t \downarrow 0} e^{-\beta t} = 1$ for all $\beta \ge 0$. The proposition follows quickly from these remarks. $\square$

The next result gives a basic connection between superaveraging functions and excessive functions. It is sufficient to treat the case $\alpha = 0$. If $f$ is superaveraging, we define its regularization $\hat{f}$ as follows:

$$\hat{f}(x) = \lim_{t \downarrow 0} \uparrow P_t f(x). \tag{4}$$

We have already observed that the limit above is monotone. Hence we may define $P_{t+} f(x) = \lim_{s \downarrow\downarrow t} P_s f(x)$.

Proposition 5. $\hat{f} \in S$, and $\hat{f}$ is the largest excessive function not exceeding $f$. We have

$$\forall t \ge 0: \quad P_t \hat{f} = P_{t+} f. \tag{5}$$

Proof. We prove (5) first, as follows:

$$P_t \hat{f} = P_t\Big(\lim_{s \downarrow 0} \uparrow P_s f\Big) = \lim_{s \downarrow 0} \uparrow P_{t+s} f = P_{t+} f.$$

It is immediate from (4) that $\hat{f} \le f$ and $\hat{f} \ge P_t f \ge P_t \hat{f}$. Furthermore by (5),

$$\lim_{t \downarrow 0} \uparrow P_t \hat{f} = \lim_{t \downarrow 0} \uparrow P_{t+} f = \lim_{t \downarrow 0} \uparrow P_t f = \hat{f},$$

where the second equation is by elementary analysis; hence $\hat{f} \in S$. If $g \in S$ and $g \le f$, then $P_t g \le P_t f$; letting $t \downarrow 0$ we obtain $g \le \hat{f}$. $\square$

The next result contains an essential calculation pertaining to a potential.

Theorem 6. Suppose $f \in S$, $P_t f < \infty$ for each $t > 0$, and

$$\lim_{t \uparrow \infty} P_t f = 0. \tag{6}$$

Then we have

$$f = \lim_{h \downarrow 0} \uparrow U\left(\frac{f - P_h f}{h}\right). \tag{7}$$

Proof. The hypothesis $P_t f < \infty$ allows us to subtract below. We have for $h > 0$:

$$\int_0^t P_s(f - P_h f)\, ds = \int_0^t P_s f\, ds - \int_h^{t+h} P_s f\, ds = \int_0^h P_s f\, ds - \int_t^{t+h} P_s f\, ds.$$

If we divide through by $h$ above, then let $t \uparrow \infty$, the last term converges to zero by (6) and we obtain

$$U\left(\frac{f - P_h f}{h}\right) = \frac{1}{h} \int_0^h P_s f\, ds. \tag{8}$$

The integrand on the left being positive because $f \ge P_h f$, this shows that the limit above is the potential shown in the right member of (7). When $h \downarrow 0$, the right member increases to the limit $f$, because $\lim_{s \downarrow 0} \uparrow P_s f = f$. This establishes (7). $\square$
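Formula (7) can be checked in a finite-state setting, where all the operators are matrices. The sketch below (an illustration with a made-up generator, not part of the text) kills a two-state chain at rate 1, so that $P_t f \to 0$ and the potential kernel is the matrix $U = (I - Q)^{-1}$; for the potential $f = Ug$ the approximations $U((f - P_h f)/h)$ are seen to increase to $f$ as $h \downarrow 0$.

```python
import numpy as np

# Made-up finite-state illustration: kill a two-state chain at rate 1, so
# B = Q - I generates a submarkovian semigroup P_h = exp(hB) with
# P_h f -> 0 as h -> infinity, and the potential kernel is U = (-B)^{-1}.
Q = np.array([[-2.0, 2.0], [3.0, -3.0]])
I2 = np.eye(2)
B = Q - I2                           # eigenvalues -1 and -6

def P(h):
    # spectral form of exp(hB) for this particular B
    return np.exp(-h) * (B + 6 * I2) / 5 - np.exp(-6 * h) * (B + I2) / 5

U = np.linalg.inv(-B)
g = np.array([1.0, 4.0])
f = U @ g                            # a potential U g, hence excessive

def approx(h):
    return U @ (f - P(h) @ f) / h    # U((f - P_h f)/h) in (7)

# The approximations increase to f as h decreases to 0:
errs = [np.max(np.abs(approx(h) - f)) for h in (1.0, 0.1, 1e-4)]
assert errs[0] > errs[1] > errs[2]
assert errs[2] < 1e-3
```

By (8), each approximation equals the average $h^{-1}\int_0^h P_s f\,ds$, which explains the monotone convergence observed.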

Let us observe that there is an obvious analogue of Theorem 6 for $S^\alpha$. If $\alpha > 0$ and $f \in b\mathscr{E}^+$, then the corresponding conditions in the theorem are satisfied. Hence (7) holds when $U$ and $P_h$ are replaced by $U^\alpha$ and $P^\alpha_h$ for such an $f$. This is an important case of the theorem.

An alternative approach to excessive functions is through the use of the resolvents $(U^\alpha)$ instead of the semigroup $(P_t)$. This yields a somewhat more general theory but less legible formulas, which will now be discussed briefly. We begin with the celebrated resolvent equation (9) below.

Proposition 7. For $\alpha > 0$ and $\beta > 0$, we have

$$U^\alpha - U^\beta + (\alpha - \beta) U^\alpha U^\beta = 0 = U^\alpha - U^\beta + (\alpha - \beta) U^\beta U^\alpha. \tag{9}$$

Proof. This is proved by the following calculations, for $f \in b\mathscr{E}^+$ and $\alpha \ne \beta$:
$$U^\alpha U^\beta f = \int_0^\infty e^{-\alpha s} P_s\left(\int_0^\infty e^{-\beta t} P_t f\, dt\right) ds = \int_0^\infty \int_0^\infty e^{-\alpha s - \beta t} P_{s+t} f\, dt\, ds$$

$$= \int_0^\infty e^{-\beta u} P_u f \left(\int_0^u e^{-(\alpha - \beta)s}\, ds\right) du = \frac{1}{\beta - \alpha} \int_0^\infty (e^{-\alpha u} - e^{-\beta u}) P_u f\, du = \frac{U^\alpha f - U^\beta f}{\beta - \alpha}.$$

Note that the steps above are so organized that it is immaterial whether $\beta - \alpha > 0$ or $< 0$. This remark establishes the second as well as the first equation in (9). $\square$
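In a finite-state setting the resolvent is the matrix $(\alpha I - Q)^{-1}$, and (9) reduces to linear algebra. The sketch below (an illustration with a made-up generator, not part of the text) verifies the resolvent equation and the commutativity $U^\alpha U^\beta = U^\beta U^\alpha$ numerically.

```python
import numpy as np

# Made-up finite-state illustration: for a chain with generator Q, the
# resolvent U^a f = \int_0^infty e^{-a t} P_t f dt is the matrix (aI - Q)^{-1}.
Q = np.array([[-2.0, 2.0], [3.0, -3.0]])
I2 = np.eye(2)

def U(a):
    return np.linalg.inv(a * I2 - Q)

a, b = 1.5, 4.0
# Resolvent equation (9): U^a - U^b + (a - b) U^a U^b = 0 ...
lhs = U(a) - U(b) + (a - b) * U(a) @ U(b)
assert np.max(np.abs(lhs)) < 1e-12
# ... and the two resolvents commute:
assert np.max(np.abs(U(a) @ U(b) - U(b) @ U(a))) < 1e-12
```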

If $f \in S$, then since $P_t f \le f$ for every $t$, we have

$$\forall \alpha > 0: \quad \alpha U^\alpha f = \int_0^\infty \alpha e^{-\alpha t} P_t f\, dt \le f; \tag{10}$$

and since $P_t f \uparrow f$ as $t \downarrow 0$,

$$\lim_{\alpha \uparrow \infty} \uparrow \alpha U^\alpha f = f. \tag{11}$$

[The case $f(x) = \infty$ in (11) should be scrutinized.] The converse is rather tricky to prove.

Proposition 8. If $f \in \mathscr{E}^+$ and (10) is true, then $\lim_{\alpha \uparrow \infty} \uparrow \alpha U^\alpha f = \hat{f}$. If (11) is also true then $f$ is excessive.

Proof. Let $0 < \beta < \alpha$ and $f \in b\mathscr{E}^+$. Then we have by (9):

$$U^\alpha f = U^\beta[f - (\alpha - \beta)U^\alpha f]. \tag{12}$$

Here subtraction is allowed because $U^\alpha f < \infty$. If (10) is true, then $g_{\alpha\beta} = f - (\alpha - \beta)U^\alpha f \ge 0$. Since $U^\beta g_{\alpha\beta}$ is $\beta$-excessive, $U^\alpha f$ is $\beta$-excessive for all $\beta \in (0, \alpha)$. Hence $U^\alpha f \in S$ by Proposition 4. Next, we see from (12) by simple arithmetic that

$$\alpha U^\alpha f - \beta U^\beta f = (\alpha - \beta) U^\beta (f - \alpha U^\alpha f) \ge 0$$

by (10). Hence if we put

$$f^* = \lim_{\alpha \uparrow \infty} \uparrow \alpha U^\alpha f = \lim_{\alpha \uparrow \infty} \uparrow U^\alpha(\alpha f),$$

we have $f^* \in S$ by Proposition 2.

For a general $f \in \mathscr{E}^+$ satisfying (10), let $f_n = f \wedge n$. Then $\alpha U^\alpha f_n \le f_n$ by (10) and the inequality $\alpha U^\alpha n \le n$. Hence we may apply what has just been proved to each $f_n$ to obtain $f^*_n = \lim_{\alpha \uparrow \infty} \uparrow \alpha U^\alpha f_n$. Also the inequality $\alpha U^\alpha f_n \ge \beta U^\beta f_n$ leads to $\alpha U^\alpha f \ge \beta U^\beta f$, if $\alpha \ge \beta$. Hence we have by Lemma 1:

$$f^* = \lim_{\alpha \uparrow \infty} \uparrow \lim_n \uparrow \alpha U^\alpha f_n = \lim_n \uparrow \lim_{\alpha \uparrow \infty} \uparrow \alpha U^\alpha f_n = \lim_n \uparrow f^*_n.$$

Since each $f^*_n \in S$ as shown above, we conclude that $f^* \in S$. If $g \in S$ and $g \le f$, then $\alpha U^\alpha g \le \alpha U^\alpha f$; letting $\alpha \uparrow \infty$ we obtain $g \le f^*$. This identifies $f^*$ as the largest excessive function $\le f$, hence $f^* = \hat{f}$ by Proposition 5. The second assertion of Proposition 8 is now trivial. $\square$
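Relations (10) and (11) can likewise be checked on a finite state space. In the sketch below (a made-up example, not part of the text), a two-state chain is killed at rate 1, $f = Ug$ is a potential (hence excessive), and $\alpha U^\alpha f$ is seen to stay below $f$ and to increase to $f$ as $\alpha \uparrow \infty$.

```python
import numpy as np

# Made-up finite-state illustration of (10) and (11): kill a two-state
# chain at rate 1 (B = Q - I submarkovian), take U^a = (aI - B)^{-1} and
# the potential f = U g, which is excessive.
Q = np.array([[-2.0, 2.0], [3.0, -3.0]])
I2 = np.eye(2)
B = Q - I2
g = np.array([1.0, 4.0])
f = np.linalg.inv(-B) @ g            # f = U g

def aUa_f(a):
    return a * np.linalg.inv(a * I2 - B) @ f

# (10): a U^a f <= f for every a > 0 (here f - a U^a f = U^a g >= 0):
for a in (1.0, 10.0, 100.0):
    assert np.all(aUa_f(a) <= f)
# (11): a U^a f increases to f as a -> infinity:
assert np.all(aUa_f(10.0) <= aUa_f(100.0))
assert np.max(np.abs(aUa_f(1e6) - f)) < 1e-5
```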

Proposition 8 has an immediate extension as follows. For each $\beta \ge 0$, $f \in S^\beta$ if and only if

$$\forall \alpha > 0: \quad \alpha U^{\alpha+\beta} f \le f \quad \text{and} \quad \lim_{\alpha \uparrow \infty} \uparrow \alpha U^{\alpha+\beta} f = f. \tag{14}$$

This is left as an exercise.

The next result is extremely useful. It reduces many problems concerning an excessive function to the same ones for potentials, which are often quite easy.

Theorem 9. If $f \in S$, then for each $\beta > 0$, there exists $g_n \in b\mathscr{E}^+$ such that

$$f = \lim_n \uparrow U^\beta g_n. \tag{15}$$

Proof. Suppose first that $f \in b\mathscr{E}^+$. Then as in the preceding proof, we have $U^\alpha(\alpha f) = U^\beta(\alpha g_{\alpha\beta})$. Thus we obtain (15) from (11) with $g_n = n g_{n\beta}$. In general we apply this to obtain for each $k$, $f \wedge k = \lim_n \uparrow U^\beta g^{(k)}_n$. It follows then by Lemma 1 that

$$f = \lim_k \uparrow \lim_n \uparrow U^\beta g^{(k)}_n = \lim_n \uparrow U^\beta g^{(n)}_n. \qquad \square$$

We remark that this proposition is also a consequence of the extension of Theorem 6 to $\beta$-excessive functions, because then $\lim_{t \uparrow \infty} P^\beta_t f = 0$ for $f \in b\mathscr{E}^+$, etc. But the preceding proof is more general in the sense that it makes use only of (10) and (11), and the resolvent equation, without going back to the semigroup.
It is an important observation that Theorem 9 is false with $\beta = 0$ in (15); see Exercise 3 below. An additional assumption, which turns out to be of particular interest for Hunt processes, will now be introduced to permit $\beta = 0$ there. Since we are proceeding in an analytic setting, we must begin by assuming that

$$\forall x \in \mathbf{E}: \quad U(x, \mathbf{E}) > 0. \tag{16}$$

This is trivially true for a process with right continuous paths; for a more general situation see Exercise 2 below. Remember however that $U(\partial, \mathbf{E}) = 0$. To avoid such exceptions we shall agree in what follows that a "point" means a point in $\mathbf{E}$, and a "set" means a subset of $\mathbf{E}$, without specific mention to the
contrary. The case of $\partial$ can always be decided by an extra inspection, when it is worth the pain.

Definition. The semigroup $(P_t)$ or the corresponding $U$ is called transient iff there exists a function $h \in \mathscr{E}^+$ such that

$$0 < Uh < \infty \quad \text{on } \mathbf{E}. \tag{17}$$

An immediate consequence of (17) is as follows. There exists a sequence of $h_n \in \mathscr{E}^+$ (with $h_n(\partial) = 0$) such that

$$Uh_n > 0 \quad \text{and} \quad Uh_n \uparrow \infty, \quad \text{both on } \mathbf{E}. \tag{18}$$

We may take $h_n = nh$ to achieve this.

The probabilistic meaning of transience as defined above, as well as some alternative conditions, will be discussed later in §3.7.

Proposition 10. If $(P_t)$ is transient, and $f \in S$, then there exists $g_n \in b\mathscr{E}^+$ such that

$$f = \lim_n \uparrow Ug_n. \tag{19}$$

In fact, $g_n \le n^2$ and $Ug_n \le n$.

Proof. Put

$$f_n = f \wedge n \wedge Uh_n,$$

where $h_n$ is as in (18). Then we have

$$P_t f_n \le P_t Uh_n = \int_t^\infty P_s h_n\, ds \downarrow 0$$

as $t \uparrow \infty$, because $Uh_n < \infty$. By Proposition 2, $f_n$ is superaveraging; let its regularization be $\hat{f}_n$. Since $P_t \hat{f}_n \le P_t f_n < \infty$, and $\to 0$ as $t \uparrow \infty$, we may apply Theorem 6 to obtain

$$\hat{f}_n = \lim_k \uparrow Ug_{nk}$$

where

$$g_{nk} = k(\hat{f}_n - P_{1/k}\hat{f}_n) \le kn.$$

From the proof of Theorem 6 we see that

$$Ug_{nk} = k \int_0^{1/k} P_s \hat{f}_n\, ds \le n.$$

For each $n$, $Ug_{nk}$ increases with $k$; for each $k$, $Ug_{nk}$ increases with $n$; hence we have by Lemma 1:

$$\lim_n \uparrow \hat{f}_n = \lim_n \uparrow \lim_k \uparrow Ug_{nk} = \lim_n \uparrow Ug_{nn}.$$

On the other hand, we have again by Lemma 1:

$$\lim_n \uparrow \hat{f}_n = \lim_n \uparrow \lim_{t \downarrow 0} \uparrow P_t f_n = \lim_{t \downarrow 0} \uparrow \lim_n \uparrow P_t f_n = \lim_{t \downarrow 0} \uparrow P_t f = f.$$

It follows that (19) is true with $g_n = g_{nn}$. $\square$


Exercises

1. A measure $\mu$ on $\mathscr{E}$ is said to be c-finite ("countably finite") iff there is a countable collection of finite measures $\{\mu_n\}$ such that $\mu(A) = \sum_{n=1}^\infty \mu_n(A)$ for all $A \in \mathscr{E}$. Show that if $\mu$ is $\sigma$-finite, then it is c-finite, but not vice versa. For instance, Fubini's theorem holds for c-finite measures. Show that for each $x$, $U(x, \cdot)$ is c-finite.

2. Show that for each $x \in \mathbf{E}$, $P_t(x, \mathbf{E})$ is a decreasing function of $t$. The following properties are equivalent:
(a) $U(x, \mathbf{E}) = 0$; (b) $\lim_{t \downarrow 0} P_t(x, \mathbf{E}) = 0$; (c) $P^x\{\zeta = 0\} = 1$.

3. For the Brownian motion in $R^1$, show that there cannot exist positive Borel (or Lebesgue) measurable functions $g_n$ such that $\lim_{n \to \infty} Ug_n = 1$. Thus Theorem 9 is false for $\beta = 0$. [See Theorem 1 of §3.7 below for general elucidation.]

4. If $f \in \mathscr{E}^+$ and $\alpha U^\alpha f \le f$ for all $\alpha > 0$, then $P_t f \le f$ for (Lebesgue) almost all $t \ge 0$. [Hint: $\lim_{\alpha \uparrow \infty} \alpha U^\alpha f = f^* \le f$ and $U^\alpha f^* = U^\alpha f$ for all $\alpha > 0$.]

3.3. Hitting Times

We have proved the optionality of $T_A$ for an open or a closed set $A$, in Theorem 6 of §2.4. The proof is valid for a Hunt process. It is a major advance to extend this to an arbitrary Borel set and to establish the powerful approximation theorems "from inside" and "from outside". The key is the theory of capacity due to Choquet. We begin by identifying the probability of hitting a set before a fixed time as a Choquet capacity.

Let $\{X_t, \mathscr{F}_t, t \in \mathbf{T}\}$ be a Hunt process as specified in §3.1. For simplicity of notation, we shall write $P$ for $P^\mu$ (for an arbitrary probability measure $\mu$ on $\mathscr{E}$); but recall that $\mathscr{F}_t \subset \mathscr{F}^\mu_t$ and that $(\mathscr{F}^\mu, P^\mu)$ is complete. Fix $t \ge 0$. For any subset

$A$ of $\mathbf{E}_\partial$, we put

$$A' = \{\omega \in \Omega \mid \exists s \in [0, t]: X_s(\omega) \in A\}.$$

This is the projection on $\Omega$ of the set of $(s, \omega)$ in $[0, t] \times \Omega$ such that $X(s, \omega) \in A$. The mapping $A \to A'$ from the class of all subsets of $\mathbf{E}_\partial$ to the class of all subsets of $\Omega$ has the following properties:

(i) $A_1 \subset A_2 \Rightarrow A'_1 \subset A'_2$;
(ii) $(A_1 \cup A_2)' = A'_1 \cup A'_2$;
(iii) $A_n \uparrow A \Rightarrow A'_n \uparrow A'$.

In view of (i) and (ii), (iii) is equivalent to

(iii') $(\bigcup_n A_n)' = \bigcup_n A'_n$ for an arbitrary sequence $\{A_n\}$.

So far these properties are trivial and remain valid when the fundamental interval $[0, t]$ is replaced by $[0, \infty)$. The next proposition will depend essentially on the compactness of $[0, t]$.

We shall write $A_1 \,\dot\subset\, A_2$ to denote $P(A_1 \setminus A_2) = 0$, and $A_1 \doteq A_2$ to denote $P(A_1 \,\triangle\, A_2) = 0$.

Proposition 1. Let $G_n$ be open, $K$ be compact, such that

$$K = \bigcap_n G_n \Big(= \bigcap_n \overline{G}_n\Big). \tag{1}$$

Then we have

$$K' \doteq \bigcap_n G'_n. \tag{2}$$

Proof. This is essentially a re-play of Theorem 6 of §2.4. Consider the first entrance times $D_{G_n}$; these are optional (relative to $\{\mathscr{F}^0_t\}$). If $\omega \in \bigcap_n G'_n$, then $D_{G_n}(\omega) \le t$ for all $n$. Let $D(\omega) = \lim_n \uparrow D_{G_n}(\omega)$; clearly $D(\omega) \ge D_{G_n}(\omega)$ and $D(\omega) \le t$. Put

$$N = \big\{D < \infty;\ \lim_n X(D_{G_n}) \ne X(D)\big\};$$

we have $P(N) = 0$ by quasi left continuity. If $\omega \in \bigcap_n G'_n \setminus N$, then $X(D) \in \bigcap_n \overline{G}_n = K$, since $X(D_{G_n}) \in \overline{G}_n$ by right continuity. Thus $D_K(\omega) \le D(\omega)$ and so $D_K(\omega) = D(\omega) \le t$; namely $\omega \in K'$. We have therefore proved $\bigcap_n G'_n \,\dot\subset\, K'$. But $\bigcap_n G'_n \supset K'$ by (i) above, hence (2) is proved. $\square$

Proposition 2. Let $K_n$ be compact, $K_n \downarrow$ and $\bigcap_n K_n = K$. Then

$$K' \doteq \bigcap_n K'_n. \tag{3}$$


Proof. $K$ being given, let $\{G_n\}$ be as in Proposition 1. For each $m$, $G_m \supset K = \bigcap_n K_n$; hence by a basic property of compactness, there exists $n_m$ such that $G_m \supset K_{n_m}$ and consequently $G'_m \supset K'_{n_m}$ by (i). It follows that

$$K' \doteq \bigcap_m G'_m \supset \bigcap_m K'_{n_m} \supset \bigcap_n K'_n \supset K',$$

where the first equation is just (2). This implies (3). $\square$

Corollary. For each compact $K$, and $\varepsilon > 0$, there exists open $G$ such that $K \subset G$ and

$$P(K') \le P(G') \le P(K') + \varepsilon.$$

In particular

$$P(K') = \inf_{G \supset K} P(G') \tag{4}$$

where the inf is over all open $G$ containing $K$.

From here on in this section, the letters $A$, $B$, $G$, $K$ are reserved for arbitrary, Borel, open, compact sets respectively.

Definition. Define a set function $C$ for all subsets of $\mathbf{E}_\partial$ as follows:

(a) $C(G) = P(G')$;
(b) $C(A) = \inf_{G \supset A} C(G)$.

Clearly $C$ is "monotone", namely $A_1 \subset A_2$ implies $C(A_1) \le C(A_2)$. Note also that it follows from (4) that

$$C(K) = P(K'). \tag{5}$$

We proceed to derive properties of this function $C$. Part (b) of its definition is reminiscent of an outer measure, which is indeed a particular case; on the other hand, it should be clear from part (a) that $C$ is not additive on disjoint sets. What replaces additivity is a strong kind of subadditivity, as given below.

Proposition 3. $C$ is "strongly subadditive" over open sets, namely:

$$C(G_1 \cup G_2) + C(G_1 \cap G_2) \le C(G_1) + C(G_2). \tag{6}$$

Proof. We have, as a superb example of the facility (felicity) of reasoning with sample functions:

$$(G_1 \cup G_2)' - G'_1 = \{\omega \mid \exists s \in [0,t]: X_s(\omega) \in G_1 \cup G_2;\ \forall s \in [0,t]: X_s(\omega) \notin G_1\}$$

$$\subset \{\omega \mid \exists s \in [0,t]: X_s(\omega) \in G_2;\ \forall s \in [0,t]: X_s(\omega) \notin G_1 \cap G_2\}$$

$$= G'_2 - (G_1 \cap G_2)'.$$

Taking probabilities, we obtain

$$P((G_1 \cup G_2)') + P((G_1 \cap G_2)') \le P(G'_1) + P(G'_2).$$

Up to here $G_1$ and $G_2$ may be arbitrary sets. Now we use definition (a) above to convert the preceding inequality into (6). $\square$
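The sample-path argument above can be replayed on a finite path space. In the sketch below (an illustration, not part of the text), a $\pm 1$ random walk over five time points stands in for the process; $A'$ and $C$ are computed by enumeration, and both property (ii) of the mapping $A \to A'$ and the strong subadditivity (6) are verified.

```python
from itertools import product

# Finite stand-in for the sample-path argument: a +-1 random walk from 0
# observed at s = 0, 1, ..., 4, each of the 16 paths with probability 1/16.
t = 4
paths = []
for steps in product((-1, 1), repeat=t):
    x, path = 0, [0]
    for s in steps:
        x += s
        path.append(x)
    paths.append(tuple(path))

def hitting(A):
    # A' = {w : X_s(w) in A for some s in [0, t]}
    return {p for p in paths if any(x in A for x in p)}

def C(A):
    return len(hitting(A)) / len(paths)

A1, A2 = {1}, {-1}
# property (ii) of the mapping A -> A':
assert hitting(A1 | A2) == hitting(A1) | hitting(A2)
# strong subadditivity (6); here the inequality is strict:
assert C(A1 | A2) + C(A1 & A2) <= C(A1) + C(A2)
```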

The same argument shows $C$ is also strongly subadditive over all compact sets, because of (5). Later we shall see that $C$ is strongly subadditive over all capacitable sets by the same token.

Lemma 4. Let $A_n \uparrow$, $A_n \subset G_n$, $\varepsilon_n > 0$ such that

$$C(G_n) \le C(A_n) + \varepsilon_n. \tag{7}$$

Then we have for each finite $m$:

$$C\Big(\bigcup_{n=1}^m G_n\Big) \le C(A_m) + \sum_{n=1}^m \varepsilon_n. \tag{8}$$

Proof. For $m = 1$, (8) reduces to (7). Assume (8) is true as shown and observe that

$$A_m \subset \Big(\bigcup_{n=1}^m G_n\Big) \cap G_{m+1}. \tag{9}$$

We now apply the strong subadditivity of $C$ over the two open sets $\bigcup_{n=1}^m G_n$ and $G_{m+1}$, its monotonicity, and (9) to obtain

$$C\Big(\bigcup_{n=1}^{m+1} G_n\Big) \le C\Big(\bigcup_{n=1}^m G_n\Big) + C(G_{m+1}) - C\Big(\Big(\bigcup_{n=1}^m G_n\Big) \cap G_{m+1}\Big)$$

$$\le C(A_m) + \sum_{n=1}^m \varepsilon_n + C(A_{m+1}) + \varepsilon_{m+1} - C(A_m) = C(A_{m+1}) + \sum_{n=1}^{m+1} \varepsilon_n.$$

This completes the induction on $m$. $\square$

We can now summarize the properties of C in the theorem below.

Theorem 5. We have

(i) $A_1 \subset A_2 \Rightarrow C(A_1) \le C(A_2)$;
(ii) $A_n \uparrow A \Rightarrow C(A_n) \uparrow C(A)$;
(iii) $K_n \downarrow K \Rightarrow C(K_n) \downarrow C(K)$.

Proof. We have already mentioned (i) above. Next, using the notation of Lemma 4, we have

$$C(A) \le C\Big(\bigcup_n G_n\Big) = P\Big(\bigcup_n G'_n\Big) = \lim_m C\Big(\bigcup_{n=1}^m G_n\Big)$$

by definition (a) of $C$, and the monotone property of the probability measure $P$. Applying Lemma 4 with $\varepsilon_n = \varepsilon 2^{-n}$, we see from (8) that

$$C(A) \le \lim_m C(A_m) + \varepsilon.$$

Since $\varepsilon$ is arbitrary, (ii) follows from this inequality and (i). Finally, it follows from (5), Proposition 2, and the monotone property of $P$ that

$$C(K) = P(K') = P\Big(\bigcap_n K'_n\Big) = \lim_n P(K'_n) = \lim_n C(K_n).$$

This proves (iii). $\square$


Definition. A function defined on all the subsets of Ea and taking values in
[ - 00, + CI)] is called a Choquet capacity iff it has the three properties (i), (ii)
and (iii) in Theorem 5. A subset A of Ea is called capacitable iff given any
I: > 0, there exist an open G and a compact K such that K c A c G and

C( G) :s; C(K) + 1:. (10)

This definition (which can be further generalized) is more general than


we need here, since the C defined above takes values in [0,1] only. Here is
the principal theorem of capacitability.

Choquet's Theorem. In a locally compact Hausdorff space with countable base, each analytic set is capacitable.

A set A ⊂ E_∂ is analytic iff there exist a separable complete metric space (alias "Polish space") M and a continuous mapping φ of M into E_∂ such that A = φ(M). Such a definition needs much study to make sense, but we must forego it since we shall not use analytic sets below. Suffice it to say that each Borel set, namely each A in ℰ_∂ in our general notation, is analytic and therefore capacitable by Choquet's theorem. We are ready for the principal result.

Theorem 6. For each Borel set B (i.e., B ∈ ℰ_∂) we have

    C(B) = P(B').                                                        (11)


Proof. Since B is capacitable by Choquet's theorem, for each n ≥ 1 there exist K_n ⊂ B ⊂ G_n such that

    C(G_n) ≤ C(K_n) + 1/n.                                               (12)

We have K'_n ⊂ B' ⊂ G'_n. Let

    A_1 = ⋃_n K'_n,  A_2 = ⋂_n G'_n.                                     (13)

Then for each n ≥ 1:

    K'_n ⊂ A_1 ⊂ B' ⊂ A_2 ⊂ G'_n,

and consequently

    P(A_2) − P(A_1) ≤ P(G'_n) − P(K'_n) = C(G_n) − C(K_n) ≤ 1/n.         (14)

It follows that

    P(B') = P(A_1) = lim_n P(K'_n) = lim_n C(K_n) = C(B),

where the last equation is by (12). □


Corollary. P(B') = lim_n P(K'_n) = lim_n P(G'_n).

The optionality of D_B and T_B follows quickly.

Theorem 7. For each B ∈ ℰ_∂, D_B and T_B are both optional relative to {ℱ_t}.

Proof. Recall that t is fixed in the foregoing discussion. Let us now denote the B' above by B'(t). For the sake of explicitness we will replace (Ω, ℱ, P) above by (Ω, ℱ^μ, P^μ) for each probability measure μ on ℰ_∂, as fully discussed in §2.3. Put also Q_t = (Q ∩ [0, t)) ∪ {t}. Since G is open, the right continuity of s → X_s implies that

    G'(t) = {ω | ∃s ∈ Q_t: X_s(ω) ∈ G}.

Hence G'(t) ∈ ℱ⁰_t ⊂ ℱ_t. Next, we have P^μ{B'(t) △ ⋂_n G'_n(t)} = 0 by (13), hence B'(t) ∈ ℱ_t because ℱ_t contains all P^μ-null sets. Finally, a careful scrutiny shows that for each B ∈ ℰ_∂ and t > 0:

    {ω | D_B(ω) < t} = ⋃_{r∈Q∩[0,t)} {ω | ∃s ∈ [0, r]: X_s(ω) ∈ B} = ⋃_{r∈Q∩[0,t)} B'(r) ∈ ℱ_t.

Hence D_B is optional relative to {ℱ_t}. The same is true of T_B by the argument in Theorem 6 of §2.4. □

It should be recalled that the preceding theorem is also an immediate consequence of Theorem 3 of §1.5. The more laborious proof just given "tells more" about the approximation of a Borel set by compact subsets and open supersets, but the key to the argument, which is concealed in Choquet's theorem, is the projection from T × Ω to Ω, just as in the case of Theorem 3 of §1.5. This is also where analytic sets enter the picture; see Dellacherie and Meyer [1].

The approximation theorem will be given first for D_B, then for T_B, in two parts. The probability measure μ is defined on ℰ_∂, though the point ∂ plays only a nuisance role for what follows and may be ignored.

Theorem 8(a). For each μ and B ∈ ℰ_∂, there exist K_n ⊂ B, K_n ↑ and G_n ⊃ B, G_n ↓ such that

    lim_n ↓ D_{K_n} = D_B = lim_n ↑ D_{G_n},  P^μ-a.s.                   (15)

Proof. The basic idea is to apply the Corollary to Theorem 6 to B'(r) for all r ∈ Q. Thus for each r ∈ Q, we have

    K_{rn} ⊂ B ⊂ G_{rn},  K_{rn} ↑,  G_{rn} ↓,                           (16)

such that

    lim_n P^μ{G'_{rn}(r) − K'_{rn}(r)} = 0.                              (17)

Let {r_j} be an enumeration of Q, and put

    K_n = ⋃_{j=1}^{n} K_{r_j n},  G_n = ⋂_{j=1}^{n} G_{r_j n}.

Put D' = lim_n ↓ D_{K_n} and D'' = lim_n ↑ D_{G_n}; then D' ≥ D_B ≥ D''. We have for each j:

    {D' > r_j > D''} ⊂ ⋂_n {D_{K_n} > r_j > D_{G_n}} ⊂ ⋂_n (G'_n(r_j) − K'_n(r_j)).   (18)

For n ≥ j, we have

    K_{r_j n} ⊂ K_n,  G_n ⊂ G_{r_j n};

hence

    G'_n(r_j) − K'_n(r_j) ⊂ G'_{r_j n}(r_j) − K'_{r_j n}(r_j).
Hence by (17),

    lim_n P^μ{G'_n(r_j) − K'_n(r_j)} = 0,

and consequently by (18), P^μ{D' > r_j > D''} = 0. This being true for all r_j, we conclude that P^μ{D' = D_B = D''} = 1, which is the assertion in (15). □

Theorem 8(b). For each μ and B ∈ ℰ_∂, there exist K_n ⊂ B, K_n ↑ such that

    lim_n ↓ T_{K_n} = T_B,  P^μ-a.s.                                     (19)

If μ is such that μ(B) = 0, then there exist G_n ⊃ B, G_n ↓ such that

    lim_n ↑ T_{G_n} = T_B,  P^μ-a.s.                                     (20)

Proof. The basic formula here is (16) of §2.4:

    T_B = lim_{s↓↓0} ↓ (s + D_B ∘ θ_s),                                  (21)

applied to a sequence s_k ↓↓ 0. Let μ_k = μP_{s_k}. Then for each k ≥ 1, we have by part (a): there exist K_{kn} ⊂ B, K_{kn} ↑ as n ↑, such that

    lim_n ↓ D_{K_{kn}} = D_B,  P^{μ_k}-a.s.,

which means

    lim_n ↓ D_{K_{kn}} ∘ θ_{s_k} = D_B ∘ θ_{s_k},  P^μ-a.s.              (22)

Let K_n = ⋃_{k=1}^{n} K_{kn}; then K_n ⊂ B, K_n ↑, and it is clear from (22) that for each k:

    lim_n ↓ D_{K_n} ∘ θ_{s_k} = D_B ∘ θ_{s_k},  P^μ-a.s.

Therefore, we have by (an analogue of) Lemma 1 in §3.2:

    T_B = lim_k ↓ (s_k + D_B ∘ θ_{s_k}) = lim_k ↓ lim_n ↓ (s_k + D_{K_n} ∘ θ_{s_k})
        = lim_n ↓ lim_k ↓ (s_k + D_{K_n} ∘ θ_{s_k}) = lim_n ↓ T_{K_n}.

This proves (19), and it is important to see that the additional condition μ(B) = 0 is needed for (20). Observe that T_B = D_B unless X_0 ∈ B; hence under the said condition we have P^μ{T_B = D_B} = 1. It follows from this remark that T_G = D_G for an open G, because if X_0 ∈ G we have T_G = 0 (which is not the case for an arbitrary B). Thus we have by part (a):

    lim_n ↑ T_{G_n} = lim_n ↑ D_{G_n} = D_B = T_B,  P^μ-a.s.             □
Remark. The most trivial counterexample to (20) when μ(B) ≠ 0 is the case of uniform motion with B = {0}, μ = ε_0. Under P^μ, it is obvious that T_B = ∞ but T_G = 0 for each open G ⊃ B. Another example is the case of Brownian motion in the plane, if B = {x} and μ = ε_x; see the Example in §3.6.
As a first application of Theorem 8, we consider the "left entrance time" and "left hitting time" of a Borel set B, defined as follows:

    D̄_B(ω) = inf{t ≥ 0 | X_{t−}(ω) ∈ B},
    T̄_B(ω) = inf{t > 0 | X_{t−}(ω) ∈ B}.                                 (23)

We must make the convention X_{0−} = X_0 to give a meaning to D̄_B.

Theorem 9. We have almost surely:

    D̄_G = D_G;  T̄_G = T_G;
    D̄_B ≥ D_B;  T̄_B ≥ T_B.

Proof. Since the relation (21) holds for T̄_B and D̄_B as well, it is sufficient to consider the D's.

If X_t ∈ G and G is open, then there is an open set G_1 such that Ḡ_1 ⊂ G and X_t ∈ G_1. The right continuity of paths then implies that for each ω there exists δ_0(ω) > 0 such that X_{t+δ} ∈ G_1 for 0 < δ ≤ δ_0, and so X_{(t+δ)−} ∈ Ḡ_1. This observation shows D̄_G ≤ D_G. Conversely, if t > 0 and X_{t−} ∈ G, then there exist t_n ↑↑ t such that X_{t_n} ∈ G; whereas if X_{0−} ∈ G then X_0 ∈ G. This shows D_G ≤ D̄_G and consequently D_G = D̄_G.

For a general B, we apply the second part of (15) to obtain G_n ⊃ B such that D_{G_n} ↑ D_B, P^μ-a.s. Since D̄_B ≥ D̄_{G_n} = D_{G_n}, it follows that D̄_B ≥ D_B, P^μ-a.s. □

We can now settle a point remarked in §3.1, by applying Theorem 9 with B = {∂}.

Corollary. For a Hunt process, we have almost surely:

    T̄_{{∂}} ≥ T_{{∂}} = ζ.                                               (24)

This implies: on the set {t < ζ}, the closure of the set of values ⋃_{s∈[0,t]} X(s, ω) is a compact subset of E.

We close this section by making a "facile generalization" which turns out to be useful, in view of later developments in §3.4.

Definition. A set A ⊂ E_∂ is called nearly Borel iff for each finite measure μ on ℰ_∂, there exist two Borel sets B_1 and B_2, depending on μ, such that B_1 ⊂ A ⊂ B_2 and

    P^μ{∃t ≥ 0: X_t ∈ B_2 − B_1} = 0.                                    (25)
This is equivalent to:

    P^μ{T_{B_2−B_1} < ∞} = 0.                                            (26)

Of course, this definition depends on the process {X_t}.

The class of nearly Borel sets will be denoted by ℰ•. It is easy to verify that this is a σ-field. It is included in ℰ~, the universally measurable σ-field, because (26) implies

    μ(B_2 − B_1) = 0.

One can describe ℰ• in a folksy way by saying that the "poor" Hunt process {X_t} cannot distinguish a set in ℰ• from a set in ℰ_∂. It should be obvious that the hitting time of a nearly Borel set is optional, and the approximation theorems above hold for it as well. [We do not need "nearly compact" or "nearly open" sets!] Indeed, if A ∈ ℰ• then for each μ there exists B ∈ ℰ_∂ such that D_B = D_A, P^μ-a.s. Needless to say, this B depends on μ.

A function f on E_∂ to [−∞, +∞] is nearly Borel when f ∈ ℰ•. This is the case if and only if for each μ, there exist two Borel functions f_1 and f_2, depending on μ, such that f_1 ≤ f ≤ f_2 and

    P^μ{∃t ≥ 0: f_1(X_t) ≠ f_2(X_t)} = 0.                                (27)

It follows that we may replace f_1 or f_2 in the relation above by f. We shall see in the next section that all excessive functions are nearly Borel.

3.4. Balayage and Fundamental Structure

Let {X_t, ℱ_t} be a Hunt process, where {ℱ_t} is as specified in §3.1. For f ∈ ℰ_+, α ≥ 0, and optional T, we define the operator P^α_T as follows:

    P^α_T f(x) = E^x{e^{−αT} f(X_T); T < ∞}.                             (1)

We write P_T for P^0_T. If we use the convention that X_∞ = ∂ and f(∂) = 0 for each f, then we may omit "T < ∞" in the expression above. We shall frequently do so without repeating this remark.

The following composition property is fundamental. Recall that if S and T are optional relative to {ℱ_t}, so is S + T ∘ θ_S; see Exercise 7 of §2.3.

Proposition 1. We have

    P^α_S P^α_T = P^α_{S+T∘θ_S}.                                         (2)
Proof. Since X_T(ω) = X(T(ω), ω), we have

    [e^{−αT} f(X_T)] ∘ θ_S = e^{−α(T∘θ_S)} f(X_{S+T∘θ_S}).

It follows that for f ∈ bℰ_+:

    P^α_S P^α_T f = E·{e^{−αS} P^α_T f(X_S)} = E·{e^{−αS} E^{X_S}[e^{−αT} f(X_T)]}
                  = E·{e^{−αS} [e^{−αT} f(X_T)] ∘ θ_S}
                  = E·{e^{−α(S+T∘θ_S)} f(X_{S+T∘θ_S})}.                  □

Here and henceforth we will adopt the notation E·(···) to indicate the function x → E^x(···).
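For constant times S = s and T = t we have S + T ∘ θ_S = s + t, so (2) reduces to the semigroup identity P^α_s P^α_t = P^α_{s+t}. A minimal numerical sketch of this special case follows; the two-state chain and all numbers are our own, with discrete time steps standing in for the continuous parameter.

```python
import math

# Hypothetical 2-state transition matrix; ours, for illustration only.
P = [[0.9, 0.1],
     [0.4, 0.6]]

def mat_mult(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def P_alpha(n, alpha):
    # matrix of the operator f -> E^x[ e^{-alpha n} f(X_n) ], i.e. e^{-alpha n} P^n
    M = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        M = mat_mult(M, P)
    c = math.exp(-alpha * n)
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

alpha, s, t = 0.5, 2, 3
lhs = mat_mult(P_alpha(s, alpha), P_alpha(t, alpha))  # P^a_s P^a_t
rhs = P_alpha(s + t, alpha)                           # P^a_{s+t}
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```

For genuinely random S and T the identity requires the strong Markov property, as in the proof above; the matrix computation only exhibits the constant-time instance.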

Two important instances of (2) are: when S = t, a constant; and when S and T are hitting times of a nearly Borel set A. In the latter case we write P^α_A for P^α_{T_A}. We shall reserve the letter A for a nearly Borel set below.

Definition. A point x is said to be regular for the set A iff

    P^x{T_A = 0} = 1.                                                    (4)

The set of all points in E_∂ which are regular for A will be denoted by A^r; and the union A ∪ A^r is called the fine closure of A and denoted by A*. The nomenclature will be justified in §3.5.

According to the zero-or-one law (Theorem 6 of §2.3), x is not regular for A iff P^x{T_A = 0} = 0, or P^x{T_A > 0} = 1. In this case we say also that the set A is thin at the point x. Let Ā denote the topological closure of A; then if x ∉ Ā, a path starting at x must remain for some time in an open neighborhood of x which is disjoint from A, hence x cannot be regular for A. Thus A* ⊂ Ā. Since in general the set {T_A = 0} belongs to ℱ_0 = ℱ_{0+} and not to ℱ⁰_0, the function x → P^x{T_A = 0} belongs to ℰ~ rather than ℰ. This is a nuisance which will be ameliorated. Observe that (4) is equivalent to

    P^α_A 1(x) = 1                                                       (5)

for each α > 0; indeed P^x{T_A = 0} may be regarded as lim_{α↑∞} P^α_A 1(x). Finally, if x ∈ A^r, then for any f we have P^α_A f(x) = f(x).

Theorem 2. For each x and A, the measure P^α_A(x, ·) is concentrated on A*. In other words,

    P^x{T_A < ∞; X(T_A) ∈ (A*)^c} = 0.

Proof. We have by definition (A*)^c = A^c ∩ (A^r)^c. If y ∈ (A^r)^c, then P^y{T_A > 0} = 1. It follows that

    P^x{T_A < ∞; X(T_A) ∈ (A*)^c} ≤ E^x{T_A < ∞; X(T_A) ∈ A^c; P^{X(T_A)}[T_A > 0]}.   (6)

Applying the strong Markov property at T_A, we see that the right member of (6) is equal to the P^x-probability of the set

    {T_A < ∞; X(T_A) ∈ A^c; T_A ∘ θ_{T_A} > 0},

where T_A ∘ θ_{T_A} means the "time lapse between the first hitting time of A and the first hitting time of A thereafter." If X_{T_A}(ω) ∉ A and T_A ∘ θ_{T_A}(ω) > 0, then the sample function X(·, ω) is not in A for t ∈ [T_A(ω), T_A(ω) + T_A ∘ θ_{T_A}(ω)), a nonempty interval. This is impossible by the definition of T_A. Hence the right member of (6) must be equal to zero, proving the assertion of the theorem. □

The following corollary for α = 0 is just (21) of §2.4, which is true for T_B as well as D_B.

Corollary. If A is a closed set, then P^α_A(x, ·) is concentrated on A, for each α ≥ 0 and each x.

The operator P_A(x, ·) corresponds to what is known as "balayage" or "sweeping out" in potential theory. A unit charge placed at the point x is supposed to be swept onto the set A. The general notion is due to Poincaré; for a modern analytical definition see e.g. Helms [1]. Hunt was able to identify it with the definition given above, and that apparently convinced the potential theorists that "he's got something there"!
We interrupt the thrust of our preceding discussion by an obligatory extension of measurability. According to Theorem 5 of §2.3, for Y ∈ bℱ, hence also for Y ∈ ℱ_+, the function x → E^x(Y) is universally measurable, namely in ℰ~ defined in (18) of §2.3. In particular P^α_A 1 is so measurable for A ∈ ℰ_∂. We have therefore no choice but to enlarge the class of functions considered from ℰ to ℰ~. This could of course have been done from the outset, but it is more convincing to introduce the extension when the need has arisen.

From here on, the transition probability measure P_t(x, ·) is extended to ℰ~. It is easy to see that for each A ∈ ℰ~,

    (t, x) → P_t(x, A)

belongs to ℬ × ℰ~; alternately, (t, x) → P_t f(x) is in ℬ × ℰ~ for each f ∈ bℰ~ or ℰ~_+. It follows that U^α f is in ℰ~ for such an f and α ≥ 0. Finally, an α-excessive function is defined as before except that f ∈ ℰ~_+, rather than f ∈ ℰ_+
as previously supposed. A universally measurable function is sandwiched between two Borel measurable functions (for each finite measure μ on ℰ_∂) and this furnishes the key to its handling. See the proof of Theorem 5 of §2.3 for a typical example of this remark.

In what follows we fix the notation as follows: f and g are functions in ℰ~_+; x ∈ E_∂; α ≥ 0; T is optional relative to {ℱ_t}; A is a nearly Borel set; K is a compact set; G is an open set. These symbols may appear with subscripts.

We begin with the formula:

    P^α_T U^α f(x) = E^x{∫_T^∞ e^{−αt} f(X_t) dt},                       (7)

which is derived as follows. Making the substitution t = T + u, we transform the right member of (7) by the strong Markov property into

    E·{e^{−αT} [∫_0^∞ e^{−αu} f(X_u) du] ∘ θ_T} = E·{e^{−αT} E^{X_T}[∫_0^∞ e^{−αu} f(X_u) du]}
                                                = E·{e^{−αT} U^α f(X_T)},

which is the left member of (7). It follows at once that

    U^α f(x) = E^x{∫_0^T e^{−αt} f(X_t) dt} + P^α_T U^α f(x).            (8)

On {T = ∞} we have P^α_T U^α f = 0 by our convention. Note also that when T is a constant, (8) has already been given in §2.1.

We use the notation {f > 0} for the support of f, namely the set {x ∈ E_∂ | f(x) > 0}.

Theorem 3.

(a) P^α_A U^α f ≤ U^α f;
(b) P^α_A U^α f = U^α f, if {f > 0} ⊂ A;
(c) If we have

    U^α f ≤ U^α g                                                        (9)

on {f > 0}, then (9) is true everywhere.

Proof. Assertion (a) is obvious from (8) with T = T_A; so is (b) if we observe that f(X_t) = 0 for t < T_A, so that the first term on the right side of (8) vanishes. To prove (c) we need the approximation in Theorem 8(b) of §3.3. Let A = {f > 0}; given x, let K_n ⊂ A be such that T_{K_n} ↓ T_A, P^x-a.s. We have then by (7), as n → ∞:

    P^α_{K_n} U^α f(x) = E^x{∫_{T_{K_n}}^∞ e^{−αt} f(X_t) dt} ↑ E^x{∫_{T_A}^∞ e^{−αt} f(X_t) dt} = P^α_A U^α f(x).   (10)
This is a fundamental limit relation which makes potentials easy to handle. By the Corollary to Theorem 2, P_{K_n}(x, ·) is concentrated on K_n, hence U^α f ≤ U^α g on K_n by the hypothesis of (c), yielding

    P^α_{K_n} U^α f(x) ≤ P^α_{K_n} U^α g(x).

Letting n → ∞, using (a) and (b), and (10) for both f and g, we obtain

    U^α f(x) = P^α_A U^α f(x) ≤ P^α_A U^α g(x) ≤ U^α g(x).               □

Assertion (c) is known as the "domination principle" in potential theory, which will be amplified after Theorem 4. Note however that it deals with the potentials of functions rather than potentials of measures, which are much more difficult to deal with.

We state the next few results for S, but they are true for S^α, mutatis mutandis.

Theorem 4. Let f ∈ S. Then P_A f ∈ S and P_A f ≤ f. If A_1 ⊂ A_2, then

    P_{A_1} f ≤ P_{A_2} f.                                               (11)

For each x, there exist compacts K_n ⊂ A, K_n ↑, such that

    P_{K_n} f(x) ↑ P_A f(x).                                             (12)

Proof. Let us begin by recording an essential property of a hitting time which is not shared by a general optional time:

    ∀t ≥ 0: T_A ≤ t + T_A ∘ θ_t;  and  T_A = lim_{t↓0} ↓ (t + T_A ∘ θ_t).   (13)

This is verified exactly like (16) of §2.4, and the intuitive meaning is equally obvious. We have from (7):

    P^α_t P^α_A U^α g(x) = E^x{∫_{t+T_A∘θ_t}^∞ e^{−αs} g(X_s) ds}.

It follows from this and (13) that P^α_A U^α g ∈ S^α. For f ∈ S, we have by Theorem 9 of §3.2, f = lim_k ↑ U^α g_k where α > 0. Hence P^α_A f = lim_k ↑ P^α_A U^α g_k and so P^α_A f ∈ S^α by Proposition 2 of §3.2. Finally P_A f = lim_{α↓0} ↑ P^α_A f; hence P_A f ∈ S by Proposition 4 of §3.2. Next, if A_1 ⊂ A_2, then it follows at once from (7) that P^α_{A_1} U^α g ≤ P^α_{A_2} U^α g. Now the same approximations just shown establish (11), which includes P_A f ≤ f as a particular case.

Next, given x and A, let {K_n} be as in the preceding proof. As in (10), we have

    P^α_{K_n} U^α g_k(x) ↑ P^α_A U^α g_k(x).
Hence by Lemma 1 of §3.2,

    P^α_A f = lim_k ↑ P^α_A U^α g_k = lim_k ↑ lim_n ↑ P^α_{K_n} U^α g_k
            = lim_n ↑ lim_k ↑ P^α_{K_n} U^α g_k = lim_n ↑ P^α_{K_n} f(x).

Letting α ↓ 0 and using Lemma 1 once again we obtain (12). □

Corollary. Part (c) of Theorem 3 is true if U^α g is replaced by any excessive function f in (9).

The next result is crucial, though it will soon be absorbed into the bigger Theorem 6 below.

Theorem 5. Let f ∈ S, and A ∈ ℰ•. If x ∈ A^r, then

    inf_A f ≤ f(x) ≤ sup_A f.

Proof. Since x ∈ A^r, f(x) = P_A f(x) as noted above. It follows from Theorem 8(b) of §3.3 that for each ε > 0, there exists a compact K ⊂ A such that

    P^x{T_K ≤ ε} ≥ 1 − ε,                                                (14)

since P^x{T_A = 0} = 1. At the same time, we may choose K so that

    P_K f(x) ≥ P_A f(x) − ε = f(x) − ε

by (12). Hence by the Corollary to Theorem 2, we have

    P_K(x, K) inf_{y∈K} f(y) ≤ P_K f(x) ≤ sup_{y∈K} f(y).

Since P_K(x, K) ≥ P^x{T_K ≤ ε} ≥ 1 − ε by (14), it follows from the inequalities above that

    (1 − ε) inf_{y∈K} f(y) ≤ f(x) ≤ sup_{y∈K} f(y) + ε.

Since ε is arbitrary this is the assertion of the theorem. □

We have arrived at the following key result. In its proof we shall be more circumspect than elsewhere about pertinent measurability questions.

Theorem 6. Let f ∈ S. Then almost surely t → f(X_t) is right continuous on [0, ∞) and has left limits (possibly +∞) in (0, ∞]. Moreover, f ∈ ℰ•.
Proof. We begin by assuming that f ∈ S ∩ ℰ. Introduce the metric ρ on [0, ∞] which is compatible with the extended Euclidean topology: ρ(a, b) = |a/(1 + a) − b/(1 + b)|, where ∞/(1 + ∞) = 1. For each ε > 0 define

    S_ε(ω) = inf{t > 0 | ρ(f(X_0(ω)), f(X_t(ω))) > ε}.

An argument similar to that for the T in (2) of §3.1 shows that S_ε is optional (relative to {ℱ_t}). It is clear that we have

    ⋂_{k=1}^∞ {S_{1/k} > 0} ⊂ {ω | lim_{t↓0} f(X_t(ω)) = f(X_0(ω))} = Λ,   (15)

say. For each x put A = {y ∈ E_∂ | ρ(f(x), f(y)) > ε}. Then A ∈ ℰ; and S_ε reduces to the hitting time T_A under P^x. It is a consequence of Theorem 5 that x ∉ A^r; for obviously the value f(x) does not lie between the bounds of f on A. Hence for each x, P^x{S_ε > 0} = 1 by Blumenthal's zero-or-one law. Now {S_ε > 0} ∈ ℱ, hence x → P^x{S_ε > 0} is in ℰ~ by Theorem 5 of §2.3, and this implies P^μ{S_ε > 0} = 1 for each probability measure μ on ℰ_∂. It now follows from (15) that Λ ∈ ℱ^μ since (ℱ^μ, P^μ) is complete. This being true for each μ, we have proved that Λ ∈ ℱ and Λ is an almost sure set. For later need let us remark that for each optional S, θ_S^{−1}(Λ) ∈ ℱ by Theorem 5 of §2.3.
Our next step is to define inductively a family of optional times T_α such that the sample function t → f(X_t) oscillates no more than 2ε in each interval [T_α, T_{α+1}). This is analogous to our procedure in the proof of Theorem 1 of §3.1, but it will be necessary to use transfinite induction here. The reader will find it instructive to compare the two proofs and scrutinize the difference. Let us denote by 𝔒 the well-ordered set of ordinal numbers before the first uncountable ordinal (see e.g., Kelley [1; p. 29]). Fix ε, put T_0 ≡ 0, T_1 ≡ S_ε, and for each α ∈ 𝔒, define T_α as follows:

(i) if α has the immediate predecessor α − 1, put

    T_α = T_{α−1} + T_1 ∘ θ_{T_{α−1}};

(ii) if α is a limit ordinal without an immediate predecessor, put

    T_α = sup_{β<α} T_β, where β ranges over all ordinals < α.

Then each T_α is optional by induction. [Note that the sup in case (ii) may be replaced by the sup of a countable sequence.]
Now for each α ∈ 𝔒, we have for each μ:

    P^μ{T_α < ∞; lim_{t↓0} f(X_{T_α+t}) = f(X_{T_α})} = P^μ{T_α < ∞}     (16)

by the strong Markov property and the fact that P^x(Λ) = 1 for each x, proved above. It follows that P^μ-a.e. on the set {T_α < ∞}, the limit relation in the first member of (16) holds; hence T_1 ∘ θ_{T_α} > 0 by the definition of T_1, hence T_{α+1} > T_α. Consider now the numbers

    C_α = E^μ{e^{−T_α}},  α ∈ 𝔒.

We have just shown that if b_α = P^μ{T_α < ∞} > 0, then

    C_{α+1} < C_α.

Now the set 𝔒 has uncountable cardinality. If b_α > 0 for all α ∈ 𝔒, then the uncountably many intervals (C_{α+1}, C_α), α ∈ 𝔒, would be all nonempty and disjoint, which is impossible because each must contain a rational number. Therefore there exists α* ∈ 𝔒 for which b_{α*} = 0; namely

    P^μ{T_{α*} = ∞} = 1.

For each α < α*, if s and t both belong to [T_α, T_{α+1}), then ρ(f(X_s), f(X_t)) ≤ 2ε. This means: there exists Ω_ε with P^μ(Ω_ε) = 1 such that if ω ∈ Ω_ε, then the sample function t → f(X(t, ω)) does not oscillate more than 2ε to the right of any t ∈ [0, ∞). It follows that the set Ω*_ε of ω satisfying the latter requirement belongs to ℱ and is an almost sure set. Since ⋂_{k=1}^∞ Ω*_{1/k} is contained in the set of ω for which t → f(X_t(ω)) is right continuous in [0, ∞), we have proved that the last-mentioned set belongs to ℱ and is an almost sure set.
This is the main assertion of the theorem.
In order to fully appreciate the preceding proof, it is necessary to reflect that no conclusion can be drawn as to the existence of left limits of the sample function in (0, ∞). The latter result will now follow from an earlier theorem on supermartingales. In fact, if we write f_n = f ∧ n, then {f_n(X_t), ℱ_t} is a bounded supermartingale under each P^x, by Proposition 1 of §2.1, since f_n is superaveraging. Almost every sample function t → f_n(X_t) is right continuous by what has just been proved, hence it has left limits in (0, ∞] by Corollaries 1 and 2 to Theorem 1 of §1.4. Since n is arbitrary, it follows (why?) that t → f(X_t) has left limits (possibly +∞) as asserted.

It remains to prove that f ∈ S implies f ∈ ℰ•. In view of Theorem 9 of §3.2, it is sufficient to prove this for U^α g where α > 0 and g ∈ bℰ~_+. For each μ, there exist g_1 and g_2 in ℰ_∂ such that g_1 ≤ g ≤ g_2 and μU^α(g_2 − g_1) = 0. For each t ≥ 0, we have

    E^μ{U^α(g_2 − g_1)(X_t)} = μP_t U^α(g_2 − g_1) ≤ e^{αt} μU^α(g_2 − g_1) = 0.

Thus we have for each t ≥ 0:

    U^α(g_2 − g_1)(X_t) = 0,  P^μ-a.s.                                   (17)

Therefore (17) is also true simultaneously for all rational t > 0. But U^α(g_2 − g_1) ∈ S^α ∩ ℰ, hence the first part of our proof extended to S^α shows that the function of t appearing in (17) is right continuous, hence must vanish identically in t, P^μ-a.s. This means U^α g ∈ ℰ• as asserted. Theorem 6 is completely proved. □

In the proof of Theorem 6, we have spelled out the ℱ-measurability of "the set of all ω such that f(X(·, ω)) is right continuous". Such questions can be handled by a general theory based on T × Ω-analytic sets and projection, as described in §1.5; see Dellacherie and Meyer [1], p. 165.

If f ∈ S, when can f(X_t) = ∞? The answer is given below. We introduce the notation

    ∀t ∈ (0, ∞]: f(X_t)_− = lim_{s↑↑t} f(X_s).                           (18)

This limit exists by Theorem 6, and is to be carefully distinguished from f(X_{t−}). Note also that in a similar notation f(X_t)_+ = f(X_t) = f(X_{t+}).

Theorem 7. Let F = {f < ∞}. Then we have almost surely:

    ∀t > D_F: f(X_t) < ∞,                                                (19)

where D_F is the first entrance time into F.

Proof. If x ∈ F, then under P^x, {f(X_t), ℱ_t} is a right continuous positive supermartingale by Theorem 6. Hence by Corollary 1 to Theorem 1 of §1.4, f(X_t) is bounded in each finite t-interval, and by Corollary 2 to the same theorem, converges to a finite limit as t → ∞. Therefore (19) is true with D_F = 0. In general, let F_n = {f ≤ n}; then F_n ∈ ℰ• by Theorem 6 and so D_{F_n} is optional; also f(X(D_{F_n})) ≤ n by right continuity. It follows by the strong Markov property that (19) is true when the D_F there is replaced by D_{F_n}. Now an easy inspection shows that lim_n ↓ D_{F_n} = D_F, hence (19) holds as written. □

It is obvious that on the set {D_F > 0}, f(X_t) = ∞ for t < D_F, and f(X_t)_− = ∞ for t ≤ D_F; but f(X(D_F)) may be finite or infinite.

There are numerous consequences of Theorem 6. One of the most important is the following basic property of excessive functions, which can now be easily established.

Theorem 8. The class of excessive functions is closed under the minimum operation.

Proof. Let f_1 ∈ S, f_2 ∈ S; then we know f_1 ∧ f_2 is superaveraging. On the other hand,

    lim_{t↓0} P_t(f_1 ∧ f_2)(x) = lim_{t↓0} E^x{f_1(X_t) ∧ f_2(X_t)}
                               ≥ E^x{lim_{t↓0} [f_1(X_t) ∧ f_2(X_t)]} = (f_1 ∧ f_2)(x)

by Fatou's lemma followed by the right continuity of f_i(X_t) at t = 0, i = 1, 2. Hence there is equality above and the theorem is proved. □

In particular, if f ∈ S then f ∧ n ∈ S for any n. This truncation allows us to treat bounded excessive functions before a passage to the limit for the general case, which often simplifies the argument.

As another application of Theorem 6, let us note that P^α_A 1 is α-excessive for each α ≥ 0 and each A ∈ ℰ•, hence P^α_A 1 ∈ ℰ•. Since A^r = {x | P^α_A 1(x) = 1} as remarked in (5), we have A^r ∈ ℰ•, A* ∈ ℰ•. The significance of these sets will be studied in the next section.

Exercises

1. Let f ∈ bℰ_+, T_n ↑ T. Prove that for a Hunt process

    lim_n U^α f(X(T_n)) = E{U^α f(X(T)) | ⋁_n ℱ_{T_n}}.

Hence if we assume that

    ℱ_T = ⋁_n ℱ_{T_n},                                                   (20)

then we have

    lim_n U^α f(X(T_n)) = U^α f(X(T)).

Remark. Under the hypothesis that (20) holds for any optional T_n ↑ T, the function t → U^α f(X(t)) is left continuous. This is a complement to Theorem 6 due to P. A. Meyer; the proof requires projection onto the predictable field, see Chung [5].

2. Let A ∈ ℰ• and suppose T_n ↑ T_A. Then we have for each α > 0, a.s. on the set ⋂_n {T_n < T_A < ∞}:

    lim_n E^{X(T_n)}{e^{−αT_A}} = 1.

[Hint: consider E^x{e^{−αT_A}; T_n < T_A | ℱ_{T_n}} and observe that T_A ∈ ⋁_n ℱ_{T_n}.]

3. Let x_0 be a holding but not absorbing point (see Problem 3 of §3.1). Define

    T = inf{t > 0: X(t) ≠ x_0}.

For an increasing sequence of optional T_n such that T_n ≤ T and T_n ↑ T we have

    P^{x_0}{⋂_{n=1}^∞ [T_n < T]} = 0.

Indeed, it follows that for any optional S, P^{x_0}{0 < S < T} = 0. [Hint: the second assertion follows from the first, by contraposition and transfinite induction, as suggested by J. B. Walsh.]

4. Let f ∈ ℰ~_+ and f ≥ P_K f for every compact K. Let g ∈ ℰ such that U(g^+) ∧ U(g^−) < ∞ (namely U(g) is defined). Then

    f ≥ Ug on {g > 0}

implies f ≥ Ug (everywhere). [Hint: f + U(g^−) ≥ U(g^+) on {g > 0}; for each compact K ⊂ {g > 0} we have f + U(g^−) ≥ P_K(f + U(g^−)) ≥ P_K U(g^+); now approximate {g > 0} by compacts.]

5. Suppose U is transient (see (17) of §3.1). For the f in Problem 4 we have f ≥ αU^α f for every α > 0. [Hint: we may suppose f bounded with compact support; put g = f − αU^α f, then f ≥ U(αg) = U^α(αf) on {g ≥ 0}. This is a true legerdemain due to E. B. Dynkin [1].]

6. Prove Theorem 6 for a Feller process by using Theorem 5 of §1.4. [Hint: show first that U^α f(X_t) is right continuous for f ∈ bℰ_+, which is trivially true for f ∈ C_0.]

7. Let f be excessive and A = {f = ∞}. If f(x_0) < ∞, then P_A 1(x_0) = 0.

8. The optional random variable T is called terminal iff for every t ≥ 0 we have

    T = t + T ∘ θ_t on {T > t},
and
    T = lim_{t↓0} (t + T ∘ θ_t).

Thus the notion generalizes that of a hitting time. Prove that if T is terminal, then x → P^x{T < ∞} is excessive; and x → E^x{e^{−αT}} is α-excessive for each α > 0.

3.5. Fine Properties

Theorem 5 of §3.4 may be stated as follows: if f ∈ S, then the bounds of f on the fine closure of A are the same as those on A itself. Now if f is continuous in any topology, then this is the case when the said closure is taken in that topology. We proceed to define such a topology.
Definition. An arbitrary set A is called finely open iff for each x ∈ A, there exists B ∈ ℰ• such that x ∈ B ⊂ A, and

    P^x{T_{B^c} > 0} = 1.                                                (1)

Namely, almost every path starting at a point in A will remain in a nearly Borel subset of A for a nonempty initial interval of time. Clearly an open set is finely open. It is immediately verified that the class of finely open sets meets the two requirements below to generate a topology:

(a) the union of any collection of finely open sets is finely open;
(b) the intersection of any finite collection of finely open sets is finely open.

We shall denote this class by O. Observe that if we had required a finely open set to belong to ℰ•, condition (a) would fail. On the other hand, there is a fine base consisting of sets in ℰ•; see Exercise 5 below.
Since ∂ is absorbing, the singleton {∂} forms a finely open set, so that ∂ is an isolated point in the fine topology. If φ(x) = E^x{e^{−αζ}}, where ζ = T_{{∂}}, then φ(∂) = 1. We must therefore abandon the convention made earlier that a function defined on E_∂ vanishes at ∂. There may well be other absorbing points besides ∂. A point x is called holding iff x is not regular for the set E_∂ − {x}; in other words, iff P^x-almost every path starting at x will remain at x for a nonempty initial period of time. This is the case if and only if x is isolated in the fine topology, but it need not be absorbing. A stable state in a Markov chain is such an example; indeed under a mild condition all states are stable (see Chung [2]).

A set is finely closed iff its complement in E_∂ is finely open. It is immediate that if B ∈ ℰ• then B is finely closed iff B^r ⊂ B, which is in turn equivalent to B* = B. Thus the operation "*" is indeed the closure operation in the fine topology, justifying the term "fine closure".
The next result yields a vital link between two kinds of continuity.

Theorem 1. Let f ∈ ℰ•. Then f is finely continuous if and only if t → f(X_t) is right continuous on [0, ∞), almost surely.

Proof. Assume the right continuity. For any real constant c, put

    A = {f < c},  A' = {f > c}.

For each x ∈ A, under P^x we have f(X_0) = f(x) < c, hence P^x{lim_{t↓0} f(X_t) < c} = 1 by right continuity at t = 0. This implies (1) with B = A. Hence A is finely open; similarly A' is finely open. Consequently f^{−1}(U) is finely open, first for U = (c_1, c_2) where c_1 < c_2, and then for each open set U of (−∞, ∞). Therefore f is finely continuous, namely continuous in the fine topology, by a general characterization of continuity.
Conversely, let f ∈ ℰ• and f be finely continuous. For each q ∈ Q, put

    A = {f > q},  φ_q(x) = E^x{e^{−T_A}}.

Then A is finely open; also A ∈ ℰ• implies that T_A is optional, and φ_q is 1-excessive. It follows from Theorem 6 of §3.4 that there exists Ω* with P^x(Ω*) = 1 such that if ω ∈ Ω* then

    ∀q ∈ Q: t → φ_q(X_t) is right continuous.                            (2)

We claim that for such an ω, limsup_{s↓↓t} f(X(s, ω)) ≤ f(X(t, ω)) for all t ≥ 0. Otherwise there exist t ≥ 0, t_n ↓↓ t, and a rational q such that

    f(X(t, ω)) < q,  f(X(t_n, ω)) > q for all n.

Since X(t_n, ω) ∈ A and A is finely open, the point X(t_n, ω) is certainly regular for A, hence φ_q(X(t_n, ω)) = 1 for all n. Therefore we have φ_q(X(t, ω)) = 1 by (2). But B = {f < q} is also finely open and X(t, ω) ∈ B, hence by definition the point X(t, ω) is not regular for B^c, a fortiori not regular for A since B^c ⊃ A. Thus φ_q(X(t, ω)) < 1. This contradiction proves the claim. A similar argument shows that liminf_{s↓↓t} f(X(s, ω)) ≥ f(X(t, ω)) for all t ≥ 0, P^x-a.s. Hence t → f(X(t, ω)) is right continuous in [0, ∞), a.s. □

Corollary. If f ∈ S^α, then f is finely continuous.

Theorem 2. If A_1 and A_2 are in ℰ•, then

    (A_1 ∪ A_2)^r = A_1^r ∪ A_2^r.                                       (3)

For each A ∈ ℰ•, we have

    (A^r)^r ⊂ A^r;                                                       (4)
    T_{A^r} ≥ T_A  almost surely.                                        (5)

Proof. (3) is easy. To prove (4), let f = E·{e^{−T_A}}. Then f(x) = 1 if and only if x ∈ A^r. If x ∈ (A^r)^r, then f(x) ≥ inf_{A^r} f by Theorem 5 of §3.4. Hence f(x) = 1. To prove (5), note first that A^r ∈ ℰ• so that T_{A^r} is optional. We have by Theorem 2 of §3.4, almost surely:

    X(T_{A^r}) ∈ (A^r)* = A^r ∪ (A^r)^r = A^r,

where the last equation is by (4). Hence T_A ∘ θ_{T_{A^r}} = 0 by the strong Markov property applied at T_{A^r}, and this says that T_A cannot be strictly greater than T_{A^r}. □
We shall consider the amount of time that the sample paths of a Hunt process spend in a set. Define for A ⊂ E_∂:

    J_A(ω) = {t ∈ T: X(t, ω) ∈ A}.                                       (6)

Thus for each t ≥ 0, "t ∈ J_A(ω)" is equivalent to "X(t, ω) ∈ A". We leave it as an exercise to show that if A ∈ ℰ•, then J_A(ω) ∈ ℬ, a.s. Thus if m denotes the Borel-Lebesgue measure on T, then m(J_A(ω)) is the total amount of time that the sample function X(·, ω) spends in A, called sometimes the "occupation time" of A. It follows that

    m(J_A(ω)) = ∫_0^∞ 1_A(X(t, ω)) dt,                                   (7)

and consequently by Fubini's theorem for each x:

    E^x{m(J_A)} = ∫_0^∞ P_t(x, A) dt = U(x, A).                          (8)

Thus the potential of A is the expected occupation time.
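The identification of the potential with an expected occupation time can be watched numerically. The sketch below is ours and purely illustrative: a one-dimensional Brownian path approximated on a time grid, an interval standing in for the set A, and a discount rate α playing the role of the α-potential U^α; it estimates E^x{∫ e^{−αt} 1_A(X_t) dt} by Monte Carlo and checks only crude a priori bounds, since the estimate itself is random.

```python
import math
import random

# Monte Carlo sketch of the discounted occupation time
#   E^x { integral_0^infinity e^{-alpha t} 1_A(X_t) dt }
# for a Brownian path on a grid of mesh dt. All parameters are illustrative.
random.seed(1)
alpha, dt, horizon = 1.0, 0.01, 20.0
A = (-0.5, 0.5)  # the set A: an open interval

def discounted_occupation(x0):
    t, x, acc = 0.0, x0, 0.0
    while t < horizon:
        if A[0] < x < A[1]:
            acc += math.exp(-alpha * t) * dt   # discounted time spent in A
        x += random.gauss(0.0, math.sqrt(dt))  # Brownian increment
        t += dt
    return acc

est = sum(discounted_occupation(0.0) for _ in range(200)) / 200
# a priori bounds: positive (the path starts inside A), and at most the
# total discounted time on the grid, which is bounded by 1/alpha + dt
assert 0.0 < est <= 1.0 / alpha + dt
```

With A the whole line the estimate would approach 1/α exactly; shrinking A shrinks the expected occupation time, in keeping with the monotonicity of U(x, ·).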

Definition. A set A in ℰ• is said to be of zero potential iff U(·, A) ≡ 0.

Proposition 3. If U^α(·, A) ≡ 0 for some α ≥ 0, then U^α(·, A) ≡ 0 for all α ≥ 0.

Proof. We have by the resolvent equation, for any β:

    U^β(·, A) = U^α(·, A) + (α − β) U^β U^α(·, A).                       □

It follows from (8) that if A is of zero potential, then m(J_A) = 0 almost surely; namely, almost every path spends "almost no time" (in the sense of Borel-Lebesgue measure) in A.

Proposition 4. If A is of zero potential, then A^c = E_∂ − A is finely dense.

Proof. A set is dense in any topology iff its closure in that topology is the whole space; equivalently, iff each nonempty open set in that topology contains at least one point of the set. Let O be a finely open set and x ∈ O. Then almost every path starting at x spends a nonempty initial interval of time in a nearly Borel subset B of O, hence we have E^x{m(J_B)} > 0. Since E^x{m(J_A)} = 0 we have E^x{m(J_{B∩A^c})} > 0. Thus O ∩ A^c is not empty for each nonempty finely open O, and so A^c is finely dense. □

The following corollary is an illustration of the utility of fine concepts. It can be deduced from (11) of §3.2, but the language is more pleasant here.
Corollary. If two excessive functions agree except on a set of zero potential, then they are identical.

For they are both finely continuous and agree on a finely dense set. The assertion is therefore reduced to a familiar topological one.

From certain points of view a set of zero potential is not so "small" and scarcely "negligible". Of course, what is negligible depends on the context. For example, a set of Lebesgue measure zero can be ignored in integration but not in differentiation or in questions of continuity. We are going to define certain small sets which play great roles in potential theory.

Definition. A set A in ℰ• is called thin iff A^r = ∅; namely iff it is thin at
every point of E_∂. A set is semipolar iff it is the union of countably many
thin sets. A set A in ℰ• is called polar iff

∀x ∈ E_∂: P^x{T_A < ∞} = 0.  (9)

The last condition is equivalent to: E^x{e^{−αT_A}} = P_A^α 1(x) ≡ 0 for each α > 0.
For comparison, A is thin if and only if

∀x ∈ E_∂: P^x{T_A = 0} = 0.  (10)

It is possible to extend the preceding definitions to all subsets of E_∂,
without requiring them to be nearly Borel. For instance, a set is polar iff it
is contained in a nearly Borel set which satisfies (9). We shall not consider
such an extension.
The case of uniform motion on R¹ yields facile examples. Each singleton
{x₀} is thin but not polar. Next, let x_n ↓↓ x_∞ > −∞; then the set A = ⋃_{n=1}^∞ {x_n}
is semipolar but not thin. For a later discussion we note also that each
compact subset of A is thin.
From the point of view of the process, the smallness of a set A should
be reflected in the rarity of the incidence set J_A. We proceed to study the
relations between the space sets and time sets. For simplicity we write φ_A
for P_A^α 1 below for a fixed α which may be taken as 1. A set A will be called
very thin iff A ∈ ℰ• and

sup_{x∈A} φ_A(x) < 1.  (11)

Note that (11) does not imply that sup_{x∈E} φ_A(x) < 1. An example is furnished
by any singleton {x₀} in the uniform motion, since φ_{{x₀}}(x₀) = 0 and
lim_{x↑↑x₀} φ_{{x₀}}(x) = 1.
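For uniform motion the quantities in this example are explicit, so the claim about (11) can be verified directly. The sketch below is an illustration (not from the text), using the fact that under X_t = x + t the hitting time of {x₀} is x₀ − x for x < x₀ and +∞ otherwise:

```python
import numpy as np

def phi_singleton(x, x0):
    """phi_A(x) = E^x[exp(-T_A)] for A = {x0} under uniform motion X_t = x + t:
    the hitting time T_A equals x0 - x when x < x0 and is +infinity otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(x < x0, np.exp(-(x0 - x)), 0.0)

x0 = 0.0
print(float(phi_singleton(x0, x0)))       # sup over A = {x0} is 0 < 1: A is very thin
print(float(phi_singleton(-1e-6, x0)))    # but sup over E is 1: phi -> 1 as x increases to x0
```

Thus the sup in (11) taken over A is 0, while the sup over all of E equals 1, exactly as asserted.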

Proposition 5. If A is very thin, then it is thin. Furthermore almost surely J_A
is a discrete set (namely, finite in each finite time interval).

Proof. Recall that φ_A is α-excessive; hence by Theorem 5 of §3.4, the sup in
(11) is the same if it is taken over A* instead of A. But if x ∈ A^r, then φ_A(x) = 1.
Hence under (11) A^r = ∅ and A is thin.
Next, denote the sup in (11) by η so that η < 1. Define T₁ = T_A and for
n ≥ 1: T_{n+1} = T_n + T_A ∘ θ_{T_n}. Thus {T_n, n ≥ 1} are the successive hitting
times of A. These are actually the successive entrance times into A, because
X(T_n) ∈ A* = A here. The adjective "successive" is justified as follows. On
{T_n < ∞}, X(T_n) is a point which is not regular for A; hence by the strong
Markov property we must have T_A ∘ θ_{T_n} > 0, namely T_{n+1} > T_n. Now we
have, taking α = 1 in φ_A:

E{e^{−T_{n+1}}} = E{e^{−T_n} φ_A(X(T_n)); T_n < ∞} ≤ η E{e^{−T_n}}.

It follows that E{e^{−T_n}} ≤ η^{n−1} → 0 as n → ∞. Let T_n ↑ T_∞. Then E{e^{−T_∞}} = 0;
consequently P{T_∞ = ∞} = 1. Therefore, we have almost surely

J_A = ⋃_{n≥1} {T_n} 1_{{T_n<∞}};  (12)

since T_n ↑ ∞, this is a discrete set in [0, ∞). □


Theorem 6. Each semipolar set is the union of countably many very thin sets.
For each A ∈ ℰ•, A∖A^r is semipolar.

Proof. For any A ∈ ℰ•, we define for n ≥ 1:

A^{(n)} = {x ∈ E_∂: φ_A(x) ≤ 1 − 1/n}.  (13)

Then we have

A = (A ∩ A^r) ∪ ⋃_{n=1}^∞ (A ∩ A^{(n)}).  (14)

For each n, it is obvious that

sup_{x∈A∩A^{(n)}} φ_{A∩A^{(n)}}(x) ≤ 1 − 1/n < 1.

Hence each A ∩ A^{(n)} is a very thin set. If A is thin, then (14) exhibits A as a
countable union of very thin sets. Therefore each semipolar set by its de-
finition is also such a union. Finally, (14) also shows that A − (A ∩ A^r) is
such a union, hence semipolar. □

Corollary. If A is semipolar then there is a countable collection of optional
times {T_n} such that (12) holds, where each T_n > 0, a.s.

This follows at once from the first assertion of the theorem and (12). Note
however that the collection {T_n} is not necessarily well-ordered (by the
increasing order) as in the case of a very thin set A.
We summarize the hierarchy of small sets as follows.

Proposition 7. A polar set is very thin; a very thin set is thin; a thin set is
semipolar; a semipolar set is of zero potential.

Proof. The first three implications are immediate from the definitions. If A is
semipolar, then J_A is countable and so m(J_A) = 0 almost surely by Theorem 6.
Hence U(·, A) ≡ 0 by (8). This proves the last implication. □

For Brownian motion in R¹, each Borel set of measure zero is of zero
potential, and vice versa. Each point x is regular for {x}, hence the only thin
or semipolar set is the empty set. In particular, it is trivial that semipolar and
polar are equivalent notions in this case. The last statement is also true for
Brownian motion in any dimension, but it is a deep result tantamount to
Kellogg-Evans's theorem in classical potential theory, see §4.5 and §5.2.

Let us ponder a little more on a thin set. Such a set is finely separated in
the sense that each of its points has a fine neighborhood containing no other
point of the set. If the fine topology has a countable base (namely, satisfies
the second axiom of countability), such a set is necessarily countable. In
general the fine topology does not even satisfy the first axiom of countability,
so a finely separated set may not be so sparse. Nevertheless Theorem 6
asserts that almost every path meets the set only countably often, and so
only on a countable subset (depending on the path). The question arises if
the converse is also true. Namely, if A is a nearly Borel set such that almost
every path meets it only countably often, or more generally only on a count-
able subset, must A be semipolar? Another related question is: if every
compact subset of A is semipolar, must A be semipolar? [Observe that if
every compact subset of A is polar, then A is polar by Theorem 8 of §3.3.]
These questions are deep but have been answered in the affirmative by
Dellacherie under an additional hypothesis which is widely used, as follows.

Hypothesis (L). There exists a measure ξ₀ on ℰ•, which is the sum of countably
many finite measures, such that if ξ₀(A) = 0 then A is of zero potential. In
other words, U(x, ·) ≪ ξ₀ for all x.

It follows easily that if we put ξ = ξ₀U^α for any α > 0, then A is of potential
zero if and only if ξ(A) = 0. Hence under Hypothesis (L) such a measure exists.
It will be called a reference measure and denoted by ξ below. For example,
for Brownian motion in any dimension, the corresponding Borel-Lebesgue
measure is a reference measure. It is trivial that there exists a probability
measure which is equivalent to ξ. We can then use P^ξ with ξ as the initial
distribution.

We now state without proof one of Dellacherie's results; see Dellacherie [1].

Theorem 8. Assume Hypothesis (L). Let A ∈ ℰ• and suppose that almost surely
the set J_A in (6) is countable. Then A is semipolar.

The expediency of Hypothesis (L) will now be illustrated.

Proposition 9. If f₁ and f₂ are two excessive functions such that f₁ ≤ f₂ ξ-a.e.,
then f₁ ≤ f₂ everywhere. In particular the result is true with equalities re-
placing the inequalities. If f is excessive and ∫_E f dξ = 0, then f ≡ 0.

This is just the Corollary to Proposition 4 with a facile extension.

Proposition 10. Let A ∈ ℰ•. Under (L) there exists a sequence of compact sub-
sets K_n of A such that K_n ↑ and for each x:

P^x{lim_n T_{K_n} = T_A} = 1.  (15)

Proof. This is an improvement of Theorem 8 of §3.3 in that the sequence {K_n}
does not depend on x. Write ξ(f) for ∫_{E_∂} f dξ below; and put for a fixed
α > 0:

c = sup_{K⊂A} ξ(P_K^α 1).

There exists K_n ⊂ A, K_n ↑ such that

c = lim_n ξ(P_{K_n}^α 1).

Let f = lim_n ↑ P_{K_n}^α 1; then c = ξ(f) by monotone convergence. For any
compact K ⊂ A, let g = lim_n ↑ P_{K_n∪K}^α 1. Both f and g are α-excessive and
g ≥ f by Theorem 4 of §3.4 extended to S^α. But ξ(g) ≤ c = ξ(f), hence g = f
by Proposition 9 (extended to S^α). Thus P_K^α 1 ≤ f for each compact K ⊂ A,
and together with the definition of f we conclude that

∀x: f(x) = sup_{K⊂A} P_K^α 1(x) ≤ P_A^α 1(x).  (16)

On the other hand, for each x it follows from Theorem 4 of §3.4 that there
exists a sequence of compact subsets L_n(x) such that

lim_n P_{L_n(x)}^α 1(x) = P_A^α 1(x).

Therefore (16) also holds with the inequality reversed, hence it holds with
the inequality strengthened to an equality. Recalling the definition of f we
have proved that lim_n P_{K_n}^α 1 = P_A^α 1. Let T_{K_n} ↓ S ≥ T_A. We have E^x{e^{−αS}} =
E^x{e^{−αT_A}}, hence P^x{S = T_A} = 1, which is (15). □

The next curious result is a lemma in the proof of Dellacherie's Theorem 8.

Proposition 11. Let A be a semipolar set. There exists a finite measure ν on ℰ•
such that if B ∈ ℰ• and B ⊂ A, then B is polar if and only if ν(B) = 0.

Proof. Using the Corollary to Theorem 6, we define ν as follows:

ν(·) = Σ_{n=1}^∞ 2^{−n} P^ξ{X(T_n) ∈ ·; T_n < ∞}.  (17)

If B ∈ ℰ• and ν(B) = 0, then

∀n: P^ξ{X(T_n) ∈ B; T_n < ∞} = 0.

If B ⊂ A, then J_B ⊂ J_A = ⋃_n {T_n} 1_{{T_n<∞}} by the definition of {T_n}, hence
the above implies that

ξ(P_B 1) = P^ξ{T_B < ∞} = 0.

Hence B is polar by Proposition 9 applied to f = P_B 1. The converse is
trivial. □

The following analogous result, without Hypothesis (L), is of interest.

Proposition 12. Let A be a semipolar set. Then for each x, there exists F =
⋃_{n=1}^∞ K_n where the K_n's are compact subsets of A, such that P^x-a.s. we have
T_{A−F} = ∞.

Proof. Replace ξ by x in (17) and call the resulting measure ν^x. Since this is
a finite measure on a locally compact separable metric space, it is regular.
Hence

ν^x(A) = sup_{K⊂A} ν^x(K)

where K ranges over compact sets. Let K_n be a sequence of compact subsets
of A such that ν^x(K_n) increases to ν^x(A). Then its union F has the property
that ν^x(A − F) = 0. The argument in the proof of Proposition 11 then shows
that J_{A−F} is empty P^x-almost surely. □

Exercises
1. Prove that ℰ• is a σ-field and that ℰ ⊂ ℰ•.
2. Prove that if A is finely dense, then A^r = E_∂. Furthermore for almost
every ω, the set J_A(ω) is dense in T. [Hint: consider E^x{e^{−T_A}}.]
3. Suppose that Hypothesis (L) holds. Then for α ≥ 0, each α-excessive
function is Borelian. If A ∈ ℰ, then A^r ∈ ℰ. For each t ≥ 0, x → P^x{T_B ≤ t}
is Borelian. [Hint: the last assertion follows from the first through the
Stone-Weierstrass theorem; this is due to Getoor.]
4. Suppose that for some α > 0, all α-excessive functions are lower semi-
continuous. Then Hypothesis (L) holds. [Hint: consider λU^α where λ =
Σ_n 2^{−n} ε_{x_n} and {x_n} is a dense set in E.]
5. Let A ⊂ E and A be a fine neighborhood of x. Then for α > 0 there exists
an α-excessive function φ and a compact set K such that {φ < 1} ⊂ K ⊂
A, and {φ < 1} is a fine neighborhood of x. [Hint: let Y be open with
compact closure, B = A^c ∪ Y^c. By Theorem 8(a) of §3.3, there exists
open G ⊃ B such that φ(x) = E^x{e^{−αT_G}} < 1. Take K = G^c.]
6. Fix α > 0. If 𝒯 is a topology on E_∂ which renders all α-excessive functions
continuous, then 𝒯 is a finer topology than the fine topology, namely
each finely open set is open in 𝒯. Thus, the fine topology is the least fine
topology rendering all α-excessive functions continuous. [Hint: consider
the φ in Problem 5.]
7. If f is bounded and finely continuous, then for each β ≥ 0, lim_{α→∞}
αU^{α+β}f = f. As a consequence, if the function f in Problem 4 of §3.2 is
finely continuous, then it is excessive.
8. Let A ∈ ℰ• and x ∈ A^r and consider the following statement: "for almost
every ω, X(t, ω) = x implies that there exists t_n ↓↓ t such that X(t_n, ω) ∈
A." This statement is true for a fixed t (namely when t is a constant
independent of ω), or when t = T(ω) where T is optional; but it is false
when t is generic (namely, when t is regarded as the running variable in
t → X(t, ω)). [Hint: take x = 0, A = {0} in the Brownian motion in R¹.
See §4.2 for a proof that 0 ∈ {0}^r.]
9. Let A be a finely open set. Then for almost every ω, X(t, ω) ∈ A implies
that there exists δ(ω) > 0 such that X(u, ω) ∈ A for all u ∈ [t, t + δ(ω)).
Here t is generic. The contrast with Problem 8 is interesting. [Hint:
suppose A ∈ ℰ• and let φ(x) = E^x{e^{−T_{A^c}}}. There exists δ'(ω) ≤ 1 such
that φ(X(u, ω)) ≤ λ < 1 for u ∈ [t, t + δ'(ω)). The set B = A^c ∩ {φ ≤ λ}
is very thin. By Proposition 5 of §3.5, J_B(ω) ∩ [t, t + δ'(ω)) is a finite
set. Take δ(ω) to be the minimum element of this set.]
10. If A is finely closed, then almost surely J_A(ω) is closed from the right,
namely, t_n ↓↓ t and t_n ∈ J_A(ω) implies t ∈ J_A(ω), for a generic t.

3.6. Decreasing Limits

The limit of an increasing sequence of excessive functions is excessive, as
proved in Proposition 2 of §3.2. What can one say about a decreasing se-
quence? The following theorem is due to H. Cartan in the Newtonian case
and is actually valid for any convergent sequence of excessive functions. It
was proved by a martingale method by Doob, see Meyer [2]. The proof
given below is simpler, see Chung [4].

Theorem 1. Let f_n ∈ S and lim f_n = f. Then f is superaveraging and the set
{f > f̂} is semipolar. In case f̂ < ∞ everywhere, then

{f > f̂ + ε} is thin  (1)

for every ε > 0.

Proof. For each t ≥ 0, f_n ≥ P_t f_n; letting n → ∞ we obtain f ≥ P_t f by Fatou's
lemma. Hence f is superaveraging. For each compact K, f_n ≥ P_K f_n by
Theorem 4 of §3.4; letting n → ∞ we obtain f ≥ P_K f as before. Now let A
denote the set in (1) and let K be a compact subset of A; then by the Corollary
to Theorem 2 of §3.4 we have f > f̂ + ε on the support of P_K(x, ·) for every x.
Therefore, since f̂ < ∞ on A,

f ≥ P_K f ≥ P_K(f̂ + ε) = P_K f̂ + ε P_K 1

and consequently

P_t f ≥ P_t P_K f̂ + ε P_t P_K 1.  (2)

Both P_K f̂ and P_K 1 are excessive by Theorem 4 of §3.4. Letting t ↓ 0 in (2),
we obtain

f̂ ≥ P_K f̂ + ε P_K 1.  (3)

For a fixed x there exists a sequence of compact subsets K'_n of A such that
P_{K'_n} f̂(x) ↑ P_A f̂(x) and another sequence K''_n such that P_{K''_n} 1(x) ↑ P_A 1(x), by (12)
of §3.4. Taking K_n = K'_n ∪ K''_n we see that

P_{K_n} f̂(x) → P_A f̂(x)  and  P_{K_n} 1(x) → P_A 1(x).

Using this sequence of K_n in (3) we obtain

f̂ ≥ P_A f̂ + ε P_A 1.  (4)

If f̂ < ∞ everywhere this relation implies that A^r = ∅; for otherwise if
x ∈ A^r it would read f̂(x) ≥ f̂(x) + ε which is impossible. Hence in this case

A is thin. Letting ε ↓ 0 through a sequence we conclude that {f > f̂} is
semipolar.
In the general case, let f_n^{(m)} = f_n ∧ m, f^{(m)} = f ∧ m. We have just shown
that {f^{(m)} > f̂^{(m)} + ε} is thin for each ε > 0. Notice that f^{(m)} is superaver-
aging and f̂ ∧ m ≥ f̂^{(m)}. It follows that

{f > f̂ + ε} = ⋃_{m=1}^∞ {f ∧ m > (f̂ ∧ m) + ε} ⊂ ⋃_{m=1}^∞ {f^{(m)} > f̂^{(m)} + ε}

and therefore {f > f̂ + ε} is semipolar. Hence so is {f > f̂}. □

Corollary. If the limit function f above vanishes except on a set of zero potential,
then it vanishes except on a polar set.

Proof. Let ε > 0, A = {f ≥ ε} and K be a compact subset of A. Then we
have by the Corollary to Theorem 2 of §3.4:

ε P_K 1 ≤ P_K f ≤ f.  (5)

Hence P_K 1 vanishes except on a set of zero potential, and so it vanishes on a
finely dense set by Proposition 4 of §3.5. But P_K 1 being excessive is finely
continuous, hence it vanishes identically by Corollary to Proposition 4 of
§3.5. Thus K is polar. This being true for each compact K ⊂ A, A is polar by
a previous remark. □

The corollary has a facile generalization which will be stated below to-
gether with an analogue. Observe that the condition (6) below holds for an
excessive f.

Proposition 2. Let f ∈ ℰ•₊ and suppose that we have for each compact K:

P_K f ≤ f.  (6)

If {f ≠ 0} is of zero potential, then it is polar. If {f = ∞} is of zero potential,
then it is polar.

Proof. The first assertion was proved above. Let K ⊂ {f = ∞}; then we have
as in (5), for every n ≥ 1:

n P_K 1 ≤ P_K f ≤ f.

Hence P_K 1 vanishes on {f < ∞}. The rest goes as before. □



An alternative proof of the second assertion in Proposition 2, for an
excessive function, may be gleaned from Theorem 7 of §3.4. For according to
that theorem, almost surely the set {t: f(X(t)) = ∞} is either empty or of the
form [0, D_F) or [0, D_F], where D_F is the first entrance time of {f < ∞}. If
{f = ∞} is of zero potential then {f < ∞} is finely dense by Proposition 4
of §3.5, which implies that D_F = 0 because {t: f(X(t)) = ∞} cannot contain
any nonempty interval (see Exercise 2 of §3.5).
A cute illustration of the power of these general theorems is given below.

EXAMPLE. Consider Brownian motion in R². We have P_t(x, dy) = p_t(x − y) dy
where

p_t(x) = (1/(2πt)) exp(−‖x‖²/(2t)).

The well-known convolution property yields

p_s * p_t = p_{s+t}.

Define for α > 0:

f(x) = ∫₀^∞ e^{−αt} p_t(x) dt;

this is just u^α(x) in (10) of §3.7 below. For each s ≥ 0, we have

e^{−αs} P_s f(x) = ∫_s^∞ e^{−αt} p_t(x) dt ≤ f(x),

from which it follows that f is α-excessive. It is obvious from the form of
p_t(x) that f(x) is finite for x ≠ 0, and f(0) = +∞. Thus {f = ∞} = {0}. Since
P_t(·, {0}) = 0 for each t, {0} is of zero potential. By the second assertion of
Proposition 2 applied to the semigroup (e^{−αt}P_t), we conclude that {0} is a
polar set. Therefore every singleton is polar for Brownian motion in R², and
consequently also in R^d, d ≥ 2, by considering the projection on R². We shall
mention this striking fact later on more than one occasion.
For α = 0 we have f ≡ ∞ in the above. It may be asked whether we can
find an excessive function to serve in the preceding example. The answer will
be given in Theorem 1 and Example 2 of the next section.
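As a numerical illustration (not part of the text; the quadrature grid is an arbitrary choice), one can evaluate f(x) = ∫₀^∞ e^{−t} p_t(x) dt in the planar case and watch it stay finite for x ≠ 0 while growing without bound as |x| → 0:

```python
import numpy as np

def f_alpha1(r, n=200_000):
    """f(x) = ∫_0^∞ e^{-t} (2πt)^{-1} exp(-r²/(2t)) dt for planar Brownian
    motion, |x| = r, by the trapezoidal rule on a log-spaced grid.
    (The integrand is negligible past t = 100 because of the e^{-t} factor.)"""
    t = np.logspace(-12, 2, n)
    y = np.exp(-t) / (2 * np.pi * t) * np.exp(-r**2 / (2 * t))
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

vals = [f_alpha1(r) for r in (1.0, 1e-1, 1e-2, 1e-3)]
print(vals)   # strictly increasing as r decreases; f(0) = +infinity
```

The growth is logarithmic in 1/r, consistent with f(0) = +∞ and {f = ∞} = {0}.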

The next theorem deals with a more special situation than Theorem 1. It
is an important part of the Riesz representation theorem for an excessive
function (see Blumenthal and Getoor [1], p. 84).

Theorem 3. Let B_n ∈ ℰ• and B_n ↓; let T_n = T_{B_n}. Let f ∈ S and g = lim_n ↓ P_{T_n} f.
Then g is superaveraging and g = ĝ on {g < ∞} ∩ (⋂_n B_n^r)^c.

Proof. We have by Fatou's lemma,

P_t g = P_t(lim_n P_{T_n} f) ≤ lim_n P_t P_{T_n} f ≤ lim_n P_{T_n} f = g,

since P_{T_n} f is excessive by Theorem 4 of §3.4. Thus g is superaveraging. If
g(x) < ∞, then there exists n₀ such that P_{T_n} f(x) < ∞ for n ≥ n₀. We have
by Proposition 1 of §3.4,

P_t P_{T_n} f(x) = E^x{f(X(t + T_n ∘ θ_t)); T_n ∘ θ_t < ∞}
≥ E^x{f(X(t + T_n ∘ θ_t)); t < T_n < ∞}.  (7)

But for any A ∈ ℰ•:

on {t < T_A}: t + T_A ∘ θ_t = T_A.  (8)

This relation says that the first hitting time on A after t is the first hitting
time of A after 0, provided that A has not yet been hit at time t. It follows
from (7) that

P_t P_{T_n} f(x) ≥ E^x{f(X(T_n)); t < T_n < ∞},

and consequently by subtraction:

P_{T_n} f(x) − P_t P_{T_n} f(x) ≤ E^x{f(X(T_n)); T_n ≤ t}.  (9)

Now under P^x, {f(X(T_n)), ℱ_{T_n}, n ≥ 1} is a supermartingale, and {T_n ≤ t} ∈
ℱ_{T_n}. Therefore, by the supermartingale inequality we have

E^x{f(X(T_n)); T_n ≤ t} ≥ E^x{f(X(T_{n+1})); T_n ≤ t}
≥ E^x{f(X(T_{n+1})); T_{n+1} ≤ t}.  (10)

Thus the right member of (9) decreases as n increases. Hence for each k ≥ n₀
we have for all n ≥ k:

P_{T_n} f(x) − P_t P_{T_n} f(x) ≤ E^x{f(X(T_k)); T_k ≤ t}.  (11)

If n → ∞, then P_t P_{T_n} f(x) → P_t g(x) by dominated convergence, because
P_{T_n} f(x) ≤ P_{T_{n₀}} f(x), and P_t P_{T_{n₀}} f(x) ≤ P_{T_{n₀}} f(x) < ∞. It follows from (11) that

g(x) − P_t g(x) ≤ E^x{f(X(T_k)); T_k ≤ t}.  (12)

If x ∉ B_k^r, then P^x{T_k > 0} = 1. Letting t ↓ 0 in (12) we obtain g(x) − ĝ(x) = 0.
This conclusion is therefore true for each x such that g(x) < ∞ and
x ∉ ⋂_{k=1}^∞ B_k^r, as asserted. □

Two cases of Theorem 3 are of particular interest.

Case 1. Let G_n be open, Ḡ_{n−1} ⊂ G_n and ⋃_n G_n = E. For each x ∈ E, there
exists n such that x ∈ G_{n−1} so that x ∉ (G_n^c)^r. Hence ⋂_n (G_n^c)^r = {∂}. If we put
f(∂) = 0 as by usual convention, then

lim_n P_{T_{G_n^c}} f = g = ĝ  on {f < ∞}.  (13)

We can also take K_n compact, K_{n−1} ⊂ K_n° and ⋃_n K_n = E. Then ⋂_n (K_n^c)^r =
{∂} and (13) is true when G_n is replaced by K_n.

Case 2. For a fixed x₀ ≠ ∂ let G_n be open, G_n ⊃ Ḡ_{n+1} and ⋂_n G_n = {x₀}.
For each x ≠ x₀, there is an n such that x ∉ Ḡ_n. Hence for any such x for
which f(x) < ∞, we have

lim_n P_{T_{G_n}} f(x) = g(x) = ĝ(x).

On closer examination the two cases are really the same!


In a sense the next result is an analogue of Theorem 3 when the optional
times T_n are replaced by the constant times n.

Theorem 4. Let f ∈ S and suppose that

h(x) = lim_{t→∞} ↓ P_t f(x) < ∞  (14)

for all x. Then h = P_t h for each t ≥ 0; and

f = h + f₀  (15)

where f₀ is excessive with the property that

lim_{t→∞} P_t f₀ ≡ 0.  (16)

Proof. For each x, the assumption (14) implies that there exists t₀ = t₀(x) such
that P_{t₀} f(x) < ∞. Since P_t f ≤ P_{t₀} f for t ≥ t₀, and P_s(P_{t₀} f)(x) ≤ P_{t₀} f(x) < ∞
for s ≥ 0, the following limit relation holds by dominated convergence, for
each s ≥ 0:

P_s h = P_s(lim_{t→∞} P_t f) = lim_{t→∞} P_s P_t f = lim_{t→∞} P_{s+t} f = h.  (17)

Put f₀ = f − h, so that f₀(x) = ∞ if f(x) = ∞. Then we have for t ≥ t₀:

P_t f₀ = P_t f − P_t h = P_t f − h,  (18)

and it follows that f₀ is excessive. Letting t → ∞ in (18) we obtain (16). □



This theorem gives a decomposition of an excessive function into an
"invariant" part h and a "purely excessive" part f₀ which satisfies the impor-
tant condition (6) in Theorem 6 of §3.2. It is an easy and rather emasculated
version of the Riesz decomposition in potential theory. The reader is welcome
to investigate what happens when the condition (14) is dropped.
A noteworthy instance of Theorem 4 is when f = P_A 1 where A ∈ ℰ•. In
this case we have

h(x) = lim_{t→∞} P_t P_A 1(x) = lim_{t→∞} P^x{t + T_A ∘ θ_t < ∞}.  (19)

Since t + T_A ∘ θ_t increases with t, we may write (19) as

h(x) = P^x{∀t ≥ 0: t + T_A ∘ θ_t < ∞}.  (20)

A little reflection shows that the set within the braces in (20) is exactly the
set of ω for which J_A(ω) is an unbounded subset of T, where J_A is defined in
(6) of §3.5.

Definition. The set A is recurrent iff the probability in (19) is equal to one
for every x ∈ E; and transient iff it is equal to zero for every x ∈ E. Of course
this is not a dichotomy in general, but for an important class of processes
it indeed is (see Exercise 6 of §4.1).

Clearly if A is recurrent then P_A 1 ≡ 1 (on E). The converse is true if and only
if (P_t) is strictly Markovian. The phenomenon of recurrence may be described
in a more vital way, as follows. For each A ∈ ℰ• we introduce a new function:

L_A(ω) = sup{t ≥ 0: X(t, ω) ∈ A}  (21)

where sup ∅ = 0 as by standard and meaningful convention. We call L_A the
"quitting time of A", or "the last exit time from A". Its contrast to the hitting
time T_A should be obvious. To see that L_A ∈ ℱ, note that for each ω ∈ Ω:

{L_A(ω) ≤ t} = {X(s, ω) ∉ A for s > t} = {t + T_A ∘ θ_t(ω) = ∞}.  (22)

It follows that

{L_A < ∞} = ⋃_{n=1}^∞ {T_A ∘ θ_n = ∞};  (23)

{L_A = ∞} = ⋂_{n=1}^∞ {T_A ∘ θ_n < ∞}.  (24)

Thus A is recurrent [transient] if and only if L_A = ∞ [< ∞] almost surely.
This may be used as an alternative definition.
This may be used as an alternative definition.

It is clear from the first equivalence in (22) that L_A is not optional; indeed
{L_A ≤ t} ∈ σ(X_s, s ∈ (t, ∞)), augmented. Such a random time has special
properties and is a kind of dual to a hitting time. We shall see much of it in
§5.1.

Exercises
1. Let f ∈ ℰ₊ and T = T_A where A ∈ ℰ•. Suppose that for each x and t > 0,
we have

f(x) ≥ E^x{f(X_t) 1_{{t<T}}}.

Show that if f(x) < ∞, then {f(X_t) 1_{{t<T}}, ℱ_t, P^x} is a supermartingale.
2. In the notation of Case 1 of Theorem 3, show that for each compact
K ⊂ E, we have g = P_{K^c} g on {g < ∞}.
3. Under Hypothesis (L), let F be a class of excessive functions. Then there
exists a decreasing sequence {f_n} in F such that if u = lim_n ↓ f_n we have
u ≥ inf_{f∈F} f ≥ û. [Hint: choose f_n decreasing and lim_n ξ(f_n/(1 + f_n)) =
inf_{f∈F} ξ(f/(1 + f)).]

3.7. Recurrence and Transience

A Hunt process will be called recurrent iff any, hence every, one of the equi-
valent conditions in the following theorem is satisfied. We use the convention
below that an unspecified point, set, function is of, in, on E, not E_∂. Thus
a function is said to be a constant if it is a constant on E; its value at ∂, when
defined, is not at issue. Nevertheless, the proof of Theorem 1 is complicated
by the fact that it is not obvious that assertion (1) holds prima facie under
each of the conditions (i) to (iv) there. The additional care needed to avoid
being trapped by ∂ is well worth the pain from the methodological point of
view. Recall that ζ = T_{{∂}}.

Theorem 1. The following four propositions are equivalent for a Hunt process,
provided that E contains more than one point.
(i) Each excessive function is a constant.
(ii) For each f ∈ ℰ₊, either Uf ≡ 0 or Uf ≡ ∞.
(iii) For each B in ℰ• which is not thin, we have P_B 1 ≡ 1.
(iv) For each B in ℰ•, either P_B 1 ≡ 0 or P_B 1 ≡ 1. Namely, each nearly
Borel set is either polar or recurrent as defined at the end of §3.6.
Moreover, any of these conditions implies the following:

∀x ∈ E: P^x{ζ = ∞} = 1.  (1)

In other words, (P_t) is a strictly Markovian semigroup on E.



Proof. (i) ⇒ (ii). Since Uf is excessive, it must be a constant, say c, by (i).
Suppose c ≠ ∞. We have for t ≥ 0:

P_t Uf(x) = E^x{∫_t^∞ f(X_s) ds}.  (2)

Thus ∫₀^∞ f(X_s) ds is integrable under E^x, hence finite P^x-almost surely. We
now prove that (i) implies (1), as follows.
Let x₁ and x₂ be two distinct points. Then there exist open sets O₁ and
O₂ with disjoint closures such that x₁ ∈ O₁, x₂ ∈ O₂. Both P_{O₁}1 and P_{O₂}1 are
excessive functions which take the value one, hence both must be identically
equal to 1 by (i). Put

T₁ = T_{O₁},

and for n ≥ 1:

T_{n+1} = T_n + T_{O₂} ∘ θ_{T_n} or T_n + T_{O₁} ∘ θ_{T_n}, according as n is odd or even.

Since X(T_{O₁}) ∈ Ō₁ ⊂ (Ō₂)^c, we have T₁ < T₂ on {T₁ < ∞}. The same argu-
ment then shows that the sequence {T_n} is strictly increasing. Let lim_n T_n =
T; then on {T < ∞}, X(T−) does not exist because X(T_n) oscillates between
Ō₁ and Ō₂ at a strictly positive distance apart. This is impossible for a
Hunt process. It follows that almost surely we have T = ∞, and this implies
(1) because ∂ is absorbing. Hence P_t c = c for all t.
We now have by (2) and dominated convergence:

c = lim_{t→∞} P_t c = lim_{t→∞} P_t Uf = 0.

Remark. It is instructive to see why (i) does not imply (ii) when E reduces
to one point.

(ii) ⇒ (iii). Fix B ∈ ℰ• and write φ for P_B 1. By Theorem 4 of §3.6, we have
φ = h + φ₀ where h is invariant and φ₀ is purely excessive. Hence by Theorem
6 of §3.2, there exists f_n ∈ ℰ₊ such that φ₀ = lim_n ↑ Uf_n. Since Uf_n ≤ φ₀ ≤ 1,
condition (ii) implies that Uf_n ≡ 0 and so φ₀ ≡ 0. Therefore φ = P_t φ for
every t ≥ 0. If B is not thin, let x₀ ∈ B^r so that φ(x₀) = 1. Put A = {x ∈ E:
φ(x) < 1}. We have for each t ≥ 0:

1 = φ(x₀) = P_t φ(x₀) = ∫_{E_∂} P_t(x₀, dy) φ(y).

Since P_t(x₀, E_∂) = 1, the equation above entails that P_t(x₀, A) = 0. This
being true for each t, we obtain U1_A(x₀) = 0. Hence condition (ii) implies
that U1_A ≡ 0. But A is finely open since φ is finely continuous. If x ∈ A, then
U1_A(x) > 0 by fine openness. Hence A must be empty and so φ ≡ 1.

(iii) ⇒ (iv). Under (iii), P_B 1 ≡ 1 for any nonempty open set B. Hence (1) holds
as under (i). If B is not a polar set, there exists x₀ such that P_B 1(x₀) = δ > 0.
Put

A = {x ∈ E: P_B 1(x) > δ/2};  (3)

then A is finely open. Since x₀ ∈ A, x₀ ∈ A^r by fine openness. Hence P_A 1 ≡ 1
by (iii). Now we have

P_B 1 ≥ P_A P_B 1 ≥ (δ/2) P_A 1 = δ/2,  (4)

where the first inequality is by Theorem 4 of §3.4, the second by Theorem 2
of §3.4, and the third by (3) and the fine continuity of P_B 1. Next, we have for
t ≥ 0, P^x-almost surely:

P^x{T_B < ∞ | ℱ_t} = P^x{T_B ≤ t | ℱ_t} + P^x{t < T_B < ∞ | ℱ_t}
= 1_{{T_B≤t}} + 1_{{t<T_B}} P^{X(t)}[T_B < ∞].  (5)

This is because {T_B ≤ t} ∈ ℱ_{t+} = ℱ_t; and T_B = t + T_B ∘ θ_t on {t < T_B} (see
(8) of §3.6), so that we have by the Markov property:

P^x{t < T_B < ∞ | ℱ_t} = P^x{t < T_B; t + T_B ∘ θ_t < ∞ | ℱ_t}
= 1_{{t<T_B}} P^{X(t)}[T_B < ∞].

As t → ∞, the first member in (5) as well as the first term in the third member
converges to 1_{{T_B<∞}}. Hence the second term in the third member must
converge to zero. Now we have X(t) ∈ E for t < ∞ and therefore P^{X(t)}[T_B <
∞] ≥ δ/2 by (4). It follows that lim_{t→∞} 1_{{t<T_B}} = 0, P^x-almost surely. This
is just P_B 1(x) = 1, and x is arbitrary.

(iv) ⇒ (i). Let f be excessive. If f is not a constant, then there are real numbers
a and b such that a < b, and the two sets A = {f < a}, B = {f > b} are both
nonempty. Let x ∈ A; then we have by Theorems 4 and 2 of §3.4:

a > f(x) ≥ P_B f(x) ≥ b P_B 1(x),  (6)

since f ≥ b on B* by fine continuity. Thus P_B 1(x) < 1. But B being finely
open and nonempty is not polar, hence P_B 1 ≡ 1 by (iv). This contradiction
proves that f is a constant. □

Finally, we have proved above that (i) implies (1), hence each of the other
three conditions also implies (1), which is equivalent to P_t(x, E) = 1 for
every t ≥ 0 and x ∈ E.

Remark. For the argument using (5) above, cf. Course, p. 344.

EXAMPLE 1. It is easy to construct Markov chains (Example 1 of §1.2) which
are recurrent Hunt processes. In particular this is the case if there are only
a finite number of states which communicate with each other, namely for
any two states i and j we have p_{ij} ≢ 0 and p_{ji} ≢ 0. Condition (ii) is reduced
to the following: for some (hence every) state i we have

∫₀^∞ p_{ii}(t) dt = ∞.  (7)

Since each i is regular for {i}, the only thin or polar set is the empty set.
Hence condition (iv) reduces to: P^i{T_{{j}} < ∞} = 1 for every i and j. Indeed,

P^i{L_{{j}} = ∞} = 1  (8)

in the notation of (21) of §3.6. For a general Markov chain (7) and (8) are
equivalent provided that all states communicate, even when the process is
not a Hunt process because it does not satisfy the hypotheses (i), (ii) and (iii);
see Chung [2].

EXAMPLE 2. For the Brownian motion in R^d, the transition probability
kernel P_t(x, dy) = p_t(x − y) dy where

p_t(z) = (1/(2πt)^{d/2}) exp(−‖z‖²/(2t)),  where ‖z‖² = Σ_{j=1}^d z_j².  (9)

For α ≥ 0, define

u^α(z) = ∫₀^∞ e^{−αt} p_t(z) dt.  (10)

Then the resolvent kernel U^α(x, dy) = u^α(x − y) dy.
For d = 1 or d = 2, we have u(z) ≡ ∞. Hence by the Fubini-Tonelli
theorem,

U(x, B) = ∫_B u(x − y) dy = 0 or ∞

according as m(B) = 0 or m(B) > 0. It follows that condition (ii) of Theorem
1 holds. Therefore, Brownian motion in R¹ or R² is a recurrent Hunt process.
Since each nonempty ball is not thin, the sample path will almost surely
enter into it and return to it after any given time, by condition (iv). Hence it
will almost surely enter into a sequence of concentric balls shrinking to a
given point, and yet it will not go through the point because a singleton is
a polar set. Such a phenomenon can happen only if the entrance times into
the successive balls increase to infinity almost surely, for otherwise at a
finite limit time the path must hit the center by continuity. These statements
can be made quantitative by a more detailed study of the process.
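The dichotomy u ≡ ∞ for d = 1, 2 versus u < ∞ for d ≥ 3 is driven by the t^{−d/2} decay of p_t(z) for large t. A numerical illustration (not from the text; the integration limits and grid are arbitrary choices):

```python
import numpy as np

def tail_mass(d, T, r=1.0, n=400_000):
    """∫ (2πt)^{-d/2} exp(-r²/(2t)) dt over [1e-6, T]: a truncation of
    u(z) = ∫_0^∞ p_t(z) dt at |z| = r, trapezoidal rule on a log grid."""
    t = np.logspace(-6, np.log10(T), n)
    y = (2 * np.pi * t) ** (-d / 2) * np.exp(-r**2 / (2 * t))
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(t)))

for d in (1, 2, 3):
    print(d, tail_mass(d, 1e2), tail_mass(d, 1e6))
```

As T grows the truncated integral diverges like √T for d = 1 and like log T for d = 2, while for d = 3 it stabilizes near u(z) = 1/(2π|z|).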

Note: Whereas under recurrence U1_B ≢ 0 implies P_B 1 ≡ 1, the converse is
not true in general. For Brownian motion in R², the boundary circle C of a
nonempty disk is recurrent by the discussion above, but U1_C ≡ 0 because
m(C) = 0.
Turning in the opposite direction, we call a Hunt process "transient" iff
its semigroup is transient as defined at the end of §3.2. However, there are
several variants of the condition (17) used there. It is equivalent to the con-
dition that E is "σ-transient", namely it is the union of a sequence of transient
sets. Hence the condition is satisfied if each compact subset of E is transient,
because E is σ-compact. However, to relate the topological notion of com-
pactness to the stochastic notion of transience some analytic hypothesis is
needed. Consider the following two conditions on the potentials (resolvents)
of a Hunt process:

∀f ∈ ℰ₊: Uf is lower semi-continuous;  (11)

∃α > 0 such that ∀f ∈ ℰ₊: U^α f is lower semi-continuous.  (12)

Observe that there is no loss of generality if we use only bounded f, or
bounded f with compact supports in (11) and (12), because lower semi-
continuity is preserved by increasing limits. It is easy to see that (12) implies
(11) by the resolvent equation.
Recall that condition (16) of §3.2 holds for a Hunt process and will be
used below tacitly. Also, the letter K will be reserved for a compact set.

Theorem 2. In general, either (ii) or (iv) below implies (iii). Under the condition
(11), (iii) implies (iv). Under the condition (12), (iv) implies (i).

(i) ∀K: U1_K is bounded.
(ii) ∀K: U1_K is finite everywhere.
(iii) ∃h ∈ ℰ₊ such that 0 < Uh < ∞ on E.
(iv) ∀K: lim_{t→∞} P_t P_K 1 = 0.  (13)

Proof. (ii) = (iii) under (11). Let K n be compact and increase to E, and pul

The function UI Ank is bounded above by k on the support of I An"' hence


by the domination principle (Corollary to Theorem 4 of §3.4), we have

(14)
3.7. Recurrence and Transience 127

everywhere in E. Define h as follows:

CIJ 00 1
h= L I k2~n;n.
n=l k=l
(15)

Clearly h ::; 1. For each x, there exist n and k such that XE A nk by assump-
tion (ii). Hence h > 0 everywhere and so also Uh > O. It follows from (14)
that Uh ::; 1. Thus h has all the properties (and more!) asserted in (ii).
(iii) ~ (iv) under (11). Since Uh is finite, we have

(16)

as in (2). Since Uh > 0 and Uh is lower semi-continuous by (11), for each


K we have infxEK Uh(x) = C(K) > o. Hence by the corollary to Theorem 2
of §3.4 we have

Uh ) 1 1 (17)
PKI ::; PK ( C(K) = C(K) PKUh ::; C(K) Uh.

The assertion (13) follows from (16) and (17).


(iv) ⇒ (iii), and under (12), (iv) ⇒ (i). Let K_n ⊂ K°_{n+1} (the interior) and
⋃_n K_n = E. Applying Theorem 6 of §3.2 to each P_{K_n}1, we have

P_{K_n}1 = lim_k ↑ U g_{nk}, (18)

where g_{nk} ≤ k by the proof of Theorem 6 of §3.2. It follows by Lemma 1
of §3.2 that

1 = lim_n ↑ P_{K_n}1 = lim_n ↑ U g_{nn}. (19)

Define g as follows:

g = Σ_{n=1}^∞ g_{nn}/(n 2^n).

Then g ≤ 1 and 0 < Ug ≤ 1 by (19). Now put h = U^α g; then h ≤ Ug ≤ 1.
It follows from the resolvent equation that

Uh = U U^α g ≤ (1/α) Ug ≤ 1/α. (20)

The fact that for each x we have

Ug(x) = ∫_0^∞ P_t g(x) dt > 0

implies that

Uh(x) = U U^α g(x) = (1/α) ∫_0^∞ (1 − e^{−αt}) P_t g(x) dt > 0.

Hence Uh > 0 and h satisfies the conditions in (iii). Finally, under (12)
h is lower semi-continuous. Since h > 0, for each K we have inf_{x∈K} h(x) =
C(K) > 0; hence by (20):

U1_K ≤ Uh/C(K) ≤ 1/(α C(K)),

namely (i) is true. Theorem 2 is completely proved. □
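The resolvent identity U = U^α + α U U^α behind (20) can be checked concretely on a finite state space, where U = (−Q)^{−1} and U^α = (αI − Q)^{−1} for the generator matrix Q of a killed (hence transient) chain. The 2 × 2 generator below is a made-up illustration of ours, not an object from the text; a minimal sketch:

```python
# Sketch: check U = U^a + a*U*U^a, the resolvent identity behind (20),
# on a two-state killed chain; the generator Q is a made-up example.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

Q = [[-2.0, 1.0], [0.0, -1.0]]                # row sums < 0: killing, so transient
U = inv2([[-q for q in row] for row in Q])    # U = (-Q)^{-1}, the 0-potential
alpha = 1.0
Ua = inv2([[alpha * (i == j) - Q[i][j] for j in range(2)] for i in range(2)])
UUa = matmul(U, Ua)
for i in range(2):
    for j in range(2):
        assert abs(U[i][j] - (Ua[i][j] + alpha * UUa[i][j])) < 1e-12
```

The same identity is what turns a bound on U^α g into the bound (20) on Uh in the proof above.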
Let us recall that (iv) is equivalent to "every compact subset of E is
transient", i.e.,

∀K: L_K < ∞ almost surely. (21)

In this form the notion of transience is probabilistically most vivid.

EXAMPLE 3. For a Markov chain as discussed in Example 1 above, in which
all states communicate, transience is equivalent to the condition ∫_0^∞ p_ii(t) dt <
∞ for some (hence every) state. A simple case of this is the Poisson process
(Example 3 of §1.2).

EXAMPLE 4. Brownian motion in R^d, when d ≥ 3, is transient. A simple
computation using the gamma function yields

U(x, dy) = (Γ(d/2 − 1)/(2π^{d/2})) |x − y|^{2−d} dy; (22)

in particular for d = 3:

U(x, dy) = dy/(2π|x − y|). (23)
If B is a ball with center at the origin and radius r, we have by calculus:

U(o, B) = (1/2π) ∫_B dy/|y| = r².

Let f be bounded (Lebesgue) measurable with compact support K; then

Uf(x) = (1/2π) ∫_K f(y)/|x − y| dy
      = (1/2π) ∫_{K∩B} f(y)/|x − y| dy + (1/2π) ∫_{K∖B} f(y)/|x − y| dy, (24)

where B is the ball with center x and radius δ. The first term in the last
member of (24) is bounded uniformly in x by δ²||f||, by the computation above.
The second term there is continuous at x by bounded convergence. It is
also bounded by δ^{−1}||f|| m(K). Thus Uf is bounded continuous. In particular
if f = 1_K, this implies condition (i) of Theorem 2. The result also implies
condition (11) as remarked before. Hence all the conditions in Theorem 2
are satisfied. The case d > 3 is similar.
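The gamma-function computation behind (22) can be checked numerically: the potential density of Brownian motion is the time integral of the heat kernel, ∫_0^∞ (2πt)^{−d/2} e^{−r²/(2t)} dt = Γ(d/2 − 1) r^{2−d}/(2π^{d/2}) for d ≥ 3. The quadrature below (substitution t = e^v and the trapezoidal rule) is our own illustrative check, not part of the text.

```python
import math

def green_numeric(d, r, lo=-40.0, hi=40.0, n=4000):
    # int_0^inf (2*pi*t)^(-d/2) * exp(-r^2/(2t)) dt, via t = e^v, trapezoidal rule
    h = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        t = math.exp(lo + k * h)
        g = (2 * math.pi * t) ** (-d / 2) * math.exp(-r * r / (2 * t)) * t
        total += g if 0 < k < n else g / 2
    return total * h

def green_exact(d, r):
    # formula (22): Gamma(d/2 - 1) / (2 * pi^{d/2}) * r^{2-d}
    return math.gamma(d / 2 - 1) / (2 * math.pi ** (d / 2)) * r ** (2 - d)

for d in (3, 4, 5):
    for r in (0.5, 1.0, 2.0):
        assert abs(green_numeric(d, r) / green_exact(d, r) - 1) < 1e-3
```

For d = 3 and r = 1 the exact value is 1/(2π), which is formula (23).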
We can also compute for d = 3 and α ≥ 0:

u^α(z) = e^{−√(2α)|z|}/(2π|z|) (25)

by using the following formula, valid for any real c ≠ 0:

∫_0^∞ t^{−3/2} exp(−c²/(2t) − αt) dt = (√(2π)/|c|) e^{−|c|√(2α)}.

Hence for any bounded measurable f, we have

U^α f(x) = ∫_{R³} (e^{−√(2α)|x−y|}/(2π|x − y|)) f(y) dy. (26)

For α > 0, an argument similar to that indicated above shows that U^α f
belongs to C_0. This is stronger than the condition (12). For d ≥ 3, α ≥ 0,
u^α can be expressed in terms of Bessel functions. For odd values of d, these
reduce to elementary functions.
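Formula (25) can likewise be verified numerically, since u^α is the e^{−αt}-discounted time integral of the 3-dimensional heat kernel. The quadrature scheme and parameter values below are an illustrative sketch of ours, not from the text.

```python
import math

def ua_numeric(alpha, r, lo=-40.0, hi=15.0, n=4000):
    # int_0^inf (2*pi*t)^(-3/2) * exp(-r^2/(2t) - alpha*t) dt, via t = e^v
    h = (hi - lo) / n
    s = 0.0
    for k in range(n + 1):
        t = math.exp(lo + k * h)
        g = (2 * math.pi * t) ** -1.5 * math.exp(-r * r / (2 * t) - alpha * t) * t
        s += g if 0 < k < n else g / 2
    return s * h

for alpha in (0.5, 1.0, 2.0):
    for r in (0.3, 1.0, 2.5):
        exact = math.exp(-r * math.sqrt(2 * alpha)) / (2 * math.pi * r)  # (25)
        assert abs(ua_numeric(alpha, r) / exact - 1) < 1e-3
```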

Let us take this occasion to introduce a new property for the semigroup
(P_t) which is often applicable. It will be said to have the "strong Feller
property" iff for each t > 0 and f ∈ bℰ+, we have P_t f ∈ C, namely continuous.
[The reader is warned that variants of this definition are used by other
authors.] This condition is satisfied by the Brownian motion in any dimension.
To see this, we write

P_t f(x) = ∫_B p_t(x − y)f(y) dy + ∫_{R^d∖B} p_t(x − y)f(y) dy. (27)

For any x_0 and a neighborhood V of x_0, we can choose B to be a large ball
containing V to make the second term in (27) less than ε for all x in V. The
first term is continuous in x by dominated convergence. Hence P_t f is continuous
at x_0.
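For d = 1 the smoothing in (27) can be made explicit: P_t 1_{[0,1]}(x) has the closed form Φ((1 − x)/√t) − Φ(−x/√t), a continuous function of x even though 1_{[0,1]} is not continuous. The check below, comparing a direct quadrature of the heat kernel with this closed form, is our own illustration with arbitrary parameter values.

```python
import math

def Phi(z):
    # standard normal distribution function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def pt_indicator(x, t, n=20000):
    # P_t 1_[0,1](x): trapezoidal quadrature of the heat kernel over y in [0, 1]
    dy = 1.0 / n
    s = 0.0
    for k in range(n + 1):
        y = k * dy
        g = math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)
        s += g if 0 < k < n else g / 2
    return s * dy

t = 0.1
for x in (-0.5, 0.0, 0.3, 1.2):
    exact = Phi((1 - x) / math.sqrt(t)) - Phi(-x / math.sqrt(t))
    assert abs(pt_indicator(x, t) - exact) < 1e-6
# P_t smooths the discontinuity of 1_[0,1] at 0:
assert abs(pt_indicator(0.001, t) - pt_indicator(-0.001, t)) < 0.01
```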
We can now show that U^α f ∈ C for α > 0 and f ∈ bℰ+, without explicit
computation of the kernel u^α in (26). Write

U^α f(x) = ∫_0^∞ e^{−αt} P_t f(x) dt

and observe that the integrand is dominated by ||f||e^{−αt}, and is continuous
in x for each t > 0. Hence U^α f(x_n) → U^α f(x) as x_n → x by dominated
convergence. Since Uf = lim_{α↓0} ↑ U^α f, it now follows that Uf is lower semi-
continuous, without the explicit form of u. This is a good example where
the semigroup (P_t) is used to advantage, one which classical potential
theory lacks. Of course, a Hunt process is a lot more than its semigroup.
Let us remark that for any (P_t), the semigroup (P_t^α) where P_t^α = e^{−αt}P_t
is clearly transient because U^α(x, E) ≤ 1/α. This simple device makes available
in a general context the results which are consequences of the hypothesis
of transience. Theorem 9 of §3.2 is such an example by comparison with
Proposition 10 there.

Exercises
1. For a Hunt process, not necessarily transient, suppose A ∈ ℰ, B ∈ ℰ such
that B ⊂ A and U(x_0, A) < ∞. Then if inf_{x∈B} U(x, A) > 0, we have
P^{x_0}{L_B < ∞} = 1, where L_B is the last exit time from B. [Hint: define
T_n = n + T_B ∘ θ_n; then U(x_0, A) ≥ E^{x_0}{U(X(T_n), A); T_n < ∞}.]
2. Let X be a transient Hunt process. Let A_n be compact, A_n ↓ and ⋂_n A_n = B.
Put f = lim_n P_{A_n}1. Prove that f is superaveraging and its excessive regularization
f̂ is equal to P_B 1. [Hint: f ≥ f̂ ≥ P_B 1. If x ∉ B, then

P^x{⋂_n [T_{A_n} < ∞]} = P^x{T_B < ∞}

by transience. Hence f̂ = P_B 1 on B^c ∪ B^r, which is a finely dense set.]


3. Let h ∈ ℰ+ and Uh < ∞. In the same setting as Problem 2, prove that
if f = lim_n P_{A_n}Uh then f̂ = P_B Uh.
4. Let X be a transient Hunt process and suppose Hypothesis (L) holds
with ξ a reference probability measure. Let B be a compact polar set. Prove
that there exists a function f ∈ ℰ+ such that ξ(f) ≤ 1 and f = ∞ on B.
[Hint: in Problem 2 take A_n to be open, and f_n = P_{A_n}1. On B^c, f_n ↓ P_B 1,
hence ξ(f_n) ↓ 0; on B, f_n = 1. Take {n_k} so that ξ(Σ_k f_{n_k}) ≤ 1. This is
used in classical potential theory to define a polar set as the set of "poles"
of a superharmonic function.]
5. Prove that (iii) in Theorem 2 is equivalent to E being σ-transient. [Hint:
under (iii), {Uh > ε} is transient.]

3.8. Hypothesis (B)

This section is devoted to a new assumption for a Hunt process which Hunt
listed as Hypothesis (B). What is Hypothesis (A)? This is more or less the
set of underlying assumptions for a Hunt process given in §3.1. Unlike
some other assumptions such as the conditions of transience, or the Feller
property and its generalization or strengthening mentioned in §3.7, Hypothesis
(B) is not an analytic condition but restricts the discontinuity of
the sample paths. It turns out to be essential for certain questions of potential
theory and Hunt used it to fit his results into the classical grain.

Hypothesis (B). For some α ≥ 0, any A ∈ ℰ^• and open G such that A ⊂ G,
we have

P_A^α = P_G^α P_A^α. (1)

This means of course that the two measures are identical. Recalling
Proposition 1 of §3.4, for S = T_G and T = T_A, we see that (1) is true if

T_A = T_G + T_A ∘ θ_{T_G} almost surely on {T_A < ∞}. (2)

On the other hand, if α > 0, then the following particular case of (1):

P_A^α 1 = P_G^α P_A^α 1 (3)

already implies the truth of (2). For (3) asserts that

E{e^{−α T_A}} = E{exp[−α(T_G + T_A ∘ θ_{T_G})]},

whereas the left member in (2) never exceeds the right member, which is
inf{t > T_G: X(t) ∈ A}. We will postpone a discussion of the case α = 0 until
the end of this section.
Next, the two members of (2) are equal on the set {T_G < T_A}. This is
an extension of the "terminal" property of T_A expressed in (8) of §3.6. Since
G ⊃ A, we have always T_G ≤ T_A. Can T_G = T_A? [If Ā ⊂ G, this requires
the sample path to jump from G^c into Ā.] There are two cases to consider.
Case (i). T_G = T_A < ∞ and X(T_G) = X(T_A) ∈ A^r. In this case, the strong
Markov property at T_G entails that T_A ∘ θ_{T_G} = 0. Hence the two members
of (2) are manifestly equal.
Case (ii). T_G = T_A < ∞ and X(T_G) = X(T_A) ∉ A^r. Then since X(T_A) ∈
A ∪ A^r, we must have X(T_G) ∈ A∖A^r. Consequently T_A ∘ θ_{T_G} > 0 and the
left member of (2) is strictly less than the right.
We conclude that (2) holds if and only if case (ii) does not occur, which
may be expressed as follows:

∀x: P_G(x, A∖A^r) = 0. (4)

Now suppose that (2) holds and A is thin. Then neither case (i) nor case
(ii) above can occur. Hence for any open G_n ⊃ A we have almost surely

T_{G_n} < T_A on {T_A < ∞}. (5)

Suppose first that x ∉ A; then by Theorem 8(b) of §3.3 there exist open sets
G_n ⊃ A, G_n ↓, such that T_{G_n} ↑ T_A, P^x-a.s. Since left limits exist everywhere
for a Hunt process, we have lim_{n→∞} X(T_{G_n}) = X(T_A−); on the other hand,
the limit is equal to X(T_A) by quasi left continuity. Therefore, X(·) is continuous
at T_A, P^x-a.s. For an arbitrary x we have

P^x{T_A < ∞; X(T_A−) = X(T_A)}
  = lim_{t↓0} P^x{t < T_A < ∞; X(T_A−) = X(T_A)}
  = lim_{t↓0} P^x{t < T_A; P^{X(t)}[T_A < ∞; X(T_A−) = X(T_A)]}
  = lim_{t↓0} P^x{t < T_A; P^{X(t)}[T_A < ∞]}
  = lim_{t↓0} P^x{t < T_A; P_A 1(X_t)}
  = P^x{T_A < ∞}.

In the above, the first equation follows from P^x{T_A > 0} = 1; the second by
the Markov property; the third by what has just been proved, because X(t) ∉ A
for t < T_A; and the fifth by the right continuity of t → P_A 1(X_t). The result is
as follows, almost surely:

{T_A < ∞} ⊂ {X(T_A−) = X(T_A)}. (6)

Equation (6) expresses the fact that the path is almost surely continuous
at the first hitting time of A. This will now be strengthened to assert continuity
whenever the path is in A. To do so suppose first that A is very thin.
Then by (12) of §3.5, the incidence set J_A consists of a sequence {T_n, n ≥ 1}
of successive hitting times of A. By the strong Markov property and (6), we
have for n ≥ 2, almost surely on {T_{n−1} < ∞}:

P^x{T_n < ∞; X(T_n−) = X(T_n)} = E^x{P^{X(T_{n−1})}[T_A < ∞; X(T_A−) = X(T_A)]}
  = E^x{P^{X(T_{n−1})}[T_A < ∞]} = P^x{T_n < ∞}.

It follows that X(T_n−) = X(T_n) on {T_n < ∞}, and consequently

∀t ≥ 0: X(t−) = X(t) almost surely on {X_t ∈ A}. (7)

Since (7) is true for each very thin set A, it is also true for each semi-polar set
by Theorem 6 of §3.5.
Thus we have proved that (2) implies the truth of (7) for each thin set
A ⊂ G. The converse will now be proved. First let A be thin and contained
in a compact K which in turn is contained in an open set G. We will show
that (4) is true. Otherwise there is an x such that P^x{T_G < ∞; X(T_G) ∈ A} > 0.
Since A is at a strictly positive distance from G^c, this is ruled out by (7). Hence
P_G(x, A) = 0. Next, let K ⊂ G as before. Then K∖K^r = ⋃_n C_n where each C_n
is (very) thin by Theorem 6 of §3.5. Since C_n ⊂ K, the preceding argument
yields P_G(x, C_n) = 0 and consequently P_G(x, K∖K^r) = 0. By the discussion
leading to (4), this is equivalent to:

T_K = T_G + T_K ∘ θ_{T_G} almost surely on {T_K < ∞}. (8)

Finally let A ∈ ℰ^•, A ⊂ G. By Theorem 8(b) of §3.5, for each x there exist
compact sets K_n ⊂ A, K_n ↑, such that T_{K_n} ↓ T_A, P^x-almost surely. On
{T_A < ∞} we have T_{K_n} < ∞ for all large n. Hence (8) with K = K_n yields
(2) as n → ∞.
We summarize the results in the theorem below.

Theorem 1. For a Hunt process the following four propositions are equivalent:
(i) Hypothesis (B), namely (1), is true for some α > 0.
(ii) Equation (3) is true for some α > 0.
(iii) For each semi-polar A and open G such that A ⊂ G, we have
P_G(x, A) = 0 for each x.
(iv) For each semi-polar A, (7) is true.

Moreover, if in (1) or (3) we restrict A to be a compact set, the resulting condition
is also equivalent to the above.
It should also be obvious that if Hypothesis (B) holds for some α > 0, it
holds for every α > 0. We now treat the case α = 0.

Theorem 2. If the Hunt process is transient in the sense that it satisfies condition
(iii) of Theorem 2 of §3.7, then the following condition is also equivalent to the
conditions in Theorem 1.
(v) Hypothesis (B) is true for α = 0.

Proof. Let h > 0 and Uh ≤ 1. Then applying (1) with α = 0 to the function
Uh, we obtain

0 = P_A Uh(x) − P_G P_A Uh(x) = E^x{∫_{T_A}^{T_G + T_A ∘ θ_{T_G}} h(X_t) dt}.

On the set {T_A < T_G + T_A ∘ θ_{T_G}; T_A < ∞}, the integral above is strictly
positive. Hence the preceding equation forces this set to be almost surely
empty. Thus (2) is true, which implies directly (1) for every α ≥ 0. □

Needless to say, we did not use the full strength of (v) above, since it is
applied only to a special kind of function. It is not apparent to what extent
the transience assumption can be relaxed.
In Theorem 9 of §3.3, we proved that for a Hunt process, T_B ≤ T̃_B a.s. for
each B ∈ ℰ^•, where T̃_B denotes the hitting time of B by the left limits {X(t−)}.
It is trivial that T̃_B = T_B if all paths are continuous. The next

theorem is a deep generalization of this and will play an important role
in §5.1. It was proved by Meyer [3], p. 112, under Hypothesis (L), and by
Azéma [1] without (L). The following proof due to Walsh [1] is shorter
but uses Dellacherie's theorem.

Theorem 3. Under Hypotheses (L) and (B), we have T̃_B = T_B a.s. for each
B ∈ ℰ^•.

Proof. The key idea is the following constrained hitting time, for each B ∈ ℰ^•:

S_B = inf{t > 0 | X(t−) = X(t) ∈ B}. (9)

To see that S_B is optional, let {J_n} be the countable collection of all jump
times of the process (Exercise 5 of §3.1), and Λ = ⋂_{n=1}^∞ {T_B ≠ J_n}. Then
Λ ∈ ℱ(T_B) by Theorem 6 of §1.3, hence S_B = (T_B)_Λ is optional by Proposition
4 of §1.3. It is easy to verify that S_B is a terminal time. Define φ(x) = E^x{e^{−S_B}}.
Then φ is 1-excessive by Exercise 8 of §3.4. Now suppose first that B is
compact. For 0 < ε < 1 put

A = B ∩ {x | φ(x) ≤ 1 − ε}.
Define a sequence of optional times as follows: R_1 = S_A, and for n ≥ 1:

R_{n+1} = R_n + S_A ∘ θ_{R_n}.

On {R_n < ∞} we have X(R_n) ∈ A because A is finely closed. Hence
φ(X(R_n)) ≤ 1 − ε and so P^{X(R_n)}{S_A > 0} = 1 by the zero-one law. It follows
that R_n < R_{n+1} by the strong Markov property. On the other hand, we have
as in the proof of Proposition 5 of §3.5, for each x and n ≥ 1:

E^x{e^{−R_{n+1}}} ≤ (1 − ε) E^x{e^{−R_n}} ≤ (1 − ε)^n.

Hence R_n ↑ ∞ a.s. Put I(ω) = {t > 0 | X(t, ω) ∈ A}. By the definition of R_n,
for a.e. ω such that R_n(ω) < ∞, the set I(ω) ∩ (R_n(ω), R_{n+1}(ω)) is contained
in the set of t where X(t−, ω) ≠ X(t, ω), and is therefore a countable set.
Since R_n(ω) ↑ ∞, it follows that I(ω) itself is countable. Hence by Theorem 8
of §3.5, A is a semi-polar set. This being true for each ε, we conclude that the
set B′ = B ∩ {x | φ(x) < 1} is semi-polar.
Now we consider three cases on {T_B < ∞}, and omit the ubiquitous "a.s.".
(i) X(T_B−) = X(T_B). Since B is compact, in this case X(T_B−) ∈ B and
so T̃_B ≤ T_B.
(ii) X(T_B−) ≠ X(T_B) and X(T_B) ∈ B∖B′. In this case φ(X(T_B)) = 1
and so S_B ∘ θ_{T_B} = 0. Since T̃_B ≤ S_B we have T̃_B ≤ T_B.
(iii) X(T_B−) ≠ X(T_B) and X(T_B) ∈ B′. Since B′ is semi-polar this is ruled
out by Theorem 1 under Hypothesis (B).

We have thus proved for each compact B that T̃_B ≤ T_B on {T_B < ∞}; hence
T̃_B = T_B by Theorem 9 of §3.3. For an arbitrary B ∈ ℰ^•, we have by Theorem
8(b) of §3.3, for each x, a sequence of compacts K_n ⊂ B, K_n ↑, such that P^x{T_{K_n} ↓
T_B} = 1. Since T̃_{K_n} = T_{K_n} as just proved and T̃_B ≤ T̃_{K_n} for n ≥ 1, it follows
that P^x{T̃_B ≤ T_B} = 1. As before this implies the assertion of Theorem 3. □

Exercises
1. Give an example such that P_A 1 = P_G P_A 1 but P_A = P_G P_A is false. [Hint:
starting at the center of a circle, after holding the path jumps to a fixed
point on the circle and then executes uniform motion around the circle.
This is due to Dellacherie. It is a recurrent Hunt process. Can you make
it transient?]
2. Under the conditions of Theorem 1, show that (7) remains true if the set
{X_t ∈ A} is replaced by {X_{t−} ∈ A}. [Hint: use Theorem 9 of §3.3.]
3. If (7) is true for each compact semi-polar set A, then it is true for every
semi-polar set. [Hint: Proposition 11 of §3.5 yields a direct proof without
use of the equivalent conditions in Theorem 1.]
4. Let E = {0} ∪ [1, ∞); let 0 be a holding point from which the path jumps
to 1, and then moves with uniform speed to the right. Show that this is a
Hunt process for which Hypothesis (B) does not hold.
5. Let A be a thin compact set, G_n open, and G_n ↓ A. Show that under
Hypothesis (B), we have under P^x, x ∉ A:

T_{G_n} < T_A, T_{G_n} ↑ T_A.

Thus T_A is predictable under P^x, x ∉ A.


6. Assume that the set A in Problem 5 has the stronger property that

sup_{x ∈ E} E^x{e^{−T_A}} < 1.

Then A is polar. [Hint: use Problem 2 of §3.4.]

NOTES ON CHAPTER 3

§3.1. Hunt process is named after G. A. Hunt, who was the author's classmate in
Princeton. The basic assumptions stated here correspond roughly with Hypothesis
(A), see Hunt [2]. The Borelian character of the semigroup is not always assumed. A
more general process, called "standard process", is treated in Dynkin [1] and Blumenthal
and Getoor [1]. Another technical variety called "Ray process" is treated in Getoor [1].
§3.2. The definition of an excessive function and its basic properties are due to Hunt
[2]. Another definition via the resolvents given in (14) is used in Blumenthal and Getoor
[1]. Though the former definition is more restrictive than the latter, it yields more
transparent results. On the other hand, there are results valid only with the resolvents.

§3.3. We follow Hunt's exposition in Hunt [3]. Although the theory of analytic sets
is essential here, we follow Hunt in citing Choquet's theorem without details. A proof
of the theorem may be found in Helms [1]. But for the probabilistic applications (projection
and section) we need an algebraic version of Choquet's theory without endowing
the sample space Ω with a topology. This is given in Meyer [1] and Dellacherie and
Meyer [1].
§3.4 and 3.5. These sections form the core of Hunt's theory. Theorem 6 is due to
Doob [3], who gave the proof for a "subparabolic" function. Its extension to an excessive
function is carried over by Hunt using his Theorem 5. The transfinite induction used
there may be concealed by some kind of maximal argument, see Hunt [3] or Blumenthal
and Getoor [1]. But the quickest proof of Theorem 6 is by means of a projection onto
the optional field, see Meyer [2]. This is a good place to learn the new techniques
alluded to in the Notes on §1.4.
Dellacherie's deep result (Theorem 8) is one of the cornerstones of what may be
called "random set theory". Its proof is based on a random version of the Cantor–
Bendixson theorem in classical set theory, see Meyer [2]. Oddly enough, the theorem
is not included in Dellacherie [2], but must be read in Dellacherie [1]; see also
Dellacherie [3].
§3.6. Theorem 1 originated with H. Cartan for the Newtonian potential (Brownian
motion in R³); see e.g., Helms [1]. This and several other important results for Hunt
processes have companions which are valid for left continuous moderate Markov
processes, see Chung and Glover [1]. Since a general Hunt process reversed in time has
these properties as mentioned in the Notes on §2.4, such developments may be interesting
in the future.
"Last exit time" is hereby rechristened "quitting time" to rhyme with "hitting time".
Although it is the obvious concept to be used in defining recurrence and transience, it
made a belated entry in the current vocabulary of the theory. No doubt this is partly due
to the former prejudice against a random time that is not optional, despite its demonstrated
power in older theories such as random walks and Markov chains. See Chung
[2] where a whole section is devoted to last exits. The name has now been generalized to
"co-optional" and "co-terminal"; see Meyer–Smythe–Walsh [1]. Intuitively the quitting
time of a set becomes its hitting time when the sense of time is reversed ("from infinity",
unfortunately). But this facile interpretation is only a heuristic guide and not easy to
make rigorous. See §5.1 for a vital application.
§3.7. For a somewhat more general discussion of the concepts of transience and
recurrence, see Getoor [2]. The Hunt theory is mainly concerned with the transient
case, having its origin in the Newtonian potential. Technically, transience can always
be engineered by considering (P_t^α) with α > 0, instead of (P_t). This amounts to killing
the original process at an exponential rate e^{−αt}, independently of its evolution. It can be
shown that the resulting killed process is also a Hunt process.
§3.8. Hunt introduced his Hypothesis (B) in order to characterize the balayage
(hitting) operator P_A in a way recognizable in modern potential theory, see Hunt [1, §3.6].
He stated that he had not found "simple and general conditions" to ensure its truth.
Meyer [3] showed that it is implied by the duality assumptions and noted its importance
in the dual theory. It may be regarded as a subtle generalization of the continuity of the
paths. Indeed Azéma [1] and Smythe and Walsh [1] showed that it is equivalent to the
quasi left continuity of a suitably reversed process. We need this hypothesis in §5.1 to
extend a fundamental result in potential theory from the continuous case to the general
case. A new condition for its truth will also be stated there.
Chapter 4

Brownian Motion

4.1. Spatial Homogeneity

In this chapter the state space E is the d-dimensional Euclidean space R^d,
d ≥ 1; ℰ is the classical Borel field on R^d. For A ∈ ℰ, B ∈ ℰ, the sets A ± B
are the vectorial sum and difference of A and B, namely the set of x ± y
where x ∈ A, y ∈ B. When B = {x} this is written as A ± x; the set 0 − A
is written as −A, where 0 is the zero point (origin) of R^d.
Let (P_t, t ≥ 0) be a strictly stochastic transition semigroup such that for
each t ≥ 0, x ∈ E and x_0 ∈ E we have

P_t(x, A) = P_t(x + x_0, A + x_0), A ∈ ℰ. (1)

Then we say (P_t) is spatially homogeneous. Temporally homogeneous
Markov processes with such a semigroup constitute a large and important
class. They are called "additive processes" by Paul Lévy, and are now also
known as "Lévy processes"; see Lévy [1]. We cannot treat this class in
depth here, but some of its formal structure will be presented here as a
preparation for a more detailed study of Brownian motion in the following
sections.
We define a family of probability measures as follows:

π_t(A) = P_t(o, A), A ∈ ℰ. (2)

The semigroup property then becomes the convolution relation

π_{s+t}(A) = ∫ π_s(A − y) π_t(dy) = (π_s * π_t)(A), (3)

where the integral is over E and * denotes the convolution. This is valid for
s ≥ 0, t ≥ 0, with π_0 the unit mass at o. We add the condition:

π_t → π_0 vaguely as t ↓ 0. (4)
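For the Brownian motion semigroup (treated in §4.2), π_t is the centered Gaussian measure with variance t, and (3) reduces to the fact that such Gaussians convolve: π_s * π_t = π_{s+t}. The numerical convolution of densities below is our own discretization with illustrative parameters, not part of the text.

```python
import math

def gauss(x, var):
    # density of the centered Gaussian pi_var at x
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

s, t = 0.7, 1.4
dx = 0.05
grid = [-12.0 + k * dx for k in range(481)]
# discrete convolution: (pi_s * pi_t)(x) ~ sum_y pi_s(y) * pi_t(x - y) * dx
for x in (-1.0, 0.0, 0.8, 2.3):
    conv = sum(gauss(y, s) * gauss(x - y, t) for y in grid) * dx
    assert abs(conv - gauss(x, s + t)) < 1e-4
```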



For f ∈ ℰ+ and t ≥ 0 we have

P_t f(x) = ∫ f(x + y) π_t(dy). (5)

As a consequence of bounded convergence, we see that if f ∈ bC, then
P_t f ∈ bC; if f ∈ C_0, then P_t f ∈ C_0. Moreover lim_{t↓0} P_t f = f for f ∈ C_0,
by (4). Thus (P_t) is a Feller semigroup. Hence a spatially homogeneous process
may and will be supposed to be a Hunt process.
For A ∈ ℰ, we define

U^α(A) = ∫_0^∞ e^{−αt} π_t(A) dt

and U^α(x, A) = U^α(A − x). Then U^α is the α-potential kernel and we have
for f ∈ ℰ+:

U^α f(x) = ∫ f(x + y) U^α(dy). (6)

We shall denote the Lebesgue measure in E by m. If f ∈ L¹(m), then we
have by (5), Fubini's theorem and the translation-invariance of m:

∫ m(dx) P_t f(x) = ∫ [∫ m(dx) f(x + y)] π_t(dy)
  = ∫ [∫ m(dx) f(x)] π_t(dy) = ∫ m(dx) f(x).

For a general measure μ on ℰ, we define the measure μP_t or μU^α by

μP_t(A) = ∫ μ(dx) P_t(x, A), μU^α(A) = ∫ μ(dx) U^α(x, A). (7)

The preceding result may then be recorded as

mP_t = m, ∀t ≥ 0. (8)

We say m is an invariant measure for the semigroup (P_t). It follows similarly
from (6) that

mU^α = (1/α) m, ∀α > 0; (9)

namely m is also an invariant measure for the resolvent operators (αU^α).

Proposition 1. Let α > 0. If U^α f is lower semi-continuous for every f ∈ bℰ+,
then U^α ≪ m. If U^α ≪ m, then U^α f is continuous for every f ∈ bℰ+.

Proof. To prove the first assertion, suppose A ∈ ℰ and m(A) = 0. Then
U^α(x, A) = 0 for m-a.e. x by (9). Since U^α 1_A is lower semi-continuous, this
implies U^α(A) = U^α(o, A) = 0. Next, if U^α ≪ m, let u^α be a Radon–Nikodym
derivative of U^α with respect to m. We may suppose u^α ≥ 0 and u^α ∈ ℰ.
Then (6) may be written as

U^α f(x) = ∫ f(x + y) u^α(y) m(dy) = ∫ f(z) u^α(z − x) m(dz). (10)

Since u^α ∈ L¹(m), a classical result in the Lebesgue theory asserts that

lim_{x′→x} ∫ |u^α(z − x′) − u^α(z − x)| m(dz) = 0 (11)

(see e.g. Titchmarsh [1], p. 377). Since f is bounded, the last term in (10)
shows that U^α f is continuous by (11). □
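The classical result (11), continuity of translation in L¹(m), can be illustrated numerically with the integrable density u(z) = e^{−|z|}/2 (a choice of ours, not from the text): the L¹ distance between u and its translate by h decreases to 0 with h.

```python
import math

def l1_shift(h, dx=0.001, lim=30.0):
    # || u(. - h) - u ||_{L^1} for u(z) = exp(-|z|)/2, by the trapezoidal rule
    n = int(2 * lim / dx)
    s = 0.0
    for k in range(n + 1):
        z = -lim + k * dx
        g = abs(math.exp(-abs(z - h)) - math.exp(-abs(z))) / 2
        s += g if 0 < k < n else g / 2
    return s * dx

vals = [l1_shift(h) for h in (0.8, 0.4, 0.2, 0.1, 0.05)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # distance shrinks with h
assert vals[-1] < 0.06                             # and tends to 0
```

For this particular u the distance is exactly 2(1 − e^{−h/2}), which the quadrature reproduces.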

Proposition 2. The three conditions below are equivalent:

(a) For all α ≥ 0, all α-excessive functions are lower semi-continuous.
(b) For some α > 0, all α-excessive functions are lower semi-continuous.
(c) A set A (in ℰ) is of zero potential if m(A) = 0.

Proof. If (b) is true then U^α ≪ m by Proposition 1. Hence if m(A) = 0, then
for every x, m(A − x) = 0 and U^α(x, A) = U^α(A − x) = 0. Thus A is of zero
potential.
If (c) is true, then U^α ≪ m. Hence by Proposition 1, U^α f is continuous for
every f ∈ bℰ+. It follows that U^α f is lower semi-continuous for every f ∈ ℰ+,
by an obvious approximation. Hence (a) is true by Theorem 9 of §3.2, trivially
extended to all α-excessive functions. □

We proceed to discuss a fundamental property of spatially homogeneous
Markov processes (X_t): the existence of a dual process (X̂_t). The latter is
simply defined as follows:

X̂_t = −X_t. (12)

It is clear that (X̂_t) is a spatially homogeneous process. Let P̂_t, Û^α, π̂_t, etc.
be the quantities associated with it. Thus

π̂_t(A) = π_t(−A), Û^α(A) = U^α(−A).

It is convenient to introduce, when U^α has a density u^α with respect to m,
the function u^α(x, y) = u^α(y − x). Thus we have

U^α(x, A) = ∫_A u^α(x, y) m(dy) (13)

and

Û^α(x, A) = ∫_A m(dy) u^α(y, x). (14)

(In the last relation it is tempting to write Û^α(A, x) for Û^α(x, A). This is indeed
a good practice in many lengthy formulas.) Now let f ∈ bℰ+; then

Û^α f(y) = ∫ m(dx) f(x) u^α(x, y). (15)

Recall the notation

U^α μ(x) = ∫ u^α(x, y) μ(dy) (16)

for a measure μ. It follows from (15) and (16) that if μ is a σ-finite measure,
as well as m, we have by Fubini's theorem:

∫ Û^α f(y) μ(dy) = ∫ m(dx) f(x) U^α μ(x);

or perhaps more legibly:

∫ Û^α f dμ = ∫ (f · U^α μ) dm. (17)

This turns out to be a key formula worthy to be put in a general context.
We abstract the situation as follows. Let (X_t) and (X̂_t) be two Hunt processes
on the same general (E, ℰ); and let (P_t), (P̂_t); (U^α), (Û^α) be the associated
semigroups and resolvents. Assume that there is a (reference) measure m and
a function u^α ≥ 0 such that for every A ∈ ℰ we have

U^α(x, A) = ∫_A u^α(x, y) m(dy), Û^α(x, A) = ∫_A m(dy) u^α(y, x). (18)

Under these conditions we say the processes are in duality. In this case the
relation (17) holds for any f ∈ ℰ+ and any σ-finite measure μ; clearly it
also holds with U^α and Û^α interchanged. Further conditions may be put
on the dual potential density function u(·, ·). We content ourselves with
one important result in potential theory which flows from duality.

Theorem 3. Assume duality. Let μ and ν be two σ-finite measures such that for
some α ≥ 0 we have

U^α μ = U^α ν < ∞. (19)

Then μ ≡ ν.

Proof. We have by the resolvent equation for β ≥ α:

U^β μ = U^α μ − (β − α) U^β(U^α μ).

Thus if (19) is true it is also true when α is replaced by any greater value.
To illustrate the methodology in the simplest situation let us first suppose
that μ and ν are finite measures. Take f ∈ bC; then

lim_{α→∞} α Û^α f = f

boundedly, because (X̂_t) is a Hunt process. It follows from this, the duality
relation (17), and the finiteness of μ that

∫ f dμ = lim_{α→∞} ∫ α Û^α f dμ = lim_{α→∞} ∫ f · (α U^α μ) dm.

This is also true when μ is replaced by ν, hence by (19) for all large values of
α we obtain

∫ f dμ = ∫ f dν.

This being true for all f ∈ bC, we conclude that μ ≡ ν. □

In the general case the idea is to find a function h on E such that h > 0
and ∫h dμ < ∞; then apply the argument above to the finite measure h · dμ.
If μ is a Radon measure, a bounded continuous h satisfying the conditions
above can be easily constructed, but for a general σ-finite μ we use a more
sophisticated method involving an α-excessive function for the dual process
(see Blumenthal and Getoor [1]). Since U^α μ < ∞, it is easy to see that there
exists g ∈ bℰ, g > 0, such that by (17):

∫ Û^α g dμ = ∫ (g · U^α μ) dm < ∞.

Put h = Û^α g; then clearly h ∈ bℰ, h > 0 and ∫h dμ < ∞. Moreover h, being
an α-potential for (X̂_t), is α-excessive for (P̂_t) (called "α-co-excessive"). Thus
it follows from (11) of §3.2 that lim_{α→∞} α Û^α h = h. But we need a bit more
than this. We know from the Corollary to Theorem 1 of §3.5 that h is finely
continuous with respect to (X̂_t) (called "co-finely continuous"); hence so is
fh for any f ∈ bC, because continuity implies fine continuity and continuity
in any topology is preserved by multiplication. It follows (see Exercise 7
of §3.5) that

lim_{α→∞} α Û^α(fh) = fh.

Now we can apply (17) with f replaced by fh to conclude as before that
∫ fh dμ = ∫ fh dν; hence h dμ = h dν; hence μ ≡ ν. □
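The fact lim_{α→∞} α Û^α f = f used twice in this proof has a transparent finite-dimensional analogue: for a generator matrix Q, α(αI − Q)^{−1} → I as α → ∞. The 2 × 2 generator below is a made-up illustration of ours:

```python
# Sketch: alpha*(alpha*I - Q)^{-1} -> I as alpha -> infinity, the finite-state
# analogue of "alpha * U^alpha f -> f"; Q is a made-up 2x2 generator.
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

Q = [[-3.0, 3.0], [2.0, -2.0]]
for alpha in (1e2, 1e4, 1e6):
    Ua = inv2([[alpha * (i == j) - Q[i][j] for j in range(2)] for i in range(2)])
    err = max(abs(alpha * Ua[i][j] - (i == j)) for i in range(2) for j in range(2))
    assert err < 10.0 / alpha   # error decays like |Q| / alpha
```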

Next, we shall illustrate the method of Fourier transforms by giving an
alternative characterization of a spatially homogeneous and temporally
homogeneous Markov process (without reference to sample function regularities).
A stochastic process {X_t, t ≥ 0} is said to have stationary independent
increments iff it has the following two properties:
(a) For any 0 ≤ t_0 < t_1 < ··· < t_n, the random variables {X(t_0), X(t_k) −
X(t_{k−1}), 1 ≤ k ≤ n} are independent;
(b) For any s ≥ 0, t ≥ 0, X(s + t) − X(t) has the same distribution as
X(s) − X(0).

Theorem 4. A spatially homogeneous and temporally homogeneous Markov
process is a process having stationary independent increments, and conversely.

Proof. We leave the converse part to the reader (cf. Theorem 9.2.2 of Course).
To prove the direct assertion we use Fourier transforms (characteristic
functions). We write for x ∈ R^d, y ∈ R^d:

⟨x, y⟩ = Σ_{j=1}^d x_j y_j (20)

if x = (x_1, …, x_d), y = (y_1, …, y_d). Let i = √−1 and u_k ∈ R^d; and set
X(t_{−1}) = 0. We have

E{exp[i Σ_{k=0}^{n+1} ⟨u_k, X(t_k) − X(t_{k−1})⟩] | ℱ_{t_n}}
  = exp[i Σ_{k=0}^{n} ⟨u_k, X(t_k) − X(t_{k−1})⟩]
    · E{exp[i⟨u_{n+1}, X(t_{n+1}) − X(t_n)⟩] | ℱ_{t_n}}. (21)

Next we have, if u ∈ R^d, s ≥ 0, t ≥ 0:

E{exp[i⟨u, X(s + t) − X(t)⟩] | X(t)}
  = exp[−i⟨u, X(t)⟩] ∫ P_s(X(t), dy) exp[i⟨u, y⟩]
  = exp[−i⟨u, X(t)⟩] ∫ P_s(o, dz) exp[i⟨u, z + X(t)⟩]
  = ∫ P_s(o, dz) exp[i⟨u, z⟩] = π̃_s(u), (22)

where π̃_s is the characteristic function of the probability measure π_s defined
in (2). Taking expectations in (21) we obtain

E{exp[i Σ_{k=0}^{n+1} ⟨u_k, X(t_k) − X(t_{k−1})⟩]}
  = E{exp[i Σ_{k=0}^{n} ⟨u_k, X(t_k) − X(t_{k−1})⟩]} · π̃_{t_{n+1}−t_n}(u_{n+1}).

Hence by induction on n, this is equal to

μ̃_{t_0}(u_0) ∏_{k=1}^{n+1} π̃_{t_k − t_{k−1}}(u_k),

where μ̃_t is the characteristic function of the distribution μ_t of X(t). Assertion
(a) follows from this by a standard result on independence (see Theorem 6.6.1
of Course), and assertion (b) from taking expectations in (22). Note that π_s
is the distribution of X(s) − X(0). □
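For Brownian motion in R¹ (treated in the next section) the ingredients of this proof are explicit: π_s is N(0, s) and π̃_s(u) = e^{−su²/2}, so π̃_s π̃_t = π̃_{s+t}, the Fourier form of the convolution relation (3). The quadrature below, computing π̃_t(u) = ∫ cos(ux) π_t(dx), is our own illustrative check.

```python
import math

def char_pi(u, t, dx=0.01, lim=40.0):
    # real part of int exp(i*u*x) pi_t(dx); the imaginary part vanishes by symmetry
    n = int(2 * lim / dx)
    s = 0.0
    for k in range(n + 1):
        x = -lim + k * dx
        g = math.cos(u * x) * math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)
        s += g if 0 < k < n else g / 2
    return s * dx

for t in (0.5, 2.0):
    for u in (0.0, 0.7, 1.5):
        assert abs(char_pi(u, t) - math.exp(-t * u * u / 2)) < 1e-6

# the convolution relation (3) in Fourier form: characteristic functions multiply
assert abs(char_pi(0.7, 0.5) * char_pi(0.7, 2.0) - char_pi(0.7, 2.5)) < 1e-6
```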

We close this section by mentioning an important property of a process
having stationary independent increments: its remote field is trivial. The
precise result together with some useful consequences are given below in
Exercises 4, 5, and 6.

Exercises
1. Is m the unique σ-finite measure satisfying (8), apart from a multiplicative
constant? [This is a hard question in general. For the Brownian motion
semigroup the answer is "yes"; for the Poisson semigroup the answer is
"no".]

2. For the uniform motion semigroup (P_t) (Example 2 of §1.2), show that
U^α ≪ m and find u^α(x, y). In this case P_t(x, ·) is singular with respect to m
for each t and x. For the Poisson semigroup (P_t) (Example 3 of §1.2),
find an invariant measure m_0 (on N). In this case P_t(n, ·) ≪ m_0 for each
t and n.

3. The property of independent increments in Theorem 4 may be extended as
follows. Let T_0 = 0; for n ≥ 1 suppose T_{n−1} < T_n and T_n − T_{n−1} is
optional relative to the σ-fields (𝒜_t), where 𝒜_t is generated by
{X(T_{n−1} + s) − X(T_{n−1}), 0 < s ≤ t}. Then the random variables X(T_n) −
X(T_{n−1}), n ≥ 1, are independent.

The next three exercises are valid for a process having stationary independent
increments.

4. Put

𝒢 = ⋂_{t ≥ 0} (σ{X_s, s ≥ t})⁻,

where ⁻ denotes augmentation as in §2.3. The σ-field 𝒢 is called the
remote field of the process. Prove that if Λ ∈ 𝒢, then P^x{Λ} is zero or
one for every x. Is the result true for a process having arbitrary independent
increments? [Hint: for discrete time analogues see §8.1 of Course.]
5. Suppose there exists t > 0 such that π_t has a density. If f ∈ bℰ and
f = P_t f, then f is a constant. [Hint: show f ∈ bC and use the Choquet–
Deny–Hunt theorem (Course, p. 355).]
6. Under the hypothesis on (P_t) in Problem 5, a set A ∈ ℰ is recurrent or transient
(see §3.6) according as P_A 1 ≡ 1 or P_A 1 ≢ 1. [Hint: let f = lim_{n→∞} P_n P_A 1;
then f is a constant by Exercise 5; now use Problem 4.]


7. Suppose that for each t > 0, there exists a density π′_t of π_t with respect
to m such that π′_t > 0 in E. Then if A ∈ ℰ and P_A 1(x_0) = 0 for some x_0, A
must be polar. [Hint: show that P_A 1(x) = 0 for π_t-a.e. x, hence P_t P_A 1(x) = 0
for all x.]
8. Suppose that for each y, u^α(·, y) is α-excessive. (It is possible to polish up
u^α to achieve this.) Prove that for any measure μ, U^α μ is α-excessive.
Under this condition show that the condition (19) in Theorem 3 may
be required to hold only m-a.e.

4.2. Preliminary Properties of Brownian Motion

The Brownian motion process in Rd was introduced in Example 2 of §3.7.


For d = 1 it was introduced in Example 4 of §1.2. It is a Markov process
with transition function:

P_t(x, dy) = p_t(x, y) dy,   p_t(x, y) = Π_{j=1}^d (1/√(2πt)) exp(−(y_j − x_j)²/(2t)).  (1)

Here

dy = dy_1 ⋯ dy_d

is the Lebesgue measure in R^d, also written as m(dy). We shall write "a.e."


for "m-a.e.", and "density" for "density with respect to m". The product form
in (1) shows that the Brownian motion X(t) in R^d may be defined as the
vector (X_1(t), …, X_d(t)) where each coordinate X_j(t) is a Brownian motion
in R^1, and the d coordinates are stochastically independent. It is spatially
homogeneous, and so by Theorem 4 of §4.1 may be characterized as a process
having stationary independent increments, such that X(t) - X(O) has the
d-dimensional Gaussian (normal) distribution with mean the zero vector

and covariance matrix equal to t times the d x d identity matrix. For d = 1,


another useful characterization is by way of a Gaussian process, which is
defined to be a process having Gaussian distributions for all finite-dimen-
sional (marginal) distributions. The Brownian motion is then characterized
as a Gaussian process such that for every sand t:

E(X(t)) = 0;  E(X(s)X(t)) = s ∧ t.  (2)

The proof requires Exercise 2 of §6.6 of Course, which states that mutually
orthogonal Gaussian random variables are actually independent.
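The covariance characterization (2) is easy to check empirically. The following sketch (sample sizes, times and tolerances are illustrative choices, not from the text) simulates Brownian paths through independent Gaussian increments and estimates E(X(s)X(t)):

```python
import numpy as np

# Monte Carlo check of (2): for standard Brownian motion E[X(s)X(t)] = s ^ t.
rng = np.random.default_rng(0)

def brownian_at(times, n_paths, rng):
    """Sample X at the given increasing times via independent Gaussian increments."""
    dts = np.diff(np.concatenate(([0.0], times)))
    increments = rng.normal(0.0, np.sqrt(dts), size=(n_paths, len(dts)))
    return np.cumsum(increments, axis=1)  # column j holds X(times[j])

s, t = 0.7, 1.9
xs = brownian_at(np.array([s, t]), 200_000, rng)
cov_est = np.mean(xs[:, 0] * xs[:, 1])  # estimates E[X(s)X(t)]; the mean is 0
print(cov_est)                          # close to min(s, t) = 0.7
```

With 200,000 paths the standard error is about 0.003, so the estimate sits well within a 0.02 band around s ∧ t.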
A trivial extension of the definition consists of substituting a²t for t in (1),
where a² > 0. We deem this an unnecessary nuisance. In this section we
review and adduce a few basic results for the Brownian motion. We begin
with the most important one.
(I) There is a version of the Brownian motion with continuous sample
paths.
For d = 1 the proof is given in the Example in §3.1. The general case is an
immediate consequence of the coordinate representation (even without
independence among the coordinates). This fundamental property is often
taken as part of the definition of the process. For simplicity we may suppose
all sample functions to be continuous.
(II) The Brownian motion semigroup has both the Feller property and
the strong Feller property.
The latter is verified in §3.7, and the former in Exercise 2 of §2.2. As
defined, the strong Feller property does not imply the Feller property!
Recall also from Example 4 of §3.7 that Uf is bounded continuous if f ∈ bℰ
and f has compact support, and U^αf for α > 0 is bounded continuous for
f ∈ bℰ. These properties can often be used as substitutes for the Feller
properties.
(III) The Brownian motion in R^d is recurrent for d = 1 and d = 2; it is
transient for d ≥ 3.
This is verified in Examples 2 and 4 of §3.7. Let us mention that for d = 1
and d = 2, there are discrete-time versions of recurrence which supplement
the conclusions of Theorem 1 of §3.7. For each δ > 0, let {X(nδ), n ≥ 1} be a
"skeleton" of the Brownian motion {X(t), t ≥ 0}. Then {X(nδ) − X((n − 1)δ),
n ≥ 1} is a sequence of independent and identically distributed random
variables, with mean zero and finite second moment (the latter being needed
only in R²). Hence the recurrence criteria for a random walk in R^1 or R²
(§8.3 of Course) yield the following result. For any nonempty open set G:

P^x{X(nδ) ∈ G for infinitely many values of n} = 1.

This is just one illustration of the close tie between the theories of random
walk and of Brownian motion. In fact, all the classical limit theorems have

their analogues for Brownian motion, which are frequently easier to prove
because there are ready sharp estimates. Since our emphasis here is on the
process we shall not delve into those questions.
Let us also recall that for any d ≥ 1, each set is either recurrent or transient
by Exercise 6 of §4.1.
(IV) A singleton is recurrent for Brownian motion in R^1, and polar in
R^d for d ≥ 2.
The statement for R^1 is trivial by recurrence and continuity of paths.
The statement for R^d, d ≥ 2, is proved in the Example of §3.6. Another
proof will be given in §4.4 below.
As a consequence, we can show that the fine topology in R^d, d ≥ 2, is
strictly finer than the Euclidean topology. For example, if Q is the set of
points in R^d with rational coordinates then R^d − Q is a finely open set
because Q is a polar set. It is remarkable that all the paths "live" on this
set full of holes.
(V) Let

F = {x ∈ R^d: x_1 = 0},  G_1 = {x ∈ R^d: x_1 > 0},  G_2 = {x ∈ R^d: x_1 < 0}.  (3)

Then F ⊂ G_1^r ∩ G_2^r.
To prove this we need consider only the case d = 1, but we will argue
in general. Let x ∈ F. By the symmetry of the distribution of X_1(t), we have

P^x{X(t) ∈ G_1} = P^x{X(t) ∈ G_2}.

Since P^x{X(t) ∈ F} = 0 by the continuity of the distribution of X_1(t), we
deduce that P^x{X(t) ∈ G_1} = 1/2. Hence for each u > 0:

P^x{X(t) ∈ G_1 for all t ∈ (0, u]} ≤ 1/2

which implies P^x{T_{G_2} ≤ u} ≥ 1/2. It follows that P^x{T_{G_2} = 0} ≥ 1/2 and con-
sequently x ∈ G_2^r by Blumenthal's zero-or-one law (Theorem 6 of §2.3).
Interchanging G_1 and G_2 we obtain x ∈ G_1^r.

Corollary. For the Brownian motion in R^1, each x is regular for {x}.

By (V), the path starting at 0 must enter G_1 and G_2 in an arbitrarily short
time interval [0, ε], and therefore must cross F infinitely many times by
continuity. Thus 0 is a point of accumulation of the set {0} in the fine to-
pology; the same is true for any x by spatial homogeneity.
(VI) For any Borel set B, we have

X(T_B) ∈ ∂B  on {0 < T_B < ∞}.  (4)

This is a consequence of the continuity of paths, but let us give the details
of the argument. By the definition of T_B, on the set {T_B < ∞} for any ε > 0
there exists t ∈ [T_B, T_B + ε) such that X(t) ∈ B. Hence either X(T_B) ∈ B,
or by the right continuity of the path,

X(T_B) = lim_{t↓0} X(T_B + t) ∈ B̄.

On the other hand, on the set {0 < T_B}, X(t) ∈ B^c for 0 < t < T_B, hence
by the left continuity of the path,

X(T_B) = lim_{t↓0} X(T_B − t) ∈ (B^c)⁻.

Since ∂B = B̄ ∩ (B^c)⁻, (4) is proved.


If XE B r , then PX{X(TB) = X(O) = X} = 1, but of course X need not be
in aB, for instance if B is open and x E B. This trivial possibility should be
remembered in quick arguments.
(VII) Let C_n be decreasing closed sets and ⋂_n C_n = C. Then we have
for each x ∈ C^c ∪ C:

lim_n ↑ T_{C_n} = T_C,  P^x-a.s.  (5)

This is contained in the Corollary to Theorem 5 of §2.4. We begin by the
alert that (5) may not be true for x ∈ C̄ − C! Take for example a sequence
of closed balls shrinking to a single point. If x ∈ C, (5) is trivial. Let x ∈ C^c.
Then there exists k such that x ∈ C_k^c. Since C_k^c is open, x is not regular for
C_n for all n ≥ k. The rest of the argument is true P^x-a.s. We have T_{C_n} > 0
for all n ≥ k, and T_{C_n} ↑. Let S = lim_n ↑ T_{C_n}; then 0 < S ≤ T_C. On {0 < T_{C_n} <
∞}, we have X(T_{C_n}) ∈ C_n by (VI). Hence on {0 < S < ∞}, we have by
continuity of paths:

X(S) = lim_n X(T_{C_n}) ∈ ⋂_n C_n = C.

Thus S ≥ T_C and so S = T_C. On {S = ∞}, T_C = ∞.

(VIII) Let ℱ_t be either ℱ_t^0 or ℱ_t~ (see §2.3). Then for any x,

{X(t), ℱ_t, P^x}  and  {||X(t)||² − td, ℱ_t, P^x}  (6)

are martingales. (Here ||x||² = Σ_{j=1}^d x_j².)
To verify the second, note that since

||X(t)||² − td = Σ_{j=1}^d (X_j(t)² − t)

we need only verify it for d = 1, which requires a simple computation.
Observe that in the definition of a martingale {X_t, ℱ_t^0, P^x}, if the σ-field
ℱ_t^0 is completed to ℱ_t~ with respect to P^x, then {X_t, ℱ_t~, P^x} is also a mar-
tingale. Hence we may as well use the completed σ-field.
See Exercise 14 below for a useful addition.
(IX) For any constant c > 0, {(1/c)X(c²t), t ≥ 0} is also a Brownian motion.
If we define

X̃(t) = tX(1/t)  for t > 0;
X̃(t) = 0        for t = 0;  (7)

then {X̃(t), t ≥ 0} is also a Brownian motion.


The first case is referred to as "scaling". The second involves a kind
of reversing the time, since 1/t decreases as t increases, and is useful in trans-
forming limit behavior of the path as t ↑ ∞ to t ↓ 0. The proof that X̃ is
a Brownian motion is easiest if we first observe that each coordinate is a
Gaussian process, and then check in R^1 for s > 0, t > 0:

E(X̃(s)X̃(t)) = st E(X(1/s)X(1/t)) = st (1/s ∧ 1/t) = s ∧ t.

Thus (2) is true and the characterization mentioned there yields the
conclusion.
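The scaling property in (IX) can be illustrated numerically; the sketch below (the constants c, t and the sample size are arbitrary illustrative choices) checks that Y(t) = (1/c)X(c²t) has the variance t required of a Brownian motion at time t:

```python
import numpy as np

# Sanity check of "scaling" in (IX): Y(t) = (1/c) X(c^2 t) has Var Y(t) = t.
rng = np.random.default_rng(1)
c, t, n = 3.0, 0.5, 200_000
x = rng.normal(0.0, np.sqrt(c * c * t), size=n)  # X(c^2 t) ~ N(0, c^2 t)
y = x / c                                        # Y(t) = (1/c) X(c^2 t)
print(y.var())                                   # close to t = 0.5
```

The same one-dimensional marginal check applies to the time inversion (7), with Var(tX(1/t)) = t²·(1/t) = t.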
(X) Let B be a Borel set such that m(B) < ∞. Then there exists c > 0
such that

sup_{x∈E} E^x{exp(c T_{B^c})} < ∞.  (8)

Proof. If x ∈ (B̄)^c, then P^x{T_{B^c} = 0} = 1 so (8) is trivial. Hence it is sufficient
to consider x ∈ B̄. For any t > 0, we have

sup_{x∈B̄} P^x{T_{B^c} > t} ≤ sup_{x∈B̄} P^x{X(t) ∈ B} = sup_{x∈B̄} ∫_B p_t(x, y) dy ≤ m(B)/(2πt)^{d/2}.  (9)

This number may be made ≤ 1/2 if t is large enough. Fix such a value of t
from here on. It follows from the Markov property that for x ∈ B̄ and
n ≥ 1:

P^x{T_{B^c} > (n + 1)t} = E^x{T_{B^c} > nt; P^{X(nt)}[T_{B^c} > t]}
                       ≤ (1/2) P^x{T_{B^c} > nt}

since X(nt) ∈ B on {T_{B^c} > nt}. Hence by induction the probability above
is ≤ 1/2^{n+1}. We have therefore by elementary estimation:

E^x{exp(c T_{B^c})} ≤ 1 + Σ_{n=0}^∞ e^{c(n+1)t} P^x{nt < T_{B^c} ≤ (n + 1)t}

                  ≤ 1 + Σ_{n=0}^∞ e^{c(n+1)t} 2^{−n}.

This series converges for sufficiently small c and (8) is proved. □


Corollary. If m(B) < ∞ then E^x{T_{B^c}^n} < ∞ for all n ≥ 1; in particular
P^x{T_{B^c} < ∞} = 1.

This simple method yields other useful results, see Exercise 11 below.
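The series at the end of the proof of (X) is geometric with ratio e^{ct}/2, so it is finite precisely when c < (log 2)/t. A small numerical sketch (with an illustrative value of t) confirms the closed form of the bound:

```python
import math

# Tail series from the proof of (X): 1 + sum_{n>=0} exp(c(n+1)t) * 2^(-n).
# It is geometric with ratio exp(c*t)/2, hence finite iff c < log(2)/t.
def tail_bound(c, t, n_terms=200):
    return 1.0 + sum(math.exp(c * (n + 1) * t) * 2.0 ** (-n) for n in range(n_terms))

t = 1.0
c = 0.5 * math.log(2.0) / t          # safely below the threshold log(2)/t
closed_form = 1.0 + math.exp(c * t) / (1.0 - math.exp(c * t) / 2.0)
print(tail_bound(c, t), closed_form)
```

For c above the threshold the partial sums grow without bound, matching the requirement that c be "sufficiently small" in the proof.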

For each r > 0, let us put

T_r = inf{t > 0: ||X(t) − X(0)|| ≥ r}.  (10)

Let B(x, r) denote the open ball with center x and radius r, namely

B(x, r) = {y ∈ E: ||x − y|| < r};  (11)

and S(x, r) the boundary sphere of B(x, r):

S(x, r) = ∂B(x, r) = {y ∈ E: ||x − y|| = r}.  (12)

Then under P^x, T_r is just T_{B^c} where B = B(x, r); it is also equal to T_{∂B}. The
next result is a useful consequence of the rotational symmetry of the Brownian
motion.

(XI) For each r > 0, the random variables T_r and X(T_r) are independent
under any P^x. Furthermore X(T_r) is uniformly distributed on S(x, r) under
P^x.
A rigorous proof of this result takes longer than might be expected, but
it will be given. To begin with, we identify each ω with the sample function
X(·, ω), the space Ω being the class of all continuous functions in E = R^d.
Let φ denote a rotation in E, and φω the point in Ω which is the function
X(·, φω) obtained by rotating each t-coordinate of X(·, ω), namely:

X(t, φω) = φX(t, ω).  (13)

Since φ preserves distance it is clear that

T_r(φω) = T_r(ω).  (14)

It follows from (VI) that X(T_r) ∈ S(X(0), r). Let X(0) = x, t ≥ 0 and A be
a Borel subset of S(x, r); put

Λ = {ω: T_r(ω) ≤ t; X(T_r(ω), ω) ∈ A}.  (15)

Then on general grounds we have

φ^{−1}Λ = {ω: T_r(φω) ≤ t; X(T_r(φω), φω) ∈ A}.

Using (13) and (14), we see that

φ^{−1}Λ = {ω: T_r(ω) ≤ t; X(T_r(ω), ω) ∈ φ^{−1}A}.  (16)

Observe the double usage of φ in φ^{−1}Λ and φ^{−1}A above. We now claim
that if x is the center of the rotation φ, then

P^x{φ^{−1}Λ} = P^x{Λ}.  (17)

Granting this and substituting from (15) and (16), we obtain

P^x{T_r ≤ t; X(T_r) ∈ φ^{−1}A} = P^x{T_r ≤ t; X(T_r) ∈ A}.  (18)

For fixed x and t, if we regard the left member in (18) as a measure in A,
then (18) asserts that it is invariant under each rotation φ. It is well known
that the unique probability measure having this property is the uniform
distribution on S(x, r) given by

σ(A)/σ(r),  A ∈ ℰ, A ⊂ S(x, r)  (19)

where σ is the Lebesgue (area) measure on S(x, r); and

σ(r) = σ(S(x, r)) = r^{d−1} σ(1).  (20)

Since the total mass of the measure in (18) is equal to P^x{T_r ≤ t}, it follows
that the left member of (18) is equal to this number multiplied by the number
in (19). This establishes the independence of T_r and X(T_r), since t and A
are arbitrary, as well as the asserted distribution of X(T_r).
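(XI) lends itself to a quick Monte Carlo illustration in d = 2. The sketch below (step size, path count and tolerance are ad hoc choices, not from the text) runs discretized paths from the origin until they leave the unit ball and projects the slight overshoot back onto the circle; by the rotational symmetry just established, the empirical mean of the exit points should be near the origin:

```python
import numpy as np

# Illustration of (XI) for d = 2: the exit place X(T_r) from the unit ball
# is uniform on the circle, so the mean of the exit points vanishes.
rng = np.random.default_rng(2)
n_paths, dt, r = 1500, 2e-3, 1.0
exits = []
for _ in range(n_paths):
    pos = np.zeros(2)
    while np.linalg.norm(pos) < r:
        pos = pos + rng.normal(0.0, np.sqrt(dt), size=2)
    exits.append(pos / np.linalg.norm(pos))   # project overshoot onto S(0, r)
mean_exit = np.mean(exits, axis=0)
print(np.linalg.norm(mean_exit))              # near 0 by rotational symmetry
```

A finer test would compare the empirical angle distribution with the uniform law (19); the vanishing mean is its first moment.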
Let us introduce also the following notation for later use:

v(r) = m(B(x, r)) = ∫_0^r σ(s) ds = (r^d/d) σ(1).  (21)

It remains to prove (17), which is better treated as a general proposition


(suggested by R. Durrett), as follows.

Proposition. Let {X_t} be a Markov process with transition function (P_t). Let
φ be a Borel mapping of E into E such that for all t ≥ 0, x ∈ E and A ∈ ℰ, we
have

P_t(x, φ^{−1}A) = P_t(φx, A).

Then the process {φ(X_t)} under P^x has the same finite-dimensional distribu-
tions as the process {X_t} under P^{φ(x)}.

Remark. Unless φ is one-to-one, {φ(X_t)} is not necessarily Markovian.

Proof. For each f ∈ bℰ, we have

∫ P_t(x, dy) f(φy) = ∫ P_t(φx, dy) f(y)

because this is true when f = 1_A by hypothesis, hence in general by the usual
approximation. We now prove by induction on l that for 0 ≤ t_1 < ⋯ < t_l
and f_j ∈ bℰ:

E^x{f_1(φX_{t_1}) ⋯ f_l(φX_{t_l})} = E^{φx}{f_1(X_{t_1}) ⋯ f_l(X_{t_l})}.  (22)

The left member of (22) is, by the Markov property and the induction
hypothesis, equal to

E^x{f_1(φX_{t_1}) E^{X_{t_1}}[f_2(φX_{t_2−t_1}) ⋯ f_l(φX_{t_l−t_{l−1}})]}
  = E^x{f_1(φX_{t_1}) E^{φX_{t_1}}[f_2(X_{t_2−t_1}) ⋯ f_l(X_{t_l−t_{l−1}})]}

which is equal to the right member of (22). □


The proposition implies (17) if Λ ∈ ℱ^0, or more generally if Λ ∈ ℱ~.

We conclude this section by discussing some questions of measurability.
After the pains we took in §2.4 about augmentation it is only fair to apply
the results to see why they are necessary, and worthwhile.
(XII) Let B be a (nearly) Borel set, f_1 and f_2 universally measurable,
bounded numerical functions on [0, ∞) and E respectively; then the functions

x → E^x{f_1(T_B)},  x → E^x{f_2(X(T_B))}  (23)

are both universally measurable, namely in ℰ~/ℬ where ℬ is the Borel
field on R^1.

That T_B ∈ ℱ~/ℬ is a consequence of Theorem 7 of §3.4; next, X(T_B) ∈
ℱ~/ℰ follows from the general result in Theorem 10 of §1.3. The assertions
of (XII) are then proved by Exercise 3 of §2.4. In particular the functions
in (23) are Lebesgue measurable. This will be needed in the following sections.
It turns out that if f_1 and f_2 are Borel measurable, then the functions
in (23) are Borel measurable; see Exercises 6 and 7 below.
To appreciate the problem of measurability let f be a bounded Lebesgue
measurable function from R^d to R^1, and {X_t} the Brownian motion in R^d.
Can we make f(X_t) measurable in some sense? There exist two bounded
Borel functions f_1 and f_2 such that f_1 ≤ f ≤ f_2 and m({f_1 ≠ f_2}) = 0. It
follows that for any finite measure μ on ℰ and t > 0, we have

E^μ{f_1(X_t)} = E^μ{f_2(X_t)}.

Thus by definition f(X_t) ∈ ⋀_μ ℱ^μ = ℱ~. Now let T_r be as in (10); then
under P^x, X(T_r) ∈ S(x, r) by (VI). But the Lebesgue measurability of f does
not guarantee its measurability with respect to the area measure σ on S(x, r),
when f is restricted to S(x, r). In particular if 0 ≤ f ≤ 1 we can alter the
f_1 and f_2 above to make f_1 = 0 and f_2 = 1 on S(x, r), so that E^x{f_2(X(T_r)) −
f_1(X(T_r))} = 1. It should now be clear how the universal measurability
of f is needed to overcome the difficulty in dealing with f(X(T_r)). Let us
observe also that for a general Borel set B the "surface" ∂B need not have
an area. Yet X(T_{∂B}) induces a measure on ∂B under each P^μ, and if f is
universally measurable E^μ{f(X(T_{∂B}))} may be defined.

Exercises
Unless otherwise stated, the process discussed in the problems is the
Brownian motion in R^d, and (P_t) is its semigroup.
1. If d ≥ 2, each point x has uncountably many fine neighborhoods none
   of which is contained in another. For d = 1, the fine topology coincides
   with the Euclidean topology.
2. For d = 2, each line segment is a recurrent set; and each point on it
is regular for it. For d = 3, each line is a polar set, whereas each point
of a nonempty open set on a plane is regular for the set. [Hint: we can
change coordinates to make a given line a coordinate axis.]
3. Let D be an open set and x ∈ D. Then P^x{T_{D^c} = T_{∂D}} = 1. Give an
   example where P^x{T_{D^c} < T_{(D̄)^c}} > 0. Prove that if at each point y on
   ∂D there exists a line segment yy' ⊂ D^c, then P^x{T_{D^c} = T_{(D̄)^c}} = 1 for
   every x ∈ D. [Hint: use (V) after a change of coordinates.]
4. Let B be a ball. Compute E^x{T_{B^c}} for all x ∈ E.
5. If B ∈ ℰ and t > 0, then P^x{T_B = t} = 0. [Hint: show ∫ P^x{T_B = s} dx = 0
   for all but a countable set of s.]

6. For any Hunt process with continuous paths, and any closed set C,
   we have T_C ∈ ℱ^0. Consequently x → P^x{T_C ≤ t} and x → E^x{f(X(T_C))}
   are in ℰ for each t ≥ 0 and f ∈ bℰ or ℰ_+. Extend this to any C ∈ ℰ
   by Proposition 10 of §3.5. [Hint: let C be closed and G_n ↓↓ C where
   G_n is open; then ⋂_n {∃t ∈ [a, b]: X(t) ∈ G_n} = {∃t ∈ [a, b]: X(t) ∈ C}.]
7. For any Hunt process satisfying Hypothesis (L), B ∈ ℰ, and f ∈ ℬ or f ∈ ℰ_+
   respectively, the functions x → E^x{f(T_B)} and x → E^x{f(X(T_B))} are
   both in ℰ. [Hint: by Exercise 3 of §3.5, x → E^x{e^{−αT_B}} is in ℰ for each
   α > 0; use the Stone-Weierstrass theorem to approximate any function
   in C_0([0, ∞)) by polynomials of e^{−x}. For the second function (say φ) we
   have αU^αφ → φ if f ∈ bℰ_+. This is due to Getoor.]


8. If f ∈ L^1(R^d) and t > 0, then P_t f is bounded continuous.
9. If f ∈ ℰ and P_t|f| < ∞ for every t > 0, then P_t f is continuous for every
   t > 0. [Hint: for ||x|| ≤ 1 we have |P_t f(x) − P_t f(o)| ≤ A[P_t|f|(o) +
   P_{4t}|f|(o)]. This is due to T. Liggett.]
10. If f ∈ bℰ, then lim_{t→∞} [P_t f(x) − P_t f(y)] = 0 for every x and y. [Hint:
    substitute x − z = √t u in ∫ |p_t(x, z) − p_t(y, z)| dz.]
11. In (X) if c is fixed and m(B) → 0, then the quantity in (8) converges to one.
12. Let X be the Brownian motion in R^1, a > 0 and B be a Borel set contained
    in (−∞, a]. Prove that for each t > 0:

    P^0{T_{[a,∞)} ≤ t; X(t) ∈ B} = P^0{X(t) ∈ 2a − B}

    and deduce that

    P^0{T_{[a,∞)} ≤ t} = 2P^0{X(t) > a}.

    This is André's reflection principle. A rigorous proof may be based on
    Theorem 3 of §2.3.
13. For Brownian motion in R^1, we define the "last exit from zero before
    time t" as follows:

    γ(t) = sup{s ≤ t: X(s) = 0}.

    Prove that for s ∈ (0, t) and x ∈ R^1, we have

    P^0{γ(t) ∈ ds} = ds / (π √(s(t − s))),

    P^0{γ(t) ∈ ds; X(t) ∈ dx} = (|x| / (2π √(s(t − s)³))) e^{−x²/2(t−s)} ds dx.

    [This gives a glimpse of the theory of excursions initiated by P. Lévy.
    See Chung [8] for many such explicit formulas. The literature is growing
    in this area.]
14. For each λ ∈ R^d, {exp(⟨λ, X_t⟩ − ||λ||²t/2), ℱ_t, P^x} is a martingale, where
    ⟨λ, x⟩ = Σ_{j=1}^d λ_j x_j.
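The reflection identity of Exercise 12 can be checked by simulation. The sketch below (all parameters are illustrative) compares the fraction of discretized paths whose running maximum reaches a by time t with 2P^0{X(t) > a} = 1 − erf(a/√(2t)); the time grid makes the simulated supremum a slight underestimate, so the tolerance is kept loose:

```python
import numpy as np
from math import erf, sqrt

# Check of Exercise 12: P^0{ sup_{s<=t} X(s) >= a } = 2 P^0{ X(t) > a }.
rng = np.random.default_rng(3)
a, t, n_paths, n_steps = 0.5, 1.0, 4000, 1000
dt = t / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
paths = np.cumsum(increments, axis=1)          # skeleton of X on the time grid
hit_prob = np.mean(paths.max(axis=1) >= a)     # approx. P^0{T_[a,inf) <= t}
exact = 1.0 - erf(a / sqrt(2.0 * t))           # = 2 P^0{X(t) > a}
print(hit_prob, exact)
```

Refining the grid shrinks the discretization bias, in line with the exact identity.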

4.3. Harmonic Function

A domain in R^d (= E) is an open and connected (nonempty!) set. Any open set
in R^d is the union of at most a countable number of disjoint domains, each
of which is called a component. It will soon be apparent that there is some
advantage in considering a domain instead of an open set. Let D be a domain.
We denote the hitting time of its complement by T_D, namely:

T_D = T_{D^c} = inf{t > 0: X(t) ∈ D^c}.  (1)

This is called the "(first) exit time" from D. The usual convention that T_D =
+∞ when X(t) ∈ D for all t > 0 is meaningful; but note that X(∞) is generally
not defined. By (X) of §4.2, we have P^x{T_D < ∞} = 1 for all x ∈ D if m(D) < ∞,
in particular if D is bounded.
The class of Borel measurable functions from R^d to [−∞, +∞] or [0, ∞]
will be denoted by ℰ or ℰ_+; with the prefix "b" when it is bounded. We say
f is locally integrable in D and write f ∈ L¹_loc(D) iff ∫_K |f| dm < ∞ for each
compact K ⊂ D, where m is the Lebesgue measure. We say A is "strictly
contained" in B and write A ⋐ B iff Ā ⊂ B.
Let f ∈ ℰ_+, and define a function h by

h(x) = E^x{f(X(T_D)); T_D < ∞}.  (2)

This has been denoted before by P_{T_D}f or P_{D^c}f. Although h is defined for all
x ∈ E, and is clearly equal to f(x) if x ∈ (D^c)° = (D̄)^c, we are mainly concerned
with it in D̄. It is important to note that h is universally measurable, hence
Lebesgue measurable by (XII) of §4.2. More specifically, it is Borel measurable
by Exercise 6 of §4.2 because D^c is closed and the paths are continuous.
Finally if f is defined (and Borel or universally measurable) only on ∂D and
we replace T_D by T_{∂D} in (2), the resulting function agrees with h in D (Exercise
1).
Recall the notation (11), (12) and (20) from §4.2.

Theorem 1. If h ≢ ∞ in a domain D, then h < ∞ in D. For any ball B(x, r) ⋐ D,
we have

h(x) = (1/σ(r)) ∫_{S(x,r)} h(y) σ(dy).  (3)

Furthermore, h is continuous in D.

Proof. Write B for B(x, r); then almost surely T_B < ∞; under P^x, T_B < T_D so
that T_D = T_B + T_D ∘ θ_{T_B}. Hence it follows from the fundamental balayage
formula (Proposition 1 of §3.4) that

P_{D^c}f = P_{B^c}(P_{D^c}f),  (4)

which is just

h(x) = E^x{h(X(T_B))}.  (5)

This in turn becomes (3) by the second assertion in (XI) of §4.2. Let us observe
that (4) and (5) are valid even when both members of the equations are equal
to +∞. This is seen by first replacing f by f ∧ n and then letting n → ∞ there.
A similar remark applies below. Replacing r by s in (3), then multiplying by
σ(s) and integrating, we obtain

h(x) ∫_0^r σ(s) ds = ∫_0^r ∫_{S(x,s)} h(y) σ(dy) ds = ∫_{B(x,r)} h(y) m(dy),  (6)

where in the last step a trivial use of polar coordinates is involved. Recalling
(21) of §4.2 we see that (6) may be written as

h(x) = (1/v(r)) ∫_{B(x,r)} h(y) m(dy),  (7)

whether h(x) is finite or not.


Now suppose h ≢ ∞ in D. Let h(x_0) < ∞, x_0 ∈ D; and B(x, ρ) ⊂ B(x_0, r) ⋐
D. It follows from (7), used twice, for x and for x_0, that

h(x) = (1/v(ρ)) ∫_{B(x,ρ)} h(y) m(dy) ≤ (1/v(ρ)) ∫_{B(x_0,r)} h(y) m(dy)
     = (v(r)/v(ρ)) h(x_0) < ∞.  (8)

Thus h < ∞ in B(x_0, r). This shows that the set F = D ∩ {x: h(x) < ∞} is
open. But it also shows that F is closed in D. For if x_n ∈ F and x_n → x_∞ ∈ D,
then x_∞ ∈ B(x_n, δ) ⋐ D for some δ > 0 and some large n, so that x_∞ ∈ F by
the argument above. Since D is connected and our hypothesis is that F is not
empty, we conclude that F = D. The relation (7) then holds for every x in D
with the left member finite, showing that h ∈ L¹_loc(D).
To prove that h is continuous in D, let r be so small that both B(x, r) and
B(x', r) are strictly contained in D. Applying (7) for x and x' we obtain

|h(x) − h(x')| ≤ (1/v(r)) ∫_C h(y) m(dy)  (9)

where C = B(x, r) △ B(x', r). If x is fixed and x' → x then it is obvious that
m(C) → 0, and the integral in (9) converges to zero by the proven integrability
of h. Thus h is continuous and Theorem 1 is proved. □

We have shown above that the "sphere-averaging" property given in (3)


entails the "ball-averaging" property given in (7). The converse is also true.

For we may suppose h ≢ ∞ in D; then h is finite continuous in D by the
proof above. Now write (7) as follows:

h(x) v(r) = ∫_0^r ∫_{S(x,s)} h(y) σ(dy) ds.

Since h is continuous its surface integral over S(x, s) is also continuous in s.
Differentiating the equation above with respect to r we obtain (3), since
v'(r) = σ(r). We now proceed to unravel the deeper properties of the func-
tion h.
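Both averaging properties are easy to verify numerically for a concrete harmonic function. The sketch below (the particular function, center and radius are merely illustrative) checks the sphere average (3) for h(x_1, x_2) = x_1² − x_2², which satisfies Δh = 0 in R²:

```python
import math

# Sphere-averaging check for the harmonic function h(x1, x2) = x1^2 - x2^2.
def h(x1, x2):
    return x1 * x1 - x2 * x2

def sphere_average(x1, x2, r, n=10_000):
    # (1/sigma(r)) * integral of h over S(x, r), by an equally spaced rule,
    # which is exact here since the integrand is a trigonometric polynomial.
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        total += h(x1 + r * math.cos(theta), x2 + r * math.sin(theta))
    return total / n

print(sphere_average(1.3, -0.4, 2.0))  # equals h(1.3, -0.4) = 1.53
```

Averaging the same quantity over radii, as in (6), reproduces the ball average (7) for this h as well.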

Definition 1. Let D be an open set. A function h is called harmonic in D iff

(a) it is finite continuous in D;
(b) the sphere-averaging property holds for any B(x, r) ⋐ D.

By the proof of Theorem 1, if D is a domain and h ∈ ℰ_+, h ≢ ∞ in D, then (b)
entails (a). Another easy consequence is as follows.

Corollary. Let D be a domain and f be a Borel measurable function on ∂D. If
P_{D^c}|f| ≢ ∞ in D, then P_{D^c}f is harmonic in D. In particular this is the case if f
is bounded on ∂D.

It is remarkable that there is another quite different characterization of
the class of harmonic functions. Recall that C^(k)(D) is the class of k-times
continuously differentiable functions in D;

Δ = Σ_{j=1}^d (∂/∂x_j)²

is the Laplacian in R^d.

Definition 2. A function h is harmonic in the open set D iff it belongs to C^(2)(D),
and satisfies in D the Laplace equation:

Δh = 0.  (10)

Theorem 2 (Gauss-Koebe). Definitions 1 and 2 are equivalent. Moreover a
harmonic function in D belongs to C^(∞)(D).

Proof. Let h be harmonic in D according to Definition 1. We show first that
h ∈ C^(∞)(D). For any δ > 0 there exists a function φ in C^(∞)(E) with the
following properties: φ(x) = φ(||x||); φ(x) > 0 for ||x|| < δ; φ(x) = 0 for
||x|| ≥ δ; and ∫_E φ(x) m(dx) = 1. See Exercise 6 for an example. We have then
by polar coordinates

∫_0^∞ φ(r) σ(r) dr = 1.  (11)

For each x in D, there exists δ > 0 such that h is bounded in B(x, δ) and (3)
holds for 0 < r < δ. It follows that

h(x) = ∫_0^∞ [(1/σ(r)) ∫_{S(x,r)} h(y) σ(dy)] φ(r) σ(r) dr

     = ∫_0^∞ ∫_{S(x,r)} h(y) φ(||x − y||) σ(dy) dr  (12)

     = ∫_E h(y) φ(||x − y||) m(dy).

Since φ has support in B(o, δ), and all partial derivatives of φ are bounded
continuous in B(o, δ), we may differentiate the last-written integral with
respect to x under the integral. Since φ(||x − y||) is infinitely differentiable
in x for each y, we see that h is infinitely differentiable.
To prove (10) let us recall Gauss's "divergence formula" from calculus, in
a simple situation. Let B be a ball and h be twice continuously differentiable
in a neighborhood of B̄; then the formula may be written as follows:

∫_B Δh(y) m(dy) = ∫_{∂B} (∂h/∂n)(y) σ(dy)  (13)

where ∂h/∂n is the outward normal derivative. Now let us denote the right
member of (3) by A(x, r). Making the change of variables y = x + rz we have

A(x, r) = (1/σ(1)) ∫_{S(o,1)} h(x + rz) σ(dz).  (14)

Straightforward differentiation gives for fixed x:

(d/dr) h(x + rz) = Σ_{j=1}^d z_j (∂h/∂x_j)(x + rz) = (∂h/∂n)(x + rz);  (15)

A'(x, r) = (1/σ(1)) ∫_{S(o,1)} (∂h/∂n)(x + rz) σ(dz)

         = (1/σ(r)) ∫_{S(x,r)} (∂h/∂n)(y) σ(dy)

         = (1/σ(r)) ∫_{B(x,r)} Δh(y) m(dy).  (16)

The differentiation of (14) under the integral is correct because h has bounded
continuous partial derivatives in a neighborhood of S(x, r). The first member
of (16) vanishes for all sufficiently small values of r by (3), hence so does the
integral in the last member (rid of the factor 1/σ(r)). This being true for all
sufficiently small values of r, we conclude Δh(x) = 0 by continuity. Thus h is
harmonic according to Definition 2. Conversely if that is the hypothesis,
then for any B(x, r) ⋐ D, the formula (16) is valid because h ∈ C^(2)(D). Since
Δh = 0 in D the last member of (16) vanishes, hence so does the first member.
Hence A(x, r) = lim_{r↓0} A(x, r), which is equal to h(x) by the continuity of h.
Therefore (3) is true and Theorem 2 is proved. □

Definition 2 allows a "local" characterization of harmonicity. The function
h is harmonic at x iff it belongs to C^(2) in a neighborhood of x and Δh(x) = 0.
It also yields an improvement of Definition 1, by retaining condition (a) there
but weakening condition (b) as follows:

(b') the sphere-averaging property holds for each x in D, and all sufficiently
small balls (strictly) contained in D with center at x.


Actually, even weaker conditions than (b') suffice, but these are technical
problems; see Rao [1], p. 49. We add two basic propositions about harmonic
functions below, both deducible from Definition 1 alone. The first is some-
times referred to as a "principle of maximum." However, there are some
other results in potential theory called by that name.

Proposition 3. Let h be harmonic in the domain D, and put

M = sup_{x∈D} h(x),  m = inf_{x∈D} h(x).

If there exists an x in D such that h(x) = M or h(x) = m, then h is a constant
in D.

Proof. Suppose h(x) = M. Then it follows at once from the ball-averaging
property (7) that h(y) = M first for a.e. y in a neighborhood of x, then for all
y there since h is continuous. Thus the set D ∩ {x: h(x) = M} is open in D;
it is also relatively closed in D by continuity. Hence h = M in D. The proof
for the infimum is exactly the same. □

Corollary. If h is harmonic in a bounded domain D, and continuous in D̄, then
the maximum and minimum values of h are taken on ∂D. In particular if h ≡ 0
on ∂D, then h ≡ 0 in D̄.

The next proposition is one of Harnack's theorems, which hark back to
Theorem 1. We relegate two other Harnack theorems to Exercises 8 and 9.

Proposition 4. Let D be a domain and {h_n, n ≥ 1} a sequence of harmonic
functions in D. Suppose h_n increases to h_∞ in D. Then either h_∞ ≡ +∞ in D,
or h_∞ is harmonic in D.

Proof. Even without the use of Definition 2, it is easy to see that we need only
prove that h_∞ is harmonic in each bounded subdomain D_1 ⋐ D. On D_1,
h_1 ≥ m > −∞ by continuity. Let h̃_n = h_n − m; then h̃_n increases to h̃_∞ =
h_∞ − m ≥ 0. Each h̃_n has the sphere-averaging property in D_1, hence so does
h̃_∞ by monotone convergence. Therefore by the proof of Theorem 1 from
(6) on, either h̃_∞ ≡ +∞ in D_1, or h̃_∞ is finite continuous, hence harmonic
in D_1. □

Thanks to Definition 2, it is trivial to find explicit harmonic functions. It
is well known that the real part of a complex analytic function is harmonic
(in R²), see e.g. Ahlfors [1]. Indeed the two characterizations have their
analogues in the Cauchy integral formula and holomorphy, respectively. It
is obvious from Definition 2 that in R^1, harmonicity is tantamount to
linearity.

EXAMPLE. In R², log||x|| is harmonic in any open set not containing 0. In
R^d, d ≥ 3, 1/||x||^{d−2} is harmonic in any open set not containing 0. Let K be
a compact set, μ a measure such that μ(K) < ∞; then

∫_K log||x − y|| μ(dy),  ∫_K ||x − y||^{2−d} μ(dy)  (17)

are harmonic functions of x in R² − K and R^d − K respectively.


The first two assertions are standard exereises in ealeulus upon using
Definition 2. The last assertion then follows either by differentiation under
the integrals, or perhaps better by integration using Definition 1 and Fubini's
theorem. Now eonsider the partieular ease of (17) when f.1 is the uniform
distribution on the sphere S(o, r). If Ilxll > r, then z ~ logllzll is harmonie in
a neighborhood of B(x, r); henee the sphere-averaging property yields

logllxll = _(1)
0" r
r
JS(x,r)
10gllzII0"(dz) =~) r logllx -
O"(r JS(o,r)
yIIO"(dy);

similarly for the seeond integral in (17). However, to evaluate the integrals
when Ilxll < r we need the following proposition.

Proposition 5. If h is harmonic and is a function of ||x|| alone, then it must be
of the form

c_1 log||x|| + c_2,  or  c_1 ||x||^{2−d} + c_2,  (18)

respectively in R² and R^d, d ≥ 3; where c_1 and c_2 are constants.

Proof. Writing ρ for ||x||, we see that the Laplace equation for h reduces to

(d²/dρ²) h + ((d − 1)/ρ)(d/dρ) h = 0.  (19)

Solving this simple differential equation we obtain the general solutions
given in (18). These are called fundamental radial solutions of the Laplace
equation. □

Proposition 6. We have for any r > 0:

(1/σ(r)) ∫_{S(o,r)} log||x − y|| σ(dy) = log(||x|| ∨ r),  d = 2;  (20)

(1/σ(r)) ∫_{S(o,r)} ||x − y||^{2−d} σ(dy) = (||x|| ∨ r)^{2−d},  d ≥ 3.  (21)

Proof. Denote the left member of (20) by f(x). It is harmonic in B(o, r) and
is a function of ||x|| alone upon a geometric inspection. Hence by Proposition
5, f(x) = a log||x|| + b. For x = o we have f(o) = log r by inspection; hence
a = 0 and b = log r. It follows that f(x) = log r for ||x|| < r. Similarly, f(x) =
a' log||x|| + b' for ||x|| > r. It is easily seen that lim_{||x||→∞} [f(x) − log||x||] =
0; hence a' = 1 and b' = 0. Thus f(x) = log||x|| for ||x|| > r. We have already
observed this above. It remains to show that f is finite continuous at ||x|| = r.
This is a good exercise in calculus, not quite trivial and left to the reader.
The evaluation of (21) is similar. □

The exact calculation of formulas like (20) and (21) forms a vital part of
classical potential theory, and is the source of many interesting results.
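Formula (20) can be confirmed by direct numerical quadrature. The sketch below (sample points and test values are arbitrary) exploits rotational symmetry to place x on the first axis and averages log||x − y|| over the circle S(o, r) in R²:

```python
import math

# Numerical check of (20): the average of log||x - y|| over S(o, r)
# equals log(max(||x||, r)), both for x inside and outside the circle.
def log_average(x_norm, r, n=20_000):
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * (k + 0.5) / n
        # squared distance from (x_norm, 0) to r*(cos theta, sin theta)
        d2 = x_norm * x_norm + r * r - 2.0 * x_norm * r * math.cos(theta)
        total += 0.5 * math.log(d2)            # log of the distance itself
    return total / n

print(log_average(0.3, 1.0))   # inside:  log r = 0
print(log_average(2.5, 1.0))   # outside: log ||x|| = log 2.5
```

The integrand is a smooth periodic function of θ, so the equally spaced rule converges very rapidly; Exercise 10 below evaluates the same integral in closed form.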

Exercises
1. For any open set D show that P^x{X(T_D) ∈ ∂D} = 1 for all x ∈ D. However,
   if x ∈ ∂D it is possible that P^x{T_D < T_{∂D}} = 1. An example is known as
   Lebesgue's thorn, see Port and Stone [1], p. 68.
2. Show that if h is harmonic then so are all its partial derivatives of all
   orders. Verify the harmonicity of the following function in R³:

   (2x_1² − x_2² − x_3²)/||x||⁵.
3. Let h be harmonic in R^d, d ≥ 1, such that P_t|h| < ∞ for all t > 0. Then
   h = P_t h for all t ≥ 0, where (P_t) is the semigroup of the Brownian motion.
   The process {h(X_t), ℱ_t^0; t ≥ 0} is a continuous martingale. [Hint: use
   polar coordinates and the sphere-averaging property; a by-product is
   the value of v(1).]
4. Let h ∈ ℰ, 0 ≤ h < ∞ and h = P_1 h. Then h is a constant. [Hint: if h ≢ ∞
   then h is locally integrable; now compare the values of P_n h at two points
   as n → ∞, by elementary analysis. This solution is due to Hsu Pei.]

5. (Picard's Theorem.) If h is harmonic and ≥ 0 in R^d, then it is a constant.
   [Hint: there is a short proof by ball-averaging. A longer proof uses
   martingales and Exercise 4 of §4.1 to show that lim_{t→∞} h(X_t) is a constant
   a.s.]
6. Let φ_1(x) = c_1 exp((||x||² − 1)^{−1}) for ||x|| < 1, φ_1(x) = 0 for ||x|| ≥ 1, where
   c_1 is so chosen that ∫_{R^d} φ_1(x) dx = 1. Put φ_δ(x) = φ_1(x/δ)(1/δ^d). Show that
   φ_δ has all the properties required of φ in the proof of Theorem 2.
7. Let D be a domain and h harmonic in D. If D is unbounded we adjoin a point at infinity ∞ to ∂D, and say that x → ∞ when ||x|| → ∞. Suppose now that for every z ∈ ∂D ∪ {∞}:

   lim sup_{D∋x→z} h(x) ≤ M.  (22)

   Then h(x) ≤ M for all x ∈ D. Give an example in which the conclusion becomes false if {∞} is omitted in (22).
8. Let {h_α} be a family of harmonic functions in an open set D. Suppose that the family is uniformly bounded on each compact subset of D. Then from any sequence from the family we can extract a subsequence which converges to a harmonic function in D, and the convergence is uniform on each compact subset of D. [Hint: prove first the equi-continuity of the family on each compact subset by using (9), then apply the Ascoli-Arzelà theorem.]
9. Let D be a domain. For each compact K ⊂ D there exists a constant c(K) such that for any harmonic function h > 0 in D we have for any x₁ ∈ K, x₂ ∈ K:

   h(x₁)/h(x₂) ≤ c(K).

   [Hint: the classical proof uses inequalities which are derived from the Poisson representation for a harmonic function in a ball (see §4.4). But a proof follows from (8), followed by the usual "chain argument".]
10. Let a > 0, b > 0. Evaluate the following integral by means of Proposition 6:

    ∫₀^{2π} log(a² + b² − 2ab cos θ) dθ.

    Compare the method with contour integration in complex variables.


11. Prove the continuity of the left members of (20) and (21) as functions of x, as asserted in Proposition 6.
162 4. Brownian Motion

12. Compute

    ∫_{B(o,r)} log||x − y|| m(dy),  ∫_{B(o,r)} ||x − y||^{2−d} m(dy)

    in R² and R^d, d ≥ 3, respectively. [Answer for the second integral:

    σ(1)r^d / (d||x||^{d−2}),  if ||x|| ≥ r;

    σ(1)r²/2 + σ(1)||x||²(1/d − 1/2),  if ||x|| ≤ r.]
13. (Gauss) In R^d, d ≥ 3, let μ be a measure supported by K ⊂ B(o, r); then

    μ(K) = (r^{d−2}/σ_d(r)) ∫_{S(o,r)} Uμ(y) σ(dy).

    Here σ_d(r) = σ(r) in (20) of §4.2.
14. Extend the result in Exercise 13 to R², using U* (given in (14) of §4.6 below) for U.
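For the integral in Exercise 10, Proposition 6 gives the value at once: since log(a² + b² − 2ab cos θ) = 2 log|a − be^{iθ}|, the integral equals 4π log max(a, b). A quick numerical check of this classical fact (a Python sketch; the function name is ours, not the book's):

```python
import math

def log_kernel_integral(a, b, n=4096):
    """Midpoint rule for the 2*pi-periodic integrand of Exercise 10."""
    h = 2 * math.pi / n
    return h * sum(
        math.log(a * a + b * b - 2 * a * b * math.cos((k + 0.5) * h))
        for k in range(n)
    )

i1 = log_kernel_integral(1.0, 2.0)          # a < b
i2 = log_kernel_integral(3.0, 0.5)          # a > b
expected1 = 4 * math.pi * math.log(2.0)     # 4*pi*log(max(1, 2))
expected2 = 4 * math.pi * math.log(3.0)     # 4*pi*log(max(3, 0.5))
```

The midpoint rule converges very fast here because the integrand is smooth and periodic when a ≠ b.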

4.4. Dirichlet Problem

Let D be an open set and f ∈ bℰ on ∂D, and consider the function h defined in D̄ by (2) of §4.3. Changing the notation, we shall denote h by H_D f; thus

H_D f(x) = E^x{τ_D < ∞; f(X(τ_D))},  x ∈ D̄.  (1)

This formula defines a measure H_D(x, ·) on the boundary ∂D, called the harmonic measure (of x with respect to D). For each Borel subset A of ∂D, the function H_D(·, A) is harmonic in D by the Corollary to Theorem 1 of §4.3. We may regard the measure H_D(x, ·) as defined for all Borel sets, though it is supported by ∂D. The measure H_D(x, ·) is also defined for x ∈ D^c and is harmonic in (D̄)^c = (D^c)°. For example, if ∂D is the horizontal axis in R², H_D f is a harmonic function both in the upper and lower open half-planes, but not necessarily in the whole plane.

We now study the behavior of h(x) as x approaches the boundary. Since h is defined only in D̄, we shall not repeat the restriction that x ∈ D̄ below. Recall that a point z on ∂D is regular or irregular for D^c according as P^z{T_{D^c} = 0} = 1 or 0, by Blumenthal's zero-or-one law. It turns out that this is the same as its being a "regular boundary point of D" in the language of the non-probabilistic treatment of the Dirichlet problem. Since z may be regular for D^c but irregular for D, the difference of terminology should be borne in mind. For instance, it is easy to construct a domain with a boundary point z which is irregular for D^c, but not so easy for z to be irregular for D.

Proposition 1. For each t > 0, the function

x → P^x{τ_D < t}

is lower semi-continuous in E.

Proof. Since D is open and X is continuous, we have {τ_D = t} ⊂ {X(t) ∈ D^c}. It follows that

{τ_D > t} = {∀s ∈ (0, t]: X(s) ∈ D} = ⋂_{n≥1} {∀s ∈ [t/n, t]: X(s) ∈ D}.

The set above is in ℱ_t^0, and

P^x{τ_D > t} = lim_n ↓ P^x{∀s ∈ [t/n, t]: X(s) ∈ D}.  (2)

The probability on the right side of (2) is of the form P_{t/n}φ(x), where φ ∈ bℰ. Hence by the strong Feller property ((11) of §4.2), x → P_{t/n}φ(x) is bounded continuous. Therefore the left member of (2) is upper semi-continuous in x, which is equivalent to the proposition. □

Note that on account of Exercise 5 of §4.2, P^x{τ_D = t} = 0, but we do not need this.

Theorem 2. If z on ∂D is regular for D^c, and if f is continuous at z, then we have

lim_{D∋x→z} h(x) = f(z).  (3)

Proof. We have by Proposition 1, for each ε > 0:

lim inf_{x→z} P^x{τ_D < ε} ≥ P^z{τ_D < ε} = 1.  (4)

Hence the corresponding limit exists and is equal to one. Note that here x need not be restricted to D̄. Let T_r be as in (10) of §4.2. For r > 0 and ε > 0, P^x{T_r > ε} does not depend on x, and for any fixed r we have

lim_{ε↓0} P^0{T_r > ε} = 1,  (5)

because T_r > 0 almost surely. Now for any x, an elementary inequality gives

P^x{τ_D < T_r} ≥ P^x{τ_D < ε} − P^x{T_r ≤ ε}.  (6)

Consequently we have by (4)

lim inf_{x→z} P^x{τ_D < T_r} ≥ 1 − P^0{T_r ≤ ε},  (7)

and so by (5):

lim_{x→z} P^x{τ_D < T_r} = 1.  (8)

On the set {τ_D < T_r}, we have ||X(τ_D) − X(0)|| < r and so ||X(τ_D) − z|| < ||X(0) − z|| + r. Since f is continuous at z, given δ > 0, there exists η such that if y ∈ ∂D and ||y − z|| < 2η, then |f(y) − f(z)| < δ. Now let x ∈ D̄ and ||x − z|| < η; put r = η. Then under P^x, X(τ_D) ∈ ∂D (Exercise 1 of §4.3) and ||X(τ_D) − z|| < 2η on {τ_D < T_η}; hence |f(X(τ_D)) − f(z)| < δ. Thus we have

E^x{|f(X(τ_D)) − f(z)|; τ_D < ∞} ≤ δP^x{τ_D < T_η} + 2||f|| P^x{T_η ≤ τ_D < ∞}.  (9)

When x → z, the last term above converges to zero by (8) with η for r. Since δ is arbitrary, we have proved that the left member of (9) converges to zero. As a consequence,

lim_{D∋x→z} h(x) = f(z) lim_{x→z} P^x{τ_D < ∞} = f(z)

by (4). □

Let us supplement this result at once by showing that the condition of regularity of z cannot be omitted for the validity of (3) in R^d, d ≥ 2. Note: in R¹ every z ∈ ∂D is regular for {z} ⊂ D^c.

Theorem 3. Suppose d ≥ 2 and z is irregular for D^c. Then there exists f ∈ bC on ∂D such that

lim inf_{D∋x→z} h(x) < f(z).  (10)

Proof. Indeed (10) is true for any f ∈ bC(∂D) such that f(z) = 1 and f < 1 on ∂D − {z}. An example of such a function is given by (1 − ||x − z||) ∨ 0. Since {z} is polar by (IV) of §4.2, and z is irregular for D^c, we have P^z{X(τ_D) = z} = 0. This implies the first inequality below:

1 > E^z{f(X(τ_D)); τ_D < ∞}
  = lim_{r↓0} E^z{T_r < τ_D; h(X(T_r))}
  ≥ lim inf_{r↓0} P^z{T_r < τ_D} · inf_{x ∈ B(z,r)∩D} h(x),

where the equation follows from lim_{r↓0} P^z{T_r < τ_D} = 1, and the strong Markov property applied at T_r on {T_r < τ_D}. The last member above is equal to the left member of (10). □

We proceed to give a sufficient condition for z to be regular for D^c. This is a sharper form of a criterion known as Zaremba's "cone condition". We shall replace his solid cone by a flat one. Let us first observe that if z ∈ ∂D and B is any ball with z as center, then z is regular for D^c if and only if it is regular for (D ∩ B)^c. In this sense regularity is a local property. A "flat cone" in R^d, d ≥ 2, is a cone in a hyperplane of dimension d − 1. It is "truncated" if all but a portion near its vertex has been cut off by a hyperplane perpendicular to its axis. Thus in R², a truncated flat cone reduces to a (short) line segment; in R³, to a (narrow) fan.

Theorem 4. The boundary point z is regular for D^c if there is a truncated flat cone with vertex at z and lying entirely in D^c.

Proof. We may suppose that z is the origin o and the flat cone lies in the hyperplane of the first d − 1 coordinate variables. In the following argument all paths issue from o. Let

T_n = inf{t > 1/n | X_d(t) = 0}.

Since 0 is regular for {0} in R¹ by (V) of §4.2, we have lim_{n→∞} T_n = 0, P^o-a.s. Since X_d(·) is independent of Y(·) = {X₁(·), ..., X_{d−1}(·)}, T_n is independent of Y(·). It follows (proof?) that Y(T_n) has a (d − 1)-dimensional distribution which is seen to be rotationally symmetric. Let C be a flat cone with vertex o, and C₀ its truncation which lies in D^c. Then rotational symmetry implies that P^o{Y(T_n) ∈ C} = θ, where θ is the ratio of the "angle" of the flat cone to a "full angle". Since lim_{n→∞} Y(T_n) = o by continuity, it follows that lim_{n→∞} P^o{Y(T_n) ∈ C₀} = θ. Since C₀ ⊂ D^c, this implies

P^o{τ_D = 0} = lim_{n→∞} P^o{τ_D ≤ T_n} ≥ θ > 0.

Therefore, o is regular for D^c by the zero-or-one law. □

Let us call a domain D regular when every point on ∂D is regular for D^c. As examples of Theorem 4, in R² an open disk with a radius deleted is regular; similarly in R³ an open ball minus a sector of its intersection with a plane is regular; see Figure (p. 165). Of course, a ball or a cube is regular; for a simpler proof, see Exercise 1.
The most classical form of the Dirichlet boundary value problem may be stated as follows. Given an open set D and a bounded continuous function f on ∂D, to find a function which is harmonic in D, continuous in D̄, and equal to f on ∂D. We shall refer to this problem as (D, f). The Corollary to Theorem 1 of §4.3, and Theorem 2 above taken together establish that the function H_D f is a solution to the problem, provided that D is regular as just defined. When B is a ball, for instance, this is called the "interior Dirichlet problem" if D = B, and the "exterior Dirichlet problem" if D = (B̄)^c. The necessity of regularity was actually discovered long after the problem was posed. If there are irregular boundary points, the problem may have no solution. The simplest example is as follows. Let D = B(o, 1) − {o} in R², namely the punctured unit disk; and define f = 1 on ∂B(o, 1), f(o) = 0. The function H_D f reduces in this case to the constant one (why?), which does not satisfy the boundary condition at o. But is there a true solution to the Dirichlet problem? Call this h₁; then h₁ is harmonic in D, continuous in D̄ and equal to 1 and 0 respectively on ∂B(o, 1) and at o. Let φ be an arbitrary rotation about the origin, and put h₂(x) = h₁(φx) for x ∈ D. Whether by Definition 1 or 2 of §4.3, it is easy to see that h₂ is harmonic in D and satisfies the same boundary condition as h₁. Hence h₁ is identical with h₂ (see Proposition 5 below). In other words, we have shown that h₁ is rotationally invariant. Therefore by Proposition 5 of §4.3, it must be of the form c₁ log||x|| + c₂ in D. The boundary condition implies that c₁ = 0, c₂ = 1. Thus h₁ is identically equal to one in D. It cannot converge to zero at the origin. [The preceding argument is due to Ruth Williams.] It is obvious that o is an irregular boundary point of D, but one may quibble about its being isolated, so that the continuity of f on ∂D is a fiction. However, we shall soon discuss a more general kind of example by the methods of probability to show where the trouble lies. Let us state first two basic results in the simplest case, the first of which was already used in the above.

Proposition 5. If D is bounded open and regular, then H_D f is the unique solution to the original Dirichlet problem (D, f).

Proof. We know that h = H_D f is harmonic in D. Since all points of ∂D are regular, and f is continuous on ∂D, h is continuous in D̄ by Theorem 2. Let h₁ be harmonic in D, continuous in D̄, and equal to f on ∂D. Then h − h₁ is harmonic in D, continuous in D̄, and equal to zero on ∂D. Applying the Corollary to Proposition 3 of §4.3, we obtain h − h₁ = 0 in D. □

The following result is to be carefully distinguished from Proposition 5.

Proposition 6. Let D be a bounded open set. Suppose that h is harmonic in D and continuous in D̄. Then

h(x) = H_D h(x),  ∀x ∈ D̄.

Proof. There exist regular open sets D_n such that D_n ↑ D. In fact each D_n may be taken to be the union of cells of a grid on R^d, seen to be regular by the cone condition. We have h = H_{D_n} h in D̄_n by Proposition 5. Let first x ∈ D; then there exists m such that x ∈ D_n for n ≥ m; hence

h(x) = E^x{h(X(τ_{D_n}))}.

Letting n → ∞, we have τ_{D_n} ↑ τ_D P^x-a.s. by (VII) of §4.2. Hence the right member above converges to H_D h(x) by bounded convergence. Next if x ∈ ∂D and x is regular for D^c, then H_D h(x) = h(x) trivially. Finally if x ∈ ∂D and x is not regular for D^c, then P^x{τ_D > 0} = 1. We have for each t > 0:

E^x{t < τ_D; h(X(τ_D))} = E^x{t < τ_D; E^{X(t)}[h(X(τ_D))]} = E^x{t < τ_D; h(X(t))}

because X(t) ∈ D on {t < τ_D}, so the second equation follows from what we have already proved. As t ↓ 0, the first term above converges to H_D h(x), the third term to E^x{0 < τ_D; h(X(0))} = h(x), by bounded convergence. □
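Proposition 6 can be illustrated numerically on a ball, where H_B(x, ·) is the Poisson kernel computed in Example 2 below: for the harmonic function h(x) = x₁ on the unit disk in R², the kernel average over the circle should reproduce x₁ exactly, and the total harmonic measure should be 1. A sketch (the function and variable names are ours):

```python
import math

def poisson_average(x1, x2, f, n=4096):
    """H_B f(x) for the unit disk in R^2 via the Poisson kernel:
    (1/(2*pi)) * integral over theta of (1 - |x|^2)/|x - z|^2 * f(z)."""
    rho2 = x1 * x1 + x2 * x2
    total = 0.0
    for k in range(n):
        t = 2 * math.pi * (k + 0.5) / n
        z1, z2 = math.cos(t), math.sin(t)
        dist2 = (x1 - z1) ** 2 + (x2 - z2) ** 2
        total += (1 - rho2) / dist2 * f(z1, z2)
    return total / n

h_at_x = poisson_average(0.3, 0.2, lambda z1, z2: z1)  # should be x_1 = 0.3
mass = poisson_average(0.3, 0.2, lambda z1, z2: 1.0)   # should be 1
```

Since the integrand is smooth and periodic, the trapezoid-type sum above is accurate to machine precision for ||x|| well inside the disk.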

We are now ready to discuss a general kind of unsolvable Dirichlet problem. Let D₀ be a bounded domain in R^d, d ≥ 2, and S a compact polar set contained in D₀. For instance, if d = 2, S may be any countable compact set; if d = 3, S may be a countable collection of linear segments. Let D = D₀ − S, and define f = 1 on ∂D₀, f = 0 on S. Note that ∂D = (∂D₀) ∪ S since a polar set cannot have interior points. There exists a sequence of open neighborhoods B_n of S, decreasing to S, such that B̄_n ⊂ D₀. Let D_n = D₀ − B̄_n. Suppose that h is harmonic in D, continuous in D̄, and equal to one on ∂D₀. Nothing is said about its value on S. Since h is harmonic in D_n and continuous in D̄_n, we have by Proposition 6 for x ∈ D_n:

h(x) = E^x{h(X(τ_{D_n}))}
     = P^x{T_{∂D₀} < T_{∂B_n}} + E^x{T_{∂B_n} < T_{∂D₀}; h(X(T_{∂B_n}))}.  (11)

Since S is polar, we have

lim_n P^x{T_{∂B_n} < T_{∂D₀}} = P^x{T_S < T_{∂D₀}} = 0.  (12)

Since h is bounded in D̄, it follows that the last term in (11) converges to zero as n → ∞. Consequently we obtain

h(x) = lim_n P^x{T_{∂D₀} < T_{∂B_n}} = P^x{T_{∂D₀} < ∞} = 1.  (13)

This being true for x in D_n for every n, we have proved that h ≡ 1 in D. Thus the Dirichlet problem (D, f) has no solution.

More generally, let f be continuous on ∂D₀, and arbitrary on S. Then H_D f is harmonic in D, and the argument above shows that H_{D₀} f is the unique continuous extension of H_D f to D₀ (which is harmonic in D₀ to boot). In particular, the Dirichlet problem (D, f) is solvable only if f = H_{D₀} f on S. In the example above we constructed an irregular part of ∂D by using the polar set S. We shall see in §4.5 below that for any open D, the set of irregular points of ∂D is always a polar set. This deep result suggests a reformulation of the Dirichlet problem. But let us consider some simple examples first.

EXAMPLE 1. In R^d, d ≥ 3, let B = B(o, b). Consider the function P_B 1(x) for x ∈ B^c. This function is harmonic in B^c and is obviously a function of ||x|| alone. Hence by Proposition 5 of §4.3, it is of the form c₁||x||^{2−d} + c₂. The following equation illustrates a useful technique and should be argued out directly:

∀x ∈ B^c: U(x, B) = P^x{T_B < ∞} U(y, B)  (14)

where y is any fixed point on ∂B. But actually this is just the identity U1_B = P_B U1_B contained in Theorem 3(b) of §3.4, followed by the observation that U(y, B) is a constant for y ∈ ∂B by rotational symmetry. Now the left member of (14), apart from a constant, is equal to

∫_B dy/||x − y||^{d−2},

which converges to zero as ||x|| → ∞. Hence P_B 1(x) → 0 as ||x|| → ∞, and so c₂ = 0. Since P_B 1 = 1 on ∂B, c₁ = b^{d−2}. Thus

P_B 1(x) = (b/||x||)^{d−2},  ||x|| ≥ b.  (15)

Next, let 0 < a < b < ∞ and

D = {x | a < ||x|| < b}.  (16)

Consider the Dirichlet problem (D, f), where f is equal to one on S(o, a), zero on S(o, b). The probabilistic solution given by H_D f signifies the probability that the path hits S(o, a) before S(o, b). It is the unique solution by Proposition 5. Since it is a function of ||x|| alone it is of the form c₁||x||^{2−d} + c₂. The constants c₁ and c₂ are easily determined by its values for ||x|| = a and ||x|| = b. The result is recorded below:

P^x{T_{S(o,a)} < T_{S(o,b)}} = (||x||^{2−d} − b^{2−d})/(a^{2−d} − b^{2−d}),  a ≤ ||x|| ≤ b.  (17)

If h j + CXJ then TS(o.b) j 00 (why?), and we get (15) again with h replaeed
bya.
Exaetly the same method yields for d = 2;

Xf _ logllxll - log h
P l TS(o,a) < TS(o,b)} - log a - log h ' a ~ Ilxll ~ b. (18)

Ifwe let b j + 00, this time the limit ofthe above probability is 1, as it should
be on aeeount of reeurrenee. On the other hand, if we fix band let a 1 0 in
(18), the result is PX{ T{O} < TS(o,b)} = 0 (why?). Now let b j'X! to eoncludethat
P{o}l(x) = 0 for x#- o. Sinee Pt(x, {o}) = 0 for any x in E and t > 0, it follows
that P tP{o)l(x) = 0; henee P{o}l(x) = lim tlO P tP{o}l(x) = O. Thus {v} is apolar
set. This is the seeond proof ofa fundamental result reviewed in (IV) of§4.2.
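Formulas (17) and (18) can be checked by simulation without discretizing time: by the rotational symmetry used throughout this section, the Brownian exit point of any ball centered at the current point is uniform on the boundary sphere, so one may jump from sphere to sphere ("walk on spheres") until the path is within ε of S(o, a) or S(o, b). The sketch below (all names ours; the value of ε and the trial count are arbitrary choices) estimates the hitting probabilities:

```python
import math
import random

random.seed(7)

def unit_vector(d):
    """Uniform random direction in R^d (normalized Gaussian vector)."""
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm > 1e-12:
            return [c / norm for c in v]

def hits_inner_first(x, a, b, eps=1e-3):
    """Walk on spheres in {a < |y| < b}: jump to a uniform point on the
    largest sphere centered at the current point that fits in the annulus;
    stop when within eps of either boundary sphere."""
    y = list(x)
    while True:
        r = math.sqrt(sum(c * c for c in y))
        if r - a < eps:
            return True      # absorbed near S(o, a)
        if b - r < eps:
            return False     # absorbed near S(o, b)
        step = min(r - a, b - r)
        u = unit_vector(len(y))
        y = [c + step * uc for c, uc in zip(y, u)]

def estimate(x, a, b, trials=10000):
    return sum(hits_inner_first(x, a, b) for _ in range(trials)) / trials

# d = 3: (17) with a = 1, b = 2, |x| = 1.5 gives (1/1.5 - 1/2)/(1 - 1/2) = 1/3.
p3 = estimate([1.5, 0.0, 0.0], 1.0, 2.0)
expected3 = (1 / 1.5 - 1 / 2) / (1 / 1.0 - 1 / 2)
# d = 2: (18) with a = 0.01, b = 1, |x| = 0.5 gives log 0.5 / log 0.01.
p2 = estimate([0.5, 0.0], 0.01, 1.0)
expected2 = math.log(0.5) / math.log(0.01)
```

The estimates agree with (17) and (18) up to Monte Carlo error and a small absorption bias of order ε.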

EXAMPLE 2. The harmonic measure H_B(x, ·) for a ball B = B(o, r) can be explicitly computed. It is known as Poisson's formula and is one of the key formulas in the analytical developments of potential theory. In R² it is an analogue of, and can be derived from, the Cauchy integral formula for an analytic function in B:

f(x) = (1/2πi) ∫_{S(o,r)} f(z)/(z − x) dz  (19)

where the integral is taken counterclockwise in the sense of complex integration. We leave this as an exercise and proceed to derive the analogue in R^d, d ≥ 3. The method below works also in R².

For fixed z we know that ||x − z||^{2−d} is harmonic at all x except z. Hence so are its partial derivatives, and therefore also the following linear combination:

Σ_{j=1}^d (2z_j/(d − 2)) ∂/∂x_j [1/||x − z||^{d−2}] − 1/||x − z||^{d−2} = (||z||² − ||x||²)/||x − z||^d.

Now consider the integral:

∫_{S(o,r)} (r² − ||x||²)/||x − z||^d σ(dz).

Then this is also harmonic for x ∈ B(o, r) (why?). Inspection shows that it is a function of ||x|| alone, hence it is of the form c₁||x||^{2−d} + c₂. Since its value at x = o is equal to r^{2−d}σ(r), we must have c₁ = 0 and c₂ = r^{2−d}σ(r) = rσ(1). Recall the value of σ(1) from (20) of §4.2; it depends on the dimension d and will be denoted by σ_d(1) in this example. Now let f be any continuous function on S(o, r) and put

I(x, f) = (1/(rσ_d(1))) ∫_{S(o,r)} (r² − ||x||²)/||x − z||^d f(z) σ(dz).  (20)

This is the Poisson integral off for B(o, r). We have just shown that lex, 1) = 1
for XE B(o, r). We will now prove that l(x,f) = H B(X,f) for all fE q8B),
namely that the measure HB(x,') is given explieitly by lex, A) for all A E @,
Ac aB.
The argument below is general and will be so presented. Consider

1 r2 -llxl12
hex, z) = mAl) Ilx _ zlld ' Ilxll < r, Ilzll = r,
then h > 0 where it is defined, and SS(o,r) hex, z)(J(dz) = 1. Furthermore if
Ilyll = r, y =F z, then limx~z hex, y) = 0 boundedly for y outside any neighbor-
hood of z.1t follows easily that the probability measures lex, dz) = hex, z)(J(dz)
eonverge vaguely to the unit mass at z as x --+ z. Namely for any f E qaB),
l(x,f) eonverges to fez) as x --+ z. Sinee l(',f) is harmonie in B(o, r), l(',f) is
a solution to the Diriehlet problem (D,f). Henee it must eoineide with
HBf = HB(-,f) by Proposition 5.
Let us remark that H B(X, .) is the distribution of the "exit plaee" of the
Brownian motion path from the ball B(o, r), if it starts at x in the ball. This
is a simple-so unding problem in so-ealled "geometrie probability". An
expliet formula for the joint distribution of TB and X(TB) is given in Wendel
[1 ].
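The normalization ∫_{S(o,r)} h(x, z)σ(dz) = 1 can be verified numerically in R³, where σ₃(1) = 4π: taking r = 1 and x = (0, 0, ρ), the sphere integral reduces in the polar angle (our reduction, with u = cos θ) to ((1 − ρ²)/2) ∫_{−1}^{1} (1 + ρ² − 2ρu)^{−3/2} du, which should equal 1 for every 0 ≤ ρ < 1. A sketch (names ours):

```python
import math

def poisson_total_mass(rho, n=200000):
    """((1 - rho^2)/2) * integral over u in [-1, 1] of
    (1 + rho^2 - 2*rho*u)^(-3/2) du, by the midpoint rule."""
    h = 2.0 / n
    s = sum(
        (1 + rho * rho - 2 * rho * (-1 + (k + 0.5) * h)) ** -1.5
        for k in range(n)
    )
    return 0.5 * (1 - rho * rho) * s * h

m1 = poisson_total_mass(0.3)   # should be 1
m2 = poisson_total_mass(0.8)   # should be 1
```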
In view of the possible unsolvability of the original Dirichlet problem, we will generalize it as follows. Given D and f as before, we say that h is a solution to the generalized Dirichlet problem (D, f) iff h is harmonic in D and converges to f at every point of ∂D which is regular for D^c. Thus we have proved above that this generalized problem always has a solution given by H_D f. If D is bounded, this is the unique bounded solution. The proof will be given in Proposition 11 of §4.5. We shall turn our attention to the general question of non-uniqueness for an unbounded domain D.

Consider the function g defined by

g(x) = P^x{τ_D = ∞} = 1 − P_{D^c}1(x).  (21)

If D is bounded, then τ_D < ∞ almost surely so that g ≡ 0. In general, g is a solution of the generalized Dirichlet problem (D, 0). The question is whether it is a trivial solution, i.e., g ≡ 0 in D. Recall from §3.6 that (for a Hunt process) a Borel set A is recurrent if and only if P_A 1 ≡ 1. It turns out that this identity holds if it holds in A^c. For then we have for any x,

P^x{T_A < ∞} ≥ P^x{X(t) ∈ A} + E^x{X(t) ∈ A^c; P^{X(t)}[T_A < ∞]}
            = P^x{X(t) ∈ A} + P^x{X(t) ∈ A^c} = 1.

Applying this remark to A = D^c, we obtain the following criterion.

Proposition 7. We have g ≡ 0 in D if and only if D^c is recurrent.



As examples: in R^d, d ≥ 3, if D is the complement of a compact set, then g ≢ 0 in D. In R², g ≢ 0 in D if and only if D^c is a polar set, in which case g ≡ 1.

When g ≢ 0 in D, the generalized Dirichlet problem (D, f) has the solutions H_D f + cg for any constant c. The next result implies that there are no other bounded solutions. We state it in a form dictated by the proof.

Theorem 8. Let D be an open set and f be a bounded Borel measurable function on ∂D. If h is bounded and harmonic in D, and lim_{D∋x→z} h(x) = f(z) for all z ∈ ∂D, then h must be of the form below:

h(x) = H_D f(x) + cg(x),  x ∈ D;  (22)

where g is defined in (21) and c is a constant. This h in fact converges to f at each point z of ∂D which is regular for D^c and at which f is continuous. In particular if f is continuous on ∂D, h is a generalized solution to the Dirichlet problem (D, f).

Proof. There exists a sequence of bounded regular open sets D_n such that D_n ⋐ D_{n+1} for all n, and ∪_n D_n = D. Such a sequence exists by a previous remark in the proof of Proposition 6. Suppose that h is bounded and harmonic in D. Consider the Dirichlet problem (D_n, h). It is plain that h is a solution to the problem, hence we have by Proposition 5:

h(x) = H_{D_n} h(x),  x ∈ D_n.  (23)

This implies the first part of the lemma below, in which we write T_n for τ_{D_n} and T for τ_D, to lighten the typography; also T₀ = 0.

Lemma 9. For each x ∈ D, the sequence {h(X(T_n)), ℱ(T_n), P^x; n ≥ 0} is a bounded martingale. We have P^x-a.s.:

lim_{n→∞} h(X(T_n)) = f(X(T)) on {T < ∞};
                    = c        on {T = ∞};  (24)

where c is a constant not depending on x.

Proof. Since h is bounded in D by hypothesis, we have by the strong Markov property, for each x ∈ D and n ≥ 1:

E^x{h(X(T_n)) | ℱ(T_{n−1})} = E^{X(T_{n−1})}{h(X(T_n))} = H_{D_n}h(X(T_{n−1})) = h(X(T_{n−1}))

where the last equation comes from (23). This proves the first assertion, and consequently by the martingale convergence theorem the limit in (24) exists.

We now prove that there exists a random variable Z belonging to the remote field 𝒢 of the Brownian motion process, such that the limit in (24) is equal to Z almost surely on the set {T = ∞}. [Readers who regard this as "intuitively obvious" should do Exercise 10 first.] Recall that 𝒢 may be defined as follows. For each integer k ≥ 1 let 𝒢_k be the σ-field generated by X(n) for n ≥ k, and augmented as done in §2.3, namely 𝒢_k = σ(X(n), n ≥ k)⁻. Then 𝒢 = ⋀_{k=1}^∞ 𝒢_k. By the Corollary to Theorem 8.1.4 of Course (see also the discussion on p. 258 there), 𝒢 is trivial. [This is the gist of Exercise 4 of §4.1.]

Define h̄ in R^d to be h in D, and zero in D^c. We claim that there is a random variable Z (defined on Ω) such that

lim_{n→∞} h̄(X(T_n)) = Z, P^x-a.s.  (25)

for every x ∈ R^d. For x ∈ D, under P^x we have h̄(X(T_n)) = h(X(T_n)), so the limit in (25) exists and is the same as that in (24). For x ∈ D^c, under P^x we have h̄(X(T_n)) = h̄(x) = 0, hence the limit in (25) exists trivially and is equal to zero. Since (25) holds P^x-a.s. for every x ∈ R^d, we can apply the shift to obtain for each k ≥ 1:

lim_{n→∞} h̄(X(T_n)) ∘ θ_k = Z ∘ θ_k, P^x-a.s.  (26)

for every x ∈ R^d. Let x ∈ D; then under P^x and on the set {T = ∞}, we have T_n > k and consequently h̄(X(T_n)) ∘ θ_k = h̄(X(T_n)) for all sufficiently large values of n. Hence in this case the limit in (26) coincides with that in (24). Therefore the latter is equal to Z ∘ θ_k, which belongs to 𝒢_k. This being true for each k ≥ 1, the limit in (24) is also equal to lim_{k→∞} Z ∘ θ_k, which belongs to 𝒢. Since 𝒢 is trivial, the upper limit is a constant P^x-a.s. for each x ∈ R^d. This means: for each x there is a number c(x) such that

P^x{T = ∞; Z = c(x)} = 1.  (27)

By Exercise 4 below, g > 0 in at most one component domain D₀ of D, and g ≡ 0 in D − D₀. Choose any x₀ in D₀ and put for all x in D:

φ(x) = P^x{T = ∞; Z ≠ c(x₀)}.

It is clear from the definition of Z that for any ball B = B(x, r) ⋐ D, φ(x) = P_{B^c}φ(x), as in Theorem 1 of §4.3. Since φ is bounded, it is harmonic in D. Since φ ≥ 0 and φ(x₀) = 0, as shown above, it follows by Proposition 3 of §4.3 (minimum principle) that φ ≡ 0 in D₀. As g ≡ 0 in D − D₀, we have φ ≡ 0 in D. Thus we may replace c(x) by c(x₀) in (27), proving the second line of (24). For x ∈ D, P^x-a.s. on the set {T < ∞}, we have T_n ↑ T, X(T_n) ∈ D and X(T_n) → X(T); since h(y) → f(z) as y ∈ D, y → z ∈ ∂D by hypothesis, we have h(X(T_n)) → f(X(T)). This proves the first line of (24). The lemma is proved.

Now it follows by (23), (25) and bounded convergence that for each x ∈ D:

h(x) = lim_n E^x{h(X(T_n))},

which reduces to (22) by (24). The rest of Theorem 8 is contained in Theorem 2 and is stated only for a recapitulation. □

Exercises

1. Without using Theorem 4, give a simple proof that a ball or a cube in R^d is regular. Generalize to any solid with a "regular" surface so that there is a normal pointing outward at each point. [Hint: (V) of §4.2 is sufficient for most occasions.]
2. Let D be the domain obtained by deleting a radius from the ball B(o, 1) in R³; and let f = ||x|| on ∂D. The original Dirichlet problem (D, f) has no solution.
3. In classical potential theory, a bounded domain D is said to be regular iff the Dirichlet problem (D, f) is solvable for every continuous f on ∂D. Show that this definition is equivalent to the definition given here.
4. For any open set D, show that the function g defined in (21) is either identically zero or strictly positive in each connected component (domain) of D. Moreover there is at most one component in which g > 0. Give examples to illustrate the various possibilities.
5. What is the analogue of (17) and (18) for R¹?
6. Derive the Poisson integral in R² from Cauchy's formula (19), or the Taylor series of an analytic function (see Titchmarsh [1]).
7. Derive the Poisson integral corresponding to the exterior Dirichlet problem for the ball B = B(o, r); namely find the explicit distribution of X(T_{∂B})1_{{T_{∂B} < ∞}} under P^x, where x ∈ B^c.
8. Let d ≥ 2; let A be the hyperplane {x ∈ R^d | x_d = 0}; and let D = {x ∈ R^d | x_d > 0}. Compute H_D(x, ·), namely the distribution of X(T_A) under P^x, for x ∈ D. [Hint: we need the formula for Brownian motion in R¹: P^x{T_{{0}} ∈ dt} = |x|/√(2πt³) exp[−x²/2t] dt; the rest is an easy computation. Answer: H_D(x, dy) = Γ(d/2) x_d π^{−d/2} ||x − y||^{−d} μ(dy), where μ is the area measure on A.]

9. Let D be open in R^d, d ≥ 2. Then D^c is polar if and only if there does not exist any nonconstant bounded harmonic function in D. [Hint: if D^c is not polar, there exists a bounded continuous function f on ∂D which is not constant.]
10. Let {x_n, n ≥ 1} be independent, identically distributed random variables taking the values ±1 with probability 1/2 each; and let X_n = Σ_{j=1}^n x_j. Define for k ≥ 1: T_k = T_{{k}} on {X₁ = +1}; T_k = T_{{−k}} on {X₁ = −1}; and Λ = {lim_{k→∞} X(T_k) = +∞}. Show that each T_k is optional with respect to {ℱ_n}, where ℱ_n = σ(x_j, 1 ≤ j ≤ n); T_k ↑ ∞ almost surely; but Λ does not belong to the remote field ⋀_{m=1}^∞ σ(X_n, n ≥ m). Now find a similar example for the Brownian motion. (This is due to Durrett.)
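Apropos of Exercise 5: in R¹ the analogue of (17) and (18) is linear, P^x{T_{{a}} < T_{{b}}} = (b − x)/(b − a) for a < x < b, because x itself is harmonic. This can be checked without any time discretization by the simple random walk embedded in the Brownian path at integer levels (a standard device; the code and names below are ours, not the book's):

```python
import random

random.seed(11)

def hits_a_first(x, a, b):
    """Simple random walk from integer x in (a, b) until it reaches a or b.
    Its two-point hitting probabilities coincide with Brownian motion's."""
    while a < x < b:
        x += random.choice((-1, 1))
    return x == a

a, b, x0, trials = 0, 10, 3, 20000
p_hat = sum(hits_a_first(x0, a, b) for _ in range(trials)) / trials
p_exact = (b - x0) / (b - a)   # = 0.7
```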

4.5. Superharmonic Function and Supermartingale

We introduce a class of functions which stand in the same relation to harmonic functions as supermartingales to martingales.

Definition. Let D be an open set. A function u is called superharmonic in D iff

(a) −∞ < u ≤ +∞; u ≢ +∞ in each component domain of D; and u is lower semi-continuous in D;
(b) for any ball B(x, r) ⋐ D we have

u(x) ≥ (1/σ(r)) ∫_{S(x,r)} u(y) σ(dy).  (1)

[A function u is subharmonic in D iff −u is superharmonic in D; but we can dispense with this term here.] Thus the sphere-averaging property in (3) of §4.3 is replaced here by an inequality. The assumption of lower semi-continuity in (a) implies that u is bounded below on each compact subset of D. In particular the integral in the right member of (1) is well defined and not equal to −∞. We shall see later that it is in fact finite even if u(x) = +∞. In consequence of (1), we have also for any B(x, r) ⋐ D:

u(x) ≥ (1/v(r)) ∫_{B(x,r)} u(y) m(dy).  (2)

Since u is bounded below in B(x, r) we may suppose u ≥ 0 there, and u(x) < ∞. We then obtain (2) from (1) as we did in (6) of §4.3. It now follows that if u is superharmonic in a domain D, then it is locally integrable there. To see this we may again suppose u ≥ 0, because u is bounded below on each compact subset of D. Let A be the set of points x in D such that u has a finite integral over some ball with center at x. It is trivial that A is open. Observe that if x ∈ A, then there is x_n arbitrarily near x for which u(x_n) < ∞, and consequently u has a finite integral over any B(x_n, r) ⋐ D. Therefore this is also true for any B(x, r) ⋐ D by an argument of inclusion. Now let x_n ∈ A, x_n → x ∈ D. Then for sufficiently large n and small r we have x ∈ B(x_n, r) ⋐ D. Thus x ∈ A and so A is closed in D. Since D is a domain and A is not empty, we have proved A = D as claimed.
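A concrete instance of the definition: in R³ the Newtonian kernel u(y) = ||y||^{−1} (with u(o) = +∞) is superharmonic, and its average over S(x, r) equals 1/max(||x||, r); thus (1) holds with equality for r < ||x|| and with strict inequality for r > ||x||. A numerical sketch of both cases (reducing the sphere average to the polar angle, as for the Poisson kernel; names ours):

```python
import math

def sphere_average_newton(rho, r, n=200000):
    """Average of 1/|y| over the sphere S(x, r) in R^3 with |x| = rho,
    reduced to u = cos(theta): (1/2) * integral over [-1, 1] of
    (rho^2 + r^2 - 2*rho*r*u)^(-1/2) du."""
    h = 2.0 / n
    s = sum(
        (rho * rho + r * r - 2 * rho * r * (-1 + (k + 0.5) * h)) ** -0.5
        for k in range(n)
    )
    return 0.5 * s * h

avg_out = sphere_average_newton(2.0, 1.0)  # r < |x|: equals u(x) = 1/2
avg_in = sphere_average_newton(0.5, 1.0)   # r > |x|: equals 1/r = 1 < u(x) = 2
```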
We now follow an unpublished method of Doob's to analyse superharmonic functions by means of supermartingales. Suppose first that D is bounded and u is superharmonic in an open set containing D̄. Let ρ denote the distance function in R^d. For a fixed ε > 0 and each x in D we put

r(x) = ½ρ(x, ∂D) ∧ ε,  S(x) = S(x, r(x));

thus S(x) is the sphere with center x and radius r(x) varying with x as prescribed. Define a sequence of space-dependent hitting times {T_n, n ≥ 0} as follows: T₀ = 0 and for n ≥ 1:

T_n = inf{t > T_{n−1} | X(t) ∈ S(X(T_{n−1}))}  (3)

where inf ∅ = +∞ as usual. Each T_n is optional with respect to {ℱ_t} (exercise!). The strong Markov property of X and the inequality (1) imply the key relation:

E^x{u(X(T_n)) | ℱ(T_{n−1})} ≤ u(X(T_{n−1})),  (4)

provided that u(x) < ∞. The integrability of u(X(T_n)) is proved by induction on n under this proviso. Thus {u(X(T_n)), ℱ(T_n), n ≥ 1} is a supermartingale under P^x. Since D is bounded, all the terms of the supermartingale are bounded below by a fixed constant. Hence it may be treated like a positive supermartingale; in particular, it may be extended to the index set "1 ≤ n ≤ ∞" in a trivial way.

Let us first prove the crucial fact that for each x ∈ D, we have

P^x{lim_{n→∞} T_n = τ_D} = 1.  (5)

For it is clear that P^x-a.s., T_n ↑ and T_n < τ_D < ∞ because D is bounded. Let T_∞ = lim_n ↑ T_n. Then X(T_∞) = lim_{n→∞} X(T_n) by continuity, and so X(T_∞) ∈ D̄. Suppose if possible that X(T_∞) ∈ D. Then for all sufficiently large values of n, we have ρ(X(T_{n−1}), ∂D) > 2^{−1}ρ(X(T_∞), ∂D) > 0 and ρ(X(T_{n−1}), X(T_n)) < 4^{−1}ρ(X(T_∞), ∂D) ∧ ε. But by (3), ρ(X(T_{n−1}), X(T_n)) = 2^{−1}ρ(X(T_{n−1}), ∂D) ∧ ε. These inequalities are incompatible. Hence X(T_∞) ∈ ∂D and consequently T_∞ = τ_D.

Now fix t > 0 and define a stopping random variable as follows:

N = inf{n ≥ 1 | T_n ≥ t},  (6)

where inf ∅ = +∞. For each integer k ≥ 1, {N = k} belongs to the σ-field generated by T₁, ..., T_k; hence to ℱ(T_k). Thus N is optional with respect to {ℱ(T_n), n ≥ 1}. Therefore by Doob's stopping theorem for a positive (hence closable) supermartingale (Theorem 9.4.5 of Course):

{u(X(T_{n∧N})), ℱ(T_{n∧N}), 1 ≤ n ≤ ∞}  (7)

is a supermartingale under P^x, provided u(x) < ∞. Here X(T_∞) = X(τ_D), ℱ(T_∞) = ℱ(τ_D) by (5). It follows that

u(x) ≥ E^x{lim inf_{n→∞} u(X(T_{n∧N}))}  (8)

by Fatou's lemma, since all the terms are bounded below. Now lim_n X(T_{n∧N}) = X(T_N) whether N is finite or infinite, and by the lower semi-continuity of u in D̄ (because u is superharmonic in an open set containing D̄) we conclude that

u(x) ≥ E^x{u(X(T_{N(ε)}))},  (9)

where we have indicated the dependence of N on ε. On the set {t < τ_D}, there exists n such that t < T_n; hence N(ε) < ∞ and T_{N(ε)−1} < t ≤ T_{N(ε)} by (6). By the definition of T_{N(ε)}, this implies |X(t) − X(T_{N(ε)})| ≤ 2ε. Thus we have proved

lim_{ε↓0} X(T_{N(ε)}) = X(t ∧ τ_D)  (10)

P^x-a.s. on {t < τ_D}. On the set {t ≥ τ_D}, N(ε) = +∞ for all ε > 0 and X(T_{N(ε)}) = X(τ_D) by (5). Hence (10) is also true. Using (10) in (9) and the lower semi-continuity of u again, we obtain

u(x) ≥ E^x{u(X(t ∧ τ_D))}.  (11)

The next step is standard: if 0 < s < t, we have P^x-a.s.

E^x{u(X(t ∧ τ_D)) | ℱ_s} = E^{X(s)}{u(X((t − s) ∧ τ_D))} ≤ u(X(s)) = u(X(s ∧ τ_D))  (12)

on {s < τ_D}, since X(s) ∈ D; whereas on {s ≥ τ_D} the two extreme members of (12) both reduce to u(X(τ_D)) and there is equality. We record this important result below.
4.5. Superharmonic Function and Supermartingale 177

Theorem 1. If D is bounded and u is superharmonic in an open set containing
D̄, then

    {u(X(t ∧ τ_D)), ℱ(t), 0 ≤ t ≤ ∞}    (13)

is a supermartingale under P^x for each x ∈ D for which u(x) < ∞. When u is
harmonic in an open set containing D̄, then (13) is a martingale under P^x for
each x ∈ D.
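Theorem 1 lends itself to a quick numerical sanity check (an illustration, not part of the text; the domain, step size and sample count are arbitrary choices). The function u(y) = 1/‖y‖ is harmonic in an open set containing D̄ when D = B((2,0,0), 1) in R³, since the singularity at 0 lies outside D̄; the martingale assertion then gives E^x{u(X(t ∧ τ_D))} = u(x) = 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
center = np.array([2.0, 0.0, 0.0])          # D = open ball of radius 1 about center
n_paths, n_steps, dt = 20000, 1000, 1e-3    # time horizon t = n_steps * dt = 1.0

pos = np.tile(center, (n_paths, 1))         # every path starts at x = center
alive = np.ones(n_paths, dtype=bool)        # paths that have not yet left D

for _ in range(n_steps):
    step = rng.normal(scale=np.sqrt(dt), size=(n_paths, 3))
    pos[alive] += step[alive]               # a path is frozen once it exits D
    alive &= np.linalg.norm(pos - center, axis=1) < 1.0

u_vals = 1.0 / np.linalg.norm(pos, axis=1)  # u(X(t ∧ τ_D)) for each path
print(u_vals.mean())                        # should be close to u(x) = 0.5
```

The sample mean agrees with u(x) up to Monte Carlo noise; with a superharmonic but non-harmonic u the same experiment would show a decrease instead of equality.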

We proceed to extend this result to an unbounded D and a function u
which is superharmonic in D only. At the same time we will prove that the
associated supermartingale has right continuous paths with left limits. To do
this we use the basic machinery of a Hunt process given in §3.5. First, we
must show that the relevant process is a Hunt process. For the Brownian
motion in R^d this is trivial; indeed we verified that it is a Feller process. But
what we need here is a Brownian motion somehow living on the open set D,
as follows (see Exercise 9 for another way).

Let D be any open subset of R^d, and D_∂ = D ∪ {∂} the one-point
compactification of D. Thus ∂ plays the role of "the point at infinity": x → ∂ iff
x leaves all compact subsets of D, namely x → ∂D; and all points of ∂D are
identified with ∂. Define a process {X̃(t), t ≥ 0} living on the state space D_∂
as follows:

    X̃(t) = X(t),  if t < τ_D;
         = ∂,     if τ_D ≤ t ≤ ∞.    (14)

By this definition ∂ is made an absorbing state, and τ_D the lifetime of X̃; see
(13) and (14) of §1.2. Define a transition semigroup (Q_t, t ≥ 0) as follows: for
any bounded Borel function φ on D_∂ and x ∈ D_∂:

    Q_t φ(x) = E^x{t < τ_D; φ(X_t)} + P^x{τ_D ≤ t} φ(∂).    (15)

In particular, Q_0 φ = φ. We call X̃ the "Brownian motion killed outside D"
or "killed at the boundary ∂D". It is easy to check that (Q_t) is the transition
semigroup of X̃, but the next theorem says much more. Recall {ℱ_t} is the
family of augmented σ-fields associated with the unrestricted Brownian
motion {X_t, t ≥ 0}.
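In one dimension the semigroup (Q_t) can be compared against a classical eigenfunction expansion (a standard fact about the Dirichlet heat kernel, not derived in the text): for D = (0, 1) and generator ½ d²/dx², Q_t 1(x) = P^x{τ_D > t} = Σ_{k odd} (4/kπ) sin(kπx) e^{−k²π²t/2}. The sketch below simulates the killed Brownian motion with assumed discretization parameters; the residual discrepancy is discretization bias.

```python
import numpy as np

rng = np.random.default_rng(1)
x0, t, dt = 0.5, 0.3, 1e-4
n_paths = 40000

pos = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)        # paths not yet killed at 0 or 1
for _ in range(int(t / dt)):
    pos[alive] += rng.normal(scale=np.sqrt(dt), size=int(alive.sum()))
    alive &= (pos > 0.0) & (pos < 1.0)

mc = alive.mean()                           # Monte Carlo estimate of Q_t 1(x0)

# Dirichlet eigenfunction series for P^x{tau_D > t} on D = (0, 1)
series = sum(4.0 / (k * np.pi) * np.sin(k * np.pi * x0)
             * np.exp(-k**2 * np.pi**2 * t / 2.0) for k in range(1, 40, 2))
print(mc, series)
```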

Theorem 2. {X̃_t, ℱ_t; t ≥ 0} is a Hunt process.

Proof. First of all, almost all sample paths are continuous in [0, ∞) in the
topology of D_∂. To see this we need only check the continuity at τ_D when
τ_D < ∞. This is true because as t ↑ τ_D < ∞, X̃(t) → ∂ = X̃(τ_D). Since
continuity clearly implies quasi left continuity, it remains only to check the strong
Markov property. Let f be a continuous function on D_∂; then it is bounded
because D_∂ is compact. Let T be optional with respect to {ℱ_t}, and Λ ∈ ℱ_T.

We have for x ∈ D_∂ and t ≥ 0:

    E^x{Λ; Q_t f(X̃_T); T < ∞}
        = E^x{Λ; T < τ_D; Q_t f(X_T)} + E^x{Λ; τ_D ≤ T < ∞} f(∂),    (16)

since Q_t f(∂) = f(∂) for t ≥ 0. By (15), the first term on the right side above is
equal to

    E^x{Λ; T < τ_D; E^{X(T)}[t < τ_D; f(X_t)]} + E^x{Λ; T < τ_D; E^{X(T)}[t ≥ τ_D]} f(∂),

which is equal to

    E^x{Λ; T + t < τ_D; f(X_{T+t})} + E^x{Λ; T < τ_D ≤ T + t} f(∂)

by the strong Markov property of the unrestricted Brownian motion X
applied at T. Substituting into (16) we obtain

    E^x{Λ; Q_t f(X̃_T); T < ∞}
        = E^x{Λ; T + t < τ_D; f(X_{T+t})} + E^x{Λ; τ_D ≤ T + t; T < ∞} f(∂).

This is none other than E^x{Λ; f(X̃_{T+t}); T < ∞} upon inspection.
Since Λ is an arbitrary set from ℱ_T, we have verified that

    E^x{f(X̃_{T+t}) | ℱ_T} = Q_t f(X̃_T)   on {T < ∞},

for every continuous f on D_∂. Therefore X̃ has the strong Markov property
(cf. Theorem 1 of §2.3), indeed with respect to a larger σ-field than that
generated by X̃ itself. □

We are ready for the denouement.

Theorem 3. Let D be an open set and u be superharmonic and ≥ 0 in D; put
u(∂) = 0. Then u is excessive with respect to (Q_t). Conversely, if u is excessive
with respect to (Q_t), and ≢ ∞ in each component domain of D, then u restricted
to D is superharmonic and ≥ 0.

Proof. Let each D_n be bounded open and D_n ↑↑ D. Write τ_n = τ_{D_n}, τ = τ_D
below. Theorem 1 above is applicable to each D_n. Hence if x ∈ D and u(x) <
∞ we have

    u(x) ≥ E^x{u(X(t ∧ τ_n))} ≥ E^x{t < τ_n; u(X_t)},    (17)

where the second inequality is trivial because u ≥ 0. As n → ∞ we have P^x-a.s.
τ_n ↑ τ and {t < τ_n} ↑ {t < τ}. Hence it follows by monotone convergence that

    u(x) ≥ E^x{t < τ; u(X_t)} = Q_t u(x),    (18)

where the equation is due to u(∂) = 0. Since u is lower semi-continuous at x
we have P^x-a.s. u(x) ≤ lim_{t↓0} u(X(t)); while lim_{t↓0} 1_{{t<τ}} = 1. Hence by
Fatou:

    u(x) ≥ E^x{ lim_{t↓0} u(X(t)) 1_{{t<τ}} } ≥ lim_{t↓0} Q_t u(x).    (19)

The two relations (18) and (19), together with the banality u(∂) = Q_t u(∂),
show that u is excessive with respect to (Q_t).
Conversely, if u is excessive with respect to (Q_t), then u ≥ 0; and the
inequality (1) is a very special case of u ≥ P_A u in Theorem 4 of §3.4. Observe
that Q_{A^c}(x, ·) = P_{A^c}(x, ·) if x ∈ D and A ⊂ D, where Q_{A^c} is defined for X̃ as
P_{A^c} is for X. To show that u is superharmonic in D it remains to show that u
is lower semi-continuous at each x in D. Let B(x, 2r) ⋐ D. Then for all y ∈
B(x, r) = B, we have P^y{τ ≤ t} ≤ P^y{T_r ≤ t} and the latter does not depend
on y (T_r is defined in (10) of §4.2). It follows that

    lim_{t↓0} P^y{τ ≤ t} = 0   uniformly for y ∈ B.    (20)

Now write for y ∈ B:

    Q_t u(y) = E^y{t < τ; u(X_t)} + P^y{τ ≤ t} u(∂)
             = P_t u(y) − E^y{τ ≤ t; u(X_t) − u(∂)}.    (21)

Suppose first that u is bounded; then P_t u is continuous by the strong Feller
property, while the last term in (21) is less than ε if 0 < t < δ(ε), for all y ∈ B.
We have then

    u(y) ≥ Q_t u(y) ≥ P_t u(y) − ε;
    lim inf_{y→x} u(y) ≥ lim_{y→x} P_t u(y) − ε = P_t u(x) − ε ≥ Q_t u(x) − ε.

Letting t ↓ 0 we see that u is lower semi-continuous at x. For a general
excessive u, we have u = lim_n ↑ (u ∧ n), hence its lower semi-continuity
follows at once from that of u ∧ n. □

Here is an application to a well known result in potential theory known as
the barrier theorem. It gives a necessary and sufficient condition for the
regularity at a boundary point of an open set. We may suppose the set to be
bounded since regularity is a local property. A function u defined in D is
called a barrier at z iff it is superharmonic and > 0 in D, and u(x) converges
to zero as x in D tends to z.

Proposition 4. Let z ∈ ∂D; then z is regular for D^c if and only if there exists a
barrier at z.

Proof. Let u be a barrier at z and put u(∂) = 0. Then u is excessive with
respect to (Q_t). We may suppose u bounded by replacing it with u ∧ 1. Suppose
z is not regular for D^c. Let B_1 and B_2 be two balls with center at z, B̄_1 ⊂ B_2.
We have then, by the strong Markov property applied at τ_{B_1},

    E^z{τ_{B_2} < τ_D; u(X(τ_{B_2}))}
        = E^z{τ_{B_1} < τ_D; E^{X(τ_{B_1})}[τ_{B_2} < τ_D; u(X(τ_{B_2}))]}.    (22)

On {τ_{B_1} < τ_D}, we have X(τ_{B_1}) ∈ D and

    E^{X(τ_{B_1})}[τ_{B_2} < τ_D; u(X(τ_{B_2}))] ≤ E^{X(τ_{B_1})}[u(X̃(τ_{B_2} ∧ τ_D))]
        = Q_C u(X(τ_{B_1})) ≤ u(X(τ_{B_1})),

where C = (B_2 ∩ D)^c ∪ {∂}, explicitly. Substituting into (22) we obtain

    E^z{τ_{B_2} < τ_D; u(X(τ_{B_2}))} ≤ E^z{τ_{B_1} < τ_D; u(X(τ_{B_1}))}.    (23)

Now P^z-a.s. τ_D > 0, while τ_{B_2} ↓ 0 if B_2 shrinks to z. Hence we may choose B_2
so that the left member of (23) has a value > 0, because u > 0 in D. Now fix
B_2 and let B_1 shrink to z. Then X(τ_{B_1}) → z and so u(X(τ_{B_1})) → 0 by hypothesis;
hence the right member converges to zero by bounded convergence. This
contradiction proves that z must be regular for D^c.

Conversely, if z is regular for D^c, put f(x) = ‖x − z‖ on ∂D. Then f is
bounded on ∂D since D is bounded; and f(x) = 0 if and only if x = z. By
the solution of the Dirichlet problem for (D, f), u = H_D f is harmonic in D
and converges to f(z) = 0 as x → z. Since f > 0 on ∂D except at one point
(which forms a polar set), it is clear that u > 0 in D. Thus u is the desired
barrier at z. □
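A concrete illustration (not from the text): for the ball D = B(0, R), the function u(x) = R² − ‖x‖² is a barrier at every boundary point. It is > 0 in D, tends to 0 at ∂D, and Δu = −2d < 0, so it is superharmonic by Theorem 12 below. Since u is quadratic, the central-difference Laplacian recovers −2d exactly up to rounding.

```python
import numpy as np

d, R, h = 3, 1.0, 1e-3

def u(x):
    # candidate barrier for the ball B(0, R): positive inside, 0 on the boundary
    return R**2 - np.dot(x, x)

x = np.array([0.3, -0.2, 0.5])              # an interior point of B(0, R)

lap = 0.0                                   # Laplacian by central second differences
for i in range(d):
    e = np.zeros(d); e[i] = h
    lap += (u(x + e) - 2.0 * u(x) + u(x - e)) / h**2

print(lap)                                  # equals -2d for this quadratic u
```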

Proposition 5. Let u be superharmonic and bounded below in a domain D. Let
B be open and B̄ ⊂ D. Then P_{B^c}u is harmonic in B, and superharmonic in D.

Proof. We may suppose u ≥ 0 by adding a constant, and put u(∂) = 0. Then
u is excessive with respect to (Q_t) and so by Theorem 4 of §3.4, for every x ∈ D:

    Q_{B^c}u(x) = P_{B^c}u(x) ≤ u(x).    (24)

Since u is locally integrable, there exists x in each component of B for which
u(x) < ∞. Theorem 1 of §4.3 applied with f = u then establishes the har-
monicity of P_{B^c}u in B. P_{B^c}u is excessive with respect to (Q_t) by Theorem 4 of
§3.4, hence superharmonic in D by Theorem 3 above. □

When B is a ball with B̄ ⊂ D, the function P_{B^c}u is known as the Poisson
integral of u for B and plays a major role in classical potential theory. Since
B is regular we have P_{B^c}u = u in D − B. Thus the superharmonic function u
is transformed into another one which has the nicer property of being har-
monic in B. This is the simplest illustration of the method of balayage:
"sweeping all the charge in B onto its boundary". (There is no charge where
Δu = 0, by a law of electromagnetism.) Poisson's formula in Example 2 of
§4.4 gives the exact form of the Poisson integral. A closely related notion will
now be discussed.

EXAMPLE. Recall from Example 4 of §3.7, for d ≥ 3:

    u(x, y) = A_d / ‖x − y‖^{d−2}.

Then u(x, y) denotes the potential density at y for the Brownian motion
starting from x. For a fixed y, u(·, y) is harmonic in R^d − {y} by Proposition
5 of §4.3. Since it equals +∞ at y, it is trivial that the inequality (1) holds for
x = y; since it is continuous in the extended sense, u(·, y) is superharmonic
in R^d. Now let D be an open set and for a fixed y ∈ D put

    g_D(x, y) = u(x, y) − P_{D^c}u(x, y)    (25)

where P_{D^c} operates on the function x → u(x, y). It follows from the preceding
discussion that g_D(·, y) is superharmonic in D, harmonic in D − {y}, and
vanishes in (D̄)^c and at points of ∂D which are regular for D^c; if D is regular
then it vanishes in D^c. Since u(·, y) is superharmonic and ≥ 0 in R^d, it is
excessive with respect to (P_t) by an application of Theorem 3 with D = R^d.
Of course this fact can also be verified directly without the intervention
of superharmonicity; cf. the Example in §3.6. Now g_D ≥ 0 by Theorem 4 of
§3.4. Hence it is excessive with respect to (Q_t) by another application of
Theorem 3. For any Borel measurable function f such that U|f|(x) < ∞ we
have

    ∫_D g_D(x, y) f(y) dy = E^x{ ∫_0^{τ_D} f(X_t) dt },    (26)

where X_t may be replaced by X̃_t in the last member above. Thus g_D(x, y)
plays the same role for X̃ as u(x, y) for X; it is the potential density at y for
the Brownian motion starting from x and killed outside D.
The function g_D is known as the Green's function for D, and the quantity
in (26), which may be denoted by G_D f, is the associated Green's potential
of f. It is an important result that g_D(x, y) = g_D(y, x) for all x and y; indeed
Hunt [1] proved that Q_t(x, dy) has a density q(t; x, y) which is symmetric
in (x, y) and continuous in (0, ∞) × D × D. This is a basic result in the
deeper study of the killed process, which cannot be undertaken here. In
R², Green's function can also be defined but the situation is complicated
by the fact that the Brownian motion is recurrent so that the corresponding

u in (25) is identically +∞. [This is excessive but not superharmonic!]
Finally, for a ball in any dimension Green's function can be obtained by
a device known as Kelvin's transformation; see e.g. Kellogg [1]. The case
d = 1 is of course quite elementary.
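Indeed, in the elementary case d = 1 with D = (0, 1), the Green's function for Brownian motion (generator ½ d²/dx²) is known to be g_D(x, y) = 2 (x ∧ y)(1 − x ∨ y). A quick numerical check of this assumed formula: g_D is symmetric, ∫ g_D(x, y) dy recovers E^x{τ_D} = x(1 − x), and φ = G_D g solves −½ φ″ = g; here g(y) = sin(πy), for which φ(x) = 2 sin(πx)/π².

```python
import numpy as np

def trap(vals, h):
    # composite trapezoidal rule on an equally spaced grid
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

n = 2001
y = np.linspace(0.0, 1.0, n)
h = y[1] - y[0]

def g_D(x, yy):
    return 2.0 * np.minimum(x, yy) * (1.0 - np.maximum(x, yy))

x = 0.3
sym_gap = abs(g_D(0.3, 0.7) - g_D(0.7, 0.3))      # symmetry g_D(x,y) = g_D(y,x)

exit_time = trap(g_D(x, y), h)                    # integral over y = E^x{tau_D} = x(1-x)

g = np.sin(np.pi * y)
phi = trap(g_D(x, y) * g, h)                      # (G_D g)(x)
exact = 2.0 * np.sin(np.pi * x) / np.pi**2        # solves -(1/2)phi'' = g, phi(0)=phi(1)=0
print(sym_gap, exit_time, phi, exact)
```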
As a corollary of Proposition 5, we see now that the sphere-average
in the right member of (1) is finite for every x in D. For it is none other than
P_{B^c}u(x) when B = B(x, r). The next proposition is somewhat deeper.

Proposition 6. If u is excessive with respect to (Q_t), and ≢ ∞ in each
component of D, with u(∂) < ∞, then Q_t u < ∞ for each t > 0.

Proof. Let x ∈ D, B = B(x, r) ⋐ D, and τ = τ_D. Then we have

    (27)

The first term on the right side of (27) does not exceed

because u is locally integrable. The second term does not exceed

by the supermartingale stopping theorem applied to {u(X̃_t)}. Since

    E^x{t ≥ τ; u(X̃_t)} = P^x{t ≥ τ} u(∂) < ∞,

Q_t u(x) < ∞ by (15). □


We can now extend Theorem 1 as follows.

Theorem 7. Let D be an open set and u be superharmonic and ≥ 0 in D. Then
for any x ∈ D,

    w = lim_{t↑↑τ_D} u(X(t))   exists P^x-a.s.    (28)

Put for 0 ≤ t ≤ ∞:

    w(t) = u(X(t)),  if 0 ≤ t < τ_D;
         = w,        if τ_D ≤ t ≤ ∞;    (29)

thus w(∞) = w. Then for any x ∈ D, we have P^x-a.s.: t → w(t) is right
continuous in [0, ∞) and has left limits in (0, ∞]; and {w(t), ℱ(t), 0 < t ≤ ∞}
is a supermartingale under P^x. In case u(x) < ∞, the parameter value t = 0
may be added to the supermartingale. If u is harmonic and bounded in D, then
{w(t), ℱ(t), 0 ≤ t ≤ ∞} is a continuous martingale under P^x for any x ∈ D.

Proof. By Theorem 3, u is excessive with respect to (Q_t). Hence by Theorem 6
of §3.4 applied to the Hunt process X̃, t → u(X̃_t) is right continuous in [0, ∞)
and has left limits in (0, ∞], P^x-a.s. for any x ∈ D. It follows that the limit
in (28) exists. The asserted continuity properties of t → w(t) then follow
trivially. Next, let D_n be bounded open, D̄_n ⊂ D_{n+1} and ⋃_n D_n = D. Given
x in D there exists n for which x ∈ D_n. It follows from Proposition 5 that
E^x{u(X(τ_{D_m}))} < ∞ for all m ≥ n, and is decreasing since u is excessive.
Hence E^x{w} < ∞ by Fatou. We have either by Theorem 1 or by the
stopping theorem for excessive functions, under P^x:

    w(0) = u(x) ≥ lim_n E^x{u(X(t ∧ τ_{D_n}))} ≥ E^x{ lim_n u(X(t ∧ τ_{D_n})) }
         = E^x{u(X(t)); t < τ_D} + E^x{w; t ≥ τ_D}
         = E^x{w(t)}.

For any x ∈ D and t > 0:

These relations imply the assertions of the theorem, the details of which
are left to the reader. □

There is a major improvement on Theorem 7 to the effect that almost
all sample functions of w(t) defined in (29) are actually continuous. This
is Doob's theorem, and the proof given below makes use of a new idea,
the reversal of time. We introduce it for a spatially homogeneous Markov
process discussed in §4.1, as follows. The transition probability function
is said to have a density iff for every t > 0 there exists p_t ≥ 0, p_t ∈ ℰ × ℰ,
such that

    P_t(x, A) = ∫_A p_t(x, y) dy.    (30)

Recall that the Lebesgue measure m is invariant in the sense of (8) of §4.1.
In the case of (30) this amounts to ∫ m(dx) p_t(x, y) = 1 for m-a.e. y. Although
m is not a finite measure, it is trivial to define P^m and E^m in the usual way;
thus P^m(Λ) = ∫ m(dx) P^x(Λ) for each Λ ∈ ℱ. Now for a fixed c > 0 define
a reverse process X_c in [0, c] as follows:

    X_c(t) = X(c − t),   t ∈ [0, c].    (31)

There is no difficulty in the concept of a stochastic process on a parameter
set such as [0, c]. The sample functions of X_c are obtained from those of X
by reversing the time from c. Since X as a Hunt process is right continuous
and has left limits in (0, c), X_c is left continuous and has right limits there.
This kind of sample function has been considered earlier in Chapter 2,
for instance in Theorem 6 of §2.2.

Proposition 8. Let X be a spatially homogeneous Markov process satisfying
(30), where for each t > 0, p_t is a symmetric function of (x, y). Then under
P^m, X_c has the same finite dimensional distributions as X in [0, c].

Proof. Let 0 ≤ t_1 < ··· < t_n ≤ c, and f_j ∈ ℰ_+ for 1 ≤ j ≤ n. To show that

    E^m{ ∏_{j=1}^n f_j(X(t_j)) } = E^m{ ∏_{j=1}^n f_j(X_c(t_j)) }    (32)

we rewrite the right member above as E^m{ ∏_{j=n}^1 f_j(X(c − t_j)) }. Since
m P_{c−t_n} = m, the latter can be evaluated as follows:

    ∫ ··· ∫ dx_n f_n(x_n) ∏_{j=n−1}^{1} p(t_{j+1} − t_j; x_{j+1}, x_j) f_j(x_j) dx_j

    = ∫ ··· ∫ f_1(x_1) dx_1 ∏_{j=1}^{n−1} p(t_{j+1} − t_j; x_j, x_{j+1}) f_{j+1}(x_{j+1}) dx_{j+1},

using the symmetry of p in its space variables. The last expression is equal
to the left member of (32) by an analogous evaluation. □
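The mechanism behind Proposition 8 — a symmetric transition density together with an invariant measure — is visible in a finite toy model (not in the text, where m is an infinite measure): for the lazy walk on a cycle the transition matrix is symmetric and the uniform distribution is invariant, so every path and its time reversal have exactly the same probability.

```python
import numpy as np

n = 6
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5                       # lazy symmetric walk on Z_n
    P[i, (i + 1) % n] = 0.25
    P[i, (i - 1) % n] = 0.25

pi = np.full(n, 1.0 / n)                # uniform = invariant measure
assert np.allclose(pi @ P, pi)
assert np.allclose(P, P.T)              # symmetric "density", as in Prop. 8

def path_prob(path):
    # probability of observing this state sequence under the stationary chain
    p = pi[path[0]]
    for a, b in zip(path, path[1:]):
        p *= P[a, b]
    return p

rng = np.random.default_rng(2)
for _ in range(20):
    path = list(rng.integers(0, n, size=5))
    assert np.isclose(path_prob(path), path_prob(path[::-1]))
```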

We return to Brownian motion X below.

Theorem 9. Let u be a positive superharmonic function in R^d. Then t → u(X(t))
is a.s. continuous in [0, ∞] (i.e., continuous in (0, ∞) and has limits at 0 and
at ∞).

Proof. Consider the reverse process X_c in (31). Although it is a copy
of Brownian motion in [0, c] under P^m by Proposition 8, it is not absolutely
clear that we can apply Theorem 6 of §3.4 to u(X_c(t)), since that theorem is
stated for a Hunt process in [0, ∞). However, it is easy to extend X_c to
[0, ∞) as follows. Define

    Y_c(t) = X_c(t),                   for t ∈ [0, c];
          = X_c(c) + X(t) − X(c),     for t ∈ [c, ∞).    (33)

It follows from the independent increments property of X and Proposition 8
that {Y_c(t), t ≥ 0} under P^m is a copy of Brownian motion under P^m.
Therefore t → u(Y_c(t)) is right continuous in [0, ∞) and has left limits in
(0, ∞]. In particular, t → u(X_c(t)) = u(X(c − t)) is right continuous in (0, c),
which means that t → u(X(t)) is left continuous in (0, c). But t → u(X(t))
is also right continuous in (0, c), hence it is in fact continuous there. This
being true for each c, we conclude that under P^m, t → u(X(t)) is continuous
in [0, ∞]. By the definition of P^m, this means that for m-a.e. x we have the
same result under P^x. For each s > 0 and x, P_s(x, ·) is absolutely continuous
same resuIt under px. For each s > and x, Ps(x,') is absolutely continuous

with respect to m. Hence for P_s(x, ·)-a.e. y, the result holds under P^y. For
s ≥ 0 let

    Λ_s = {ω | u(X(·, ω)) is continuous in [s, ∞]}.

Then Λ_s ∈ ℱ (proof?). We have for each x,

    P^x{Λ_s} = E^x{ P^{X(s)}(Λ_0) } = 1,

because X(s) has P_s(x, ·) as distribution. Letting s ↓ 0, we obtain P^x{Λ_0} = 1.
□

Theorem 9 is stated for u defined in the whole space. It will now be
extended as follows. Let D be an open set, and u be positive and superharmonic
in D. The function t → u(X(t, ω)) is defined on the set I(ω) = {t > 0 | X(t, ω) ∈
D}, which is a.s. an open set in (0, ∞). We shall say that u(X) is (right, left)
continuous wherever defined iff t → u(X(t, ω)) is so on I(ω) for a.e. ω. Let
r and r′ be rational numbers, r < r′. It follows from Theorem 7 that for
each x

    P^x{X(t) ∈ D for all t ∈ [r, r′], and t → u(X(t)) is right continuous
        in (r, r′)} = P^x{X(t) ∈ D for all t ∈ [r, r′]}.    (34)

For we can apply Theorem 7 to the Brownian motion starting at time r in D,
on the set where it remains in D up to time r′. Since every generic t in I(ω)
is caught between a pair [r, r′] in the manner shown in (34), we have proved
that for a Brownian motion X with any initial distribution, u(X) is right
continuous wherever defined. In particular, the last assertion is true for the
Brownian motion Y_c defined in (33) under P^m. Thus t → u(Y_c(t)) is right
continuous wherever defined; hence t → u(X(c − t)) is right continuous for
t ∈ (0, c) with c − t ∈ I(ω). Since c is arbitrary, this implies that t → u(X(t))
is left continuous for t ∈ I(ω). Thus u(X) is in fact continuous wherever
defined under P^m. As in Theorem 9, we can replace P^m by P^x for every x.
In the context of Theorem 7, we have proved that t → u(X(t)) is continuous
in (0, τ_D), hence t → w(t) is continuous in [0, ∞]. We state this as a corollary,
although it is actually an extension of Theorem 9.

Corollary. In Theorem 7, t → w(t) is continuous in [0, ∞], a.s.

Using the left continuity of an excessive function along Brownian paths,
we can prove a fundamental result known as the Kellogg–Evans theorem
in classical potential theory (see §5.1).

Theorem 10. For the Brownian motion process, a semi-polar set is polar.

Proof. By the results of §3.5, it is sufficient to prove that a thin compact
set K is polar. Put φ(x) = E^x{e^{−T_K}}. Since K is thin, φ(x) < 1 for all x.
Let first x ∉ K, and let G_n be open sets such that G_n ↓↓ K. Then lim_n ↑ T_{G_n} =
T_K ≤ ∞ and T_{G_n} < T_K for all n, P^x-a.s. The strict inequality above is due
to the continuity of the Brownian paths and implies T_K = T_{G_n} + T_K ∘ θ(T_{G_n})
on {T_{G_n} < ∞}. It follows by the strong Markov property that

    E^x{e^{−T_K} | ℱ(T_{G_n})} = e^{−T_{G_n}} φ(X(T_{G_n})).    (35)

Since T_{G_n} ∈ ℱ(T_{G_n}), we have T_K ∈ ⋁_{n=1}^∞ ℱ(T_{G_n}). Letting n → ∞ in (35), we
obtain a.s.

    e^{−T_K} = e^{−T_K} φ(X(T_K)),    (36)

where Theorem 9 is crucially used to ensure that φ(X(·)) is left continuous
at T_K if T_K < ∞, and has a limit at T_K if T_K = ∞. In the latter case of course
φ(X(T_K)) is meant to be lim_{t↑∞} φ(X(t)). However, we need (36) only on
the set {T_K < ∞}, on which the relation (36) entails φ(X(T_K)) = 1. Since
φ < 1 everywhere this forces P^x{T_K < ∞} = 0. Now let x be arbitrary and
t > 0. We have

    P^x{t < T_K < ∞} = E^x{t < T_K; P^{X(t)}[T_K < ∞]} = 0,

because X(t) ∈ K^c on {t < T_K}. Letting t ↓ 0 we obtain P^x{T_K < ∞} = 0
since x is not regular for K. Thus K is polar. □

Corollary. For any Borel set A, A\A^r is a polar set.

This follows because A\A^r is semi-polar by Theorem 6 of §3.5. We can
now prove the uniqueness of the solution of the generalized Dirichlet problem
in §4.4.

Proposition 11. Let D be a bounded open set; let f be a bounded measurable
function defined on ∂D and continuous on ∂*D = ∂D ∩ (D^c)^r. Then the
function H_D f in (1) of §4.4 is the unique bounded solution to the generalized
Dirichlet problem.

Proof. We know from §4.4 that H_D f is a solution. Let h be another solution
and put u = h − H_D f. Then u is harmonic in D and converges to zero
on ∂*D, the set of "regular boundary points". By the Corollary to Theorem
10, D^c\(D^c)^r is a polar set. Observe that it contains ∂D − ∂*D. For each
x in D, the following assertions are true P^x-a.s. As t ↑↑ τ_D, X(t) → X(τ_D) ∈
∂*D, because ∂D − ∂*D, being a subset of D^c\(D^c)^r, cannot be hit. Hence
u(X(t)) as t ↑↑ τ_D converges to zero, and consequently the random variable
w defined in (28) is equal to zero. Applying the last assertion of Theorem 7
to u, we conclude that u(x) = E^x{w} = 0. In other words, h ≡ H_D f in D.
□

We close this section by discussing the analogue of Definition 2 for a
harmonic function in §4.3. It is trivial that the limit of an increasing sequence
of superharmonic functions in a domain D is superharmonic, provided it
is not identically infinite there. On the other hand, any superharmonic
function in D is the limit of an increasing sequence of infinitely differentiable
superharmonic functions, in the sense made precise in Exercise 12 below.
Finally, we have the following characterization of a twice continuously
differentiable superharmonic function.

Theorem 12. If u ∈ C^{(2)}(D) and Δu ≤ 0 in D, then u is superharmonic in D.
If u is superharmonic and belongs to C^{(2)} in a neighborhood of x, then Δu(x) ≤ 0.

Proof. To prove the first assertion let B(x, r) ⋐ D. Then u belongs to C^{(2)}
in an open neighborhood of B̄(x, r). Hence formula (16) of §4.3 holds with
h replaced by u, and yields (d/dδ)A(x, δ) ≤ 0 for 0 < δ ≤ r. It follows that
A(x, r) ≤ lim_{δ↓0} A(x, δ) = u(x), which is the inequality (1). The other con-
ditions are trivially satisfied and so u is superharmonic in D. Next, suppose
that u is superharmonic and belongs to C^{(2)} in B(x, r). We may suppose
u ≥ 0 there, and then by Theorem 3 u is excessive with respect to the Brownian
motion killed outside B(x, r). It follows by Theorem 4 of §3.4 that P_{B(x,δ)^c}u(x)
for 0 < δ < r decreases as δ increases. Hence in the notation above (d/dδ)A(x,
δ) ≤ 0 for m-a.e. δ in (0, r), and so by (16) of §4.3 we have ∫_{S(x,δ)} Δu(y) σ(dy) ≤ 0
for a sequence of δ ↓ 0. Therefore Δu(x) ≤ 0 by continuity as δ ↓ 0. □
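The first assertion can be watched concretely (an illustration; u, the point, and the radius are chosen here): for u(x) = −‖x‖², Δu = −2d ≤ 0, and the sphere average can be computed exactly, A(x, r) = −(‖x‖² + r²) ≤ u(x), which a Monte Carlo average over uniform directions reproduces.

```python
import numpy as np

rng = np.random.default_rng(3)
d, r = 3, 0.5
x = np.array([1.0, 0.0, 0.0])                 # u(x) = -||x||^2, so u(x) = -1

z = rng.normal(size=(200000, d))              # normalized Gaussians give uniform
theta = z / np.linalg.norm(z, axis=1, keepdims=True)   # directions on the sphere
pts = x + r * theta                           # sample points of S(x, r)

avg = -(np.linalg.norm(pts, axis=1) ** 2).mean()   # sphere average A(x, r) of u
exact = -(np.dot(x, x) + r**2)                     # exact value -(||x||^2 + r^2)
print(avg, exact)
```

The average sits strictly below u(x), as the superharmonic inequality (1) requires.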
Exercises

1. If u is superharmonic in D, prove that the right members of (1) and of
   (2) both increase to u(x) as r ↓ 0.

2. If u is superharmonic in D, then for each x ∈ D we have u(x) =
   lim inf_{y→x, y≠x} u(y).

3. Let u be superharmonic in D, and D_r = {x ∈ D | ρ(x, ∂D) > r}, r > 0.
   Define

       u_r(x) = (1/m(B(x, r))) ∫_{B(x,r)} u(y) m(dy).

   Prove that u_r is a continuous superharmonic function in D_r.

4. (Minimum principle). Let u be superharmonic in a domain D. Suppose
   that there exists x_0 ∈ D such that u(x_0) = inf_{x∈D} u(x); then u ≡ u(x_0)
   in D. Next, suppose that for every z ∈ ∂D ∪ {∞} we have lim inf_{x→z} u(x) ≥
   0, where x → ∞ means ‖x‖ → ∞. Then u ≥ 0 in D.

5. Let D be a domain and u have the properties in (a) of the definition of
   a superharmonic function. Then a necessary and sufficient condition
   for u to be superharmonic in D is the following. For any ball B ⋐ D,
   and any function h which is harmonic in B and continuous in B̄, u ≥ h
   on ∂B implies u ≥ h in B. [Hint: for the sufficiency part take f_n continuous

   on ∂B and f_n ↑ u there, then H_B f_n ≤ u in B; for the necessity part use
   Proposition 5 of §4.4.]

6. Extend the necessity part of Exercise 5 to any open set B with compact
   B̄ ⊂ D. [Hint: if B is not regular, Proposition 5 of §4.4 is no longer
   sufficient, but Proposition 6 is. Alternatively, we may apply Exercise
   4 to u − h.]

7. If u is superharmonic in R^d and t > 0, then P_t u is continuous. [Hint: split
   the exponential and use P_{t/2}u(0) < ∞.]

8. Let D be bounded, u be harmonic in D and continuous in D̄. Then the
   process in (13) is a martingale. Without using the martingale, verify
   directly that u(x) = E^x{u(X(t ∧ τ_D))} for 0 ≤ t ≤ ∞. Note that D need
   not be regular.

9. Let D be an open set and define for t ≥ 0:

       X̂(t) = X(t ∧ τ_D).

   The process X̂ is called the Brownian motion stopped at the boundary
   of D. Its appropriate state space is D̄. Find the transition semigroup of X̂.
   Are all the points of ∂D absorbing states? Prove that X̂ is a Hunt process.
   [Hint: we need Theorem 10; but try first to see the difficulty!]

10. If D is open bounded and z is regular for D^c, prove that the function
    u(x) = E^x{τ_D} is a barrier at z for D.

11. Show that the first function in (17) of §4.3 is subharmonic in R²; the
    second superharmonic in R^d, d ≥ 3.

12. Let u be superharmonic in D. For δ > 0 put

        u_δ(x) = ∫_{R^d} φ_δ(‖x − y‖) u(y) dy

    where φ_δ is given in Exercise 6 of §4.3. Let D_δ be as in Exercise 3 above.
    Prove that u_δ is superharmonic and infinitely differentiable in D_δ, and
    increases to u as δ ↓ 0.

13. For each t_0 > 0, the function x → P^x{τ_D > t_0} is excessive with respect
    to (Q_t). Hence it is continuous in D. [Hint: use Proposition 1 of §4.4.]

14. For each bounded Borel measurable f, and t > 0, Q_t f is continuous
    in D. Hence if D is regular, the Brownian motion killed outside D is
    a Feller process. [Hint: for 0 < s < t we have Q_t f = P_s Q_{t−s} f − E^x{s ≥
    τ_D; Q_{t−s} f(X_s)}; use (20).]

15. Under the conditions of Theorem 1, prove that the function P_{D^c}u is
    superharmonic in an open set D_0 containing D̄. [Hint: u being super-
    harmonic in D_0, P_{D^c}u is excessive with respect to the Brownian motion
    killed outside D_0.]

16. In R², a positive superharmonic function is a constant.

In the next two exercises, proofs of some of the main results above are
sketched which do not use the general Hunt theory as we did.

17. Let u be a positive superharmonic function in R^d. We may suppose it
    bounded by first truncating it. Prove that almost surely:
    (a) t → u(X(t)) is lower semi-continuous;
    (b) if Q is the set of positive rationals, then

        lim_{Q∋t↓0} u(X(t)) = u(X(0));

    (c) the preceding relation is true without restricting t to Q, on account
        of (a);
    (d) t → u(X(t)) is right continuous, by the transfinite induction used in
        Theorem 6 of §3.4.

18. Let u(x) = E^x{e^{−T_A}} where A ∈ ℰ. Prove that u is lower semi-continuous
    (for the Brownian motion). Show that {e^{−t}u(X_t), ℱ_t, t ≥ 0} is a super-
    martingale, and that it is right continuous, hence also has left limits.
    [Hint: use the proof in Problem 17; we do not need the superharmonicity.]
    Theorem 9 follows from Problem 17 as before, and Theorem 10 from
    Problem 18.

4.6. The Role of the Laplacian

The Dirichlet problem deals with the solution of Laplace's equation Δφ = 0
in D with the boundary condition φ = f on ∂D. The inhomogeneous case
of this equation is called Poisson's equation: Δφ = g in D, where g is a given
function. Formally this equation can be solved if there is an operator Δ^{−1}
inverse to Δ, so that Δ(Δ^{−1}g) = g. We have indeed known such an inverse
under a guise, and now proceed to unveil it.

Suppose that g is bounded continuous and U|g| < ∞. Then we have

    lim_{t↓0} (1/t)(P_t − I)Ug = lim_{t↓0} −(1/t) ∫_0^t P_s g ds = −g,    (1)

where I ≡ P_0 is the identity operator. Thus the operator

    𝒜 = lim_{t↓0} (1/t)(P_t − I)    (2)

acts as an inverse to the potential operator U. It turns out that 𝒜 = ½Δ when
acting on a suitable class of functions specified below.
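The limit in (2) can be watched numerically in one dimension (an illustration with assumed test function and quadrature): for f(x) = sin x we have P_t f(x) = E[f(x + √t Z)] with Z standard normal, and (P_t f − f)/t should approach ½ f″(x) = −½ sin x as t ↓ 0.

```python
import numpy as np

def P_t(f, x, t, m=20001):
    # P_t f(x) = ∫ f(x + sqrt(t) z) phi(z) dz by trapezoidal quadrature on [-8, 8]
    z = np.linspace(-8.0, 8.0, m)
    w = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    vals = f(x + np.sqrt(t) * z) * w
    h = z[1] - z[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

x, t = 0.7, 1e-3
gen = (P_t(np.sin, x, t) - np.sin(x)) / t     # (1/t)(P_t - I)f(x)
half_lap = -0.5 * np.sin(x)                   # (1/2) f''(x) for f = sin
print(gen, half_lap)
```

Shrinking t further drives the two numbers together at rate O(t), as Theorem 1 below predicts.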
For k ≥ 0 we denote by C^{(k)} the class of functions which have continuous
partial derivatives of order ≤ k. Note that this does not imply boundedness of
the function or its partial derivatives. The subclass of C^{(k)} with bounded kth

partial derivatives is denoted by C_b^{(k)}; the subclass of C^{(k)} having compact
supports is denoted by C_c^{(k)}. Clearly C_c^{(k)} ⊂ C_b^{(k)}. These conditions may be
restricted to an open subset D of R^d, in which case the notation C^{(k)}(D), etc.
is used.

Theorem 1. Let f be bounded Lebesgue measurable and belong to C^{(2)} in an
open neighborhood of x. Then we have

    lim_{t↓0} (1/t)(P_t − I)f(x) = ½ Δf(x),    (3)

boundedly in the neighborhood.

Proof. Let x be fixed and let f belong to C^{(2)} in the ball B(x, r). By Taylor's
theorem with an integral remainder term, we have for ‖y‖ < r:

    f(x + y) − f(x) = Σ_{i=1}^d y_i f_i(x) + Σ_{i,j=1}^d y_i y_j ∫_0^1 f_{ij}(x + sy)(1 − s) ds,    (4)

where f_i and f_{ij} denote the first and second partial derivatives of f. Note
that they are bounded in B(x, r). Now write

    (1/t){P_t f(x) − f(x)} = (1/t) ∫_{R^d} p(t; 0, y)[f(x + y) − f(x)] dy;    (5)

put y = z√t, and split the range of the integral into two parts: ‖y‖ = ‖z‖√t <
r and ‖y‖ = ‖z‖√t ≥ r respectively. Observe that p(t; 0, z√t) d(z√t) =
p(1; 0, z) dz. The second part of the integral is bounded by

    (2/t) ‖f‖ ∫_{‖z‖√t ≥ r} p(1; 0, z) dz,

which converges to zero boundedly as t ↓ 0. Substituting (4) into the first
part of the integral, we obtain

    (1/t) ∫_{‖z‖<r/√t} p(1; 0, z) { √t Σ_{i=1}^d z_i f_i(x) + t Σ_{i,j=1}^d z_i z_j ∫_0^1 f_{ij}(x + sz√t)(1 − s) ds } dz

        = ∫_{‖z‖<r/√t} p(1; 0, z) Σ_{i,j=1}^d z_i z_j ∫_0^1 f_{ij}(x + sz√t)(1 − s) ds dz,    (6)

because by spherical symmetry we have for each i:

    ∫_{‖z‖<r/√t} p(1; 0, z) z_i dz = 0.

The right member of (6) may be written as

    Σ_{i,j=1}^d ∫_{R^d} p(1; 0, z) z_i z_j φ_{ij}(t, z) dz    (7)

where

    φ_{ij}(t, z) = ∫_0^1 f_{ij}(x + sz√t)(1 − s) ds,  if √t ‖z‖ < r;
                = 0,                                  if √t ‖z‖ ≥ r.

We have by bounded convergence,

    lim_{t↓0} φ_{ij}(t, z) = ∫_0^1 f_{ij}(x)(1 − s) ds = ½ f_{ij}(x);

and by the properties of the normal distribution,

    ∫_{R^d} p(1; 0, z) z_i z_j dz = δ_{ij}.

It follows that as t ↓ 0, the sum in (7) converges boundedly to

    ½ Σ_{i,j=1}^d δ_{ij} f_{ij}(x) = ½ Δf(x). □

We wish to apply Theorem 1 to f = Ug. When does this function belong
to C^{(2)}? First of all, we need U|g| < ∞. Since U1 ≡ +∞, it is not sufficient
that g be bounded. Let 𝔹_c denote the class of functions on R^d which are
Lebesgue measurable and bounded with compact supports. If g ∈ 𝔹_c then we
have shown in §3.7, for d = 3, that

    Ug(x) = (1/2π) ∫ g(y) / ‖x − y‖ dy

is bounded continuous in x. In fact Ug ∈ C^{(1)}. The same argument there
applies to any d ≥ 3. Thus in this case Ug is smoother than g. In order that
Ug belong to C^{(2)}, we need a condition which is stronger than C^{(0)} (continuity)
but weaker than C^{(1)}. There are several forms of such a condition but we
content ourselves with the following. The function g is said to satisfy a
Hölder condition iff for each compact K there exist α > 0 and M < ∞ (α and
M may depend on K) such that

    |g(x) − g(y)| ≤ M ‖x − y‖^α    (8)

for all x and y in K. If α = 1 this is known as a Lipschitz condition. If g also
has compact support then it is clearly bounded in R^d. We shall denote this
class of functions by ℍ_c. For a proof of the following result under slightly

more general assumptions, see Port and Stone [1], pp. 115 -118. An easy
ease of it is given in Exereise 1 below.

Theorem 2. In R^d, d ≥ 3, if g ∈ 𝔹_c then Ug ∈ C^{(1)}; if g ∈ ℍ_c then Ug ∈ C^{(2)}.

We are now ready to show that the two operators −Δ/2 and U act as
inverses to each other on certain classes of functions.

Theorem 3 (for R^d, d ≥ 3). If g ∈ ℍ_c then −(Δ/2)(Ug) = g. If g ∈ C_c^{(2)}, then
U(−(Δ/2)g) = g.

Proof. If g ∈ ℍ_c or g ∈ C_c^{(2)} then Ug ∈ C^{(2)} by Theorem 2. In either case g and
U|g| are both bounded continuous in R^d. Applying Theorem 1 with f = Ug,
together with (1), we obtain (Δ/2)(Ug) = −g. Next if g ∈ C_c^{(2)}, we have by
Theorem 1 applied to g and bounded convergence:

    P_t g − g = lim_{h↓0} ∫_0^t P_s ((P_h g − g)/h) ds = ∫_0^t P_s ( lim_{h↓0} (P_h g − g)/h ) ds
             = ∫_0^t P_s (Δ/2)g ds.    (9)

If K is the compact support of g, then |P_t g| ≤ ‖g‖ P_t 1_K → 0 as t → ∞ because
K is transient in R^d, d ≥ 3. Since |Δg| is bounded and has compact support,
the last member of (9) converges to U((Δ/2)g) as t → ∞. We have thus proved
that −g = U((Δ/2)g). □

Since C_c^(2) ⊂ H_c, Theorem 3 implies that −Δ/2 and U are inverses to each other on C_c^(2).

Next, recalling (26) of §4.5 but writing g for the f there, we know that if U|g| < ∞, then

    Ug = P_{D^c} Ug + G_D g,    (10)

where

    G_D g(x) = E^x {∫₀^{τ_D} g(X_t) dt},    (11)

and D is any bounded open set in R^d, d ≥ 3. This is then true if g ∈ H_c. Moreover, if we apply the Laplacian to all the terms in (10), we obtain by Theorem 3

    Δ(G_D g) = −2g in D,    (12)

because P_{D^c} Ug is harmonic in D, being the solution to the generalized Dirichlet problem for (D, Ug). If D is regular, then P_{D^c} Ug is continuous in D̄ as the classical solution to the Dirichlet problem. Hence G_D g is continuous in D̄ and vanishes on ∂D, by (10). We summarize the result as follows.
4.6. The Role of the Laplacian 193

Theorem 4 (for R^d, d ≥ 3). If g ∈ H_c and D is a bounded regular open set, then the function given in (11) is the unique solution of Poisson's equation −(Δ/2)φ = g in D which is continuous in D̄ and vanishes on ∂D.

To see the uniqueness, suppose φ is such a solution. Then φ − G_D g is harmonic in D, continuous in D̄, and vanishes on ∂D. Hence it vanishes in D̄ by the maximum (minimum) principle for harmonic functions.
We proceed to obtain similar results for R². Since the Brownian motion in R² is recurrent, the potential kernel U is of no use. We use a substitute culled from Port and Stone [1]. Let ‖x₀‖ = 1 and put for all x ∈ R²:

u*(x) = ∫₀^∞ p*_t(x) dt,  where p*_t(x) = p_t(x) − p_t(x₀).

We have

    u*(x) = (1/2π) ∫₀^∞ (e^{−‖x‖²s} − e^{−s}) s^{−1} ds
          = (1/2π) ∫₀^∞ ∫_{‖x‖²}^{1} e^{−rs} dr ds = (1/2π) ∫_{‖x‖²}^{1} r^{−1} dr = (1/π) log (1/‖x‖).    (13)

Now put

    p*_t(x, y) = p*_t(y − x),  u*(x, y) = u*(y − x),  P*_t(x, dy) = p*_t(x, y) dy;
    U*(x, dy) = u*(x, y) dy = (1/π) log (1/‖x − y‖) dy.    (14)

The kernel U* is the logarithmic potential. Note that not only is u*(x) positive or negative according as ‖x‖ ≤ 1 or ≥ 1, but the same is true of p*_t(x) for each t ≥ 0. If g is bounded with compact support, then U*g is bounded continuous in R². In this respect the analogy of U* with U in R^d for d ≥ 3 is perfect; Theorem 2 also goes over as follows.

Theorem 2'. In R², if g ∈ B_c then U*g ∈ C^(1); if g ∈ H_c then U*g ∈ C^(2).

The following computations should be handled with care because the quantities may be positive or negative. If g is Lebesgue integrable, then P*_s g is well defined for every s > 0.

It follows that if g ∈ H_c, so that ∫ |u*(x, y)g(y)| dy < ∞, we have

    U*g − P_t U*g = ∫₀^t P*_s g ds − lim_{A→∞} ∫_A^{A+t} P*_s g ds = ∫₀^t P*_s g ds    (15)

because |P*_s g| ≤ (2πs)^{−1} ∫ |g(y)| dy → 0 as s → ∞.

Theorem 5 (for R²). If g ∈ H_c then −(Δ/2)(U*g) = g. If g ∈ C_c^(2), then U*(−(Δ/2)g) = g.

Proof. The first assertion follows from Theorems 1 and 2' and (15), in exactly the same way as the corresponding case in Theorem 3. To prove the second assertion we can use an analogue of (9); see Exercise 3 below. But we will vary the technique by first proving the assertion when Δg ∈ H_c. Under this stronger assumption we may apply the first assertion to Δg to obtain

    −(Δ/2) U*(Δg) = Δg.

Hence Δ(g + ½U*(Δg)) = 0. Since both g and ½U*(Δg) are bounded, it follows by Picard's theorem (Exercise 5 of §4.3) that g + ½U*(Δg) = 0 as asserted.
For a general g ∈ C_c^(2), we rely on the following lemma, which is generally useful in analytical questions of this sort. Its proof is left as an exercise.

Lemma 6. Let g be bounded Lebesgue measurable and put for δ > 0:

    g_δ(x) = ∫ g(x − y) φ_δ(y) dy,

where φ_δ is defined in Exercise 6 of §4.3. Then g_δ ∈ C^(∞). If in addition g belongs to C^(2) in an open set D, then g_δ → g and Δg_δ → Δg, both boundedly in D, as δ ↓ 0.

We now return to the proof of the second assertion of Theorem 5. If g ∈ C_c^(2), then Δg and Δg_δ for all small δ > 0 have a common compact support K. Since |U*|1_K < ∞, we have U*(Δg_δ) → U*(Δg) by dominated convergence, because of the bounded convergence in Lemma 6. Since g_δ ∈ C_c^(∞), we have already proved that g_δ + ½U*(Δg_δ) = 0 for all δ > 0; as δ ↓ 0 this yields

    g + ½ U*(Δg) = 0. □
Next we consider Poisson's equation in R². Since U* is not a true potential, the analogue of (10) is in doubt. We make a detour as follows. Let g ∈ C_c^(2); then (9) is true. We now rewrite (9) in terms of the process:

    E^x{g(X_t)} − g(x) = E^x {∫₀^t (Δ/2) g(X_s) ds}.    (16)



If we put

    M_t = g(X_t) − ∫₀^t (Δ/2) g(X_s) ds,    (17)

then a general argument (Exercise 10) shows that {M_t, ℱ_t, t ≥ 0} is a martingale under P^x for each x. Let D be a bounded open set; then {M_{t∧τ_D}, ℱ_t, t ≥ 0} is a martingale by Doob's stopping theorem. Since g and Δg are bounded and E^x{τ_D} < ∞ for x ∈ D, it follows by dominated convergence that if x ∈ D:

    g(x) = E^x{M₀} = lim_{t→∞} E^x{M_{t∧τ_D}} = E^x{M_{τ_D}}
         = E^x{g(X(τ_D))} − E^x {∫₀^{τ_D} (Δ/2) g(X_s) ds}.    (18)

In particular if g = U*f where f ∈ H_c, then g is bounded, g ∈ C^(2) by Theorem 2', and (Δ/2)g = −f by Theorem 5. Hence (18) becomes

    U*f = P_{D^c} U*f + G_D f.    (19)

This is the exact analogue of (10). It follows that Theorem 4 holds intact in R².
We have made the assumption of compact support in several places above, such as Theorems 2 and 2', mainly to lighten the exposition. This assumption can be replaced by an integrability condition, but the following lemma (valid in R^d for d ≥ 1) shows that in certain situations there is no loss of generality in assuming compact support.

Lemma 7. Let f ∈ C^(k) (k ≥ 0) or satisfy a Hölder condition, and let D be a bounded open set. Then there exists g having compact support which satisfies the same condition and coincides with f in D.

Proof. Let D₀ be a bounded open set such that D₀ ⊃ D̄ and ρ(∂D, ∂D₀) = δ₀. Let 0 < δ < δ₀ and put

    ψ(x) = ∫_{D₀} φ_δ(x − y) dy,

where φ_δ is as in Lemma 6. Then ψ ∈ C^(∞). If ρ(x, D₀) > δ, then ψ(x) = 0. If x ∈ D, then ψ(x) = 1. It is clear that the function g = f · ψ has the required properties. □

EXAMPLE 1. Let D be a bounded regular open set in R^d, d ≥ 1. Solve the equation

    Δφ = 1 in D;  φ = 0 on ∂D.

By Lemma 7, there exists g ∈ C_c^(∞) such that g = 1 in D. Put

    φ = −½ G_D g.

We do not need Theorem 2 or 2' here, only the easier Exercise 1, to conclude that Ug or U*g belongs to C^(∞); hence G_D g ∈ C^(∞) by (10) or (19). We have by Theorem 4,

    Δφ = g = 1 in D.

It is clear that φ(x) = 0 if x ∈ D^c by the regularity of D.
Since g = 1 in D, we have φ(x) = −½E^x{τ_D}. Using (VIII) of §4.2, we have

    E^x{τ_D} = (1/d) E^x{‖X(τ_D)‖²} − (1/d) ‖x‖².    (20)

Since the first term on the right side of (20) is harmonic in D, we have

    Δφ(x) = −½ Δ E^x{τ_D} = −½ Δ(−‖x‖²/d) = 1,

a neat verification.
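Formula (20) can also be illustrated numerically (this sketch is not from the text): for the ball D = B(0, r) in R^d it gives E^0{τ_D} = r²/d, since ‖X(τ_D)‖ = r. A crude Euler simulation of the Brownian path (the step size and path count are arbitrary choices) reproduces this value up to Monte Carlo and discretization error.

```python
import math, random

def mean_exit_time(d, r, n_paths=1500, dt=1e-3, seed=7):
    # Monte Carlo estimate of E^0{tau_D} for D = B(0, r) in R^d:
    # simulate independent Gaussian increments of variance dt per coordinate
    # until the path leaves the ball, and average the exit times.
    rng = random.Random(seed)
    s = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = [0.0] * d
        t = 0.0
        while sum(c * c for c in x) < r * r:
            x = [c + s * rng.gauss(0.0, 1.0) for c in x]
            t += dt
        total += t
    return total / n_paths

# formula (20) for the unit disk in R^2 predicts E^0{tau} = 1/2
est = mean_exit_time(2, 1.0)
```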

The preceding discussion does not extend to R¹. In general, Brownian motion in R¹ is a rather special case owing to the simple topological structure of the line. On the analytical side, since Δ reduces to the second derivative many results simplify. For instance, it is trivial that given any continuous g there exists f such that Δf = g; think of the high-dimensional analogue! For these reasons it is possible to obtain far more specific results in R¹ by methods which do not easily generalize. We leave a couple of exercises to suggest the possibilities without developing them, because a creditable treatment of the Brownian motion in R¹ in its own right would take a volume or two. However, we will describe the solution of Poisson's equation by a different method attuned to one dimension, which is useful for other harder problems.
Let g be bounded continuous in (a, b), a < b; we put τ = τ_(a,b) and

    φ(x) = E^x {∫₀^τ g(X_t) dt}.

Recall the notation T_h = inf{t > 0 : |X(t) − X(0)| ≥ h}. For each x ∈ (a, b), let [x − h, x + h] ⊂ (a, b). Then P^x{T_h < τ < ∞} = 1. We have by the strong Markov property:

    φ(x) = E^x {(∫₀^{T_h} + ∫_{T_h}^{τ}) g(X_t) dt}
         = E^x {∫₀^{T_h} g(X_t) dt} + P_{T_h} φ(x).    (21)



It is obvious by symmetry that P^x{X(T_h) = x + h} = P^x{X(T_h) = x − h} = ½; hence

    P_{T_h} φ(x) = ½{φ(x + h) + φ(x − h)}.

Substituting into (21) we obtain

    (1/h²){φ(x + h) − 2φ(x) + φ(x − h)} = −(2/h²) E^x {∫₀^{T_h} g(X_t) dt}.    (22)

Without loss of generality we may suppose that g ≥ 0. Then it follows from (21) that

    φ(x) ≥ ½{φ(x + h) + φ(x − h)}.

Since φ is bounded this implies that φ is continuous, indeed concave (see e.g. Courant [1], Vol. 2, p. 326). Next as h ↓ 0, it is easy to see by dominated convergence that the right member of (22) converges to

    −2g(x),

because E^x{T_h} = h² by a well-known particular case of (20). Therefore, the left member of (22) converges as h ↓ 0 to −2g(x). Now this limit is known as the generalized second derivative. A classical result by Schwarz (a basic lemma in Fourier series, see Titchmarsh [1], p. 431) states that if φ is continuous and the limit exists as a continuous function of x in (a, b), then φ″ exists in (a, b) and is equal to the limit. Thus φ″ = −2g. □
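For g ≡ 1 on (a, b) the solution is φ(x) = E^x{τ_(a,b)} = (x − a)(b − x), and for this quadratic the second-difference quotient in (22) equals −2 = −2g(x) exactly, for every h. A minimal check (illustrative only):

```python
def phi(x, a=0.0, b=1.0):
    # phi(x) = E^x{tau_(a,b)} = (x - a)(b - x): the case g = 1 of the
    # equation phi'' = -2g, vanishing at both endpoints
    return (x - a) * (b - x)

def second_difference(f, x, h):
    # the symmetric second-difference quotient appearing in (22)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# for a quadratic the quotient equals the second derivative, -2, for all h
vals = [second_difference(phi, 0.3, h) for h in (0.2, 0.05, 0.01)]
```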

It turns out that in higher dimensions there are also certain substitutes for the Laplacian similar to the generalized second derivative above. Some of the analytical difficulties disappear if these substitutes are used instead; on the other hand, under more stringent conditions they can be shown to be the Laplacian after all, as in the case above.

EXAMPLE 2. Let 0 < a < b < ∞. Compute the expected occupation time of B(0, a) before the first exit time from B(0, b).
We will treat the problem in R², which is the most interesting case, since the expected total occupation time of B(0, a) is infinite owing to the recurrence of the Brownian motion. In the notation of (19), the problem is to compute G_D f when D = B(0, b), f = 1 in B(0, a) and f = 0 in B(0, b) − B(0, a). This f is only "piecewise continuous" but we can apply (19) to the two domains separately. We seek φ such that

    Δφ(x) = −2, for ‖x‖ < a;
    Δφ(x) = 0, for a < ‖x‖ < b;
    φ(x) = 0, for ‖x‖ ≥ b.

Clearly, φ depends on ‖x‖ only, hence we can solve the equations above by the polar form of the Laplacian given in (19) of §4.3. Let ‖x‖ = r; we obtain

    φ(r) = −r²/2 + c₁ + c₂ log r,  0 < r < a;
    φ(r) = c₃ + c₄ log r,  a < r < b.

How do we determine these four constants? Obviously φ(0) < ∞ gives c₂ = 0, and φ(b) = 0 gives c₃ = −c₄ log b. It is now necessary to have recourse to the first assertion in Theorem 2' to know that both φ and ∂φ/∂r are continuous. Hence φ(a−) = φ(a+), (∂φ/∂r)(a−) = (∂φ/∂r)(a+). This yields the solution:

    φ(r) = (a² − r²)/2 + a² log (b/a),  0 ≤ r ≤ a;
    φ(r) = a² log (b/r),  a ≤ r ≤ b.
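The solution of Example 2 can be checked numerically (a sketch, with a = 1, b = 2 as arbitrary sample values): the radial Laplacian φ″ + φ′/r should equal −2 inside B(0, a), vanish in the annulus, and φ should vanish at r = b.

```python
import math

def phi(r, a=1.0, b=2.0):
    # the piecewise radial solution found in Example 2
    if r <= a:
        return (a * a - r * r) / 2.0 + a * a * math.log(b / a)
    return a * a * math.log(b / r)

def radial_laplacian(f, r, h=1e-4):
    # polar form of the Laplacian in R^2 for a radial function: f'' + f'/r
    d1 = (f(r + h) - f(r - h)) / (2.0 * h)
    d2 = (f(r + h) - 2.0 * f(r) + f(r - h)) / (h * h)
    return d2 + d1 / r

inside = radial_laplacian(phi, 0.5)   # expect -2 inside B(0, a)
annulus = radial_laplacian(phi, 1.5)  # expect 0 in the annulus (harmonic)
```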

Exercises

1. In R^d, d ≥ 3, if g ∈ C_c^(k), k ≥ 1, then Ug ∈ C^(k). In R², if g ∈ C_c^(k), k ≥ 1, then U*g ∈ C^(k).

2. Prove Lemma 6. [Hint: use Green's formula for B(x, δ₀), δ₀ > δ, to show Δg_δ(x) = ∫ Δg(x − y) φ_δ(y) dy.]

3. Prove the analogue of (9) for R²:

    P_t g − g = ∫₀^t P*_s ((Δ/2)g) ds,

and deduce the analogue of (10) from it.

4. Is the following dual of (15) true?

5. Define u*(x) in R¹ in the same way as in R². Compute u*(x) and extend Theorem 5 to R¹. [Hint: u*(x) = 1 − |x|.]

6. Compute u^α(x) in R¹. Prove that if g is bounded continuous (in R¹), then for each α > 0, U^α g has bounded first and second derivatives, and (Δ/2)(U^α g) = αU^α g − g. Conversely, if g and its first and second derivatives are bounded continuous, then U^α((Δ/2)g) = αU^α g − g. [Hint: u^α(x) = e^{−|x|√(2α)}/√(2α). These results pertain to the definition of an "infinitesimal generator", which is of some use in R¹; see, e.g., Itô [1].]

7. Let D = (a, b) in R¹. Derive Green's function g_D(x, y) by identifying the solution of Poisson's equation for an arbitrary bounded continuous f:

    ∫ g_D(x, y) f(y) dy = E^x {∫₀^{τ_D} f(X_t) dt}.

[Hint: g_D(x, y) = 2(x − a)(b − y)/(b − a) if a < x ≤ y ≤ b; 2(b − x)(y − a)/(b − a) if a < y ≤ x < b.]

8. Solve the problem in Example 2 for R³.

9. Let μ be a σ-finite measure in R^d, d ≥ 2. Suppose Uμ is harmonic in an open set D. Prove that μ(D) = 0. [Hint: let f ∈ C^(2) in B ⊂ D and f = 0 outside B, where B is a ball; ∫_B (Uμ) Δf dm = ∫_{R^d} U(Δf) dμ; use Green's formula.]

10. The martingale in (17) is a case of a useful general proposition. Let {M_t, t ≥ 0} be associated with the Markov process {X_t, ℱ_t, t ≥ 0} as follows: (i) M₀ = 0; (ii) M_t ∈ ℱ_t; (iii) M_{s+t} = M_s + M_t ∘ θ_s, where {θ_s, s ≥ 0} is the shift; (iv) for each x, E^x{M_t} = 0. Then {M_t, ℱ_t, t ≥ 0} is a martingale under each P^x. Examples of M_t are g(X_t) − g(X₀) and ∫₀^t φ(X_s) ds, where g ∈ M, φ ∈ bℰ; and their sum. Condition (iii) is the additivity in additive functionals.
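The Green's function in the hint of Exercise 7 can be checked against the expected exit time: taking f ≡ 1 there, ∫_a^b g_D(x, y) dy should equal E^x{τ_(a,b)} = (x − a)(b − x). A small numerical confirmation (illustrative, stdlib only):

```python
def g_D(x, y, a=0.0, b=1.0):
    # Green's function of the interval (a, b) from the hint of Exercise 7
    if x <= y:
        return 2.0 * (x - a) * (b - y) / (b - a)
    return 2.0 * (b - x) * (y - a) / (b - a)

def integrate(f, lo, hi, n=20000):
    # midpoint rule
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

# with f = 1 the potential reduces to the expected exit time (x - a)(b - x)
x = 0.3
val = integrate(lambda y: g_D(x, y), 0.0, 1.0)
```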

4.7. The Feynman-Kac Functional and the Schrödinger Equation

In this section we discuss the boundary value problem for the Schrödinger
equation. This includes the Dirichlet problem in §4.4 as a particular case. The probabilistic method is based on the following functional of the Brownian motion process. Let q ∈ bℰ, and put for t ≥ 0:

    e_q(t) = exp {∫₀^t q(X_s) ds}.    (1)

where {X_t} is the Brownian motion in R^d, d ≥ 1. Let D be a bounded domain, τ_D the first exit time from D defined in (1) of §4.4, and f ∈ ℰ₊(∂D). We put for all x in R^d:

    u(x) = E^x {e_q(τ_D) f(X(τ_D))}.    (2)

Since the integrand above belongs to ℱ, u is universally measurable (hence Lebesgue measurable) by Exercise 3 of §2.4. In fact u ∈ ℰ because τ_D ∈ ℱ⁰ by Exercise 6 of §4.2, and e_q(t)f(X(t)) as a function of (t, ω) belongs to ℬ × ℱ⁰. The details are left as Exercise 1. Of course u ≥ 0 everywhere, but u may be +∞. Our principal result below is the dichotomy that if f ∈ bℰ₊ then either

u ≡ +∞ in D, or u is bounded in D̄. This will be proved in several steps. We begin with a theorem usually referred to as Harnack's inequality.
Let U denote the class of functions defined in (2), for a fixed D, and all q and f as specified, subject furthermore to ‖q‖_D ≤ Q, where Q is a fixed constant. For φ ∈ ℰ and A ∈ ℰ, ‖φ‖_A = sup_{x∈A} |φ(x)|; when A is the domain of definition of φ, it may be omitted from the notation.

Theorem 1. Each u in U is either identically +∞ in D, or everywhere finite in D. For each compact subset K of D, there is a positive constant A depending only on D, K and Q, such that for all finite u in U, and any two points x and x' in K, we have

    u(x') ≤ A u(x).

Proof. Fix a δ > 0 so small that

    sup_{x∈R^d} E^x{exp(Q T_δ)} ≤ 2,    (3)

where T_r is defined in (10) of §4.2. This is possible by Exercise 11 of §4.2. We now proceed to prove that if u(x₀) < ∞, and 0 < r < δ ∧ (ρ₀/2), where ρ₀ is the distance from x₀ to ∂D, then for all x ∈ B(x₀, r) we have

    u(x) ≤ 2^{d+2} u(x₀).    (4)

For 0 < s ≤ 2r, we have T_s < τ_D under P^{x₀}, since B(x₀, 2r) ⋐ D. Hence by the strong Markov property

    u(x₀) = E^{x₀}{e_q(T_s) u(X(T_s))}.    (5)

The next crucial step is the stochastic independence of T_s and X(T_s) under any P^x, proved in (XI) of §4.2. Using this and (3) we obtain

    u(x₀) ≥ ½ E^{x₀}{u(X(T_s))} = (1/(2σ(s))) ∫_{S(x₀,s)} u(y) σ(dy).    (6)

The step leading from (3) to (7) in §4.3 then yields

    u(x₀) ≥ (1/(2v(2r))) ∫_{B(x₀,2r)} u(y) dy.    (7)

For all x ∈ B(x₀, r), we have B(x, r) ⊂ B(x₀, 2r) ⋐ D. Hence we obtain similarly:

    u(x) = E^x{e_q(T_s) u(X(T_s))} ≤ E^x{e^{QT_s} u(X(T_s))} ≤ 2 E^x{u(X(T_s))}.    (8)

This leads to the first inequality below, and then it follows from (7) that

    u(x) ≤ (2/v(r)) ∫_{B(x,r)} u(y) dy ≤ (2/v(r)) ∫_{B(x₀,2r)} u(y) dy ≤ (4v(2r)/v(r)) u(x₀).    (9)

We have thus proved (4), in particular that u(x) < ∞. As a consequence, the set of x in D for which u(x) < ∞ is open in D. To show that it is also closed relative to D, let x_n → x_∞ ∈ D where u(x_n) < ∞ for all n. Then for a sufficiently large value of n, we have ‖x_∞ − x_n‖ < δ ∧ (ρ(x_n, ∂D)/2). Hence the inequality (4) is applicable with x and x₀ replaced by x_∞ and x_n, yielding u(x_∞) < ∞. Since D is connected, the first assertion of the theorem is proved.
Now let D₀ be a subdomain with D̄₀ ⊂ D. Let 0 < r < δ ∧ (ρ(D₀, ∂D)/2). Using the connectedness of D₀ and the compactness of D̄₀, we can prove the existence of an integer N with the following property. For any two points x and x' in D₀, there exist n points x₁, . . . , x_n in D₀ with 2 ≤ n ≤ N + 1, such that x = x₁, x' = x_n, ‖x_{j+1} − x_j‖ < r and ρ(x_{j+1}, ∂D) > 2r for 1 ≤ j ≤ n − 1. A detailed proof of this assertion is not quite trivial, nor easily found in books, hence is left as Exercise 3 with a sketch of solution. Applying (4) successively to x_j and x_{j+1}, 1 ≤ j ≤ n − 1 (≤ N), we obtain Theorem 1 with A = 2^{(d+2)N}. If K is any compact subset of D, then there exists D₀ as described above such that K ⊂ D₀. Therefore the result is a fortiori true also for K. □

Theorem 1 is stated in its precise form for comparison with Harnack's inequality in the theory of partial differential equations. We need it below only for a fixed u. The next proposition is Exercise 11 of §4.2 and its proof is contained in that of (X) of §4.2. It turns the trick of transforming Theorem 1 into Theorem 3.

Proposition 2. If m(D) tends to zero, then E^x{e^{Qτ_D}} converges to one, uniformly for x ∈ R^d.

Theorem 3. Let u be as in (2), but suppose in addition that f is bounded on ∂D. If u ≢ ∞ in D, then u is bounded in D̄.

Proof. Let K be a compact subset of D, and E = D − K. It follows from Proposition 2 that given any ε > 0, we can choose K to make m(E) so small that

    sup_{x∈R^d} E^x{e^{Qτ_E}} ≤ 1 + ε.    (10)

Put for x ∈ E:

    u₁(x) = E^x{e_q(τ_D) f(X(τ_D)); τ_E < τ_D},
    u₂(x) = E^x{e_q(τ_D) f(X(τ_D)); τ_E = τ_D}.

By the strong Markov property, since τ_D = τ_E + τ_D ∘ θ_{τ_E} on the set {τ_E < τ_D}:

    u₁(x) = E^x{τ_E < τ_D; e_q(τ_E) u(X(τ_E))}.

On {τ_E < τ_D}, X(τ_E) ∈ K. Since u ≢ ∞ in D, u is bounded on K by Theorem 1. Together with (10) this implies

    u₁(x) ≤ (1 + ε) ‖u‖_K.

Since f is bounded we have by (10)

    u₂(x) ≤ (1 + ε) ‖f‖.

Thus for all x ∈ E we have

    u(x) = u₁(x) + u₂(x) ≤ (1 + ε)(‖u‖_K + ‖f‖).    (11)

Since D̄ = Ē ∪ K, u is bounded in D̄. □

We shall denote the u(x) in (2) more specifically by u(D, q, f; x).

Corollary. Let f ∈ ℰ. If u(D, q, |f|; x) ≢ ∞ in D, then u(D, q, f; x) is bounded in D̄.

In analogy with the notation used in §4.6, let B(D) denote the class of functions defined in D which are bounded and Lebesgue measurable; H(D) the class of functions defined in D which are bounded and satisfy (8) of §4.6 for each compact K ⊂ D. Thus bC^(1)(D) ⊂ H(D) ⊂ bC^(0)(D). Let us state the following analytic lemma.

Proposition 4. If g ∈ B(D), then G_D g ∈ C^(1)(D). For d = 1 if g ∈ bC^(0)(D), for d ≥ 2 if g ∈ H(D), then G_D g ∈ C^(2)(D).

For d = 1 this is elementary. For d ≥ 2, the results follow from (10) and (19) of §4.6, via Theorems 2 and 2' there, with the observation that G_D g = G_D(1_D g) by (11) of §4.6. Note however that we need the versions of these theorems localized to D, which are implicit in their proofs referred to there. For a curious deduction see Exercise 4 below.

Theorem 5. For d = 1 let q ∈ bC^(0)(D); for d ≥ 2 let q ∈ H(D). Under the conditions of the Corollary to Theorem 3, the function u(D, q, f; ·) is a solution of the following equation in D:

    (Δ/2)u + qu = 0.    (12)

Proof. Since u is bounded in D̄, we have for x ∈ D:

    E^x {∫₀^{τ_D} |q(X_t) u(X_t)| dt} ≤ ‖q‖_D ‖u‖_D E^x{τ_D} < ∞.    (13)

[Note that in the integral above the values of q and u on ∂D are irrelevant.] This shows that the function 1_{{t<τ_D}} |q(X_t)u(X_t)| of (t, ω) is dominated in the integration with respect to the product measure m₁ × P^x over [0, ∞) × Ω, where m₁ denotes the Lebesgue measure on [0, ∞). Therefore, we can use Fubini's theorem to transform the following integral as shown below:

    E^x {∫₀^{τ_D} q(X_t) u(X_t) dt} = E^x {∫₀^∞ 1_{{t<τ_D}} q(X_t) E^{X_t}[e_q(τ_D) f(X(τ_D))] dt}
        = E^x {f(X(τ_D)) ∫₀^∞ 1_{{t<τ_D}} q(X_t) exp[∫_t^{τ_D} q(X_s) ds] dt}.

The third member above is obtained from the second by first reversing the order of the two integrations, then applying the Markov property at each t under E^x, noting that τ_D = t + τ_D ∘ θ_t on {t < τ_D}. We can now perform the trivial integration with respect to t to obtain

    E^x {∫₀^{τ_D} q(X_t) u(X_t) dt} = E^x{f(X(τ_D))[e_q(τ_D) − 1]} = u(x) − H_D f(x).

This result may be recorded in previous notation as follows:

    u = H_D f + G_D(qu).    (14)

Recall that H_D f is harmonic in D hence belongs to C^(∞) there. Since qu ∈ B(D), it follows from Proposition 4 that u ∈ C^(1)(D). Then qu ∈ bC^(0)(D) for d = 1, qu ∈ H(D) for d ≥ 2, hence u ∈ C^(2)(D). Taking the Laplacian in (14) and using (12) and (19) of §4.6, we conclude that −2qu = Δu in D as asserted. □
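The "trivial integration with respect to t" used in the proof rests on the deterministic identity ∫₀^T q(t) exp(∫_t^T q(s) ds) dt = exp(∫₀^T q(s) ds) − 1, valid along each path. A numerical check with an arbitrary sample integrand (illustrative only; the integrand and step counts are my own choices):

```python
import math

def midpoint(f, lo, hi, n=1000):
    # midpoint rule
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

q = lambda t: math.cos(3.0 * t)   # stand-in for t -> q(X_t) along one path
T = 2.0

lhs = midpoint(lambda t: q(t) * math.exp(midpoint(q, t, T)), 0.0, T)
rhs = math.exp(midpoint(q, 0.0, T)) - 1.0
```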
The equation (12) is the celebrated Schrödinger equation of quantum physics. For q ≡ 0 it reduces to Laplace's equation.
The next result includes Theorem 2 of §4.4 as a particular case.

Theorem 6. Under the hypotheses of Theorem 5, if z ∈ ∂D and z is regular for D^c, and if f is continuous at z, then we have

    lim_{D∋x→z} u(x) = f(z).    (15)

Proof. We may suppose f ≥ 0 by using f = f⁺ − f⁻. Given ε > 0, there exists r > 0 such that

    sup_{x∈R^d} E^x{e^{2QT_r}} ≤ 1 + ε,    (16)

    sup_{y∈B(z,2r)∩∂D} |f(y) − f(z)| ≤ ε.    (17)

Let x ∈ D ∩ B(z, r). Put

    u₁(x) = E^x{T_r < τ_D; e_q(τ_D) f(X(τ_D))} = E^x{T_r < τ_D; e_q(T_r) u(X(T_r))},
    u₂(x) = E^x{τ_D ≤ T_r; e_q(τ_D) f(X(τ_D))}.

We have X(T_r) ∈ D on {T_r < τ_D}; hence by Theorem 3 followed by Schwarz's inequality:

    u₁(x) ≤ E^x{T_r < τ_D; e^{QT_r}} ‖u‖_D ≤ P^x{T_r < τ_D}^{1/2} E^x{e^{2QT_r}}^{1/2} ‖u‖_D.

As x → z, this converges to zero by (8) of §4.4, and (16) above. Next we have for x ∈ B(z, r):

    |u₂(x) − f(z)| ≤ E^x{τ_D ≤ T_r; e_q(τ_D)} ε
        + E^x{τ_D ≤ T_r; |e_q(τ_D) − 1|} |f(z)|
        + P^x{T_r < τ_D} |f(z)|.

The first term on the right is bounded by E^x{e^{QT_r}} ε ≤ (1 + ε)ε by (16) and (17); the second by |1 − E^x{e^{±QT_r}}| |f(z)| ≤ ε|f(z)| by (16); and the third converges to zero as x → z by (8) of §4.4. The conclusion (15) follows from these estimates. □
Putting together Theorems 3, 5 and 6, we have proved that for every f ∈ C(∂D), a C^(2)(D)-solution of the equation (12) is given by u(D, q, f; ·), provided that u(D, q, 1; ·) ≢ ∞ in D. Moreover, if D is regular, then this solution belongs to C^(0)(D̄). Thus we have solved the Dirichlet boundary value problem for the Schrödinger equation by the explicit formula given in (2). It turns out that under the conditions stated above, this is the unique solution. For the proof and other closely related results we refer the reader to the very recent paper by Chung and Rao [3]. A special case is given in Exercise 7 below.

Let us remark that contrary to the Laplace case, the uniqueness of solution in the Schrödinger case is in general false. The simplest example is given in R¹ by the equation u″ + u = 0 in D = (0, π). The particular solution u(x) = sin x vanishes on ∂D! In general, unicity depends on the size of the domain D as well as the function q. Such questions are related to the eigenvalue problem associated with the Schrödinger operator. Here we see that the quantity u(D, q, 1; x) serves as a gauge in the sense that its finiteness for some x in D ensures the unique solvability of all continuous boundary value problems.

Exercises

1. Prove that the function u in (2) is Borel measurable.

2. If f ∈ ℰ₊ in (2), then either u ≡ 0 in D or u > 0 in D. [Here D need not be bounded.]

3. (a) Let D be a domain. Then there exist domains D_n strictly contained in D and increasing to D. [Hint: let U_n be the union of all balls at distance > 1/n from ∂D. Fix an x₀ in D and let D_n be the connected component of U_n which contains x₀. Show that ⋃_n D_n is both open and closed relative to D.]
(b) Let D₀ be a bounded domain strictly contained in D. Let 0 < r < ½ρ(D₀, ∂D), and D̄₀ ⊂ ⋃_{i=1}^N B(x_i, r/2), where all x_i ∈ D̄₀. Define a connection "~" on the set of centers S = {x_i, 1 ≤ i ≤ N} as follows: x_i ~ x_j iff ‖x_i − x_j‖ < r. Use the connectedness of D₀ to show that for any two elements x_a and x_b of S, there exist distinct elements x_{i_j}, 1 ≤ j ≤ l, such that x_{i_1} = x_a, x_{i_l} = x_b, and x_{i_j} ~ x_{i_{j+1}} for 1 ≤ j ≤ l − 1. In the language of graph theory, the set S with the connection ~ forms a connected graph. [This formulation is due to M. Steele.]
(c) Show that the number N whose existence is asserted at the end of the proof of Theorem 1 may be taken to be the number N in (b) plus one.
In the following problems D is a bounded domain in R^d, d ≥ 1; q ∈ bℰ.
4. (a) Let D₁ be a subdomain ⋐ D. If the support of g is contained in D − D₁, then G_D g is harmonic in D₁.
(b) Let D₁ ⋐ D₂ ⋐ D. If g ∈ H(D) then there exists g₁ such that g₁ ∈ H_c(R^d) and g₁ = g in D₁, g₁ = 0 in R^d − D₂. [Hint: multiply g by a function in C^(∞) as in Lemma 7 of §4.6.]
(c) Prove Proposition 4 by using Theorems 2 and 2' of §4.6. [This may be putting the horse behind the cart as alluded to in the text, but it is a good exercise!]

5. If u(D, q, 1; ·) ≢ ∞ in D, then it is bounded away from zero in D. Moreover there exists a constant C > 0 such that

    u(D, q, 1; x) ≥ C u(E, q, 1; x)

for all subdomains E ⋐ D, and all x ∈ D.
6. Prove that u(D, q, 1; ·) ≢ ∞ if and only if for all x in D we have

[Hint: for some t₀ > 0 we have two constants C₁ > 0 and C₂ > 0 such that C₁ ≤ E^x{e_q(τ_D); 0 < τ_D ≤ t₀} ≤ C₂ for all x ∈ D; now estimate E^x{e_q(τ_D); nt₀ < τ_D ≤ (n+1)t₀} and add.]

7. Suppose D is regular and E^x{e^{‖q‖τ_D}} < ∞ for some x ∈ D. Then for any f ∈ C(∂D), u(D, q, f; ·) is the unique solution of (12) with boundary value f. [Hint: let φ be a solution which vanishes on ∂D. Show that φ = G_D(qφ). Prove by induction on n that

    φ(x) = (1/n!) E^x {∫₀^{τ_D} q(X_t) (∫₀^t q(X_s) ds)ⁿ φ(X_t) dt}

for all n ≥ 0. Now estimate |φ(x)|.]


8. In R¹ let D = (a, b), and q ∈ C(D̄). Put

    u_z(x) = E^x{e_q(τ_D); X(τ_D) = z}

for z = a and b. Prove that if u_a ≢ ∞ in D, then both u_a and u_b are bounded in D. [Hint: to prove u_b ≢ ∞, use the following result from the elementary theory of differential equations. Either the boundary value problem for the equation φ″ + qφ = 0 in D has a nonzero solution with φ(a) = φ(b) = 0; or it has a unique solution with any given values φ(a) and φ(b). This is due to M. Hogan.]

An extension of Exercise 8 to higher dimensions has been proved by Ruth Williams. If ∂D is sufficiently smooth, A is an "open" subset of ∂D, and u(D, q, 1_A; ·) ≢ ∞ in D, then u(D, q, 1; ·) ≢ ∞ in D.
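The connected-graph formulation in Exercise 3(b) above is essentially a breadth-first-search claim: in the graph on the centers with x_i ~ x_j iff ‖x_i − x_j‖ < r, any two vertices are joined by a chain. A small sketch (the function names and the sample configuration are my own):

```python
from collections import deque

def chain(centers, r, a, b):
    # breadth-first search in the graph of Exercise 3(b): vertices are the
    # centers, with an edge iff the two centers are at distance < r
    def near(i, j):
        return sum((p - q) ** 2 for p, q in zip(centers[i], centers[j])) < r * r
    prev = {a: None}
    queue = deque([a])
    while queue:
        i = queue.popleft()
        if i == b:                      # reconstruct the chain x_a ~ ... ~ x_b
            path = []
            while i is not None:
                path.append(i)
                i = prev[i]
            return path[::-1]
        for j in range(len(centers)):
            if j not in prev and near(i, j):
                prev[j] = i
                queue.append(j)
    return None                         # the graph is not connected

# six centers along a line with spacing 0.4 < r = 0.5: a connected graph
pts = [(0.4 * k, 0.0) for k in range(6)]
p = chain(pts, 0.5, 0, 5)
```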

NOTES ON CHAPTER 4

§4.1. The theory of spatially homogeneous Markov processes is an extension of that of random walks to the continuous parameter case. This is an old theory due largely to Paul Lévy [1]. Owing to its special character, classical methods of analysis such as the Fourier transform are applicable; see Gihman and Skorohod [1] for a more recent treatment.
For the theory of dual processes see Blumenthal and Getoor [1], which improved on Hunt's original formulation. Much of the classical Newtonian theory is contained in the last few pages of the book in a condensed manner, but it is a remarkable synthesis not fully appreciated by the non-probabilists.
§4.2. For lack of space we have to de-emphasize the case of dimension d = 1 or 2 in our treatment of Brownian motion. So far as feasible we use the general methods of Hunt processes and desist from unnecessary short-cuts. More coverage is available in the cognate books by K. M. Rao [1] and Port and Stone [1]. The former exposition takes a more general probabilistic approach while the latter has more details on several topics discussed here.
§4.3 and §4.4. The force of the probabilistic method is amply illustrated in the solution of the Dirichlet problem. The reader who learns this natural approach first may indeed wonder at the tour de force of the classical treatments, in which some of the basic definitions, such as the regularity of the boundary, would appear to be rather contrived.

As an introduction to the classical viewpoint the old book by Kellogg [1] is still valuable, particularly for its discussion of the physical background. A simpler version may be found in Wermer [1]. Ahlfors [1] contains an elementary discussion of harmonic functions and the Dirichlet problem in R², and the connections with analytic functions. Brelot [1] contains many modern developments as well as an elegant (French style) exposition of the Newtonian theory.
The proof of Theorem 8 by means of Lemma 9 may be new. The slow pace adopted here serves as an example of the caution needed in certain arguments. This is probably one of the reasons why even probabilists often bypass such proofs.
§4.5. Another method of treating superharmonic functions is through approximation with smooth ones, based on results such as Theorem 12; see the books by Rao and Port-Stone. This approach leads to their deeper analysis as Schwartz distributions. We choose Doob's method to give further credence to the viability of paths. This method is longer but ties several items together. The connections between (sub)harmonic functions and (sub)martingales were first explored in Doob [2], mainly for the logarithmic potential. In regard to Theorems 2 and 3, a detailed study of Brownian motion killed outside a domain requires the use of Green's function, namely the density of the kernel Q defined in (15), due to Hunt [1]. Here we regard the case as a worthy illustration of the general methodology (∂ and all).
Doob proved Theorem 9 in [2] using H. Cartan's results on Newtonian capacity. A non-probabilistic proof of the Corollary to Theorem 10 can be found in Wermer [1]. The general proposition that "semipolar implies polar" is Hunt's Hypothesis (H) and is one of the deepest results in potential theory. Several equivalent propositions are discussed in Blumenthal and Getoor [1]. A proof in a more general case than the Brownian motion will be given in §5.2.
§4.6. The role of the infinitesimal generator is being played down here. For the one-dimensional case it is quite useful; see, e.g., Itô [1] for some applications. In higher dimensions the domain of the operator is hard to describe and its full use is neither necessary nor sufficient for most purposes. It may be said that the substitution of integral operators (semigroup, resolvent, balayage) for differential ones constitutes an essential advance of the modern theory of Markov processes. Gauss and Koebe made the first fundamental step in identifying a harmonic function by its averaging property (Theorem 2 in §4.3). This is indeed a lucky event for probability theory.
§4.7. This section is added as an afterthought to show that "there is still sap from the old tree". For a more complete discussion see Chung and Rao [3], where D is not assumed to be bounded but m(D) < ∞. The one-dimensional case is treated in Chung and Varadhan [1]. The functional e_q(t) was introduced by Feynman with a purely imaginary q in his "path integrals", and by Kac [1] with a nonpositive q. Its application to the Schrödinger equation is discussed in Dynkin [1] with q ≤ 0, and in Khas'minskii [1] with q ≥ 0. The general case of a bounded q requires a new approach, partly due to the lack of a maximum principle.
Let us alert the reader to the necessity of a meticulous verification of domination, such as given in (13), in the sort of calculations in Theorem 5. Serious mistakes have resulted from negligence on this score. For instance, it is not sufficient in this case to verify that u(x) < ∞, as one might be misled to think after a preliminary (illicit) integration with respect to t.
Comparison of the methods used here with the classical approach in elliptic partial differential equations should prove instructive. For instance, it can be shown that the finiteness of u(D, q, 1; ·) in D is equivalent to the existence of a strictly positive solution belonging to C^(2)(D) ∩ C^(0)(D̄). This is also equivalent to the proposition that all eigenvalues λ of the Schrödinger operator, written in the form (Δ/2 + q)φ = λφ, are strictly negative; see a forthcoming paper by Chung and Li [1]. Further results are on the way.
Chapter 5

Potential Developments

5.1. Quitting Time and Equilibrium Measure


In the next two sections our aim is to establish a number of notable results in classical potential theory by the methods developed in the earlier chapters. In contrast with the preceding sections of the last chapter, the horizon will be widened to reach far beyond Brownian motion. We shall deal with Hunt processes satisfying certain general hypotheses, and the results will apply to classes of potential kernels including the M. Riesz potentials (see Exercises below) as well as the logarithmic and Newtonian. There are usually different sets of overlapping conditions to yield a particular result. The theory of dual processes in Blumenthal and Getoor [1] gives a framework which has a considerable range, but it is a long and sometimes technically complicated passage. Here instead we offer a more direct and relatively new approach to a number of selected topics, with a view to further development. The case of Brownian motion will be discussed toward the end.
We begin in this section with the analysis of the sampie path when and
where it quits a transient set. This will lead to a basic result known as the
equilibrium principle in potential theory.
Let {X" t;;:: O} be a Hunt process on (E,0"), as in ~3.1. eWe banish (~from
sight if not from thought.] lt turns out that initially we need only a Markov
process with paths which have left limits in (0,00), and for which hitting
times have the measurability properties valid for a Hunt process. The other
hypotheses for a Hunt process will not be explicitly used until further notice.
Let ( be a O'-finite measure on 0", and suppose that the potential kernel U
has a density u with respect to (, as folIows:

U(x, A) = L u(x, y)((dy), XE E, A E 6. (I)

°
Of course u ;;:: and u E 6' x 0"; but no further condition will be imposed
untillater. Let A E 6 and
5.1. Quitting Time and Equilibrium Measure 209

see §3.3 for the measurability of LlA- We define

,( )_{sUP{t>OIXt(W)EA}, if W E Ll A ,
IA W -
0, if W E Q - Ll A ;

contrast this with

inf {t > 0 I X t E A}, if W E Ll A'


TA(w ) = {
00, if W E Q - Ll A-

TA is the hitting time, YA is the quitting time (or last exit time) of A (denoted
by LA in (21) of §3.6). Their dual relationship is obvious from the above.
Recall that A is transient if and only if

(3)

We have for each t > 0:


r > t} -- {TA . et <
f\ A 00)'
J' (4)

from which it follows that YA E :Fx' The next result is trivial but crucial:

{YA > O} = {TA< oo}. (5)

For x E E, BEg, define the quitting kerne I LA as follows:


(6)

and write as usual for.f E g+ :

From here on we fix the transient set A and omit it from the notation until
further notice. Let .f E blC (bounded continuous), I:: > 0, and consider the
following approximation of[(X(y- )):

(7)

Put
1
.1, (x) = -[;
'1',' 1 < "{ <
pxfO - I::}', (8)

!/Jr. is universally measurable. Now a basic property of Y may be expressed


by means of the shift as follows:

(9)
210 5. Potential Developments

For if }'A ~ t, then TA 0, =XJ, namely "r'A' BI = 0; if ( A >


C t, then j'A =,
t + YA 01 , As a consequence of (9), we have for each t 2': 0:
0

{t < y~ t + I:} = {O < y' Bf ~ I;}. (10)

Using this we apply the simple Markov property at time t to see that the
integrand in (7) is equal to

Hence the quantity in (7) is equal to U(f 'Ij;,)(x), Let us verify that this is
uniformly bounded with respcct to G, Since f is bounded it is sufficient to
check the following:

= r
J{y>Oj
(~G Jrl r c)'
y
1 dt) PX(dw) ~P X (}, > O} ~ 1.

Here we have used Fubini's theorem to reverse the order of integration of


the double integral above with respect to px x m, where m is the Lebesgue
measure on [0, XJ), Exactly the same evaluation of the quantity in (7) yields

r
Jli>O}
1 rl'
~ Jü-c)' f(Xf)dtPX(dw), (ll)

Since fE bC, and X has a left limit at}' ( <:1:;), as I; 1 0 the limit ofthe quantity
in (11) is equal to

r
Jh'>O}'
f'(X(,'-))PX(dw)
I
= EXI"
tJ.
> 0; f(X("-
I
))1,
j

We have therefore proved that

fL(x,dY)f(Y) = !im U(NJ(x) = !im fu(x,Y)f(y)Mc(dY) (12)


clO F.\ 0

where M c is the measure given by

Mc(dy) = Ij;,(y)~(dy),

Now we make the following assumptions on the function u, which will be


referred to as (R),
5.1. Quitting Time and Equilibrium Measure 211

(i) For each X E E, y -+ u(x, y)-l is finite continuous;


(R)
(ii) u(x, y) = .::fJ if and only if x = y; we put u(x, x) -1 = O.
lt is clear that condition (i) is equivalent to the foBowing: u(x, y) > 0 for aB
x and y in E; and for each x E E, y -+ u(x, y) is extended continuous in E. Here
are some preliminary consequences ofthe conditions. Since Ju(x, y)M,(dy) <
00 by (12), it follows from (ii) that M,({x}) = 0 for every x; namely Mr. is
diffuse. Next since infYEK u(x, y) > 0 for each compact K by (i), we have
Mr.(K) < 00. Thus M, is aRadon measure. Now let cp E Ce; then the function
y -+ cp(y)u(x, y)-l belongs to Ce for each x, by (i). Substituting this for I in
(12), we obtain

SL(x,dy)
( ) cp(y) =
.
hm
Scp(y)M,(dy) (13)
u x, y ,tO

because u(x, y)u(x, y) - 1 = 1 for y E E - {x}, and the point set {x} may be
ignored since M, is diffuse. Since for each x, L(x, .) is a finite measure and
u(x, y)-l is bounded on each compact, L(x, dy)u(x, y)-l is aRadon measure.
It is weIl known that two Radon measures are identical if they agree on all
cp in Ce (Exercise 1). Hence the relation (13) implies that there exists a single
Radon measure p on g such that

r L(x, dy) = p(B), Vx E E, BEg. (14)


JB u(x, y)

Let us pause to marvel at the fact that the integral above does not depend
on x. This suggests an ergodic phenomenon which we shall discuss in §5.2.
Since M, is aRadon measure for each G > 0, it follows also from (13) that
pis the unique vague limit of M, as G ! 0; but this observation is not needed
here. We are now going to turn (14) around:

L(x, B) = SB u(x, y)p(dy), xEE,BEg. (15)

When B = {x} in (14) the left member is equal to zero by condition (ii).
Hence p is diffuse. Next putting B = {y} with y i= x in (14) we obtain

L(x, {y}) = u(x,y)p({y}) = O. (16)

For an arbitrary BEg, we have

L(x,B) = l B\{x)
u(x,y)
L(x,dy)
(
u x, Y
) +L(x,Bn{x})

= S B\{x)
u(x, y)p(dy) + L(x, B n {x})

= SB u(x, y)p(dy) + L(x, B n {x}) (17)


212 5. Potential Developments

since fJ. is diffuse. Therefore (15) is true if and only if

"Ix: L(x, {x}) = r{y > 0; X(y-) = x} = o. (18)

In the proof of (18) we consider two cases according as the point x is


holding or not. Recall that x is a holding point if and only if alm ost every
path starting at x remains at x for a strictIy positive time. It follows by the
zero-one law that if x is not holding point, then almost every path starting
at x must be in E - {x} for some rational value of t in (0, b) for any b > O.
by right continuity ofthe path (this is the first explicit use ofright continuity).
Define a sequence of rational-valued optional times {Sn' n ::::: I} as follows:

lf x is not holding, then pX[lim n Sn = O} = 1. We have

(Simple Markov property is sufficient here since Sn is countably valued.)


Since X(Sn) -=f. x, the right member of (19) equals zero by (16) with x and J'
interchanged. Letting n --> w in (19) we obtain (18).
If x is a holding point, then it is dear that

0< U(x, {x}) = u(x, x)~({x}).

It follows firstly that ~({x}) > 0 and secondly U(x, {x}) = w. Together with
the hypothesis that x is holding the latter condition implies that {x} is a
recurrent set under px. This is a familiar resuIt in the theory ofMarkov chains
(where the state space is a countable set). Moreover, another basic resuIt in
the latter theory asserts that it is almost impossible for thc path to go from
a recurrent set to a transient set. 80th resuIts can be adapted to the case in
question, the details of which are left in Exercise 8 below (strong Markov
property is needed). In condusion, we have proved that if x is holding then
PX{TA < w} = 0, which implies (18). We summarize the results abovc with
an important addition as folIows.

Theorem 1. Let X be a Hunt process with the potential kerne! in (1) satisfving
conditions (i) and (ii). Then for each transient set A, there exists aRadon
measure fJ.A such that for anJ' x E E a/1(/ BE ß:

(20)

If almost all paths of the process are continuous, then fJ.A Iws support in r:A.
In general if A is open then fJ.A has support in A.
5.1. Quitting Time and Equilibrium Measure 213

Proof. If the paths are continuous, then clearly we have on LI A: X{y A- ) =


X(YA) E cA. In general if Ais open, then on Ll A it is impossible for X{YA) E A
by right continuity of the paths. Hence there is a sequence of values of t
strict1y increasing to YA at which X(t) E A; consequently the left limit
X(y A - ) E A. By (6), L(x,') has support in cA in the first case, and in A in the
second case. Hence so does ~A by (14). 0

It is essential to see why for a compact A the argument above does not
show that ~A has support in A. For it is possible on Ll A that X(YA) E A while
X(y A -)i A; namely the path may jump from anywhere to cA and then
quit A forever.

Corollary. Wehave

(21)

This follows from (20) when B = E, in view of (5).


The measure ~A is called the equilibrium measure for A. Its exact deter-
mination in electrostatics is known as Robin's problem. Formula (14) above
gives the stochastic solution to this problem in R d, d ;:::: 3. In order to amend
Theorem 1 and its corollary so that ~A will have support in A for an arbitrary
Borel set A, there are several possibilities. The following expedient is due to
lohn B. Walsh. Consider the left-hitting time TA defined in (23) of §3.3, and
the corresponding [eft quitting time YA ;

YA'(w) = sup{t > o[ Xt_{w) E A} (22)

where sup 0 = 0. Since for a Hunt process left limits exist in (0, 00), and
t -+ X I_ is left continuous, we have X(y A -) E A, regardless if the sup in (22)
is attained or not. This is the key to the next result.

Theorem 2. Under the hypotheses of Theorem 1 there exists aRadon measure


~A' with support in A such that for every x E E and BEg:

P{YA >O;X{YA-)EB} = fu(x,Y)~A'(dY). (23)

Proof. Let us beware that "left" notions are not necessarily the same as
"right" ones! The definition oftransience is based on XI = X t +, not X t -, and
it is not obvious that the transient set A will remain "transient" for the left
limits of the process. The latter property means pX{yA' < oo} = 1 for every
x. That this is indeed true is seen as follows. We have by Theorem 9 of §3.3,
TA ;:::: TA a.s. for any Borel set A. On the other hand, the left analogue of (4)
is true: {YA' > t} = {TA' BI < oo}. It follows that
0
214 5. Potential Developmcnts

Therefore, {YA = oo} C {YA = oo}, namely A is left-transient ifit is (right)-


transient. [Is the convcrse true?J
The rest ofthe proofofTheorem 2 is exactly the same as that ofTheorem
1, and the question of support of PA is settled by the remark preceding the
theorem. D

Since left notions are in general rather different from their right analogues,
Theorem 2 would require re-thinking of several basic notions such as "left-
regular" and "left-polar" in the developments to foIlow. Fortunately under
Hypotheses (B) and (L), we know by Theorem 3 of §3.8 that TA = T~ a.s.
It foIlows that YA = YA: a.s. as weIl (Exercise 6) and so under these hypotheses
Theorem 2 contains Theorem 1 as a particular case. We state this as a
coroIlary. We shaIl denote the support of a measure P by ~.

Corollary. Under Hypothesis (B), (20) and (21) hold with ~A C A.

Why is Hypothesis (L) not mentioned? Because it is implied by the con-


ditions of the potential kerneI, provided we use ~ as the reference measure
(Exercise 2).
It is known that Hypothesis (B) holds under certain duality assumptions
(see Meyer [3J). In order to state a set of conditions in the context of this
section under which Hypothesis (B) holds, we need the next proposition.

Proposition 3. U nder the conditions of Theorem 1, fär each Y the function


x -> u(x, y) is superaveraging. If it is lower semi-continuous then it is excessiüe.

Proof. For each fE M+, Uf is excessive by Proposition 2 of §2.1. Hence


for each t > 0:

P,Uf(x) = f P,u(x, y)f(y)~(dy) s Su(x, y)f(yK(dy)


= Uf(x), (24)
where
P,U(x, y) = f P,(x, dz)u(z, y). (25)

Since (24) is true for aIl fE M+, it foIlows that for each x there exists N x
with ~(N xl = 0 such that if y 1= N x:

P,u(x, y) s u(x, y).

Now the measure ~ charges every nonempty open set (why?). Hence for an
arbitrary y we have Yn 1= N x' Yn -> y, so that u(z, Yn) -> u(z, y) for every z by
condition (i). Therefore by Fatou:

P,U(x, y) s lim P,u(x, Yn) S !im u(x, Yn) = u(x, y).


n n
5.1. Quitting Time and Equilibrium Measure 215

This proves the first assertion of the proposition. The second follows from
Proposition 3 of §3.2 and the remark following it. 0

Since u(·, y) is superaveraging, we denote its regularization by g(., y):

g(x, y) = lim Ptu(x, y).


tlO

For each y, x --+ g(x, y) is excessive by Proposition 5 of §3.2. Observe that the
function g may not satisfy the conditions (i) and (ii), in particular g(x, x) may
not be infinite. The following results is proved in Chung and Rao [1] but
the proof is too difficult to be given here. A simpler proof would be very
interesting indeed.

Theorem 4. Under the conditions of Theorem 1, if we assume also that


(a) each compact is transient,
(b) for each x, 1{(x, x) = + 00,
then Hypothesis (B) is true.

In view of Proposition 3, condition (b) above hold if u(·, y) is lower semi-


continuous for each y. In this case condition (a) is satisfied if the process is
transient according to the definition given in §3.2, by part of Theorem 2 of
§3.7. We shall return to these conditions in §5.2.
It is clear in the course of this book that the systematic use of hitting times
(balayage) constitutes a major tool in the theory of Hunt processes. By
comparison, the notion of quitting times was of recent origin and its poten-
tials remain to be explored. The next result serves as an illustration of the
method. Further work along this line should prove rewarding.

Theorem 5. Assume Hypothesis (B) as well as (R). Let An be transient sets


such that An 1 A and nn
An = A. Then we have for each x E AC u Ar and each
fEbC:
!im LAn(x,f) = LA(x,j). (26)
n

In other words the sequence of measures LAjx,·) converges tightly to LA(x,·).

Proof. We begin with the following basic relation. For each x E AC u Ar:

(27)

This has been proved before in this book (where?), though perhaps not
exactly in this form. The reader should ponder why the conditions on x and
on the transience of An are needed, as weil as the quasi left continuity of the
216 5. Potential Developments

process. Now we write (27) in terms of quitting times as folIows:

n (Oe'
l fAn > O} = 1f yA > 0 1J . (28)

We will omit below the diche "alm ost surely" when it is obvious. Clearly
YA" 1 and YA" ~ }'k Let f3 = lim n ['An' Then on {YA > O}, we have X(y An - ) =
X(y"';: n - ) E An as shown above on account of Hypothesis (B). It follows by
right continuity that if {3 < YA" for all 11 then

X(fJ) = lim X(y A" - ) E nAn =


n
A. (29)

Thus ß ~ YA and so ß = Yk The last equation is trivial if {3 = i'A" for some


Next, we prove that on {YA > O} we have
11.

lim X(YA" -) = X(}'A-)' (30)


n

This is trivial if Xis continuous at }'k The general argument below is due to
John B. Walsh and is somewhat delicate. If X has a jump at lA' this jump
time must be one of a countable collection of optional times {cx n }, by Exercise
5 of§3.1. This means that for alm ost every w, }'A(W) = cxn(w) where n depends
on w. We can apply the strong Markov property at !Y. n for all n to "cover
YA" whenever X is discontinuous there. (We cannot apply the property at
}'A!) Two cases will be considered for each CXn' written as CX below.

Case 1. X(!Y.) ~ A. Applying (27) with x = X(cx), we see that since TA (Ja =
CIJon {cx = YA}, there exists N(w) < CIJ such that TA" ., 0, =CIJ, hence !Y. = ;'A"
for n ~ N(w). Thus (30) is trivially true because i'A" = j'A for all sufficiently
large values of 11.
Case 2. X(!Y.) E A. Then on {cx = YA} we must have X(cx) E A\A r because the
path does not hit A at any time strictly after cx. Since ()( is a jump time and
A\A r is semipolar by Theorem 6 of §3.5, this possibility is ruled out under
Hypothesis (B), by Theorem 1 (iv) of §3.8. This ends the proof of (30).
Now let x E AC u Ar and{ E blC. Then we have by (28), {I'A" > O} 1 [}'A > O}
PX-a.s.; hence by (30) and bounded convergence:

Recalling (6) this is the assertion in (26). 0

Corollary. I{ Al is compact then

(32)
5.1. Quitting Time and Equilibrium Measure 217

Proof. We may suppose At =l=E. There exists xottA b and u(xo,·)-t is


bounded continuous on At. Hence we have by (26).

lim f LAn(xo,dy) = f LA(xo,dy)


n An U(X O' Y) A U(XO' Y)

which reduces to (32). D


The corollary will be applied in the next section to yield important results,
under further conditions on the potential kernel.

Exercises
1. Let (E,0") be as in §1.1. A measure 11 on 0" is called aRadon measure iff
Il(K) < 00 for each compact sub set K on E. Prove that if 11 and v are
two Radon measures such that Sf dll = Sf dv for all f E Ce> then 11 == v.
[Hint: use Lemma 1 of §1.1 to show that Il(D) = v(D) < 00 for each
relatively compact open D; then use Lemma 2 of §1.1 to show that
Il(B n D) = v(B n D) for all BE 0". Apply this to a sequence of Dk such
that D k i E. This exercise is given here because it does not seem easy to
locate it in textbooks.]
In Exercises 2 and 5, we assume the conditions of Theorem 1.
2. Let A be transient. If IlA(E) = 0, then Ais polar. Conversely if P A1(X) = 0
for some x, then IlA(E) = O.
3. Assume U(x, K) < 00 for each x and compact K. Let f be any excessive
function. If fex) > 0 for some x then fex) > 0 for all x. [Hint: use
Proposition 10 of §3.2.]
4. Assume each compact set is transient. Let f be an excessive function. If
fex) < 00 for some x then the set {x Ifex) = oo} is a polar set. [Hint:
use Theorem 7 of §3.4 and U > 0.]
5. Under the same conditions as Exercise 4 prove that each singleton is
polar. [Hint: let Dn be open relatively compact, Dn 11 {x o}; P v) = U Iln
with Illn c Vn- Show that (a subsequence of) {Iln} converges vaguely to
A6 xo ' and AU(X, x o) s limn U Iln(x) S 1 so that ), = O. For x I tt Vb limn
Ulln(X I ) = 0 because U(x I ,') is bounded continuous in VI' Now use
Exercise 2.]
6. Prove that for a Hunt process, YA ?: YA a.s. for any A E 0". lf TA = T~
a.s. then YA = YA a.s.
7. Prove that a polar set is left-polar; a thin set is left-thin.
8. The following is true for any Hunt process (even more gene rally). Let x
be a holding but not absorbing point. Define the sojourn time S = inf {t >
0IX t =1= x}, and the re-entry time R = inf{t > SIXt = x}. Show that
218 5. Potential Developmcnts

PX{S> t} = e- A1 for some }.: 0< A < Cf..;. Let PX{R < Cf.]} = p. Show
that U(x, {x}) = ),-1(1 - p)-I; hence U(x,{x}) = 00 if and only if {x}
is "recurrent under p Let A E $; prove that if P TA < oo} > 0 then
X
". X {

PX{TA < R} > O. Consequently if {x} is recurrent under p x , then so is A.


Note: without further assumption on the process, a holding point x may
not be hit under pY, y =1= x; hence the qualification "und er p x " above.
9. Derive a formula for EX{e- ;,j{(X j.-); Y> O} for }. > 0, }' = 'rA as in
Theorem 1.
10. Find the equilibrium measure J1A when A is a ball in the Newtonian case
(name1y Brownian motion in R 3 ).

5.2. Some Principles of Potential Theory

In this section we continue the study of a Hunt process with the potential
kernel given in (1) of §5.1, under further conditions. Recall that Hypothesis
(L) is in force with ~ as the reference measure. The principal new condition
is that of symmetry:
V(x, y): u(x, y) = u(y,x). (S)

Two alternative conditions of transience will be used:

Vx, V compact K: U(x,K) < 00;

each compact set is transient.

Two alternative conditions of regularity (besides (R) in §5.1) will be used:

Vy: x -+ u(x, y) is lower semi-continuous;

Vy: x -+ u(x, yi is excessive.

There are various connections between the conditions. Recall that (TI) and
(U I) imply (T 2 ) by Theorem 2 of§3.7; (R) and (U tl imply (U 2) by Proposition
3 of §5.1 ; (R), (T 2) and either (U I) or (U 2) imply Hypothesis (B) by Theorem 4
of §5.1 ; (R) and (S) imply (U 1) trivially.
Readers who do not wish to keep track of the various conditions may
assume (R), (S) and (T 1)' and be assured that all the results below hold true
with the possible exception ofthose based on the energy principle. However,
it is one of the fascinating features of potential theory that the basic results
are interwoven in the manner to be illustrated below. This has led to the
development ofaxiometic potential theory by Brelot's schoo!.
The Equilibrium Principle of potential theory may be stated as folIows.
For each compact K, there exists a finite measure J1K with support in K such
5.2. Some Principles of Potential Theory 219

that
(E)

The principle holds under (R), (T 2) and either (U 1) or (U 2), because then
Hypothesis (B) holds and (E) holds by the Corollary to Theorem 2 of §5.1.
We are interested in the mutual relationship between several major
principles such as (E). Thus we may assurne the validity of (E) itself in some
of the results below. Let us first establish a basic relation known as Hunt's
"switching formula" in the duality theory. The proof is due to K. M. Rao.
From here on in this section the letters K, D, and Bare reserved respectively
for a compact, relatively compact open, and (nearly) Borel set.

Theorem 1. Under (S), (Tl) and (U 2), we have for each B:

(1)

Proof. We begin by noting that we have by (S)

PBU(X, y) = f u(y, z)PB(x, dz); (2)

hence for each x, y --> PBu(x, y) is excessive by simple analysis (cf., Exercise 8
of §4.l). On the other hand for each y, x --> PBu(x, y) is excessive by (U 2) and
a general property of excessive function (Theorem 4 of §3.4).

Let K c D, we have

by Theorem 3 of §3.4 and (Tl); namely

fK U(X, y)~(dy) = fK PDu(x, y)~(dy) < 00. (3)

Since U(X, y) 2:: PDu(x, y) by Theorem 4 of §3.4, and K is arbitrary for a fixed
D, it follows from (3) that

U(X, y) = PDu(x, y) (4)

for each x and ~-a.e. y in D. Both members of (4) are excessive in y for each
fixed x, hence (4) holds for each x in E and y E D, by the Corollary to Pro-
position 4 of §3.5. [This property of U is of general importance.]
Nextwehave
220 5. Potential Developments

because P K(X, .) is supported by K, and u( y, z) = P DU( y, z) for z E K c D by


(4) as just proved. It follows by Fubini and (S) that the quantity in (5) is equal
to

f PD(y,dw) f PK(x,dz)u(z, w) =f PIJ{y,dw)PKu(x, w)

~f PJ)(y,dw)u(x, w)

= PDu(y,x).

Thus we have proved that for all x and y, and K cD:

(6)

Integrating this with respect to ~(dx) over an arbitrary compact set C, we have

Taking a sequence of such sets D n 11 K, we have PD"UlcCr) 1 PKUlcCr),


provided y ~ K\Kr, by the Carollary to Theorem 5 of §2.4. Using this se-
quence in (7), we obtain

y ~ K\K r • (8)

It is essential to notice that the last member above is finite by (Tl) because
it does not exceed U(y, Cl. Since C is arbitrary we deduce from (8) that

(9)

far each y rf= K\K r , and ~-a.e. x. Both members of (9) are excessive in each
variable when the other variable is fixed and ~(K\Kr) = 0 because K\,K' is
semipolar. Therefore (9) holds in fact for all x and y: since it is symmetrie in
x and y, we conclude that

(10)

Now given Band x, there exist compacts K n c B, such that T K" 1 TB,
PX-a.s. by Theorem 8(b) of §3.3. Since for each y, u(X" y) is right continuous
in tunder (U 2) by Theorem 6 of §3.4, it follows by Fatou and (10) that

Interchanging x and y we obtain (1). D


5.2. Some Principles of Potential Theory 221

We are now ready to establish another major principle, known as the


Maria-Frostman Maximum Principle as folIows. For any o--finite measure fl
supported by the compact K, we have

sup Ufl(X) = sup Ufl(X) (M)


XE E XEK

This principle must be intuitively obvious to a physicist, since it says that the
potential induced by acharge is greatest where the charge lies. Yet its proof
seems to depend on some sort of duality assumption on the distribution of
the charge. This tends to show that physical processes carry an inherent
duality suggested by the reversal of time.

Theorem 2. The maximum principle (M) halds under (S), (Tl) and (U 2)'

Praaf. There is nothing to show if the right member of (M) is infinite, so we


may suppose it finite and equal to M. For c > 0 define the set

B = {x E EI Ufl(X) ~ M + c}. (12)

Since U fl is excessive under (U 2)' it is finely continuous (Corollary 1 to


Theorem 1 of §3.5) and so B is finely closed. The fine continuity of U fl also
implies that K c B r (why?). Therefore we have by Theorem 1 and Fubini:

PßUfl(X) = f Pßu(x, y)fl(dy) = f PßU(y,X)fl(dy)

= f U(y,X)fl(dy) = Ufl(X), (13)

because ltt c K and for each y E K, Pßu(y,x) = u(y,x) trivially since y E Br.
Ün the other hand, P ß(x, .) has support in B by Theorem 2 of §3.4, since B
is finely closed. Hence we have

PßUfl(X) = f Pß(x,dy)Ufl(y) ~ sup


YEß
Ufl(y) ~ M + c. (14)

Putting (13) and (14) together we conclude that Ufl ~ M since c is arbitrary.
o
The argument leading to (13) is significant and recorded below.

Corollary. Far any o-~finite measure fl such that ltt c B r , we have

(15)
222 5. Potential Developments

In conjunction with (E), an interesting consequence of(15) is the following


particular case of Hypothesis (B):

(16)

In some situations this can take the place of Hypothesis (Bl.


The next principle will be named the Polarity Principle.For Brownian
notion it is proved in Theorem 10 of §4.5. Its general importance was recog-
nized by Hunt. The depth of this principle is evident from the following
formulation, in which we follow a proof of K. M. Rao's with some
modification.

Theorem 3. Assume (T 2 ), Hypothesis (B), and both the equilibrium principle


(E) and the maximum principle (M). Then the polarity principle holds as
folIows:
Every semipolar set is polar. (P)

Remark. For a thin set this says that if it cannot be hit immediately from
some point, then it cannot be hit at all from any point!
Proo.f. By Theorem 8(b) of §3.3 and Theorem 6 of §3.5, it is sufficient to prove
that each compact thin set is polar. Let K be such a set, and define for each
n;:C: 1:

Since P K I is finely continuous, each An is finely closed. Let L be a compact


subset of An" Then PLI (x) S I - Iln for XE L; hence for all x by (M). This
implies EX{e- h } S 1 - Iln for all x; hence L is polar by Exercise 6 of§3.8.
where Hypothesis (B) is used. Thus An is polar and so is A = U,:~ t An" Put
C = K - A. Then C is finely closed (why?); and we have

"Ix E C: P c l(x) = 1. (17)

Put T t = T K; if IX is a countable ordinal which has the predecessor IX - 1, put

T, = T,-t + T K BT , . , :
C

if IX is a limit countable ordinal, put

where ß ranges over all ordinals preceding IX. The following assertions
are almost surely true for eachlX. Since A is polar, we have X(T,) E C on
5.2. Some Principles of Potential Theory 223

{Ta< oo}. For a limit ordinal a this is a consequence of quasi left continuity.
In view of (17), we have for each a:

n {T
00

{T, < oo} = dn < CXl}. (18)


n=l

On the other hand, since K is thin, strong Markov property implies that
Ta < Ta + 1 on {T, < CXl}. It follows as in the proof of Theorem 6 of §3.4
that there exists a first countable ordinal a* (not depending on w) for which
Ta' = 00. This a* must be a limit ordinal by (18). Therefore, on the set of (j)
where K is ever hit, it is hit at a sequence of tim es which are all finite and
increase to infinity. This contradicts the transience of K. Hence K cannot
be hit at all, so it is polar. 0

Corollary. For any B E~' the set B\B r is polar.

In particular for any open D, the set ofpoints on cD which are not regular
for D is a polar set, because it is a subset of DC\(DT. This is the form of the
C

result known in classical potential theory as Kellogg-Evans theorem. The


general form of(P) goes back to Brelot, Choquet and H. Cartan. Its relevance
to the Dirichlet problem has been discussed in Proposition 11 of §4.5.
Now we return to the setting of §5.1, and adduce some important con-
sequences about the equilibrium measure under the new assumptions (S),
(T 2 ) and (M). For each transient set A, we define

(19)

to be the capa city of A. Under (T 2), C(K) is defined for each compact K
and is a finite number since flK is aRadon measure. For any two O"-finite
measures )'1 and ,1.2' we put

(20)

The condition of O"-finiteness is made to ensure the applicability of Fubini's


theorem. Under (S) it then follows that

(21 )

whether finite or infinite. This symmetry plays a key role in wh at follows.


The next result is a characterization of the capa city weil known in classical
potential theory.

Theorem 4. Let v be any O"-finite measure with ~ c K, such that

Vx E K: Uv(x):::; 1. (22)
224 5. Potential Developments

Then we have
v(K) ~ C(K). (23)

Proo{ Let K c D; under (T 2 ) both K and D are transient and so flD as weil
as flK exist. Under (S) we have as a case of (20):

(24)

The left member above eguals v(K) because V flD = P Dl = 1 on K. The


right member does not exceed flD(E) because VI' ~ 1 by (22) and (M). Now
let D n HK; then we have proved that v(K) ~ flDJE) = C(D n ) for each 11,
and conseguently (23) follows by the Corollary to Theorem 5 of §5.1 0

Note that if the ineguality in (22) holds for all x E E, then the intervention
of (M) is not needed in the preceding proof. This is the case for Corollary 1
below.

Corollary 1. If K I C K l , then C(Kd ~ C(K 2 ).

Corollary 2. If v is a (J~fil1ite measure on ,x such that U v <x, v-a.e, then


v does not charge any polar set.

Proof. Let An = {xl Vv(x) ~ n}. If \' charges a polar set it must charge a
polar subset of An for some n, and so also a compact polar subset of An,
because V is necessarily inner regular (Exercise 1). Let VI be the restriction
ofv to such a set K, and 1'2 = 1'1/11. Then VVl ~ Ion K, hence by the theorem
v2 (K) ~ C(K). But C(K) = 0 by Exercise 2 of §5.1. Thus 1'2 == 0 and \' does
not charge K after all. 0

Let ), and v be two finite measures on ff; then i - v is weil defined by


(), - v)(B) = A(B) - v(B); integration with respect to}. - v is also weil defined
by Ud(), - v) = Udi - Udv when both Ud}. and Udv are finite. We
extend the definition (20) as folIows. The energy of }, - \' is defined to be

0. - v,), - v) = 0., ),) - <J., v) - <vJ) + <v, v) (25)

provided that all four terms on the right are finite. This amounts to the
condition that (). + v, ), + v) < XJ. Let us remark that if )'1 - },2 = )'3 - )'4
and both (}'I - A2 , )'1 - )'2) and <i
3 - },4, },3 - A4 ) are defined then they
are egual. If the energies of AI - ;'2 and of ;'3 - },4 are defined then so is
the energy of (AI - A2 ) - (A 3 - J.4) = (/'1 + A4 ) - (Al + ;'3)' In particular
we can define the energy of a signed measure using its Hahn-Jordan decom-
position, but there is no need for that. We are ready to announce another
principle.
5.2. Some Principles of Potential Theory 225

Energy Principle. For any two finite measures A and v on ß, we have 0 - v,


A - v) ~ 0 provided it is defined; if it is equal to zero then A == v.

Theorem 5. Suppose that the Hunt proeess has a transition density funetion
Pt with respeet to ~ whieh is symmetrie, namely for t > 0:

Pt(X, dy) = Pt(x, y)~(dy); Pt(X, y) = Pt(Y, x) (26)

for all (x, y); where Pt ~ 0, Pt E ß x ß. Then the energy principle holds.

Proof. We have

u(X, y) = 2 Iooo PZt(x, y) dt

and

PZt(X, y) = I Pt(X, z)piz, y)~(dz).


Writing Jl. for A - v purely for abbreviation (not as "signed measure"), we
have if <Jl., Jl.) < 00:

<Jl.,Jl.) = II Jl.(dx)u(x, Y)Jl.(dy)


= 2 Iooo dt I [II Jl.(dx)pt(x, Z)Pt(Z, Y)Jl.(dy) ] ~(dz)
= 2 Iooo dt I(I Jl.(dX)Pt(X,Z)y ~(dz),
by the symmetry of Pt. Hence <Jl., Jl.) ~ o. If <Jl., Jl.) = 0 then there exists
a set N of t with m(N) = 0 and for each t rt N a set Zt of z with ~(Zt) = 0
such that

I Jl.(dx)pt(x, z) = 0 if t rt N, z rt Zt· (27)

Let f E bC; it follows from (27) that

I Jl.(dx}Ptf(x) = 0, if t rt N. (28)

Since X is right continuous, lim tLo Prf(x) = f(x) for each x. Taking a
sequence of t rt N, t 1 0 in (28) we obtain

Vf E bC: If dA - If dv = 0,
where both integrals are finite. Hence A == v (see Exercise 1 of §5.1). D
226 5. Potential Developments

From here on all the results so far proved will be used wherever appro-
priate. Let K be a fixed compact set and Φ(K) denote the class of finite
measures ν such that the support of ν is contained in K and ⟨ν, ν⟩ < ∞. It
follows from Corollary 2 to Theorem 4 that such a ν does not charge any
polar set, since ∫ Uν dν < ∞. For the equilibrium measure μ_K, we have

(29)

Hence μ_K ∈ Φ(K). But then μ_K does not charge the set A below:

(30)

because A is polar as in the proof of Theorem 3. Consequently the inequality
in (29) is an equality:

(31)

We know (Exercise 2 of §5.1) that C(K) = 0 if and only if K is polar. If
C(K) > 0 let ν ∈ Φ(K) with ν(K) = C(K). Let us compute

(32)

Since ν does not charge A we have

(33)

It follows from (31) and (33) that the energy of ν − μ_K is equal to ⟨ν, ν⟩ −
C(K). We have therefore proved the following result under the energy
principle.

Theorem 6. Let C(K) > 0. For any ν ∈ Φ(K) with ν(K) = C(K), we have

C(K) ≤ ⟨ν, ν⟩   (34)

where equality holds if and only if ν = μ_K.

In other words, the equilibrium measure for K is the unique measure
among all measures on K having the same total mass which minimizes the
energy. This is the way it was found by Gauss and Frostman for Newtonian
and M. Riesz potentials (see Exercises 7-12 below). Here we have turned
the table around by first establishing the existence of the equilibrium measure
in §5.1 under fairly general conditions, then verifying its minimization
property as shown above. Although more restrictive assumptions, in partic-
ular the symmetry of the transition density, are imposed to ensure Theorem 6,
these conditions still cover the classical cases with room to spare.

Another uniqueness theorem for the equilibrium measure based on
duality (Theorem 3 of §4.1) is given in Exercise 2.

The word "equilibrium" connotes in physics a "steady state", namely
a stationary distribution of some random process. Such an identification
seems missing in the literature and is supplied below. Although it is a kind
of "Columbus' egg" stood on its head, there may be possibilities for explora-
tion: an extension to the asymmetric case is discussed in Chung and Rao [2].

We assume that the compact K is not a polar set and is regular: K = Kʳ.
The latter condition amounts to the absence of the polar set A in (30). In
this case the quitting kernel L_K of §5.1 is a strict probability kernel, namely:

(35)

Hence we can construct a discrete parameter homogeneous Markov process
on the state space K, with L_K as its transition probability function. Under
(R) and (S), it is easy to see that the family of functions {u(x, ·), x ∈ K},
restricted to K, is uniformly integrable with respect to μ_K. For if x_n ∈ K
and x_n → x, then lim_n u(x_n, y) = u(x, y) for each y; at the same time
L(x_n, K) = L(x, K) for all n. The asserted uniform integrability then follows
from an elementary result (Theorem 4.5.4 of Course). This is stronger than
the following condition, known as Doeblin's Hypothesis (D); see Doob [1],
p. 192. There exists ε > 0 such that if A ⊂ K, A ∈ ℰ with μ_K(A) ≤ ε, then
L_K(x, A) ≤ 1 − ε for all x ∈ K. It is an easy case of Doeblin's general results
that there exists a probability measure π on K such that for every A ⊂ K,
A ∈ ℰ and every x ∈ K:

lim_n (1/n) Σ_{j=1}^n L_K^{(j)}(x, A) = π(A);   (36)

where the L_K^{(j)}'s are the iterates of L_K = L_K^{(1)}. The measure π is an invariant
(stationary) measure for L_K, namely πL_K = π. Now under (S) there is an
obvious invariant measure obtained by normalizing the equilibrium
measure:

μ'_K = (1/μ_K(E)) μ_K = (1/C(K)) μ_K.   (37)

It follows that π = μ'_K by the remarks below.
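A finite-state sketch makes the identification concrete. Below u is an arbitrary symmetric, strictly positive matrix of our own choosing: solving uμ = 1 produces equilibrium weights μ, the kernel L(x, dy) = u(x, y)μ(dy) is then a strict probability kernel as in (35), and the normalized measure of (37) is invariant for it, by exactly the symmetry computation indicated in the text.

```python
import numpy as np

# Finite analogue of (35)-(37): a symmetric "potential" matrix u, equilibrium
# weights mu solving u mu = 1, the quitting kernel L(x, dy) = u(x, y) mu(dy),
# and the normalized equilibrium measure as its stationary distribution.
# The matrix u below is an arbitrary symmetric illustrative choice.

u = np.array([[1.00, 0.20, 0.10, 0.05],
              [0.20, 1.00, 0.20, 0.10],
              [0.10, 0.20, 1.00, 0.20],
              [0.05, 0.10, 0.20, 1.00]])

mu = np.linalg.solve(u, np.ones(4))      # equilibrium: (u mu)(x) = 1 on K
assert np.all(mu > 0)

L = u * mu[None, :]                      # L(x, dy) = u(x, y) mu(dy)
assert np.allclose(L.sum(axis=1), 1.0)   # strict probability kernel, as in (35)

pi = mu / mu.sum()                       # normalized equilibrium measure (37)
print(np.allclose(pi @ L, pi))           # invariance: pi L = pi, by symmetry of u
```

The invariance check is the matrix version of the remark that πL_K = π: (πL)_j = Σ_i π_i u(i, j)μ_j = π_j since u is symmetric and uμ = 1.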


The results above can be improved in several respects, for an arbitrary
probability kernel on a compact space K of the form L(x, dy) = u(x, y)μ(dy)
where μ(K) < ∞. If u > 0 and satisfies (U1) (without (R) or (S)), then there
is a unique invariant probability measure π and the Cesàro mean in (36)
may be replaced by a simple limit; see Chung and Rao [2]. Thus the hypoth-
eses of symmetry imposed in the preceding discussion are merely expedient
to ensure the polarity and energy principles. A theory of energy for non-
symmetric kernels would be a major advance in this circle of ideas.

Let us now return briefly to the case of Brownian motion in R³, which
corresponds to the Newtonian potential. This is also referred to as the
electrostatic potential induced by charges on conductors. In this case the
equilibrium principle may be stated as follows. For each bounded Borel set
A, there exists a finite measure μ_A supported by ∂A such that

P_A 1(x) = (1/2π) ∫_{∂A} μ_A(dy)/‖x − y‖.   (38)

This representation is very fruitful. First, it yields the following formula for
the capacity of A:

C(A) = 2π lim_{‖x‖→∞} ‖x‖ P_A 1(x).   (39)

It follows at once that if A₁ ⊂ A₂, then C(A₁) ≤ C(A₂). Next, if A_n is a
sequence of compacts decreasing to A, then the convergence in (39) is uniform
with respect to n by an easy estimate based on (38). Consequently we can
interchange limits in x and n to obtain

lim_n C(A_n) = C(A).   (40)

Finally, for any two Borel sets A and B we have the inequality:

P_{A∪B} 1(x) − P_B 1(x) ≤ P_A 1(x) − P_{A∩B} 1(x)   (41)

which is proved as follows. For each x, P_{A∪B}1(x) − P_B 1(x) is the probability
that the path from x hits A ∪ B but not B. Such a path must hit A but not
A ∩ B, the probability of which is P_A 1(x) − P_{A∩B}1(x), hence the inequality.
Applying (39) to (41) we obtain

C(A) + C(B) ≥ C(A ∪ B) + C(A ∩ B).   (42)

This is the strong subadditivity of the capacity function discussed in Prop-
osition 3 of §3.3. Together with the other properties verified above, the
development in §3.3 shows that we can extend the function C(·) to all subsets
of R³ to be a Choquet capacity. Cf. Helms [1, §7.5].
As another illustration, let us determine the equilibrium measure for a
ball. Take B = B(o, r) and x = o in (14) of §5.1. Since u(x, y) = (2π‖x − y‖)⁻¹,
we have for any Borel set A:

μ_B(A) = 2π ∫_A ‖y‖ L(o, dy).   (43)

Rotational symmetry implies that L(o, ·) is the uniform distribution on ∂B. A
formal proof may be given along the lines of Proposition (XI) of §4.2. It
follows that

μ_B(A) = 2πr · σ(∂B ∩ A)/(4πr²) = (1/2r) σ(∂B ∩ A);

in particular

C(B) = C(∂B) = μ_B(∂B) = 2πr.   (44)

We can check this from (39) and (16) of §4.4: since P_B 1(x) = r/‖x‖ for ‖x‖ > r, formula (39) gives C(B) = 2π lim_{‖x‖→∞} ‖x‖ (r/‖x‖) = 2πr.
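A Monte Carlo check of (43)-(44) under the Newtonian kernel u(x, y) = (2π‖x − y‖)⁻¹: the uniform measure on ∂B with total mass 2πr should have potential r/‖x‖ outside the ball (Newton's shell theorem), which fed into (39) returns the capacity 2πr. The sample size and tolerance below are arbitrary choices of ours.

```python
import numpy as np

# Check of (43)-(44): the equilibrium measure of B = B(o, r) is uniform on the
# sphere with total mass 2*pi*r, so its potential at x outside the ball should
# be r/||x|| (shell theorem), and (39) then gives C(B) = 2*pi*r.

rng = np.random.default_rng(1)
r = 2.0

y = rng.standard_normal((200_000, 3))               # uniform points on the
y *= r / np.linalg.norm(y, axis=1, keepdims=True)   # sphere of radius r

def potential(x):
    # U mu_B(x) = (total mass 2*pi*r) * average of (2*pi*||x - y||)^(-1)
    return 2 * np.pi * r * np.mean(1.0 / (2 * np.pi * np.linalg.norm(y - x, axis=1)))

for R in (3.0, 5.0, 10.0):
    est = potential(np.array([0.0, 0.0, R]))
    assert abs(est - r / R) < 5e-3                  # matches r/||x||

# capacity recovered from (39) at a distant point, versus 2*pi*r
print(2 * np.pi * 10.0 * potential(np.array([0.0, 0.0, 10.0])), 2 * np.pi * r)
```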

For another application of (39), let A₁ be a Borel set with its closure
contained in the interior of A. Then it is obvious that P_A 1(x) = P_{A−A₁}1(x) for
x ∈ A^c. Hence C(A) = C(A − A₁) by (39). It is interesting to see how this
result can be sharpened and extended to the class of processes considered
in §5.1, with continuous paths. We have by (19)

C(A) = ∫_E L_A(x, dy)/u(x, y)

for any x. Choose x ∈ A^c. Then the continuity of paths implies that under
P^x we have γ_A = γ_{A−A₁} almost surely, because the path from outside A can-
not hit A without hitting A − A₁, and cannot quit A − A₁ (forever) without
quitting A. Therefore L_A(x, ·) = L_{A−A₁}(x, ·), which says that the equilibrium
measures for A and A − A₁ are the same. The fact that μ_B = μ_{∂B} may be
regarded as a limiting case.
In the language of electrostatics, if the interior of a solid conductor is
partially hollowed out, the distribution of the induced charge on the outside
surface remains unchanged when it is grounded (in equilibrium). An analytic
proof of this physical observation may be based on the energy-minimizing
characterization of the equilibrium measure (Theorem 6). To some of us
such a proof may seem more devious than the preceding one, but this is a
matter of subjective judgment and previous conditioning. Be that as it may,
here is an appropriate place to end these notes (for the moment), leaving the
reader with thoughts on the empirical origin of mathematical theory, the
grand old tradition of analysis, and the relatively new departure founded on
the theory of probability.

Exercises
1. Let (E, ℰ) be as specified in §3.1. Show that any σ-finite measure ν on ℰ
is inner regular; namely: ν(B) = sup_{K⊂B} ν(K). [Hint: for a finite measure
this follows from Exercise 12 of §2.1 of Course.]

2. Show that under the hypothesis of Theorem 5 the duality relation (18)
of §4.1 holds with Ûᵅ = Uᵅ. Assume also (P). Prove the uniqueness of the
equilibrium measure μ_K in the following form: if Uν = Uμ_K on K and
the support of ν is contained in Kʳ, then ν = μ_K. [Hint: use the Corollary
to Theorem 2; and note that the support of μ_K is contained in Kʳ by (P).]
3. Under (S) and (E), prove that if μ and ν are σ-finite measures such that
Uμ ≤ Uν, then μ(E) ≤ ν(E). [Hint: integrate with respect to μ_K.]
4. Let Φ±(K) denote the class of λ = λ₁ − λ₂ where λ₁ ∈ Φ(K), λ₂ ∈ Φ(K).
Under the energy principle prove that μ_K is the unique member of
Φ±(K) which minimizes G below:

G(λ) = ⟨λ, λ⟩ − 2λ(K).

This is the quadratic actually used by Gauss.
5. Suppose that ⟨λ, λ⟩ ≥ 0 for every λ ∈ Φ±(K) as defined in Exercise 4.
This is called the Positivity Principle and forms part of the energy
principle. Show that under (S) we have the Cauchy-Schwarz inequality:
⟨λ, ν⟩² ≤ ⟨λ, λ⟩⟨ν, ν⟩ for any λ and ν in Φ±(K).
6. Generalize (34) as follows. Let K be compact; ν a finite measure on K; u
defined on K × K is symmetric and ≥ 0. Define L(x, dy) = u(x, y)ν(dy)
and assume that for every x ∈ K we have L(x, K) ≤ 1. Prove that for
any measure λ on K with λ(K) ≤ 1 we have

⟨λL, λL⟩ ≤ ⟨λ, λL⟩.

Hence if the positivity principle in Exercise 5 holds, then ⟨λL, λL⟩ ≤
⟨λ, λ⟩. Thus the energy of a subprobability measure decreases after each
transformation by L. This is due to J. R. Baxter. [Hint: put φ(y) =
∫ λ(dx)u(x, y). Then ∫(Lφ)² dν ≤ ∫ L(φ²) dν ≤ ∫ φ² dν; hence ⟨λL, λL⟩ =
∫(φ · Lφ) dν ≤ ∫ φ² dν = ⟨λ, λL⟩.]
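Exercise 6 can be checked in a finite setting, where measures are vectors and the kernel a matrix. We take u symmetric, entrywise nonnegative and positive semi-definite (so that the positivity principle of Exercise 5 holds), and scale ν so that L(x, K) ≤ 1; the random choices below are purely illustrative.

```python
import numpy as np

# Finite check of Exercise 6 (Baxter): with u symmetric, >= 0 and positive
# semi-definite, and L(x, dy) = u(x, y) nu(dy) satisfying L(x, K) <= 1, one
# application of L should not increase the energy of a subprobability measure.

rng = np.random.default_rng(2)
n = 6
A = rng.random((n, n))
u = A @ A.T                        # symmetric, PSD, entrywise >= 0

nu = rng.random(n)
nu *= 0.9 / (u @ nu).max()         # enforce L(x, K) <= 1 for every x
L = u * nu[None, :]                # L(x, dy) = u(x, y) nu(dy)
assert L.sum(axis=1).max() <= 1.0

lam = rng.random(n)
lam /= lam.sum()                   # a probability "measure" on K

def energy(a, b):                  # <a, b> = sum_{i,j} a_i u(i, j) b_j
    return a @ u @ b

lamL = lam @ L                     # the transformed measure lambda L
print(energy(lamL, lamL) <= energy(lam, lam) + 1e-12)
```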
The next few problems are taken from Frostman [1] and are meant to give
some idea of the classical approach to the equilibrium problem. They are
analytical propositions without reference to a process, where (E, ℰ) =
(R^d, 𝓑^d), ξ = m (Lebesgue measure).
7. Let u ≥ 0 and (x, y) → u(x, y) be lower semi-continuous. Prove that
there exists a probability measure ν such that

⟨ν, ν⟩ = inf_λ ⟨λ, λ⟩

where λ ranges over all probability measures on a compact K. [Hint:
take a vaguely convergent subsequence of a sequence {λ_n} such that
⟨λ_n, λ_n⟩ tends to the infimum.]
8. Let ν and λ be as in Exercise 7. Assume in addition that u is symmetric.
Prove that ⟨ν, ν⟩ ≤ ⟨ν, λ⟩ for any λ. Deduce that if ⟨ν, ν⟩ = ⟨λ, λ⟩ then
ν ≡ λ under the energy principle. [Hint: consider the energy of (1 − ε)ν + ελ
for ε ↓ 0.]
9. Suppose ⟨λ, λ⟩ < ∞. Then Uν ≥ ⟨ν, ν⟩, λ-a.e. on K. [Hint: let ⟨ν, ν⟩ =
c > b > a > 0 and suppose that Uν < a on A with λ(A) > 0. It follows
from lower semi-continuity that there exists a ball B with ν(B) > 0 such
that Uν > b on B. Now transfer the mass ν(B) and re-distribute it over A
proportionately with respect to λ; namely put for all S: ν'(S) = ν(S\B) +
ν(B)λ(S ∩ A)λ(A)⁻¹. Compute ⟨ν', ν'⟩ to get a contradiction with Exer-
cise 8. Hence Uν ≥ c, λ-a.e. on K.]
Note. It follows from (31) and (37) that Uν_K = ⟨ν_K, ν_K⟩ on K ∩ Kʳ, hence
except for a polar set by (P). If λ has finite energy then λ does not charge
any polar set, hence the result in Exercise 9 is true by the methods used in
§§5.1-5.2. The notion of "capacity" was motivated by such considerations.
10. Let u(x, y) = ‖x − y‖^{−α} where α > 0 in R², and α > 1 in R³. Prove that
if Uμ is continuous, then (M) holds. [Hint: the Laplacian of Uμ at a
point where the maximum is attained must be ≤ 0 by calculus. This is
called the "elementary maximum principle" by Frostman and is an
illustration of the humble origin of a noble result.]
11. Let u be as in Exercise 10 but 1 < α < 3 in R³. Let μ have the compact
support K and suppose that Uμ considered as a function on K only is
continuous there. Then it is continuous everywhere. This is known as the
Continuity Principle and is an important tool in the classical theory.
[Hint: Uμ is continuous in K^c. It is sufficient to show that φ_δ(x) =
∫_{B(x,δ)} u(x, y)μ(dy) ↓ 0 uniformly in x as δ ↓ 0. For x ∈ K this is true by
the hypothesis and Dini's theorem. For x ∉ K let y be a point in K
nearest to x. Then ‖y − z‖ ≤ 2‖x − z‖ for all z ∈ K; if ‖x − y‖ ≤ δ then
φ_δ(x) ≤ 2^α φ_{2δ}(y).]
12. Establish the celebrated M. Riesz convolution formula below:

∫_{R^d} ‖x − y‖^{α−d} ‖y − z‖^{β−d} dy = C(d, α, β) ‖x − z‖^{α+β−d}

for d ≥ 1 and 0 < α + β < d, where C is a constant. [Hint: put x = 0,
‖z‖ = 1 to see that the existence of C is trivial. For its explicit evaluation
use the formula

k(α) ‖x − y‖^{α−d} = ∫₀^∞ t^{(α/2)−1} p_t(x, y) dt

where p_t is given in (1) of §4.2, and k(α) is easily computed.]
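The hint's subordination formula can be tested numerically in d = 3. The closed form k(α) = (2π)^{−3/2} 2^{(3−α)/2} Γ((3−α)/2) used below is our own evaluation of the constant (for α = 2 it reduces to (2π)⁻¹, the Newtonian kernel of the text); the time grid and tolerance are arbitrary.

```python
import numpy as np
from math import gamma, pi

# Check of the hint to Exercise 12 in d = 3: the subordination formula
# k(alpha) ||x||^(alpha - 3) = int_0^inf t^(alpha/2 - 1) p_t(o, x) dt, with
# p_t the Brownian transition density and (our evaluation of the constant)
# k(alpha) = (2 pi)^(-3/2) * 2^((3 - alpha)/2) * Gamma((3 - alpha)/2).

def subordination(alpha, rho):
    t = np.logspace(-6, 10, 20001)                 # truncated time grid
    f = t ** (alpha / 2 - 1) * (2 * pi * t) ** (-1.5) * np.exp(-rho**2 / (2 * t))
    s, g = np.log(t), f * t                        # int f dt = int f * t d(log t)
    return 0.5 * ((g[1:] + g[:-1]) * np.diff(s)).sum()   # trapezoid rule

def k(alpha):
    return (2 * pi) ** (-1.5) * 2 ** ((3 - alpha) / 2) * gamma((3 - alpha) / 2)

for alpha in (1.0, 1.5, 2.0):
    for rho in (0.5, 1.0, 2.0):
        ratio = subordination(alpha, rho) / (k(alpha) * rho ** (alpha - 3))
        assert abs(ratio - 1) < 1e-3
print("subordination formula checked")
```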
The next three exercises concern Brownian motion in R³.
13. Let B = B(o, r). Prove that

P^o{γ_B ∈ dt} = r(2πt³)^{−1/2} e^{−r²/2t} dt.

[Hint: P^o{γ_B > t} = E^o{P^{X(t)}[T_B < ∞]}; use (16) of §4.4.]
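The density in Exercise 13 is the one-sided stable(1/2) law, i.e. the first-passage density of one-dimensional Brownian motion at the level r; a quick numerical check (with grid and tolerances of our own choosing) that it has total mass 1 and Laplace transform e^{−r√(2a)}:

```python
import numpy as np

# Numerical check of Exercise 13: f(t) = r (2 pi t^3)^(-1/2) exp(-r^2 / 2t)
# integrates to 1 and has Laplace transform exp(-r sqrt(2 a)); it is the
# first-passage density of 1-d Brownian motion at the level r.

r = 1.5
t = np.logspace(-4, 10, 6001)
f = r * (2 * np.pi * t**3) ** (-0.5) * np.exp(-r**2 / (2 * t))

def integrate(g):                      # trapezoid in log t: int g dt
    s, h = np.log(t), g * t
    return 0.5 * ((h[1:] + h[:-1]) * np.diff(s)).sum()

total = integrate(f)
assert abs(total - 1.0) < 1e-3         # a probability density

a = 0.7
lt = integrate(f * np.exp(-a * t))     # Laplace transform at a
assert abs(lt - np.exp(-r * np.sqrt(2 * a))) < 1e-3
print("Exercise 13 density checked")
```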



14. As in Exercise 13, prove that

E^o{e^{−αT_B}} = 2r√(2α) (e^{r√(2α)} − e^{−r√(2α)})^{−1}.

[Hint: by Exercise 13 of §4.2, with λ = (√(2α), 0, 0), we have

E^o{e^{−αT_B}}^{−1} = E^o{exp[√(2α) x₁(T_B)]} = (1/σ(∂B)) ∫_{∂B} exp[√(2α) y₁] σ(dy).]

Both results above for R^d, d ≥ 3, are in Getoor [3].
15. For any bounded Borel set A, and any Borel set S, prove that

∫_{R³} dx P^x{0 < γ_A ≤ t; X(γ_A) ∈ S} = t μ_A(S)

where μ_A is the equilibrium measure for A. [Hint: use

∫_{R³} dx (u(x, y) − P_t u(x, y)) = t.]

Apart from the use of γ_A the result (valid for R^d, d ≥ 3) is implicit in
Spitzer [1].

NOTES ON CHAPTER 5

The title of this chapter is a double entendre. It is not the intention of this book to treat
potential theory except as a concomitant of the underlying processes. Nevertheless we
shall proceed far enough to show several facets of this old theory in a new light.

§5.1. The content of this section is based on Chung [1]. This was an attempt to
utilize the inherent duality of a Markov process as evidenced by its hitting and quitting
times. Formula (20) or its better known corollary (21) is a particular case of the repre-
sentation of an excessive function by the potential of a measure plus a generalized
harmonic function. Such a representation is proved in Chung and Rao [1] for the class
of potentials which satisfies the conditions of Theorem 1. In classical potential theory
this is known as F. Riesz's decomposition of a subharmonic function; see Brelot [1].
We stop short of this fundamental result as it would take us too far into a well-
entrenched field.

Let us point out that Robin's problem of determining the equilibrium charge
distribution is solved by the formula (14). One might employ a kind of Monte Carlo
method to simulate the last exit probability there to obtain empirical results.

§5.2. The various principles of Newtonian potential theory are discussed in Rao [1].
The extension of these to M. Riesz potentials was a major advance made by Frostman
[1]. Their mutual relations form the base of an axiomatic potential theory founded by
Brelot. Wermer [1] gives a brief account of the physical concepts of capacity and
energy leading to the equilibrium potential.

In principle, it should be possible to treat questions of duality by the method of
time reversal. Although a general theory of reversing has existed for some time, it seems
still too difficult for applications. A prime example is the polarity principle (Theorem 3),
which is not prima facie such a problem. Yet all known probabilistic proofs use some
kind of reversing (cf. Theorem 10 of §4.5). It would be extremely interesting to obtain
this and perhaps also the maximum principle (Theorem 2) by a manifest reversing
argument. Such an approach is also suggested by the concept of reversibility of physical
processes from which potential theory sprang.
Bibliography

L. V. Ahlfors
[1] Complex Analysis, Second edition, McGraw-Hill Co., New York, 1966.
J. Azéma
[1] Théorie générale des processus et retournement du temps, Ann. École Norm. Sup. Sér. 4,
6 (1973), 459-519.
R. M. Blumenthal and R. K. Getoor
[1] Markov Processes and Potential Theory, Academic Press, New York, 1968.
M. Brelot
[1] Éléments de la Théorie Classique du Potentiel, Fourth Edition, Centre de Documentation
Universitaire, Paris, 1969.
K. L. Chung
[1] A Course in Probability Theory, Second Edition, Academic Press, New York, 1974.
[2] Markov Chains with Stationary Transition Probabilities, Second Edition, Springer-
Verlag, 1967.
[3] Boundary Theory for Markov Chains, Princeton University Press, Princeton NJ, 1970.
[4] A simple proof of Doob's theorem, Sém. de Prob. V (1969/70), p. 76, Lecture Notes in
Mathematics No. 191, Springer-Verlag, Berlin Heidelberg New York.
[5] On the fundamental hypotheses of Hunt processes, Istituto di alta matematica, Sympo-
sia Mathematica IX (1972), 43-52.
[6] Probabilistic approach to the equilibrium problem in potential theory, Ann. Inst. Fourier
23 (1973), 313-322.
[7] Remarks on equilibrium potential and energy, Ann. Inst. Fourier 25 (1975), 131-138.
[8] Excursions in Brownian motion, Arkiv för Mat. 14 (1976), 155-177.
K. L. Chung and J. L. Doob
[1] Fields, optionality and measurability, Amer. J. Math. 87 (1965), 397-424.
K. L. Chung and J. Glover
[1] Left continuous moderate Markov processes, Z. Wahrscheinlichkeitstheorie 49 (1979),
237-248.
K. L. Chung and P. Li
[1] Comparison of probability and eigenvalue methods for the Schrödinger equation,
Advances in Math. (to appear).
K. L. Chung and K. M. Rao
[1] A new setting for potential theory (Part I), Ann. Inst. Fourier 30 (1980), 167-198.
[2] Equilibrium and energy, Probab. Math. Stat. 1 (1981).
[3] Feynman-Kac functional and the Schrödinger equation, Seminar on Stochastic Processes
1 (1981), 1-29.
K. L. Chung and S. R. S. Varadhan
[1] Kac functional and Schrödinger equation, Studia Math. 68 (1979), 249-260.
K. L. Chung and J. B. Walsh
[1] To reverse a Markov process, Acta Math. 123 (1969), 225-251.
R. Courant
[1] Differential and Integral Calculus, Interscience, 1964.
C. Dellacherie
[1] Ensembles aléatoires I, II, Sém. de Prob. III (1967/68), pp. 97-136, Lecture Notes in
Mathematics No. 88, Springer-Verlag.
[2] Capacités et Processus Stochastiques, Springer-Verlag, Berlin Heidelberg New York, 1972.
234 Bibliography

[3] Mesurabilité des débuts et théorème de section: le lot à la portée de toutes les bourses,
Sém. de Prob. XV (1979/80), pp. 351-370, Lecture Notes in Mathematics, Springer-
Verlag, Berlin Heidelberg New York.
C. Dellacherie and P. A. Meyer
[1] Probabilités et Potentiel, Hermann, Paris, 1975.
[2] Probabilités et Potentiel: Théorie des Martingales, Hermann, Paris, 1980.
J. L. Doob
[1] Stochastic Processes, Wiley and Sons, New York, 1953.
[2] Semimartingales and subharmonic functions, Trans. Amer. Math. Soc. 77 (1954), 86-121.
[3] A probability approach to the heat equation, Trans. Amer. Math. Soc. 80 (1955), 216-280.
E. B. Dynkin
[1] Markov Processes, Springer-Verlag, Berlin Heidelberg New York, 1965.
E. B. Dynkin and A. A. Yushkevich
[1] Theorems and Problems in Markov Processes (in Russian), Nauka, Moscow, 1967.
O. Frostman
[1] Potentiel d'équilibre et capacités des ensembles avec quelques applications à la théorie
des fonctions, Medd. Lunds Univ. Math. Sem. 3 (1935), 1-118.
R. K. Getoor
[1] Markov processes: Ray processes and right processes, Lecture Notes in Mathematics
No. 440, Springer-Verlag, Berlin Heidelberg New York, 1975.
[2] Transience and recurrence of Markov processes, Sém. de Prob. XIV (1978/79), pp. 397-
409, Lecture Notes in Mathematics No. 784, Springer-Verlag, Berlin Heidelberg New
York.
[3] The Brownian escape process, Ann. Probability 7 (1979), 864-867.
I. I. Gihman and A. V. Skorohod
[1] The Theory of Stochastic Processes II (translated from the Russian), Springer-Verlag,
Berlin Heidelberg New York, 1975.
L. L. Helms
[1] Introduction to Potential Theory, Wiley and Sons, New York, 1969.
G. A. Hunt
[1] Some theorems concerning Brownian motion, Trans. Amer. Math. Soc. 81 (1956),
294-319.
[2] Markoff processes and potentials I, Illinois J. Math. 1 (1957), 44-93.
[3] Martingales et Processus de Markov, Dunod, Paris, 1966.
K. Itô
[1] Lectures on Stochastic Processes, Tata Institute, Bombay, 1961.
M. Kac
[1] On some connections between probability theory and differential and integral equations,
Proc. Second Berkeley Symposium on Math. Stat. and Probability, 189-215, University
of California Press, Berkeley, 1951.
S. Kakutani
[1] Two-dimensional Brownian motion and harmonic functions, Proc. Imp. Acad. Tokyo
20 (1944), 706-714.
J. L. Kelley
[1] General Topology, D. Van Nostrand Co., New York, 1955.
O. D. Kellogg
[1] Foundations of Potential Theory, Springer-Verlag, Berlin, 1929.
R. Z. Khas'minskii
[1] On positive solutions of the equation Δu + Vu = 0, Theory of Probability and its Applica-
tions (translated from the Russian) 4 (1959), 309-318.
P. Lévy
[1] Théorie de l'Addition des Variables Aléatoires, Second edition, Gauthier-Villars, Paris,
1954.
P. A. Meyer
[1] Probability and Potentials, Blaisdell Publishing Co., 1966.
[2] Processus de Markov, Lecture Notes in Mathematics No. 26, Springer-Verlag, Berlin
Heidelberg New York, 1967.
[3] Processus de Markov: la frontière de Martin, Lecture Notes in Mathematics No. 77,
Springer-Verlag, Berlin Heidelberg New York, 1968.

P. A. Meyer, R. T. Smythe and J. B. Walsh
[1] Birth and death of Markov processes, Proc. Sixth Berkeley Symposium on Math. Stat.
and Probability, Vol. III, pp. 295-305, University of California Press, 1972.
S. Port and C. Stone
[1] Brownian Motion and Classical Potential Theory, Academic Press, New York, 1978.
K. M. Rao
[1] Brownian motion and classical potential theory, Aarhus University Lecture Notes No.
47, 1977.
H. L. Royden
[1] Real Analysis, Second Edition, Macmillan, New York, 1968.
R. T. Smythe and J. B. Walsh
[1] The existence of dual processes, Inventiones Math. 19 (1973), 113-148.
F. Spitzer
[1] Electrostatic capacity, heat flow, and Brownian motion, Z. Wahrscheinlichkeitstheorie
3 (1964), 110-121.
J. M. Steele
[1] A counterexample related to a criterion for a function to be continuous, Proc. Amer.
Math. Soc. 79 (1980), 107-109.
E. C. Titchmarsh
[1] The Theory of Functions, Second edition, Oxford University Press, 1939.
J. B. Walsh
[1] The cofine topology revisited, Proceedings of Symposia in Pure Mathematics XXXI,
Probability, Amer. Math. Soc. (1977), pp. 131-152.
J. Wermer
[1] Potential theory, Lecture Notes in Mathematics No. 408, Springer-Verlag, Berlin Heidel-
berg New York, 1974.
J. G. Wendel
[1] Hitting spheres with Brownian motion, Ann. of Probability 8 (1980), 164-169.
D. Williams
[1] Diffusions, Markov Processes, and Martingales, Vol. 1, Wiley and Sons, New York, 1979.
K. Yosida
[1] Functional Analysis, Springer-Verlag, Berlin Heidelberg New York, 1965.
Index

Absorbing state 9 convolution semigroup 137


additive functional 199
almost sure(ly) 26, 51
analytic set 40 debut 40
approximation of entrance and hitting Dellacherie's theorem 113
times 93-4, 113 Dirichlet problem 166
area of sphere 150 for unbounded domain 170
augmentation 29, 59 generalized 170, 186
unsolvable 167
divergence formula 157
balayage 98 domination principle 100
barrier 179 Doob's continuity theorem 184, 189
boundary value problem Doob's stopping theorem 30
for Laplace equation (see under Doob-Meyer decomposition 48
"Dirichlet problem") dual process 139
for Schrödinger equation 204 duality formula 140
Blumenthal's zero-or-one law 64 Dynkin's lemma 4
Borel measurable process 18
Borelian semigroup 46
Brownian motion 11, 118, 125, 128, cf 1
144ff., 228 cf 63
continuity of paths 78 cf 96
killed outside a domain 177 electrostatics 229
polarity of singleton 118, 169 energy 224
potential density 128-9 minimization 226, 230
recurrence, transience 145 principle 225
rotational symmetry 149 equilibrium measure 213
stopped 188 as steady state 227
transition density 144 equilibrium principle 218
excessive function 45, 85
as limit of potentials 82, 85-6
capacity 223, 228 closure properties 81, 104
Cartan's theorem 116 decreasing limit 116ff.
Choquet's capacitability theorem 91 fine continuity 108
cone condition 165 fine continuity 108
continuity of paths 77 infinity of 104
continuity principle 231 purely 121

excessive function (cont.) Hypothesis (L) 112


right continuity on paths 101
excursion 153
independent increments 142
initial distribution 7
:F 0 " :F" (= .?F 0 xl, .?F', 2 instantaneous state 56
:Fr, .~ T+ 14 invariant function 121
.?F T - 16 measure 138
:F'T 57
:F/L, 61
:F- p .?F-(= :F . xl 62 Kellogg-Evans theorem 185, 223
.?F p :F( = .?F xl 16, 75 kernel 6
Feller property 49 Kolmogorov's extension theorem 6
strong 129
Feller process 50
Feynman-Kac functional 199 Laplace equation 156
fine closure 97 radial solution 159
fine topology 107 Laplacian
first entrance time 22 as infinitesimal generator 190, 198
optionality, approximation (see under as inverse to potential operator 192,
"hitting time") 194
Frostman 230 last exit time 121, 209
left entrance, hitting time 95, 134
left quitting time 213
gauge 205 Lévy process 137
Gauss formulas 157, 162 lifetime 9
quadratic 232 logarithmic potential 193
Gauss-Koebe theorem 156
generalized Dirichlet problem 186
Green's function, potential 181 M. Riesz potentials 226, 231
for an interval 199 Markov chain 10
Markov process 2
martingale 24
harmonic function 156ff. for Brownian motion 147, 153
and martingale 182 maximum principle (see also "domination
harmonic measure 162 principle")
Harnack's theorems 158 for harmonic function 158
inequality 200 Maria-Frostman 221
hitting time 70 measurability questions 63
optionality 71, 92 Meyer's theorem 32
approximation 93-4, 113 moderate Markov property 69
hitting probabilities for spheres 168-9 monotone class 4
holding point 79
Hölder condition 162
homogeneity 6 nearly Borel 95
spatial 137
Hunt process 75ff.
Hunt's switching formula 219 occupation time 109
Hypothesis (B) 131, 212 optional 12

field 43 of supermartingales 29, 32


optionality of entrance and hitting Robin's problem 213
times 71, 92

Schrödinger equation 202ff.


Picard's theorem 161 semigroup 6
point at infinity 9, 48 semipolar 110
Poisson separable process 78
equation 193-4, 196 shift 8, 20
formula 169 spatial homogeneity 137
integral 180 stable state 79
process 11 stochastic process 2
polar 110 strong Feller property 129
polarity principle 222 strong Markov property 57
positivity principle 230 strong subadditivity 89, 228
potential 46 submarkovian 9
lower semi-continuity 126 superaveraging function 45
predictable 17 superharmonic function 174
field 43 and excessive function 178
progressive measurability 37 and supermartingale 175ff., 182
projection theorem 40 characterizations 187
continuity on paths 183
minimum principle 187
quasi left continuity 70 supermartingale 24ff.
quitting time 121, 209, 213 superpotential 36
quitting kernel 209, 212-3

thin 110
Radon measure 212 totally inaccessible 80
recurrent process 122 transient process 86, 126
set 121 set 121
reference measure 112 transition function 6
reflection principle 153 transition probability density 11
regular point 97
domain 165
regularization 82 uniform motion 10
remote field 144 uniqueness for potentials of
resolvent 46 measures 140
equation 83 universal measurability 63
reversal of time, reverse process 183
(F.) Riesz decomposition 118
Riesz potentials (see under M. Riesz version 28
potentials) volume of ball 150
right continuity
of augmented fields 61
of excessive function on paths 101 zero-or-one law 64, 144
of fields 12 zero potential 109