Existunique
1 Lipschitz Conditions
We now turn our attention to the general initial value problem

dy/dt = f(t, y), y(t0) = y0.

We say that f is Lipschitz with respect to y on a domain S if there is a constant K such that

|f(t, y2) − f(t, y1)| ≤ K|y2 − y1|

for every pair of points (t, y1) and (t, y2) in S. The constant K is called the Lipschitz constant
for f on the domain S.
Example 1.1. Let f(t, y) = 3y + 2. Then |f(t, y2) − f(t, y1)| = 3|y2 − y1|, so f is Lipschitz
with constant 3.
Example 1.2. Let f(t, y) = ty². Then |f(t, y2) − f(t, y1)| = |t||y2 + y1||y2 − y1|, which is
not bounded by any constant times |y2 − y1|, so f is not Lipschitz with respect to y on the
domain R × R. However, f is Lipschitz on any rectangle R = [a, b] × [c, d], since on R we have
|t||y1 + y2| ≤ 2 max{|a|, |b|} · max{|c|, |d|}.
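The behavior in Example 1.2 can be seen numerically. The following sketch (the helper name and sampling scheme are illustrative choices, not from the notes) samples difference quotients |f(t, y2) − f(t, y1)|/|y2 − y1| for f(t, y) = ty² on a rectangle: the largest sampled quotient stays near the analytic bound |t||y1 + y2|, and enlarging the y-range enlarges it accordingly.

```python
# Estimate the largest difference quotient |f(t,y2)-f(t,y1)| / |y2-y1|
# by sampling on a grid.  On a bounded rectangle the quotient stays
# bounded; as the region grows, so does the bound.

def lipschitz_quotient(f, t_range, y_range, n=50):
    """Return the largest sampled |f(t,y2)-f(t,y1)|/|y2-y1|."""
    ts = [t_range[0] + i * (t_range[1] - t_range[0]) / (n - 1) for i in range(n)]
    ys = [y_range[0] + i * (y_range[1] - y_range[0]) / (n - 1) for i in range(n)]
    best = 0.0
    for t in ts:
        for y1 in ys:
            for y2 in ys:
                if y1 != y2:
                    q = abs(f(t, y2) - f(t, y1)) / abs(y2 - y1)
                    best = max(best, q)
    return best

f = lambda t, y: t * y ** 2

# On [0,1] x [0,1] the sampled quotient approaches sup |t||y1+y2| = 2;
# on [0,1] x [0,2] the analytic bound is 4.
print(lipschitz_quotient(f, (0, 1), (0, 1)))
print(lipschitz_quotient(f, (0, 1), (0, 2)))
```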
The following lemma gives a simple test for a function to be Lipschitz with respect to y.

Lemma 1.1. Suppose f is continuously differentiable with respect to y on some closed rectangle R. Then f is Lipschitz with respect to y on R.

Proof. Since ∂f/∂y is continuous on the closed and bounded set R, it attains a maximum
and a minimum on R. Therefore

K = max_{(t,y)∈R} |∂f/∂y (t, y)| < ∞.

So given (t, y1) and (t, y2) in R, the Mean Value Theorem implies that there is some y3
between y1 and y2 such that

|f(t, y2) − f(t, y1)| = |∂f/∂y (t, y3)| |y2 − y1| ≤ K|y2 − y1|.
Example 1.3. Let f(t, y) = |t|e^{ty} + t sin(t + 2y). Then

∂f/∂y = t|t|e^{ty} + 2t cos(t + 2y),

which is continuous on any rectangle R. Therefore f is Lipschitz on any rectangle R.
Note that the converse of Lemma 1.1 is false. That is, Lipschitz with respect to y does
not imply differentiable with respect to y.

Example 1.4. Let f(t, y) = t|y| on R = [−2, 2] × [−2, 2]. Then since

|f(t, y2) − f(t, y1)| = |t| ||y2| − |y1|| ≤ 2|y2 − y1|

by the reverse triangle inequality, f is Lipschitz with respect to y on R with constant 2.
However, ∂f/∂y does not exist at any point (t, 0) with t ≠ 0, so f is not differentiable with
respect to y on R.
We now state the main theorem about existence and uniqueness of solutions.

Theorem 1.1. Suppose f(t, y) is continuous in t and Lipschitz with respect to y on the
domain R = [a, b] × [c, d]. Then, given any point (t0, y0) in R, there exist ε > 0 and a unique
solution y(t) of the initial value problem

dy/dt = f(t, y), y(t0) = y0

on the interval (t0 − ε, t0 + ε).

Note that Theorem 1.1 asserts only the existence of a solution on some interval, which
could be quite small in general.
Example 1.5. Consider dy/dt = y² with y(0) = y0. Solving this separable equation gives

y(t) = y0 / (1 − y0 t).

Now suppose y0 > 0. Then the solution blows up to infinity as t approaches 1/y0. Hence the
interval containing 0 on which the solution exists is (−∞, 1/y0). For large y0, the interval of
positive time for which the solution exists is very small.
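A short numerical sketch makes the shrinking existence interval concrete (assuming, as above, the equation dy/dt = y² with explicit solution y(t) = y0/(1 − y0 t); the function name is an illustrative choice, not from the notes):

```python
# Evaluate the exact solution of dy/dt = y^2, y(0) = y0, which is
# y(t) = y0 / (1 - y0*t).  The solution exists only on (-inf, 1/y0);
# watch it grow as t approaches the blow-up time 1/y0.

def exact(t, y0):
    return y0 / (1.0 - y0 * t)

y0 = 10.0  # blow-up time is 1/y0 = 0.1
for t in [0.0, 0.05, 0.09, 0.099, 0.0999]:
    print(t, exact(t, y0))
```

The larger y0 is, the sooner 1/y0 arrives, matching the remark that the interval of positive-time existence can be very small.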
Example 1.6. Consider dy/dt = y^{2/3}. Solving this separable equation gives

y(t) = (t/3 + y0^{1/3})³.

For y0 = 0 we therefore have the solution y(t) = t³/27. However, y(t) ≡ 0 is also a solution
with initial data y0 = 0, so we have non-uniqueness of solutions for this equation. The
problem of course is that f(y) = y^{2/3} is not Lipschitz. There is no Lipschitz constant in any
interval containing zero, since

|f(y) − f(0)| / |y − 0| = |y|^{−1/3} → ∞ as y → 0.
Note however that y0 = 0 is the only initial data for which we have non-uniqueness. For
if y0 > 0 (the same reasoning applies for y0 < 0), then on the interval J = (y0/2, ∞) the
derivative of f with respect to y is bounded. For ∂f/∂y = (2/3)y^{−1/3} is decreasing in y on J,
and thus

|∂f/∂y (t, y)| ≤ 2 / (3(y0/2)^{1/3}) ≡ K.

So by the Mean Value Theorem, given any x, y ∈ J there is some z between x and y such
that

|f(x) − f(y)| / |x − y| = |f_y(z)| ≤ K,

and therefore f is Lipschitz on J with constant K. Hence Theorem 1.1 implies the existence
of a unique solution of dy/dt = y^{2/3}, y(0) = y0 on some time interval.
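The non-uniqueness in Example 1.6 is easy to verify directly: both y(t) = t³/27 and y(t) ≡ 0 satisfy dy/dt = y^{2/3} with y(0) = 0. A quick check (illustrative code, not part of the notes):

```python
# Both y(t) = t**3/27 and y(t) = 0 satisfy dy/dt = y**(2/3) with
# y(0) = 0, so this IVP has two distinct solutions.

def residual(y, dy, t):
    """|dy/dt - y^(2/3)| at time t for a candidate solution."""
    return abs(dy(t) - y(t) ** (2.0 / 3.0))

y1, dy1 = lambda t: t ** 3 / 27.0, lambda t: t ** 2 / 9.0
y2, dy2 = lambda t: 0.0, lambda t: 0.0

for t in [0.0, 0.5, 1.0, 2.0]:
    # Both residuals vanish (up to floating-point rounding error).
    print(t, residual(y1, dy1, t), residual(y2, dy2, t))
```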
To prove the existence and uniqueness theorem, we need some machinery from real analysis.
2 Metric Spaces

A metric space is a set X, together with a distance function (or metric) d : X × X → R
that satisfies the following conditions:

1. d(x, y) ≥ 0, with d(x, y) = 0 if and only if x = y
2. d(x, y) = d(y, x)
3. d(x, z) ≤ d(x, y) + d(y, z) (the triangle inequality)

for all x, y, z in X.
We say a sequence {xk} in a metric space X converges to x in X if lim_{k→∞} d(xk, x) = 0.
That is, xk converges to x if for every ε > 0 there is some N such that k > N implies
d(xk, x) < ε. We then write lim_{k→∞} xk = x, or simply xk → x.
A sequence {xk} is called a Cauchy sequence if for every ε > 0 there is some N such
that m, n > N implies d(xm, xn) < ε. It is easy to see that convergent sequences are Cauchy
sequences.
A metric space X is called complete if every Cauchy sequence converges to an element
of X.
Example 2.2. R^n is complete.
Example 2.3. Q is not complete. Let {xn} be any sequence in Q that converges to √2.
We know such a sequence exists by the density of the rationals in the reals. Then {xn} is a
Cauchy sequence, but it does not converge to an element of Q.
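A concrete such sequence can be generated with Newton's method, whose iterates stay rational: starting from 1, the map x ↦ x/2 + 1/x produces rationals whose squares approach 2. A small sketch using exact rational arithmetic (an illustration, not from the notes):

```python
# Newton's iteration x -> x/2 + 1/x, started at 1, produces rational
# numbers whose squares approach 2.  The sequence is Cauchy in Q, but
# its limit sqrt(2) is irrational -- so Q is not complete.

from fractions import Fraction

x = Fraction(1)
for _ in range(5):
    x = x / 2 + 1 / x
    print(x, float(x * x))   # x stays an exact rational; x*x approaches 2
```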
3 The Sup Norm

Given a function f defined on an interval I, the sup norm (or uniform norm) of f is

‖f‖ = sup_{x∈I} |f(x)|.

Example 3.1. Let f be defined on R by f(x) = arctan(x). Since |f(x)| < π/2 for all x ∈ R
and lim_{x→∞} f(x) = π/2, it follows that ‖f‖ = π/2.
Example 3.2. Let f(x) = x². On the interval I = [−3, 3] the sup norm of f is ‖f‖ = 9.
On R, the sup norm of f is ‖f‖ = ∞, since f is unbounded on R.
Convergence with respect to the uniform norm is known as uniform convergence. We
say a sequence of functions fn converges uniformly to a function f on the interval I if

lim_{n→∞} ‖fn − f‖ = lim_{n→∞} sup_{x∈I} |fn(x) − f(x)| = 0.
Example 3.3. Let fn(x) = x^n and let f(x) = 0. Then on the domain [0, 1/2] we have

‖fn − f‖ = sup_{x∈[0,1/2]} |x^n| = (1/2)^n → 0,

so fn converges uniformly to f on the domain [0, 1/2]. However, on the domain [0, 1] we
have

‖fn − f‖ = sup_{x∈[0,1]} |x^n| = 1 ↛ 0,

so fn does not converge uniformly to f on [0, 1].
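The two sup-norm computations in Example 3.3 can be approximated on a grid (the helper below is an illustrative sketch, not from the notes):

```python
# Approximate the sup norm of f_n(x) = x**n on a grid.  On [0, 1/2] the
# norms shrink to 0 (uniform convergence to 0); on [0, 1] the norm is 1
# for every n, so there is no uniform convergence there.

def sup_norm(f, a, b, n_points=1001):
    """Approximate sup_{x in [a,b]} |f(x)| by sampling n_points points."""
    return max(abs(f(a + i * (b - a) / (n_points - 1))) for i in range(n_points))

for n in [1, 5, 20]:
    fn = lambda x, n=n: x ** n
    print(n, sup_norm(fn, 0.0, 0.5), sup_norm(fn, 0.0, 1.0))
```

For x^n the supremum is attained at the right endpoint, so the grid approximation here is exact: (1/2)^n on [0, 1/2] and 1 on [0, 1].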
One important feature of uniform convergence is that it preserves continuity. That is,
the uniform limit of a sequence of continuous functions is continuous.
Theorem 3.1. Let {fn } be a sequence of continuous functions on some interval I in R, and
suppose that fn converges uniformly on I to a function f . Then f is continuous on I.
Proof. Fix x ∈ I, and let ε > 0 be given. Then since fn converges uniformly to f, we may
choose n such that ‖fn − f‖ < ε/3. Since fn is continuous at x, there exists some δ > 0 such
that for any y in I with |y − x| < δ we have |fn(y) − fn(x)| < ε/3. Therefore, for any such
y, we also have by the triangle inequality:

|f(y) − f(x)| ≤ |f(y) − fn(y)| + |fn(y) − fn(x)| + |fn(x) − f(x)|
            ≤ ‖f − fn‖ + |fn(y) − fn(x)| + ‖fn − f‖
            < ε/3 + ε/3 + ε/3 = ε.

Hence f is continuous at x, and since x was arbitrary, f is continuous on I.

Theorem 3.2. The set C([a, b], [c, d]) of continuous functions from [a, b] into [c, d] is a
complete metric space with the metric

d(f, g) = ‖f − g‖.
Proof. Let fn be a Cauchy sequence in C([a, b], [c, d]). Then given ε > 0 there is some N
such that ‖fm − fn‖ < ε whenever m, n > N. For each fixed x in [a, b], we have

|fm(x) − fn(x)| ≤ ‖fm − fn‖ < ε,

so the sequence of real numbers {fn(x)} is a Cauchy sequence. Since R is complete, this
sequence converges to some real number, which we shall call f(x). Doing this for each x in
[a, b] defines a function f on [a, b]. Since each sequence {fn(x)} lies in the closed
interval [c, d], its limit f(x) is also in [c, d], and therefore the function f maps [a, b] into [c, d].

Next we show that fn converges uniformly to f. Given ε > 0, choose N so that ‖fm − fn‖ <
ε/2 whenever m, n > N. Then for any x in [a, b] and any m, n > N, we have

|fm(x) − fn(x)| ≤ ‖fm − fn‖ < ε/2.

Now since fn(x) converges to f(x), it follows that |fn(x) − f(x)| < ε/2 for large enough n,
and therefore we have

|fm(x) − f(x)| ≤ |fm(x) − fn(x)| + |fn(x) − f(x)| < ε

for any m > N. Since the choice of m did not depend on x, this inequality holds for every
x in [a, b]. Therefore

‖fm − f‖ = sup_{x∈[a,b]} |fm(x) − f(x)| ≤ ε

for every m > N, so fm converges uniformly to f. By Theorem 3.1, f is continuous, so f
belongs to C([a, b], [c, d]), and the space is complete.
4 The Contraction Mapping Principle

Let (X, d) be a metric space. A map T : X → X is called a contraction mapping if there is
a constant α < 1 such that

d(T(x), T(y)) ≤ α d(x, y)

for all x, y in X.

Theorem 4.1 (Contraction Mapping Principle). Let T : X → X be a contraction mapping
on a complete metric space X. Then T has a unique fixed point x ∈ X.

Proof. Let x0 ∈ X and define the sequence {xk} by setting

xk+1 = T(xk)

for k ≥ 0. Let d0 = d(x0, x1). Since d(xk+1, xk+2) = d(T(xk), T(xk+1)) ≤ α d(xk, xk+1), it
follows by induction that d(xk, xk+1) ≤ α^k d0. Now given ε > 0, choose N so that
α^N d0/(1 − α) < ε. This can be done since α < 1. Then for m, n > N, suppose without
loss of generality that m ≤ n. By the triangle inequality,

d(xm, xn) ≤ Σ_{k=m}^{n−1} d(xk, xk+1) ≤ Σ_{k=m}^{n−1} α^k d0 ≤ Σ_{k=m}^{∞} α^k d0 = α^m d0/(1 − α) ≤ α^N d0/(1 − α) < ε.

Hence {xk} is a Cauchy sequence, and since X is complete it converges to some x ∈ X.
Since T is a contraction it is continuous, so

T(x) = T(lim xk) = lim T(xk) = lim xk+1 = x,

and x is a fixed point of T. Finally, if x and y are both fixed points, then d(x, y) =
d(T(x), T(y)) ≤ α d(x, y), which forces d(x, y) = 0 and hence x = y. So the fixed point is
unique.
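The iteration in the proof is easy to run in practice. As an illustration (this example map is a standard one, not from the notes), T(x) = cos(x) maps [0, 1] into itself and is a contraction there, since |T′(x)| = |sin x| ≤ sin 1 < 1, so iterating from any starting point converges to its unique fixed point:

```python
# Fixed-point iteration from the proof of the contraction mapping
# principle, applied to the contraction T(x) = cos(x) on [0, 1].

import math

def fixed_point(T, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = T(x_k) until successive iterates are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

x = fixed_point(math.cos, 0.0)
print(x)                       # about 0.739085, the unique fixed point
print(abs(math.cos(x) - x))    # residual is essentially zero
```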
5 The Integral Equation

Suppose y is a solution of the initial value problem

dy/dt = f(t, y), y(t0) = y0. (2)

Integrating both sides from t0 to t gives

y(t) = y0 + ∫_{t0}^{t} f(s, y(s)) ds. (3)

This equation is called an integral equation, since it relates the unknown function y to
an integral involving y. Thus any solution of (2) is a solution of the integral equation
(3). Conversely, suppose y is a continuous function which satisfies (3). Then f(s, y(s)) is
continuous, and the Fundamental Theorem of Calculus implies that y is differentiable with
dy/dt = f(t, y(t)). Furthermore, at t = t0 we have y(t0) = y0. Thus any continuous solution of
(3) is a solution of (2). We now focus our attention on solving (2) by way of the integral
equation (3).
Given a continuous function y, we define T(y) to be the function given by

T(y)(t) = y0 + ∫_{t0}^{t} f(s, y(s)) ds.
Then y is a solution of (2) if and only if T (y) = y, i.e. y is a fixed point of the map T .
Next we wish to define a complete metric space on which T is a contraction mapping.
We begin by defining, for ε > 0 and η > 0, the space

X = C([t0 − ε, t0 + ε], [y0 − η, y0 + η])

of continuous functions from [t0 − ε, t0 + ε] into [y0 − η, y0 + η], which is a complete metric
space with the metric d(x, y) = ‖x − y‖.
Theorem 5.1. Suppose f(t, y) is continuous and Lipschitz with respect to y on the domain
R = [a, b] × [c, d]. Then for any (t0, y0) in R, there exist ε > 0 and η > 0 such that T : X → X
is a contraction mapping.
Proof. First choose η > 0 small enough that the interval [y0 − η, y0 + η] is contained within
the interval [c, d]. Next, since f is Lipschitz with respect to y on R, there is some constant
K such that |f(t, y2) − f(t, y1)| ≤ K|y2 − y1| for all (t, y1) and (t, y2) in R. Let

M = max_{(t,y)∈R} |f(t, y)|,

and choose ε > 0 small enough that ε < η/(M + 1) and εK < 1.

We first show that T maps X into X. Given x in X, let y = T(x). Since x is continuous
and f is continuous, the composition f(s, x(s)) is continuous, so by the Fundamental Theorem
of Calculus y is differentiable, and thus continuous. To prove that the range of y is a subset
of [y0 − η, y0 + η], observe that for t0 ≤ t ≤ t0 + ε,

|y(t) − y0| = |∫_{t0}^{t} f(s, x(s)) ds| ≤ ∫_{t0}^{t} |f(s, x(s))| ds ≤ εM < ηM/(M + 1) < η,

and the same estimate holds for t0 − ε ≤ t ≤ t0.
Next we show that T is a contraction. Let x and y be in X, and let α = εK < 1. Then for
t0 ≤ t ≤ t0 + ε,

|T(x)(t) − T(y)(t)| = |∫_{t0}^{t} [f(s, x(s)) − f(s, y(s))] ds|
                    ≤ ∫_{t0}^{t} |f(s, x(s)) − f(s, y(s))| ds
                    ≤ K ∫_{t0}^{t} |x(s) − y(s)| ds
                    ≤ K ∫_{t0}^{t} ‖x − y‖ ds
                    ≤ Kε‖x − y‖ = α‖x − y‖,

and similarly for t0 − ε ≤ t ≤ t0. Taking the supremum over t gives ‖T(x) − T(y)‖ ≤
α‖x − y‖, so T is a contraction mapping on X.
6 Picard Iteration
Now let us look back to the proof of the contraction mapping principle. In it, we found that
the fixed point of T is the limit of T^k(x0), where x0 is any element of X. Hence the solution
of the initial value problem (2) can be found by iterating the map

T : y(t) ↦ y0 + ∫_{t0}^{t} f(s, y(s)) ds

on any arbitrarily chosen continuous function satisfying the initial data. One natural choice
is the constant function x0(t) ≡ y0. Then for k ≥ 0 we define xk+1 = T(xk). That is,

xk+1(t) = y0 + ∫_{t0}^{t} f(s, xk(s)) ds.
Example 6.1. Consider dy/dt = ky with y(0) = y0. Picard iteration with the initial
function x0(t) ≡ y0 gives

x1(t) = y0 + ∫_0^t k y0 ds = (1 + tk) y0,

x2(t) = y0 + ∫_0^t k(1 + sk) y0 ds = (1 + tk + t²k²/2) y0,

⋮

xk(t) = (Σ_{j=0}^{k} t^j k^j / j!) y0.

These are precisely the partial sums of the Taylor series for e^{tk} y0, the unique solution.
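The same iteration can be carried out numerically on a grid, replacing the exact integral with the trapezoid rule (a sketch under the assumptions of this example, dy/dt = ky with y(0) = y0; the discretization is an illustrative choice, not from the notes):

```python
# Picard iteration for dy/dt = k*y, y(0) = y0, on a grid, approximating
# T(x)(t) = y0 + int_0^t k*x(s) ds with the trapezoid rule.  The iterates
# approach y0*exp(k*t), matching the partial sums above.

import math

def picard_step(xs, ts, y0, k):
    """One application of T on grid values xs (cumulative trapezoid rule)."""
    out = [y0]
    for i in range(1, len(ts)):
        h = ts[i] - ts[i - 1]
        out.append(out[-1] + 0.5 * h * (k * xs[i - 1] + k * xs[i]))
    return out

k, y0 = 1.0, 1.0
ts = [i / 100.0 for i in range(101)]   # grid on [0, 1]
xs = [y0 for _ in ts]                  # initial iterate x0(t) = y0
for _ in range(15):
    xs = picard_step(xs, ts, y0, k)

print(xs[-1], math.exp(k * 1.0))       # both close to e = 2.71828...
```

After a handful of iterations the grid values stop changing, and the remaining discrepancy from e^{kt} is just the trapezoid-rule discretization error.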