Nothing Special   »   [go: up one dir, main page]

0% found this document useful (0 votes)
48 views94 pages

Corresponding Lecture Notes

Download as pdf or txt
Download as pdf or txt
Download as pdf or txt
You are on page 1/ 94

Dieses Skript wird laufend aktualisiert.

Version: 9. Dezember 2021

1 Introduction

This course is about Sobolev spaces which are indispensable for a modern theory of
Partial Dierential Equations (PDEs). One example for such a Sobolev space is given
by
H 1 (Ω) = {u ∈ L2 (Ω) ∶ ∂i u ∈ L2 (Ω) ∀i ∈ {1, . . . , N }}
where Ω ⊂ RN , N ∈ N is an open set and ∂i u is the i-th weak (sometimes called distribu-
tional) partial derivative of u. We will clarify later how this is dened. For the moment
it suces to know that it is a generalization of the classical i-th partial derivative. The
main reasons for using these spaces are the following:

ˆ One can formulate PDEs in these spaces. Finding a solution u of some PDE is
often equivalent to nding a function ũ ∈ H 1 (Ω) solving the corresponding PDE in
its weak formulation. These functions are then called weak solutions of the given
problem.

ˆ Since Sobolev spaces carry the structure of Banach spaces  the space H 1 (Ω) above
is even a Hilbert space  one can use powerful tools from functional analysis to prove
the existence of weak solutions of PDEs.

In addition to that, Sobolev space allow to prove the existence of (weak) solutions even
when classical solutions do not exist, for instance when coecient functions are disconti-
nuous. Furthermore, they are nowadays indispensable for numerical methods like Finite
Element Methods or Galerkin Methods. This is our motivation to study these spaces in
more detail. The plan of the lecture is the following:

(1) (1L) Introduction and Preliminaries

(2) (2L) Weak derivatives and Sobolev spaces

(3) (1L) Lax-Milgram Theorem and Riesz' Representation Theorem

(4) (2L) Approximation by smooth functions

(5) (2L) Stein's Extension Theorem

(6) (2L) Sobolev's Embedding Theorem and Applications

(7) (2L) Morrey's Embedding Theorem and Applications

(8) (2L) Compact Embeddings: The Rellich-Kondrachov Theorem and beyond

(9) (2L) Poincaré's Inequality and Applications

(10) (2L) Trace Theorem and Applications

(11) (1L) Separability

1
(12) (1L) Reexivity

I am going to illustrate the benets of the theory with the aid of a running example
that we will understand better and better during this course. This example is an elliptic
boundary value problem of the form

− ∆u(x) + c(x)u(x) = f (x) (x ∈ Ω), u(x) = g(x) (x ∈ ∂Ω) (1.1)

where the functions c, f, g ∶ Ω → R u of this


are given. We are looking for a solution
problem under as weak assumptions on c, f, g, Ω as possible. To use Sobolev space theory
we rst pass to its weak formulation. To this end we assume that u ∈ C (Ω) is a classical
2

solution of this problem and multiply the PDE with some test function ϕ ∈ C0 (Ω)
the support of which is strictly contained in Ω. Integration over Ω and the Divergence
1
Theorem imply

∫ f (x)ϕ(x) dx = ∫ (−∆u(x) + c(x)u(x))ϕ(x) dx


Ω Ω

= ∫ − div(ϕ∇u)(x) + ∇ϕ(x) ⋅ ∇u(x) + c(x)u(x)ϕ(x) dx


= −∫ ϕ(x)∇u(x) ⋅ ν(x) dσ(x) + ∫ ∇ϕ(x) ⋅ ∇u(x) + c(x)u(x)ϕ(x) dx


∂Ω Ω

= ∫ ∇ϕ(x) ⋅ ∇u(x) + c(x)u(x)ϕ(x) dx


Here, ν ∶ ∂Ω → RN denotes the outer unit normal vector eld, σ is the surface measure
of ∂Ω. So our boundary value problem (1.1) takes the form


∫ ∇ϕ(x) ⋅ ∇u(x) + c(x)u(x)ϕ(x) dx = ∫ f (x)ϕ(x) dx ∀ϕ ∈ C0 (Ω), u∣∂Ω = g.
Ω Ω

For convenience we set γu ∶= u∣∂Ω (the trace of u) and introduce

a(u, v) ∶= ∫ ∇u(x) ⋅ ∇v(x) + c(x)u(x)v(x) dx,


l(v) ∶= ∫ f (x)v(x) dx.


Then the so-called weak formulation of the boundary value problem (1.1) takes the form

a(u, ϕ) = l(ϕ) ∀ϕ ∈ C0∞ (Ω), γu = g. (1.2)

In the following we shall develop the tools to get a rich existence theory for this problem.
The interesting point is that classical solutions are not only necessarily solutions of (1.2),
but even the converse holds in some cases (after modication on a null set).

Preliminaries:
1
Recalldiv(ϕf ) = ϕ div(f ) + ∇ϕ ⋅ f for scalar functions ϕ ∈ C 1 (Ω) and vector elds f ∈ C 1 (Ω; RN ).
Moreover, div(∇u) = ∆u.

2
We collect some facts and x the notation. All sets respectively functions considered in
this lecture are assumed to be Lebesgue-measurable. Vector spaces come with the eld
K = R. In the following:

ˆ Ω ⊂ RN denotes an open domain, i.e., an open connected set. It may be bounded


or unbounded.

ˆ Lebesgue spaces: For 1 ≤ p ≤ ∞, Lp (Ω) is the vector space of (equivalence classes


of ) Lebesgue-measurable functions f ∶Ω→R that are p-integrable with respect to
the Lebesgue measure. This means that

1/p
∥f ∥p ∶= (∫ ∣f (x)∣p dx) (1 ≤ p < ∞), ∥f ∥∞ ∶= ess supΩ ∣f ∣.

is nite for these functions and ∥ ⋅ ∥p denes a norm that turns (Lp (Ω), ∥ ⋅ ∥p ) into
a Banach space. In the case p = 2 it is a Hilbert space endowed with the inner
product

⟨f, g⟩2 ∶= ∫ f (x)g(x) dx.



p
We write L
loc (Ω) consists of all measurable functions such that f ⋅ 1K ∈ Lp (Ω) for
all compact subsets K ⊂ Ω.
ˆ Minkowski's and Hölder's inequality: We recall Minkowski's inequality ∥f +
g∥p ≤ ∥f ∥p + ∥g∥p , which is nothing but the triangle inequality in Lp (Ω). Very
important: Hölder's inequality for 1 ≤ p, q ≤ ∞ reads

1 1
∥f g∥1 ≤ ∥f ∥p ∥g∥q if + = 1.
p q

In that case we write q = p′ = p


p−1 .
ˆ Density arguments: Many inequalities / properties of functions belonging to
Sobolev spaces will be proved using the denseness of smooth functions over Ω. We
∞ ∞
call this vector space C (Ω). We will call test functions all functions u ∈ C (Ω)
the support of which supp(u) ∶= {x ∈ Ω ∶ u(x) ≠ 0} is a compact subset of Ω. In
particular, these functions vanish in a neighbourhood of ∂Ω. The space of test

functions will be denoted by C0 (Ω).

ˆ Fundamental Lemma of the Calculus of Variations: It reads that for any


f ∈ L (Ω), 1 ≤ p ≤ ∞
p

∫ f (x)ϕ(x) dx = 0 for all ϕ ∈ C0∞ (Ω) ⇒ f =0 almost everywhere.


We provide an argument for this in the case 1 ≤ p < ∞. It suces to nd a sequence
(ϕn ) ⊂ C0∞ (Ω) such that ϕn → ∣f ∣ Lp (Ω). We will see later why such a
p−2 ′
f in
sequence exists. Then

∫ ∣f (x)∣ dx = n→∞
lim (∫ f (x)ϕn (x) dx + ∫ f (x)(∣f (x)∣p−2 f (x) − ϕn (x)) dx)
p
Ω Ω Ω

3
≤ 0 + lim sup ∥f ∥p ∥ϕn − ∣f ∣p−2 f ∥p′
n→∞
= 0,
hence f =0 almost everywhere. A similar proof shows that the same conclusion is
2
true under the weaker assumption f ∈ L1loc (Ω).
ˆ Dierential operators: The dierential operators ∂i , ∇, ∆ have the usual mea-
ning: ∂i is the i-the partial derivative of a function, ∇ = (∂1 , . . . , ∂N ) is the gradient
and ∆ = div(∇) = ∑i=1 ∂ii is the Laplacian, which plays a central role in many
N

PDEs. Later on ∂i u, ∇u, etc. will denote the weak i-th partial derivative, weak
gradient of u, etc. We will not make a notational distinction.

ˆ Bounded linear operators: A bounded linear operator T ∶ X → Y between some


Banach spaces (X, ∥⋅∥X ) and (Y, ∥⋅∥Y ) is an R-linear map satisfying ∥T x∥Y ≤ C∥x∥X
for all x ∈ X with some positive number C > 0 independent of x. The least such
number is the operator norm of T , namely

∥T x∥Y
∥T ∥ ∶= ∥T ∥X→Y = sup < ∞.
x≠0 ∥x∥X
We will only deal with linear operators (in contrast to nonlinear ones) in this
lecture.

ˆ Compact operators: A bounded linear operator between Banach spaces T ∶ X →


Y is compact if for each bounded sequence (xn ) ⊂ X the image sequence (T xn ) ⊂ Y
has a convergent subsequence. This denition is worth keeping in mind; compact-
ness is a very important concept.

ˆ C m,α -Domains: Consider a bounded domain Ω ⊂ RN as above. We say that it is a


C m,α -domain if each point on its boundary ∂Ω has a neighbourhood U such that,
after some permutation of coordinates,

∂Ω ∩ U = {(x′ , xN ) ∈ U ∶ xN = ψ(x′ )},


Ω ∩ U = {(x′ , xN ) ∈ U ∶ xN > ψ(x′ )}

for some function ψ ∈ C m,α (RN −1 ). In that case the outer unit normal vector eld
′ 3
at the boundary point x ∶= (x , xN ) ∈ ∂Ω is given by

1 ∇ψ(x′ )
ν(x) = √ ( ). (1.3)
1 + ∣∇ψ(x′ )∣2 −1

ˆ Surface integrals: We want to (rather: need to) integrate over the boundaries of
C m,α -domains Ω ⊂ RN . This is done via

M
∫ g dσ ∶= ∑ ∫ g dσUi
∂Ω i=1 ∂Ω∩Ui

2
Apply the previous reasoning to f ⋅ 1K for all compact subsets K ⊂ Ω.
3 c
It is indeed the outer one because you can formally check via Taylor expansion that x + tν(x) ∈ Ω
for 0 < t < t0 and x + tν(x) ∈ Ω for −t0 < t < 0 provided that t0 > 0 is chosen suciently small.

4
where ∂Ω = ⋃Mi=1 Ui . Here, the Ui 's are disjoint neighbourhoods (graphical pieces)
as above with ψi ∈ C
m,α
(RN −1 ). For such neighbourhoods the latter integrals are
dened according to


∫ g dσUi = ∫ g(x′ , ψi (x′ )) 1 + ∣∇ψi (x′ )∣2 dx′ .
∂Ω∩Ui {x′ ∈RN −1 ∶(x′ ,ψi (x′ ))∈Ui }

ˆ Divergence Theorem: Surface integrals are important in view of the Divergence


Theorem, which is a higher-dimensional version of the Fundamental Theorem of
Calculus. For vector elds f ∈ C 1 (Ω; RN ) and Lipschitz domains (m = 0, α = 1) it
reads

∫ div(f ) dx = ∫ f ⋅ ν dσ,
Ω ∂Ω
where the boundary integral is given by the previous denition. Notice that the
outer unit normal vector eld is dened locally in terms of the parametrizing
function ψ as in (1.3). As a consequence one obtains the integration-by-parts for-
mula for u, v ∈ C 1 (Ω):

∫ ∂i uv dx = ∫ uvνi dσ − ∫ u∂i v dx. (1.4)


Ω ∂Ω Ω

We shall use this for test functions v = ϕ ∈ C0∞ (Ω) that vanish close to the boundary.
4
open sets Ω ⊂ R and all u ∈ C (Ω) the equality
N 1
In that case we obtain for all

∫ ∂i uϕ dx = − ∫ u∂i ϕ dx.
Ω Ω
End Lec 01

2 Weak derivatives and Sobolev spaces

We start with the denition of a weak derivative of a given function u ∈ L1loc (Ω) for some
open subset Ω ⊂ RN , N ∈ N.

Denition 2.1. u ∈ L1loc (Ω) and i ∈ {1, . . . , N }.


Let A function w ∈ L1loc (Ω) is called i-th
weak partial derivative of u if it satises

∫ u(x)∂i ϕ(x) dx = − ∫ w(x)ϕ(x) dx for all ϕ ∈ C0∞ (Ω).


Ω Ω

In this case we will write ∂i u ∶= w.

4
Establishing this rigorously is a bit technical, we skip this. Essentially, it is a consequence of (1.4)
where Ω is replaced by some large enough ball where all boundary terms are well-dened.

5
In the one-dimensional case one replaces the i-th partial derivative by the usual derivative.
We shall also use the symbols ∂ x , ∂y etc. as for classical derivatives. The denition ∂i u ∶= w
makes sense because we now prove that two dierent weak partial derivatives coincide
almost everywhere.

Proposition 2.2. Let Ω ⊂ R


N
be an open set and u ∈ Lloc (Ω), i ∈ {1, . . . , N }. Assume
1

that w, w̃ ∈ Lloc (Ω) are an i-th weak derivative of u. Then w = w̃ (almost everywhere).
1

Beweis:
Let ϕ ∈ C0∞ (Ω) be arbitrary. By Denition 2.1,

− ∫ w(x)ϕ(x) dx = ∫ u(x)∂i ϕ(x) dx = − ∫ w̃(x)ϕ(x) dx.


Ω Ω Ω
So we infer

∫ (w(x) − w̃(x))ϕ(x) dx = 0 for all ϕ ∈ C0∞ (Ω).



The Fundamental Lemma of the Calculus of Variations implies w − w̃ = 0 almost every-
where, which is all we had to show. ◻

So we can speak of the weak partial derivative, the weak gradient (dened via
∇ ∶= (∂1 , . . . , ∂N )) of a function u ∈ L1loc (Ω). A function u ∈ L1loc (Ω) may in general be
discontinuous on Ω, but nevertheless admits weak derivatives. We will see some examples
below. Furthermore, in contrast to the classical derivatives that are dened pointwise for
each x ∈ Ω, the weak derivative a priori depends on Ω as a whole. Higher weak derivatives
dened accordingly: For a given multi-index α ∈ N0 the corresponding weak partial
N
are
derivative is supposed to satisfy

∣α∣
∫ u(x)∂ ϕ(x) dx = (−1) ∫ w(x)ϕ(x) dx
α
for all ϕ ∈ C0∞ (Ω).
Ω Ω
αN
Here, for any given α = (α1 , . . . , αN ) ∈ NN
0 the symbol ∂α stands for ∂1α1 . . . ∂N and
∣α∣ ∶= α1 + . . . + αN . We rst show that this dierentiation concept generalizes the notion
of a classical derivative. The following result tells us that the classical gradient is the
only candidate for the weak derivative if it exists.

Proposition 2.3. Let Ω ⊂ RN be open.

(i) If u ∈ C 1 (Ω) then the classical gradient of u is also a weak gradient of u.


(ii) If u ∈ L1loc (Ω) has a weak gradient ∇u ∈ L1loc (Ω; RN ) and u ∈ C 1 (Ω̃) for some open
subset Ω̃ ⊂ Ω, then the weak gradient coincides with the classical gradient on Ω̃.

Beweis:

In this proof we denote the classical i-th partial derivative by
∂xi . The classical integration-
by-parts formula (1.4) yields for all i = 1, . . . , N
∂u
∫ (x)ϕ(x) dx = − ∫ u(x)∂i ϕ(x) dx for all ϕ ∈ C0∞ (Ω̃).
Ω̃ ∂xi Ω̃

6
Using this fact for Ω̃ = Ω proves (i). In order to prove (ii) we assume that a weak gradient
∞ ∞
on Ω exists. Then each ϕ ∈ C0 (Ω̃) belongs to ϕ ∈ C0 (Ω), so the denition of a weak
derivative implies

∫ ∂i u(x)ϕ(x) dx = − ∫ u(x)∂i ϕ(x) dx for all ϕ ∈ C0∞ (Ω̃).


Ω̃ Ω̃

The Fundamental Lemma of the Calculus of Variations gives ∂i u = ∂u


∂xi (almost every-
where) on Ω̃. ◻

One can check: ∂i (β1 u + β2 v) = β1 ∂i u + β2 ∂i v for all β1 , β 2 ∈ R provided that the weak
derivatives on the right exist.

Example 2.4.
(a) Let u(x) ∶= ∣x∣ for x ∈ Ω ∶= (−1, 1) ⊂ R. Proposition 2.3 tells us that the function
v(x) ∶= 1 for x > 0 and v(x) ∶= −1 for x < 0 is the only candidate for a weak

derivative of u. We can check this by hand. For ϕ ∈ C0 (Ω) we have

1 0 1
∫ u(x)ϕ′ (x) dx = − ∫ xϕ′ (x) dx + ∫ xϕ′ (x) dx
−1 −1 0
0 1
= −[xϕ(x)]0−1 + ∫ ϕ(x) dx + [xϕ(x)]10 − ∫ ϕ(x) dx
−1 0
1
= ϕ(−1) + ϕ(1) − ∫ v(x)ϕ(x) dx
−1
1
= −∫ v(x)ϕ(x) dx.
−1

So v is indeed the weak derivative of u.


(b) Consider the function u ∶ Ω → R, (x, y) ↦ 1x>0 + 1y>0 where Ω ∶= (−1, 1) × (−1, 1).
We claim that it has a second weak derivative ∂xy u even though it does not have
rst order derivatives. We claim that ∂xy u = 0 holds in the weak sense. Indeed, for
ϕ ∈ C0∞ (Ω),

∫ u(x, y)∂xy ϕ(x, y) d(x, y)



1 1 1 1
=∫ (∫ ∂xy ϕ(x, y) dx) dy + ∫ (∫ ∂xy ϕ(x, y) dy) dx
−1 0 −1 0
1 1
=∫ (∂y ϕ(1, y) − ∂y ϕ(0, y)) dy + ∫ (∂x ϕ(x, 1) − ∂x ϕ(x, 0)) dx
−1 −1
1 1
= −∫ ∂y ϕ(0, y) dy − ∫ ∂x ϕ(x, 0) dx
−1 −1
= −ϕ(0, 1) + ϕ(0, −1) − ϕ(1, 0) − ϕ(−1, 0)
= 0.

7
The last equality holds because ϕ vanishes close to the boundary of Ω and

(0, 1), (0, −1), (1, 0), (0, 1) ∈ ∂Ω.

On the other hand, ∂x u does not exist:

1 1 1 1
∫ u(x, y)∂x ϕ(x, y) d(x, y) = ∫ (∫ ∂x ϕ(x, y) dx) dy + ∫ (∫ ∂x ϕ(x, y) dx) dy
Ω −1 0 0 −1
1 1
=∫ (ϕ(1, y) − ϕ(0, y)) dy + ∫ (ϕ(1, y) − ϕ(−1, y)) dy
−1 0
1
= −∫ ϕ(0, y) dy.
−1

This cannot be written as − ∫Ω w(x, y)ϕ(x, y) dy for some w ∈ L1loc (Ω).


(c) Schwarz's Theorem is always true: ∂xy u exists as a weak derivative if and only if
∂yx u exists. On the other hand, (b) shows that ∂x (∂y u) may not be a meaningful
equivalent expression.

Further examples will be given below. We now introduce the Sobolev spaces.

Denition 2.5 (Sobolev spaces). Let k∈N and 1 ≤ p ≤ ∞.

W k,p (Ω) ∶= {u ∈ Lp (Ω) ∶ ∂ α u ∈ Lp (Ω) for all 0 , 0 ≤ ∣α∣ ≤ k}


α ∈ NN
1
⎛ ⎞p
∥u∥W k,p (Ω) ∶= ∑ ∥∂ α u∥pp if 1 ≤ p < ∞, ∥u∥W k,∞ (Ω) ∶= max ∥∂ α u∥∞ .
⎝∣α∣≤k ⎠ ∣α∣≤k

We also dene H k (Ω) ∶= W k,2 (Ω).

Here, ∂ α u ∈ Lp (Ω) stands for the statement that the weak derivative ∂ α u exists and that
it lies in L (Ω) (not only in Lloc (Ω)). We remark that other equivalent norms can be
p 1

taken without changing the theory. For instance, for 1 ≤ p ≤ ∞ one may also take

u ↦ ∑ ∥∂ α u∥p .
∣α∣≤k

The denition given above has the pleasant feature that the most important spaces
H k (Ω) are generated by the inner product

⟨u, v⟩k,2 ∶= ⟨u, v⟩H k (Ω) ∶= ∫ α α


∑ ∂ u(x)∂ v(x) dx.
Ω ∣α∣≤k

In the special case k = 1, which is the most important one for us,

N
⟨u, v⟩1,2 = ∫ ∑ ∂i u(x)∂i v(x) + u(x)v(x) dx = ∫ ∇u(x) ⋅ ∇v(x) + u(x)v(x) dx.
Ω i=1 Ω

8
Satz 2.6. Let k ∈ N, 1 ≤ p ≤ ∞. Then (W k,p (Ω), ∥ ⋅ ∥W k,p (Ω) ) is a Banach space and
(H k (Ω), ⟨⋅, ⋅⟩k,2 ) is a Hilbert space.

Beweis:
We use that ∥ ⋅ ∥W k,p (Ω) , ⟨⋅, ⋅⟩k,2 are norms respectively inner products. The proof of this
fact is straightforward and therefore omitted. So it remains to show that the spaces
W k,p (Ω) are complete with respect to these norms. To show this, we use that the spaces
(Lp (Ω), ∥ ⋅ ∥p ) are complete.

Let (un )n∈N be a Cauchy sequence in W k,p (Ω), i.e., for all ε>0 there is m0 ∈ N such
that
∥um − un ∥W k,p (Ω) ≤ ε for all m, n ≥ m0 .
By denition of the norm we conclude that for each xed α ∈ NN
0 , ∣α∣ ≤ k we have

∥∂ α um − ∂ α un ∥p ≤ ∥um − un ∥W k,p (Ω) ≤ ε for all m, n ≥ m0 .

So (∂ α un )n∈N is a Cauchy sequence in Lp (Ω). Completeness of Lp (Ω) yields vα ∈ Lp (Ω)


such that
∂ α un → vα in Lp (Ω). (2.1)

Dene v ∶= v(0,...,0) ∈ Lp (Ω). We claim ∂ α v = vα . Indeed, for all test functions ϕ ∈ C0∞ (Ω),
we have

∫ v(x)∂ ϕ(x) dx = n→∞


lim ∫ un (x)∂ α ϕ(x) dx
α
Ω Ω

= lim (−1)∣α∣ ∫ ∂ α un (x)ϕ(x) dx


n→∞ Ω
∣α∣
= (−1) ∫ vα (x)ϕ(x) dx.

This implies that vα is the α-th weak derivative of v . Since vα ∈ Lp (Ω), we infer ∂ α v = vα
for all α∈ NN
0 such that ∣α∣ ≤ k , hence v ∈ W
k,p
(Ω). Hence,
1/p 1/p
⎛ ⎞ ⎛ ⎞ (2.1)
∥un − v∥W k,p (Ω) = ∑ ∥∂ α (un − v)∥pp = ∑ ∥∂ α un − vα ∥pp → 0 as n → ∞.
⎝∣α∣≤k ⎠ ⎝∣α∣≤k ⎠

We have thus proved that (un ) converges in W k,p (Ω), which nishes the proof. ◻

∥⋅∥W k,p (Ω)


Denition 2.7. W0k,p (Ω) ∶= C0∞ (Ω) and H0k (Ω) ∶= W0k,2 (Ω).

As a closed subspace of W k,p (Ω) the space W0k,p (Ω) is a Banach space (equipped with
the same norm as W k,p (Ω)).

9
Example 2.8.
(a) Consider u(x) ∶= ∣x∣γ for x ∈ Ω ∶= {y ∈ RN ∶ ∣y∣ < 1}
γ ∈ R ∖ {0}, N ∈ N, N ≥ 2. and
By Proposition 2.3, the only candidate for the weak gradient is ∇u ∶= γx∣x∣
γ−2
. One
may show that this function is indeed the weak partial derivative of u provided
that γ > 1 − N (which ensures that ∇u is locally integrable). We compute using
5
polar coordinates

1 1 ∣SN −1 ∣
∫ ∣u(x)∣ = ∫
p
∣x∣γp dx = ∫ rN −1 ⋅∣SN −1 ∣rγp dr = ∣SN −1 ∣ ∫ rN +γp−1 dr =
Ω ∣x∣<1 0 0 N + γp
if and only if N + γp > 0, otherwise +∞. The same way we get precisely for N+
(γ − 1)p > 0
1 ∣γ∣∣SN −1 ∣
∫ ∣∇u(x)∣ = ∫
p
∣γx∣x∣γ−2 ∣p dx = ∣γ∣p ∫ rN −1 ⋅ ∣SN −1 ∣r(γ−1)p dr = .
Ω ∣x∣<1 0 N + (γ − 1)p

We conclude:

N
u ∈ W 1,p (Ω) ⇔ N + γp > 0, N + (γ − 1)p > 0 ⇔ γ >1− .
p

(b) Set I ∶= (0, 1). Assume that g ∈ Lp (I) and dene

x
G(x) ∶= ∫ g(t) dt.
0

We claim G ∈ W 1,p (I) and G′ = g in the weak sense. So let ϕ ∈ C0∞ (I) be a test
function. Using Fubini's Theorem we get

1 1 x
∫ G(x)ϕ′ (x) dx = ∫ (∫ g(t) dt) ϕ′ (x) dx
0 0 0
1 1
=∫ ∫ 1t≤x≤1 g(t)ϕ′ (x) dt dx
0 0
1 1
=∫ (∫ 1t≤x≤1 g(t)ϕ′ (x) dx) dt
0 0
1 1
=∫ g(t) (∫ ϕ′ (x) dx) dt
0 t
1
=∫ g(t) (ϕ(1) − ϕ(t)) dt
0
1
= −∫ g(t)ϕ(t) dt.
0

5
For integrable functions we have


∫ u(x) dx = ∫ rN −1 (∫ u(rω) dσ(ω)) dr,
RN 0 SN −1

where SN −1 = {ω ∈ RN ∶ ∣ω∣ = 1} denotes the unit sphere.

10
So the weak derivative of G is g, i.e., G′ = g in the weak sense. Moreover, Hölder's
inequality gives

1 1 x p
∫ ∣G(x)∣p + ∣G′ (x)∣p dx = ∫ ∣∫ g(t) dt∣ + ∣g(x)∣p dx
0 0 0
1
1 x p′
≤∫ (∫ 1 dt) ∥g∥pp + ∣g(x)∣p dx
0 0
≤ 2∥g∥pp < ∞.

We conclude G ∈ W 1,p (I).


Question: Why is this false for I = [0, ∞)?
End Lec 02

To prove further elementary properties of Sobolev functions we anticipate the following


approximation result (Meyers-Serrin Theorem). Let k ∈ N, 1 ≤ p < ∞. Then, for any

given u ∈ W (Ω), there is a sequence (un )n∈N ⊂ C (Ω) ∩ W k,p (Ω) such
that un → u in
k,p

W k,p (Ω) and almost everywhere. We will call such sequences approximating sequences.
In particular,

∥⋅∥W k,p (Ω)


C ∞ (Ω) ∩ W k,p (Ω) = W k,p (Ω) (k ∈ N, 1 ≤ p < ∞).

This is not true for p = ∞. To see this choose Ω = {y ∈ RN ∶ ∣y∣ < 1} and u(x) = ∣x∣.
Then u∈W 1,∞
(Ω) and its weak gradient is given by ∇u(x) =
∣x∣ . If a sequence (un ) as
x

above existed, then ∂1 u would be the L∞ (Ω)-limit of continuous (even smooth) functions.
limit of continuous functions is continuous, so x ↦ 1 would have to be
x
But a uniform
∣x∣
continuous, which is false. Nevertheless, the sequences can be chosen to satisfy

∥un ∥p ≤ ∥u∥p for all 1 ≤ p ≤ ∞, n ∈ N. (2.2)

Proposition 2.9. Let Ω ⊂ RN be open and 1 ≤ p < ∞.


(i) (Product rule)6 Assume u, v ∈ W 1,p (Ω) ∩ L∞ (Ω). Then uv ∈ W 1,p (Ω) ∩ L∞ (Ω)
with ∂i (uv) = v∂i u + u∂i v .

6
In the case v ∈ W 1,p (Ω)∩C ∞ (Ω) an easier proof without density argument is possible. The observation
for any test function ϕ ∈ C0 (Ω) the function vϕ is again a test function. So if ∂i u denotes the

is that
weak partial derivative, we get

∫ uv∂i ϕ dx = ∫ u(∂i (vϕ) − ϕ∂i v) dx


Ω Ω

= − ∫ ∂i u(vϕ) dx − ∫ uϕ∂i v dx
Ω Ω

= − ∫ (v∂i u + u∂i v)ϕ dx.


This proves ∂i (uv) = v∂i u + u∂i v in the weak sense as claimed.

11
(Chain rule) Assume u ∈ W (Ω) and that G ∈ C 1 (R)
1,p
(ii) has a bounded derivative.
Then G○u∈ W (Ω) with ∂i (G ○ u) = G′ (u)∂i u.
1,p

(iii) Assume u ∈ W 1,p (Ω). Then u+ ∶= max{u, 0}, u− ∶= max{−u, 0}, ∣u∣ ∈ W 1,p (Ω) 7
with

∂i u+ = ∂i u ⋅ 1{u>0} , ∂i u− = −∂i u ⋅ 1{u<0} , ∂i ∣u∣ = sign(u)∂i u.


Beweis:
We rst prove (i). Choose approximating sequences (un ), (vn ). The classical chain rule
implies ∂i (un vn ) = un ∂i vn + vn ∂i un . Then

?
∫ u(x)v(x)∂i ϕ(x) dx = n→∞
lim ∫ un (x)vn (x)∂i ϕ(x) dx
Ω Ω

= − lim ∫ ∂i (un (x)vn (x))ϕ(x) dx


n→∞ Ω

= − lim ∫ [(∂i un )(x)vn (x) + (∂i vn )(x)un (x)]ϕ(x) dx


n→∞ Ω
?
= − ∫ [(∂i u)(x)v(x) + (∂i v)(x)u(x)]ϕ(x) dx

We justify the equalities with ?. Applying Hölder's inequality a couple of times we get

∣∫ (u(x)v(x) − un (x)vn (x))∂i ϕ(x) dx∣ ≤ ∥uv − un vn ∥p ∥∂i ϕ∥p′



≤ (∥u(v − vn )∥p + ∥vn (u − un )∥p )∥∂i ϕ∥p′
≤ (∥u∥∞ ∥v − vn ∥p + ∥vn ∥∞ ∥u − un ∥p )∥∂i ϕ∥p′
(2.2)
≤ (∥u∥∞ + ∥v∥∞ )(∥v − vn ∥p + ∥u − un ∥p )∥∂i ϕ∥p′
→ 0 (n → ∞).
8
The second equality is a consequence of the Dominated Convergence Theorem : After
passing to subsequences still denoted by (un ), (vn ), we know ∥un ∥∞ +∥vn ∥∞ ≤ ∥u∥∞ +∥v∥∞
and ∣∇un ∣ + ∣∇vn ∣ ≤ w for some w ∈ L (Ω). Hence,
p

∣(∂i un )(x)vn (x) + (∂i vn )(x)un (x)∣ ≤ ∣w(x)∣(∥v∥∞ + ∥u∥∞ ) ∈ Lp (Ω).

So the pointwise almost everyhwere convergence of (∂i un )(x)vn (x) + (∂i vn )(x)un (x) →
(∂i u)(x)v(x) + (∂i v)(x)u(x) gives the claim.

We now prove (ii). Since the assumptions imply G′ (u)∂i u ∈ Lp (Ω), it suces to prove
that the i-th weak partial derivative is given by ∂i (G ○ u) = G′ (u)∂i u. So let ϕ ∈ C0∞ (Ω)
7
sign(z) = 1 if z > 0, sign(0) = 0, sign(z) = −1 if z < 0.
8
The Riesz-Fischer Theorem, which establishes the completeness of L (Ω), tells you that un → u in
p

L (Ω) implies un → u almost everywhere and that there is subsequence (unk ) satisfying ∣unk ∣ ≤ w
p

for some w ∈ L (Ω). So un → u, vn → v in W (Ω) implies unk → u, vnk → v, ∇unk → ∇u, ∇vnk → ∇v
p 1,p

almost everywhere and ∣unk ∣ + ∣vnk ∣ + ∣∇unk ∣ + ∣∇vnk ∣ ≤ w for some w ∈ L (Ω).
p

Example: un (x) ∶= ∑n∈N 1[n,1/n] (x) coverges to the trivial function in L (R) for 1 ≤ p < ∞. In the
p

case 1 < p < ∞ we can take w(x) ∶= ∑n∈N ∣un (x)∣ ∈ L (R). In the case p = 1 this is not true, but
p

we may take w(x) ∶= ∑n∈N ∣un2 (x)∣ ∈ L (R), which is a bound for the subsequence (un2 )n∈N . Notice
1

∥w∥1 = ∑n∈N n2 < ∞.


1

12
be given and choose an approximating sequence (un ) for u. The classical chain rule gives

∂i (G ○ un ) = G (un )∂i un for all n∈N and hence

?
∫ G(u(x))∂i ϕ(x) dx = n→∞
lim ∫ G(un (x))∂i ϕ(x) dx
Ω Ω

= − lim ∫ ∂i (G(un (x)))ϕ(x) dx


n→∞ Ω

= − lim ∫ G′ (un (x))∂i un (x)ϕ(x) dx


n→∞ Ω
? ′
= − ∫ G (u(x))∂i u(x)ϕ(x) dx

The claim is proved once we have justied the equalities with ?. The rst one is a
consequence of

∣∫ (G(u(x)) − G(un (x)))∂i ϕ(x) dx∣



≤ ∥G(u) − G(un )∥p ∥∂i ϕ∥p′
≤ ∥G′ ∥∞ ∥u − un ∥p ∥∂i ϕ∥p′ → 0 (n → ∞).

The second one follows again by the Dominated Convergence Theorem. Notice that
G′ (un ) → G′ (u) holds pointwise almost everywhere because G′ is continuous.


We prove (iii). Set Gε (z) ∶= z 2 + ε2 − ε for ε > 0. Then

2ε∣z∣
∣Gε (z) − ∣z∣∣ = √ ≤ε
z2 + ε2 + ε + ∣z∣

Part (ii) gives ∂i (Gε (u)) = G′ε (u)∂i u in the weak sense. We thus obtain from the Domi-
nated Convergence Theorem

∫ ∣u(x)∣∂i ϕ(x) dx = ε→0


lim+ ∫ Gε (u(x))∂i ϕ(x) dx
Ω Ω

= − lim+ ∫ G′ε (u(x))∂i u(x)ϕ(x) dx


ε→0 Ω
u(x)
= − lim+ ∫ √ ∂i u(x)ϕ(x) dx
ε→0 Ω u(x)2 + ε2
= − ∫ sign(u(x))∂i u(x)ϕ(x) dx.

This proves the claim for ∣u∣. The remaining statements are a consequence of u+ = 12 (∣u∣+u)
and u− = 2 (∣u∣ − u) and the linearity of weak derivatives.
1

Similarly, one can prove further elementary properties of Sobolev functions by exploiting
the denseness of smooth functions.

13
3 Lax-Milgram Theorem and Riesz' Representation
Theorem

We now show how Sobolev spaces may be used to solve Partial Dierential Equations.
To this end we go back to (1.1) and study the elliptic boundary value problem

−∆u(x) + c(x)u(x) = f (x) (x ∈ Ω), u(x) = g(x) (x ∈ ∂Ω).


As announced earlier, we gradually weaken the hypotheses on c, g, f in our considerations
related to this problem. Accordingly, our assumptions on the data are by no means
optimal. We start assuming g = 0, c = 1 and f ∈ L2 (Ω). In this case the above problem
takes the form

− ∆u(x) + u(x) = f (x) (x ∈ Ω), u(x) = 0 (x ∈ ∂Ω). (3.1)

We have shown in Section 1 that the corresponding weak formulation is given by


∫ ∇ϕ(x) ⋅ ∇u(x) + u(x)ϕ(x) dx = ∫ f (x)ϕ(x) dx ∀ϕ ∈ C0 (Ω), u∣∂Ω = 0.
Ω Ω

The boundary conditions are encoded in the solution space. We are thus looking for a
function u ∈ H01 (Ω) satisfying


∫ ∇ϕ(x) ⋅ ∇u(x) + u(x)ϕ(x) dx = ∫ f (x)ϕ(x) dx ∀ϕ ∈ C0 (Ω).
Ω Ω

Satz 3.1 (Riesz' Representation Theorem [21]). Let H be a Hilbert space


9 and l∶H →R
a bounded linear functional. Then there is a unique function u∈H such that

⟨u, ϕ⟩ = l(ϕ) for all ϕ ∈ H.

Beweis. If l u = 0 (and that's the only possible


is the trivial functional, we may take

choice). So assume l is nontrivial. In that case, we have ker(l) ⊊ H , so there is v ∈ ker(l)
such that l(v) = 1. This implies

l(ϕ − l(ϕ)v) = l(ϕ) − l(ϕ)l(v) = 0 for all ϕ ∈ H,


9 1/2
One indeed needs the completeness: l(f ) ∶= ∫0 f (x) dx denes a bounded linear functional on
2
C([0, 1]) equipped with the L -inner product, but l(⋅) ≠ ⟨v, ⋅⟩ for any v ∈ C([0, 1]) because 1[0,1/2] is
not continuous. By extending l to the completion of H one however gets l(⋅) = ⟨v, ⋅⟩ for some v in the
completion of C([0, 1]), namely v = 1[0,1/2] ∈ L ([0, 1]).
2

Where does the proof fail? It is the existence of v ∈ ker(l) such that l(v) = 1. For that,

one needs that the kernel (more generally: a closed subspace) admits an orthogonal complement,
i.e., H = ker(l) ⊕⊥ ker(l) . Recall that the construction of the orthogonal complement uses that

Cauchy sequences converge: For u ∈ H one denes its projection π(u) ∈ ker(l) onto ker(l) via
∥π(u) − u∥ = inf{∥v − u∥ ∶ v ∈ ker(l)} = min{∥v − u∥ ∶ v ∈ ker(l)}, so u = π(u) + (u − π(u)). From
this construction: u − π(u) ⊥ ker(l). The existence of a minimizer is due to the fact that the minimi-
zing sequence (which is a Cauchy sequence) converges. So here is the point where the completeness
of H is used.

14
so ϕ − l(ϕ)v ∈ ker(l) for all ϕ ∈ H. Since v is orthogonal to the kernel, we obtain

0 = ⟨v, ϕ − l(ϕ)v⟩ = ⟨v, ϕ⟩ − l(ϕ)⟨v, v⟩ for all ϕ ∈ H.

So the claim follows for u ∶= ⟨v, v⟩−1 v . (Uniqueness: clear.)

Korollar 3.2. Assume f ∈ L2 (Ω). Then (3.1) has a unique weak solution u ∈ H01 (Ω)
that satises
∥u∥1,2 ≤ ∥f ∥2 .

Beweis. We apply Riesz' Representation Theorem to the Hilbert space H0 (Ω), equipped
1

with inner product ⟨⋅, ⋅⟩1,2 , and the linear functional l ∶ H0 (Ω) → R given by l(ϕ) =
1

∫Ω f (x)ϕ(x) dx. This linear functional is bounded because of

∣l(ϕ)∣ ≤ ∥f ∥2 ∥ϕ∥2 ≤ ∥f ∥2 ∥ϕ∥1,2 .

So Riesz' Representation Theorem shows that there is precisely one u ∈ H01 (Ω) satisfying

∫ ∇u(x) ⋅ ∇ϕ(x) + u(x)ϕ(x) dx = ⟨u, ϕ⟩1,2 = l(ϕ) = ∫ f (x)ϕ(x) dx for all ϕ ∈ H01 (Ω).
Ω Ω

So (3.1) has precisely one weak solution. It satises

∥u∥21,2 = ⟨u, u⟩1,2 = ∫ f (x)u(x) dx ≤ ∥f ∥2 ∥u∥1,2


and the claim follows.

End Lec 03

In principle, one may apply Riesz' Representation Theorem not only to the standard
inner product, but any other equivalent one may be taken. So in fact this result allows to
solve a whole family of boundary problems and not only the particular one from (3.1).
Anyway, there is a more general result, which is called the Lax-Milgram Lemma. It
essentially tells us that the symmetry requirement of an inner product (i.e. ⟨u, v⟩ = ⟨v, u⟩
∀u, v ∈ H ) is not needed for a solution theory for problems like

a(u, v) = l(v) ∀v ∈ H. (3.2)

Satz 3.3 (Lax-Milgram Lemma [13]) . Let (H, ⟨⋅, ⋅⟩) be a (real) Hilbert space, let a(⋅, ⋅) ∶
H ×H →R be a bilinear form and l ∶ H → R a linear functional such that:

(i) a is bounded, i.e., there is C>0 such that ∣a(u, v)∣ ≤ C∥u∥∥v∥ for all u, v ∈ H ,
(ii) a is coercive, i.e., there is c>0 such that a(u, u) ≥ c∥u∥ 2
for all u ∈ H,
(iii) l is bounded, i.e., there is M >0 such that ∣l(v)∣ ≤ M ∥v∥ for all v ∈ H.

15
Then (3.2) has a unique solution u∈H satisfying ∥u∥ ≤ c−1 M .

Beweis:
For any given u ∈ H , the maps v ↦ a(u, v) and v ↦ l(v) are bounded linear functionals by
assumption (i) and (iii). So Riesz' Representation Theorem yields uniquely determined
elements wu , r ∈ H such that

a(u, v) = ⟨wu , v⟩, l(v) = ⟨r, v⟩.

Dene A ∶ H → H, u ↦ wu . Then we have the following equivalence:

a(u, v) = l(v) ∀v ∈ H ⇔ ⟨Au, v⟩ = ⟨r, v⟩ ∀v ∈ H ⇔ Au = r.

To nd a unique solution to this problem we apply Banach's Fixed Point Theorem to

T ∶ H → H, u ↦ u − ϱ ⋅ (Au − r)

where ϱ≠0 will be chosen suitably.

We rst show that A is linear and bounded. For any given u1 , u2 , v ∈ H and α1 , α2 ∈ R
we have

⟨A(α1 u1 + α2 u2 ), v⟩ = ⟨wα1 u1 +α2 u2 , v⟩


= a(α1 u1 + α2 u2 , v)
= α1 a(u1 , v) + α2 a(u2 , v)
= α1 ⟨wu1 , v⟩ + α2 ⟨wu2 , v⟩
= ⟨α1 wu1 + α2 wu2 , v⟩
= ⟨α1 Au1 + α2 Au2 , v⟩.

This proves the linearity. Moreover, for all u ∈ H,

∥Au∥2 = ⟨Au, Au⟩ = a(u, Au) ≤ C∥u∥∥Au∥.

This proves ∥Au∥ ≤ C∥u∥ for all u ∈ H.

Using these facts we now show that T is a contraction for suitable ϱ ≠ 0. Using that T is
linear, we get for all u1 , u2 ∈ H

∥T u1 − T u2 ∥2 = ∥T (u1 − u2 )∥2
= ∥u1 − u2 − ϱ ⋅ A(u1 − u2 )∥2
= ∥u1 − u2 ∥2 − 2ϱ⟨A(u1 − u2 ), u1 − u2 ⟩ + ϱ2 ∥A(u1 − u2 )∥2
= ∥u1 − u2 ∥2 − 2ϱa(u1 − u2 , u1 − u2 ) + ϱ2 ∥A(u1 − u2 )∥2
≤ ∥u1 − u2 ∥2 − 2ϱc∥u1 − u2 ∥2 + ϱ2 C 2 ∥u1 − u2 ∥2

16
Choosing ϱ = cC −2 we thus obtain

∥T u1 − T u2 ∥ ≤ 1 − c2 C −2 ∥u1 − u2 ∥.

So T is a contraction and hence posses precisely one xed point. As we have seen
above, this implies that (3.2) has a unique solution. This solution, call it u, satises
c∥u∥2 ≤ a(u, u) = l(u) ≤ M ∥u∥ so that ∥u∥ ≤ c−1 M is proved, too. ◻

We apply this result to problems of the form

− ∆u(x) + c(x)u(x) = f (x) (x ∈ Ω), u(x) = 0 (x ∈ ∂Ω). (3.3)

A weak solution to this problem u ∈ H01 (Ω) satises a(u, v) = l(v) for all v∈H where

a(u, v) ∶= ∫ ∇u(x) ⋅ ∇v(x) + c(x)u(x)v(x) dx,


l(v) ∶= ∫ f (x)v(x) dx.


Korollar 3.4. Assume f ∈ L2 (Ω), c ∈ L∞ (Ω) with c(x) ≥ µ > 0 almost everywhere.
unique weak solution u ∈ H0 (Ω) that satises
1
Then (3.3) has a

∥u∥1,2 ≤ min{1, µ}−1 ∥f ∥2 .

Beweis. We verify the assumptions of the Lax-Milgram Lemma. (Bi-)Linearity is clear,


(i) follows from

∣a(u, v)∣ ≤ ∫ ∣∇u(x)∣∣∇v(x)∣ + ∣c(x)∣∣u(x)∣∣v(x)∣ dx


≤ max{1, ∥c∥∞ } ∫ ∣∇u(x)∣∣∇v(x)∣ + ∣u(x)∣∣v(x)∣ dx



≤ max{1, ∥c∥∞ }∥u∥1,2 ∥v∥1,2 .

Moreover, ∣l(v)∣ ≤ ∥f ∥2 ∥v∥2 as before and

a(u, u) = ∫ ∣∇u(x)∣2 + c(x)∣u(x)∣2 dx


≥ min{1, µ} ∫ ∣∇u(x)∣2 + ∣u(x)∣2 dx



= min{1, µ}∥u∥21,2 .

So the Lax-Milgram Lemma proves the claim.

17
4 Approximation by smooth functions

In this section we want to show that smooth functions approximate Sobolev functions u∈
W k,p (Ω) fork ∈ N, 1 ≤ p < ∞. In particular we will prove the existence of approximating
sequences (un ) ⊂ C ∞ (Ω) ∩ W k,p (Ω) satisfying (2.2). We start with some preliminaries
about test functions.

4.1 Test functions

We rst need to establish the mere existence of test functions. The starting point is the
following fact about


⎪e−1/x if x > 0,
ζ(x) ∶= ⎨

⎪ x ≤ 0.
⎩0 if

Proposition 4.1. ζ ∈ C ∞ (R).

The main diculty is to inductively prove ζ (n) (x) = pn (x)x−2n e−1/x for all x ∈ (0, ∞)
−z m
where pn is a polynomial (of degree ≤ n). Using this and e z → 0 as z → ∞ for all
m ∈ N one gets the result. Notice that this counterexample shows that there are C ∞ (R)-
10
functions that are not real-analytic . The following result establishes the existence of
cut-o (or bump) functions, which are a special kind of test functions from C0∞ (Ω).

Proposition 4.2. Let Ω ⊂ RN be open, x0 ∈ Ω and 0 < r < R < dist(x0 , ∂Ω). Then there
is ψ∈ C0∞ (Ω) such that

0 ≤ ψ(x) ≤ 1, ψ(x) = 1 for x ∈ Br (x0 ), ψ(x) = 0 for x ∈ BR (x0 )c

as well as ∣∇ψ(x)∣ ≤ C∣R − r∣−1 for all x ∈ BR (x0 ) ∖ Br (x0 ) and some C > 0. In particular,
C0∞ (Ω) ⊋ {0}.

Beweis:
Choose ζ as in Proposition 4.1 and dene ψ1 ∈ C ∞ (R) via ψ1 (t) ∶= ζ(1 − t)ζ(t), in
particular ψ1 ≥ 0, supp(ψ1 ) = [0, 1]. As a consequence,


∫ ψ1 (s) ds
0 ≤ ψ2 ≤ 1, ψ2 ∣(−∞,0] ≡ 1, ψ2 ∣[1,∞) ≡ 0 where ψ2 (t) ∶= t .
∫R ψ1 (s) ds
∣x−x0 ∣−r
Then ψ(x) ∶= ψ2 ( R−r ) has all the desired properties. ◻

10
Notice that real-analytic functions have isolated zeros whereas the zero 0 is not an isolated one of ζ.

18
They are the building blocks for the following more general result that allows to localize
the considerations. We will see an example for this in the proof of the Meyers-Serrin
Theorem.

Satz 4.3 (Partition of Unity). Let I be a set and (Oi )i∈I a family of open subsets of RN ,
Ω ∶= ⋃i∈I Oi . Then there is a sequence (ϕj )j∈N ⊂ C0∞ (Ω) with the following properties:
(i) 0 ≤ ϕj (x) ≤ 1 for all x∈Ω for all j ∈ N,
(ii) supp(ϕj ) ⊂ Oi(j) for some i(j) ∈ I for all j ∈ N,

(iii) ∑j=1 ϕj (x) =1 for all x ∈ Ω,
(iv) For each compact set K⊂Ω there is an m∈N and an open set W such that

K⊂W ⊂Ω and ϕ1 (x) + . . . + ϕm (x) = 1 for all x ∈ W.

Beweis:
11
We dene the set of open balls

B = {Br (q) ∶ q ∈ Qn , r ∈ Q such that Br (q) ⊂ Oi for some i ∈ I}.

Since B is bijective to a subset of Qn × Q, it is countable. So we may write B = {Brj (qj ) ∶


j ∈ N}. Proposition 4.2 provides functions ψj ∈ C0∞ (Ω) satisfying

0 ≤ ψj ≤ 1, ψj = 1 on Brj /2 (qj ), ψj = 0 on Brj (qj )c . (4.1)

Then dene

ϕ1 ∶= ψ1 , ϕj ∶= (1 − ψ1 ) ⋅ . . . ⋅ (1 − ψj−1 )ψj (j ∈ N, j ≥ 2).

Then (i) and (ii) are clear and it remains to prove (iii),(iv).

One inductively proves

ϕ1 + . . . + ϕj = 1 − (1 − ψ1 ) ⋅ . . . ⋅ (1 − ψj ) for all j ∈ N.

For any given compact subset K ⊂ Ω we have12 K ⊂ ⋃m j=1 Brj /2 (qj ) =∶ W for some m ∈ N.
Hence (1 − ψ1 (x)) ⋅ . . . ⋅ (1 − ψm (x)) = 0 for x ∈ W . We obtain for all n ∈ N, n ≥ m

(ϕ1 + . . . + ϕn )(x) = 1 − (1 − ψ1 (x)) ⋅ . . . ⋅ (1 − ψm (x)) ⋅ . . . ⋅ (1 − ψn (x)) = 1


´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
=0

This proves (iii) and (iv). ◻

11
We call them Br (q) ∶= {x ∈ RN ∶ ∣x − q∣ < r}.
12
Here we use that K ⊂ Ω ⊂ {Brj /2 (q) ∶ j ∈ N}. Prove this!

19
A particularly important role is played by so-called molliers. These are test functions
φ ∈ C0∞ (RN ) with φ ≥ 0, supp(φ) ⊂ B1 (0) and ∫RN φ(x) dx = 1. Proposition 4.2 tells us
that such functions exist. Considering

1 x
φε (x) ∶= φ( )
εN ε
we obtain a mollifying sequence satisfying

φε ≥ 0, supp(φε ) ⊂ Bε (0), ∫ φε (x) dx = 1.


RN

4.2 Convolution with molliers

We rst dene the convolution of two nonnegative measurable functions f, g ∶ RN →


[0, ∞].
(f ∗ g)(x) = ∫ f (x − y)g(y) dy (x ∈ RN ).
RN

Proposition 4.4 (Young [27]). Assume 1 ≤ p, q, r ≤ ∞ and 1+ 1


r = 1
p + 1
q . Then:

∥f ∗ g∥r ≤ ∥f ∥p ∥g∥q .

Beweis:
We only consider 1 ≤ p, q, r < ∞. This is a consequence of the following application of
inequality (notice r > p, r > q and 1 =
r + pr + qr ) and Tonelli's Theorem:
1 1 1
Hölder's
r−p r−q

∥f ∗ g∥rr = ∫ ∣f ∗ g∣r dx
RN
r
≤∫ (∫ ∣f (y)∣∣g(x − y)∣ dy) dx
RN RN
p q r−p r−q r
=∫ (∫ (∣f (y)∣ r ∣g(x − y)∣ r ) ⋅ ∣f (y)∣ r ⋅ ∣g(x − y)∣ r dy) dx
RN RN
r−p r−q
p q
≤∫ (∫ ∣f (y)∣ ∣g(x − y)∣ dy) (∫
p q
∣f (y)∣ dy)
p
(∫ ∣g(x − t)∣ dt)
q
dx
RN RN RN RN

=∫ ∫ p ∥g∥q
∣f (y)∣p ∣g(x − y)∣q dy dx ⋅ ∥f ∥r−p r−q
RN RN

=∫ p ∥g∥q
∣f (y)∣p dy∥g∥qq ⋅ ∥f ∥r−p r−q
RN
= ∥f ∥rp ∥g∥rq .

The claim for p=∞ or q=∞ or r=∞ is proved analogously. ◻

In particular, convolution is a well-dened operationLp (RN )∗Lq (RN ) ⊂ Lr (RN ) with the

corresponding inequality. One checks that the following rules hold for f, g, h ∈ C0 (R ):
N

20
(i) f ∗ g = g ∗ f,
(ii) (f ∗ g) ∗ h = f ∗ (g ∗ h),
(iii) supp(f ∗ g) ⊂ supp(f ) + supp(g) = {x + y ∶ x ∈ supp(f ), y ∈ supp(g)},
(iv) ∂ α (f ∗ g) = ∂ α f ∗ g = f ∗ ∂ α g ,
(v) ∫RN (f ∗ g)h dx = ∫RN f (g ∗ h) dx.
We will prove these identities in the exercise sessions. The strong hypothesis f, g, h ∈
C0∞ (RN ) is chosen here for simplicity. Each item (i)-(v) actually holds for a much more
general class of functions.
End Lec 04

4.3 Approximation of Lp (Ω)-functions

For a given function u ∈ Lp (Ω), i.e., u 1Ω ∈ Lp (RN ), we consider the convolution pro-
ducts
uε (x) ∶= (φε ∗ u)(x) = ∫ φε (x − y)u(y) dy.

Our aim is to prove uε → u in L (R ).


p N
To this end, we rst subsequently approximate u
by the classes of functions:

ˆ step functions,

ˆ step functions with compact support inside Ω,


ˆ continuous functions with compact support inside Ω,
ˆ smooth functions with compact support inside Ω.
In the last step we will use convolution for mollication. In order to pass from step functi-
ons to continuous functions, we need to approximate indicator function 1A of measurable
subsets A⊂R N
with nite measure ∣A∣. We use the fact that the Lebesgue measure is
regular.

Lemma 4.5. Let A ⊂ RN measurable, ∣A∣ < ∞. Then, for every ε > 0, there is a compact
set K⊂R N
and an open set O ⊂ RN such that

K ⊂ A ⊂ O, ∣O ∖ K∣ < ε.

13
This can be used as follows. In the situation of the Lemma, consider the continuous
function
dist(x, Oc )
ϕ(x) ∶= (x ∈ RN ).
dist(x, Oc ) + dist(x, K)

13
Prove this!

21
We want to show that it is a good Lp -approximation for the indicator function whenever
1 ≤ p < ∞. It satises ϕ(x) = 1A (x) = 0 for x ∈ Oc as well as ϕ(x) = 1A (x) = 1 for x ∈ K .
Moreover, 0 ≤ ϕ − 1A ≤ 1 on R . Hence,
N

∥ϕ − 1A ∥pLp (RN ) = ∥ϕ − 1A ∥pLp (O∖K) ≤ ∥1∥pLp (O∖K) = ∣O ∖ K∣ < ε.

We now generalize this idea as follows.

Proposition 4.6. Let Ω ⊂ RN be open and 1 ≤ p < ∞. Then C0 (Ω) is dense in Lp (Ω).
Beweis:
Let u ∈ Lp (Ω). By construction of the Lebesgue measure there is a step function s=
∑M
j=1 aj 1Aj ∈ Lp (Ω) with
δ
∥u − s∥Lp (Ω) ≤ .
4
14
By the Dominated Convergence Theorem there is a compact subset K ⊂ Ω such that
δ
∥s∥Lp (Ω∖K) ≤ .
4
Following the ideas from above, we nd continuous functions ϕ1 , . . . , ϕM ∈ C0 (Ω) as above
with
δ
∥1Aj ∩K − ϕj ∥Lp (Ω) ≤ (j = 1, . . . , M ).
2M (∣aj ∣ + 1)
15
Supports inside Ω can be achieved because Aj ∩ K has compact support inside Ω. We
dene
M
v ∶= ∑ aj ϕj ∈ C0 (Ω).
j=1

This function satises

∥u − v∥Lp (Ω) ≤ ∥u − s∥Lp (Ω) + ∥s − s 1K ∥Lp (Ω) + ∥s 1K − v∥Lp (Ω)


δ
≤ + ∥s∥Lp (Ω∖K) + ∥s 1K − v∥Lp (Ω)
4
X X
δ δ X X
X
X
M M X
X
X
X
≤ + +X X
X ∑ a j 1 A ∩K − ∑ a j ϕ j X
X
X
4 4 X X
j
X
X
Xj=1 j=1 X p L (Ω)
M
δ
≤ + ∑ ∣aj ∣∥1Aj ∩K − ϕj ∥Lp (Ω)
2 j=1
δ M δ
≤ + ∑ ∣aj ∣
2 j=1 2M (∣aj ∣ + 1)
≤ δ.
14
Apply this for instance to the sequence vn ∶= s1Kn where Kn ∶= {x ∈ Ω ∶ ∣x∣ ≤ n, dist(x, ∂Ω) ≥ n1 }.
15
Indeed: If not you may consider ϕj χ instead, where χ∈ C0∞ satises 0 ≤ χ ≤ 1 and χ(x) = 1 on K .

22
This proves the claim. ◻

Proposition 4.7. Assume u ∈ Lp (RN ). Then uε ∈ Lp (RN ) ∩ C ∞ (RN ) with

∥uε − u∥Lp (RN ) → 0 as ε → 0, 1 ≤ p < ∞,


∥uε ∥Lp (RN ) ≤ ∥u∥Lp (RN ) for ε > 0, 1 ≤ p ≤ ∞.

Beweis:
uε ∈ C ∞ (RN ) follows from (iv). Young's inequality gives for 1≤p≤∞ and ε>0

∥uε ∥Lp (RN ) = ∥φε ∗ u∥Lp (RN ) ≤ ∥φε ∥L1 (RN ) ∥u∥Lp (RN ) = ∥u∥Lp (RN ) .

It remains to prove uε → u in Lp (RN ) for 1 ≤ p < ∞ as ε → 0. So let δ>0 be arbitrary.


yields v ∈ C0 (R ) with
N
Proposition 4.6

δ
∥u − v∥Lp (RN ) ≤ .
4
We then have

∥uε − u∥Lp (RN ) ≤ ∥uε − vε ∥Lp (RN ) + ∥vε − v∥Lp (RN ) + ∥v − u∥Lp (RN )
≤ ∥(u − v)ε ∥Lp (RN ) + ∥vε − v∥Lp (RN ) + ∥u − v∥Lp (RN )
≤ 2∥u − v∥Lp (RN ) + ∥vε − v∥Lp (RN )
δ
≤ + ∥vε − v∥Lp (RN ) .
2
So it remains to show that the latter term tends to zero as ε → 0. Choose K ⊂ RN a
compact superset of supp(v) + B1 (0). Then supp(vε ), supp(v) ⊂ K for 0 < ε < 1 and we
obtain

∥vε − v∥L∞ (K) = max ∣∫ φε (x − y)v(y) dy − v(x)∣


x∈K RN

= max ∣∫ φε (x − y)(v(y) − v(x)) dy∣


x∈K K

≤ sup ∣v(y) − v(x)∣ ∫ φε (x − y) dy


∣x−y∣≤ε, K
x,y∈K

≤ sup ∣v(y) − v(x)∣


∣x−y∣≤ε,
x,y∈K

→0 as ε → 0.

Since v is uniformly continuous, this last expression tends to zero as ε → 0. So we can


choose ε>0 so small that 0 < ε < ε0 implies

∥vε − v∥Lp (RN ) = ∥1 ⋅ (vε − v)∥Lp (K) = ∥1∥Lp (K) ∥vε − v∥L∞ (K) ≤ δ.
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
<∞

23
This proves the claim. ◻

Satz 4.8. Let Ω ⊂ RN be open. Then C0∞ (Ω) is dense in Lp (Ω).


Beweis:
Let u ∈ Lp (Ω) δ > 0. As above, the Dominated Convergence Theorem yields a
and
compact subset K ⊂ Ω such that ∥u∥Lp (Ω∖K) ≤
2 . For 0 < ε < dist(K, ∂Ω) set vε ∶=
δ

(u1K )ε ∈ C0∞ (Ω). Proposition 4.7 shows ∥u1K − (u1K )ε ∥Lp (RN ) ≤ 2δ for small enough
ε > 0, so
δ δ
∥u − vε ∥Lp (Ω) ≤ ∥u1K − (u1K )ε ∥Lp (Ω) + = ∥u1K − (u1K )ε ∥Lp (RN ) + ≤ δ,
2 2
which proves the claim. ◻

NB: This approximation by smooth functions with compact support is possible in Lp (Ω),
but in most cases not for W k,p (Ω) with k ≥ 1. The reason is that cutting away the regions
close to the boundary produces large derivatives. Later on, we will prove Poincaré's
inequality for functions W0k,p (Ω) that obviously does not hold for functions from W k,p (Ω)
for reasonable Ω ⊂ RN . This will provide an indirect proof of W0k,p (Ω) ⊊ W k,p (Ω).

4.4 Approximation of W k,p (Ω)-functions

Proposition 4.9. Ω ⊂ RN be an open


Let set and u ∈ W k,p (Ω), k ∈ N, 1 ≤ p < ∞. Let
χ∈ C0∞ (Ω). Then (uχ)ε → uχ in W k,p (Ω).
Beweis:
16
The product rule from Proposition 2.9 shows v ∶= uχ ∈ W k,p (Ω) and, as a function
trivially extended to RN , v ∈ W k,p (RN ) because χ (and all its derivatives) has com-
pact support inside Ω. So property (iv) of the convolution product for functions from
W k,p
(R )
N
(see the Exercise sheet) implies

∂ α vε = ∂ α (φε ∗ v) = φε ∗ ∂ α v = (∂ α v)ε .

Proposition 4.7 then gives for 0 < ε < ε0 suciently small

p
∥vε − v∥pW k,p (Ω) = ∑ ∥∂ α (vε − v)∥Lp (Ω)
∣α∣≤k
p
= ∑ ∥∂ α (vε − v)∥Lp (RN )
∣α∣≤k

16
Notice that χ ∈ C0∞ (Ω) implies that the proof of the product rule actually does not rely on the
approximation result that we are about to prove. So no danger of circular reasoning!

24
= ∑ ∥(∂ α v)ε − ∂ α v∥pLp (RN )
∣α∣≤k

≤δ . p

This proves the claim. ◻

Satz 4.10 (Meyers,Serrin (1964) [14]) . Let Ω ⊂ RN be open and 1 ≤ p < ∞. Then
∥⋅∥W k,p (Ω)
W k,p
(Ω) = C ∞ (Ω) ∩ W k,p (Ω) .

Beweis:
For k∈N dene the open sets

1
Ωj ∶= {x ∈ Ω ∶ dist(x, ∂Ω) > } , Uj ∶= Ωj ∖ Ωj−2 ,
j

where Ω−1 ∶= Ω0 ∶= ∅. Then (Uj )j∈N is an open covering of Ω. Choose some subordinate
partition of unity (ψj )j∈N , see Theorem 4.3.

Let u ∈ W k,p (Ω) and ε > 0 be arbitrary. Since supp(uψj ) ⊂ Uj ∖ Uj−2 ⊂ Ω, Proposition 4.9

yields a mollier φεj ∈ C0 (R ) such that vj ∶= (uψj )εj = φεj ∗ (uψj ) satises
n

supp(vj ) ⊂ Uj+1 ∖ Uj−3 , ∥vj − uψj ∥W k,p (Ω) ≤ ε2−j .

Set v ∶= ∑∞
j=1 vj . Then v ∈ C ∞ (Ω) since v is a locally nite sum, see Theorem 4.3 (iv).
Moreover,

X
X
X ∞ ∞ X
X
X ∞ ∞
∥v − u∥W k,p (Ω) = X
X
X
X
X ∑ v j − ∑ uψj
X
X
X
X
X ≤ ∑ ∥v j − uψj ∥ W (Ω)
k,p ≤ ∑ ε2−j ≤ ε.
X
X
Xj=1 j=1 X
X
XW k,p (Ω) j=1 j=1

This is all we had to show. ◻

This holds regardless of any regularity assumptions on the boundary of Ω. The situation
is dierent if we require the approximating sequence (un ) to be an element of C ∞ (RN ) ∩
W k,p (RN ) or even C0∞ (RN )∣Ω ∶= {u∣Ω ∶ u ∈ C0∞ (RN )}. For the proof of the following
result we refer to [1, Theorem 3.22, Ÿ4.11].

Satz 4.11. Let Ω ⊂ RN be a bounded Lipschitz domain and 1 ≤ p < ∞. Then

∥⋅∥W k,p (Ω)


W k,p (Ω) = C0∞ (RN )∣Ω .

25
This result extends to many important unbounded uniform Lipschitz domains where,
essentially, the boundary of Ω can be written as a graph of Lipschitz functions with
17
uniformly bounded Lipschitz constants. Notice that for generic open sets Ω ≠ RN we

have that the closure of C0 (R )∣Ω is a strict superset of the closure of
N
C0∞ (Ω). The case
Ω=R N
(no boundary at all) is the only important exception.

Lemma 4.12. Let k ∈ N, 1 ≤ p < ∞. Then W0k,p (RN ) = W k,p (RN ).

Beweis. We only prove the result for k = 1 to avoid technicalities (i.e. the product rule
for higher derivatives). Let u ∈ W (RN ) and choose a cut-o function ϕ ∈ C0∞ (RN )
1,p

as in Proposition 4.2 with 0 ≤ ϕ ≤ 1, ϕ(x) = 1 for ∣x∣ ≤ 1 and ϕ(x) = 0 for ∣x∣ ≥ 2. Set
ϕR (x) ∶= ϕ(x/R). We claim uϕR → u in W
1,p
(Ω). Indeed,

∥∂i (uϕR ) − ∂i u∥Lp (RN ) = ∥∂i u(ϕR − 1) + u∂i ϕR ∥Lp (RN )


≤ ∥∂i u(ϕR − 1)∥Lp (RN ) + ∥u∂i ϕR ∥Lp (RN )
1
≤ ∥∂i u(ϕR − 1)∥Lp (RN ) + ∥∂i ϕ∥∞ ∥u∥Lp (RN )
R
The rst term converges to zero because of the Dominated Convergence Theorem because
of ∣∂i u(ϕR − 1)∣ ≤ ∣∂i u∣ ∈ Lp (RN ) and ϕR → 1 pointwise almost everywhere. So we get

lim ∥∂i (uϕR ) − ∂i u∥Lp (RN ) = 0 (i = 1, . . . , N ).


R→∞

Similarly, the Dominated Convergence Theorem gives

lim ∥uϕR − u∥Lp (RN ) = 0.


R→∞

This proves uϕR → u ∈ W 1,p (RN ). So Proposition 4.9 (with Ω = RN , χ = ϕR ) shows that

the function (uϕR )ε ∈ C0 (R ) converges to uϕR
N
as ε → 0. Hence, C0∞ (RN ) is dense in
W (R ), i.e.,
1,p N

∥⋅∥W 1,p (RN )


W 1,p (RN ) ⊂ C0∞ (RN ) = W01,p (RN ) ⊂ W 1,p (RN ).

This proves the claim (for k = 1).

Korollar 4.13. Let Ω ⊂ RN be a bounded Lipschitz domain and u ∈ W k,p (Ω), k ∈ N, 1 ≤


p < ∞. Then uε → u in W k,p (Ω) as ε → 0.
Beweis:
Let (un ) ⊂ C0∞ (RN ) be an approximating sequence given by Theorem 4.11. Then

∥uε − u∥W k,p (Ω) ≤ ∥uε − (un )ε ∥W k,p (Ω) + ∥(un )ε − un ∥W k,p (Ω) + ∥un − u∥W k,p (Ω)
17
Ω = RN ∖ {0} is not such a generic open set.

26
≤ 2∥un − u∥W k,p (Ω) + ∥(un )ε − un ∥W k,p (Ω) .

So, for any given δ>0 we may choose n∈N such that

δ
∥un − u∥W k,p (Ω) ≤ .
4
On the other hand, by Proposition 4.9 for Ω = RN and χ ∈ C0∞ (RN ) satisfying χ=1 on
the support of un , we have un = un χ and hence

δ
∥(un )ε − un ∥W k,p (Ω) = ∥(un χ)ε − un χ∥W k,p (Ω) ≤ for 0 < ε < ε0 .
2
Taking these two estimates together, we obtain

δ δ
∥uε − u∥W k,p (Ω) ≤ 2 ⋅ + =δ for all ε ∈ (0, ε0 ).
4 2
This means uε → u in W k,p (Ω) as ε → 0, which is all we had to show. ◻
End Lec 05

5 Stein's Extension Theorem

In this section we want to prove Stein's Extension Theorem [23]. It states that for bounded
18
Lipschitz domains Ω ⊂ RN each function u ∈ W k,p (Ω) with k ∈ N, p ∈ [1, ∞] admits an
extension Eu ∈ W k,p (RN ) such that19

(Eu)∣Ω = u and ∥Eu∥W k,p (RN ) ≤ C ∗ ∥u∥W k,p (Ω) .

We will show (indirectly) that this requirement on the boundary regularity of Ω is close
to optimal. In fact, the result is not true for mere C -domains with 0,α
0 < α < 1 such
as Ω ∶= {(x, y) ∈ (0, 1) ∶ 0 < x < 1, 0 < y < x1+δ } with δ > 0. We recall that a bounded
Lipschitz domain Ω ⊂ R is such that ∂Ω ⊂ ⋃j=1 Uj ⊂ R
N M N
for open sets U1 , . . . , UM such
that, after permutation of coordinates,

∂Ω ∩ Uj = {(x′ , xN ) ∈ Uj ∶ xN = ψj (x′ )},


Ω ∩ Uj = {(x′ , xN ) ∈ Uj ∶ xN > ψj (x′ )}

for Lipschitz-continuous functions ψj ∶ RN −1 → R.


18
This actually also holds for unbounded domains with the strong local Lipschitz property. For typical
unbounded domains (half-spaces, paraboloids, etc.) this is satsied. Since the technicalities are even
larger, we do not insist on this generalization.
19
Such an extension operator is not uniquely determined since what happens away from Ω is not so
important. For instance, instead of u ↦ Eu one may consider u ↦ (Eu)χ where χ ∈ C0 (R ) satises
∞ N

χ∣Ω ≡ 1.

27
Why is it interesting and of practical relevance to have such an operator? Assume you
want to prove some estimate of the form

∥T u∥Lq (Ω) ≤ C(Ω)∥u∥W k,p (Ω)

for some linear operator T. We will assume this operator to satisfy

∥T (U ⋅ 1Ω )∥Lq (Ω) ≤ D∥T (U )∥Lq (RN ) (U ∈ W k,p (RN ))

for some positive constant D. This is for instance satised for the identity operator T = id
or integral operators T U (x) = ∫RN K(x, y)U (y) dy with nonnegative kernels K . We show
that estimates for such operators can be obtained with the aid of the corresponding
estimates on RN that are sometimes easier to prove. Having proved the latter and having
an extension operator E as above at our disposal, one obtains the desired estimate on Ω
for free. Indeed,

∥T u∥Lq (Ω) = ∥T (Eu ⋅ 1Ω )∥Lq (Ω)


≤ D∥T (Eu)∥Lq (RN )
≤ D C(RN )∥Eu∥W k,p (RN )
≤ D C(RN )C ∗ ∥u∥W k,p (Ω) .

We start with a technical tool known as Whitney Decomposition Theorem (or Whitney's
Covering Lemma [26, pp.67-69]). We say that W is a closed dyadic cube if

W = {2−k (z + w) ∶ w ∈ [0, 1]N } for some z ∈ ZN , k ∈ Z.

Two such dyadic cubes W, W ′ are called almost disjoint if W ∩ W′ is a null set. So they
intersect at some corner or along parts of their faces, but their interiors are disjoint. For
example, when N = 2, set

3 3 3 3
W1 ∶= [0, 1] × [0, 1], W2 ∶= [0, 1] × [1, 2], W3 ∶= [1, ] × [1, ], W4 ∶= [ , 1] × [ , 1].
2 2 4 4
Each of these cubes is dyadic, W1 , W2 , W3 are mutually almost disjoint, W3 , W4 and
W2 , W 4 are almost disjoint, too, but W1 , W4 are not. The following preliminary results
are essentially due to Caldéron and Zygmund [4, Section 3].

Lemma 5.1. Let Ω ⊂ RN be open and ∅ ⊊ Ω ⊊ RN . Then there are closed almost disjoint
dyadic cubes W1 , W2 , . . . with the following properties
(I) ⋃j∈N Wj = Ω,
(II) diam(Wj ) ≤ dist(Wj , Ωc ) ≤ 4 diam(Wj ) for all j ∈ N.
(III) Wi ∩ Wj ≠ ∅ implies
1
4 diam(Wi ) ≤ diam(Wj ) ≤ 4 diam(Wi ),
(IV) #{i ∈ N ∶ Wi ∩ Wj ≠ ∅} ≤ 12N for all j ∈ N.

28
Furthermore, for any xed κ ∈ (0, 14 ) there are ϕ1 , ϕ2 , . . . ∈ C0∞ (RN ) such that

(V) 0 ≤ ϕj ≤ 1, ϕj (x) = 1 for x ∈ Wj and ϕj (x) = 0 for dist(x, Wj ) ≥ κ diam(Wj ).


20
(In particular , ϕj (x) ≠ 0 and x ∈ Wi implies Wi ∩ Wj ≠ ∅.)

(VI) ∣∂ α ϕj (x)∣ ≤ Cα diam(Wj )−∣α∣ for all α ∈ NN


0 .

The proof of this result is quite technical. The interested reader may nd it in the
Appendix for completeness. We need this result in order to prove the existence of a
smooth version of the distance function. We study

δ(x) ∶= dist(x, Ωc ) = inf{∣x − z∣ ∶ z ∈ Ωc }.

Proposition 5.2. Let Ω ⊂ RN be open, ∅ ⊊ Ω ⊊ RN . Then there is a nonnegative function


dΩ ∈ C ∞ (Ω) and positive numbers Cα > 0 such that

5 δ(x) ≤ dΩ (x) ≤ 4 ⋅ 12 δ(x),


1 N
(i)

(ii) ∣∂ dΩ (x)∣ ≤ C̃α δ(x) ∈ NN


0 , ∣α∣ ≥ 1
α 1−∣α∣
for α

The function dΩ is called regularized distance function.

Beweis:
We choose dyadic cubes Wj and ϕj ∈ C0∞ (RN ) as in Lemma 5.1, dene


dΩ (x) ∶= d(x) ∶= ∑ diam(Wk )ϕk (x).
k=1
21
In view of property (IV) this sum is actually a nite sum Then cover a given compact
set K ⊂ Ω by nitely many Ox1 , . . . , Oxm , so ϕj = 0 on Ox1 ∪ . . . Oxm ⊃ K whenever
j ∈ N ∖ (Ix1 ∪ . . . ∪ Ixm ). So only nitely many ϕj are non-zero on each compact subset of
Ω. So ϕj ∈ C0∞ (Ω) for all j ∈ N implies d ∈ C ∞ (Ω).

We start by proving (i). Let x ∈ Ω, choose k∈N with x ∈ Wk , which is possible by (I).
Then
(II)
δ(x) ≤ dist(Wk , Ωc ) + diam(Wk ) ≤ 5 diam(Wk ). (5.1)

So the lower bound from (i) follows from ϕk (x) = 1 by (V) and

(5.1) 1
d(x) ≥ diam(Wk )ϕk (x) = diam(Wk ) ≥ δ(x).
5

To prove the upper bound we use

(II) (III) 1
δ(x) ≥ dist(Wk , Ωc ) ≥ diam(Wk ) ≥ diam(Wj ) if Wj ∩ Wk ≠ ∅. (5.2)
4
20
This is a consequence of (III)
21
Indeed, for any given point x ∈ Ω has an open neighbourhood Ox and a nite index set Ix ⊂ N such
that j ∈ N ∖ Ix implies ϕj ∣Ox = 0. This follows from (I),(V).

29
This implies

(V ) (5.2),(V ) (IV )
d(x) = ∑ diam(Wj )ϕj (x) ≤ ∑ 4δ(x) ≤ 4 ⋅ 12N δ(x).
W j ∩W k ≠∅ W j ∩W k ≠∅

It remains to prove (ii). Assume once again x ∈ Wk and 0 , ∣α∣ ≥ 1.


α ∈ NN Then

(V I)
∣∂ α d(x)∣ ≤ ∑ diam(Wj ) ⋅ Cα diam(Wj )−∣α∣
Wj ∩Wk ≠∅

= Cα ∑ diam(Wj )1−∣α∣
Wj ∩Wk ≠∅
(5.1) 1−∣α∣
1
≤ Cα ∑ ( δ(x))
Wj ∩Wk ≠∅
5
(IV )
≤ Cα 5∣α∣−1 12N δ(x)1−∣α∣ .
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
C̃α


End Lec 06

22
Another technical tool is the following .

Proposition 5.3. There are c, C > 0 and a continuous function ϕ ∶ [1, ∞) → R satisfying

(i) ∫1 ϕ(t) dt = 1,
∞ k
(ii) ∫1 t ϕ(t) dt = 0 for all k ∈ N,
−ct
(iii) ∣ϕ(t)∣ ≤ Ce for all t ∈ [1, ∞).
Beweis:
The basic idea is to use the residue theorem for (i) and Cauchy's integral formula for (ii).
We consider
e
ψ(z) ∶= exp ( − ω(z − 1)1/4 ) (z ∈ C ∖ [1, ∞))
πz
where ω ∶= e−iπ/4 = √ . One can check23 that the function
1−i
2

z = 1 + reiϕ ↦ (z − 1)1/4 = r1/4 eiϕ/4 (r ≥ 0, 0 < ϕ < 2π)


22
The full strength of this construction cannot be seen from the proof of the extension theorem for rst
order Sobolev spaces. For higher order Sobolev spaces, property (ii) is used for more k .
23
The Cauchy-Riemann equations for the real part u(r, θ) ∶= r1/4 cos(ϕ/4) and the imaginary part
v(r, θ) ∶= r1/4 sin(ϕ/4) read as follows:
1 1
∂r u = ∂θ v, ∂r v = − ∂θ u.
r r

30
is holomorphic in C ∖ [1, ∞). Hence, ψ is meromorphic and z ↦ ψ(z)z k is holomorphic
in C ∖ [1, ∞) for any given k ∈ N. So the integrals along piecewise smooth closed curves
γ z=0
encircling may be computed as follows:

1 e e 1
lim ψ(z)z = exp ( − ω(−1)1/4 ) = exp ( − ω ω̄) = .
∫ ψ = z→0
2πi γ π π π
Moreover,

∫ (⋅) ψ(⋅) = 0 k ∈ N.
k
for all
γ
Still, this is a statement about line integrals (Kurvenintegrale) in the complex plane
for complex-valued integrands and not about integrals along the real interval [1, ∞) for
real-valued integrands. So we approximate such an integral by suitable line integrals in
the complex plane.

24
We dene the following curve

γε ∶= γε1 ⊕ γε2 ⊕ γε3 ⊕ γε4 (ε > 0)

via

ˆ (A part of the parallel to the right half-axis at height I(z) = ε)


γε1 (t) = t + εi for t ∈ [1, ε−1 ].
ˆ (Almost full large circle from ε−1 + εi to ε−1 − εi, counterclockwise)
−2 1/2 it
γε2 (t) = (ε + ε )
2
e for t ∈ [θ, 2π − θ], θ ∶= arctan(ε2 ).
ˆ (A part of the parallel to the right half-axis at height I(z) = −ε)
−1 −1
γε3 (t) =1+ε − εi − t for t ∈ [1, ε ].
ˆ (Small half-circle around 1)
γε4 (t) = 1 + εe−it for t ∈ [π/2, 3π/2].
25
Then we use (z ∈ C, t > 1)
e e √
∣ψ(z)∣ = exp ( − R(ω(z − 1)1/4 )) ≤ exp ( − ∣z − 1∣1/4 / 2),
π∣z∣ π∣z∣
e
ψ(t + i0) = exp ( − ω(t − 1)1/4 ), (5.3)
πt
e
ψ(t − i0) = exp ( − ω(t − 1)1/4 eiπ/2 ) = ψ(t + i0).
πt
24
⊕ means concatenation here, so one after the other. If γ, η are w.l.o.g. continuous curves on [0, 1] → C
with γ(1) = η(0), then the continuous curve γ ⊕ η is given by



⎪γ(2t) if 0 ≤ t ≤ 12 ,
(γ ⊕ η)(t) = ⎨

⎩η(2t − 1) ≤ t ≤ 1.
1
⎪ if
2

25
For z = 1 + reiϕ with 0 < ϕ < 2π we have

ϕ−π 1 ∣z − 1∣1/4
R(ω(z − 1)1/4 ) = R(e−iπ/4 ⋅ r1/4 eiϕ/4 ) = r1/4 cos( ) ≥ r1/4 √ = √ .
4 2 2

31
From (5.3) and the Dominated Convergence Theorem we get

∫ 2 (⋅) ψ = o(1), ∫ 4 (⋅) ψ = o(1) ε → 0.


k k
as
γε γε

and thus

δk0
2πi ⋅ = ∫ (⋅)k ψ
π γε

= ∫ (⋅)k ψ + ∫ (⋅)k ψ + o(1)


γε1 γε3
1/ε
=∫ (t + εi)k ψ(t + εi) − (t − εi)k ψ(t − εi) dt + o(1)
1

=∫ tk ⋅ (ψ(t + i ⋅ 0) − ψ(t − i ⋅ 0)) dt + o(1)
1
(5.3) ∞
= 2i ∫ tk ⋅ Im(ψ(t + i ⋅ 0)) dt + o(1),
1 ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
=∶ϕ(t)

where
e (t − 1)1/4 (t − 1)1/4
ϕ(t) = exp (− √ ) sin ( √ ).
πt 2 2

Until now we have not seen any reason why Lipschitz domains, or Lipschitz-continuous
functions, play a particular role. The basic link between Lipschitz domains and the re-
gularized distance function d ∶= dRN ∖Ω is the following.

Proposition 5.4. ψ ∶ RN −1 → R be Lipschitz-continuous and Ω = {x ∈ RN ∶ xN >


Let
ψ(x )}. Then there is a c > 0 such that 0 ≤ c−1 d(x) ≤ ψ(x′ )−xN ≤ c d(x) for all x ∈ RN ∖Ω.

Beweis:
We have by Proposition 5.2 (i) for x ∈ RN ∖ Ω
d(x) ≤ 4 ⋅ 15N δ(x)
= 4 ⋅ 15N inf{∣x − z∣ ∶ z ∈ RN ∖ Ω}
≤ 4 ⋅ 15N ∣(x′ , xN ) − (x′ , ψ(x′ ))∣
= 4 ⋅ 15N (ψ(x′ ) − xN ).
To prove the lower bound for d let L denote the Lipschitz constant of ψ. Then Proposi-
26
tion 5.2 (i) gives

1
d(x) ≥ δ(x)
5
26 √
Here we distinguish between the Euclidean norm ∣ ⋅ ∣2 on R and ∣ ⋅ ∣1 ; they satisfy ∣v∣2 ≤ ∣v∣1 ≤ N ∣v∣2
N

for all v ∈ R . In the fourth line of this chain of inequalities ∣x − y ∣1 = ∑i=1 ∣xi − yi ∣.
N ′ ′ N −1

32
1 ′
= inf ∣(x , xN ) − (y ′ , ψ(y ′ ))∣2
y ′ ∈RN −1 5
1
≥ √ inf ∣(x′ , xN ) − (y ′ , ψ(y ′ ))∣1
5 N y ′ ∈RN −1
1
= √ inf [∣x′ − y ′ ∣1 + ∣xN − ψ(y ′ )∣]
5 N y ′ ∈RN −1
1
≥ √ inf [∣x′ − y ′ ∣1 + min{1, L−1 }∣(xN − ψ(x′ )) + (ψ(x′ ) − ψ(y ′ ))∣]
5 N y ′ ∈RN −1
1
≥ √ inf [∣x′ − y ′ ∣2 + min{1, L−1 }∣xN − ψ(x′ )∣ − min{1, L−1 } ∣ψ(x′ ) − ψ(y ′ )∣ ]
5 N y ′ ∈RN −1 ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
≤L∣x′ −y ′ ∣2

min{1, L−1 }
≥ √ ∣xN − ψ(x′ )∣
5 N
min{1, L−1 }
= √ (ψ(x′ ) − xN ).
5 N
This proves the claim. ◻

Dene

T v(t) ∶= t ∫ v(s)s−2 ds.
t

Lemma 5.5 (Hardy's Inequality). Let p ∈ [1, ∞]. Then ∥T v∥Lp ([0,∞)) ≤ p
p+1 ∥v∥Lp ([0,∞)) .

Beweis:
The case p=∞ results from


∣T v(t)∣ ≤ ∣t∣∥v∥∞ ∫ s−2 ds = ∥v∥∞ .
t

So we may assume p ∈ [1, ∞) from now on. Also, it suces to prove the estimate for
nontrivial v ∈ C0∞ (RN ) in view of Theorem 4.8. The idea is to use integration by parts.
We have

∞ ∞ p
∥T v∥pLp ([0,∞)) = ∫ tp (∫ v(s)s−2 ) dt
0 t
tp+1 ∞ p ∞ ∞ tp+1 ∞ p−1
=[ (∫ v(s)s−2 ) ] − ∫ ⋅ p (∫ v(s)s−2 ) ⋅ (−v(t)t−2 ) dt
p+1 t 0 0 p+1 t
p ∞ ∞ p−1
−2
=0+ ∫ (t ∫ v(s)s ) ⋅ v(t) dt
p+1 0 t
p ∞ p−1
≤ ∥(t ∫ v(s)s−2 ) ∥ ∥v∥Lp ([0,∞))
p+1 t Lp′ ([0,∞))
p
= ∥∣T v(t)∣p−1 ∥Lp′ ([0,∞)) ∥v∥Lp ([0,∞))
p+1

33
p
= ∥T v∥p−1
Lp ([0,∞))
∥v∥Lp ([0,∞)) .
p+1

Since ∥T v∥Lp ([0,∞)) is positive and nite, we may divide by ∥T v∥p−1


Lp ([0,∞))
and obtain the

result. ◻

Satz 5.6 (Stein). Let Ω ⊂ RN


be a bounded Lipschitz domain and k ∈ N, 1 ≤ p ≤ ∞. Then
has a bounded extension operator E ∶ W (Ω) → W k,p (RN ).
k,p

Beweis:
The proof is very advanced: we focus on k = 1, 1 ≤ p < ∞ and do not provide all details,
only the main ideas  Sorry!

The strategy is the following: We consider special Lipschitz domains (as in Propositi-
on 5.4) rst and prove the existence of an extension operator for those. This is the main
intellectual challenge of the proof. Afterwards, we generalize this to a general bounded
Lipschitz domain. Here one uses that the boundary of such a general Lipschitz domain
is a nite union of special Lipschitz domains, for which we have already constructed an
extension operator. So it remains to combine these nitely many extension operators to
27
some extension operator for the whole domain Ω using a partition of unity , see Theo-
rem 4.3.
It is again sucient to prove the estimates for smooth functions u ∈ C0∞ (RN ), see Theo-
rem 4.11. We x a function ϕ as in Proposition 5.3 and dene d ∶= d N
R ∖Ω to be the
regularized distance function of the complement of Ω.

Step 1: Special Lipschitz domains


We start our analysis with the construction of an extension operator for

Ωψ ∶= {x = (x′ , xN ) ∈ RN ∶ xN > ψ(x′ )}

where ψ ∶ RN −1 → R is Lipschitz-continuous. By Proposition 5.4 there is are c, C > 0 such


that
C(ψ(x′ ) − xN ) ≥ c d(x) ≥ ψ(x′ ) − xN ≥ 0 (x ∈ RN ∖ Ω). (5.4)

We dene an extension operator E as follows:



⎪u(x) , if xN ≥ ψ(x′ ), i.e., for x ∈ Ω,
(Eu)(x) ∶= ⎨ ∞
⎪ ′ ′
⎩∫1 u(x , xN + 2cd(x)t)ϕ(t) dt , if xN < ψ(x ), i.e.,
⎪ for x ∈ RN ∖ Ω.

This operator obviously satises Eu∣Ω = u∣Ω . We need to show u ∈ W 1,p (RN ) and

∥Eu∥W 1,p (RN ) ≤ C ∗ ∥u∥W 1,p (Ω) .


27
This should be seen as a technical step, so of minor theoretical importance that does not provide any
new idea.

34
Step 1(a): Lp -bound for Eu.
Choose A > 0 such that ∣ϕ(t)∣ ≤ At−2 for all t ≥ 1. This is possible in view of Propositi-
on 5.3 (iii). We have

∞ ∣u(x′ , xN + 2cd(x)t)∣
∣(Eu)(x)∣ ≤ A ∫ dt (xN < ψ(x′ ))
1 t2
The change of coordinates 2cd(x)t = ψ(x′ ) − xN + s gives for v(s) ∶= u(x′ , ψ(x′ ) + s)
∞ ∣u(x′ , ψ(x′ ) + s)∣
∣(Eu)(x′ , xN )∣ ≤ 2Acd(x) ∫ ds
xN −ψ(x′ )+2cd(x) (s + ψ(x′ ) − xN )2
(5.4) ∞ ∣v(s)∣
≤ 2AC(ψ(x′ ) − xN ) ∫ ds.
ψ(x′ )−xN s2
So Hardy's Inequality gives

ψ(x′ ) ψ(x′ ) ∞ p
∣v(s)∣
∫ ∣(Eu)(x′ , xN )∣p dxN ≤ (2AC)p ∫ (ψ(x′ ) − xN )p (∫ ds) dxN
−∞ −∞ ψ(x′ )−xN s2
∞ ∞ p
∣v(s)∣
= (2AC)p ∫ (t ∫ ds) dt
0 t s2
p p ∞
≤ (2AC)p ( ) ∫ ∣v(s)∣p ds
p+1 0
p p ∞
= (2AC)p ( ) ∫ ∣u(x′ , xN )∣p dxN .
p+1 ψ(x′ )

Integrating now with respect to x′ over RN −1 gives

ψ(x′ )
∥Eu∥pLp (RN ∖Ω) =∫ (∫ ∣(Eu)(x′ , xN )∣p dxN ) dx′
RN −1 −∞
p p ∞
′ ′
≤ (2AC)p ( ) ∫ ∫ ′ ∣u(x , xN )∣ dxN dx
p
p+1 RN −1 ψ(x )
p p
= (2AC)p ( ) ∥u∥pLp (Ω) .
p+1
As a consequence,

p
∥Eu∥Lp (RN ) ≤ ∥u∥Lp (Ω) + ∥Eu∥Lp (RN ∖Ω) ≤ (1 + 2AC ) ∥u∥Lp (Ω) . (5.5)
p+1

Step 1(b): Lp -bound for ∂i (Eu).


28
Next we have to prove the corresponding inequality for the derivatives. We will use
Eu ∈ C 1 (RN ). We don't prove this in detail, but only present one computation related
to this fact. We check ∇(Eu)(y) = ∇u(y) for all y ∈ ∂Ω. In fact we nd

(Eu)(x) − (Eu)(y) − ∇u(y) ⋅ (x − y) = u(x) − u(y) − ∇u(y) ⋅ (x − y)


28
More eort gives u ∈ C ∞ (RN ).

35
= O(∣x − y∣2 ) as x → y, x ∈ Ω.

Do we have the same for x → y, x ∈ RN ∖ Ω to conclude ∇(Eu)(y) = ∇u(y)? Using


d(x) ≤ C1 δ(x) ≤ C2 ∣x − y∣ for some C1 , C2 > 0 we get for some constants C > 0

∣Eu(x) − u(y) − ∇u(y) ⋅ (x − y)∣



= ∣∫ (u(x′ , xN + 2cd(x)t) − u(y ′ , yN )) ϕ(t) dt − ∇u(y) ⋅ (x − y)∣
1
∞ 1
= ∣∫ ∫ [∇u(y ′ + s(x′ − y ′ ), yN + s(xN + 2cd(x)t − yN )) − ∇u(y)]
1 0

⋅ (x′ − y ′ , xN + 2cd(x)t − yN ) ds ϕ(t) dt∣


∞ 1
≤ ∥u∥C 2 ∫ [∫ (s∣x′ − y ′ ∣ + 2cd(x)t + s∣xN + 2cd(x)t − yN ∣) (∣y − x∣ + ∣d(x)∣t) ds] ∣ϕ(t)∣ dt
1 0
∞ 1
≤ C∥u∥C 2 ∫ ∫ ∣x − y∣(1 + t) ⋅ ∣x − y∣(1 + t) ds ∣ϕ(t)∣ dt
1 0

≤ C∥u∥C 2 ∣x − y∣2 ∫ (1 + t)2 ∣ϕ(t)∣ dt
1
= O(∣x − y∣2 ).

Here we used:

ˆ u is smooth and dened on RN . This is important because we apply the Mean


1
Value Theorem in integral form: u(z1 ) − u(z2 ) = ∫0 ∇u(z2 + s(z1 − z2 )) dx ⋅ (z1 − z2 ).
It requires that u is continuously dierentiable in a neighbourhood of the segment
joining z1 , z2 . It is a priori unclear to ensure this with a function only dened on
Ω. Notice that we do not assume Ω to be convex.

ˆ The rst equality holds by denition of Eu ∫1 ϕ(t) dt = 1.
and

ˆ The second equality uses the Mean Value Theorem and ∫1 tϕ(t) dt = 0.

Dierentiation under the integral sign gives for x ∈ RN ∖ Ω, i.e., for xN < ψ(x′ ),

∂i (Eu)(x) = ∫ (∂i u(x′ , xN + 2cd(x)t) + 2ct∂i d(x)∂N u(x′ , xN + 2cd(x)t)) ϕ(t) dt.
1

We now estimate the gradient in a similar way as above. The estimate

∞ p
∥∫ ∂i u(x′ , xN + 2cd(x)t) dt∥ ≤ 2AC ∥∂i u∥Lp (Ω)
1 Lp (RN ∖Ω) p+1
p
≤ 2AC ∥u∥W 1,p (Ω)
p+1
follows as above: it suces to replace u by ∂i u. The second term is estimated similarly.
−3 ′
Choose B>0 such that ∣ϕ(t)∣ ≤ Bt for all t ∈ [1, ∞). Then we nd for xN < ψ(x )


∣∫ 2ct∂i d(x)∂N u(x′ , xN + 2cd(x)t) ⋅ ϕ(t) dt∣
1

36

≤ 2c ∫ ∣∂i d(x)∣∣∂N u(x′ , xN + 2cd(x)t)∣∣tϕ(t)∣ dt
1
∞ ∣∂N u(x′ , xN + 2cd(x)t)∣
≤ 2cB∥∂i d∥∞ ∫ dt.
1 t2
Again, one obtains via Hardy's Inequality

p
∥∂i (Eu)∥Lp (RN ) ≤ (1 + N ⋅ 2(A + B)C ) ∥u∥W 1,p (Ω) . (5.6)
p+1

So (5.5),(5.6) yield the claim for special Lipschitz domains.

Step 2: General bounded Lipschitz domains.


We consider a covering ∂Ω ⊂ ⋃j=1 Uj where the open sets
M
Uj are given by

∂Ω ∩ Uj = {(x′ , xN ) ∈ Uj ∶ xN = ψj (x′ )} ⊂ ∂Ωψj ,


Ω ∩ Uj = {(x′ , xN ) ∈ Uj ∶ xN > ψj (x′ )} ⊂ Ωψj

after some permutation of coordinates. Here, the ψj ∶ RN −1 → R are Lipschitz-continuous


functions. Step 1 provides extension operators Ej ∶ W (Ωψj ) → W 1,p (RN ). (This tells
1,p

us how to extend a given function across the boundary.) Next choose an open set U0 ⊂ Ω
such that Ω ⊂ ⋃j=0 Uj and dene E0 ∶ W (Ω), u ↦ u. As in Theorem 4.3 choose nitely
M 1,p

many test functions (ϕi )i∈I with supp(ϕi ) ⊂ Uj(i) and ∑i∈I ϕi (x) = 1 for all x ∈ Ω. Set

∑i∈I ϕi (x)(Ej(i) (ϕi u))(x)


Eu(x) ∶= χ(x) ⋅ .
∑i∈I ϕi (x)2

where χ ∈ C0∞ (RN ) is an arbitrary function satisfying χ(x) = 1 on Ω and ∑i∈I ϕi (x)2 ≠ 0
on supp(χ).

This operator is an extension operator because x ∈ Ω ∩ supp(ϕi ) implies x ∈ Uj(i) and


hence Ej(i) (ϕi u)(x) = (ϕi u)(x) = ϕi (x)u(x). Hence,

∑i∈I,x∈supp(ϕi ) ϕi (x) ⋅ ϕi (x)u(x)


x∈Ω ⇒ Eu(x) = 1 ⋅ = u(x).
∑i∈I,x∈supp(ϕi ) ϕi (x)2

Moreover, the triangle inequality, the product rule and Hölder's inequality give

∥Eu∥W 1,p (RN ) ≤ C ∥∑ ϕi (x)Ej(i) (ϕi u)(x)∥


i∈I W 1,p (RN )
≤ C ∑ ∥ϕi Ej(i) (ϕi u)∥W 1,p (RN )
i∈I
≤ C ∑ ∥ϕi ∥W 1,∞ (RN ) ∥Ej(i) (ϕi u)∥W 1,p (RN )
i∈I
≤ C ∑ ∥ϕi ∥W 1,∞ (RN ) Ci∗ ∥ϕi u∥W 1,p (Ω)
i∈I

37
≤ C ∑ ∥ϕi ∥2W 1,∞ (RN ) Ci∗ ⋅ ∥u∥W 1,p (Ω) .
i∈I

In the case k≥2 the proof is even more complicated because the derivatives of order ≥2
of the regularized distance function are not bounded any more. For instance, in the case
k=2 one needs to bound terms of the form


∂ij (Eu)(x) = ∂ij d(x) ∫ ∂N u(x′ , xN + 2cd(x)t) ⋅ tϕ(t) dt
1

in terms of the ∥u∥W 2,p (Ω) . Here one uses


∣∂ij (Eu)(x)∣ = ∣∂ij d(x)∣ ∣∫ (∂N u(x′ , xN + 2cd(x)t) − ∂N u(x′ , xN + 2cd(x))) ⋅ tϕ(t) dt∣
1
∞ 1
= ∣2cd(x)∂ij d(x)∣ ∣∫ 2
∂N u(x′ , xN + 2cd(x)(1 + st)) ds) ⋅ t2 ϕ(t) dt∣
(∫
0 1
1 ∞ ∣∂ 2 u(x′ , xN + 2cd(x)(1 + s)t)∣
≤ 2c∥d ∂ij d∥∞ ∫ (∫ N
dt) ds.
0 1 t2

The same techniques as above allow to bound this integral in terms of ∥∂N
2
u∥Lp (Ω) and
hence in terms of ∥u∥W 2,p (Ω).
End Lec 07

6 Sobolev's Embedding Theorem and Applications

Lq (Ω)-spaces a generic function u ∈ W k,p (Ω) belongs.


In this section we analyze to which
By denition we only know u ∈ L (Ω), but it turns out that this is not the end of the story.
p

Roughly speaking, assuming more and more weak dierentiability (i.e., large enough k )
the functions should become more and more regular, possibly ending up being bounded
or even continuous. In Example 2.8 we have seen that the function u(x) = ∣x∣γ , γ < 0 lies
in W 1,p
(B) if and only if γ > 1− N
p . Here, B was the unit ball centered at zero. This
prototypical singularity leads to the following observation:

ˆ For 1≤p<N elements of W 1,p (B) can be unbounded.


On the other hand, the larger p is, the milder the singularities have to be.

ˆ For p>N there is the chance that elements of W 1,p (B) cannot be unbounded.

We shall prove related statements in this section. To be more precise we seek for the
validity of continuous embeddings

W 1,p (Ω) ↪ Lq (Ω), i.e. ∥u∥Lq (Ω) ≤ C∥u∥W 1,p (Ω)

38
under suitable assumptions on p, q, Ω and some constant C>0 that does not depend on
u. Later on we will show how this aects the theory of our elliptic model boundary value
problem.

We start with a trivial remark: If Ω ⊂ RN is bounded, more generally: has nite measure,
then we actually have L (Ω) ⊂ L (Ω) for all q ∈ [1, p]. This follows from
p q

1
−1
∥u∥q = ∥u ⋅ 1∥q ≤ ∥u∥p ∥1∥ pq = ∥u∥p ∣Ω∣ q p .
p−q
´¹¹ ¹ ¹¸ ¹ ¹ ¹ ¶
<∞

So we see that on bounded domains lower integrability is for free. In typical unbounded
N
domains (R , half-spaces, strips, cylinders,. . . ) this is not the case as can be seen from
examples of the form x ↦ (1 + ∣x∣)−α for α > 0. This has two consequences: First, we will
not investigate such embeddings for q < p; they are practically irrelevant. Second, the
nal result for bounded domains will be slightly dierent from the general case.

We turn our attention towards higher integrability where the theory for bounded Lip-
29
schitz domains and unbounded domains is essentially the same. As mentioned above,
the question is how much integrability / continuity a generic function u ∈ W k,p (Ω) ad-
N
mits. We will reduce our analysis to R by means of an extension operator that we
constructed in the last section. We start with necessary conditions that are quite easy to
obtain. They t very well to the model singularity x ↦ ∣x∣γ discussed earlier.

Proposition 6.1. Let N, k ∈ N and p, q ∈ [1, ∞]. If a continuous embedding W k,p (RN ) ↪
L (R )
q N
exists, then necessarily
1 1 k
0≤ − ≤ . (6.1)
p q N
Beweis:
We assume that the embedding is continuous and take any nontrivial
30
u ∈ C0∞ (RN ). For
λ>0 set uλ (x) ∶= u(λx) we have ∂ α uλ (x) = λ∣α∣ (∂ α u)(λx). So we have

∥uλ ∥qLq (RN ) = ∫ ∣u(λx)∣q dx = λ−N ∫ ∣u(x)∣q dx = ∣λ∣−N ∥u∥qLq (RN )


RN RN
as well as

∥uλ ∥pW k,p (RN ) = ∑ ∥∂ α uλ ∥pLp (RN ) = ∑ λp∣α∣ ∥(∂ α u)(λ⋅)∥pLp (RN ) = ∑ λp∣α∣−N ∥∂ α u∥pLp (RN ) .
∣α∣≤k ∣α∣≤k ∣α∣≤k

Assuming W k,p (RN ) ↪ Lq (RN ) we thus obtain for some C>0


1

−N ⎛ p∣α∣−N ⎞
p
1
λ q ∥u∥Lq (RN ) ≤ C ∑ λ ∥u∥W k,p (RN ) ≤ C ′ (λ−N + λpk−N ) p ∥u∥W k,p (RN ) .
⎝∣α∣≤k ⎠
29
Wild domains with cusps, fractal structure etc. may cause problems. For Lipschitz domains these
exotic phenomena do not occur due to the presence of an extension operator.
30
u ∈ W k,p (RN ) to carry through the arguments.
Actually one may take any nontrivial function

39
This implies (consider λ→∞ resp. λ → 0)

N N N N
− ≤k− and − ≥− ,
q p q p

which is equivalent to (6.1). ◻

For 1≤p< N
k the conditions (6.1) mean p ≤ q ≤ N −kp . For p ≥ k all q ≥ p are allowed.
Nk N

Except for the case p =


N
k , where a slightly weaker result is true, this is already the
answer to the problem as we shall demonstrate in the following. We will concentrate on
the proofs for k = 1; to prove the statements for higher k one uses inductively

u ∈ W k,p (RN ) ⇔ u ∈ W 1,p (RN ) and ∂1 u, . . . , ∂N u ∈ W k−1,p (RN ).

So we come to one of the most important theorems in the Sobolev spaces: Sobolev's
Inequality. For 1<p<N it is due to Sobolev himself [22]. Gagliardo [8] extended the result
to p=1 and large classes of bounded domains. At almost the same time, Nirenberg [17]
proved a more general version on RN including the case p=1 as well (knowing that the
result holds on suciently nice domains Ω⊂R N
). To prove Sobolev's Inequality, we
will use the generalized Hölder inequality

1 1 1 1
∥∣v1 ∣ N −1 ⋅ . . . ⋅ ∣vN −1 ∣ N −1 ∥ ≤ ∥v1 ∥LN1−1
(R)
⋅ . . . ⋅ ∥vN −1 ∥LN1−1
(R)
, (6.2)
L1 (R)

see Exercise 1 on Exercise Sheet 1. Moreover we will need the Inequality of arithmetic
and geometric means

N 1 1 N
∏ aiN ≤ ∑ ai (a1 , . . . , aN ≥ 0). (6.3)
i=1 N i=1

Satz 6.2 (Sobolev (1938), Gagliardo (1958), Nirenberg (1959)) . Assume N ∈ N, N ≥ 2


and 1 ≤ p < N. Then we have the following inequality for all u ∈ W 1,p (RN ):

p(N − 1)
∥u∥ Np ≤√ ∥∇u∥Lp (RN ) .
L N −p (RN ) N (N − p)

Beweis:
By Lemma 4.12 and Exercise 4 of Exercise Sheet 1 it suces to prove this inequality for
u ∈ C0∞ (RN ). The crucial step is to prove the claim for p = 1, which we shall do rst. For
u ∈ C0∞ (RN ) and j ∈ {1, . . . , N } we have
xj
∣u(x)∣ = ∣∫ ∂j u(x1 , . . . , xj−1 , t, xj+1 , . . . , xN ) dt∣
−∞

≤ ∫ ∣∂j u(x1 , . . . , xj−1 , t, xj+1 , . . . , xN )∣ dt.


R

40
This implies

1
N
N N −1
∣u(x)∣ N −1 ≤ ∏ (∫ ∣∂j u(x1 , . . . , xj−1 , t, xj+1 , . . . , xN )∣ dt) .
j=1 R

Hence,

N 1
N N −1
∫ ∣u(x)∣ N −1 dx1 ≤ ∫ ∏ ( ∫ ∣∂i u(x)∣ dxi ) dx1
R R i=1 R
1 N 1
N −1 N −1
= ( ∫ ∣∂1 u(x)∣ dx1 ) ∫ ∏ ( ∫ ∣∂i u(x)∣ dxi ) dx1
R R i=2 R
1 N 1
(6.2) N −1 N −1
≤ ( ∫ ∣∂1 u(x)∣ dx1 ) ∏(∫ ∣∂i u(x)∣ dx1 dxi ) .
R i=2 R2

Integrating this inequality now with respect to x2 yields

N
∫ ∣u(x)∣ N −1 dx1 dx2
R2
1 N 1
N −1 N −1
≤ ∫ [( ∫ ∣∂1 u(x)∣ dx1 ) ∏(∫ ∣∂i u(x)∣ dx1 dxi ) ] dx2
R R i=2 R2
1 1 N 1
N −1 N −1 N −1
= (∫ ∣∂2 u(x)∣ dx1 dx2 ) ⋅ ∫ [( ∫ ∣∂1 u(x)∣ dx1 ) (∏∫ ∣∂i u(x)∣ dx1 dxi ) ] dx2
R2 R R i=3 R2
2 1 N 1
(6.2) N −1 N −1
≤ ∏(∫ ∣∂i u(x)∣ dx1 dx2 ) ∏(∫ ∣∂i u(x)∣ dx1 dx2 dxi ) .
i=1 R2 i=3 R3

In this way, one inductively shows for all k ∈ {1, . . . , N }


N
∫ ∣u(x)∣ N −1 dx1 dx2 . . . dxk
Rk
k 1 N 1
N −1 N −1
≤ ∏(∫ ∣∂i u(x)∣ dx1 dx2 . . . dxk ) ∏ (∫ ∣∂i u(x)∣ dx1 dx2 . . . dxk dxi )
i=1 Rk i=k+1 Rk+1

where the last product it to be understood as 1 for k = N . Finally, we use (6.3) and get
N −1 1
N N
N N
∥u∥ N = (∫ ∣u(x)∣ N −1 dx) ≤ ∏ (∫ ∣∂i u(x)∣ dx)
N −1 RN i=1 RN

(6.3)1 N
≤ ∑ ∫ ∣∂i u(x)∣ dx
N i=1 RN
1
≤ √ ∫ ∣∇u(x)∣ dx.
N RN
This proves the claim for p = 1.

41
N (p−1)
For 1<p<N and nontrivial u ∈ C0∞ (RN ) consider v ∶= ∣u∣ N −p u. Then

p(N − 1) NN(p−1)
v ∈ C01 (RN ) and ∇v = ∣u∣ −p ∇u.
N −p
So the result above implies

p(N −1)

∥u∥ NNp−p = ∥v∥ N


N −p N −1

1
≤ √ ∥∇v∥1
N
p(N − 1) N (p−1)
=√ ∥∣u∣ N −p ∣∇u∣∥1
N (N − p)
Hölder p(N − 1) N (p−1)
≤ √ ∥∣∣u∣ N −p ∥p′ ∥∣∇u∣∥p
N (N − p)
p(N − 1) N (p−1)

=√ ∥u∥ NNp−p ∥∣∇u∣∥p .


N (N − p) N −p

This gives the result. ◻

Bemerkung 6.3.
(a) The best constant in the Sobolev Inequality is known is given by

1 1/N
p − 1 1− p Γ(N )Γ(1 + N /2)
CS (p) ∶= π −1/2 N −1/p ( ) ( ) .
N −p Γ(N /p)Γ(1 + N − N /p)

In 1976, Talenti [24] proved that for 1 < p < N this value is attained precisely31 for
functions u(x) = (a + b∣x − x0 ∣ )1−N /p where a, b > 0, x0 ∈ RN . In the case p = 1 we
p′

have
Γ(1 + N /2)1/N
CS (1) = lim CS (p) = √
p↘1 πN
and a maximizing sequence for the inequality can be chosen to consist of functions
converging to the indicator function of a ball in a suitable sense. Federer, Fleming [5]
and Rishel [7, Theorem II] proved that the inequality for p=1 is related to the
so-called isoperimetric inequality

N −1
∣E∣ N ≤ CS (1) area(∂E)

for suciently regular subsets E ⊂ RN . It is maximized by balls.


31
A related article is [?]. The uniqueness was proved in [?, Corollary 8.2 (b)] using that any maximizer
N +2
must be a solution of the equation −∆u = u N −2 in R
N
that does not change sign. This may be proved
with techniques from the Calculus of Variations.

42
Satz 6.4 (Sobolev's Embedding Theorem) Assume N ∈ N, N ≥ . 2 and 1 ≤ p < N and
∗ Np
p ∶= N −p . Then there is a continuous embedding W 1,p (RN ) ↪ Lq (RN ) precisely for
p ≤ q ≤ p∗ , i.e., for 0 ≤ p1 − 1q ≤ N1 .

Beweis:
The Sobolev Inequality shows

∥u∥p∗ ≤ CS (p)∥∇u∥p ≤ CS (p)∥u∥1,p .

Moreover, ∥u∥p ≤ ∥u∥1,p . For any given q ∈ [p, p∗ ] we nd θ ∈ [0, 1] such that
1
q = θ
p + 1−θ
p∗ .
32
Then Hölder's inequality gives

∥u∥q ≤ ∥u∥θp ∥u∥1−θ


p∗ ≤ CS (p)
1−θ
∥u∥1,p .

Since this prefactor does not depend on u, the claim is proved. ◻

Korollar 6.5. Assume N ∈ N, N ≥ 2 and 1≤p<N and p∗ ∶= NN−p


p
. For any bounded

Lipschitz domain Ω ⊂ RN there is a continuous embedding W (Ω) ↪ Lq (Ω) for 1 ≤ q ≤


1,p

p∗ .

Beweis:
Theorem 5.6 provides an extension operator E ∶ W 1,p (Ω) → W 1,p (RN ). We thus obtain

∥u∥Lp∗ (Ω) ≤ ∥Eu∥Lp∗ (RN ) ≤ CS (p)∥Eu∥W 1,p (RN ) ≤ CS (p)∥E∥∥u∥W 1,p (Ω) .

Hence, for 1 ≤ q ≤ p∗ ,
1
∥u∥Lq (Ω) ≤ ∥u∥Lp∗ (Ω) ∥1∥ 1− 1 ≤ CS (p)∥E∥∥u∥W 1,p (Ω) ∣Ω∣ p′ .
L q p∗
(Ω)

This proves the claim. ◻

33
The Sobolev Inequality is false for p=N and the question is whether an embedding

W 1,N
(Ω) ↪ L (Ω) holds. The answer is no. In the Exercises we shall prove the followi-
ng.

Satz 6.6. Assume N ∈ N, N ≥ 2. Then there is a continuous embedding W 1,N (RN ) ↪


Lq (RN ) precisely for N ≤ q < ∞.
32
Notice
∥u∥q = ∥∣u∣θ ⋅ ∣u∣1−θ ∥q ≤ ∥∣u∣θ ∥ p ∥∣u∣1−θ ∥ p∗ = ∥u∥θp ∥u∥1−θ
p∗ .
θ 1−θ

This is sometimes called Lyapunov's Inequality.


33
Heuristically, this follows from CS (p) → ∞ as p ↗ N . But this is not a proof, just a way of remembering
things.

43
Korollar 6.7. Assume N ∈ N, N ≥ 2. For any bounded Lipschitz domain Ω ⊂ RN there
is a continuous embedding W
1,N
(Ω) ↪ Lq (Ω) precisely for 1 ≤ q < ∞.

There is a sharper statement about the limiting case p = N, which is particularly im-
portant in two spatial dimensions. The fundamental result in this direction is the Moser-
Trudinger Inequality [16, 25] which essentially says that any function u ∈ W01,N (Ω) satis-
es
N
eα∣u∣ ∈ L1 (Ω)
N −1

provided that Ω ⊂ RN is a bounded domain and α > 0.


End Lec 08

Example 6.8. We give an indirect proof of the fact that Hölder-domains do not admit
extension operators as in Stein's Extension Theorem. To keep the technicalities at a
moderate level, we concentrate on the case N = 2. We want to show that for 1 ≤ p < N = 2
we have
Np 2p
W 1,p (Ω) ↪
/ Lp (Ω), p∗ = =

N −p 2−p
where the non-Lipschitz domain is given by

Ω = {(x, y) ∈ R2 ∶ ∣x∣γ < y < 1, 0 < ∣x∣ < 1}, 0 < γ < 1.
To see this dene
γ+1
u(x, y) ∶= y −α where α ∶= .
γp∗
One checks:

ˆ (α + 1)p > 1 because of p < 2,


ˆ γ(1 − (α + 1))p > −1 because of γ < 1,

ˆ γ(1 − αp ) = −1 by denition of α,
−p(α+1)
ˆ ∣∇u(x, y)∣ ≤ ∣α∣y .

From this we get

1 1
∫ ∣∇u(x, y)∣ + ∣u(x, y)∣ d(x, y) ≤ ∫
p p
∫ (∣α∣ + 1)y −(α+1)p dy dx
Ω 0 ∣x∣γ
1 1
≤ (∣α∣ + 1) ∫ (1 − xγ(1−(α+1)p) ) dx
0 1 − (α + 1)p
∣α∣ + 1 1
≤ ∫ x
γ(1−(α+1)p)
dx
p(α + 1) − 1 0
< ∞,
On the other hand,

1 1
∫ ∣u(x, y)∣ d(x, y) = ∫ y −αp dy dx
p ∗ ∗

Ω 0 ∣x∣γ

44
1 1
=∫ (1 − xγ(1−αp ) ) dx

0 1 − αp∗

1 1
−1
= ∫ (x − 1) dx
γ 0
= ∞.
We infer
W 1,p (Ω) ↪
/ Lp (Ω).

This implies that Ω does not admit a bounded extension operator W 1,p (Ω) → W 1,p (RN ).

6.1 Applications

We return to the boundary value problem (3.3), which was given by

−∆u(x) + c(x)u(x) = f (x) (x ∈ Ω), u(x) = 0 (x ∈ ∂Ω).


In Corollary 3.4 we showed that this boundary value problem has a unique solution
for f ∈ L2 (Ω), c ∈ L∞ (Ω) with c(x) ≥ µ > 0. This was proved using the Lax-Milgram
Theorem. Using Sobolev's Inequality we may improve this result now. We only state the
corresponding result for a bounded Lipschitz domain and only for N ≥ 3. This is because
2N
of H 1 (Ω) ↪ L N −2 (Ω) only for N ≥ 3.

Korollar
2N
6.9. Ω ⊂ RN , N ≥ 3
Let be a bounded Lipschitz domain and assume f ∈
N
L N +2 (Ω), c ∈ L (Ω) with c(x) ≥ µ > 0
2 almost everywhere. Then (3.3) has a unique weak
solution u ∈ H0 (Ω) that satises
1

∥u∥1,2 ≤ CS (2) min{1, µ}−1 ∥f ∥ 2N .


N +2

Beweis:
We recall that we want to solve a(u, v) = l(v) for all v∈H where

a(u, v) ∶= ∫ ∇u(x) ⋅ ∇v(x) + c(x)u(x)v(x) dx,


l(v) ∶= ∫ f (x)v(x) dx.


Again we need to check the assumptions of the Lax-Milgram-Lemma.

34
The boundedness of a follows from

∣a(u, v)∣ ≤ ∫ ∣∇u(x)∣∣∇v(x)∣ + ∣c(x)∣∣u(x)∣∣v(x)∣ dx



34
We use here ∥∣∇u∣∥2 ≤ CS (2)∥u∥1,2 for all u ∈ H01 (Ω), but we had proved it for u ∈ H 1 (RN ) only. This is
no problem because any u ∈ H01 (Ω) may be extended trivially to RN (in striking contrast to H 1 (Ω)-
functions) so that Sobolev's Inequality applies. The best constant C > 0 satisfying ∥∣∇u∣∥2 ≤ C∥u∥1,2
is however smaller, but this is not our issue here.

45
≤ ∥∣∇u∣∥2 ∥∣∇v∣∥2 + ∥c∥ N ∥u∥ 2N ∥v∥ 2N
2 N −2 N −2

≤ ∥∣∇u∣∥2 ∥∣∇v∣∥2 + ∥c∥ N CS (2)2 ∥∣∇u∣∥2 ∥∣∇v∣∥2


2

≤ (1 + ∥c∥ N CS (2) )∥u∥1,2 ∥v∥1,2


2
2

The proof of coercivity uses c(x) ≥ µ > 0 and works as in the proof of Corollary 3.4.
Finally, l is a bounded linear functional because

∣l(v)∣ ≤ ∥f v∥1 ≤ ∥f ∥ 2N ∥v∥ 2N ≤ ∥f ∥ 2N CS (2)∥∣∇v∣∥2 ≤ ∥f ∥ 2N CS (2)∥v∥1,2 .


N +2 N −2 N +2 N +2

So the Lax-Milgram Lemma proves the claim. ◻

L∞ (Ω) ⊂ L 2 (Ω)
2N N
What do we gain here? Due to L2 (Ω) ⊂ L N +2 (Ω) and our assumptions
on the coecients are less restrictive than before. For instance, the function c may be
unbounded from above (not below, though!), which was not allowed before.

Question: What would be the corresponding result for N = 2? How should an improve-
ment on unbounded domains look like?

7 Morrey's Embedding Theorem and Applications

In the last Section we have shown that W 1,p (Ω) ↪ Lq (Ω) for 1 ≤ p < N and p ≤ q ≤ p∗ =
Np
N −p . In the case p = N one obtains the same result for p ≤ q < ∞, but not for q = ∞. Now
we want to show that in the case p>N actually more is true: W 1,p (Ω) ↪ C 0,α (Ω) where
α ∶= 1 − N
p ∈ (0, 1).

At rst sight it seems odd to prove such a result for elements of Sobolev spaces, that
are only dened up to a set of measure zero. Actually, elements of Sobolev spaces are,
just as elements of Lebesgue spaces, equivalence classes of functions that coincide almost
everywhere. Since continuous functions may become discontinuous (and vice versa) after
modication on a null set, it does not make sense to claim that any function u ∈ W 1,p (Ω)
should be automatically Hölder-continuous. We rather claim that the equivalence class
u ∈ W 1,p (Ω) contains a Hölder-continuous function. In other words, we will prove that
for any given u ∈ W 1,p (Ω) there is ũ ∈ C 0,α (Ω) such that u = ũ almost everywhere.

We dene the spaces C m,α (Ω) as follows:

∥u∥C(Ω) ∶= sup ∣u∣,



∥u∥C m (Ω) ∶= ∑ ∥∂ α u∥C(Ω) ,
α∈NN
0 ,∣α∣≤m

46
∣u(x) − u(y)∣
[u]C 0,α (Ω) ∶= sup
x,y∈Ω,x≠y ∣x − y∣α
∥u∥C m,α (Ω) ∶= ∥u∥C m (Ω) + ∑ [∂ α u]C 0,α (Ω)
α∈NN
0 ,∣α∣=m

We use the following result without proof (which is not too dicult).

Satz 7.1. Let Ω ⊂ RN , m ∈ N0 , α ∈ (0, 1]. Then the spaces

(C m (Ω), ∥ ⋅ ∥C m (Ω) ), (C m,α (Ω), ∥ ⋅ ∥C m,α (Ω) )

are Banach spaces.

To prove embeddings into Hölder spaces, we focus on the model situation Ω = RN . Using
an Extension operator, this turns out to be sucient. In the following result we denote
by ωN ∶= ∣B1 (0)∣ the volume of the unit ball in RN .

Satz 7.2 (Morrey). Let N < p < ∞, u ∈ W 1,p (RN ) and α ∶= 1 − Np ∈ (0, 1). Then we have

for almost all x, y ∈ R N

2p − N −1/p
∣u(x)∣ ≤ ω ∥u∥W 1,p (RN )
p−N N
∣u(x) − u(y)∣ 4p −1/p
≤ ω ∥∇u∥Lp (RN ) .
∣x − y∣α p−N N
Beweis:
We rst prove these inequalities for u ∈ C0∞ (RN ). So let x, y ∈ RN be arbitrary and dene
x+y ∣x−y∣
their midpoint by m ∶= 2 , set ρ ∶= 2 = ∣x − m∣ = ∣y − m∣. Then we have ∣Bρ ∣ = ωN ρN
and thus

ωN ρN ∣u(x) − u(y)∣
=∫ ∣u(x) − u(y)∣ dz
Bρ (m)

≤∫ ∣u(x) − u(z)∣ dz + ∫ ∣u(y) − u(z)∣ dz


Bρ (m) Bρ (m)
1
≤∫ ∫ (∣∇u(x + t(z − x))∣∣x − z∣ + ∣∇u(y + t(z − y))∣∣y − z∣) dt dz
Bρ (m) 0
1
≤ 2ρ ∫ ∫ (∣∇u(x + t(z − x))∣ + ∣∇u(y + t(z − y))∣) dt dz
Bρ (m) 0
1
≤ 2ρ ∫ (∫ t−N ∣∇u(z)∣ dz + ∫ t−N ∣∇u(z)∣ dz) dt
0 Bρt (x+t(m−x)) Bρt (y+t(m−y))
1 p−1 p−1
≤ 2ρ ∫ t−N ∥∇u∥Lp (Bρt (x+t(m−x))) (∣Bρt (x + t(m − x))∣ p + ∣Bρt (y + t(m − y))∣ p ) dt
0

47
1 p−1
≤ 2ρ∥∇u∥Lp (RN ) ∫ t−N ⋅ 2(ωN (ρt)N ) p dt
0
p−1 1
= 4ρ(ωN ρN ) p ∥∇u∥Lp (RN ) ∫ t−N /p dt
0
1− N 4p −1/p
≤ ωN ρN ∣x − y∣ p ⋅ ω ∥∇u∥Lp (RN ) .
p−N N

In the last line we used 2ρ = ∣x − y∣. The second estimate is proved similarly:

ωN ∣u(x)∣ ≤ ∫ ∣u(x) − u(y)∣ + ∣u(y)∣ dy


B1 (x)

≤∫ ∣u(x) − u(y)∣ dy + ∥u∥Lp (B1 (x)) ∣B1 ∣1/p


B1 (x)
1
1/p′
≤∫ (∫ ∣∇u(x + t(y − x))∣∣x − y∣ dt) dy + ∥u∥Lp (RN ) ωN
B1 (x) 0
1
∣∇u(y)∣ dy) t−N dt + ∥u∥Lp (RN ) ωN
1/p′
≤∫ (∫
0 Bt (x)
1
∥∇u∥Lp (Bt (x)) ⋅ ∣Bt (x)∣1/p t−N dt + ∥u∥Lp (RN ) ωN
1/p′
≤∫

0
1
t−N /p dt + ∥u∥Lp (RN ) ωN
1/p′ 1/p′
≤ ωN ∥∇u∥Lp (RN ) ∫
0
−1/p p −1/p
≤ωN ⋅ (ωN ∥∇u∥Lp (RN ) + ωN ∥u∥Lp (RN ) )
p−N
−1/p p
≤ ωN ⋅ ωN ∥u∥W 1,p (RN ) .
p−N
This prove the inequalities for test functions u. To treat the general case consider a se-
quence (un ) un → u in W 1,p (RN ) and un → u almost everywhere.
of test functions with
Then the above estimate shows that (un ) is a Cauchy sequence in C (RN ). Since this
0,α

is a Banach space, there is ũ ∈ C (R ) with un → ũ in C (R ). We this conclude


0,α N 0,α N

u = limn→∞ un = ũ almost everywhere. ◻

Korollar 7.3. Let N ∈ N and N < p < ∞. Then there is a continuous embedding
W 1,p (RN ) ↪ C 0,α (RN ) where α = 1 − Np . For bounded Lipschitz domains Ω ⊂ RN we
have W 1,p (Ω) ↪ C 0,α (Ω).

Bemerkung 7.4.
(a) We already saw: The result is not true for p = N ; W 1,p −functions need not even be
bounded.

(b) The embedding W 1,p (Ω) ↪ C 0,α (Ω) is true for p = ∞, so W 1,∞ -functions coincide
with a Lipschitz-continuous function almost everywhere. We did not include this to
avoid technicalities. Notice that the reasoning by density in our proof from above

48
does not work, at least not directly, for such functions. The interested reader may
have a look at Rademacher's Theorem.

(c) In 1951 Caldéron [3] proved the following. If u ∈ W 1,p (Ω) with N < p ≤ ∞, then u
coincides almost everywhere with some dierentiable function. The derivatives of
the latter coincide with the corresponding weak derivatives of u almost everywhere.
The argument is based on Lebesgue's dierentiation theorem.

End Lec 09

8 Continuous Embeddings of Sobolev spaces: A summary

We start with recapitulating the important continuous embeddings of rst order Sobolev
spaces W 1,p (RN ) with 1 ≤ p < ∞. In the past lectures we have proved the following:

Np
(i) (Theorem 6.4) If 1 < p < N : W 1,p (RN ) ↪ Lq (RN ) for p≤q≤ N −p
(ii) (Theorem 6.6) If p = N : W 1,p (RN ) ↪ Lq (RN ) for p≤q<∞
(iii) (Corollary 7.3) If p > N: W 1,p
(R ) ↪ C
N 0,α
(R )
N
for α=1− N
p

These result show to which extent functions belonging to W 1,p (RN ) are better than
ordinary L (R )-functions; the existence of a weak gradient in L (R ; R ) regularizes
p N p N N

the function. Singularities become milder (1 < p < N or p = N ) or even become impossible
(p > N ).

What is the consequence for higher order Sobolev spaces? Assume 1 ≤ p < N . For any
u ∈ W (R ) we have ∂1 u, . . . , ∂N u ∈ W (R ) with weak
2,p N 1,p N
derivatives ∂j (∂i u) = ∂ij u ∈
Lp (RN ). Accordingly, we may apply the embeddings for rst order Sobolev spaces to get
∂1 u, . . . , ∂N u ∈ Lq (RN ) for q as in (i). This implies u ∈ W 1,q (RN ), hence

Np
W 2,p (RN ) ↪ W 1,q (RN ) for p≤q≤ .
N −p

Now we can use the embeddings of W 1,q (RN ) to go further, e.g.,

Np Nq
W 2,p (RN ) ↪ W 1,q (RN ) ↪ Lr (RN ) for p≤q≤ , q < N, q ≤ r ≤ .
N −p N −q

This gives in the case 1≤p≤ N


2:
Np
(i) If 1≤p< N
2: W 2,p (RN ) ↪ Lr (RN ) for p≤r≤ N −2p .

(ii) If p= N
2: W 2,p (RN ) ↪ Lr (RN ) for p ≤ r < ∞.
For p> N
2 we get

49
1, NN−p
p
(iii) If
N
2 < p < N : W 2,p (RN ) ↪ W (RN ) ↪ C 0,α (RN ) for α=2− N
p.
(iii) If p = N : W 2,p (RN ) ↪ ⋂p≤q<∞ W 1,q (RN ) ↪ ⋂0<α<1 C 0,α (RN ).
(iv) If p > N : W 2,p (RN ) ↪ C 1,α (RN ) for α = 1 − Np .
(because u, ∂1 u, . . . , ∂N u ∈ C (RN ) implies u ∈ C 1,α (RN ))
0,α

In such a way one obtains the following embeddings.

Np
(A) If 1≤p< N
k: W k,p (RN ) ↪ Lr (RN ) for p≤r≤ N −kp .

(B) If p= N
k: W
k,p
(RN ) ↪ Lr (RN ) for p ≤ r < ∞.
(C) If p> k and p ∉ N:
N N
W k,p
(R ) ↪ C l,α (RN )
N
for l = k − ⌊ Np ⌋ − 1, α ∶= 1 + ⌊ Np ⌋ − N
p

The corresponding embeddings on bounded Lipschitz domains are the same up to repla-
cing p≤r by 1≤r in (A) and (B).

An excursion: The Calculus of Variations was invented to prove the existence of mini-
mizers of a given energy functional. In physical applications this can be

1
I ∶ H01 (Ω) → R, u↦ ∫ ∣∇u∣ dx − ∫ f u dx
2
2 Ω Ω

where, for simplicity, f ∈ L2 (Ω). One can show that such functionals indeed have unique
minimizers u that are (again unique) weak solutions of the boundary value problem

−∆u = f on Ω, u=0 on ∂Ω.

In nonlinear models, other functionals are of importance, for instance

1 1
J ∶ H01 (Ω) → R, u↦ ∫ ∣∇u∣ dx + ∫ ∣u∣ dx − ∫ f u dx
2 q
2 Ω q Ω Ω

For which q > 1 is this well-dened? Sobolev's Embedding Theorem shows that this
functional is well-dened (even continuously dierentiable) provided that 1<q≤ 2N
N −2 .
Having proved the existence of a unique minimizer, which one can do with abstract
35
methods , one has found a weak solution to the nonlinear boundary value problem

−∆u + ∣u∣q−2 u = f on Ω, u=0 on ∂Ω.


Being interested in other energy functionals, say

1 1
J ∶ W01,p (Ω) → R, u↦ ∫ ∣∇u∣ dx + ∫ ∣u∣ dx − ∫ f u dx
p q
2 Ω q Ω Ω
Np
one has a good theory available for p≤q≤ N −p . The conclusion is that Sobolev's Em-
bedding Theorem allows to treat nonlinear problems with the methods of the Calculus
of Variations.
End Lec 10
35
The Direct Method of the Calculus of Variations where lower semi-continuity and compact embed-
dings are heavily exploited. This is the topic on a course on nonlinear boundary value problems.

50
9 Compact Embeddings: The Rellich-Kondrachov Theorem
and beyond

In the previous two sections we discussed the existence of (continuous) embeddings


W 1,p (Ω) ↪ Lq (Ω) or W 1,p (Ω) ↪ C 0,α (Ω) under suitable assumptions on p, q, α, Ω. Now
we want to show that these embeddings are not only continuous, but even compact. Here
the crucial assumption turns out to be that

(i) Ω is bounded and


Np
(ii) q< N −p resp. α<1− N
p . (Non-endpoint cases)

We will see in the Exercises that these assumptions are natural in the sense that typi-
cally the embeddings are not compact for unbounded Ω or Sobolev-critical exponents.
Compactness is of utmost importance for the whole theory of analysis, notably elliptic
boundary value problems and the calculus of variations. We will later provide some more
information about this. We start with the denition of compactness.

Denition 9.1. A linear operator K ∶ X → Y between Banach spaces X, Y is called com-


pact if for every bounded sequence (xn )n∈N ⊂ X the sequence (Kxn )n∈N has a convergent
subsequence in Y.

We mention a few basic observations about compact operators:

ˆ Every compact operator is bounded, but the converse is not true.


(Do not dare to forget this fact!!!)

ˆ If X, Y are nite-dimensional Banach spaces, so X ≃ Rk , Y ≃ Rl for some k, l ∈ N0 ,


then every bounded linear operator X →Y is compact.

ˆ The identity map ι∶X →X is compact if and only if X is nite-dimensional. The


proof of this fact relies on Riesz' Lemma.

ˆ The operator A ∶ l2 (N) → l2 (N), (cn )n∈N ↦ (an cn )n∈N is compact if and only if
(an )n∈N is a null sequence.

ˆ A linear compact operator A ∶ Lp (Ω) → Lp (Ω) is characterized by the property that


there is a sequence (An )n∈N of bounded linear operators with nite-dimensional
range (i.e., {An x ∶ x ∈ X} is a nite-dimensional subspace of Y) and

∥An − A∥X→Y = sup ∥(An − A)f ∥Y → 0 as n → ∞.


f ∈X,∥f ∥X =1

We now want to investigate when the inclusion maps

ι ∶ W 1,p (Ω) → Lq (Ω), u ↦ u, ι ∶ W 1,p (Ω) → C 0,α (Ω), u ↦ u

are compact.

51
9.1 Compact Embeddings into Hölder spaces

The starting point of compactness investigations is the Ascoli-Arzelà Theorem from 1884
? ?
[ ] resp. 1894 Arzelà [ ]. It relies on the notion of equicontinuity.

Denition 9.2 (Equicontinuity) . LetK ⊂ RN be closed. A sequence (fn )n∈N in C(K)


is called equicontinuous if for all ε > 0 there is δε > 0 such that for all x, y ∈ K

∣x − y∣ < δε ⇒ ∣fn (x) − fn (y)∣ < ε for all n ∈ N.

An important class of equicontinuous families of functions are uniformly Hölder-continuous


functions (fn )n∈N satisfying [fn ]C 0,α (K) ≤ M for some α, M > 0. Indeed, in that case

∣x − y∣ < δε ∶= (M −1 ε)1/α implies

∣fn (x) − fn (y)∣ ≤ M ∣x − y∣α < M δεα = ε.

Satz 9.3 (Ascoli(1884), Arzelà (1894)) . Let K ⊂ RN be bounded and closed and let
(fn )n∈N ⊂ C(K) be a pointwise bounded and equicontinuous sequence. Then (fn )n∈N has
a uniformly convergent subsequence.

Beweis:
We choose x1 , x2 , . . . such that Q ∩ K = {xi ∶ i ∈ N}. Then (fn (x1 ))n∈N is (by assumption)
a bounded sequence of real numbers. So there is an injective map ψ1 ∶ N → N such that
the subsequence (fψ1 (n) (x1 ))n∈N converges. Next, (fψ1 (n) (x2 ))n∈N is a bounded sequence
of real numbers and we nd another injective map ψ2 ∶ N → N such that the subsequence
(fψ1 (ψ2 (n)) (x2 ))n∈N converges. Since it is a subsequence of the previous subsequence,
we get that both (fψ1 (ψ2 (n)) (x1 ))n∈N , (fψ1 (ψ2 (n)) (x2 ))n∈N converge. Inductively, one nds
injective maps ψ1 , ψ2 , . . . , ψk ∶ N → N such that the sequences (fΨ (n) )(xi ))n∈N converge
k
for i = 1, . . . , k where Ψk (n) ∶= ψ1 (ψ2 (. . . (ψk (n)))). Dene the diagonal sequence gn (x) ∶=
fΨn (n) (x). Then (gn )n∈N is a subsequence of (fn )n∈N and we want to show that it is a
Cauchy sequence in C(K).

Indeed, given ε>0 we choose x1 , . . . , xM ∈ K such that

M
K ⊂ ⋃ Bδ (xi ) for δ = δε/3 as above.
i=1

Then choose n0 ∈ N such that

ε
∣gn (xi ) − gm (xi )∣ ≤ for i = 1, . . . , M and n, m ≥ n0 .
3
Then we have for all x∈K and n, m ≥ n0

∣gn (x) − gm (x)∣ ≤ min [∣gn (x) − gn (xi )∣ + ∣gn (xi ) − gm (xi )∣ + ∣gm (xi ) − gm (x)∣]
i=1,...,M

52
ε
≤ + min [∣gn (x) − gn (xi )∣ + ∣gm (xi ) − gm (x)∣]
3 i=1,...,M
≤ε
by choosing xi (dependent on x) such that ∣x − xi ∣ < δε/3 , which is possible as we saw
above. So (gn )n∈N is a Cauchy sequence in the Banach space (C(K), ∥ ⋅ ∥C(K) ) and thus
converges uniformly to some continuous function. ◻

Satz 9.4. Let Ω ⊂ RN be bounded and 1 > α > β > 0. Then the embedding C 0,α (Ω̄) ↪
C 0,β
(Ω̄) 36
is compact .

Beweis:
Let (fn )n∈N be bounded in C 0,α (Ω) with M ∶= supn∈N ∥fn ∥0,α < ∞. We have seen above
that (fn )n∈N is then equicontinuous. Moreover, this sequence is pointwise bounded due to
∥fn ∥C(Ω) ≤ ∥fn ∥0,α ≤ M for all n ∈ N. So the Ascoli-Arzelà Theorem provides a uniformly
convergent subsequence (fnj )j∈N with limit f ∈ C(Ω̄). We want to show fnj → f in

C 0,α (Ω). To simplify the notation we write fj instead of fnj ,

To see this we estimate as follows for x ≠ y:


∣(f − fj )(x) − (f − fj )(y)∣ ∣fi (x) − fi (y)∣ + ∣fj (x) − fj (y)∣
≤ lim ≤ 2M ∣x − y∣α−β , (9.1)
∣x − y∣β i→∞ ∣x − y∣ β

∣(f − fj )(x) − (f − fj )(y)∣ 2


≤ ∥f − fj ∥C(Ω̄) . (9.2)
∣x − y∣β ∣x − y∣β
For any given ε>0 choose j0 ∈ N such that

ε ε ε α−β
β
∥f − fj ∥C(Ω̄) < min { , ( ) } for all j ≥ j0 .
2 4 4M
Then we get for x≠y∈Ω
ε α−β
1 ∣(f − fj )(x) − (f − fj )(y)∣ (9.1) ε
∣x − y∣ < ( ) ⇒ ≤ 2M ∣x − y∣α−β < ,
4M ∣x − y∣β 2
ε α−β
1 ∣(f − fj )(x) − (f − fj )(y)∣ (9.2) 2∥f − fj ∥C(Ω̄) ε
∣x − y∣ ≥ ( ) ⇒ ≤ < .
4M ∣x − y∣β ∣x − y∣β 2
This implies
ε ε
∥f − fj ∥0,β = ∥f − fj ∥C(Ω̄) + [f − fj ]0,β < + = ε,
2 2
which is all we had to show. ◻

With a little bit of technical work, one may prove the more general statement that
C k1 ,α1 (Ω̄) ↪ C k2 ,α2 (Ω̄) is compact provided that k 1 , k 2 ∈ N0 und α1 , α2 ∈ (0, 1) satisfy
k1 + α1 > k2 + α2 .
36
We haven't even proved the weaker statement C 0,α (Ω̄) ⊂ C 0,β (Ω̄) yet. It is a consequence of this result.

53
Korollar 9.5. ⊂ RN be a bounded Lipschitz domain and assume N < p < ∞. Then
Let Ω
the embedding W (Ω) ↪ C 0,β (Ω) is compact provided that 0 < β < 1 − Np . In particular,
1,p

W 1,p (Ω) ↪ Lq (Ω) is compact for all q ∈ [1, ∞].

Beweis:
Let (un )n∈N be bounded in W 1,p (Ω). Then Morrey's Embedding Theorem shows that
there is a C > 0 such that

sup ∥un ∥C 0,α (Ω) ≤ C sup ∥un ∥W 1,p (Ω) < ∞


n∈N n∈N

for α ∶= 1 − Np > 0. So Theorem 9.4 implies that (un )n∈N has a subsequence that converges

in C 0,β
(Ω). This proves the rst claim. In view of

1 1
∥un − u∥Lq (Ω) ≤ ∥un − u∥L∞ (Ω) ∣Ω∣ q ≤ ∥un − u∥C 0,β (Ω) ∣Ω∣ q

for all q ∈ [1, ∞] the second claim holds as well. ◻

Notice that this has an abstract generalization: The concatenation of bounded linear
operator and a compact linear operator is a compact linear operator.
End Lec 11

9.2 Compact Embeddings into Lebesgue spaces

We now prove the (more important) statement that Sobolev's Embedding W 1,p (Ω) ↪
Lq (Ω) 1 ≤ q < p∗ . This is the Rellich-Kondrachov Theorem  Rellich [19]
is compact for
proved it in the special case p = q = 2 and Kondrachov [11] extended this result to general
p, q . A modern proof may be based on a characterization of precompact subsets of Lp (Ω)
respectively L (R ). We use without proof the following characterization of such sets.
p N

Proposition 9.6. Let (X, ∥ ⋅ ∥X ) be a Banach space and V ⊂ X a subset. Then the
following statements are equivalent:

ˆ (Precompactness) Every sequence in V has a convergent subsequence in X.


ˆ (Total boundedness) For all ε>0 the set V can be covered by nitely many balls in
X with radius ε.

We are going to use the Fréchet-Kolmogorov-Riesz criterion to check the compactness


of the embedding. This criterion requires for two preliminary results about estimates for
f (⋅ + h) − f when h ∈ RN is given (and small).

54
Proposition 9.7. Let f ∈ Lp (RN ). Then f (⋅ + h) → f in Lp (RN ) as ∣h∣ → 0.

Beweis:
Choose ε > 0 and g ∈ C0∞ (RN ) such that ∥f −g∥p < ε
3 , see Theorem 4.8. Since g is uniformly
continuous with compact support, the Dominated Convergence Theorem yields a δ > 0
such that ∣h∣ ≤ δ implies ∥g(⋅ + h) − g∥p < ε
3 , whence

∥f (⋅ + h) − f ∥p ≤ ∥f (⋅ + h) − g(⋅ + h)∥p + ∥g(⋅ + h) − g∥p + ∥g − f ∥p


≤ 2∥f − g∥p + ∥g(⋅ + h) − g∥p
< ε.

Proposition 9.8. Let f ∈ W 1,p (RN ). Then we have for all h ∈ RN

∥f (⋅ + h) − f ∥p ≤ ∣h∣∥∇f ∥p .

Beweis:
By density (Lemma 4.12), it suces to prove this inequality for C0∞ (RN ). We use

1
1 1 1 p
∣f (x + h) − f (x)∣ = ∣∫ ∇f (x + th) ⋅ h dt∣ ≤ ∣h∣ ∫ ∣∇f (x + th)∣ dt ≤ ∣h∣ (∫ ∣∇f (⋅ + th)∣ dt) .
p
0 0 0

Integrating this over all x ∈ RN we get

1 1
1 p 1 p
∥f (⋅ + h) − f ∥p ≤ ∣h∣ (∫ ∫ ∣∇f (x + th)∣ dt dx) = ∣h∣ (∫
p
∥∇f ∥pp dt) = ∣h∣∥∇f ∥p .
RN 0 0

The historical background of the following result is beautifully described in the survey
paper [10].

Satz 9.9 (Fréchet (1908), Kolmogorov (1931), M. Riesz (1933)). Assume 1≤p<∞ and
N ∈ N. Then a family F is precompact in L (R )
p N
if and only if the following conditions
hold:

(i) F is bounded in Lp (RN ).


(ii) For all ε > 0 there is a δε > 0 such that ∥f (⋅+h)−f ∥p < ε for all f ∈ F, h ∈ RN , ∣h∣ ≤ δ .
(iii) For all ε>0 there is a compact subset Kε ⊂ RN such that ∥f ∥Lp (RN ∖Kε ) < ε for all
f ∈ F.

55
Beweis:
We rst show that a precompact family F satises (i),(ii),(iii). So let ε > 0. Then there
are g1 , . . . , gm ∈ Lp (RN ) such that

m
F ⊂ ⋃ Bε/3 (gi ). (9.3)
i=1

So (i) follows from


ε
∥f ∥p ≤ max ∥gi ∥p + for all f ∈ F.
i=1,...,m 3
Moreover, Proposition 9.7 provides a δε > 0 such that

ε
max ∥gi (⋅ + h) − gi ∥p < for ∣h∣ ≤ δε .
i=1,...,m 3

For any given f ∈F we may then choose (according to (9.3)) i ∈ {1, . . . , m} such that
∥f − gi ∥p < ε
3 . Hence, (ii) results from

∥f (⋅ + h) − f ∥p ≤ ∥f (⋅ + h) − gi (⋅ + h)∥p + ∥gi (⋅ + h) − gi ∥p + ∥gi − f ∥p


≤ε for all f ∈ F.

To prove (iii) let us choose φ1 , . . . , φm ∈ C0∞ (RN ) such that ∥gi − φi ∥p < 2ε
3 , see Theo-
rem 4.8, set Kε ∶= ⋃i=1 supp(φi ). Choosing gi as above for any given f ∈ F
m
we get

∥f ∥Lp (RN ∖Kε ) = ∥f − φi ∥Lp (RN ∖Kε )


≤ ∥f − gi ∥Lp (RN ∖Kε ) + ∥gi − φi ∥Lp (RN ∖Kε )
ε 2ε
≤ +
3 3
= ε.

This nishes the rst part of the proof.

Now assume that F ⊂ Lp (RN ) satises (i),(ii),(iii), let ε>0 be arbitrary. The strategy is
to approximate the family by a family of continuous functions to which we can apply the
Ascoli-Arzelà Theorem. Proposition 4.7 and (ii) show that a nonnegative ρ ∈ C0∞ (RN )
with ∥ρ∥1 = 1 and suciently small support may be chosen in such a way that the

56
37
following holds for all(!) f ∈ F:
1
p p
∥ρ ∗ f − f ∥p = (∫ ∣∫ ρ(x − y)f (y) dy − f (x)∣ dx)
RN RN
1
1 1 p p
≤ (∫ ∣∫ ρ(h) p′ ⋅ ρ(h) p (f (x − h) − f (x)) dh∣ dx)
RN RN
1
1 1 p
≤ (∫ ∥ρ p′ ∥pp′ ∥ρ p (f (x − ⋅) − f (x))∥pp dx)
RN
1 (9.4)
1
p
≤ ∥ρ∥1 (∫
p′
ρ(h) (∫ ∣f (x − h) − f (x)∣ dx) dh) p
(Fubini)
RN RN
1
p
≤1⋅ sup ∥f (⋅ + h) − f ∥p ⋅ (∫ ρ(h) dh)
h∈supp(ρ) RN
ε
≤ sup ∥f (⋅ + h) − f ∥p < for all f ∈ F.
h∈supp(ρ) 3
Having chosen ρ in such a way we consider the family of continuous(!) functions

G ∶= {(ρ ∗ f )∣K ∶ f ∈ F} ⊂ C(K)


where K ∶= Kε/3 according to (iii). Let us show that this family is pointwise bounded
and equicontinuous.

Then G is pointwise bounded due to

∥ρ ∗ f ∥C(K) ≤ ∥ρ ∗ f ∥∞ ≤ ∥ρ∥p′ ∥f ∥p ≤ ∥ρ∥p′ M for all f ∈ F.


Moreover, it is equicontinuous due to

sup ∣(ρ ∗ f )(x) − (ρ ∗ f )(y)∣ = sup ∣∫ ρ(x − z)f (z) dz − ∫ ρ(y − z)f (z) dz∣
x,y∈K,∣x−y∣<δ x,y∈K,∣x−y∣<δ RN Rn

≤ sup ∫ ∣ρ(x − z)∣∣f (z) − f (y − x + z)∣ dz


x,y∈K,∣x−y∣<δ RN

≤ sup ∥ρ(x − ⋅)∥p′ ∥f − f (y − x + ⋅)∥p


x,y∈K,∣x−y∣<δ

= ∥ρ∥p′ sup ∥f − f (y − x + ⋅)∥p


x,y∈K,∣x−y∣<δ

= o(1) as δ→0 uniformly w.r.t. f ∈ F.


So the Ascoli-Arzelà Theorem shows that G is precompact in C(K). This implies that
for ε̃ ∶= ε
3∣K∣1/p
there are functions g1 , . . . , gm ∈ C(K) such that
m
G ⊂ ⋃ {h ∈ C(K) ∶ ∥h − gi ∥C(K) < ε̃}.
i=1
37
Notice the subtle dierence: In Proposition 4.7 we showed ∥ρ ∗ f − f ∥p can be arbitrarily small for any
given f ∈ L (R ). This is, however, not sucient to conclude because we need a uniform approxi-
p N

mation property for all f ∈ F. So we need  ∃ρ∀f instead of  ∀f ∃ρ.

57
Extending the functions gi trivially to RN , we obtain for all f ∈F

min ∥f − gi ∥Lp (RN ) = min (∥f ∥Lp (RN ∖K) + ∥f − gi ∥Lp (K) )
i=1,...,m i=1,...,m
ε
≤ + min (∥f − ρ ∗ f ∥Lp (K) + ∥ (ρ ∗ f )∣K −gi ∥Lp (K) )
3 i=1,...,m ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
∈G
ε ε
≤ + + ε̃ ⋅ ∥1∥Lp (K)
3 3
= ε.

Hence
m
F ⊂ ⋃ {h ∈ Lp (RN ) ∶ ∥h − gi ∥Lp (RN ) < ε},
i=1

which is all we had to show. ◻

Satz 9.10 (Rellich-Kondrachov Theorem) . Ω ⊂ RN


Let be a bounded Lipschitz domain
Np
and 1 ≤ p < N. Then the embedding W 1,p (Ω) ↪ Lq (Ω) is compact for 1 ≤ q <
N −p .

Beweis:
The rst and main step is to prove the compactness of W 1,p (Ω) ↪ Lp (Ω). We have to
show that every bounded sequence (un )n∈N in W 1,p
(Ω) has a subsequence that converges
in Lp (Ω). To this end we show that

F ∶= {fn ∶ n ∈ N} where fn ∶= E(un )χ

is precompact provided that E ∶ W 1,p (Ω) → W 1,p (RN ) denotes Stein's Extension opera-
tor and χ∈ C0∞ (RN ) satises χ(x) = 1 for x ∈ Ω, set K ∶= supp(χ).

We show that (i),(ii),(iii) from Theorem 9.9 are satised:

(i) We have

∥fn ∥p = ∥(Eun )χ∥p ≤ ∥χ∥∞ ∥Eun ∥p ≤ ∥χ∥∞ ∥E∥∥un ∥1,p ≤ C

(ii) Proposition 9.8 gives

∥fn (⋅ + h) − fn ∥p = ∥((Eun )χ)(⋅ + h) − (Eun )χ∥p


≤ ∣h∣∥∇((Eun )χ)∥p
≤ ∣h∣∥Eun χ∥1,p
≤ ∣h∣∥Eun ∥1,p ∥χ∥1,∞ (Product rule)

≤ C∣h∣.

(iii) This follows from ∥fn ∥Lp (RN ∖K) = 0.

58
So Theorem 9.9 shows that F is precompact in Lp (RN ) and hence fn → f in Lp (RN )
after passing to a subsequence. In particular, for u ∶= f 1Ω ,

∥un − u∥Lp (Ω) = ∥fn − f ∥Lp (Ω) ≤ ∥fn − f ∥Lp (RN ) → 0 as n → ∞.

This proves that W 1,p (Ω) ↪ Lp (Ω) is compact.

To treat general exponents 1 ≤ q < p∗ we use Lyapunov's Inequality and choose θ ∈ (0, 1)
according to
1
q = θ
1 + 1−θ
p∗ . Then

∥un − u∥Lq (Ω) ≤ ∥un − u∥θL1 (Ω) ∥un − u∥1−θ


Lp∗ (Ω)
θ(p−1)
≤ ∣Ω∣ p ∥un − u∥θLp (Ω) ⋅ C∥un − u∥1−θ
W 1,p (Ω)

≤ C ′ ∥un − u∥θLp (Ω) .

Since θ is bigger than zero (here we use q < p∗ ) we nd un → u is Lq (Ω). ◻

We now comment on why the compactness is important. The short answer is that it
shows that the solution theory of elliptic boundary value problems can be reduced to the
solution theory for linear problems of the form

(I − K)u = f, u ∈ H01 (Ω)

where f ∈ L2 (Ω) and K ∶ L2 (Ω) → L2 (Ω) is compact. The solution theory for such equa-
tion is well-known; it is governed by Fredholm's Alternative. Without the compactness
assumption this theory breaks down.

To see why these equations appear consider

(−∆ + 1)u + c(x)u = f (x), u ∈ H01 (Ω).

Using the solution operator


38
(−∆+1)−1 ∶ L2 (Ω) → H01 (Ω) from Corollary 3.2 we obtain
the equivalent problem

u + (−∆ + 1)−1 (cu) = (−∆ + 1)−1 f, u ∈ H01 (Ω).

Assuming c ∈ L∞ (Ω) this equation of the form above with

K ∶ H01 (Ω) → H01 (Ω), ϕ ↦ (−∆ + 1)−1 (c ⋅ ιϕ).

This is a compact (and self-adjoint) operator as a concatenation of the bounded operators


(−∆ + 1)−1 ∶ L2 (Ω) → H01 (Ω), L2 (Ω) ∋ ϕ ↦ cϕ ∈ L2 (Ω) and the compact Embedding
operator ι ∶ H0 (Ω) → L (Ω).
1 2
End Lec 12

38
It maps f ∈ L2 (Ω) to the unique H01 (Ω)-solution of −∆u + u = f . It is therefore a kind of inverse (a
right inverse) of the dierential operator −∆ + 1 in a weak sense.

59
10 Poincaré's Inequality and Applications

In this section we want to remove some insuciency in our analysis of the boundary
value problem
−∆u + c(x)u = f (x) in Ω, u ∈ H01 (Ω).
The very important case c≡0 was not covered by this discussion because our approach
required c(x) to be positive so that the bilinear form

(u, v) ↦ ∫ ∇u ⋅ ∇v + c(x)uv dx (10.1)


is bounded and coercive on H01 (Ω). Being given these properties we deduced the existence
of solutions to this boundary value problem from the Lax-Milgram Theorem resp. Riesz'
Representation Theorem. On the other hand, it is known from Classical PDE Theory,
notably Perron's method, that problems of the form −∆u = f are equally well-behaved,
at least for continuous right hand sides and bounded domains Ω ⊂ RN . So the question
is immediate whether the weak solution approach using Sobolev spaces may be rened
to cover this case is well. This is the topic we want to discuss here.

According to the above, we may concentrate on weakest possible conditions on c that


make the bilinear form (10.1) coercive. Working in the space H (Ω)
1
we cannot do much
about the case c≡0 because the inequality

1/2
(∫ ∣∇u∣2 dx) ≥ α∥u∥H 1 (Ω) (u ∈ H 1 (Ω))

cannot hold for any positive α. In fact, nontrivial constant functions belong to H 1 (Ω)
and give zero on the left and something positive on the right, contradiction! Here we used
that Ω is a bounded domain. But it turns out that the estimate

1/2
(∫ ∣∇u∣2 dx) ≥ α∥u∥H 1 (Ω) (u ∈ H01 (Ω))

is true for some positive α. This will be a consequence of Poincaré's inequality to be


proved below. As a consequence, the counterexample of constant functions does not
work any more and so the only constant function belonging to H01 (Ω) is the trivial one.
In particular this tells us that H0 (Ω) is a strict subspace of H (Ω).
1 1

Such an inequality was used in the proof of Sobolev's Embedding Theorem on RN where
H (R ) =
1 N
H01 (RN ). Using the Mean Value Theorem we expressed u(x) in terms of
its derivatives only and estimated the integrals using Hölder's Inequality. This worked
because all elements of the dense subspace C0∞ (RN ) vanish at innity. We will see below
that the same sort of idea works in a general bounded domain Ω as long as  u vanishes
somewhere in Ω.

60
Satz 10.1 (Poincaré (1890), Friedrichs (1928)) . Assume Ω ⊂ {x ∈ RN ∶ a < x ⋅ v < b} for
some v ∈ R , ∣v∣ = 1.
N
Then, we have for 1≤p<∞ and u ∈ W01,p (Ω)
p
∥u∥p ≤ (b − a)∥∂v u∥p .
2

∞ 1,p
Beweis. Since C0 (Ω) is dense in W0 (Ω) (by denition), it suces to prove this estimate

for u ∈ C0 (Ω). Choose x0 ∈ R such that x0 ⋅ v =
N a+b
2 , so

b−a
∣(x − x0 ) ⋅ v∣ ≤ for all x ∈ Ω.
2
39
In the case p>1 we use the following identity :

∂v ((⋅ − x0 ) ⋅ v∣u∣p ) = ∣u∣p + p(⋅ − x0 ) ⋅ v∣u∣p−2 u∂v u in Ω.

Integrating this over Ω gives

0 = ∫ ∂v ((⋅ − x0 ) ⋅ v∣u∣p ) dx = ∫ ∣u∣p + p(⋅ − x0 ) ⋅ v∣u∣p−2 ∂v u dx.


Ω Ω

Hence,

∥u∥pp ≤ p ∫ ∣(x − x0 ) ⋅ v∣∣u∣p−1 ∣∂v u∣ dx



p
≤ (b − a) ∫ ∣u∣p−1 ∣∂v u∣ dx
2 Ω
p
≤ (b − a)∥u∥p−1
p ∥∂v u∥p .
2
This gives the claim in the case p > 1. The claim for p=1 follows from the Dominated
Convergence Theorem as p ↘ 1.

Given this result we dene

⟨u, v⟩H 1 (Ω) ∶= ∫ ∇u ⋅ ∇v dx,


0
√Ω
∥u∥H 1 (Ω) ∶= ⟨u, u⟩H 1 (Ω) ,
0 0
1/p
∥u∥W 1,p (Ω) ∶= (∫ ∣∇u∣p dx)
0 Ω

Korollar 10.2. Assume that Ω ⊂ RN is a bounded domain and 1 ≤ p < ∞. Then ∥⋅∥W01,p (Ω)
is a norm on W01,p (Ω) that is equivalent to ∥ ⋅ ∥W 1,p (Ω) .
39
This identity holds in the weak sense. It is not immediate, but rather follows by approximation of
t ↦ ∣t∣p by smooth versions such as t ↦ (t2 + ε2 )p/2 − εp .

61
Beweis:
We only show

β∥u∥W 1,p (Ω) ≥ ∥u∥W 1,p (Ω) ≥ α∥u∥W 1,p (Ω) (u ∈ W01,p (Ω))
0

for some α, β > 0. In fact we can choose β=1 because of

∥u∥p = ∫ ∣∇u∣p dx ≤ ∫ ∣∇u∣p dx + ∫ ∣u∣p dx = ∥u∥pW 1,p (Ω)


W01,p (Ω) Ω Ω Ω

The nontrivial opposite bound is a consequence of Poincaré's Inequality. To see this


choose a, b ∈ R and v as in Theorem 10.1. Then

p p p p
∥u∥pW 1,p (Ω) = ∥u∥pLp (Ω) + ∥∣∇u∣∥pLp (Ω) ≤ ∣ (b − a)∣ ∥∂v u∥pLp (Ω) ≤ ∣ (b − a)∣ ∥∇u∥pLp (Ω)
2 2
We conclude that one possible choice is

p −p 1
p
α ∶= (( (b − a)) + 1) .
2

Attention: ∥ ⋅ ∥W 1,p (Ω) is not a norm on W 1,p (Ω), only on W 1,p (Ω). Given that the norms
0
are equivalent on W01,p (Ω), we know that this space is a Banach space when equipped
with this new norm. The quantity

∥u∥Lp (Ω)
CP (Ω, p) ∶= sup
u∈W01,p (Ω)∖{0} ∥∣∇u∣∥Lp (Ω)

is called the Poincaré constant. It is particularly important in the case p = 2.

10.1 Applications to boundary value problems

Let's draw the consequences for our boundary value problem (3.3), which was given by

−∆u(x) + c(x)u(x) = f (x) (x ∈ Ω), u(x) = 0 (x ∈ ∂Ω).

Korollar
2N
10.3. Ω ⊂ RN , N ≥ 3 be a bounded Lipschitz domain and assume f ∈
Let
N
L N +2 (Ω), c ∈ L (Ω) with c ≥ 0 almost everywhere in Ω. Then (3.3) has a unique weak
2

solution u ∈ H0 (Ω) that satises


1

∥u∥H 1 (Ω) ≤ CS (2)∥f ∥ 2N .


0 N +2

62
Beweis:
We recall that we want to solve a(u, v) = l(v) for all v ∈ H01 (Ω) where

a(u, v) ∶= ∫ ∇u(x) ⋅ ∇v(x) + c(x)u(x)v(x) dx,


l(v) ∶= ∫ f (x)v(x) dx.


Again we need to check the assumptions of the Lax-Milgram-Lemma. We mainly repeat


our earlier analysis, but now we work on the Hilbert space (H01 (Ω), ⟨⋅, ⋅⟩H 1 (Ω) ) instead of
0
(H01 (Ω), ⟨⋅, ⋅⟩H 1 (Ω) ).

40
The boundedness of a follows from

∣a(u, v)∣ ≤ ∫ ∣∇u(x)∣∣∇v(x)∣ + ∣c(x)∣∣u(x)∣∣v(x)∣ dx



≤ ∥∣∇u∣∥2 ∥∣∇v∣∥2 + ∥c∥ N ∥u∥ 2N ∥v∥ 2N
2 N −2 N −2

≤ ∥∣∇u∣∥2 ∥∣∇v∣∥2 + ∥c∥ N CS (2) ∥∣∇u∣∥2 ∥∣∇v∣∥2


2
2

= (1 + ∥c∥ N CS (2) )∥u∥H 1 (Ω) ∥v∥H 1 (Ω)


2
2 0 0

The linear functional l is bounded because of

∣l(v)∣ ≤ ∥f v∥1 ≤ ∥f ∥ 2N ∥v∥ 2N ≤ ∥f ∥ 2N CS (2)∥∣∇v∣∥2 = CS (2)∥f ∥ 2N ∥v∥H 1 (Ω) .


N +2 N −2 N +2 N +2 0

We are left with checking the coercivity of a. We have

a(u, u) = ∫ ∣∇u(x)∣2 + c(x)∣u(x)∣2 dx ≥ ∥∣∇u∣∥2 ∥2 ≥ ∥u∥2H 1 (Ω) .


Ω 0

So the Lax-Milgram Lemma gives the claim. ◻

The improvement is that our earlier version required c(x) ≥ µ > 0 for some µ > 0. Even
this can be further improved to c(x) > −λ1 (Ω) a.e. where λ1 (Ω) denotes the smallest
positive eigenvalue of the Dirichlet-Laplacian. Notice that Corollary 10.3 estimates the
solution in the norm ∥ ⋅ ∥H 1 (Ω) , which is dierent from the H 1 (Ω)-norm that we used
0
earlier.
End Lec 13

40
If helpful (and c ∈ L∞ (Ω)), one may as well use

2
∫ ∣c(x)∣∣u(x)∣∣v(x)∣ dx ≤ ∥c∥∞ ∥u∥L2 (Ω) ∥v∥L2 (Ω) ≤ CP (Ω, 2) ∥c∥∞ ∥u∥H01 (Ω) ∥v∥H01 (Ω) .

Similarly for the estimate of l.

63
10.2 Some limitations and generalizations

Our rst remark is concerned with the failure of Poincarés Inequality in suciently thick
domains. We saw that it holds if a given domain has nite length in one direction, so
the following result may be seen as a kind of converse to that statement.

Satz 10.4. Ω ⊂ RN be open such that there is a sequece (xn ) ⊂ Ω and (rn ) ⊂ R+ such
Let
1,p
that Brn (xn ) ⊂ Ω and rn → ∞. Then Poincaré's Inequality cannot hold on W0 (Ω) for
any given p ∈ [1, ∞).

Beweis:
Choose a nontrivial test function χ ∈ C0∞ (B1 (0)) and dene un (x) ∶= χ( x−x
rn ) .
n
Then

un ∈ C0∞ (Ω) ⊂ W01,p (Ω) for all 1 ≤ p < ∞ and


N
∥un ∥Lp (Ω) ∥un ∥Lp (RN ) rnp ∥χ∥Lp (RN ) ∥χ∥Lp (Ω)
= = = rn → +∞
∥∇un ∥Lp (Ω) ∥∇un ∥Lp (RN ) N −p
∥∇χ∥Lp (Ω)
rnp
∥∇χ∥Lp (RN )

Next we turn towards more abstract versions of Poincaré's Inequality that will imply
Wirtinger's Inequality. We start with the following version.

Satz 10.5. Let Ω ⊂ RN be a bounded Lipschitz domain and let V ⊂ W 1,p (Ω) be a closed
subspace such that the only constant function in V is the trivial one. Then, for each
p ∈ (1, ∞), there is C>0 such that

∥u∥Lp (Ω) ≤ C∥∇u∥Lp (Ω) for all u ∈ V.

Beweis:
We argue by contradiction and assume that there is a bounded sequence (un ) ⊂ V such
that
∥un ∥Lp (Ω) → 1 and ∥∇un ∥Lp (Ω) → 0.
Then (un ) is bounded in W (Ω) and the Rellich-Kondrachov Theorem provides a sub-
1,p

sequence (unk ) and u ∈ L (Ω) such that unk → u in L (Ω) as k → ∞. In particu-


p p
41
lar, ∥u∥Lp (Ω) = limk→∞ ∥unk ∥Lp (Ω) = 1 and one even can ensure u ∈ V . Furthermore,
41
This fact is not obvious, and I don't see how to prove it with the methods developed so far. Here is
the reasoning that works but requires extra knowledge:
As a closed subspace of W
1,p
(Ω) the space V , equipped with the same norm, is a separable reexive
Banach space. This is true due to 1<p<∞ and proofs may be found in [Adams]. In such spaces,
bounded sequences have weakly convergent subsequence by the Banach-Alaoglu-Theorem. (This result
is very important for the Calculus of Variations!) So unk ⇀ u for some u∈V and the compactness of
the embedding ι ∶ V → Lp (Ω) implies ∥ι(unk ) − ι(u)∥Lp (Ω) in Lp (Ω).

64
∥∇unk ∥Lp (Ω) → 0 implies

0 = − lim ∫ ∂j unk ϕ dx
k→∞ Ω

= − lim ∫ unk ∂j ϕ dx
k→∞ Ω

= − ∫ u∂j ϕ dx for all ϕ ∈ C0∞ (Ω), j ∈ {1, . . . , N },


so the weak gradient of u is identically zero on Ω. From the Exercises we conclude that
u ∈ V must be constant. Our assumption on V then implies u = 0, which contradicts
∥u∥Lp (Ω) = 1. We thus conclude that our assumption was false, i.e., there is some Poin-
caré Inequality in V , which is all we had to show. ◻

We point out that the same argument yields Poincaré Inequalities of the form ∥u∥Lq (Ω) ≤
C∥∇u∥Lp (Ω) q ∈ [1, p∗ ). The classical Poincaré Inequality corre-
for general exponents
1,p
sponds to the choice V = W0 (Ω). Another important inequality, called Wirtinger's
Inequality, arises from the choice V = {u ∈ W (Ω) ∶ ∫Ω u dx = 0}. This subset is in-
1,p

deed closed because un → u ∈ W (Ω) with un ∈ V implies ∥un − u∥L1 (Ω) → 0 and in
1,p

particular

∫ u dx = n→∞
lim ∫ un dx = 0.
Ω Ω

Korollar 10.6 (Wirtinger's Inequality). Let Ω ⊂ RN be a bounded Lipschitz domain and


p ∈ (1, ∞). Then there is C>0 such that

∥u −∫− u dx∥ ≤ C∥∇u∥Lp (Ω) for all u ∈ W 1,p (Ω).


Ω Lp (Ω)

Here: ∫−Ω u dx = 1
∣Ω∣ ∫Ω u dx.

Beweis:
u ∈ W 1,p (Ω) the function v ∶= u − ∫−Ω u dx satises v ∈ V ∶= {u ∈ W 1,p (Ω) ∶ ∫Ω u dx =
For all
0} and ∇v = ∇u. Hence, denoting by C the Poincaré constant of the subspace V given
by Theorem 10.5, we get

∥u −∫− u dx∥ = ∥v∥Lp (Ω) ≤ C∥∇v∥Lp (Ω) = C∥∇u∥Lp (Ω) for all u ∈ W 1,p (Ω).
Ω Lp (Ω)

Note that we may replace ∫−Ω u dx by ∫−A u dx for any A ⊂ Ω of positive measure. We
now present some nice application of the optimal Wirtinger Inequality in one spatial
dimension. The latter reads

2π 2π 2 2π
∫ (f (s) −∫− f (t) dt) ds ≤ ∫ f ′ (s)2 ds (f ∈ H 1 (Ω)) (10.2)
0 0 0

65
We follow [12]. The setting is as follows. Let γ(t) ∶= (X(t), Y (t)) for t ∈ [0, 2π] where
X, Y ∶ R → R are smooth2π -periodic functions that are parametrized by arclength. In
particular, its length is 2π . From the Divergence Theorem we know42 that the area of
the encircled 2D-domain Ω is given by

1
∣Ω∣ = ∫ div(x, y) d(x, y)
2 Ω
1
= ∫ (x, y) ⋅ ν(x, y) dσ(x, y)
2 ∂Ω
1 2π (Y ′ (s), −X ′ (s))
= ∫ (X(s), Y (s)) ⋅ ∣(X ′ (s), Y ′ (s)∣ ds
2 0 ∣(Y ′ (s), −X ′ (s))∣ ´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¶
´¹¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¸ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹ ¹¶ =1
=ν(X(s),Y (s))
1 2π
= ∫ (X(s), Y (s)) ⋅ (Y ′ (s), −X ′ (s)) ds
2 0
1 2π
= ∫ X(s)Y ′ (s) − X ′ (s)Y (s) ds
2 0

=∫ X(s)Y ′ (s) ds.
0

Here the last equality follows from integration by parts and that (X, Y ) are 2π -periodic
so that the boundary terms vanish. We want to show that the largest area comes a
circular circumference, proving thus the isoperimetric inequality in some special case.


We set X̄ ∶= ∫−0 X(s) ds we get


∣Ω∣ = ∫ X(s)Y ′ (s) ds
0

=∫ (X(s) − X̄)Y ′ (s) ds
0
1 2π ′
≤ ∫ ((X(s) − X̄) + Y (s) ) ds
2 2
2 0
(10.2) 1 2π
′ ′
≤ ∫ (X (s) + Y (s) ) ds
2 2
2 0
1 2π
= ∫ 1 ds
2 0
= π.

Let us discuss the equality case. Since ab = 12 (a2 + b2 ) if and only if a + b = 0, we obtain

X(s) − X̄ + Y ′ (s) = 0 for almost all s ∈ [0, 2π].

42
We rather assume that Ω is such that the Divergence Theorem applies. This can be ensured if γ does
not intersect itself, i.e. γ(t) = γ(s) for s, t ∈ R if and only if s − t ∈ 2πZ.

66
Similarly, starting from the formula ∣Ω∣ = − ∫0^{2π} X′(s)Y (s) ds instead of ∣Ω∣ = ∫0^{2π} X(s)Y ′(s) ds, we infer

Y (s) − Ȳ = −X′(s) for almost all s ∈ [0, 2π], where Ȳ ∶= ∫−0^{2π} Y (s) ds.

We thus obtain that X̃ ∶= X − X̄, Ỹ ∶= Y − Ȳ satisfy

X̃′ = −Ỹ , Ỹ ′ = X̃ and hence X̃″ = −X̃, Ỹ ″ = −Ỹ .

So there are α, β ∈ R such that

(X(s), Y (s)) = (X̄, Ȳ ) + (X̃(s), Ỹ (s))
             = (X̄, Ȳ ) + (α sin(s) + β cos(s), β sin(s) − α cos(s))
             = (X̄, Ȳ ) + √(α^2 + β^2) (sin(s + s0), − cos(s + s0)),

where cos(s0) = α/√(α^2 + β^2) and sin(s0) = β/√(α^2 + β^2). Since (X, Y ) is parametrized by arclength we even know α^2 + β^2 = 1. We thus conclude

(X(s), Y (s)) = (X̄, Ȳ ) + (sin(s + s0), − cos(s + s0)) for some s0 ∈ R.

So equality can only hold if (X, Y ) describes a circle of radius 1, and for such a circle equality does hold. The conclusion is that among all closed curves of length 2π the circle encloses the largest area, namely π.
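The inequality can also be observed numerically. The following sketch (the sample curves, the resolution and the use of NumPy are illustrative assumptions, not part of the argument) computes the length and the enclosed area of a densely sampled closed curve, rescales the curve to length 2π (areas then scale with the square of the length factor) and reports the resulting area: the circle gives π, other curves give strictly less.

```python
import numpy as np

def area_at_length_two_pi(x, y, n=20000):
    """Sample the closed curve (x(t), y(t)), t in [0, 2*pi), compute its length L and
    enclosed area A (shoelace formula), and return the area after rescaling to length 2*pi."""
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    X, Y = x(t), y(t)
    dX = np.diff(np.append(X, X[0]))
    dY = np.diff(np.append(Y, Y[0]))
    L = np.hypot(dX, dY).sum()                                            # polygonal length
    A = 0.5 * abs(np.dot(X, np.roll(Y, -1)) - np.dot(Y, np.roll(X, -1)))  # shoelace area
    return (2 * np.pi / L) ** 2 * A          # area of the curve rescaled to length 2*pi

circle = area_at_length_two_pi(np.cos, np.sin)
ellipse = area_at_length_two_pi(lambda t: 2.0 * np.cos(t), lambda t: 0.5 * np.sin(t))
print(circle, ellipse)   # ~3.14159 for the circle, noticeably smaller for the ellipse
```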
End Lec 14

11 Trace Theorem

In this section we are going to talk about traces of functions belonging to Sobolev spaces
W 1,p (Ω) where 1 ≤ p < ∞ and Ω ⊂ RN is a bounded Lipschitz domain. As before,
the discussion extends to higher order Sobolev spaces W k,p (Ω) (as in Section ??) using
∂1 u, . . . , ∂N u ∈ W^{k−1,p}(Ω) for all u ∈ W^{k,p}(Ω). A trace is nothing but a reasonable generalization of the restriction of a given function to a subset Γ ⊂ Ω̄ of lower dimension. For the discussion of elliptic boundary value problems the particular choice Γ = ∂Ω is the most important one. For continuous functions u ∈ C(Ω̄) the definition of u∣∂Ω is evident. Accordingly, for p > N and u ∈ W^{1,p}(Ω) ⊂ C^{0,α}(Ω̄), α = 1 − N/p, there is no problem either because the trace of u is simply defined as the restriction of the C^{0,α}(Ω̄)-representative of u to the boundary43 . But this is not enough, because in the

43
In other words, denoting the trace operator by γ , γu = ũ∣∂Ω where ũ ∈ C 0,α (Ω) is uniquely determined
by ũ = u almost everywhere

most important case p = 2 this restricts all considerations to the case N = 1. In particular,
no PDE theory can be built on that. So the task is to nd a meaning of a trace for
functions u ∈ W 1,p (Ω) where 1 ≤ p ≤ N.

Definition 11.1. Let Ω ⊂ RN be a bounded domain and 1 ≤ p ≤ N. Then a bounded linear operator γ ∶ W^{1,p}(Ω) → Lq (∂Ω) is called a trace operator if γu = u∣∂Ω for all u ∈ C(Ω̄) ∩ W^{1,p}(Ω). The function γu is called the trace of u onto ∂Ω.

Our aim is to show that bounded Lipschitz domains admit such trace operators. In the
proof we will need the following technical fact taken from [9, Lemma 1.5.1.9].

Proposition 11.2. Let Ω ⊂ RN be a bounded Lipschitz domain with unit outer normal
vector eld ν ∶ ∂Ω → R
N
and surface measure σ . Then there is a smooth vector eld
F ∈ C ∞ (Ω; RN ) such that F (x) ⋅ ν(x) ≥ 1 for σ -almost all x ∈ ∂Ω and there is t∗ > 0 such
c ∗ ∗
that x + tF (x) ∈ Ω for 0 < t < t , x + tF (x) ∈ Ω for −t < t < 0 for all x ∈ ∂Ω.

The idea of the proof is to dene F locally, i.e., within suciently small but nitely
many neighbourhoods of boundary pieces, as a mollied (smoothened) version of the
unit normal vector eld ν ∶ ∂Ω → RN itself. If I have time: see the Appendix for a proof.
It is particularly important that F is dened and smooth on Ω and not only on ∂Ω.

Satz 11.3. Let Ω ⊂ RN be a bounded Lipschitz domain and 1 ≤ p < N. Then there is a trace operator γ ∶ W^{1,p}(Ω) → Lq (∂Ω) for 1 ≤ q ≤ (N−1)p/(N−p). It is compact provided that 1 ≤ q < (N−1)p/(N−p).

Beweis:
In view of Theorem 4.11 we first prove that

∥γu∥Lq (∂Ω) = ∥u∥Lq (∂Ω) ≤ C∥u∥W 1,p (Ω) for all u ∈ C0∞ (RN ).

This turns out to be the main step of the proof. We focus on q > 1; the general case then
follows from the above inequality for all q>1 and the Dominated Convergence Theorem
as q ↘ 1. The Divergence Theorem and nally Sobolev's Embedding Theorem give

∥γu∥qLq (∂Ω) = ∫ ∣u∣q dσ


∂Ω

≤∫ ∣u∣q F ⋅ νdσ
∂Ω

= ∫ div(∣u∣q F ) dx

= ∫ (q∣u∣q−2 u⟨∇u, F ⟩ + ∣u∣q div(F )) dx


≤ C∥F ∥C 1 (Ω;RN ) ∫ (q∣u∣q−1 ∣∇u∣ + ∣u∣q ) dx

≤ C∥F ∥C 1 (Ω;RN ) (q ∥u∥^{q−1}_{(q−1)p/(p−1)} ∥∇u∥p + ∥u∥^q_q)

≤ C∥F ∥C 1 (Ω;RN ) ∥u∥^q_{W 1,p (Ω)} .

This allows us to define the trace operator γ ∶ W^{1,p}(Ω) → Lq (∂Ω) by density:

γu ∶= lim ũ∣∂Ω ,
∥ũ−u∥W 1,p (Ω) →0,ũ∈C0∞ (RN )

see Exercise 4 on Exercise sheet 1.

It remains to prove the compactness statement. This is a consequence of the estimate

∥γu∥^q_{Lq (∂Ω)} ≤ C∥F ∥C 1 (Ω;RN ) (q ∥u∥^{q−1}_{(q−1)p/(p−1)} ∥∇u∥p + ∥u∥^q_q)

that we have proved above. Indeed, if (un )n∈N is a bounded sequence in W^{1,p}(Ω), then the Rellich-Kondrachov Theorem provides a subsequence, again denoted by (un )n∈N for simplicity, that converges in L^{(q−1)p/(p−1)}(Ω) and in Lq (Ω). Here we used (q−1)p/(p−1) < Np/(N−p), which follows from q < (N−1)p/(N−p). Hence,

∥γ(un ) − γ(um )∥Lq (∂Ω) = ∥γ(un − um )∥Lq (∂Ω)
  ≤ C∥F ∥C 1 (Ω;RN ) (q ∥un − um ∥^{q−1}_{(q−1)p/(p−1)} ∥∇(un − um )∥p + ∥un − um ∥^q_q)
  → 0 as n, m → ∞,

since ∥un − um ∥_{(q−1)p/(p−1)} → 0, ∥∇(un − um )∥p remains bounded and ∥un − um ∥q → 0.

So (γ(un )) is a Cauchy sequence and thus converges. This proves the compactness. ◻

Formally, this result does not include the case p = N. As in the case of the Sobolev Embedding Theorem, one may instead use the results for p < N to deal with this case because of W^{1,N}(Ω) ⊂ W^{1,p}(Ω) for all p ∈ [1, N ), which is a consequence of the usual embeddings of Lebesgue spaces on bounded domains. So we see that the trace operator γ can be defined for all Sobolev spaces W^{1,p}(Ω), and hence the boundary values of any such function make sense as an Lq (∂Ω)-function. We are going to show that for u ∈ W^{1,p}(Ω) the conditions γu = 0 and u ∈ W0^{1,p}(Ω) are equivalent, so prescribing zero boundary data by requiring the trace to be zero and by requiring membership of W0^{1,p}(Ω) amounts to the same thing. One may rephrase this as ker(γ) = W0^{1,p}(Ω). To this end we first generalize the integration-by-parts rule.

Proposition 11.4 (Integration by parts). Let Ω ⊂ RN be a bounded Lipschitz domain and u ∈ W^{1,p}(Ω), v ∈ W^{1,q}(Ω) for 1 ≤ p, q < N such that 1/p + 1/q ≤ (N+1)/N. Then, for all j = 1, . . . , N,

∫Ω (∂j u) v dx = ∫∂Ω γ(u)γ(v)νj dσ − ∫Ω u ∂j v dx.

Beweis:
By Theorem 4.11 we choose un , vn ∈ C0∞ (RN ) such that un → u in W^{1,p}(Ω) and vn → v
in W 1,q
(Ω). The classical integration-by-parts rule gives

∫ ∂j un vn dx = ∫ un vn νj dσ − ∫ un ∂j vn dx
Ω ∂Ω Ω

=∫ γ(un )γ(vn )νj dσ − ∫ un ∂j vn dx


∂Ω Ω

Since un → u in W 1,p (Ω) and vn → v in W 1,q (Ω) we have

γ(un ) → γ(u) in L^{(N−1)p/(N−p)}(∂Ω),    γ(vn ) → γ(v) in L^{(N−1)q/(N−q)}(∂Ω).

Moreover,

∂j un → ∂j u in Lp (Ω),    un → u in L^{Np/(N−p)}(Ω),
∂j vn → ∂j v in Lq (Ω),    vn → v in L^{Nq/(N−q)}(Ω).

Next define r ∈ [1, ∞] via 1/r ∶= 1 + 1/N − 1/p − 1/q, which is possible in view of 1/p + 1/q ≤ (N+1)/N and 1/p + 1/q > 2/N > 1/N. Then

∣∫ ∂j un vn dx − ∫ ∂j uv dx∣ ≤ ∫ ∣∂j un − ∂j u∣∣vn ∣ + ∣∂j u∣∣vn − v∣ dx


Ω Ω Ω
≤ ∥∂j un − ∂j u∥p ∥vn ∥ N q ∥1∥r + ∥∂j u∥p ∥vn − v∥ N q ∥1∥r
N −q N −q

→0 (n → ∞),

∣∫ un ∂j vn dx − ∫ u∂j v dx∣ ≤ ∫ ∣∂j vn − ∂j v∣∣un ∣ + ∣∂j v∣∣un − u∣ dx


Ω Ω Ω
≤ ∥∂j vn − ∂j v∥q ∥un ∥ N p ∥1∥r + ∥∂j v∥p ∥un − u∥ N p ∥1∥r
N −p N −p

→0 (n → ∞).

Moreover,

∣∫ γ(un )γ(vn )νj dσ − ∫ γ(u)γ(v)νj dσ∣


∂Ω ∂Ω

≤∫ ∣γ(un − u)∣∣γ(vn )∣ + ∣γ(u)∣∣γ(vn − v)∣ dσ


∂Ω
≤ ∥γ(un − u)∥ (N −1)p ∥γ(vn )∥ (N −1)q + ∥γ(u)∥ (N −1)p ∥γ(vn − v)∥ (N −1)p
L N −p (∂Ω) L N −q (∂Ω) L N −p (∂Ω) L N −p (∂Ω)

≤ C(∥un − u∥W 1,p (Ω) ∥vn ∥W 1,q (Ω) + ∥vn − v∥W 1,q (Ω) ∥u∥W 1,p (Ω) )

→0 (n → ∞).

We thus conclude

∫ ∂j uv dx = n→∞
lim ∫ ∂j un vn dx
Ω Ω

= lim ∫ γ(un )γ(vn )νj dσ − lim ∫ un ∂j vn dx


n→∞ ∂Ω n→∞ Ω

=∫ γ(u)γ(v)νj dσ − ∫ u∂j v dx
∂Ω Ω

End Lec 15

Proposition 11.5. Set Ω ⊂ RN be a bounded Lipschitz domain, 1 ≤ p < ∞. Then



γ(C (Ω)) is dense in Lp (∂Ω).

Beweis:
Let v ∈ Lp (∂Ω) and ε > 0. Then choose w ∈ C(∂Ω) such that ∥v−w∥Lp (∂Ω) < ε. This can be
justied as in Proposition 4.6, replacing the Lebesgue measure by the surface measure σ .
(It satises a regularity property analogous to the one from Lemma 4.5.) To approximate
w we use Tietze's Extension Theorem, see Theorem 13.6 in the appendix. It provides a
continuous function W ∈ C(RN ) such that W ∣∂Ω = w. Multiplying this function with a cutoff function having compact support which is identically 1 on ∂Ω, we may w.l.o.g. assume that W has compact support. Finally, define Vε ∶= ρε ∗ W. Then Vε ∈ C ∞ (RN ) ⊂ C ∞ (Ω̄) and γ(Vε ) = Vε ∣∂Ω , γ(W ) = W ∣∂Ω = w imply

∥γ(Vε ) − v∥Lp (∂Ω) ≤ ∥Vε − W ∥Lp (∂Ω) + ∥w − v∥Lp (∂Ω)


1
≤ sup ∣(ρε ∗ W )(x) − W (x)∣ ⋅ ∣∂Ω∣ p + ε
x∈∂Ω
1
≤ sup ∫ ρε (x − y)∣W (y) − W (x)∣ dy ⋅ ∣∂Ω∣ p + ε
x∈∂Ω RN
1
≤ sup ∣W (y) − W (x)∣ ⋅ ∣∂Ω∣ p + ε
x∈∂Ω,∣y−x∣≤ε

→0 as ε → 0.

Here we used that W is uniformly continuous (why?). ◻

Lemma 11.6. Let Ω ⊂ RN be a bounded Lipschitz domain, 1≤p<∞ and u ∈ W 1,p (Ω).
Then the following statements are equivalent:

(i) u ∈ W01,p (Ω).


(ii) u ∈ ker(γ), i.e., γu = 0.

(iii) The trivial extension U of u belongs to W 1,p (RN ) with

∂j U = ∂j u ⋅ 1Ω for j = 1, . . . , N.

(iv) There is C > 0 such that ∣∫Ω u ∂j ϕ dx∣ ≤ C∥ϕ∥Lp′ (Ω) for all ϕ ∈ C0∞ (RN ) and all j = 1, . . . , N.

Beweis:
(i) → (ii) That's trivial: If (un ) ⊂ C0∞ (Ω) satises un → u in W 1,p (Ω), then γ(un ) =
un ∣∂Ω = 0 and the Trace Theorem (for q ∶= p) imply

∥γu∥Lp (∂Ω) = ∥γ(u − un )∥Lp (∂Ω) ≤ C∥u − un ∥W 1,p (Ω) → 0,

hence γu = 0.

(ii)→ (iii) For all ϕ ∈ C0∞ (RN ) we have by Proposition 11.4

∫RN U ∂j ϕ dx = ∫Ω u ∂j ϕ dx = ∫∂Ω γ(u) ϕ νj dσ − ∫Ω (∂j u)ϕ dx = − ∫RN (∂j u ⋅ 1Ω )ϕ dx,

where we used γ(u) = 0. This shows that U has a j-th weak derivative on RN given by ∂j U = ∂j u ⋅ 1Ω . In particular, U ∈ W^{1,p}(RN ).

(iii)→(i) We choose the vector eld F ∈ C ∞ (Ω; RN ) as in Proposition 11.2. Let U ∈


W 1,p
(R )
N
be the trivial extension of u and set Uε (x) ∶= U (x + εF (x)). As in the proof
of Proposition 9.7 one nds

∥Uε − U ∥W 1,p (Ω) → 0 as ε → 0.

Then define uε ∈ C ∞ (RN ) via

uε (x) ∶= (ρδε ∗ (Uε ⋅ 1Ω ))(x) = ∫Ω ρδε (x − y)Uε (y) dy.

We can choose δε > 0 so small that uε vanishes in a neighbourhood of ∂Ω. Indeed, otherwise there would be sequences (xn ), (yn ) such that xn → x ∈ ∂Ω with ∣xn − yn ∣ ≤ 1/n, yn ∈ Ω, yn + εF (yn ) ∈ Ω (so that (Uε ⋅ 1Ω )(yn ) ≠ 0). But this implies x ∈ ∂Ω and x + εF (x) ∈ Ω̄, which is impossible by Proposition 11.2. Accordingly, supp(uε ) ∩ ∂Ω = ∅, hence

uε ⋅ 1Ω ∈ C0∞ (Ω).

Additionally, shrinking δε further if necessary, we may assume

∥uε − Uε ∥W 1,p (RN ) ≤ ε

see Corollary 4.13. We thus conclude

∥uε ⋅ 1Ω − u∥W 1,p (Ω) ≤ ∥uε − U ∥W 1,p (Ω) ≤ ∥uε − Uε ∥W 1,p (RN ) + ∥Uε − U ∥W 1,p (Ω) → 0 as ε → 0.

This proves u ∈ W01,p (Ω).

(iii) → (iv) We have for all ϕ ∈ C0∞ (RN )

∣∫ u∂j ϕ dx∣ = ∣∫ U ∂j ϕ dx∣


Ω RN

= ∣∫ (∂j U )ϕ dx∣
RN

= ∣∫ (∂j u1Ω )ϕ dx∣


RN

≤ ∫ ∣∂j u∣∣ϕ∣ dx

≤ ∥∂j u∥Lp (Ω) ∥ϕ∥Lp′ (Ω) .

(iv) → (ii) Integration by parts gives for all ϕ ∈ C0∞ (RN ) ⊂ C ∞ (Ω)

C∥ϕ∥Lp′ (Ω) ≥ ∣∫ u∂j ϕ dx∣ = ∣∫ γ(u)ϕνj dσ − ∫ ∂j uϕ dx∣


Ω ∂Ω Ω

For j ∈ N let Kj ∶= {x ∈ Ω ∶ dist(x, ∂Ω) ≥ 1/j} and denote by χj ∶ RN → [0, 1] a smooth function satisfying χj (x) = 0 for x ∈ Kj and χj (x) = 1 for x ∈ ∂Ω. Such a function exists,
see Theorem 4.3. Replacing ϕ by ϕ ⋅ χj , we thus obtain for all j ∈ N

C∥ϕχj ∥Lp′ (Ω) ≥ ∣∫∂Ω γ(u) ϕ χj νj dσ − ∫Ω ∂j u ϕχj dx∣ = ∣∫∂Ω γ(u) ϕ νj dσ − ∫Ω ∂j u ϕχj dx∣ ,

where we used χj = 1 on ∂Ω.
Sending j to innity we obtain from the Dominated Convergence Theorem

0 ≥ ∣∫ γ(u)ϕνj dσ∣ .
∂Ω

This holds for all ϕ ∈ C ∞ (Ω) and Proposition 11.5 thus implies γ(u)νj = 0 for all
j = 1, . . . , N . This gives γ(u) = 0, which is (ii). ◻

So we conclude that the functions from W0^{1,p}(Ω) are exactly the ones whose trace vanishes. One may use the trace operator to refine Poincaré's Inequality. We show that it is not necessary that the functions vanish on the whole of ∂Ω; a reasonably large piece of it is already sufficient. The following corollary thus contains the classical Poincaré Inequality from Theorem ?? as the special case Γ = ∂Ω.

Korollar 11.7. Let Ω ⊂ RN be a bounded Lipschitz domain and assume that Γ ⊂ ∂Ω has positive surface measure (σ(Γ) > 0). Then, for all p ∈ (1, ∞), there is a C > 0 such that

∥u − (1/σ(Γ)) ∫Γ γ(u) dσ∥Lp (Ω) ≤ C∥∇u∥Lp (Ω) for all u ∈ W^{1,p}(Ω).

Beweis:
This is a consequence of Theorem 10.5 applied to v ∶= u − (1/σ(Γ)) ∫Γ γ(u) dσ, which belongs to

V = {v ∈ W 1,p (Ω) ∶ ∫ γ(v) dσ = 0} .


Γ

In fact, one may check that this subspace is closed and the only constant function in V
is the trivial one because u≡c implies

0 = ∫ γ(u) dσ = c σ(Γ), whence c = 0.


Γ

(Here, one sees why σ(Γ) > 0 is required.) ◻

We sketch some application to our favourite elliptic boundary value problem where now
nontrivial boundary conditions will be allowed. Generalizing the previous approach via
the Lax-Milgram-Lemma we may now try to solve a(u, v) = l(v) where

a(u, v) ∶= ∫ ∇u(x) ⋅ ∇v(x) + c(x)u(x)v(x) dx + ∫ (γu)(x)(γv)(x) dσ(x),


Ω ∂Ω

l(v) ∶= ∫ f (x)v(x) dx + ∫ κ(x)(γv)(x) dσ(x).


Ω ∂Ω
The integrals over Ω may be analyzed as before whereas the boundary integral makes
sense thanks to the trace theorem if we assume κ to be bounded44 . The coercivity of
the bilinear form on H 1 (Ω) holds for instance if c is positive, but even c ≥ 0 works (→
Exercises). The more interesting question is which boundary value problem the solution
solves. One nds as before

−∆u + c(x)u = f (x) in Ω


in the weak sense and the contribution over Ω in the bilinear form is zero. (Reason:
the equation holds for all v ∈ H 1 (Ω), so also for all v ∈ H01 (Ω). For those functions the
integral terms are zero.) It remains to study the integrals coming from the boundary ∂Ω.
We nd

∫ ((γu)(x) − κ(x))(γv)(x) dσ(x) = 0 for all v ∈ H 1 (Ω).


∂Ω
Proposition 11.5 implies γu = κ. In other words, the solution coming from the Lax-
Milgram-Lemma is the unique weak solution to

−∆u + c(x)u = f (x) in Ω, u∣∂Ω = κ on ∂Ω.

Quite remarkable: For κ ≡ 0 we thus obtain the unique solution in H01 (Ω) with a dierent
working on H (Ω). In other words, the following two bilinear forms give
1
functional and
the same solution:

a ∶ H 1 (Ω) × H 1 (Ω) → R, (u, v) ↦ ∫ ∇u ⋅ ∇v + c(x)u(x)v(x) dx + ∫ (γu)(x)(γv)(x) dσ(x),


Ω ∂Ω
44
What is the optimal integrability condition in view of Theorem 11.3?

ã ∶ H01 (Ω) × H01 (Ω) → R, (u, v) ↦ ∫ ∇u ⋅ ∇v + c(x)u(x)v(x) dx

on H0^1 (Ω), whose Lax-Milgram solution is the unique weak solution of the Dirichlet problem −∆u + c(x)u = f (x) in Ω, u∣∂Ω = 0. Since both bilinear forms are symmetric, one can show that the Lax-Milgram Lemma provides the uniquely determined minimizers of the corresponding quadratic functionals

I ∶ H^1 (Ω) → R, u ↦ (1/2) ∫Ω ∣∇u∣^2 + c(x)u(x)^2 dx + (1/2) ∫∂Ω (γu)(x)^2 dσ(x) − ∫Ω f (x)u(x) dx − ∫∂Ω κ(x)(γu)(x) dσ(x),

I˜ ∶ H0^1 (Ω) → R, u ↦ (1/2) ∫Ω ∣∇u∣^2 + c(x)u(x)^2 dx − ∫Ω f (x)u(x) dx.

The boundary terms in I penalize large boundary values of u, but they do not force γu = 0: even for κ ≡ 0 the minimizer of I is the Robin solution, which in general has a nontrivial trace and therefore differs from the minimizer of I˜ in H0^1 (Ω). If one is interested in homogeneous Dirichlet boundary conditions it is therefore preferable to work with H0^1 (Ω), which does not involve the trace operator machinery at all; in particular, no boundary regularity of Ω is needed.
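To see the natural boundary condition at work one may discretize the weak formulation directly. The following minimal P1 finite element sketch on Ω = (0, 1) (the data c ≡ 1, f ≡ 1, κ ≡ 0, the mesh size and the dense linear solver are purely illustrative choices) assembles a(u, v) = l(v) including the boundary terms u(0)v(0) + u(1)v(1) and compares the result with the exact Robin solution u(x) = 1 − e^{−1/2} cosh(x − 1/2); the computed trace is ≈ 0.316, not 0, so the solution does not lie in H0^1 (0, 1).

```python
import numpy as np

# P1 finite elements for a(u,v) = int_0^1 u'v' + u v dx + u(0)v(0) + u(1)v(1) = int_0^1 v dx
# (i.e. c = 1, f = 1, kappa = 0), the weak form of the Robin problem
# -u'' + u = 1 in (0,1),  -u'(0) + u(0) = 0,  u'(1) + u(1) = 0.
n = 200                      # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
K_loc = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])   # local stiffness: int u'v'
M_loc = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])     # local mass: int u v
for e in range(n):
    idx = [e, e + 1]
    A[np.ix_(idx, idx)] += K_loc + M_loc
    b[idx] += h / 2.0                                       # int f*v with f = 1

A[0, 0] += 1.0               # boundary term u(0)v(0) of the bilinear form
A[n, n] += 1.0               # boundary term u(1)v(1)

u = np.linalg.solve(A, b)

exact = 1.0 - np.exp(-0.5) * np.cosh(x - 0.5)   # exact Robin solution
print(u[0], u[-1])                  # both ~0.316: the trace of the solution is not zero
print(np.abs(u - exact).max())      # small discretization error
```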
End Lec 16

12 Separability

Definition 12.1. A Banach space X is called separable if it has a countable dense subset.

We first investigate whether Lp-spaces have this property. It turns out that the case p = ∞ is different from p ∈ [1, ∞). To prove the non-separability of L∞ (Ω) we are going to use the following criterion.

Proposition 12.2. Let X be a Banach space with an uncountable set of open, non-empty
and pairwise disjoint sets. Then X is not separable.

Beweis:
Denote the uncountable set by (Ui )i∈I and assume for contradiction that M ∶= {xn ∶ n ∈ N}
is a dense subset of X . Since the sets Ui are open and non-empty and M is dense, there
is some n = n(i) ∈ N such that xn(i) ∈ Ui . On the other hand Ui ∩ Uj = ∅ for i ≠ j , so we
must have n(i) ≠ n(j) for i ≠ j. This shows that n ∶ I → N is injective, which implies
that I is countable, a contradiction. ◻

Satz 12.3. Let Ω ⊂ RN be open and non-empty.

(i) Lp (Ω) is separable provided that 1 ≤ p < ∞.



(ii) L (Ω) is not separable.

Beweis:
We rst prove (ii) using the previous proposition. For x ∈ Ω choose rx > 0 such that
Brx (x) ⊂ Ω and set
1
Ux ∶= {f ∈ L∞ (Ω) ∶ ∥f − 1Brx (x) ∥∞ < } .
2
Then (Ux )x∈Ω is an uncountable set of pairwise disjoint open and non-empty sets. The
disjointness follows from

f ∈ Ux ∩ Uy ⇒ ∥1Brx (x) − 1Bry (y) ∥∞ < 1 ⇒ Brx (x) = Bry (y) ⇒ x = y.

So L∞ (Ω) is not separable.

To prove the separability of Lp (Ω) we use Theorem 4.8 where we proved that C0∞ (Ω) is
dense. So it suces to nd a countable subset of Lp (Ω) that approximates C0∞ (Ω) with
45
respect to the norm in L (Ω).
p
We choose

P ∶= {p ⋅ 1Ω∩BM (0) ∶ p is a polynomial with rational coecients and M ∈ N} .

So consider any function ϕ ∈ C0∞ (Ω) and ε > 0. Then choose M ∈ N such that the support
of ϕ is contained in BM (0). Weierstrass' Approximation Theorem provides a polynomial
p̃ such that
∥p̃ − ϕ∥C(BM (0)) ⋅ ∣BM (0)∣^{1/p} < ε/2.

This implies

∥p̃ ⋅ 1Ω∩BM (0) − ϕ∥Lp (Ω) = ∥p̃ − ϕ∥Lp (Ω∩BM (0)) ≤ ∥p̃ − ϕ∥C(BM (0)) ⋅ ∣BM (0)∣^{1/p} < ε/2. (12.1)

Since p̃ is a polynomial, we have p̃(x) = ∑_{∣α∣≤n} ãα x^α for some n ∈ N and ãα ∈ R. Choosing aα ∈ Q sufficiently close to ãα , we obtain for p(x) ∶= ∑_{∣α∣≤n} aα x^α:

∥p̃ ⋅ 1Ω∩BM (0) − p ⋅ 1Ω∩BM (0) ∥Lp (Ω) ≤ ∥p̃ − p∥C(BM (0)) ⋅ ∣BM (0)∣^{1/p} ≤ ∑_{∣α∣≤n} ∣ãα − aα ∣ ∥x^α∥C(BM (0)) ∣BM (0)∣^{1/p} < ε/2. (12.2)
As a consequence of (12.1) and (12.2),

inf ∥q − ϕ∥Lp (Ω) ≤ ∥p ⋅ 1Ω∩BM (0) − ϕ∥Lp (Ω) < ε.


q∈P

Since ε>0 was arbitrary, the claim follows. ◻

45
This means that for each p ∈ P there is N ∈N and aα ∈ Q for multi-indices 0 ≤ ∣α∣ ≤ N such that
p(x) = ∑∣α∣≤N aα xα . This set is countable!

We draw the consequences for Sobolev spaces. We rst note that (nite) product spaces
46
Lp (Ω) × . . . × Lp (Ω) are also separable for 1 ≤ p < ∞. Dening now

Ψ ∶ W k,p (Ω) → Lp (Ω)K , u ↦ (∂ α u)0≤∣α∣≤k ,

we nd that the subspace Ψ(W k,p (Ω)) of Lp (Ω)K is closed. Moreover, Ψ is even isometric
by denition of the respective norms, in particular it is injective. We will use the following
criterion.

Proposition 12.4. Let X be a separable Banach space and ∅ ≠ M ⊂ X. Then M is


separable
47 .

Beweis:
Let {xn ∶ n ∈ N} be dense in X and x∗ ∈ M . Then dene yn,m = x∗ if B1/m (xn ) ∩ M = ∅
and yn,m ∈ B1/m (xn )∩M (arbitrary) otherwise. We claim that Y ∶= {yn,m ∶ n, m ∈ N} ⊂ M

is dense. Indeed, for all x ∈ M and N > 0 we can nd n = n(N ) ∈ N such that ∥x−xn ∥ <
1
2N .
Then we have x ∈ B1/2N (xn ) ∩ M , so there is yn,2N ∈ B1/2N (xn ) ∩ M . Hence,

1 1 1
∥x − yn,2N ∥ ≤ ∥x − xn ∥ + ∥xn − yn,2N ∥ < + = .
2N 2N N
Since x ∈ M, N ∈ N were arbitrary and yn,2N ∈ Y , which is a countable subset of M, we
infer that M is separable. ◻

Korollar 12.5. Let Ω ⊂ RN be open and non-empty, k ∈ N.


(i) W k,p (Ω) is separable provided that 1 ≤ p < ∞.
(ii) W k,∞
(Ω) is not separable.

Beweis:
In the case 1 ≤ p < ∞ the space Ψ(W k,p (Ω)) is separable as a subset of the separable space Lp (Ω)K
in view of the previous proposition. So if P̃ ⊂ Ψ(W (Ω)) is a countable dense subset,
k,p
−1
then P ∶= {Ψ (p) ∶ p ∈ P̃} is countable and dense in W (Ω).
k,p

To prove (ii) we essentially repeat the trick from the proof of Theorem 12.3 (ii). Choose
bounded open subsets Ω′ ⊂ RN −1 , I ⊂ R such that Ω′ × I ⊂ Ω and a cuto function
χ∈ C0∞ (RN ) such that χ(x) = 1 for all x ∈ Ω′ × I . W.l.o.g. 0 ∈ I . For z ∈ I we may then
choose rz > 0 such that Iz ∶= (z − rz , z + rz ) ⊂ I and
Fz (x) ∶= χ(x) ∫0^{xN} ∫0^{t_{k−1}} ⋯ ∫0^{t_1} 1Iz (s) ds dt_1 . . . dt_{k−1} , where x = (x′ , xN ) ∈ Ω.
46
K can be computed in terms of N and k
47
We do not insist on the fact that M is a subspace. Note that the only issue is that the approximating
countable set has to be a subset of M .

Then z ∈ I implies Fz ∈ W k,∞ (Ω)48 . We define

Uz ∶= {f ∈ W k,∞ (Ω) ∶ ∥f − Fz ∥W k,∞ (Ω) < 1/2} (z ∈ I).
Then (Uz )z∈I is an uncountable set of pairwise disjoint open and non-empty subsets of
W k,∞ (Ω). The disjointness follows from

f ∈ Uz1 ∩ Uz2 ⇒ ∥Fz1 − Fz2 ∥W k,∞ (Ω) < 1


⇒ ∥∂N
k
(Fz1 − Fz2 )∥L∞ (Ω) < 1
⇒ ∥∂N
k
(Fz1 − Fz2 )∥L∞ (Ω′ ×I) < 1
⇒ ∥1Iz1 − 1Iz2 ∥L∞ (I) < 1
⇒ Iz1 = Iz2
⇒ z1 = z2 .

(The restriction from Ω to Ω′ × I is made in order to get rid of χ. If Ω is known to be


bounded, then χ may be replaced by 1.) We conclude that W k,∞ (Ω) is not separable. ◻

This result and Proposition 12.4 also imply that W0k,p (Ω) and other subspaces are sepa-

rable for 1 ≤ p < ∞. Moreover, one can modify the proof in such a way that W0k,∞ (Ω) is
seen not to be separable.

13 Reexivity

Let (X, ∥ ⋅ ∥X ) be a real Banach space. Then its dual space is dened by

X ′ ∶= {ϕ ∶ X → R, ϕ is linear and bounded} .

Here, a linear functional ϕ ∶ X → R is called bounded if there is a C > 0 such that


∣ϕ(f )∣ ≤ C∥f ∥X for all f ∈ X . It is an important fact that linear functionals are bounded

if and only if they are continuous. One can show that X ′ is a Banach space when equipped
with the norm
∥ϕ∥X ′ = sup {∣ϕ(f )∣ ∶ f ∈ X, ∥f ∥X = 1} .
As an example, one may consider X = L1 (Ω) and ϕ ∶ L1 (Ω) → R, f ↦ ∫Ω f (x) dx. One may check ∥ϕ∥L1 (Ω)′ = 1. Introducing suitable weights or assumptions on Ω one may as well ensure ϕ ∈ Lp (Ω)′ for 1 ≤ p ≤ ∞. Now X ′′ ∶= (X ′ )′ is the dual space of the dual space, called its bidual.

48
Here χ ∈ C0∞ (RN ) implies that all weak partial derivatives of order ≤k are bounded on Ω.

The notion of a reflexive space has to do with the nature of X ′′. More precisely, define J ∶ X → X ′′ via (Jf )(g) ∶= g(f ) for g ∈ X ′. This is well-defined because this map is linear, satisfies J(0) = 0 and for f ∈ X ∖ {0} we have

∥Jf ∥X ′′ = sup ∣(Jf )(g)∣ = sup ∣g(f /∥f ∥X )∣ ⋅ ∥f ∥X ≤ sup ∥g∥X ′ ∥f ∥X = ∥f ∥X .


∥g∥X ′ =1 ∥g∥X ′ =1 ∥g∥X ′ =1

Hence, J is a bounded linear operator. In a course on functional analysis one shows that
J is injective (using the Hahn-Banach Theorem), but it need not be surjective.

Definition 13.1. A Banach space X is called reflexive if J ∶ X → X ′′ is surjective.

At rst sight it is entirely unclear why such a seemingly articial property should be
important and even if so, how it can be checked. To get a better feeling for this property:

ˆ Lp (Ω) is reexive if and only if 1<p<∞ (see below)

ˆ W k,p
(Ω) is reexive if and only if 1<p<∞ (see below)

ˆ Hilbert spaces are reexive (Riesz' Representation Theorem)

ˆ The sequence spaces l 1 , l ∞ , c0 are not reexive, C([0, 1]) is not reexive.

Its importance comes from the fact that in reexive Banach spaces, bounded sequences
have weakly convergent subsequences, see Corollary 13.8 below. This is the true ge-
neralization of the Bolzano-Weierstraÿ Theorem to innite-dimensional Banach spaces.
This result is the standard tool to prove the existence of minimizers of functionals in
the Calculus of Variations. In fact, the minimizers are in most cases constructed as the
weak limits of suitable bounded minimizing sequences for a given functional. The latter is
often nonlinear, but one can check that a property called weak lower-semicontinuity is
sucient. The reexivity of the space may be checked by proving the uniform convexity
of its norm.
End Lec 17

Definition 13.2 (Uniform Convexity). A normed vector space (X, ∥ ⋅ ∥X ) is called uniformly convex if

∀ε > 0 ∃δ > 0 ∀x, y ∈ X ∶ ∥x∥ = ∥y∥ = 1, ∥x − y∥ ≥ ε ⇒ ∥(x + y)/2∥ ≤ 1 − δ.
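To get a feeling for this definition one may sample the ℓp-norms on R² as a toy stand-in for Lp (Ω) (this is only a numerical illustration, and the sampling of the unit sphere is an arbitrary choice): for p = 1 two unit vectors at distance ε = 1 can still have a midpoint of norm 1, whereas for p ∈ (1, ∞) the midpoint norm stays uniformly below 1, in line with the next lemma.

```python
import numpy as np

def worst_midpoint_norm(p, eps, samples=400):
    """Largest sampled value of ||(x+y)/2||_p over unit vectors x, y in R^2
    (with respect to the l^p norm) satisfying ||x - y||_p >= eps."""
    thetas = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    pts = np.array([[np.cos(t), np.sin(t)] for t in thetas])
    pts /= np.linalg.norm(pts, ord=p, axis=1, keepdims=True)   # project onto the l^p unit sphere
    worst = 0.0
    for i in range(samples):
        d = np.linalg.norm(pts - pts[i], ord=p, axis=1)
        far = pts[d >= eps]
        if len(far):
            worst = max(worst, np.linalg.norm((far + pts[i]) / 2.0, ord=p, axis=1).max())
    return worst

for p in (1, 2, 4):
    print(p, worst_midpoint_norm(p, eps=1.0))
# p = 1: value 1.0 (not uniformly convex); p = 2, 4: strictly below 1
```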

Lemma 13.3. Let 1<p<∞ and Ω ⊂ RN . Then (Lp (Ω), ∥ ⋅ ∥p ) is uniformly convex.

The same is true for the spaces Lp (Ω)K , 1 < p < ∞, K ∈ N, and subspaces thereof. A proof
can be found in [1, p.41-45]. The proof of the following result is given in the Appendix.

Satz 13.4 (Milman-Pettis). Assume that (X, ∥ ⋅ ∥X ) is a uniformly convex Banach space.
Then X is reexive.

Korollar 13.5. Let 1<p<∞ and Ω ⊂ RN . Then (Lp (Ω), ∥ ⋅ ∥p ) and (W k,p (Ω), ∥ ⋅ ∥k,p )
are reexive Banach spaces.

Beweis:
The statement for Lp (Ω) is a direct consequence of the previous two results. Moreover,
choosing Ψ ∶ W (Ω) → Lp (Ω)K as before, one nds that Ψ(W k,p (Ω)) is a closed sub-
k,p

space of the uniformly convex Banach space L (Ω) . Hence, (Ψ(W (Ω)), ∥ ⋅ ∥Lp (Ω)K )
p K k,p
49
is uniformly convex. Since Ψ is a linear isometry , this implies that (W (Ω), ∥ ⋅ ∥k,p )
k,p

is uniformly convex and hence reexive by the Milman-Pettis Theorem. ◻

We mention that closed subspaces of reexive Banach spaces are again reexive, see [2,
Proposition 3.20]. In particular, this is true for W0k,p (Ω) for k ∈ N, 1 ≤ p < ∞. We
nally provide the main motivation why one cares about the reexivity of Banach spaces,
notably of W k,p (Ω). To this end we introduce the following notions of convergence.

Definition 13.6. Let X be a Banach space with dual space X ′.


⋆ ′
(i) (Weak- -convergence) A sequence (fk )k∈N ⊂ X is said to converge to f ∈ X ′ in the
⋆ ⋆
weak- -sense (i.e., pointwise), written fk ⇀ f , if fk (x) → f (x) as k → ∞ for all
x ∈ X.
(ii) (Weak convergence) A sequence (xk )k∈N ⊂ X is said to converge weakly to x ∈ X, written xk ⇀ x, if f (xk ) → f (x) as k → ∞ for all f ∈ X ′.

A detailed discussion about weak topologies and weak convergence is beyond the scope
of this course given that we are not interested in abstract functional analysis. Instead
some examples:

ˆ Weak convergence in nite-dimensional spaces is equivalent to norm-convergence.

ˆ Weak limits are uniquely determined if they exist.

ˆ xn → x implies xn ⇀ x, but in general not vice versa. For instance, if I ⊂ R is a non-empty bounded interval and uk (x) ∶= sin(kx), then uk ⇀ 0 in L2 (I) but uk does not converge to 0 in norm; see the numerical sketch after this list.
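Here is the announced numerical sketch for the last example, on I = (0, π) (the test functions, the midpoint-rule quadrature and the values of k are arbitrary illustrative choices): the pairings ∫I uk ϕ dx tend to 0, while ∥uk ∥L2 (I) stays at √(π/2).

```python
import numpy as np

n = 20000
x = np.linspace(0.0, np.pi, n, endpoint=False) + np.pi / (2 * n)   # midpoint rule on (0, pi)
dx = np.pi / n
tests = {"1": np.ones_like(x), "x(pi-x)": x * (np.pi - x), "exp(x)": np.exp(x)}

for k in (1, 5, 25, 125):
    uk = np.sin(k * x)
    pairings = [round(float((uk * phi).sum() * dx), 4) for phi in tests.values()]
    norm = round(float(np.sqrt((uk ** 2).sum() * dx)), 4)
    print(k, pairings, norm)
# the pairings tend to 0 (weak convergence of u_k to 0),
# while ||u_k||_2 stays at sqrt(pi/2) ~ 1.2533
```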
So weak convergence is indeed a weaker notion of convergence and thus easier to get
compared to norm convergence. This relaxation is important due to the following two
results from functional analysis, which fail completely if convergence in norm is considered
50
instead .
49
This means ∥Ψ(u)∥Lp (Ω)K = ∥u∥W k,p (Ω) for all u ∈ W k,p (Ω).
50
Recall: The unit ball of any(!) innite-dimensional Banach space contains sequences without convergent
subsequences (w.r.t. norm-convergence).

Satz 13.7 (Banach-Alaoglu). Let X be a separable Banach space. Then every bounded sequence in X ′ has a weak-⋆-convergent subsequence.

Beweis:
We mimic the proof of the Ascoli-Arzelà Theorem. Let M ∶= {xn ∶ n ∈ N} be dense in X and let (fk )k∈N be any bounded sequence in X ′, w.l.o.g. ∥fk ∥X ′ ≤ 1. One successively
denes subsequences of (fk ) such that, after suitable relabeling the sequence at each
step,

ˆ (fk (x1 )) converges,

ˆ (fk (x1 )), (fk (x2 )) converge,

ˆ etc.

This procedure yields a subsequence that converges in each point of M. Dene

f ∶ M → R, y ↦ lim fk (y).
k→∞

Then f ∶M →R is Lipschitz-continuous because of

∣f (xi ) − f (xj )∣ = ∣ lim fk (xi − xj )∣ ≤ ∥xi − xj ∥.


k→∞

Here we used ∥fk ∥X ′ ≤ 1 for all k ∈ N. Hence, by density of M, the map

F ∶ X → R, x ↦ y→x,
lim f (y)
y∈M

is a well-dened bounded linear functional (i.e., F ∈ X ′) with ∥F ∥ ≤ ∥f ∥ ≤ 1. It remains



to check fk ⇀ F .

To this end choose x∈X and ε>0 arbitrary. First take xi ∈ M such that ∥xi − x∥X ≤ ε
3.
Then we can nd k0 suciently large such that

ε
sup ∣fk (xi ) − F (xi )∣ ≤ .
k≥k0 3

But then, for all k ≥ k0 ,

∣fk (x) − F (x)∣ ≤ ∣fk (x − xi )∣ + ∣fk (xi ) − F (xi )∣ + ∣F (xi − x)∣


ε
≤ ∥fk ∥∥x − xi ∥ + + ∥F ∥∥xi − x∥
3
ε
≤ 2∥x − xi ∥ +
3
≤ ε.

This proves fk (x) → F (x) as k→∞ for any given x ∈ X, which is the claim. ◻

Korollar 13.8. Let X be a51 (separable and) reflexive Banach space. Then every bounded sequence in X has a weakly convergent subsequence.

Beweis:
Let (xn ) be a bounded sequence in X. Then (Jxn )n∈N is a bounded sequence in X ′′ = (X ′ )′. Since X ′ is separable52 , Theorem 13.7 provides a subsequence (J(xnj ))j∈N such that J(xnj ) ⇀⋆ T in (X ′ )′. This means

lim J(xnj )(f ) = T (f ) for all f ∈ X ′.


j→∞

Since X is reexive, we have T = J(x) for some x ∈ X. So we nd for any given f ∈ X′

f (xnj ) = J(xnj )(f ) → T (f ) = J(x)(f ) = f (x) (j → ∞).

In other words, xn j ⇀ x as j → ∞. ◻

Finally, let us mention that L1 (Ω), L∞ (Ω) are not reexive whenever Ω ⊂ RN is a
nontrivial open set, see [2, p.101f]. This property carries over to the Sobolev spaces
W k,1 (Ω), W k,∞ (Ω) for k ∈ N.

51
This remains true for non-separable Banach spaces, because one may then consider X̃ ∶=
span{xn ∶ n ∈ N} instead. This space is now separable by construction and reexive as a closed sub-
space of a reexive Banach space.
52
It is a general fact that Y ′ separable implies Y separable. So the separability of X ′′ ≃ X implies the separability of X ′.

End Lec 18

Notation and conventions

ˆ All sets and functions are Lebesgue-measurable

ˆ Br (x0 ) = {x ∈ RN ∶ ∣x − x0 ∣ < r} = the open ball around x0 ∈ RN with radius r>0


ˆ ∣A∣ = Lebesgue measure of a (Lebesgue-measurable) set A⊂R N

ˆ ωN ∶= ∣B1 (0)∣ the volume of the unit ball in R N

Appendix

13.1 The Riesz-Fischer Theorem

We recall the Riesz-Fischer Theorem that establishes the completeness of Lp (Ω) for
1 ≤ p ≤ ∞. As a byproduct, it gives useful additional information about subsequences of
(convergent) Cauchy sequences in Lp (Ω).

Satz 13.1 (Riesz-Fischer [6,20]). Assume that Ω ⊂ RN and 1 ≤ p ≤ ∞. Then (Lp (Ω), ∥⋅∥p )
is complete. Additionally, for any Cauchy sequence (un ) ⊂ L (Ω) there is a subsequence
p

(unk ) ⊂ (un ) and w ∈ L (Ω) such that ∣unk ∣ ≤ w and (unk ) converges pointwise almost
p

everywhere to its L (Ω)-limit.


p

Beweis:
We only prove the claim for 1 ≤ p < ∞. Let (un ) be a Cauchy sequence. Choose a
subsequence (unk ) such that

∥unk − unk+1 ∥p ≤ 2−k (k ∈ N)

Then dene

w ∶= ∣un1 ∣ + ∑ ∣unk − unk+1 ∣.
k=1
We then have
k−1
∣unk ∣ ≤ ∣un1 ∣ + ∑ ∣unj − unj+1 ∣ ≤ w for all k ∈ N.
j=1

Moreover, the Monotone Convergence Theorem implies


∥w∥p = ∥∣un1 ∣ + ∑ ∣unj − unj+1 ∣∥p
j=1

m
= lim ∥∣un1 ∣ + ∑ ∣unj − unj+1 ∣∥p
m→∞
j=1
m
≤ lim inf ∥un1 ∥p + ∑ ∥unj − unj+1 ∥p
m→∞
j=1

≤ ∥un1 ∥p + ∑ 2−k < ∞,
k=1

In particular, ∣un1 ∣ + ∑_{j=1}^∞ ∣unj − unj+1 ∣ ≤ w is finite almost everywhere. So (unj (x)) is a Cauchy sequence for almost all x ∈ Ω. Since (R, ∣⋅∣) is complete, this subsequence converges pointwise almost everywhere to some measurable function u satisfying ∣u(x)∣ ≤ w(x) almost everywhere. This proves the Additionally part of the statement.

Let's prove unk → u in Lp (Ω). The Dominated Convergence Theorem gives

lim ∥unk − u∥pp = lim ∫ ∣unk (x) − u(x)∣p dx = 0.


k→∞ k→∞ Ω

Here we used unk − u → 0 pointwise almost everywhere and ∣unk − u∣ ≤ 2w ∈ Lp (Ω). We still have to show that convergence actually holds for the full sequence, which is known to be a Cauchy sequence.

ε ε
∥unk − ul ∥p ≤ (l ≥ nk ), ∥unk − u∥p ≤ .
2 2
It follows

∥u − ul ∥p ≤ ∥u − unk ∥p + ∥unk − ul ∥p ≤ ε for all l ≥ nk ,

which is all we had to prove. ◻

13.2 Whitney's Covering Lemma

Lemma 13.2. Let Ω ⊂ RN be open and ∅ ⊊ Ω ⊊ RN . Then there are closed almost
disjoint dyadic cubes W1 , W2 , . . . with the following properties

(I) ⋃j∈N Wj = Ω,
(II) diam(Wj ) ≤ dist(Wj , Ωc ) ≤ 4 diam(Wj ) for all j ∈ N.
(III) Wi ∩ Wj ≠ ∅ implies
1
4 diam(Wi ) ≤ diam(Wj ) ≤ 4 diam(Wi ),
(IV) #{i ∈ N ∶ Wi ∩ Wj ≠ ∅} ≤ 12N for all j ∈ N.
Furthermore, for any fixed κ ∈ (0, 1/4) there are ϕ1 , ϕ2 , . . . ∈ C0∞ (RN ) such that

(V) 0 ≤ ϕj ≤ 1, ϕj (x) = 1 for x ∈ Wj and ϕj (x) = 0 for dist(x, Wj ) ≥ κ diam(Wj ).


(In particular, ϕj (x) ≠ 0 and x ∈ Wi implies Wi ∩ Wj ≠ ∅.

(VI) ∣∂ α ϕj (x)∣ ≤ Cα diam(Wj )−∣α∣ for all α ∈ NN
0 .

Beweis:
For k∈Z we dene the k -th dyadic mesh as follows:

W ∈ Sk ⇔ W = {2−k (z + w) ∶ w ∈ [0, 1]N } for some z ∈ ZN .

So elements of Sk for ∣k∣ large and k < 0 are large dyadic cubes whereas the cubes from Sk
for large k are small ones (to approximate the ne structures of Ω close to the potentially
complicated boundary). We use

Ω = ⋃_{k∈Z} Ωk where Ωk ∶= {x ∈ Ω ∶ 2^{1−k}√N < dist(x, Ωc ) ≤ 2^{2−k}√N } (13.1)

and dene the collection of all dyadic cubes as follows:

F ∶= ⋃ Fk , where Fk ∶= {W ∈ Sk ∶ W ∩ Ωk ≠ ∅}. (13.2)


k∈Z

The set F is countable as a countable union of countable sets. Then one can check

Ωk ⊂ ⋃ W ⊂ Ω. (13.3)
W ∈Sk

Due to Ω ≠ RN we can attribute to each cube W ∈ F its uniquely determined ancestor


53
cube Ŵ ∈ F, Ŵ ⊃ W that is maximal w.r.t inclusion, set

{W1 , W2 , . . .} ∶= {Ŵ ∶ W ∈ Fk }

For j ∈N we dene kj ∈ Z by Wj ∈ Fkj and the basepoint zj ∈ ZN by Wj = 2−kj (zj +


[0, 1]N ).

Proof of (I): Let x ∈ Ω. By (13.1) there is some k ∈ Z such that x ∈ Ωk . By (13.3) there is W ∈ Fk such that x ∈ Ωk ∩ W. Then diam(W ) = 2^{−k}√N < dist(x, Ωc ) by (13.1) implies W ⊂ Ω and thus
x ∈ W ⊂ Ŵ ⊂ ⋃ Wj .
j∈N


Proof of (II): Wj ∈ Fkj ⊂ Skj implies diam(Wj ) = 2−kj N . By denition of Fkj we may
choose x ∈ Wj ∩ Ωkj , whence

dist(Wj , Ωc ) ≤ dist(x, Ωc ) ≤ 22−kj N = 4 diam(Wj ).

53
This is done in order not to count subcubes as new cubes, so [0, 1] × [0, 1] should not be added to the
list of cubes if [0, 2] × [0, 2] is already there. We want to have almost disjoint cubes!

On the other hand, the triangle inequality gives

(13.1) √ √
dist(Wj , Ωc ) ≥ dist(x, Ωc ) − diam(Wj ) ≥ 21−kj N − 2−kj N = diam(Wj ).

So (II) is proved.

Proof of (III): Assume W i ∩ W j ≠ ∅. Then the triangle inequality gives

(II) (II)
diam(Wj ) ≤ dist(Wj , Ωc ) ≤ dist(Wi , Ωc ) + diam(Wi ) ≤ 5 diam(Wi ).

But the quotients of the diameters is necessarily of the form where m ∈ Z, so we 2m


conclude diam(Wj ) ≤ 4 diam(Wi ). Interchanging the roles of i, j gives the other inequality
and (III) is proved.

diam(Wi )
Proof of (IV): Assume Wi ∩ Wj ≠ ∅ . Then Wi ∈ Ski , Wj ∈ Skj implies
diam(Wj ) = 2−ki +kj
and (III) gives 2−ki +kj ∈ { 41 , 12 , 1, 2, 4}. If zi , z j are the basepoints of these cubes, we get
for all l ∈ {1, . . . , N }

2ki ((zi )l + αl ) = 2kj ((zj )l + βl ) where αl , βl ∈ {0, 1}.

Hence,

#{i ∈ N ∶ Wi ∩ Wj ≠ ∅} = #{i ∈ N ∶ 2ki −kj ((zi )l + αl ) − βl = (zj )k for some αl , βl ∈ {0, 1}, l = 1, . . . , N }
≤ (5 ⋅ 2 + 2) = 12 .
N N

Proof of (V),(VI): Choose 0<κ< 1


4 and ϕ ∈ C0∞ (RN ) such that


ϕ(x) = 1 for x ∈ [0, 1]N , ϕ(x) = 0 if dist(x, [0, 1]N ) ≥ κ N .

(You may deduce the existence of such a function from Theorem 4.3.) If Wj = 2−kj (zj +
[0, 1] ),
N
then we set
ϕj (x) ∶= ϕ(2kj x − zj ) (x ∈ RN )

Then we get ϕj (x) = 1 for x ∈ Wj as well as ϕj (x) = 0 for dist(x, Wj ) ≥ κ N 2−kj =
κ diam(Wj ). In particular,
supp(ϕj ) ⊂ ⋃ Wi
W i ∩W j ≠∅

Furthermore,
∣(∂ α ϕj )(x)∣ ≤ 2kj ∣α∣ ∥∂ α ϕ∥∞ ≤ Cα diam(Wj )−∣α∣ .

13.3 A technical fact about Lipschitz domains

Proposition 13.3 (see Proposition 11.2) . Let Ω ⊂ RN be a bounded Lipschitz domain


with unit outer normal vector eld ν ∶ ∂Ω → R
N
and surface measure σ . Then there is a

smooth vector eld F ∈ C (Ω; R ) such that F (x) ⋅ ν(x) ≥ 1 for σ -almost all x ∈ ∂Ω and
N
∗ c ∗ ∗
there is t > 0 such that x + tF (x) ∈ Ω for 0 < t < t , x + tF (x) ∈ Ω for −t < t < 0 for all
x ∈ ∂Ω.

13.4 Tietze's Extension Theorem

In the following let (X, d) be a metric space and

dist(x, A) ∶= inf d(x, a)


a∈A

measures the distance of x∈X to a given subset A ⊂ X.

Proposition 13.4. Let A ⊂ X. Then x ↦ dist(x, A) is Lipschitz-continuous with Lip-


schitz constant 1.

Beweis:
For any given a∈A and x, y ∈ X we use ∣d(x, a) − d(y, a)∣ ≤ d(x, y). It implies

dist(x, A) ≤ d(x, y) + d(y, a), dist(y, A) ≤ d(y, x) + d(x, a).

Taking the inmums with respect to a∈A gives ∣ dist(x, A) − dist(y, A)∣ ≤ d(x, y) and
the claim follows ◻

Lemma 13.5 (Urysohn's Lemma) . Let A, B ⊂ X be closed disjoint subsets. Then there
is a continuous function f ∶ X → [0, 1] such that f ∣A = 1 and f ∣B = 0.
Beweis:
dist(x,B)
Choose f (x) ∶= dist(x,A)+dist(x,B) . ◻

Satz 13.6 (Tietze's Extension Theorem). Let A ⊂ X be closed and f ∶ A → R continuous.


Then there is a continuous function F ∶ X → [inf A f, supA f ] such that

F ∣A = f and sup ∣F (x)∣ ≤ sup ∣f (x)∣.


x∈X x∈A

Beweis:
We may w.l.o.g. assume inf A f = −1, supA f = 1, for otherwise consider the continuous
function

1 1 1
x↦ arctan(f (x) − c) where c ∶= (sup f + inf f ), d ∶= arctan( (sup f − inf f )).
d 2 A A 2 A A

(In the exceptional case supA f − inf A f = ∞ dene d= π


2 , in the case supA f − inf A f = 0
simply extend by a constant function.)

We rst construct a continuous function g0 ∶ X → R such that

2 1
∣f (x) − g0 (x)∣ ≤ ∀x ∈ A ∣g0 (x)∣ ≤ ∀x ∈ X.
3 3
In fact, Urysohn's Lemma provides a function h ∶ X → [0, 1] satisfying h∣{f ≤− 1 } = 0 and
3
h∣{f ≥ 1 } = 1. Then g0 (x) ∶= 1
3 h(x) − 3 has the desired properties because of ∣g0 (x)∣ ≤ 3
1 1
3
and

1 1 2
Case − 1 ≤ f (x) ≤ − ∶ ∣f (x) − g0 (x)∣ = ∣f (x) + ∣ ≤
3 3 3
1 1 2
Case − ≤ f (x) ≤ ∶ ∣f (x) − g0 (x)∣ = ∣f (x)∣ + ∣g(x)∣ ≤
3 3 3
1 1 2
Case ≤ f (x) ≤ 1 ∶ ∣f (x) − g0 (x)∣ = ∣f (x) − ∣ ≤ .
3 3 3

Next apply this preliminary result to the continuous function f˜ ∶= 32 (f − g0 ) ∶ X → [0, 1].
g̃1 ∶ X → [0, 13 ] that ∣f˜(x) − g˜1 (x)∣ ≤
2
We thus obtain a continuous function such
3 for all
x ∈ A. Dening g1 ∶= 2
3 g˜1 we obtain

2 2
∣f (x) − g0 (x) − g1 (x)∣ ≤ ( ) for all x ∈ A.
3
Inductively we obtain a sequence of continuous functions (gn ) that satisfy

n
1 2 n 2 n+1
∣gn (x)∣ ≤ ( ) ∀x ∈ X, ∣f (x) − ∑ gi (x)∣ ≤ ( ) ∀x ∈ A (13.4)
3 3 i=0 3

Dene F (x) ∶= ∑∞i=0 gi (x). This series converges absolutely, so F is continuous with
∣F (x)∣ ≤ 1 and F ∣A = f follows from (13.4). ◻

13.5 The Milman-Pettis Theorem

We now provide a proof of Theorem 13.4 that goes back to [15, 18].

Proposition 13.7. Let X
be a normed space and f, f1 , . . . , fn ∈ X ′. Then f is a linear
combination of f1 , . . . , fn if and only if ⋂j=1 ker(fj ) ⊂ ker(f ).
n

Beweis:
We assume w.l.o.g. that {f1 , . . . , fn } is linearly independent, in particular fj ≠ 0 for
j = 1, . . . , n.

Assume rst f = ∑nj=1 αj fj . Then, for all x ∈ ⋂nj=1 ker(fj ), we have

n
f (x) = ∑ αj fj (x) = 0,
j=1

hence x ∈ ker(f ). So this direction is trivially true.

Now assume ⋂nj=1 ker(fj ) ⊂ ker(f ) and our aim is to show f = α1 f1 + . . . + αn fn for some
n ∈ N and α1 , . . . , αn ∈ R. We proceed inductively and start with the case n = 1.
In that case we have ker(f1 ) ⊂ ker(f ). Since f ≠ 0 the spaces ker(f ), ker(f1 ) both have
54 ∗ ∗
codimension 1 , we infer ker(f1 ) = ker(f ). Choosing x ∈ X such that f1 (x ) ≠ 0 we
f (x )

obtain f −
f1 (x∗ ) f1 ≡ 0, which settles the case n = 1.
Now assume that the claim has been proved for up to n linearly independent functionals
and let f1 , . . . , fn+1 be given as required. To apply the induction hypothesis dene for
j = 1, . . . , n the functionals gj ∶= fj ∣ker(fn+1 ) and g ∶= f ∣ker(fn+1 ) on the space ker(fn+1 ).
Then
n n
⋂ ker(gj ) = ⋂ (ker(fj ) ∩ ker(fn+1 )) ⊂ ker(f ) ∩ ker(fn+1 ) = ker(g).
j=1 j=1

Hence, on ker(fn+1 ), g = ∑nj=1 αj gj for some α1 , . . . , αn ∈ R. This implies

n n ⎛ n ⎞
x ∈ ker(fn+1 ) ⇒ f (x) = g(x) = ∑ αj gj (x) = ∑ αj fj (x =) ⇒ x ∈ ker f − ∑ αj fj .
j=1 j=1 ⎝ j=1 ⎠

The induction hypothesis gives f − ∑nj=1 αj fj = αn+1 fn+1 for some αn+1 ∈ R. This proves
the claim. ◻

Satz 13.8 (Helly) . Let X be a normed space and f1 , . . . , fn ∈ X ′ , c1 , . . . , cn ∈ R. Then


the following statements are equivalent:

(i) There is x∈X such that fj (x) = cj for j = 1, . . . , n.


(ii) There is an M > 0 such that ∣α1 c1 + . . . + αn cn ∣ ≤ M ∥α1 f1 + . . . + αn fn ∥ for all
α1 , . . . , αn ∈ R.

54
More precisely: Choose x∗ ∈ X with f (x∗ ) = 1. If x ∈ ker(f ) is arbitrary, then x − f1 (x)x∗ ∈ ker(f1 ).
By assumption, x − f1 (x)x ∈ ker(f ), which is impossible in case f1 (x) = 0, so we get x ∈ ker(f1 ).

This proves ker(f ) = ker(f1 ).

If (ii) is satised, then x can be chosen as in (i) with ∥x∥ ≤ M if dim(X) < ∞ or
∥x∥ ≤ M + ε, ε > 0 if dim(X) = ∞.

Beweis:
(i)→(ii): If such an x exists, then

∣α1 c1 + . . . + αn cn ∣ = ∣(α1 f1 + . . . + αn fn )(x)∣ ≤ ∥α1 f1 + . . . + αn fn ∥∥x∥,

so we may choose M ∶= ∥x∥.

(ii)→ (i): We assume w.l.o.g. that {f1 , . . . , fn } is linearly independent. Dene T (x) =
(f1 (x), . . . , fn (x)) for x ∈ Rn .
fk is not a linear combination of the other n − 1
Since
functionals, Proposition 13.7 implies ⋂j≠k ker(fj ) ⊂ / ker(fk ). In particular there is yk ∈
⋂j≠k ker(fj ) such that fk (yk ) = 1. This implies T (yk ) = ek for all k ∈ {1, . . . , n} and hence
T is surjective. For any c ∈ Rn ∖ {0} we may thus nd y ∈ X such that
n
(f1 (y), . . . , fn (y)) = T (y) = (c1 , . . . , cn ), y ∉ ⋂ ker(fj ).
j=1

It therefore remains to nd x ∈ y + ⋂nj=1 ker(fj ) such that ∥x∥ can be chosen as required.

In fact, a corollary of the Hahn-Banach Theorem provides a bounded linear functional


f ∈ X ′ with ∥f ∥ = 1 und f (y) = dist(y, ⋂nj=1 ker(fj )) and f ∣⋂nj=1 ker(fj ) ≡ 0. This implies
⋂nj=1 ker(fj ) ⊂ ker(f ). Then Proposition 13.7 implies f = ∑nj=1 αj fj for some α1 , . . . , αn ∈
R and our assumption implies
n n n n
dist(y, ⋂ ker(fj )) = f (y) = ∑ αj fj (y) = ∑ αj cj ≤ M ∥ ∑ αj fj ∥ = M ∥f ∥ = M.
j=1 j=1 j=1 j=1

We may thus nd z ∈ ⋂nj=1 ker(fj ) such that ∥y − z∥ ≤ M if dim(X) < ∞ and ≤ M +ε if
dim(X) = ∞. So the claim follows for x ∶= y − z . ◻

Satz 13.9 (Milman (1938), Pettis (1939)). Let X be a uniformly convex Banach space.
Then X is reexive.

Beweis:
Let F ∈ X ′′ be arbitrary with ∥F ∥ = 1. We have to construct x∈X with F = Jx, which
will be achieved with Helly's Theorem.

By denition of the norm in X ′′ there is a normed sequence (fn ) ⊂ X ′ such that F (fn ) >
1− 1
n . We then apply Helly's Theorem to cj ∶= F (fj ), j = 1, . . . , n. In view of

∣α1 c1 + . . . + αn cn ∣ = F (α1 f1 + . . . + αn fn ) ≤ ∥F ∥∥α1 f1 + . . . + αn fn ∥

we nd xn ∈ X satisfying

∥xn ∥ ≤ 1 + 1/n and fk (xn ) = F (fk ) for all k = 1, . . . , n.
This implies 1− 1
n < F (fk ) = fk (xn ) ≤ ∥xn ∥ ≤ 1 + 1
n and thus for m≥n

2 2
2− ≤ F (fn ) + F (fn ) = fn (xn ) + fn (xm ) = fn (xn + xm ) ≤ ∥xn + xm ∥ ≤ 2 + .
n n
Hence,
xn + xm
lim ∥ ∥ = 1, lim ∥xn ∥ = 1.
n→∞ 2 n→∞

Uniform convexity implies (argue by contradiction)

sup ∥xn − xm ∥ → 0 (n → ∞).


m≥n

Hence, (xn ) is a Cauchy sequence in X and thus converges to some x ∈ X. This proves
the existence of x∈X such that

∥x∥ = 1 and F (fk ) = lim fk (xm ) = fk (x).


m→∞

This element x ∈ X is uniquely determined. Indeed, if x̃ ∈ X is another such element,


then the sequence (xn ) ∶= (x, x̃, x, x̃, . . .) satises the conditions ∥xn ∥ ≤ 1+ n1 and fk (xn ) =
F (fk ) for all k = 1, . . . , n just as above. We have seen that this implies that (xn ) is Cauchy,
so the sequence converges. But this implies x = x̃, which proves the uniqueness.

Now x any f ∈ X ′ , ∥f ∥X ′ = 1 and dene f0 ∶= f and consider the sequence (fk )k∈N0 for
fk as above. Helly's Theorem yields a sequence (yn ) with

∥yn ∥ ≤ 1 + 1/n and fk (yn ) = F (fk ) for all k = 0, . . . , n,
which is Cauchy by the arguments presented above. Hence (yn ) converges and the uni-
queness property proved above implies yn → x as n → ∞. But this implies

F (f ) = F (f0 ) = lim f0 (yn ) = f0 (x) = f (x).


n→∞

Since x is independent of f and f ∈ X ′ , ∥f ∥X ′ = 1 was arbitrary, we conclude

F (f ) = f (x) for all f ∈ X ′.

Hence F = Jx. Since F ∈ X ′′ was arbitrary, we conclude that J ∶ X → X ′′ is surjective,


i.e., X is reexive. ◻

Literatur

[1] R. A. Adams and J. J. F. Fournier. Sobolev spaces, volume 140 of Pure and Applied
Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, second edition,
2003.

[2] H. Brezis. Functional analysis, Sobolev spaces and partial dierential equations.
Universitext. Springer, New York, 2011.

[3] A. P. Calderón. On the dierentiability of absolutely continuous functions. Riv.


Mat. Univ. Parma, 2:203213, 1951.

[4] A.-P. Calderón and A. Zygmund. Local properties of solutions of elliptic partial die-
rential equations. Studia Math., 20:171225, 1961. doi:10.4064/sm-20-2-181-225.

[5] H. Federer and W. H. Fleming. Normal and integral currents. Ann. of Math. (2),
72:458520, 1960. doi:10.2307/1970227.

[6] E. Fischer. Sur la convergence en moyenne. Comptes rendus de l'Académie des


sciences, 144:10221024, 1907.

[7] W. H. Fleming and R. Rishel. An integral formula for total gradient variation. Arch.
Math. (Basel), 11:218222, 1960. doi:10.1007/BF01236935.

[8] E. Gagliardo. Proprietà di alcune classi di funzioni in più variabili. Ricerche Mat.,
7:102137, 1958.

[9] P. Grisvard. Elliptic problems in nonsmooth domains, volume 69 of Classics in


Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM),
Philadelphia, PA, 2011. Reprint of the 1985 original [ MR0775683], With a foreword
by Susanne C. Brenner. doi:10.1137/1.9781611972030.ch1.

[10] H. Hanche-Olsen and H. Holden. The Kolmogorov-Riesz compactness theorem.


Expo. Math., 28(4):385394, 2010. doi:10.1016/j.exmath.2010.03.001.

[11] W. Kondrachov. Sur certaines propriétés des fonctions dans l'espace. C. R. (Dokla-
dy) Acad. Sci. URSS (N. S.), 48:535538, 1945.

[12] P. D. Lax. A short path to the shortest path. Amer. Math. Monthly, 102(2):158159,
1995. doi:10.2307/2975350.

[13] P. D. Lax and A. N. Milgram. Parabolic equations. In Contributions to the theory of

partial dierential equations, Annals of Mathematics Studies, no. 33, pages 167190.
Princeton University Press, Princeton, N. J., 1954.

[14] N. G. Meyers and J. Serrin.H = W. Proc. Nat. Acad. Sci. U.S.A., 51:10551056,
1964. doi:10.1073/pnas.51.6.1055.

[15] D. Milman. On some criteria for the regularity of spaces of type (b). Comptes
Rendus (Doklady) de l'Académie des Sciences de l'URSS, 20:243246, 1938.

[16] J. Moser. A sharp form of an inequality by N. Trudinger. Indiana Univ. Math. J.,
20:10771092, 1970/71. doi:10.1512/iumj.1971.20.20101.

[17] L. Nirenberg. On elliptic partial dierential equations. Ann. Scuola Norm. Sup.
Pisa Cl. Sci. (3), 13:115162, 1959.

[18] B. J. Pettis. A proof that every uniformly convex space is reexive. Duke Math. J.,
5(2):249253, 1939. doi:10.1215/S0012-7094-39-00522-3.

[19] F. Rellich. Ein satz über mittlere konvergenz. Nachrichten von der Gesellschaft
der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1930:3035,
1930. URL: http://eudml.org/doc/59297.

[20] F. Riesz. Sur les systèmes orthogonaux de fonctions. Comptes rendus de l'Académie
des sciences, 144:615619, 1907.

[21] F. Riesz. Sur une espèce de géométrie analytique des systèmes de fonctions somma-
bles. Comptes rendus de l'Académie des sciences, 144:14091411, 1907.

[22] S. Sobolev. Sur un théorème d'analyse fonctionnelle. Rec. Math. [Mat. Sbornik]
N.S., 4:471497, 1938.

[23] E. M. Stein. Singular integrals and dierentiability properties of functions. Princeton


Mathematical Series, No. 30. Princeton University Press, Princeton, N.J., 1970.

[24] G. Talenti. Best constant in Sobolev inequality. Ann. Mat. Pura Appl. (4), 110:353
372, 1976. doi:10.1007/BF02418013.

[25] N. S. Trudinger. On imbeddings into Orlicz spaces and some applications. J. Math.
Mech., 17:473483, 1967. doi:10.1512/iumj.1968.17.17028.

[26] H. Whitney. Analytic extensions of dierentiable functions dened in closed sets.


Trans. Amer. Math. Soc., 36(1):6389, 1934. doi:10.2307/1989708.

[27] W. H. Young. On the multiplication of successions of fourier constants. Proc. R.
Soc. Lond. A, 87:331339, 1912. doi:http://doi.org/10.1098/rspa.1912.0086.

