Corresponding Lecture Notes
1 Introduction
This course is about Sobolev spaces, which are indispensable for a modern theory of
Partial Differential Equations (PDEs). One example of such a Sobolev space is given
by

H^1(Ω) = {u ∈ L^2(Ω) : ∂_i u ∈ L^2(Ω) for all i ∈ {1, . . . , N}}

where Ω ⊂ R^N, N ∈ N, is an open set and ∂_i u is the i-th weak (sometimes called
distributional) partial derivative of u. We will clarify later how this is defined. For the
moment it suffices to know that it is a generalization of the classical i-th partial derivative.
The main reasons for using these spaces are the following:
One can formulate PDEs in these spaces: finding a solution u of some PDE is
often equivalent to finding a function ũ ∈ H^1(Ω) solving the corresponding PDE in
its weak formulation. These functions are then called weak solutions of the given
problem.
Since Sobolev spaces carry the structure of Banach spaces (the space H^1(Ω) above
is even a Hilbert space), one can use powerful tools from functional analysis to prove
the existence of weak solutions of PDEs.
In addition to that, Sobolev spaces allow one to prove the existence of (weak) solutions even
when classical solutions do not exist, for instance when coefficient functions are discontinuous.
Furthermore, they are nowadays indispensable for numerical methods like Finite
Element Methods or Galerkin Methods. This is our motivation to study these spaces in
more detail. The plan of the lecture is the following:
(12) (1L) Reflexivity
I am going to illustrate the benefits of the theory with the aid of a running example
that we will understand better and better during this course. This example is an elliptic
boundary value problem of the form

−∆u(x) + c(x)u(x) = f(x) in Ω,  u = g on ∂Ω. (1.1)

Here, ν : ∂Ω → R^N denotes the outer unit normal vector field and σ is the surface measure
of ∂Ω. So our boundary value problem (1.1) takes the form
∫_Ω ∇ϕ(x) · ∇u(x) + c(x)u(x)ϕ(x) dx = ∫_Ω f(x)ϕ(x) dx  for all ϕ ∈ C_0^∞(Ω),  u|_∂Ω = g.
Then the so-called weak formulation of the boundary value problem (1.1) takes the form
In the following we shall develop the tools to get a rich existence theory for this problem.
The interesting point is that classical solutions are not only solutions of (1.2),
but even the converse holds in some cases (after modification on a null set).
Preliminaries:
¹Recall div(ϕf) = ϕ div(f) + ∇ϕ · f for scalar functions ϕ ∈ C^1(Ω) and vector fields f ∈ C^1(Ω; R^N).
Moreover, div(∇u) = ∆u.
We collect some facts and fix the notation. All sets and functions considered in
this lecture are assumed to be Lebesgue-measurable. Vector spaces come with the field
K = R. In the following:

∥f∥_p := (∫_Ω |f(x)|^p dx)^{1/p}  (1 ≤ p < ∞),  ∥f∥_∞ := ess sup_Ω |f|.

is finite for these functions and ∥·∥_p defines a norm that turns (L^p(Ω), ∥·∥_p) into
a Banach space. In the case p = 2 it is a Hilbert space endowed with the inner
product ⟨f, g⟩_2 := ∫_Ω f(x)g(x) dx. Hölder's inequality states

∥fg∥_1 ≤ ∥f∥_p ∥g∥_q  if 1/p + 1/q = 1.
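Hölder's inequality lends itself to a quick numerical sanity check. A minimal sketch (the sample functions f, g, the exponents and the grid are illustrative choices, not from the lecture), approximating the norms on Ω = (0, 1) by midpoint sums:

```python
import numpy as np

# Numerical sanity check of Hoelder's inequality ||f g||_1 <= ||f||_p ||g||_q
# on Omega = (0,1); f, g and the conjugate exponents are illustrative choices.
n = 20000
x = np.linspace(0.0, 1.0, n, endpoint=False) + 0.5 / n   # midpoint grid
dx = 1.0 / n
f = np.sin(3 * x) + 2.0
g = np.exp(-x) * np.cos(5 * x)
p, q = 3.0, 1.5                                          # 1/3 + 1/1.5 = 1

lhs = np.sum(np.abs(f * g)) * dx                         # ||f g||_1
rhs = (np.sum(np.abs(f) ** p) * dx) ** (1 / p) * \
      (np.sum(np.abs(g) ** q) * dx) ** (1 / q)           # ||f||_p * ||g||_q
assert lhs <= rhs
```

Since the midpoint sums define discrete L^p norms with weight dx, the inequality holds exactly for the sums themselves (discrete Hölder), not merely up to quadrature error.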
We provide an argument for this in the case 1 ≤ p < ∞. It suffices to find a sequence
(ϕ_n) ⊂ C_0^∞(Ω) such that ϕ_n → |f|^{p−2} f in L^{p′}(Ω). We will see later why such a
sequence exists. Then

∫_Ω |f(x)|^p dx = lim_{n→∞} (∫_Ω f(x)ϕ_n(x) dx + ∫_Ω f(x)(|f(x)|^{p−2} f(x) − ϕ_n(x)) dx)
≤ 0 + limsup_{n→∞} ∥f∥_p ∥ϕ_n − |f|^{p−2} f∥_{p′}
= 0,
hence f = 0 almost everywhere. A similar proof shows that the same conclusion is
true under the weaker assumption f ∈ L^1_loc(Ω).²
Differential operators: The differential operators ∂_i, ∇, ∆ have the usual meaning:
∂_i is the i-th partial derivative of a function, ∇ = (∂_1, . . . , ∂_N) is the gradient
and ∆ = div(∇) = ∑_{i=1}^N ∂_{ii} is the Laplacian, which plays a central role in many
PDEs. Later on ∂_i u, ∇u, etc. will denote the weak i-th partial derivative, weak
gradient of u, etc. We will not make a notational distinction.
∥T∥ := ∥T∥_{X→Y} = sup_{x≠0} ∥Tx∥_Y / ∥x∥_X < ∞.
We will only deal with linear operators (in contrast to nonlinear ones) in this
lecture.
for some function ψ ∈ C^{m,α}(R^{N−1}). In that case the outer unit normal vector field
at the boundary point x := (x′, x_N) ∈ ∂Ω³ is given by

ν(x) = (1 + |∇ψ(x′)|²)^{−1/2} (∇ψ(x′), −1). (1.3)
Surface integrals: We want to (rather: need to) integrate over the boundaries of
C^{m,α}-domains Ω ⊂ R^N. This is done via

∫_∂Ω g dσ := ∑_{i=1}^M ∫_{∂Ω∩U_i} g dσ_{U_i}
²Apply the previous reasoning to f · 1_K for all compact subsets K ⊂ Ω.
³It is indeed the outer one because one can formally check via Taylor expansion that x + tν(x) ∈ Ω^c
for 0 < t < t_0 and x + tν(x) ∈ Ω for −t_0 < t < 0 provided that t_0 > 0 is chosen sufficiently small.
where ∂Ω = ⋃_{i=1}^M U_i. Here, the U_i's are disjoint neighbourhoods (graphical pieces)
as above with ψ_i ∈ C^{m,α}(R^{N−1}). For such neighbourhoods the latter integrals are
defined according to

∫_{∂Ω∩U_i} g dσ_{U_i} = ∫_{{x′ ∈ R^{N−1} : (x′, ψ_i(x′)) ∈ U_i}} g(x′, ψ_i(x′)) √(1 + |∇ψ_i(x′)|²) dx′.
∫_Ω div(f) dx = ∫_∂Ω f · ν dσ,
where the boundary integral is given by the previous definition. Notice that the
outer unit normal vector field is defined locally in terms of the parametrizing
function ψ as in (1.3). As a consequence one obtains the integration-by-parts
formula for u, v ∈ C^1(Ω):

∫_Ω ∂_i u(x) v(x) dx = ∫_∂Ω u v ν_i dσ − ∫_Ω u(x) ∂_i v(x) dx  (i = 1, . . . , N). (1.4)
We shall use this for test functions v = ϕ ∈ C_0^∞(Ω) that vanish close to the boundary.⁴
In that case we obtain for all open sets Ω ⊂ R^N and all u ∈ C^1(Ω) the equality

∫_Ω ∂_i u ϕ dx = −∫_Ω u ∂_i ϕ dx.
End Lec 01
We start with the definition of a weak derivative of a given function u ∈ L^1_loc(Ω) for some
open subset Ω ⊂ R^N, N ∈ N.
⁴Establishing this rigorously is a bit technical, so we skip it. Essentially, it is a consequence of (1.4)
where Ω is replaced by some large enough ball on which all boundary terms are well-defined.
In the one-dimensional case one replaces the i-th partial derivative by the usual derivative.
We shall also use the symbols ∂_x, ∂_y etc. as for classical derivatives. The definition ∂_i u := w
makes sense because we now prove that two different weak partial derivatives coincide
almost everywhere.

Lemma 2.2. Assume that w, w̃ ∈ L^1_loc(Ω) are i-th weak derivatives of u. Then w = w̃ (almost everywhere).
Proof:
Let ϕ ∈ C_0^∞(Ω) be arbitrary. By Definition 2.1,
So we can speak of the weak partial derivative, the weak gradient (defined via
∇ := (∂_1, . . . , ∂_N)) of a function u ∈ L^1_loc(Ω). A function u ∈ L^1_loc(Ω) may in general be
discontinuous on Ω, but nevertheless admit weak derivatives. We will see some examples
below. Furthermore, in contrast to the classical derivatives that are defined pointwise for
each x ∈ Ω, the weak derivative a priori depends on Ω as a whole. Higher weak derivatives
are defined accordingly: For a given multi-index α ∈ N_0^N the corresponding weak partial
derivative is supposed to satisfy

∫_Ω u(x) ∂^α ϕ(x) dx = (−1)^{|α|} ∫_Ω w(x)ϕ(x) dx  for all ϕ ∈ C_0^∞(Ω).

Here, for any given α = (α_1, . . . , α_N) ∈ N_0^N the symbol ∂^α stands for ∂_1^{α_1} · · · ∂_N^{α_N} and
|α| := α_1 + . . . + α_N. We first show that this differentiation concept generalizes the notion
of a classical derivative. The following result tells us that the classical gradient is the
only candidate for the weak derivative if it exists.
Proof:
In this proof we denote the classical i-th partial derivative by ∂/∂x_i. The classical
integration-by-parts formula (1.4) yields for all i = 1, . . . , N

∫_{Ω̃} (∂u/∂x_i)(x) ϕ(x) dx = −∫_{Ω̃} u(x) ∂_i ϕ(x) dx  for all ϕ ∈ C_0^∞(Ω̃).

Using this fact for Ω̃ = Ω proves (i). In order to prove (ii) we assume that a weak gradient
on Ω exists. Then each ϕ ∈ C_0^∞(Ω̃) belongs to C_0^∞(Ω), so the definition of a weak
derivative implies
One can check: ∂_i(β_1 u + β_2 v) = β_1 ∂_i u + β_2 ∂_i v for all β_1, β_2 ∈ R provided that the weak
derivatives on the right exist.
Example 2.4.
(a) Let u(x) := |x| for x ∈ Ω := (−1, 1) ⊂ R. Proposition 2.3 tells us that the function
v(x) := 1 for x > 0 and v(x) := −1 for x < 0 is the only candidate for a weak
derivative of u. We can check this by hand. For ϕ ∈ C_0^∞(Ω) we have

∫_{−1}^{1} u(x)ϕ′(x) dx = −∫_{−1}^{0} xϕ′(x) dx + ∫_{0}^{1} xϕ′(x) dx
= −[xϕ(x)]_{−1}^{0} + ∫_{−1}^{0} ϕ(x) dx + [xϕ(x)]_{0}^{1} − ∫_{0}^{1} ϕ(x) dx
= −ϕ(−1) + ϕ(1) − ∫_{−1}^{1} v(x)ϕ(x) dx
= −∫_{−1}^{1} v(x)ϕ(x) dx.
The last equality holds because ϕ vanishes close to the boundary of Ω and
∫_Ω u(x, y)∂_x ϕ(x, y) d(x, y) = ∫_{−1}^{1} (∫_{0}^{1} ∂_x ϕ(x, y) dx) dy + ∫_{0}^{1} (∫_{−1}^{1} ∂_x ϕ(x, y) dx) dy
= ∫_{−1}^{1} (ϕ(1, y) − ϕ(0, y)) dy + ∫_{0}^{1} (ϕ(1, y) − ϕ(−1, y)) dy
= −∫_{−1}^{1} ϕ(0, y) dy.
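The identity from Example 2.4(a) can be verified numerically. A minimal sketch (the bump ϕ below is an illustrative test function, not from the text):

```python
import numpy as np

# Check that v = sign(x) acts as the weak derivative of u = |x| on (-1,1):
# int u*phi' dx = -int v*phi dx for a smooth bump phi supported in (-0.9, 0.9).
x = np.linspace(-1.0, 1.0, 400001)
dx = x[1] - x[0]
inside = np.maximum(0.81 - x**2, 1e-300)          # positive on the support
phi = np.where(np.abs(x) < 0.9, np.exp(-1.0 / inside), 0.0)
dphi = np.gradient(phi, dx)                       # numerical phi'

u, v = np.abs(x), np.sign(x)
lhs = np.sum(u * dphi) * dx
rhs = -np.sum(v * phi) * dx
assert abs(lhs - rhs) < 1e-6
```

The two sides agree up to the quadrature and finite-difference errors, which are far below the tolerance on this grid.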
Further examples will be given below. We now introduce the Sobolev spaces.
Here, ∂^α u ∈ L^p(Ω) stands for the statement that the weak derivative ∂^α u exists and that
it lies in L^p(Ω) (not only in L^1_loc(Ω)). We remark that other equivalent norms can be
taken without changing the theory. For instance, for 1 ≤ p ≤ ∞ one may also take

u ↦ ∑_{|α|≤k} ∥∂^α u∥_p.

The definition given above has the pleasant feature that the most important spaces
H^k(Ω) are generated by the inner product

⟨u, v⟩_{k,2} := ∑_{|α|≤k} ∫_Ω ∂^α u(x) ∂^α v(x) dx.

In the special case k = 1, which is the most important one for us,

⟨u, v⟩_{1,2} = ∫_Ω ∑_{i=1}^N ∂_i u(x)∂_i v(x) + u(x)v(x) dx = ∫_Ω ∇u(x) · ∇v(x) + u(x)v(x) dx.
Theorem 2.6. Let k ∈ N, 1 ≤ p ≤ ∞. Then (W^{k,p}(Ω), ∥·∥_{W^{k,p}(Ω)}) is a Banach space and
(H^k(Ω), ⟨·,·⟩_{k,2}) is a Hilbert space.
Proof:
We use that ∥·∥_{W^{k,p}(Ω)} and ⟨·,·⟩_{k,2} are indeed a norm and an inner product, respectively.
The proof of this fact is straightforward and therefore omitted. So it remains to show that
the spaces W^{k,p}(Ω) are complete with respect to these norms. To show this, we use that
the spaces (L^p(Ω), ∥·∥_p) are complete.
Let (u_n)_{n∈N} be a Cauchy sequence in W^{k,p}(Ω), i.e., for all ε > 0 there is m_0 ∈ N such
that

∥u_m − u_n∥_{W^{k,p}(Ω)} ≤ ε  for all m, n ≥ m_0.

By definition of the norm we conclude that for each fixed α ∈ N_0^N, |α| ≤ k, the sequence
(∂^α u_n)_{n∈N} is a Cauchy sequence in L^p(Ω); by completeness it has a limit v_α ∈ L^p(Ω).
Define v := v_{(0,...,0)} ∈ L^p(Ω). We claim ∂^α v = v_α. Indeed, for all test functions ϕ ∈ C_0^∞(Ω),
we have

This implies that v_α is the α-th weak derivative of v. Since v_α ∈ L^p(Ω), we infer ∂^α v = v_α
for all α ∈ N_0^N such that |α| ≤ k, hence v ∈ W^{k,p}(Ω). Hence,

∥u_n − v∥_{W^{k,p}(Ω)} = (∑_{|α|≤k} ∥∂^α(u_n − v)∥_p^p)^{1/p} = (∑_{|α|≤k} ∥∂^α u_n − v_α∥_p^p)^{1/p} → 0  as n → ∞. (2.1)
We have thus proved that (u_n) converges in W^{k,p}(Ω), which finishes the proof. ◻
As a closed subspace of W k,p (Ω) the space W0k,p (Ω) is a Banach space (equipped with
the same norm as W k,p (Ω)).
Example 2.8.
(a) Consider u(x) := |x|^γ for x ∈ Ω := {y ∈ R^N : |y| < 1} and γ ∈ R ∖ {0}, N ∈ N, N ≥ 2.
By Proposition 2.3, the only candidate for the weak gradient is ∇u(x) := γx|x|^{γ−2}. One
may show that this function is indeed the weak gradient of u provided
that γ > 1 − N (which ensures that ∇u is locally integrable). We compute using
polar coordinates⁵

∫_Ω |u(x)|^p dx = ∫_{|x|<1} |x|^{γp} dx = ∫_0^1 r^{N−1} |S^{N−1}| r^{γp} dr = |S^{N−1}| ∫_0^1 r^{N+γp−1} dr = |S^{N−1}| / (N + γp)

if and only if N + γp > 0, and the integral is +∞ otherwise. In the same way we get, precisely for
N + (γ − 1)p > 0,

∫_Ω |∇u(x)|^p dx = ∫_{|x|<1} |γx|x|^{γ−2}|^p dx = |γ|^p ∫_0^1 r^{N−1} |S^{N−1}| r^{(γ−1)p} dr = |γ|^p |S^{N−1}| / (N + (γ − 1)p).

We conclude:

u ∈ W^{1,p}(Ω) ⟺ N + γp > 0 and N + (γ − 1)p > 0 ⟺ γ > 1 − N/p.
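The radial computation can be cross-checked numerically. A sketch with the illustrative parameters N = 3, p = 2, γ = 0.6 (so that γ > 1 − N/p):

```python
import numpy as np
from math import pi

# Verify int_B |grad u|^p dx = |gamma|^p |S^{N-1}| / (N + (gamma-1)p) for
# u(x) = |x|^gamma on the unit ball of R^3, via the radial (polar) formula.
N, p, g = 3, 2.0, 0.6
surface = 4 * pi                                  # |S^2| = 4*pi
r = np.linspace(0.0, 1.0, 200001)
integrand = abs(g)**p * surface * r**(N - 1 + (g - 1) * p)

# trapezoid rule for the radial integral
numeric = np.sum((integrand[:-1] + integrand[1:]) * 0.5) * (r[1] - r[0])
exact = abs(g)**p * surface / (N + (g - 1) * p)
assert abs(numeric - exact) < 1e-3
```

Combining the powers of r first (exponent N − 1 + (γ − 1)p = 1.2 > 0) avoids the 0^negative indeterminacy at r = 0.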
(b) Let I := (0, 1), 1 ≤ p ≤ ∞ and g ∈ L^p(I). Define

G(x) := ∫_0^x g(t) dt.

We claim G ∈ W^{1,p}(I) and G′ = g in the weak sense. So let ϕ ∈ C_0^∞(I) be a test
function. Using Fubini's Theorem we get

∫_0^1 G(x)ϕ′(x) dx = ∫_0^1 (∫_0^x g(t) dt) ϕ′(x) dx
= ∫_0^1 ∫_0^1 1_{t≤x≤1} g(t)ϕ′(x) dt dx
= ∫_0^1 (∫_0^1 1_{t≤x≤1} g(t)ϕ′(x) dx) dt
= ∫_0^1 g(t) (∫_t^1 ϕ′(x) dx) dt
= ∫_0^1 g(t)(ϕ(1) − ϕ(t)) dt
= −∫_0^1 g(t)ϕ(t) dt.
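The computation above can be tested numerically with a concrete g. A sketch (g, the bump ϕ and the grid are illustrative choices):

```python
import numpy as np

# With G(x) = int_0^x g(t) dt one should have
# int_0^1 G(x) phi'(x) dx = -int_0^1 g(x) phi(x) dx for phi in C_c^inf((0,1)).
x = np.linspace(0.0, 1.0, 200001)
dx = x[1] - x[0]
g = np.cos(5 * x)
G = np.sin(5 * x) / 5.0                           # exact antiderivative of g
s = (x - 0.5) / 0.4                               # supp(phi) = [0.1, 0.9]
phi = np.where(np.abs(s) < 1, np.exp(-1.0 / np.maximum(1 - s**2, 1e-300)), 0.0)
dphi = np.gradient(phi, dx)                       # numerical phi'

lhs = np.sum(G * dphi) * dx
rhs = -np.sum(g * phi) * dx
assert abs(lhs - rhs) < 1e-6
```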
⁵For integrable functions we have

∫_{R^N} u(x) dx = ∫_0^∞ r^{N−1} (∫_{S^{N−1}} u(rω) dσ(ω)) dr.
So the weak derivative of G is g, i.e., G′ = g in the weak sense. Moreover, Hölder's
inequality gives

∫_0^1 |G(x)|^p + |G′(x)|^p dx = ∫_0^1 |∫_0^x g(t) dt|^p + |g(x)|^p dx
≤ ∫_0^1 (∫_0^x 1^{p′} dt)^{p/p′} ∥g∥_p^p + |g(x)|^p dx
≤ 2∥g∥_p^p < ∞.
W k,p (Ω) and almost everywhere. We will call such sequences approximating sequences.
In particular,
This is not true for p = ∞. To see this choose Ω = {y ∈ R^N : |y| < 1} and u(x) = |x|.
Then u ∈ W^{1,∞}(Ω) and its weak gradient is given by ∇u(x) = x/|x|. If a sequence (u_n) as
above existed, then ∂_1 u would be the L^∞(Ω)-limit of continuous (even smooth) functions.
But a uniform limit of continuous functions is continuous, so x ↦ x_1/|x| would have to be
continuous, which is false. Nevertheless, the sequences can be chosen to satisfy
⁶In the case v ∈ W^{1,p}(Ω) ∩ C^∞(Ω) an easier proof without density argument is possible. The observation
is that for any test function ϕ ∈ C_0^∞(Ω) the function vϕ is again a test function. So if ∂_i u denotes the
weak partial derivative, we get

= −∫_Ω ∂_i u (vϕ) dx − ∫_Ω uϕ ∂_i v dx
(ii) (Chain rule) Assume u ∈ W^{1,p}(Ω) and that G ∈ C^1(R) has a bounded derivative.
Then G ∘ u ∈ W^{1,p}(Ω) with ∂_i(G ∘ u) = G′(u)∂_i u.
(iii) Assume u ∈ W^{1,p}(Ω). Then u⁺ := max{u, 0}, u⁻ := max{−u, 0}, |u| ∈ W^{1,p}(Ω)⁷
with

∫_Ω u(x)v(x)∂_i ϕ(x) dx =(?) lim_{n→∞} ∫_Ω u_n(x)v_n(x)∂_i ϕ(x) dx

We justify the equalities marked with (?). Applying Hölder's inequality a couple of times we get
So the pointwise almost everywhere convergence of (∂_i u_n)(x)v_n(x) + (∂_i v_n)(x)u_n(x) →
(∂_i u)(x)v(x) + (∂_i v)(x)u(x) gives the claim.
We now prove (ii). Since the assumptions imply G′(u)∂_i u ∈ L^p(Ω), it suffices to prove
that the i-th weak partial derivative is given by ∂_i(G ∘ u) = G′(u)∂_i u. So let ϕ ∈ C_0^∞(Ω)

⁷sign(z) = 1 if z > 0, sign(0) = 0, sign(z) = −1 if z < 0.
⁸The Riesz-Fischer Theorem, which establishes the completeness of L^p(Ω), tells you that u_n → u in
L^p(Ω) implies that there is a subsequence (u_{n_k}) with u_{n_k} → u almost everywhere and |u_{n_k}| ≤ w
for some w ∈ L^p(Ω). So u_n → u, v_n → v in W^{1,p}(Ω) implies u_{n_k} → u, v_{n_k} → v, ∇u_{n_k} → ∇u, ∇v_{n_k} → ∇v
almost everywhere and |u_{n_k}| + |v_{n_k}| + |∇u_{n_k}| + |∇v_{n_k}| ≤ w for some w ∈ L^p(Ω).
Example: u_n(x) := (1/n) 1_{[n,n+1]}(x) converges to the trivial function in L^p(R) for 1 ≤ p < ∞. In the
case 1 < p < ∞ we can take w(x) := ∑_{n∈N} |u_n(x)| ∈ L^p(R). In the case p = 1 this is not true, but
we may take w(x) := ∑_{n∈N} |u_{n²}(x)| ∈ L^1(R), which is a bound for the subsequence (u_{n²})_{n∈N}.
be given and choose an approximating sequence (u_n) for u. The classical chain rule gives
∂_i(G ∘ u_n) = G′(u_n)∂_i u_n for all n ∈ N and hence

∫_Ω G(u(x))∂_i ϕ(x) dx =(?) lim_{n→∞} ∫_Ω G(u_n(x))∂_i ϕ(x) dx
The claim is proved once we have justified the equalities marked with (?). The first one is a
consequence of
The second one follows again by the Dominated Convergence Theorem. Notice that
G′(u_n) → G′(u) holds pointwise almost everywhere because G′ is continuous.
We prove (iii). Set G_ε(z) := √(z² + ε²) − ε for ε > 0. Then

|G_ε(z) − |z|| = 2ε|z| / (√(z² + ε²) + ε + |z|) ≤ ε
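The uniform bound |G_ε(z) − |z|| ≤ ε is easy to check numerically (the grid and the ε-values below are illustrative):

```python
import numpy as np

# G_eps(z) = sqrt(z^2 + eps^2) - eps approximates |z| uniformly, error <= eps.
z = np.linspace(-10.0, 10.0, 100001)
for eps in (1.0, 0.1, 1e-3):
    G = np.sqrt(z**2 + eps**2) - eps
    err = np.max(np.abs(G - np.abs(z)))
    assert err <= eps
```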
Part (ii) gives ∂_i(G_ε(u)) = G′_ε(u)∂_i u in the weak sense. We thus obtain from the
Dominated Convergence Theorem

This proves the claim for |u|. The remaining statements are a consequence of u⁺ = ½(|u| + u)
and u⁻ = ½(|u| − u) and the linearity of weak derivatives. ◻
Similarly, one can prove further elementary properties of Sobolev functions by exploiting
the denseness of smooth functions.
3 Lax-Milgram Theorem and Riesz' Representation Theorem
We now show how Sobolev spaces may be used to solve Partial Differential Equations.
To this end we go back to (1.1) and study the elliptic boundary value problem

∫_Ω ∇ϕ(x) · ∇u(x) + u(x)ϕ(x) dx = ∫_Ω f(x)ϕ(x) dx  for all ϕ ∈ C_0^∞(Ω),  u|_∂Ω = 0.

The boundary conditions are encoded in the solution space. We are thus looking for a
function u ∈ H_0^1(Ω) satisfying

∫_Ω ∇ϕ(x) · ∇u(x) + u(x)ϕ(x) dx = ∫_Ω f(x)ϕ(x) dx  for all ϕ ∈ C_0^∞(Ω).
Where does the proof fail? It is the existence of v ∈ ker(l)^⊥ such that l(v) = 1. For that,
one needs that the kernel (more generally: a closed subspace) admits an orthogonal complement,
i.e., H = ker(l) ⊕ ker(l)^⊥. Recall that the construction of the orthogonal complement uses that
Cauchy sequences converge: For u ∈ H one defines its projection π(u) ∈ ker(l) onto ker(l) via
∥π(u) − u∥ = inf{∥v − u∥ : v ∈ ker(l)} = min{∥v − u∥ : v ∈ ker(l)}, so u = π(u) + (u − π(u)). From
this construction: u − π(u) ⊥ ker(l). The existence of a minimizer is due to the fact that the
minimizing sequence (which is a Cauchy sequence) converges. So this is the point where the
completeness of H is used.
so ϕ − l(ϕ)v ∈ ker(l) for all ϕ ∈ H. Since v is orthogonal to the kernel, we obtain
Corollary 3.2. Assume f ∈ L^2(Ω). Then (3.1) has a unique weak solution u ∈ H_0^1(Ω)
that satisfies

∥u∥_{1,2} ≤ ∥f∥_2.

Proof. We apply Riesz' Representation Theorem to the Hilbert space H_0^1(Ω), equipped
with the inner product ⟨·,·⟩_{1,2}, and the linear functional l : H_0^1(Ω) → R given by
l(ϕ) := ∫_Ω f(x)ϕ(x) dx.
So Riesz' Representation Theorem shows that there is precisely one u ∈ H_0^1(Ω) satisfying

∫_Ω ∇u(x) · ∇ϕ(x) + u(x)ϕ(x) dx = ⟨u, ϕ⟩_{1,2} = l(ϕ) = ∫_Ω f(x)ϕ(x) dx  for all ϕ ∈ H_0^1(Ω).
End Lec 03
In principle, one may apply Riesz' Representation Theorem not only with the standard
inner product; any other equivalent inner product may be taken as well. So in fact this result
allows one to solve a whole family of boundary value problems and not only the particular one
from (3.1). Anyway, there is a more general result, which is called the Lax-Milgram Lemma. It
essentially tells us that the symmetry requirement on an inner product (i.e. ⟨u, v⟩ = ⟨v, u⟩
for all u, v ∈ H) is not needed for a solution theory for problems like
Theorem 3.3 (Lax-Milgram Lemma [13]). Let (H, ⟨·,·⟩) be a (real) Hilbert space, let a(·,·) :
H × H → R be a bilinear form and l : H → R a linear functional such that:
(i) a is bounded, i.e., there is C > 0 such that |a(u, v)| ≤ C∥u∥∥v∥ for all u, v ∈ H,
(ii) a is coercive, i.e., there is c > 0 such that a(u, u) ≥ c∥u∥² for all u ∈ H,
(iii) l is bounded, i.e., there is M > 0 such that |l(v)| ≤ M∥v∥ for all v ∈ H.
Then (3.2) has a unique solution u ∈ H satisfying ∥u∥ ≤ c^{−1}M.
Proof:
For any given u ∈ H, the maps v ↦ a(u, v) and v ↦ l(v) are bounded linear functionals by
assumptions (i) and (iii). So Riesz' Representation Theorem yields uniquely determined
elements w_u, r ∈ H such that a(u, v) = ⟨w_u, v⟩ and l(v) = ⟨r, v⟩ for all v ∈ H. Writing
Au := w_u, the problem (3.2) is therefore equivalent to the equation Au = r.
To find a unique solution to this problem we apply Banach's Fixed Point Theorem to

T : H → H,  u ↦ u − ϱ · (Au − r)

We first show that A is linear and bounded. For any given u_1, u_2, v ∈ H and α_1, α_2 ∈ R
we have
Using these facts we now show that T is a contraction for suitable ϱ ≠ 0. Using that A is
linear, we get for all u_1, u_2 ∈ H

∥T u_1 − T u_2∥² = ∥(u_1 − u_2) − ϱ · A(u_1 − u_2)∥²
= ∥u_1 − u_2∥² − 2ϱ⟨A(u_1 − u_2), u_1 − u_2⟩ + ϱ²∥A(u_1 − u_2)∥²
= ∥u_1 − u_2∥² − 2ϱ a(u_1 − u_2, u_1 − u_2) + ϱ²∥A(u_1 − u_2)∥²
≤ ∥u_1 − u_2∥² − 2ϱc∥u_1 − u_2∥² + ϱ²C²∥u_1 − u_2∥²
Choosing ϱ = cC^{−2} we thus obtain

∥T u_1 − T u_2∥ ≤ √(1 − c²C^{−2}) ∥u_1 − u_2∥.

So T is a contraction and hence possesses precisely one fixed point. As we have seen
above, this implies that (3.2) has a unique solution. This solution, call it u, satisfies
c∥u∥² ≤ a(u, u) = l(u) ≤ M∥u∥, so that ∥u∥ ≤ c^{−1}M is proved, too. ◻
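The contraction argument can be mirrored in finite dimensions, where H = R^n with the Euclidean inner product and a(u, v) = ⟨Au, v⟩. A sketch with an illustrative nonsymmetric but coercive matrix (A, b and the iteration count are arbitrary choices, not from the lecture):

```python
import numpy as np

# Finite-dimensional sketch of the Lax-Milgram contraction: a(u,v) = <Au, v>
# with A coercive but nonsymmetric, l(v) = <b, v>. Iterating
# T u = u - rho*(A u - b) with rho = c/C^2 converges to the solution of Au = b.
A = np.array([[2.0, 1.0],
              [-1.0, 2.0]])          # <Au, u> = 2|u|^2, so coercive with c = 2
b = np.array([1.0, 3.0])
c = 2.0                               # coercivity constant
C = np.linalg.norm(A, 2)              # boundedness constant (here sqrt(5))
rho = c / C**2

u = np.zeros(2)
for _ in range(500):
    u = u - rho * (A @ u - b)         # Banach fixed point iteration
assert np.allclose(A @ u, b, atol=1e-8)
```

The contraction factor here is √(1 − c²C⁻²) = √(1 − 4/5) ≈ 0.45, so the iteration converges geometrically.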
A weak solution u ∈ H_0^1(Ω) to this problem satisfies a(u, v) = l(v) for all v ∈ H where
Corollary 3.4. Assume f ∈ L^2(Ω) and c ∈ L^∞(Ω) with c(x) ≥ µ > 0 almost everywhere.
Then (3.3) has a unique weak solution u ∈ H_0^1(Ω) that satisfies
4 Approximation by smooth functions
In this section we want to show that smooth functions approximate Sobolev functions u ∈
W^{k,p}(Ω) for k ∈ N, 1 ≤ p < ∞. In particular we will prove the existence of approximating
sequences (u_n) ⊂ C^∞(Ω) ∩ W^{k,p}(Ω) satisfying (2.2). We start with some preliminaries
about test functions.
We first need to establish the mere existence of test functions. The starting point is the
following fact about the function

ζ(x) := e^{−1/x} if x > 0,  ζ(x) := 0 if x ≤ 0.

The main difficulty is to inductively prove ζ^{(n)}(x) = p_n(x)x^{−2n}e^{−1/x} for all x ∈ (0, ∞),
where p_n is a polynomial (of degree ≤ n). Using this and e^{−z}z^m → 0 as z → ∞ for all
m ∈ N one gets the result. Notice that this example shows that there are C^∞(R)-
functions that are not real-analytic¹⁰. The following result establishes the existence of
cut-off (or bump) functions, which are a special kind of test functions from C_0^∞(Ω).
Proposition 4.2. Let Ω ⊂ R^N be open, x_0 ∈ Ω and 0 < r < R < dist(x_0, ∂Ω). Then there
is ψ ∈ C_0^∞(Ω) such that 0 ≤ ψ ≤ 1, ψ ≡ 1 on B_r(x_0) and ψ ≡ 0 on R^N ∖ B_R(x_0),
as well as |∇ψ(x)| ≤ C|R − r|^{−1} for all x ∈ B_R(x_0) ∖ B_r(x_0) and some C > 0. In particular,
C_0^∞(Ω) ⊋ {0}.
Proof:
Choose ζ as in Proposition 4.1 and define ψ_1 ∈ C^∞(R) via ψ_1(t) := ζ(1 − t)ζ(t); in
particular ψ_1 ≥ 0, supp(ψ_1) = [0, 1]. As a consequence,

0 ≤ ψ_2 ≤ 1,  ψ_2|_{(−∞,0]} ≡ 1,  ψ_2|_{[1,∞)} ≡ 0,  where ψ_2(t) := ∫_t^∞ ψ_1(s) ds / ∫_R ψ_1(s) ds.

Then ψ(x) := ψ_2((|x − x_0| − r)/(R − r)) has all the desired properties. ◻
¹⁰Notice that the zeros of a non-trivial real-analytic function on R are isolated, whereas 0 is a non-isolated zero of ζ.
They are the building blocks for the following more general result that allows one to localize
the considerations. We will see an example of this in the proof of the Meyers-Serrin
Theorem.
Theorem 4.3 (Partition of Unity). Let I be a set and (O_i)_{i∈I} a family of open subsets of R^N,
Ω := ⋃_{i∈I} O_i. Then there is a sequence (ϕ_j)_{j∈N} ⊂ C_0^∞(Ω) with the following properties:
(i) 0 ≤ ϕ_j(x) ≤ 1 for all x ∈ Ω and all j ∈ N,
(ii) supp(ϕ_j) ⊂ O_{i(j)} for some i(j) ∈ I for all j ∈ N,
(iii) ∑_{j=1}^∞ ϕ_j(x) = 1 for all x ∈ Ω,
(iv) For each compact set K ⊂ Ω there are m ∈ N and an open set W ⊃ K such that
ϕ_1 + . . . + ϕ_m = 1 on W.
Proof:
We define the set of open balls¹¹
Then define
Then (i) and (ii) are clear and it remains to prove (iii) and (iv).

ϕ_1 + . . . + ϕ_j = 1 − (1 − ψ_1) · . . . · (1 − ψ_j)  for all j ∈ N.

For any given compact subset K ⊂ Ω we have¹² K ⊂ ⋃_{j=1}^m B_{r_j/2}(q_j) =: W for some m ∈ N.
Hence (1 − ψ_1(x)) · . . . · (1 − ψ_m(x)) = 0 for x ∈ W. We obtain for all n ∈ N, n ≥ m

¹¹We call them B_r(q) := {x ∈ R^N : |x − q| < r}.
¹²Here we use that K ⊂ Ω ⊂ ⋃{B_{r_j/2}(q_j) : j ∈ N}. Prove this!
A particularly important role is played by so-called mollifiers. These are test functions
φ ∈ C_0^∞(R^N) with φ ≥ 0, supp(φ) ⊂ B_1(0) and ∫_{R^N} φ(x) dx = 1. Proposition 4.2 tells us
that such functions exist. Considering

φ_ε(x) := ε^{−N} φ(x/ε)

we obtain a mollifying sequence satisfying supp(φ_ε) ⊂ B_ε(0) and ∫_{R^N} φ_ε(x) dx = 1.
Young's inequality states that for 1 ≤ p, q, r ≤ ∞ with 1 + 1/r = 1/p + 1/q and f ∈ L^p(R^N),
g ∈ L^q(R^N) we have

∥f ∗ g∥_r ≤ ∥f∥_p ∥g∥_q.

Proof:
We only consider 1 ≤ p, q, r < ∞. This is a consequence of the following application of
Hölder's inequality (notice r ≥ p, r ≥ q and 1 = 1/r + (r−p)/(pr) + (r−q)/(qr)) and
Tonelli's Theorem:
∥f ∗ g∥_r^r = ∫_{R^N} |f ∗ g|^r dx
≤ ∫_{R^N} (∫_{R^N} |f(y)||g(x − y)| dy)^r dx
= ∫_{R^N} (∫_{R^N} (|f(y)|^{p/r} |g(x − y)|^{q/r}) · |f(y)|^{(r−p)/r} · |g(x − y)|^{(r−q)/r} dy)^r dx
≤ ∫_{R^N} (∫_{R^N} |f(y)|^p |g(x − y)|^q dy) (∫_{R^N} |f(y)|^p dy)^{(r−p)/p} (∫_{R^N} |g(x − t)|^q dt)^{(r−q)/q} dx
= ∫_{R^N} ∫_{R^N} |f(y)|^p |g(x − y)|^q dy dx · ∥f∥_p^{r−p} ∥g∥_q^{r−q}
= ∫_{R^N} |f(y)|^p dy ∥g∥_q^q · ∥f∥_p^{r−p} ∥g∥_q^{r−q}
= ∥f∥_p^r ∥g∥_q^r.
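Young's inequality also holds for sequences on Z with counting measure, so the discrete analogue can be checked directly with np.convolve (random data and exponents are illustrative choices):

```python
import numpy as np

# Discrete Young's inequality on Z: ||f*g||_r <= ||f||_p ||g||_q whenever
# 1 + 1/r = 1/p + 1/q. Here p = 1.5, q = 2, hence r = 6.
rng = np.random.default_rng(0)
f = rng.standard_normal(50)
g = rng.standard_normal(80)
p, q = 1.5, 2.0
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)               # r = 6

conv = np.convolve(f, g)                          # full discrete convolution
lhs = np.sum(np.abs(conv) ** r) ** (1 / r)
rhs = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)
assert lhs <= rhs
```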
In particular, convolution is a well-defined operation L^p(R^N) ∗ L^q(R^N) ⊂ L^r(R^N) with the
corresponding inequality. One checks that the following rules hold for f, g, h ∈ C_0^∞(R^N):
(i) f ∗ g = g ∗ f,
(ii) (f ∗ g) ∗ h = f ∗ (g ∗ h),
(iii) supp(f ∗ g) ⊂ supp(f ) + supp(g) = {x + y ∶ x ∈ supp(f ), y ∈ supp(g)},
(iv) ∂ α (f ∗ g) = ∂ α f ∗ g = f ∗ ∂ α g ,
(v) ∫RN (f ∗ g)h dx = ∫RN f (g ∗ h) dx.
We will prove these identities in the exercise sessions. The strong hypothesis f, g, h ∈
C0∞ (RN ) is chosen here for simplicity. Each item (i)-(v) actually holds for a much more
general class of functions.
End Lec 04
For a given function u ∈ L^p(Ω), i.e., u1_Ω ∈ L^p(R^N), we consider the convolution products

u_ε(x) := (φ_ε ∗ u)(x) = ∫_Ω φ_ε(x − y)u(y) dy.
step functions,
Lemma 4.5. Let A ⊂ R^N be measurable with |A| < ∞. Then, for every ε > 0, there is a compact
set K ⊂ R^N and an open set O ⊂ R^N such that

K ⊂ A ⊂ O,  |O ∖ K| < ε.¹³
This can be used as follows. In the situation of the Lemma, consider the continuous
function

ϕ(x) := dist(x, O^c) / (dist(x, O^c) + dist(x, K))  (x ∈ R^N).

¹³Prove this!

We want to show that it is a good L^p-approximation of the indicator function whenever
1 ≤ p < ∞. It satisfies ϕ(x) = 1_A(x) = 0 for x ∈ O^c as well as ϕ(x) = 1_A(x) = 1 for x ∈ K.
Moreover, 0 ≤ |ϕ − 1_A| ≤ 1 on R^N and ϕ − 1_A vanishes outside O ∖ K. Hence,

∥ϕ − 1_A∥_{L^p(R^N)}^p ≤ |O ∖ K| < ε.
Proposition 4.6. Let Ω ⊂ RN be open and 1 ≤ p < ∞. Then C0 (Ω) is dense in Lp (Ω).
Proof:
Let u ∈ L^p(Ω). By construction of the Lebesgue integral there is a step function s =
∑_{j=1}^M a_j 1_{A_j} ∈ L^p(Ω) with

∥u − s∥_{L^p(Ω)} ≤ δ/4.

By the Dominated Convergence Theorem there is a compact subset K ⊂ Ω such that¹⁴

∥s∥_{L^p(Ω∖K)} ≤ δ/4.

Following the ideas from above, we find continuous functions ϕ_1, . . . , ϕ_M ∈ C_0(Ω) as above
with

∥1_{A_j∩K} − ϕ_j∥_{L^p(Ω)} ≤ δ / (2M(|a_j| + 1))  (j = 1, . . . , M).¹⁵

Supports inside Ω can be achieved because A_j ∩ K is a compact subset of Ω. We
define

v := ∑_{j=1}^M a_j ϕ_j ∈ C_0(Ω).
This proves the claim. ◻
Proof:
That u_ε ∈ C^∞(R^N) holds follows from (iv). Young's inequality gives for 1 ≤ p ≤ ∞ and ε > 0

∥u_ε∥_{L^p(R^N)} = ∥φ_ε ∗ u∥_{L^p(R^N)} ≤ ∥φ_ε∥_{L^1(R^N)} ∥u∥_{L^p(R^N)} = ∥u∥_{L^p(R^N)}.

Choose v ∈ C_0(R^N) with

∥u − v∥_{L^p(R^N)} ≤ δ/4.

We then have

∥u_ε − u∥_{L^p(R^N)} ≤ ∥u_ε − v_ε∥_{L^p(R^N)} + ∥v_ε − v∥_{L^p(R^N)} + ∥v − u∥_{L^p(R^N)}
≤ ∥(u − v)_ε∥_{L^p(R^N)} + ∥v_ε − v∥_{L^p(R^N)} + ∥u − v∥_{L^p(R^N)}
≤ 2∥u − v∥_{L^p(R^N)} + ∥v_ε − v∥_{L^p(R^N)}
≤ δ/2 + ∥v_ε − v∥_{L^p(R^N)}.

So it remains to show that the latter term tends to zero as ε → 0. Choose a compact set
K ⊂ R^N containing supp(v) + B_1(0). Then supp(v_ε), supp(v) ⊂ K for 0 < ε < 1 and we
obtain

∥v_ε − v∥_{L^p(R^N)} = ∥1 · (v_ε − v)∥_{L^p(K)} ≤ ∥1∥_{L^p(K)} ∥v_ε − v∥_{L^∞(K)} → 0  as ε → 0,

since ∥1∥_{L^p(K)} = |K|^{1/p} < ∞ and v_ε → v uniformly on K (v is uniformly continuous).
This proves the claim. ◻
(u1_K)_ε ∈ C_0^∞(Ω). Proposition 4.7 shows ∥u1_K − (u1_K)_ε∥_{L^p(R^N)} ≤ δ/2 for small enough
ε > 0, so

∥u − v_ε∥_{L^p(Ω)} ≤ ∥u1_K − (u1_K)_ε∥_{L^p(Ω)} + δ/2 = ∥u1_K − (u1_K)_ε∥_{L^p(R^N)} + δ/2 ≤ δ,

which proves the claim. ◻
NB: This approximation by smooth functions with compact support is possible in L^p(Ω),
but in most cases not in W^{k,p}(Ω) with k ≥ 1. The reason is that cutting away the regions
close to the boundary produces large derivatives. Later on, we will prove Poincaré's
inequality for functions in W_0^{k,p}(Ω), which obviously does not hold for all functions from
W^{k,p}(Ω) for reasonable Ω ⊂ R^N. This will provide an indirect proof of W_0^{k,p}(Ω) ⊊ W^{k,p}(Ω).
∂^α v_ε = ∂^α(φ_ε ∗ v) = φ_ε ∗ ∂^α v = (∂^α v)_ε.

∥v_ε − v∥_{W^{k,p}(Ω)}^p = ∑_{|α|≤k} ∥∂^α(v_ε − v)∥_{L^p(Ω)}^p
= ∑_{|α|≤k} ∥∂^α(v_ε − v)∥_{L^p(R^N)}^p
= ∑_{|α|≤k} ∥(∂^α v)_ε − ∂^α v∥_{L^p(R^N)}^p
≤ δ^p.

¹⁶Notice that χ ∈ C_0^∞(Ω) implies that the proof of the product rule actually does not rely on the
approximation result that we are about to prove. So there is no danger of circular reasoning!
Theorem 4.10 (Meyers-Serrin (1964) [14]). Let Ω ⊂ R^N be open and 1 ≤ p < ∞. Then
W^{k,p}(Ω) equals the closure of C^∞(Ω) ∩ W^{k,p}(Ω) with respect to ∥·∥_{W^{k,p}(Ω)}.
Proof:
For j ∈ N define the open sets

Ω_j := {x ∈ Ω : dist(x, ∂Ω) > 1/j},  U_j := Ω_j ∖ Ω̄_{j−2},

where Ω_{−1} := Ω_0 := ∅. Then (U_j)_{j∈N} is an open covering of Ω. Choose some subordinate
partition of unity (ψ_j)_{j∈N}, see Theorem 4.3.
Let u ∈ W^{k,p}(Ω) and ε > 0 be arbitrary. Since supp(uψ_j) ⊂ U_j ⊂ Ω, Proposition 4.9
yields a mollifier φ_{ε_j} ∈ C_0^∞(R^N) such that v_j := (uψ_j)_{ε_j} = φ_{ε_j} ∗ (uψ_j) satisfies
∥v_j − uψ_j∥_{W^{k,p}(Ω)} ≤ ε2^{−j}.
Set v := ∑_{j=1}^∞ v_j. Then v ∈ C^∞(Ω) since v is a locally finite sum, see Theorem 4.3 (iv).
Moreover,
∥v − u∥_{W^{k,p}(Ω)} = ∥∑_{j=1}^∞ v_j − ∑_{j=1}^∞ uψ_j∥_{W^{k,p}(Ω)} ≤ ∑_{j=1}^∞ ∥v_j − uψ_j∥_{W^{k,p}(Ω)} ≤ ∑_{j=1}^∞ ε2^{−j} ≤ ε.
This holds regardless of any regularity assumptions on the boundary of Ω. The situation
is different if we require the approximating sequence (u_n) to lie in C^∞(R^N) ∩
W^{k,p}(R^N) or even in C_0^∞(R^N)|_Ω := {u|_Ω : u ∈ C_0^∞(R^N)}. For the proof of the following
result we refer to [1, Theorem 3.22, 4.11].

This result extends to many important unbounded uniform Lipschitz domains where,
essentially, the boundary of Ω can be written as a graph of Lipschitz functions with
uniformly bounded Lipschitz constants. Notice that for generic¹⁷ open sets Ω ≠ R^N we
have that the closure of C_0^∞(R^N)|_Ω is a strict superset of the closure of C_0^∞(Ω). The case
Ω = R^N (no boundary at all) is the only important exception.
Proof. We only prove the result for k = 1 to avoid technicalities (i.e. the product rule
for higher derivatives). Let u ∈ W^{1,p}(R^N) and choose a cut-off function ϕ ∈ C_0^∞(R^N)
as in Proposition 4.2 with 0 ≤ ϕ ≤ 1, ϕ(x) = 1 for |x| ≤ 1 and ϕ(x) = 0 for |x| ≥ 2. Set
ϕ_R(x) := ϕ(x/R). We claim uϕ_R → u in W^{1,p}(R^N). Indeed,

This proves uϕ_R → u in W^{1,p}(R^N). So Proposition 4.9 (with Ω = R^N, χ = ϕ_R) shows that
the function (uϕ_R)_ε ∈ C_0^∞(R^N) converges to uϕ_R as ε → 0. Hence, C_0^∞(R^N) is dense in
W^{1,p}(R^N), i.e.,
∥uε − u∥W k,p (Ω) ≤ ∥uε − (un )ε ∥W k,p (Ω) + ∥(un )ε − un ∥W k,p (Ω) + ∥un − u∥W k,p (Ω)
¹⁷Ω = R^N ∖ {0} is not such a generic open set.
≤ 2∥u_n − u∥_{W^{k,p}(Ω)} + ∥(u_n)_ε − u_n∥_{W^{k,p}(Ω)}.

So, for any given δ > 0 we may choose n ∈ N such that

∥u_n − u∥_{W^{k,p}(Ω)} ≤ δ/4.

On the other hand, by Proposition 4.9 for Ω = R^N and χ ∈ C_0^∞(R^N) satisfying χ = 1 on
the support of u_n, we have u_n = u_nχ and hence

∥(u_n)_ε − u_n∥_{W^{k,p}(Ω)} = ∥(u_nχ)_ε − u_nχ∥_{W^{k,p}(Ω)} ≤ δ/2  for 0 < ε < ε_0.

Taking these two estimates together, we obtain

∥u_ε − u∥_{W^{k,p}(Ω)} ≤ 2 · δ/4 + δ/2 = δ  for all ε ∈ (0, ε_0).
This means uε → u in W k,p (Ω) as ε → 0, which is all we had to show. ◻
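The convergence of mollifications can be illustrated numerically: φ_ε ∗ u approaches u in L^p as ε → 0. A sketch for u = 1_{[−1,1]} on R and p = 2 (all concrete choices are illustrative):

```python
import numpy as np

# L^2 convergence of mollifications u_eps = phi_eps * u for u = 1_{[-1,1]}.
x = np.linspace(-3.0, 3.0, 6001)
dx = x[1] - x[0]
u = (np.abs(x) <= 1.0).astype(float)

def mollify(u, eps):
    t = x / eps
    phi = np.where(np.abs(t) < 1, np.exp(-1.0 / np.maximum(1 - t**2, 1e-300)), 0.0)
    phi /= np.sum(phi) * dx                       # enforce integral phi_eps = 1
    return np.convolve(u, phi, mode="same") * dx  # discrete phi_eps * u

errs = [np.sqrt(np.sum((mollify(u, e) - u) ** 2) * dx) for e in (0.4, 0.2, 0.1)]
assert errs[0] > errs[1] > errs[2]                # L^2 error decreases with eps
```

Since the grid is symmetric, phi is centered in its array and mode="same" keeps the convolution aligned with the original grid.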
End Lec 05
In this section we want to prove Stein's Extension Theorem [23]. It states that for bounded
Lipschitz domains¹⁸ Ω ⊂ R^N each function u ∈ W^{k,p}(Ω) with k ∈ N, p ∈ [1, ∞] admits an
extension Eu ∈ W^{k,p}(R^N) such that¹⁹

We will show (indirectly) that this requirement on the boundary regularity of Ω is close
to optimal. In fact, the result is not true for mere C^{0,α}-domains with 0 < α < 1 such
as Ω := {(x, y) ∈ R² : 0 < x < 1, 0 < y < x^{1+δ}} with δ > 0. We recall that a bounded
Lipschitz domain Ω ⊂ R^N is such that ∂Ω ⊂ ⋃_{j=1}^M U_j ⊂ R^N for open sets U_1, . . . , U_M such
that, after permutation of coordinates,
χ∣Ω ≡ 1.
Why is it interesting and of practical relevance to have such an operator? Assume you
want to prove some estimate of the form

for some positive constant D. This is for instance satisfied for the identity operator T = id
or for integral operators TU(x) = ∫_{R^N} K(x, y)U(y) dy with nonnegative kernels K. We show
that estimates for such operators can be obtained with the aid of the corresponding
estimates on R^N, which are sometimes easier to prove. Having proved the latter and having
an extension operator E as above at our disposal, one obtains the desired estimate on Ω
for free. Indeed,
We start with a technical tool known as the Whitney Decomposition Theorem (or Whitney's
Covering Lemma [26, pp. 67-69]). We say that W is a closed dyadic cube if

Two such dyadic cubes W, W′ are called almost disjoint if W ∩ W′ is a null set. So they
may intersect at some corner or along parts of their faces, but their interiors are disjoint. For
example, when N = 2, set

W_1 := [0, 1] × [0, 1],  W_2 := [0, 1] × [1, 2],  W_3 := [1, 3/2] × [1, 3/2],  W_4 := [3/4, 1] × [3/4, 1].

Each of these cubes is dyadic; W_1, W_2, W_3 are mutually almost disjoint, W_3, W_4 and
W_2, W_4 are almost disjoint, too, but W_1, W_4 are not. The following preliminary results
are essentially due to Calderón and Zygmund [4, Section 3].
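The N = 2 example can be checked with a few lines of code: two closed rectangles are almost disjoint precisely when their overlap rectangle has zero area. A pure-Python illustration using the cube coordinates from the example:

```python
# Almost-disjointness check for the dyadic cubes W1,...,W4 from the example.
# Rectangles are encoded as (x0, x1, y0, y1); overlap_area computes the
# Lebesgue measure of the intersection rectangle.
def overlap_area(A, B):
    w = min(A[1], B[1]) - max(A[0], B[0])
    h = min(A[3], B[3]) - max(A[2], B[2])
    return max(w, 0.0) * max(h, 0.0)

W1 = (0.0, 1.0, 0.0, 1.0)
W2 = (0.0, 1.0, 1.0, 2.0)
W3 = (1.0, 1.5, 1.0, 1.5)
W4 = (0.75, 1.0, 0.75, 1.0)

assert overlap_area(W1, W2) == 0.0     # share an edge: null set
assert overlap_area(W1, W3) == 0.0     # share a corner: null set
assert overlap_area(W2, W3) == 0.0     # share part of an edge: null set
assert overlap_area(W3, W4) == 0.0     # almost disjoint as well
assert overlap_area(W2, W4) == 0.0     # almost disjoint as well
assert overlap_area(W1, W4) > 0.0      # interiors overlap: NOT almost disjoint
```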
Lemma 5.1. Let Ω ⊂ R^N be open and ∅ ⊊ Ω ⊊ R^N. Then there are closed almost disjoint
dyadic cubes W_1, W_2, . . . with the following properties:
(I) ⋃_{j∈N} W_j = Ω,
(II) diam(W_j) ≤ dist(W_j, Ω^c) ≤ 4 diam(W_j) for all j ∈ N,
(III) W_i ∩ W_j ≠ ∅ implies (1/4) diam(W_i) ≤ diam(W_j) ≤ 4 diam(W_i),
(IV) #{i ∈ N : W_i ∩ W_j ≠ ∅} ≤ 12^N for all j ∈ N.
Furthermore, for any fixed κ ∈ (0, 1/4) there are ϕ_1, ϕ_2, . . . ∈ C_0^∞(R^N) such that
The proof of this result is quite technical; the interested reader may find it in the
Appendix. We need this result in order to prove the existence of a
smooth version of the distance function. We study
Proof:
We choose dyadic cubes W_j and ϕ_j ∈ C_0^∞(R^N) as in Lemma 5.1 and define

d_Ω(x) := d(x) := ∑_{k=1}^∞ diam(W_k)ϕ_k(x).

In view of property (IV) this sum is locally a finite sum.²¹ Now cover a given compact
set K ⊂ Ω by finitely many O_{x_1}, . . . , O_{x_m}, so ϕ_j = 0 on O_{x_1} ∪ . . . ∪ O_{x_m} ⊃ K whenever
j ∈ N ∖ (I_{x_1} ∪ . . . ∪ I_{x_m}). So only finitely many ϕ_j are non-zero on each compact subset of
Ω. Hence ϕ_j ∈ C_0^∞(Ω) for all j ∈ N implies d ∈ C^∞(Ω).
We start by proving (i). Let x ∈ Ω and choose k ∈ N with x ∈ W_k, which is possible by (I).
Then, by (II),

δ(x) ≤ dist(W_k, Ω^c) + diam(W_k) ≤ 5 diam(W_k). (5.1)

So the lower bound from (i) follows from ϕ_k(x) = 1 (by (V)) and

d(x) ≥ diam(W_k)ϕ_k(x) = diam(W_k) ≥ (1/5) δ(x)  by (5.1).

Moreover, by (II) and (III),

δ(x) ≥ dist(W_k, Ω^c) ≥ diam(W_k) ≥ (1/4) diam(W_j)  if W_j ∩ W_k ≠ ∅. (5.2)
²⁰ This is a consequence of (III).
²¹ Indeed, any given point x ∈ Ω has an open neighbourhood O_x and a finite index set I_x ⊂ N such that j ∈ N ∖ I_x implies φ_j|_{O_x} = 0. This follows from (I),(V).
This implies

d(x) = ∑_{W_j ∩ W_k ≠ ∅} diam(W_j) φ_j(x) ≤ ∑_{W_j ∩ W_k ≠ ∅} 4 δ(x) ≤ 4 · 12^N δ(x),

using (V), then (5.2) and (V), then (IV). Moreover, by (VI),

|∂^α d(x)| ≤ ∑_{W_j ∩ W_k ≠ ∅} diam(W_j) · C_α diam(W_j)^{−|α|}
= C_α ∑_{W_j ∩ W_k ≠ ∅} diam(W_j)^{1−|α|}
≤ C_α ∑_{W_j ∩ W_k ≠ ∅} ((1/5) δ(x))^{1−|α|}   (by (5.1))
≤ C_α 5^{|α|−1} 12^N δ(x)^{1−|α|} =: C̃_α δ(x)^{1−|α|}.
◻
End Lec 06
Another technical tool is the following.
Proposition 5.3. There are c, C > 0 and a continuous function ϕ ∶ [1, ∞) → R satisfying
(i) ∫_1^∞ φ(t) dt = 1,
(ii) ∫_1^∞ t^k φ(t) dt = 0 for all k ∈ N,
(iii) |φ(t)| ≤ C e^{−ct} for all t ∈ [1, ∞).
Proof:
The basic idea is to use the residue theorem for (i) and Cauchy's integral formula for (ii).
We consider

ψ(z) := (e/(πz)) exp(−ω (z − 1)^{1/4})   (z ∈ C ∖ [1, ∞))

where ω := e^{−iπ/4} = (1 − i)/√2. One can check that the function z ↦ exp(−ω (z − 1)^{1/4})
is holomorphic in C ∖ [1, ∞). Hence, ψ is meromorphic and z ↦ ψ(z) z^k is holomorphic in C ∖ [1, ∞) for any given k ∈ N. So the integrals along piecewise smooth closed curves γ encircling z = 0 may be computed as follows:

(1/(2πi)) ∫_γ ψ = lim_{z→0} ψ(z) z = (e/π) exp(−ω (−1)^{1/4}) = (e/π) exp(−ω ω̄) = 1/π.

Moreover,

∫_γ (·)^k ψ(·) = 0   for all k ∈ N.
Still, this is a statement about line integrals (Kurvenintegrale) in the complex plane
for complex-valued integrands and not about integrals along the real interval [1, ∞) for
real-valued integrands. So we approximate such an integral by suitable line integrals in
the complex plane.
We define the concatenation of the curves γ and η via

(γ ⊕ η)(t) = γ(2t) if 0 ≤ t ≤ 1/2,   and   (γ ⊕ η)(t) = η(2t − 1) if 1/2 < t ≤ 1.
For z = 1 + r e^{iφ} with 0 < φ < 2π we have

Re(ω (z − 1)^{1/4}) = Re(e^{−iπ/4} · r^{1/4} e^{iφ/4}) = r^{1/4} cos((φ − π)/4) ≥ r^{1/4} · (1/√2) = |z − 1|^{1/4}/√2.
From (5.3) and the Dominated Convergence Theorem we get

2πi · (δ_{k0}/π) = ∫_{γ_ε} (·)^k ψ

where

φ(t) = (e/(πt)) exp(−(t − 1)^{1/4}/√2) sin((t − 1)^{1/4}/√2).
◻
Until now we have not seen any reason why Lipschitz domains, or Lipschitz-continuous functions, play a particular role. The basic link between Lipschitz domains and the regularized distance function d := d_{R^N ∖ Ω} is the following.
Proof:
We have by Proposition 5.2 (i) for x ∈ R^N ∖ Ω

d(x) ≤ 4 · 15^N δ(x) = 4 · 15^N inf{|x − z| : z ∈ Ω̄} ≤ 4 · 15^N |(x′, x_N) − (x′, ψ(x′))| = 4 · 15^N (ψ(x′) − x_N).
To prove the lower bound for d let L denote the Lipschitz constant of ψ. Then Proposition 5.2 (i) gives²⁶

d(x) ≥ (1/5) δ(x)
²⁶ Here we distinguish between the Euclidean norm |·|₂ on R^N and |·|₁; they satisfy |v|₂ ≤ |v|₁ ≤ √N |v|₂ for all v ∈ R^N. In the fourth line of this chain of inequalities |x′ − y′|₁ = ∑_{i=1}^{N−1} |x_i − y_i|.
= (1/5) inf_{y′ ∈ R^{N−1}} |(x′, x_N) − (y′, ψ(y′))|₂
≥ (1/(5√N)) inf_{y′ ∈ R^{N−1}} |(x′, x_N) − (y′, ψ(y′))|₁
= (1/(5√N)) inf_{y′ ∈ R^{N−1}} [ |x′ − y′|₁ + |x_N − ψ(y′)| ]
≥ (1/(5√N)) inf_{y′ ∈ R^{N−1}} [ |x′ − y′|₁ + min{1, L^{−1}} |(x_N − ψ(x′)) + (ψ(x′) − ψ(y′))| ]
≥ (1/(5√N)) inf_{y′ ∈ R^{N−1}} [ |x′ − y′|₂ + min{1, L^{−1}} |x_N − ψ(x′)| − min{1, L^{−1}} |ψ(x′) − ψ(y′)| ]
≥ (min{1, L^{−1}}/(5√N)) |x_N − ψ(x′)|
= (min{1, L^{−1}}/(5√N)) (ψ(x′) − x_N),

where in the last inequality we used min{1, L^{−1}} |ψ(x′) − ψ(y′)| ≤ |x′ − y′|₂.
This proves the claim. ◻
Define

T v(t) := t ∫_t^∞ v(s) s^{−2} ds.
Lemma 5.5 (Hardy's Inequality). Let p ∈ [1, ∞]. Then ‖T v‖_{L^p([0,∞))} ≤ (p/(p+1)) ‖v‖_{L^p([0,∞))}.
Proof:
The case p = ∞ results from

|T v(t)| ≤ |t| ‖v‖_∞ ∫_t^∞ s^{−2} ds = ‖v‖_∞.
So we may assume p ∈ [1, ∞) from now on. Also, it suffices to prove the estimate for nontrivial v ∈ C₀^∞(R^N) in view of Theorem 4.8. The idea is to use integration by parts. We have

‖T v‖^p_{L^p([0,∞))} = ∫_0^∞ t^p (∫_t^∞ v(s) s^{−2} ds)^p dt
= [ (t^{p+1}/(p+1)) (∫_t^∞ v(s) s^{−2} ds)^p ]_0^∞ − ∫_0^∞ (t^{p+1}/(p+1)) · p (∫_t^∞ v(s) s^{−2} ds)^{p−1} · (−v(t) t^{−2}) dt
= 0 + (p/(p+1)) ∫_0^∞ (t ∫_t^∞ v(s) s^{−2} ds)^{p−1} v(t) dt
≤ (p/(p+1)) ‖ (t ∫_t^∞ v(s) s^{−2} ds)^{p−1} ‖_{L^{p′}([0,∞))} ‖v‖_{L^p([0,∞))}
= (p/(p+1)) ‖ |T v|^{p−1} ‖_{L^{p′}([0,∞))} ‖v‖_{L^p([0,∞))}
= (p/(p+1)) ‖T v‖^{p−1}_{L^p([0,∞))} ‖v‖_{L^p([0,∞))}.

Dividing by ‖T v‖^{p−1}_{L^p([0,∞))} yields the result. ◻
Proof:
The proof is quite advanced: we focus on k = 1, 1 ≤ p < ∞ and do not provide all details, only the main ideas. Sorry!
The strategy is the following: We consider special Lipschitz domains (as in Proposition 5.4) first and prove the existence of an extension operator for those. This is the main intellectual challenge of the proof. Afterwards, we generalize this to a general bounded Lipschitz domain. Here one uses that the boundary of such a general Lipschitz domain is a finite union of special Lipschitz domains, for which we have already constructed an extension operator. So it remains to combine these finitely many extension operators to some extension operator for the whole domain Ω using a partition of unity, see Theorem 4.3.
It is again sufficient to prove the estimates for smooth functions u ∈ C₀^∞(R^N), see Theorem 4.11. We fix a function φ as in Proposition 5.3 and define d := d_{R^N ∖ Ω} to be the regularized distance function of the complement of Ω.
(Eu)(x) := u(x)   if x_N ≥ ψ(x′), i.e., for x ∈ Ω,
(Eu)(x) := ∫_1^∞ u(x′, x_N + 2c d(x) t) φ(t) dt   if x_N < ψ(x′), i.e., for x ∈ R^N ∖ Ω.
This operator obviously satisfies Eu|_Ω = u|_Ω. We need to show Eu ∈ W^{1,p}(R^N) and
Step 1(a): L^p-bound for Eu.
Choose A > 0 such that |φ(t)| ≤ A t^{−2} for all t ≥ 1. This is possible in view of Proposition 5.3 (iii). We have

|(Eu)(x)| ≤ A ∫_1^∞ |u(x′, x_N + 2c d(x) t)| t^{−2} dt   (x_N < ψ(x′)).
The change of coordinates 2c d(x) t = ψ(x′) − x_N + s gives for v(s) := u(x′, ψ(x′) + s)

|(Eu)(x′, x_N)| ≤ 2A c d(x) ∫_{x_N − ψ(x′) + 2c d(x)}^∞ |u(x′, ψ(x′) + s)| / (s + ψ(x′) − x_N)² ds
≤ 2AC (ψ(x′) − x_N) ∫_{ψ(x′) − x_N}^∞ |v(s)| / s² ds,

where the last step uses (5.4).
So Hardy's Inequality gives

∫_{−∞}^{ψ(x′)} |(Eu)(x′, x_N)|^p dx_N ≤ (2AC)^p ∫_{−∞}^{ψ(x′)} (ψ(x′) − x_N)^p (∫_{ψ(x′) − x_N}^∞ |v(s)|/s² ds)^p dx_N
= (2AC)^p ∫_0^∞ (t ∫_t^∞ |v(s)|/s² ds)^p dt
≤ (2AC)^p (p/(p+1))^p ∫_0^∞ |v(s)|^p ds
= (2AC)^p (p/(p+1))^p ∫_{ψ(x′)}^∞ |u(x′, x_N)|^p dx_N.

Hence

‖Eu‖^p_{L^p(R^N ∖ Ω)} = ∫_{R^{N−1}} (∫_{−∞}^{ψ(x′)} |(Eu)(x′, x_N)|^p dx_N) dx′
≤ (2AC)^p (p/(p+1))^p ∫_{R^{N−1}} ∫_{ψ(x′)}^∞ |u(x′, x_N)|^p dx_N dx′
= (2AC)^p (p/(p+1))^p ‖u‖^p_{L^p(Ω)}.

As a consequence,

‖Eu‖_{L^p(R^N)} ≤ ‖u‖_{L^p(Ω)} + ‖Eu‖_{L^p(R^N ∖ Ω)} ≤ (1 + 2AC (p/(p+1))) ‖u‖_{L^p(Ω)}.   (5.5)
= O(∣x − y∣2 ) as x → y, x ∈ Ω.
Here we used:
Differentiation under the integral sign gives for x ∈ R^N ∖ Ω, i.e., for x_N < ψ(x′),

∂_i(Eu)(x) = ∫_1^∞ (∂_i u(x′, x_N + 2c d(x) t) + 2c t ∂_i d(x) ∂_N u(x′, x_N + 2c d(x) t)) φ(t) dt.
The estimate

‖∫_1^∞ ∂_i u(x′, x_N + 2c d(x) t) φ(t) dt‖_{L^p(R^N ∖ Ω)} ≤ 2AC (p/(p+1)) ‖∂_i u‖_{L^p(Ω)} ≤ 2AC (p/(p+1)) ‖u‖_{W^{1,p}(Ω)}

follows as above: it suffices to replace u by ∂_i u. The second term is estimated similarly.
Choose B > 0 such that |φ(t)| ≤ B t^{−3} for all t ∈ [1, ∞). Then we find for x_N < ψ(x′)

|∫_1^∞ 2c t ∂_i d(x) ∂_N u(x′, x_N + 2c d(x) t) · φ(t) dt|
≤ 2c ∫_1^∞ |∂_i d(x)| |∂_N u(x′, x_N + 2c d(x) t)| |t φ(t)| dt
≤ 2cB ‖∂_i d‖_∞ ∫_1^∞ |∂_N u(x′, x_N + 2c d(x) t)| / t² dt.
Again, one obtains via Hardy's Inequality

‖∂_i(Eu)‖_{L^p(R^N)} ≤ (1 + N · 2(A + B)C (p/(p+1))) ‖u‖_{W^{1,p}(Ω)}.   (5.6)
us how to extend a given function across the boundary.) Next choose an open set U₀ ⊂ Ω such that Ω ⊂ ⋃_{j=0}^M U_j and define E₀ : W^{1,p}(Ω) → W^{1,p}(Ω), u ↦ u. As in Theorem 4.3 choose finitely many test functions (φ_i)_{i∈I} with supp(φ_i) ⊂ U_{j(i)} and ∑_{i∈I} φ_i(x) = 1 for all x ∈ Ω. Set
where χ ∈ C₀^∞(R^N) is an arbitrary function satisfying χ(x) = 1 on Ω and ∑_{i∈I} φ_i(x)² ≠ 0 on supp(χ).
Moreover, the triangle inequality, the product rule and Hölder's inequality give

≤ C ∑_{i∈I} ‖φ_i‖²_{W^{1,∞}(R^N)} C_i^* · ‖u‖_{W^{1,p}(Ω)}.
In the case k ≥ 2 the proof is even more complicated because the derivatives of order ≥ 2 of the regularized distance function are not bounded any more. For instance, in the case k = 2 one needs to bound terms of the form

∂_{ij}(Eu)(x) = ∂_{ij} d(x) ∫_1^∞ ∂_N u(x′, x_N + 2c d(x) t) · t φ(t) dt.

Since ∫_1^∞ t φ(t) dt = 0 by Proposition 5.3 (ii), one gets

|∂_{ij}(Eu)(x)| = |∂_{ij} d(x)| |∫_1^∞ (∂_N u(x′, x_N + 2c d(x) t) − ∂_N u(x′, x_N + 2c d(x))) · t φ(t) dt|
= |2c d(x) ∂_{ij} d(x)| |∫_1^∞ (∫_0^1 ∂²_N u(x′, x_N + 2c d(x)(1 + st)) ds) · t² φ(t) dt|
≤ 2c ‖d ∂_{ij} d‖_∞ ∫_0^1 (∫_1^∞ |∂²_N u(x′, x_N + 2c d(x)(1 + s)t)| / t² dt) ds.

The same techniques as above allow to bound this integral in terms of ‖∂²_N u‖_{L^p(Ω)} and hence in terms of ‖u‖_{W^{2,p}(Ω)}.
End Lec 07
Roughly speaking, assuming more and more weak differentiability (i.e., large enough k), the functions should become more and more regular, possibly ending up being bounded or even continuous. In Example 2.8 we have seen that the function u(x) = |x|^γ, γ < 0, lies in W^{1,p}(B) if and only if γ > 1 − N/p. Here, B was the unit ball centered at zero. This prototypical singularity leads to the following observation: For p > N there is the chance that elements of W^{1,p}(B) cannot be unbounded.
We shall prove related statements in this section. To be more precise, we seek the validity of continuous embeddings under suitable assumptions on p, q, Ω and some constant C > 0 that does not depend on u. Later on we will show how this affects the theory of our elliptic model boundary value problem.
We start with a trivial remark: If Ω ⊂ R^N is bounded, or more generally has finite measure, then we actually have L^p(Ω) ⊂ L^q(Ω) for all q ∈ [1, p]. This follows from

‖u‖_q = ‖u · 1‖_q ≤ ‖u‖_p ‖1‖_{pq/(p−q)} = ‖u‖_p |Ω|^{1/q − 1/p},

where |Ω|^{1/q − 1/p} < ∞.
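This Hölder argument is easy to test numerically. The following sketch (the sample function and the domain Ω = (0, 2) are our own choices) verifies ‖u‖_q ≤ ‖u‖_p |Ω|^{1/q − 1/p} for q ≤ p:

```python
import numpy as np

# Numerical check of ||u||_q <= ||u||_p * |Omega|^{1/q - 1/p} on Omega = (0, 2)
# for q <= p, using a bounded sample function u.
x = np.linspace(0.0, 2.0, 100_001)
dx = x[1] - x[0]
u = np.sin(3 * x) + 0.5                        # arbitrary bounded sample
measure = 2.0                                  # |Omega|

p, q = 4.0, 2.0                                # note q <= p
norm_p = (np.sum(np.abs(u)**p) * dx) ** (1 / p)
norm_q = (np.sum(np.abs(u)**q) * dx) ** (1 / q)
assert norm_q <= norm_p * measure ** (1/q - 1/p) * 1.001
```

Since the discrete sums themselves satisfy Hölder's inequality, the assertion holds up to a tiny discretization tolerance.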
So we see that on bounded domains lower integrability is for free. In typical unbounded domains (R^N, half-spaces, strips, cylinders, . . . ) this is not the case, as can be seen from examples of the form x ↦ (1 + |x|)^{−α} for α > 0. This has two consequences: First, we will not investigate such embeddings for q < p; they are practically irrelevant. Second, the final result for bounded domains will be slightly different from the general case.
We turn our attention towards higher integrability, where the theory for bounded Lipschitz domains²⁹ and unbounded domains is essentially the same. As mentioned above, the question is how much integrability / continuity a generic function u ∈ W^{k,p}(Ω) admits. We will reduce our analysis to R^N by means of an extension operator that we constructed in the last section. We start with necessary conditions that are quite easy to obtain. They fit very well to the model singularity x ↦ |x|^γ discussed earlier.
Proposition 6.1. Let N, k ∈ N and p, q ∈ [1, ∞]. If a continuous embedding W^{k,p}(R^N) ↪ L^q(R^N) exists, then necessarily

0 ≤ 1/p − 1/q ≤ k/N.   (6.1)
Proof:
We assume that the embedding is continuous and take any nontrivial³⁰ u ∈ C₀^∞(R^N). For λ > 0 set u_λ(x) := u(λx); then ∂^α u_λ(x) = λ^{|α|} (∂^α u)(λx). So we have

‖u_λ‖^p_{W^{k,p}(R^N)} = ∑_{|α|≤k} ‖∂^α u_λ‖^p_{L^p(R^N)} = ∑_{|α|≤k} λ^{p|α|} ‖(∂^α u)(λ·)‖^p_{L^p(R^N)} = ∑_{|α|≤k} λ^{p|α|−N} ‖∂^α u‖^p_{L^p(R^N)},

and the continuity of the embedding gives

λ^{−N/q} ‖u‖_{L^q(R^N)} ≤ C (∑_{|α|≤k} λ^{p|α|−N})^{1/p} ‖u‖_{W^{k,p}(R^N)} ≤ C′ (λ^{−N} + λ^{pk−N})^{1/p} ‖u‖_{W^{k,p}(R^N)}.
²⁹ Wild domains with cusps, fractal structure etc. may cause problems. For Lipschitz domains these exotic phenomena do not occur due to the presence of an extension operator.
³⁰ Actually one may take any nontrivial function u ∈ W^{k,p}(R^N) to carry through the arguments.
This implies (consider λ → ∞ resp. λ → 0)

−N/q ≤ k − N/p   and   −N/q ≥ −N/p,

which is equivalent to (6.1). ◻
For 1 ≤ p < N/k the conditions (6.1) mean p ≤ q ≤ Np/(N − kp). For p ≥ N/k all q ≥ p are allowed.
So we come to one of the most important theorems in the theory of Sobolev spaces: Sobolev's Inequality. For 1 < p < N it is due to Sobolev himself [22]. Gagliardo [8] extended the result to p = 1 and large classes of bounded domains. At almost the same time, Nirenberg [17] proved a more general version on R^N including the case p = 1 as well (knowing that the result holds on sufficiently nice domains Ω ⊂ R^N). To prove Sobolev's Inequality, we will use the generalized Hölder inequality
‖ |v₁|^{1/(N−1)} · . . . · |v_{N−1}|^{1/(N−1)} ‖_{L¹(R)} ≤ ‖v₁‖^{1/(N−1)}_{L¹(R)} · . . . · ‖v_{N−1}‖^{1/(N−1)}_{L¹(R)},   (6.2)

see Exercise 1 on Exercise Sheet 1. Moreover we will need the inequality of arithmetic and geometric means

∏_{i=1}^N a_i^{1/N} ≤ (1/N) ∑_{i=1}^N a_i   (a₁, . . . , a_N ≥ 0).   (6.3)
‖u‖_{L^{Np/(N−p)}(R^N)} ≤ (p(N − 1))/(√N (N − p)) ‖∇u‖_{L^p(R^N)}.
Proof:
By Lemma 4.12 and Exercise 4 of Exercise Sheet 1 it suffices to prove this inequality for u ∈ C₀^∞(R^N). The crucial step is to prove the claim for p = 1, which we shall do first. For u ∈ C₀^∞(R^N) and j ∈ {1, . . . , N} we have

|u(x)| = |∫_{−∞}^{x_j} ∂_j u(x₁, . . . , x_{j−1}, t, x_{j+1}, . . . , x_N) dt| ≤ ∫_R |∂_j u(x₁, . . . , x_{j−1}, t, x_{j+1}, . . . , x_N)| dt.
This implies

|u(x)|^{N/(N−1)} ≤ ∏_{j=1}^N (∫_R |∂_j u(x₁, . . . , x_{j−1}, t, x_{j+1}, . . . , x_N)| dt)^{1/(N−1)}.
Hence,

∫_R |u(x)|^{N/(N−1)} dx₁ ≤ ∫_R ∏_{i=1}^N (∫_R |∂_i u(x)| dx_i)^{1/(N−1)} dx₁
= (∫_R |∂₁ u(x)| dx₁)^{1/(N−1)} ∫_R ∏_{i=2}^N (∫_R |∂_i u(x)| dx_i)^{1/(N−1)} dx₁
≤ (∫_R |∂₁ u(x)| dx₁)^{1/(N−1)} ∏_{i=2}^N (∫_{R²} |∂_i u(x)| dx₁ dx_i)^{1/(N−1)},

where the last step uses (6.2).
Similarly,

∫_{R²} |u(x)|^{N/(N−1)} dx₁ dx₂
≤ ∫_R [ (∫_R |∂₁ u(x)| dx₁)^{1/(N−1)} ∏_{i=2}^N (∫_{R²} |∂_i u(x)| dx₁ dx_i)^{1/(N−1)} ] dx₂
= (∫_{R²} |∂₂ u(x)| dx₁ dx₂)^{1/(N−1)} · ∫_R [ (∫_R |∂₁ u(x)| dx₁)^{1/(N−1)} ∏_{i=3}^N (∫_{R²} |∂_i u(x)| dx₁ dx_i)^{1/(N−1)} ] dx₂
≤ ∏_{i=1}^2 (∫_{R²} |∂_i u(x)| dx₁ dx₂)^{1/(N−1)} ∏_{i=3}^N (∫_{R³} |∂_i u(x)| dx₁ dx₂ dx_i)^{1/(N−1)},

using (6.2) in the last step. Iterating this procedure over the remaining variables, where the last product is to be understood as 1 for k = N, we finally use (6.3) and get
‖u‖_{N/(N−1)} = (∫_{R^N} |u(x)|^{N/(N−1)} dx)^{(N−1)/N} ≤ ∏_{i=1}^N (∫_{R^N} |∂_i u(x)| dx)^{1/N}
≤ (1/N) ∑_{i=1}^N ∫_{R^N} |∂_i u(x)| dx   (by (6.3))
≤ (1/√N) ∫_{R^N} |∇u(x)| dx.

This proves the claim for p = 1.
For 1 < p < N and nontrivial u ∈ C₀^∞(R^N) consider v := |u|^{N(p−1)/(N−p)} u. Then

v ∈ C₀¹(R^N)   and   ∇v = (p(N − 1))/(N − p) |u|^{N(p−1)/(N−p)} ∇u.

So the result above implies

‖v‖_{N/(N−1)} ≤ (1/√N) ‖∇v‖₁
= (p(N − 1))/(√N (N − p)) ‖ |u|^{N(p−1)/(N−p)} |∇u| ‖₁
≤ (p(N − 1))/(√N (N − p)) ‖ |u|^{N(p−1)/(N−p)} ‖_{p′} ‖ |∇u| ‖_p   (Hölder)
= (p(N − 1))/(√N (N − p)) ‖u‖^{N(p−1)/(N−p)}_{Np/(N−p)} ‖∇u‖_p.

Since ‖v‖_{N/(N−1)} = ‖u‖^{p(N−1)/(N−p)}_{Np/(N−p)}, dividing by ‖u‖^{N(p−1)/(N−p)}_{Np/(N−p)} yields the claim. ◻
Remark 6.3.
(a) The best constant in the Sobolev Inequality is known; it is given by

C_S(p) := π^{−1/2} N^{−1/p} ((p − 1)/(N − p))^{1−1/p} ( Γ(N) Γ(1 + N/2) / (Γ(N/p) Γ(1 + N − N/p)) )^{1/N}.

In 1976, Talenti [24] proved that for 1 < p < N this value is attained precisely for functions u(x) = (a + b|x − x₀|^{p′})^{1−N/p} where a, b > 0, x₀ ∈ R^N. In the case p = 1 we have

C_S(1) = lim_{p↘1} C_S(p) = Γ(1 + N/2)^{1/N} / (√π N)

and a maximizing sequence for the inequality can be chosen to consist of functions converging to the indicator function of a ball in a suitable sense. Federer, Fleming [5] and Rishel [7, Theorem II] proved that the inequality for p = 1 is related to the so-called isoperimetric inequality

|E|^{(N−1)/N} ≤ C_S(1) area(∂E)
Theorem 6.4 (Sobolev's Embedding Theorem). Assume N ∈ N, N ≥ 2 and 1 ≤ p < N and p* := Np/(N − p). Then there is a continuous embedding W^{1,p}(R^N) ↪ L^q(R^N) precisely for p ≤ q ≤ p*, i.e., for 0 ≤ 1/p − 1/q ≤ 1/N.
Proof:
The Sobolev Inequality shows ‖u‖_{p*} ≤ C_S(p) ‖∇u‖_p ≤ C_S(p) ‖u‖_{1,p}. Moreover, ‖u‖_p ≤ ‖u‖_{1,p}. For any given q ∈ [p, p*] we find θ ∈ [0, 1] such that 1/q = θ/p + (1 − θ)/p*. Then Hölder's inequality gives ‖u‖_q ≤ ‖u‖_p^θ ‖u‖_{p*}^{1−θ}, which proves the embedding for all p ≤ q ≤ p*.
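The interpolation step via Hölder's inequality can be tested numerically; the following sketch uses a sample function of our own choosing (the discrete sums satisfy the same interpolation inequality, so the check is robust):

```python
import numpy as np

# Numerical check of ||u||_q <= ||u||_p^θ ||u||_{p*}^{1-θ}
# when 1/q = θ/p + (1-θ)/p*, for a sample function u.
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
u = 1.0 / (1.0 + x**2)

p, pstar = 2.0, 6.0
theta = 0.5
q = 1.0 / (theta / p + (1 - theta) / pstar)     # here q = 3
norm = lambda r: (np.sum(np.abs(u)**r) * dx) ** (1 / r)
assert abs(q - 3.0) < 1e-9
assert norm(q) <= norm(p)**theta * norm(pstar)**(1 - theta) * 1.001
```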
Proof:
Theorem 5.6 provides an extension operator E : W^{1,p}(Ω) → W^{1,p}(R^N). We thus obtain

‖u‖_{L^{p*}(Ω)} ≤ ‖Eu‖_{L^{p*}(R^N)} ≤ C_S(p) ‖Eu‖_{W^{1,p}(R^N)} ≤ C_S(p) ‖E‖ ‖u‖_{W^{1,p}(Ω)}.

Hence, for 1 ≤ q ≤ p*,

‖u‖_{L^q(Ω)} ≤ ‖u‖_{L^{p*}(Ω)} ‖1‖_{L^r(Ω)} ≤ C_S(p) ‖E‖ ‖u‖_{W^{1,p}(Ω)} |Ω|^{1/q − 1/p*},   where 1/r = 1/q − 1/p*.
The Sobolev Inequality is false for p = N and the question is whether an embedding W^{1,N}(Ω) ↪ L^∞(Ω) holds. The answer is no. In the Exercises we shall prove the following.
Corollary 6.7. Assume N ∈ N, N ≥ 2. For any bounded Lipschitz domain Ω ⊂ R^N there is a continuous embedding W^{1,N}(Ω) ↪ L^q(Ω) precisely for 1 ≤ q < ∞.
There is a sharper statement about the limiting case p = N, which is particularly important in two spatial dimensions. The fundamental result in this direction is the Moser-Trudinger Inequality [16, 25] which essentially says that any function u ∈ W₀^{1,N}(Ω) satisfies

e^{α |u|^{N/(N−1)}} ∈ L¹(Ω)
Example 6.8. We give an indirect proof of the fact that Hölder domains do not admit extension operators as in Stein's Extension Theorem. To keep the technicalities at a moderate level, we concentrate on the case N = 2. We want to show that for 1 ≤ p < N = 2 we have

W^{1,p}(Ω) ↪̸ L^{p*}(Ω),   p* = Np/(N − p) = 2p/(2 − p),

where the non-Lipschitz domain is given by

Ω = {(x, y) ∈ R² : |x|^γ < y < 1, 0 < |x| < 1},   0 < γ < 1.
To see this define

u(x, y) := y^{−α}   where α := (γ + 1)/(γ p*).
One checks:

∫_Ω |∇u(x, y)|^p + |u(x, y)|^p d(x, y) ≤ ∫_0^1 ∫_{|x|^γ}^1 (|α|^p + 1) y^{−(α+1)p} dy dx
≤ (|α|^p + 1) ∫_0^1 (1/(1 − (α + 1)p)) (1 − x^{γ(1−(α+1)p)}) dx
≤ ((|α|^p + 1)/(p(α + 1) − 1)) ∫_0^1 x^{γ(1−(α+1)p)} dx
< ∞,
On the other hand,

∫_Ω |u(x, y)|^{p*} d(x, y) = ∫_0^1 ∫_{|x|^γ}^1 y^{−α p*} dy dx
= ∫_0^1 (1/(1 − α p*)) (1 − x^{γ(1−α p*)}) dx
= γ ∫_0^1 (x^{−1} − 1) dx
= ∞,

since γ(1 − α p*) = −1.
We infer

W^{1,p}(Ω) ↪̸ L^{p*}(Ω).

This implies that Ω does not admit a bounded extension operator W^{1,p}(Ω) → W^{1,p}(R^N).
6.1 Applications
Corollary 6.9. Let Ω ⊂ R^N, N ≥ 3, be a bounded Lipschitz domain and assume f ∈ L^{2N/(N+2)}(Ω), c ∈ L^{N/2}(Ω) with c(x) ≥ μ > 0 almost everywhere. Then (3.3) has a unique weak solution u ∈ H₀¹(Ω) that satisfies
Proof:
We recall that we want to solve a(u, v) = l(v) for all v ∈ H. The boundedness of a follows from

|a(u, v)| ≤ ‖ |∇u| ‖₂ ‖ |∇v| ‖₂ + ‖c‖_{N/2} ‖u‖_{2N/(N−2)} ‖v‖_{2N/(N−2)}
The proof of coercivity uses c(x) ≥ μ > 0 and works as in the proof of Corollary 3.4. Finally, l is a bounded linear functional. What do we gain here? Due to L^∞(Ω) ⊂ L^{N/2}(Ω) and L²(Ω) ⊂ L^{2N/(N+2)}(Ω), our assumptions on the coefficients are less restrictive than before. For instance, the function c may be unbounded from above (not below, though!), which was not allowed before.
Question: What would be the corresponding result for N = 2? How should an improvement on unbounded domains look like?
In the last section we have shown that W^{1,p}(Ω) ↪ L^q(Ω) for 1 ≤ p < N and p ≤ q ≤ p* = Np/(N − p). In the case p = N one obtains the same result for p ≤ q < ∞, but not for q = ∞. Now we want to show that in the case p > N actually more is true: W^{1,p}(Ω) ↪ C^{0,α}(Ω) where α := 1 − N/p ∈ (0, 1).
At first sight it seems odd to prove such a result for elements of Sobolev spaces, which are only defined up to a set of measure zero. Actually, elements of Sobolev spaces are, just as elements of Lebesgue spaces, equivalence classes of functions that coincide almost everywhere. Since continuous functions may become discontinuous (and vice versa) after modification on a null set, it does not make sense to claim that any function u ∈ W^{1,p}(Ω) should be automatically Hölder-continuous. We rather claim that the equivalence class u ∈ W^{1,p}(Ω) contains a Hölder-continuous function. In other words, we will prove that for any given u ∈ W^{1,p}(Ω) there is ũ ∈ C^{0,α}(Ω) such that u = ũ almost everywhere.
[u]_{C^{0,α}(Ω)} := sup_{x,y∈Ω, x≠y} |u(x) − u(y)| / |x − y|^α,
‖u‖_{C^{m,α}(Ω)} := ‖u‖_{C^m(Ω)} + ∑_{α∈N₀^N, |α|=m} [∂^α u]_{C^{0,α}(Ω)}.
We use the following result without proof (which is not too difficult). To prove embeddings into Hölder spaces, we focus on the model situation Ω = R^N. Using an extension operator, this turns out to be sufficient. In the following result we denote by ω_N := |B₁(0)| the volume of the unit ball in R^N.
Theorem 7.2 (Morrey). Let N < p < ∞, u ∈ W^{1,p}(R^N) and α := 1 − N/p ∈ (0, 1). Then we have

|u(x)| ≤ ((2p − N)/(p − N)) ω_N^{−1/p} ‖u‖_{W^{1,p}(R^N)},
|u(x) − u(y)| / |x − y|^α ≤ (4p/(p − N)) ω_N^{−1/p} ‖∇u‖_{L^p(R^N)}.
Proof:
We first prove these inequalities for u ∈ C₀^∞(R^N). So let x, y ∈ R^N be arbitrary, define their midpoint m := (x + y)/2 and set ρ := |x − y|/2 = |x − m| = |y − m|. Then we have |B_ρ| = ω_N ρ^N and thus

ω_N ρ^N |u(x) − u(y)| = ∫_{B_ρ(m)} |u(x) − u(y)| dz
≤ 2ρ ‖∇u‖_{L^p(R^N)} ∫_0^1 t^{−N} · 2 (ω_N (ρt)^N)^{(p−1)/p} dt
= 4ρ (ω_N ρ^N)^{(p−1)/p} ‖∇u‖_{L^p(R^N)} ∫_0^1 t^{−N/p} dt
≤ ω_N ρ^N |x − y|^{1−N/p} · (4p/(p − N)) ω_N^{−1/p} ‖∇u‖_{L^p(R^N)}.

In the last line we used 2ρ = |x − y|. The second estimate is proved similarly:
ω_N |u(x)| ≤ ∫_{B₁(x)} |u(x) − u(y)| dy + ∫_{B₁(x)} |u(y)| dy
≤ ∫_{B₁(x)} (∫_0^1 |∇u(x + t(y − x))| |x − y| dt) dy + ‖u‖_{L^p(R^N)} ω_N^{1/p′}
≤ ∫_0^1 (∫_{B_t(x)} |∇u(y)| dy) t^{−N} dt + ‖u‖_{L^p(R^N)} ω_N^{1/p′}
≤ ∫_0^1 ‖∇u‖_{L^p(B_t(x))} · |B_t(x)|^{1/p′} t^{−N} dt + ‖u‖_{L^p(R^N)} ω_N^{1/p′}
≤ ω_N^{1/p′} ‖∇u‖_{L^p(R^N)} ∫_0^1 t^{−N/p} dt + ‖u‖_{L^p(R^N)} ω_N^{1/p′}
≤ ω_N · ω_N^{−1/p} ( (p/(p − N)) ‖∇u‖_{L^p(R^N)} + ‖u‖_{L^p(R^N)} )
≤ ω_N · ω_N^{−1/p} ((2p − N)/(p − N)) ‖u‖_{W^{1,p}(R^N)}.
This proves the inequalities for test functions u. To treat the general case consider a sequence (u_n) of test functions with u_n → u in W^{1,p}(R^N) and u_n → u almost everywhere. Then the above estimates show that (u_n) is a Cauchy sequence in C^{0,α}(R^N). Since this space is complete, (u_n) converges in C^{0,α}(R^N) to a function which coincides with u almost everywhere, and the estimates carry over to u. ◻
Corollary 7.3. Let N ∈ N and N < p < ∞. Then there is a continuous embedding W^{1,p}(R^N) ↪ C^{0,α}(R^N) where α = 1 − N/p. For bounded Lipschitz domains Ω ⊂ R^N we have W^{1,p}(Ω) ↪ C^{0,α}(Ω).
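In one dimension the Morrey-type bound reduces to |u(x) − u(y)| ≤ |x − y|^{1−1/p} ‖u′‖_{L^p}, which follows directly from u(x) − u(y) = ∫_y^x u′ and Hölder. A numerical sketch (sample function and points are our own choice):

```python
import numpy as np

# 1D Morrey-type bound: |u(x) - u(y)| <= |x - y|^{1 - 1/p} ||u'||_{L^p(R)}.
t = np.linspace(-10.0, 10.0, 400_001)
dt = t[1] - t[0]
u = np.tanh(t)
du = 1.0 / np.cosh(t)**2                  # u'

p = 3.0
alpha = 1 - 1/p                           # Hölder exponent for N = 1
norm_du = (np.sum(du**p) * dt) ** (1 / p)

x, y = 1.5, -0.25
ux, uy = np.tanh(x), np.tanh(y)           # exact point values
assert abs(ux - uy) <= abs(x - y)**alpha * norm_du * 1.001
```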
Remark 7.4.
(a) We already saw: The result is not true for p = N; W^{1,N}-functions need not even be bounded.
(b) The embedding W^{1,p}(Ω) ↪ C^{0,α}(Ω) is true for p = ∞, so W^{1,∞}-functions coincide with a Lipschitz-continuous function almost everywhere. We did not include this to avoid technicalities. Notice that the reasoning by density in our proof from above does not work, at least not directly, for such functions. The interested reader may have a look at Rademacher's Theorem.
(c) In 1951 Calderón [3] proved the following. If u ∈ W^{1,p}(Ω) with N < p ≤ ∞, then u coincides almost everywhere with some differentiable function. The derivatives of the latter coincide with the corresponding weak derivatives of u almost everywhere. The argument is based on Lebesgue's differentiation theorem.
End Lec 09
We start by recapitulating the important continuous embeddings of first order Sobolev spaces W^{1,p}(R^N) with 1 ≤ p < ∞. In the past lectures we have proved the following:
(i) (Theorem 6.4) If 1 ≤ p < N: W^{1,p}(R^N) ↪ L^q(R^N) for p ≤ q ≤ Np/(N − p).
(ii) (Theorem 6.6) If p = N: W^{1,p}(R^N) ↪ L^q(R^N) for p ≤ q < ∞.
(iii) (Corollary 7.3) If p > N: W^{1,p}(R^N) ↪ C^{0,α}(R^N) for α = 1 − N/p.
These results show to which extent functions belonging to W^{1,p}(R^N) are better than ordinary L^p(R^N)-functions; the existence of a weak gradient in L^p(R^N; R^N) regularizes the function. Singularities become milder (1 ≤ p < N or p = N) or even become impossible (p > N).
What is the consequence for higher order Sobolev spaces? Assume 1 ≤ p < N. For any u ∈ W^{2,p}(R^N) we have ∂₁u, . . . , ∂_N u ∈ W^{1,p}(R^N) with weak derivatives ∂_j(∂_i u) = ∂_{ij} u ∈ L^p(R^N). Accordingly, we may apply the embeddings for first order Sobolev spaces to get ∂₁u, . . . , ∂_N u ∈ L^q(R^N) for q as in (i). This implies u ∈ W^{1,q}(R^N), hence

W^{2,p}(R^N) ↪ W^{1,q}(R^N)   for p ≤ q ≤ Np/(N − p).

Iterating the first order embeddings we obtain:

(i) If 1 ≤ p < N/2: W^{2,p}(R^N) ↪ W^{1,q}(R^N) ↪ L^r(R^N) for p ≤ q ≤ Np/(N − p), q < N, q ≤ r ≤ Nq/(N − q).
(ii) If p = N/2: W^{2,p}(R^N) ↪ L^r(R^N) for p ≤ r < ∞.

For p > N/2 we get
(iii) If N/2 < p < N: W^{2,p}(R^N) ↪ W^{1, Np/(N−p)}(R^N) ↪ C^{0,α}(R^N) for α = 2 − N/p.
(iv) If p = N: W^{2,p}(R^N) ↪ ⋂_{p≤q<∞} W^{1,q}(R^N) ↪ ⋂_{0<α<1} C^{0,α}(R^N).
(v) If p > N: W^{2,p}(R^N) ↪ C^{1,α}(R^N) for α = 1 − N/p (because u, ∂₁u, . . . , ∂_N u ∈ C^{0,α}(R^N) implies u ∈ C^{1,α}(R^N)).
For k-th order Sobolev spaces one obtains:

(A) If 1 ≤ p < N/k: W^{k,p}(R^N) ↪ L^r(R^N) for p ≤ r ≤ Np/(N − kp).
(B) If p = N/k: W^{k,p}(R^N) ↪ L^r(R^N) for p ≤ r < ∞.
(C) If p > N/k and N/p ∉ N: W^{k,p}(R^N) ↪ C^{l,α}(R^N) for l = k − ⌊N/p⌋ − 1, α := 1 + ⌊N/p⌋ − N/p.

The corresponding embeddings on bounded Lipschitz domains are the same up to replacing p ≤ r by 1 ≤ r in (A) and (B).
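The case distinction (A)-(C) is easy to mechanize. The helper below (function name and return format are our own; case (C) assumes N/p ∉ N, as stated above) computes the embedding target with exact rational arithmetic:

```python
import math
from fractions import Fraction

# Compute the embedding target of W^{k,p}(R^N) following cases (A)-(C) above.
# Case (C) assumes N/p is not an integer.
def sobolev_target(N: int, k: int, p: Fraction):
    if p < Fraction(N, k):                  # (A): L^r for p <= r <= Np/(N - kp)
        return ("Lebesgue", N * p / (N - k * p))
    if p == Fraction(N, k):                 # (B): L^r for all finite r >= p
        return ("Lebesgue", None)
    l = k - math.floor(N / p) - 1           # (C): target space C^{l,alpha}
    alpha = 1 + math.floor(N / p) - N / p
    return ("Hoelder", l, alpha)

assert sobolev_target(3, 1, Fraction(2)) == ("Lebesgue", Fraction(6))     # p* = 6
assert sobolev_target(2, 1, Fraction(4)) == ("Hoelder", 0, Fraction(1, 2))
assert sobolev_target(3, 2, Fraction(2)) == ("Hoelder", 0, Fraction(1, 2))
```

The third assertion reproduces case (iii) above: for N = 3, p = 2 > N/2 one gets W^{2,2}(R³) ↪ C^{0,1/2}(R³), i.e. α = 2 − N/p = 1/2.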
An excursion: The Calculus of Variations was invented to prove the existence of minimizers of a given energy functional. In physical applications this can be

I : H₀¹(Ω) → R,   u ↦ (1/2) ∫_Ω |∇u|² dx − ∫_Ω f u dx

where, for simplicity, f ∈ L²(Ω). One can show that such functionals indeed have unique minimizers u that are (again unique) weak solutions of the corresponding boundary value problem. Now consider

J : H₀¹(Ω) → R,   u ↦ (1/2) ∫_Ω |∇u|² dx + (1/q) ∫_Ω |u|^q dx − ∫_Ω f u dx
For which q > 1 is this well-defined? Sobolev's Embedding Theorem shows that this functional is well-defined (even continuously differentiable) provided that 1 < q ≤ 2N/(N − 2). Having proved the existence of a unique minimizer, which one can do with abstract methods³⁵, one has found a weak solution to the corresponding nonlinear boundary value problem. For

J : W₀^{1,p}(Ω) → R,   u ↦ (1/p) ∫_Ω |∇u|^p dx + (1/q) ∫_Ω |u|^q dx − ∫_Ω f u dx

one has a good theory available for p ≤ q ≤ Np/(N − p). The conclusion is that Sobolev's Embedding Theorem allows us to treat nonlinear problems with the methods of the Calculus of Variations.
End Lec 10
³⁵ The Direct Method of the Calculus of Variations, where lower semi-continuity and compact embeddings are heavily exploited. This is the topic of a course on nonlinear boundary value problems.
9 Compact Embeddings: The Rellich-Kondrachov Theorem
and beyond
We will see in the Exercises that these assumptions are natural in the sense that typically the embeddings are not compact for unbounded Ω or Sobolev-critical exponents. Compactness is of utmost importance for the whole theory of analysis, notably elliptic boundary value problems and the calculus of variations. We will later provide some more information about this. We start with the definition of compactness.
The operator A : l²(N) → l²(N), (c_n)_{n∈N} ↦ (a_n c_n)_{n∈N} is compact if and only if (a_n)_{n∈N} is a null sequence.
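The "if" direction can be made concrete: when (a_n) is a null sequence, A is the operator-norm limit of its finite-rank truncations A_m (keep the first m entries, set the rest to zero), and ‖A − A_m‖_op = sup_{n>m} |a_n|. A small numerical sketch (the concrete sequence a_n = 1/n is our own choice):

```python
import numpy as np

# The multiplication operator A(c) = (a_n c_n) on l^2 with a_n -> 0 is the
# operator-norm limit of finite-rank truncations; ||A - A_m||_op = sup_{n>m}|a_n|.
a = 1.0 / np.arange(1, 10_001)            # null sequence a_n = 1/n

def truncation_error(m):
    """Operator norm of A - A_m: the sup of |a_n| over the discarded tail."""
    return np.max(np.abs(a[m:]))

assert truncation_error(10) == a[10]      # sup of a decreasing tail is its first entry
assert truncation_error(5000) < 1e-3      # the error tends to 0 since a_n -> 0
```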
9.1 Compact Embeddings into Hölder spaces
The starting point of compactness investigations is the Ascoli-Arzelà Theorem, due to Ascoli (1884) resp. Arzelà (1894). It relies on the notion of equicontinuity.
Theorem 9.3 (Ascoli (1884), Arzelà (1894)). Let K ⊂ R^N be bounded and closed and let (f_n)_{n∈N} ⊂ C(K) be a pointwise bounded and equicontinuous sequence. Then (f_n)_{n∈N} has a uniformly convergent subsequence.
Proof:
We choose x₁, x₂, . . . such that Q ∩ K = {x_i : i ∈ N}. Then (f_n(x₁))_{n∈N} is (by assumption) a bounded sequence of real numbers. So there is an injective map ψ₁ : N → N such that the subsequence (f_{ψ₁(n)}(x₁))_{n∈N} converges. Next, (f_{ψ₁(n)}(x₂))_{n∈N} is a bounded sequence of real numbers and we find another injective map ψ₂ : N → N such that the subsequence (f_{ψ₁(ψ₂(n))}(x₂))_{n∈N} converges. Since it is a subsequence of the previous subsequence, we get that both (f_{ψ₁(ψ₂(n))}(x₁))_{n∈N}, (f_{ψ₁(ψ₂(n))}(x₂))_{n∈N} converge. Inductively, one finds injective maps ψ₁, ψ₂, . . . , ψ_k : N → N such that the sequences (f_{Ψ_k(n)}(x_i))_{n∈N} converge for i = 1, . . . , k where Ψ_k(n) := ψ₁(ψ₂(. . . (ψ_k(n)))). Define the diagonal sequence g_n(x) := f_{Ψ_n(n)}(x). Then (g_n)_{n∈N} is a subsequence of (f_n)_{n∈N} and we want to show that it is a Cauchy sequence in C(K).
Let ε > 0. Since {x_i : i ∈ N} is dense in the compact set K, there is M ∈ N with

K ⊂ ⋃_{i=1}^M B_δ(x_i)   for δ = δ_{ε/3} as above.

Moreover, there is n₀ ∈ N such that

|g_n(x_i) − g_m(x_i)| ≤ ε/3   for i = 1, . . . , M and n, m ≥ n₀.
Then we have for all x ∈ K and n, m ≥ n₀

|g_n(x) − g_m(x)| ≤ min_{i=1,...,M} [ |g_n(x) − g_n(x_i)| + |g_n(x_i) − g_m(x_i)| + |g_m(x_i) − g_m(x)| ]
≤ ε/3 + min_{i=1,...,M} [ |g_n(x) − g_n(x_i)| + |g_m(x_i) − g_m(x)| ]
≤ ε

by choosing x_i (dependent on x) such that |x − x_i| < δ_{ε/3}, which is possible as we saw above. So (g_n)_{n∈N} is a Cauchy sequence in the Banach space (C(K), ‖·‖_{C(K)}) and thus converges uniformly to some continuous function. ◻
Theorem 9.4. Let Ω ⊂ R^N be bounded and 1 > α > β > 0. Then the embedding C^{0,α}(Ω̄) ↪ C^{0,β}(Ω̄) is compact.³⁶
Proof:
Let (f_n)_{n∈N} be bounded in C^{0,α}(Ω̄) with M := sup_{n∈N} ‖f_n‖_{0,α} < ∞. We have seen above that (f_n)_{n∈N} is then equicontinuous. Moreover, this sequence is pointwise bounded due to ‖f_n‖_{C(Ω̄)} ≤ ‖f_n‖_{0,α} ≤ M for all n ∈ N. So the Ascoli-Arzelà Theorem provides a uniformly convergent subsequence (f_{n_j})_{j∈N} with limit f ∈ C(Ω̄). We want to show f_{n_j} → f in C^{0,β}(Ω̄). So let ε > 0 and choose j₀ ∈ N such that

‖f − f_j‖_{C(Ω̄)} < min{ ε/2, (ε/4) (ε/(4M))^{β/(α−β)} }   for all j ≥ j₀.

Then we get for x ≠ y ∈ Ω

|x − y| < (ε/(4M))^{1/(α−β)}   ⇒   |(f − f_j)(x) − (f − f_j)(y)| / |x − y|^β ≤ 2M |x − y|^{α−β} < ε/2   by (9.1),
|x − y| ≥ (ε/(4M))^{1/(α−β)}   ⇒   |(f − f_j)(x) − (f − f_j)(y)| / |x − y|^β ≤ 2 ‖f − f_j‖_{C(Ω̄)} / |x − y|^β < ε/2   by (9.2).
This implies

‖f − f_j‖_{0,β} = ‖f − f_j‖_{C(Ω̄)} + [f − f_j]_{0,β} < ε/2 + ε/2 = ε,

which is all we had to show. ◻
With a little bit of technical work, one may prove the more general statement that C^{k₁,α₁}(Ω̄) ↪ C^{k₂,α₂}(Ω̄) is compact provided that k₁, k₂ ∈ N₀ and α₁, α₂ ∈ (0, 1) satisfy k₁ + α₁ > k₂ + α₂.

³⁶ We haven't even proved the weaker statement C^{0,α}(Ω̄) ⊂ C^{0,β}(Ω̄) yet. It is a consequence of this result.
Corollary 9.5. Let Ω ⊂ R^N be a bounded Lipschitz domain and assume N < p < ∞. Then the embedding W^{1,p}(Ω) ↪ C^{0,β}(Ω) is compact provided that 0 < β < 1 − N/p. In particular,
Proof:
Let (u_n)_{n∈N} be bounded in W^{1,p}(Ω). Then Morrey's Embedding Theorem shows that (u_n)_{n∈N} is bounded in C^{0,α}(Ω) for α := 1 − N/p > 0. So Theorem 9.4 implies that (u_n)_{n∈N} has a subsequence that converges in C^{0,β}(Ω). This proves the first claim. In view of

‖u_n − u‖_{L^q(Ω)} ≤ ‖u_n − u‖_{L^∞(Ω)} |Ω|^{1/q} ≤ ‖u_n − u‖_{C^{0,β}(Ω)} |Ω|^{1/q}
Notice that this has an abstract generalization: The composition of a bounded linear operator and a compact linear operator is a compact linear operator.
End Lec 11
We now prove the (more important) statement that Sobolev's Embedding W^{1,p}(Ω) ↪ L^q(Ω) is compact for 1 ≤ q < p*. This is the Rellich-Kondrachov Theorem: Rellich [19] proved it in the special case p = q = 2 and Kondrachov [11] extended this result to general p, q. A modern proof may be based on a characterization of precompact subsets of L^p(Ω) respectively L^p(R^N). We use without proof the following characterization of such sets.
Proposition 9.6. Let (X, ∥ ⋅ ∥X ) be a Banach space and V ⊂ X a subset. Then the
following statements are equivalent:
Proposition 9.7. Let f ∈ Lp (RN ). Then f (⋅ + h) → f in Lp (RN ) as ∣h∣ → 0.
Proof:
Choose ε > 0 and g ∈ C₀^∞(R^N) such that ‖f − g‖_p < ε/3, see Theorem 4.8. Since g is uniformly continuous with compact support, the Dominated Convergence Theorem yields a δ > 0 such that |h| ≤ δ implies ‖g(· + h) − g‖_p < ε/3, whence ‖f(· + h) − f‖_p ≤ ‖f(· + h) − g(· + h)‖_p + ‖g(· + h) − g‖_p + ‖g − f‖_p < ε. ◻

For f ∈ W^{1,p}(R^N) there is even a quantitative version:

‖f(· + h) − f‖_p ≤ |h| ‖∇f‖_p.
Proof:
By density (Lemma 4.12), it suffices to prove this inequality for f ∈ C₀^∞(R^N). We use

|f(x + h) − f(x)| = |∫_0^1 ∇f(x + th) · h dt| ≤ |h| ∫_0^1 |∇f(x + th)| dt ≤ |h| (∫_0^1 |∇f(x + th)|^p dt)^{1/p}.

Integrating with respect to x and using Fubini's Theorem we get

‖f(· + h) − f‖_p ≤ |h| (∫_{R^N} ∫_0^1 |∇f(x + th)|^p dt dx)^{1/p} = |h| (∫_0^1 ‖∇f‖_p^p dt)^{1/p} = |h| ‖∇f‖_p. ◻
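The quantitative translation estimate is also easy to check numerically. A sketch for N = 1 with a Gaussian sample function (our own choice; the shifted function is evaluated exactly rather than interpolated):

```python
import numpy as np

# Numerical check (N = 1) of ||f(.+h) - f||_p <= |h| ||f'||_p for smooth f.
x = np.linspace(-20.0, 20.0, 400_001)
dx = x[1] - x[0]
f = np.exp(-x**2)
df = -2 * x * np.exp(-x**2)               # f'

p, h = 2.0, 0.1
f_shift = np.exp(-(x + h)**2)             # f(. + h), evaluated exactly
norm = lambda g: (np.sum(np.abs(g)**p) * dx) ** (1 / p)
assert norm(f_shift - f) <= abs(h) * norm(df) * 1.001
```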
The historical background of the following result is beautifully described in the survey
paper [10].
Theorem 9.9 (Fréchet (1908), Kolmogorov (1931), M. Riesz (1933)). Assume 1 ≤ p < ∞ and N ∈ N. Then a family F is precompact in L^p(R^N) if and only if the following conditions hold:
Proof:
We first show that a precompact family F satisfies (i),(ii),(iii). So let ε > 0. Then there are g₁, . . . , g_m ∈ L^p(R^N) such that

F ⊂ ⋃_{i=1}^m B_{ε/3}(g_i).   (9.3)

By Proposition 9.7 there is δ_ε > 0 such that

max_{i=1,...,m} ‖g_i(· + h) − g_i‖_p < ε/3   for |h| ≤ δ_ε.

For any given f ∈ F we may then choose (according to (9.3)) i ∈ {1, . . . , m} such that ‖f − g_i‖_p < ε/3. Hence, (ii) results from

‖f(· + h) − f‖_p ≤ ‖f − g_i‖_p + ‖g_i(· + h) − g_i‖_p + ‖g_i(· + h) − f(· + h)‖_p < ε   for |h| ≤ δ_ε.

To prove (iii) let us choose φ₁, . . . , φ_m ∈ C₀^∞(R^N) such that ‖g_i − φ_i‖_p < 2ε/3, see Theorem 4.8, and set K_ε := ⋃_{i=1}^m supp(φ_i). Choosing g_i as above for any given f ∈ F we get
Now assume that F ⊂ L^p(R^N) satisfies (i),(ii),(iii), and let ε > 0 be arbitrary. The strategy is to approximate the family by a family of continuous functions to which we can apply the Ascoli-Arzelà Theorem. Proposition 4.7 and (ii) show that a nonnegative ρ ∈ C₀^∞(R^N) with ‖ρ‖₁ = 1 and sufficiently small support may be chosen in such a way that the following holds for all(!) f ∈ F:
‖ρ ∗ f − f‖_p = (∫_{R^N} |∫_{R^N} ρ(x − y) f(y) dy − f(x)|^p dx)^{1/p}
≤ (∫_{R^N} |∫_{R^N} ρ(h)^{1/p′} · ρ(h)^{1/p} (f(x − h) − f(x)) dh|^p dx)^{1/p}
≤ (∫_{R^N} ‖ρ^{1/p′}‖^p_{p′} ‖ρ^{1/p} (f(x − ·) − f(x))‖^p_p dx)^{1/p}
≤ ‖ρ‖₁^{1/p′} (∫_{R^N} ρ(h) (∫_{R^N} |f(x − h) − f(x)|^p dx) dh)^{1/p}   (Fubini)   (9.4)
≤ 1 · sup_{h∈supp(ρ)} ‖f(· + h) − f‖_p · (∫_{R^N} ρ(h) dh)^{1/p}
≤ sup_{h∈supp(ρ)} ‖f(· + h) − f‖_p < ε/3   for all f ∈ F.
Having chosen ρ in such a way we consider the family of continuous(!) functions G := {(ρ ∗ f)|_K : f ∈ F}. We have

sup_{x,y∈K, |x−y|<δ} |(ρ ∗ f)(x) − (ρ ∗ f)(y)| = sup_{x,y∈K, |x−y|<δ} |∫_{R^N} ρ(x − z) f(z) dz − ∫_{R^N} ρ(y − z) f(z) dz|
Extending the functions g_i trivially to R^N, we obtain for all f ∈ F

min_{i=1,...,m} ‖f − g_i‖_{L^p(R^N)} ≤ min_{i=1,...,m} ( ‖f‖_{L^p(R^N ∖ K)} + ‖f − g_i‖_{L^p(K)} )
≤ ε/3 + min_{i=1,...,m} ( ‖f − ρ ∗ f‖_{L^p(K)} + ‖(ρ ∗ f)|_K − g_i‖_{L^p(K)} )
≤ ε/3 + ε/3 + ε̃ · ‖1‖_{L^p(K)} = ε,

where we used (ρ ∗ f)|_K ∈ G. Hence

F ⊂ ⋃_{i=1}^m {h ∈ L^p(R^N) : ‖h − g_i‖_{L^p(R^N)} < ε},
Beweis:
The first and main step is to prove the compactness of W^{1,p}(Ω) ↪ L^p(Ω). We have to show that every bounded sequence (u_n)_{n∈N} in W^{1,p}(Ω) has a subsequence that converges in L^p(Ω). To this end we show that F := {f_n : n ∈ N} with f_n := χ ⋅ E(u_n) is precompact, where E: W^{1,p}(Ω) → W^{1,p}(R^N) denotes Stein's Extension operator and χ ∈ C_0^∞(R^N) satisfies χ(x) = 1 for x ∈ Ω; set K := supp(χ).
(i) We have
≤ C∣h∣.
So Theorem 9.9 shows that F is precompact in Lp (RN ) and hence fn → f in Lp (RN )
after passing to a subsequence. In particular, for u ∶= f 1Ω ,
To treat general exponents 1 ≤ q < p* we use Lyapunov's Inequality and choose θ ∈ (0, 1) according to 1/q = θ/1 + (1 − θ)/p*. Then
We now comment on why the compactness is important. The short answer is that it
shows that the solution theory of elliptic boundary value problems can be reduced to the
solution theory for linear problems of the form
where f ∈ L^2(Ω) and K: L^2(Ω) → L^2(Ω) is compact. The solution theory for such equations is well-known; it is governed by Fredholm's Alternative. Without the compactness assumption this theory breaks down.
38: It maps f ∈ L^2(Ω) to the unique H_0^1(Ω)-solution of −∆u + u = f. It is therefore a kind of inverse (a right inverse) of the differential operator −∆ + 1 in a weak sense.
10 Poincaré's Inequality and Applications
In this section we want to remove some insufficiency in our analysis of the boundary
value problem
−∆u + c(x)u = f (x) in Ω, u ∈ H01 (Ω).
The very important case c≡0 was not covered by this discussion because our approach
required c(x) to be positive so that the bilinear form
is bounded and coercive on H_0^1(Ω). Given these properties, we deduced the existence of solutions to this boundary value problem from the Lax-Milgram Theorem resp. Riesz' Representation Theorem. On the other hand, it is known from Classical PDE Theory, notably Perron's method, that problems of the form −∆u = f are equally well-behaved, at least for continuous right hand sides and bounded domains Ω ⊂ R^N. So the question is immediate whether the weak solution approach using Sobolev spaces may be refined to cover this case as well. This is the topic we want to discuss here.
A first observation is that an estimate of the form

(∫_Ω ∣∇u∣^2 dx)^{1/2} ≥ α∥u∥_{H^1(Ω)}    (u ∈ H^1(Ω))

cannot hold for any positive α. In fact, nontrivial constant functions belong to H^1(Ω) and give zero on the left and something positive on the right, a contradiction! Here we used that Ω is a bounded domain. But it turns out that the estimate

(∫_Ω ∣∇u∣^2 dx)^{1/2} ≥ α∥u∥_{H^1(Ω)}    (u ∈ H_0^1(Ω))

does hold for some α > 0.
Such an inequality was used in the proof of Sobolev's Embedding Theorem on R^N where H^1(R^N) = H_0^1(R^N). Using the Mean Value Theorem we expressed u(x) in terms of its derivatives only and estimated the integrals using Hölder's Inequality. This worked because all elements of the dense subspace C_0^∞(R^N) vanish at infinity. We will see below that the same sort of idea works in a general bounded domain Ω as long as u vanishes somewhere in Ω.
Theorem 10.1 (Poincaré (1890), Friedrichs (1928)). Assume Ω ⊂ {x ∈ R^N : a < x ⋅ v < b} for some v ∈ R^N, ∣v∣ = 1. Then we have for 1 ≤ p < ∞ and u ∈ W_0^{1,p}(Ω)

∥u∥_p ≤ (p/2)(b − a)∥∂_v u∥_p.
Proof. Since C_0^∞(Ω) is dense in W_0^{1,p}(Ω) (by definition), it suffices to prove this estimate for u ∈ C_0^∞(Ω). Choose x_0 ∈ R^N such that x_0 ⋅ v = (a + b)/2, so

∣(x − x_0) ⋅ v∣ ≤ (b − a)/2 for all x ∈ Ω.
In the case p > 1 we use the following identity:
Hence,
Corollary 10.2. Assume that Ω ⊂ R^N is a bounded domain and 1 ≤ p < ∞. Then ∥⋅∥_{W_0^{1,p}(Ω)} is a norm on W_0^{1,p}(Ω) that is equivalent to ∥⋅∥_{W^{1,p}(Ω)}.

39: This identity holds in the weak sense. It is not immediate, but rather follows by approximation of t ↦ ∣t∣^p by smooth versions such as t ↦ (t^2 + ε^2)^{p/2} − ε^p.
Proof:
We only show

β∥u∥_{W^{1,p}(Ω)} ≥ ∥u∥_{W_0^{1,p}(Ω)} ≥ α∥u∥_{W^{1,p}(Ω)}    (u ∈ W_0^{1,p}(Ω)).

For the second inequality, Theorem 10.1 gives

∥u∥_{W^{1,p}(Ω)}^p = ∥u∥_{L^p(Ω)}^p + ∥∣∇u∣∥_{L^p(Ω)}^p ≤ ∣(p/2)(b − a)∣^p ∥∂_v u∥_{L^p(Ω)}^p + ∥∣∇u∣∥_{L^p(Ω)}^p ≤ (∣(p/2)(b − a)∣^p + 1) ∥∣∇u∣∥_{L^p(Ω)}^p.

We conclude that one possible choice is

α := (((p/2)(b − a))^p + 1)^{−1/p}.
◻
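The estimate of Theorem 10.1 is easy to test numerically. The sketch below (the test function sin(πx) and the grid are my own choices, not from the lecture) verifies ∥u∥_p ≤ (p/2)(b − a)∥∂_v u∥_p on Ω = (0, 1), where v = 1 and b − a = 1:

```python
import numpy as np

# Numerical sanity check of Theorem 10.1 on Omega = (0, 1):
# ||u||_p <= (p/2)(b - a) ||u'||_p for u in W_0^{1,p}(0,1),
# tried on u(x) = sin(pi x), which vanishes at both endpoints.
x = np.linspace(0, 1, 100001)
dx = x[1] - x[0]
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)  # exact derivative

checks = []
for p in (1.0, 2.0, 3.0):
    lhs = (np.sum(np.abs(u) ** p) * dx) ** (1 / p)
    rhs = (p / 2) * 1.0 * (np.sum(np.abs(du) ** p) * dx) ** (1 / p)
    checks.append(lhs <= rhs)
print(checks)
```

For p = 2 the left side is 1/√2 while the right side is π/√2, so the inequality holds with room to spare.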
Attention: ∥⋅∥_{W_0^{1,p}(Ω)} is not a norm on W^{1,p}(Ω), only on W_0^{1,p}(Ω). Given that the norms are equivalent on W_0^{1,p}(Ω), we know that this space is a Banach space when equipped with this new norm. The quantity

C_P(Ω, p) := sup_{u ∈ W_0^{1,p}(Ω) ∖ {0}} ∥u∥_{L^p(Ω)} / ∥∣∇u∣∥_{L^p(Ω)}

is called the Poincaré constant of Ω.
Let's draw the consequences for our boundary value problem (3.3), which was given by
Corollary 10.3. Let Ω ⊂ R^N, N ≥ 3, be a bounded Lipschitz domain and assume f ∈ L^{2N/(N+2)}(Ω), c ∈ L^{N/2}(Ω) with c ≥ 0 almost everywhere in Ω. Then (3.3) has a unique weak solution.
Proof:
We recall that we want to solve a(u, v) = l(v) for all v ∈ H01 (Ω) where
The boundedness of a follows from
The improvement is that our earlier version required c(x) ≥ µ > 0 for some µ > 0. Even this can be further improved to c(x) > −λ_1(Ω) a.e., where λ_1(Ω) denotes the smallest positive eigenvalue of the Dirichlet-Laplacian. Notice that Corollary 10.3 estimates the solution in the norm ∥⋅∥_{H_0^1(Ω)}, which is different from the H^1(Ω)-norm that we used earlier.
End Lec 13
40: If helpful (and c ∈ L^∞(Ω)), one may as well use

∫_Ω ∣c(x)∣∣u(x)∣∣v(x)∣ dx ≤ ∥c∥_∞ ∥u∥_{L^2(Ω)} ∥v∥_{L^2(Ω)} ≤ C_P(Ω, 2)^2 ∥c∥_∞ ∥u∥_{H_0^1(Ω)} ∥v∥_{H_0^1(Ω)}.
10.2 Some limitations and generalizations
Our first remark is concerned with the failure of Poincaré's Inequality in sufficiently thick domains. We saw that it holds if a given domain has finite length in one direction, so the following result may be seen as a kind of converse to that statement.
Theorem 10.4. Let Ω ⊂ R^N be open such that there is a sequence (x_n) ⊂ Ω and (r_n) ⊂ R_+ such that B_{r_n}(x_n) ⊂ Ω and r_n → ∞. Then Poincaré's Inequality cannot hold on W_0^{1,p}(Ω) for any given p ∈ [1, ∞).
Proof:
Choose a nontrivial test function χ ∈ C_0^∞(B_1(0)) and define u_n(x) := χ((x − x_n)/r_n). Then a change of variables gives ∥u_n∥_p = r_n^{N/p} ∥χ∥_p and ∥∣∇u_n∣∥_p = r_n^{N/p − 1} ∥∣∇χ∣∥_p, so that

∥u_n∥_p / ∥∣∇u_n∣∥_p = r_n ∥χ∥_p / ∥∣∇χ∣∥_p → ∞ as n → ∞. ◻
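The scaling argument of this proof can be reproduced numerically. In the 1D sketch below (the concrete bump function and the parameters are my own choices), the Poincaré quotient ∥u_n∥_p / ∥∣∇u_n∣∥_p grows proportionally to r_n:

```python
import numpy as np

# Scaling computation from the proof, in 1D (N = 1, p = 2):
# with u_r(x) = chi(x / r) for the standard bump chi on (-1, 1),
# the quotient ||u_r||_2 / ||u_r'||_2 is proportional to r,
# so no Poincare inequality can hold on a domain containing
# arbitrarily large balls.
def ratio(r, m=200001):
    x = np.linspace(-r + 1e-9, r - 1e-9, m)
    dx = x[1] - x[0]
    u = np.exp(-1.0 / (1.0 - (x / r) ** 2))  # scaled bump
    du = np.gradient(u, dx)                  # numerical derivative
    return np.sqrt(np.sum(u ** 2) * dx) / np.sqrt(np.sum(du ** 2) * dx)

ratios = [ratio(r) for r in (1.0, 4.0, 16.0)]
print(ratios)  # grows linearly in r
```

Since the grids are exact rescalings of each other, the computed quotient quadruples each time r does.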
Next we turn towards more abstract versions of Poincaré's Inequality that will imply
Wirtinger's Inequality. We start with the following version.
Theorem 10.5. Let Ω ⊂ R^N be a bounded Lipschitz domain and let V ⊂ W^{1,p}(Ω) be a closed subspace such that the only constant function in V is the trivial one. Then, for each p ∈ (1, ∞), there is C > 0 such that

∥u∥_{L^p(Ω)} ≤ C∥∇u∥_{L^p(Ω)} for all u ∈ V.
Proof:
We argue by contradiction and assume that there is a bounded sequence (un ) ⊂ V such
that
∥un ∥Lp (Ω) → 1 and ∥∇un ∥Lp (Ω) → 0.
Then (u_n) is bounded in W^{1,p}(Ω) and the Rellich-Kondrachov Theorem provides a subsequence (u_{n_k}) that converges in L^p(Ω) to some u ∈ L^p(Ω) with ∥u∥_{L^p(Ω)} = 1. For every ϕ ∈ C_0^∞(Ω), ∥∇u_{n_k}∥_{L^p(Ω)} → 0 implies
0 = lim_{k→∞} ∫_Ω ∂_j u_{n_k} ϕ dx = −lim_{k→∞} ∫_Ω u_{n_k} ∂_j ϕ dx = −∫_Ω u ∂_j ϕ dx,
so the weak gradient of u is identically zero on Ω. From the Exercises we conclude that
u ∈ V must be constant. Our assumption on V then implies u = 0, which contradicts
∥u∥_{L^p(Ω)} = 1. We thus conclude that our assumption was false, i.e., there is some Poincaré Inequality in V, which is all we had to show. ◻
We point out that the same argument yields Poincaré Inequalities of the form ∥u∥_{L^q(Ω)} ≤ C∥∇u∥_{L^p(Ω)} for general exponents q ∈ [1, p*). The classical Poincaré Inequality corresponds to the choice V = W_0^{1,p}(Ω). Another important inequality, called Wirtinger's Inequality, arises from the choice V = {u ∈ W^{1,p}(Ω) : ∫_Ω u dx = 0}. This subset is indeed closed because u_n → u in W^{1,p}(Ω) with u_n ∈ V implies ∥u_n − u∥_{L^1(Ω)} → 0 and in particular

∫_Ω u dx = lim_{n→∞} ∫_Ω u_n dx = 0.

Here: ⨍_Ω u dx := (1/∣Ω∣) ∫_Ω u dx.
Proof:
For all u ∈ W^{1,p}(Ω) the function v := u − ⨍_Ω u dx satisfies v ∈ V := {w ∈ W^{1,p}(Ω) : ∫_Ω w dx = 0} and ∇v = ∇u. Hence, denoting by C the Poincaré constant of the subspace V given by Theorem 10.5, we get

∥u − ⨍_Ω u dx∥_{L^p(Ω)} = ∥v∥_{L^p(Ω)} ≤ C∥∇v∥_{L^p(Ω)} = C∥∇u∥_{L^p(Ω)} for all u ∈ W^{1,p}(Ω).
Note that we may replace ⨍_Ω u dx by ⨍_A u dx for any A ⊂ Ω of positive measure. We now present some nice application of the optimal Wirtinger Inequality in one spatial dimension. The latter reads

∫_0^{2π} (f(s) − ⨍_0^{2π} f(t) dt)^2 ds ≤ ∫_0^{2π} f′(s)^2 ds    (f ∈ H^1(0, 2π))    (10.2)
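Wirtinger's inequality (10.2) can be checked numerically; the sketch below (the test functions are my own choices) confirms the strict inequality for the second mode f(s) = sin(2s) and near-equality for the first mode f(s) = sin(s + s_0):

```python
import numpy as np

# Numerical check of Wirtinger's inequality (10.2) on (0, 2*pi):
# int (f - mean)^2 <= int (f')^2 for 2*pi-periodic f,
# with equality exactly for the first Fourier mode.
s = np.linspace(0, 2 * np.pi, 100001)
ds = s[1] - s[0]

def sides(f, df):
    # left Riemann sums over one full period (drop duplicated endpoint)
    mean = np.sum(f[:-1]) * ds / (2 * np.pi)
    lhs = np.sum(((f - mean) ** 2)[:-1]) * ds
    rhs = np.sum((df ** 2)[:-1]) * ds
    return lhs, rhs

lhs1, rhs1 = sides(np.sin(2 * s), 2 * np.cos(2 * s))   # strict: pi vs 4*pi
lhs2, rhs2 = sides(np.sin(s + 0.7), np.cos(s + 0.7))   # equality case
print(lhs1, rhs1, lhs2, rhs2)
```

The values illustrate that the inequality degrades by a factor n² for the n-th mode, which is exactly the Fourier-series proof of (10.2).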
We follow [12]. The setting is as follows. Let γ(t) := (X(t), Y(t)) for t ∈ [0, 2π] where X, Y: R → R are smooth 2π-periodic functions that are parametrized by arclength. In particular, its length is 2π. From the Divergence Theorem we know that the area of the encircled 2D-domain Ω is given by
∣Ω∣ = (1/2) ∫_Ω div(x, y) d(x, y)
= (1/2) ∫_{∂Ω} (x, y) ⋅ ν(x, y) dσ(x, y)
= (1/2) ∫_0^{2π} (X(s), Y(s)) ⋅ (Y′(s), −X′(s))/∣(Y′(s), −X′(s))∣ ⋅ ∣(X′(s), Y′(s))∣ ds
= (1/2) ∫_0^{2π} (X(s), Y(s)) ⋅ (Y′(s), −X′(s)) ds
= (1/2) ∫_0^{2π} X(s)Y′(s) − X′(s)Y(s) ds
= ∫_0^{2π} X(s)Y′(s) ds.

Here we used ν(X(s), Y(s)) = (Y′(s), −X′(s))/∣(Y′(s), −X′(s))∣ and ∣(X′(s), Y′(s))∣ = 1 by the arclength parametrization.
Here the last equality follows from integration by parts and the fact that (X, Y) are 2π-periodic so that the boundary terms vanish. We want to show that the largest area comes from a circular circumference, thus proving the isoperimetric inequality in a special case.
Setting X̄ := ⨍_0^{2π} X(s) ds we get

∣Ω∣ = ∫_0^{2π} X(s)Y′(s) ds
= ∫_0^{2π} (X(s) − X̄)Y′(s) ds
≤ (1/2) ∫_0^{2π} ((X(s) − X̄)^2 + Y′(s)^2) ds
≤ (1/2) ∫_0^{2π} (X′(s)^2 + Y′(s)^2) ds    (by (10.2))
= (1/2) ∫_0^{2π} 1 ds
= π.
Let us discuss the equality case. Since −ab = (1/2)(a^2 + b^2) if and only if a + b = 0, we obtain
42: We rather assume that Ω is such that the Divergence Theorem applies. This can be ensured if γ does not intersect itself, i.e. γ(t) = γ(s) for s, t ∈ R if and only if s − t ∈ 2πZ.
Similarly, starting from the formula A = −∫_0^{2π} X′(s)Y(s) ds instead of A = ∫_0^{2π} X(s)Y′(s) ds, we infer
Y (s) − Ȳ − X ′ (s) = 0 for almost all s ∈ [0, 2π].
We thus obtain that X̃ := X − X̄, Ỹ := Y − Ȳ satisfy

X̃′′(s) = Ỹ′(s) = −X̃(s), Ỹ′′(s) = −X̃′(s) = −Ỹ(s) for almost all s ∈ [0, 2π].

Hence

(X(s), Y(s)) = (X̄, Ȳ) + (X̃(s), Ỹ(s))
= (X̄, Ȳ) + (α sin(s) + β cos(s), α cos(s) − β sin(s))
= (X̄, Ȳ) + √(α^2 + β^2) (sin(s + s_0), cos(s + s_0))

where cos(s_0) = α/√(α^2 + β^2), sin(s_0) = β/√(α^2 + β^2). Since the curve is parametrized by arclength, we must have α^2 + β^2 = 1, so

(X(s), Y(s)) = (X̄, Ȳ) + (sin(s + s_0), cos(s + s_0))    (s_0 ∈ R).
So equality can only hold if (X, Y ) describes the unit circle, and it does! The conclusion
is that among all curves with length 2π the circle has the largest area, which is π.
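The area formula and the extremality of the circle can be confirmed numerically. The sketch below (grid resolution is my own choice) evaluates ∣Ω∣ = (1/2)∫_0^{2π}(XY′ − X′Y) ds for the counterclockwise arclength parametrization of the unit circle, with the orientation chosen so that the formula returns a positive area:

```python
import numpy as np

# Area of the region enclosed by the arclength-parametrized unit circle
# gamma(s) = (cos s, sin s), via |Omega| = (1/2) int (X Y' - X' Y) ds.
# The maximal area pi among curves of length 2*pi is attained here.
s = np.linspace(0, 2 * np.pi, 200001)
ds = s[1] - s[0]
X, Y = np.cos(s), np.sin(s)
Xp, Yp = -np.sin(s), np.cos(s)
speed = np.sqrt(Xp ** 2 + Yp ** 2)  # identically 1: arclength parametrization
area = 0.5 * np.sum((X * Yp - Xp * Y)[:-1]) * ds
print(area)
```

The integrand is identically 1 here, so the Riemann sum reproduces π essentially exactly.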
End Lec 14
11 Trace Theorem
In this section we are going to talk about traces of functions belonging to Sobolev spaces W^{1,p}(Ω) where 1 ≤ p < ∞ and Ω ⊂ R^N is a bounded Lipschitz domain. As before, the discussion extends to higher order Sobolev spaces W^{k,p}(Ω) (as in Section ??) using ∂_1 u, ..., ∂_N u ∈ W^{k−1,p}(Ω) for all u ∈ W^{k,p}(Ω). A trace is nothing but a reasonable generalization of the restriction of a given function to a given subset Γ ⊂ Ω̄ having lower dimension. For the discussion of elliptic boundary value problems the particular choice Γ = ∂Ω is the most important one. For continuous functions u ∈ C(Ω̄) the definition of u∣_{∂Ω} is evident. Accordingly, for p > N and u ∈ W^{1,p}(Ω) ⊂ C^{0,α}(Ω̄), α = 1 − N/p, there is no problem either because the trace of u is simply defined as the restriction of the C^{0,α}(Ω̄)-representative of u to the boundary. But this is not enough, because in the
43: In other words, denoting the trace operator by γ, γu = ũ∣_{∂Ω} where ũ ∈ C^{0,α}(Ω̄) is uniquely determined by ũ = u almost everywhere.
most important case p = 2 this restricts all considerations to the case N = 1. In particular, no PDE theory can be built on that. So the task is to find a meaning of a trace for functions u ∈ W^{1,p}(Ω) where 1 ≤ p ≤ N.
Definition 11.1. A bounded linear operator γ: W^{1,p}(Ω) → L^q(∂Ω) is called a trace operator if γu = u∣_{∂Ω} for all u ∈ C(Ω̄) ∩ W^{1,p}(Ω). The function γu is called the trace of u onto ∂Ω.
Our aim is to show that bounded Lipschitz domains admit such trace operators. In the
proof we will need the following technical fact taken from [9, Lemma 1.5.1.9].
Proposition 11.2. Let Ω ⊂ R^N be a bounded Lipschitz domain with unit outer normal vector field ν: ∂Ω → R^N and surface measure σ. Then there is a smooth vector field F ∈ C^∞(Ω̄; R^N) such that F(x) ⋅ ν(x) ≥ 1 for σ-almost all x ∈ ∂Ω and there is t* > 0 such that x + tF(x) ∈ Ω̄^c for 0 < t < t* and x + tF(x) ∈ Ω for −t* < t < 0, for all x ∈ ∂Ω.
The idea of the proof is to define F locally, i.e., within sufficiently small but finitely many neighbourhoods of boundary pieces, as a mollified (smoothened) version of the unit normal vector field ν: ∂Ω → R^N itself. If I have time: see the Appendix for a proof. It is particularly important that F is defined and smooth on Ω̄ and not only on ∂Ω.
Theorem 11.3. Let Ω ⊂ R^N be a bounded Lipschitz domain and 1 ≤ p < N. Then there is a trace operator γ: W^{1,p}(Ω) → L^q(∂Ω) for 1 ≤ q ≤ (N−1)p/(N−p). It is compact provided that 1 ≤ q < (N−1)p/(N−p).
Proof:
In view of Theorem 4.11 we first prove that

∥γu∥_{L^q(∂Ω)} = ∥u∥_{L^q(∂Ω)} ≤ C∥u∥_{W^{1,p}(Ω)} for all u ∈ C_0^∞(R^N).

This turns out to be the main step of the proof. We focus on q > 1; the general case then follows from the above inequality for all q > 1 and the Dominated Convergence Theorem as q ↘ 1. The Divergence Theorem and finally Sobolev's Embedding Theorem give
∫_{∂Ω} ∣u∣^q dσ ≤ ∫_{∂Ω} ∣u∣^q F ⋅ ν dσ = ∫_Ω div(∣u∣^q F) dx ≤ C∥F∥_{C^1(Ω̄;R^N)} ∫_Ω (q∣u∣^{q−1}∣∇u∣ + ∣u∣^q) dx
This allows to define the trace operator γ: W^{1,p}(Ω) → L^q(∂Ω) by density:

γu := lim_{∥ũ−u∥_{W^{1,p}(Ω)} → 0, ũ ∈ C_0^∞(R^N)} ũ∣_{∂Ω},
The compactness follows from the estimate that we have proved above. Indeed, if (u_n)_{n∈N} is a bounded sequence in W^{1,p}(Ω) then the Rellich-Kondrachov Theorem provides a subsequence, again denoted by (u_n)_{n∈N} for simplicity, that converges in L^{(q−1)p/(p−1)}(Ω) and in L^q(Ω). Here we used (q−1)p/(p−1) < Np/(N−p), which follows from q < (N−1)p/(N−p). Hence,
So (γ(un )) is a Cauchy sequence and thus converges. This proves the compactness. ◻
Formally, this result does not include the case p = N. As in the case of the Sobolev Embedding Theorem, one may instead use the results for p < N to deal with this case because of W^{1,N}(Ω) ⊂ W^{1,p}(Ω) for all p ∈ [1, N), which is a consequence of the usual embeddings of Lebesgue spaces on bounded domains. So we see that the trace operator γ can be defined for all Sobolev spaces W^{1,p}(Ω) and hence the boundary values of any such function make sense in the sense of an L^q(∂Ω)-function. We are going to show that u ∈ W^{1,p}(Ω), γu = 0 is equivalent to u ∈ W_0^{1,p}(Ω), so prescribing zero boundary data by requiring the trace to be zero resp. requiring u ∈ W_0^{1,p}(Ω) is equivalent. One may rephrase this as ker(γ) = W_0^{1,p}(Ω). To this end we first generalize the integration-by-parts rule.
Proposition 11.4 (Integration by parts). Let Ω ⊂ R^N be a bounded Lipschitz domain and u ∈ W^{1,p}(Ω), v ∈ W^{1,q}(Ω) for 1 ≤ p, q < N such that 1/p + 1/q ≤ (N+1)/N. Then, for all j = 1, ..., N,

∫_Ω ∂_j u v dx = ∫_{∂Ω} γ(u)γ(v)ν_j dσ − ∫_Ω u ∂_j v dx.
Proof:
By Theorem 4.11 we choose u_n, v_n ∈ C_0^∞(R^N) such that u_n → u in W^{1,p}(Ω) and v_n → v in W^{1,q}(Ω). The classical integration-by-parts rule gives

∫_Ω ∂_j u_n v_n dx = ∫_{∂Ω} u_n v_n ν_j dσ − ∫_Ω u_n ∂_j v_n dx.
By Theorem 11.3,

γ(u_n) → γ(u) in L^{(N−1)p/(N−p)}(∂Ω), γ(v_n) → γ(v) in L^{(N−1)q/(N−q)}(∂Ω).

Moreover,

∂_j u_n → ∂_j u in L^p(Ω), u_n → u in L^{Np/(N−p)}(Ω),
∂_j v_n → ∂_j v in L^q(Ω), v_n → v in L^{Nq/(N−q)}(Ω).

Next define r ∈ [1, ∞] via 1/r := (N/(N−1))(1/p + 1/q − 2/N), which is possible in view of 1/p + 1/q ≤ (N+1)/N and 1/p + 1/q > 2/N. Then
both of which tend to 0 as n → ∞. Moreover,

∣∫_Ω ∂_j u_n v_n dx − ∫_Ω ∂_j u v dx∣ ≤ C(∥u_n − u∥_{W^{1,p}(Ω)} ∥v_n∥_{W^{1,q}(Ω)} + ∥v_n − v∥_{W^{1,q}(Ω)} ∥u∥_{W^{1,p}(Ω)}) → 0 (n → ∞).
We thus conclude

∫_Ω ∂_j u v dx = lim_{n→∞} ∫_Ω ∂_j u_n v_n dx = ∫_{∂Ω} γ(u)γ(v)ν_j dσ − ∫_Ω u ∂_j v dx. ◻
End Lec 15
Proof:
Let v ∈ L^p(∂Ω) and ε > 0. Then choose w ∈ C(∂Ω) such that ∥v − w∥_{L^p(∂Ω)} < ε. This can be justified as in Proposition 4.6, replacing the Lebesgue measure by the surface measure σ. (It satisfies a regularity property analogous to the one from Lemma 4.5.) To approximate w we use Tietze's Extension Theorem, see Theorem 13.6 in the appendix. It provides a continuous function W ∈ C(R^N) such that W∣_{∂Ω} = w. Multiplying this function with a cutoff function having compact support which is identically 1 on ∂Ω, we may even w.l.o.g. assume that W has compact support. Finally, define V_ε := ρ_ε ∗ W. Then V_ε ∈ C^∞(R^N) ⊂ C^∞(Ω̄) and γ(V_ε) = V_ε∣_{∂Ω}, γ(W) = W∣_{∂Ω} = w imply

∥γ(V_ε) − w∥_{L^p(∂Ω)} → 0 as ε → 0. ◻
Lemma 11.6. Let Ω ⊂ R^N be a bounded Lipschitz domain, 1 ≤ p < ∞ and u ∈ W^{1,p}(Ω). Then the following statements are equivalent:

(i) u ∈ W_0^{1,p}(Ω).

(ii) γu = 0.

(iii) The trivial extension U of u belongs to W^{1,p}(R^N) with ∂_j U = ∂_j u ⋅ 1_Ω for j = 1, ..., N.

(iv) There is C > 0 such that ∣∫_Ω u ∂_j ϕ dx∣ ≤ C∥ϕ∥_{L^{p′}(Ω)} for all ϕ ∈ C_0^∞(R^N).
Proof:
(i) ⇒ (ii): That's trivial: If (u_n) ⊂ C_0^∞(Ω) satisfies u_n → u in W^{1,p}(Ω), then γ(u_n) = u_n∣_{∂Ω} = 0 and the Trace Theorem (for q := p) imply γ(u_n) → γu in L^p(∂Ω), hence γu = 0.
This shows that U has a j-th weak derivative on R^N given by ∂_j U = ∂_j u ⋅ 1_Ω. In particular, U ∈ W^{1,p}(R^N).
uε ⋅ 1Ω ∈ C0∞ (Ω).
∥uε ⋅ 1Ω − u∥W 1,p (Ω) ≤ ∥uε − U ∥W 1,p (Ω) ≤ ∥uε − Uε ∥W 1,p (RN ) + ∥Uε − U ∥W 1,p (Ω) → 0 as ε → 0.
This proves u ∈ W01,p (Ω).
= ∣∫_{R^N} (∂_j U)ϕ dx∣ ≤ ∫_Ω ∣∂_j u∣∣ϕ∣ dx ≤ ∥∂_j u∥_{L^p(Ω)} ∥ϕ∥_{L^{p′}(Ω)}.
(iv) ⇒ (ii): Integration by parts (Proposition 11.4) gives for all ϕ ∈ C_0^∞(R^N) ⊂ C^∞(Ω̄)

∣∫_{∂Ω} γ(u)ϕν_j dσ − ∫_Ω ∂_j u ϕ dx∣ = ∣∫_Ω u ∂_j ϕ dx∣ ≤ C∥ϕ∥_{L^{p′}(Ω)}.

Now choose compact sets K_j ⊂ Ω with ⋃_{j∈N} K_j = Ω and let χ_j be a continuous function satisfying χ_j(x) = 0 for x ∈ K_j and χ_j(x) = 1 for x ∈ ∂Ω. Such a function exists, see Theorem 4.3. Replacing ϕ by ϕ ⋅ χ_j, we thus obtain for all j ∈ N
∥∂_j u∥_{L^p(Ω)} ∥ϕχ_j∥_{L^{p′}(Ω)} ≥ ∣∫_{∂Ω} γ(u)ϕ χ_j ν_j dσ − ∫_Ω ∂_j u ϕχ_j dx∣, where χ_j = 1 on ∂Ω.
Sending j to infinity we obtain from the Dominated Convergence Theorem

0 ≥ ∣∫_{∂Ω} γ(u)ϕν_j dσ∣.
This holds for all ϕ ∈ C ∞ (Ω) and Proposition 11.5 thus implies γ(u)νj = 0 for all
j = 1, . . . , N . This gives γ(u) = 0, which is (ii). ◻
So we conclude that the functions from W_0^{1,p}(Ω) are exactly the ones for which the corresponding trace is zero. One may use the trace operator to refine Poincaré's Inequality. We show that it is not necessary that the function vanishes on the whole of ∂Ω; a reasonably large piece of it is already sufficient. The following corollary thus contains the classical Poincaré Inequality from Theorem ?? as the special case Γ = ∂Ω.

∥u − (1/∣Γ∣) ∫_Γ γ(u) dσ∥_{L^p(Ω)} ≤ C∥∇u∥_{L^p(Ω)} for all u ∈ W^{1,p}(Ω)
Proof:
This is a consequence of Theorem 10.5 applied to v := u − (1/∣Γ∣) ∫_Γ γ(u) dσ, which belongs to the subspace V := {w ∈ W^{1,p}(Ω) : ∫_Γ γ(w) dσ = 0}. In fact, one may check that this subspace is closed, and the only constant function in V is the trivial one because u ≡ c implies ∫_Γ γ(u) dσ = c∣Γ∣, hence c = 0. ◻
We sketch some application to our favourite elliptic boundary value problem where now
nontrivial boundary conditions will be allowed. Generalizing the previous approach via
the Lax-Milgram-Lemma we may now try to solve a(u, v) = l(v) where
Quite remarkable: For κ ≡ 0 we thus obtain the unique solution in H_0^1(Ω) with a different functional and working on H^1(Ω). In other words, the following two bilinear forms give the same solution:
ã ∶ H01 (Ω) × H01 (Ω) → R, (u, v) ↦ ∫ ∇u ⋅ ∇v + c(x)u(x)v(x) dx
Ω
How is that possible? One can show that the Lax-Milgram Lemma provides the uniquely
determined minimizers of the corresponding functionals
I: H^1(Ω) → R, u ↦ (1/2) ∫_Ω ∣∇u∣^2 + c(x)u(x)^2 dx + (1/2) ∫_{∂Ω} (γu)(x)^2 dσ(x) − ∫_Ω f(x)u(x) dx

Ĩ: H_0^1(Ω) → R, u ↦ (1/2) ∫_Ω ∣∇u∣^2 + c(x)u(x)^2 dx − ∫_Ω f(x)u(x) dx
One can see that the minimizer of I wants to make ∣γu∣ as small as possible while keeping the other terms fixed. So it is not surprising that the minimizer satisfies γu = 0 and thus belongs to H_0^1(Ω). One can make this rigorous using the Calculus of Variations (Euler-Lagrange equations). Nevertheless it is preferable to work with H_0^1(Ω) since it does not involve the trace operator machinery. In particular, no boundary regularity is needed.
End Lec 16
12 Separability
Definition 12.1. A Banach space X is called separable if it has a countable dense subset.

We first investigate whether L^p-spaces have this property. It turns out that the case p = ∞ is different from p ∈ [1, ∞). To prove the non-separability of L^∞(Ω) we are going to use the following criterion.
Proposition 12.2. Let X be a Banach space that contains an uncountable family of open, non-empty and pairwise disjoint sets. Then X is not separable.
Proof:
Denote the uncountable family by (U_i)_{i∈I} and assume for contradiction that M := {x_n : n ∈ N} is a dense subset of X. Since the sets U_i are open and non-empty and M is dense, there is some n = n(i) ∈ N such that x_{n(i)} ∈ U_i. On the other hand U_i ∩ U_j = ∅ for i ≠ j, so we must have n(i) ≠ n(j) for i ≠ j. This shows that n: I → N is injective, which implies that I is countable, a contradiction. ◻
Proof:
We first prove (ii) using the previous proposition. For x ∈ Ω choose r_x > 0 such that B_{r_x}(x) ⊂ Ω and set

U_x := {f ∈ L^∞(Ω) : ∥f − 1_{B_{r_x}(x)}∥_∞ < 1/2}.

Then (U_x)_{x∈Ω} is an uncountable family of pairwise disjoint open and non-empty sets. The disjointness follows from ∥1_{B_{r_x}(x)} − 1_{B_{r_y}(y)}∥_∞ = 1 for x ≠ y.
To prove the separability of L^p(Ω) we use Theorem 4.8 where we proved that C_0^∞(Ω) is dense. So it suffices to find a countable subset of L^p(Ω) that approximates C_0^∞(Ω) with respect to the norm in L^p(Ω). We choose {p ⋅ 1_{Ω∩B_M(0)} : p ∈ P, M ∈ N}, where P denotes the set of polynomials with rational coefficients (see footnote 45).
So consider any function ϕ ∈ C_0^∞(Ω) and ε > 0. Then choose M ∈ N such that the support of ϕ is contained in B_M(0). Weierstrass' Approximation Theorem provides a polynomial p̃ such that

∥p̃ − ϕ∥_{C(B̄_M(0))} ⋅ ∣B_M(0)∣^{1/p} < ε/2.
This implies

∥p̃ ⋅ 1_{Ω∩B_M(0)} − ϕ∥_{L^p(Ω)} = ∥p̃ − ϕ∥_{L^p(Ω∩B_M(0))} ≤ ∥p̃ − ϕ∥_{C(B̄_M(0))} ⋅ ∣B_M(0)∣^{1/p} < ε/2.    (12.1)

Since p̃ is a polynomial, we have p̃(x) = ∑_{∣α∣≤n} ã_α x^α for some n ∈ N and ã_α ∈ R. Choosing a_α ∈ Q sufficiently close to ã_α, we obtain for p(x) := ∑_{∣α∣≤n} a_α x^α:

∥p̃ ⋅ 1_{Ω∩B_M(0)} − p ⋅ 1_{Ω∩B_M(0)}∥_{L^p(Ω)} ≤ ∥p̃ − p∥_{C(B̄_M(0))} ⋅ ∣B_M(0)∣^{1/p} ≤ ∑_{∣α∣≤n} ∣ã_α − a_α∣ ∥x^α∥_{C(B̄_M(0))} ∣B_M(0)∣^{1/p} < ε/2.    (12.2)
As a consequence of (12.1) and (12.2),
45: This means that for each p ∈ P there is n ∈ N and a_α ∈ Q for multi-indices 0 ≤ ∣α∣ ≤ n such that p(x) = ∑_{∣α∣≤n} a_α x^α. This set is countable!
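The approximation steps (12.1)-(12.2) can be imitated in code: approximate a continuous function by a polynomial, then perturb the coefficients to rationals. The sketch below (Bernstein polynomials stand in for Weierstrass' theorem; the target function and degree are my own choices) shows that the rational rounding costs almost nothing:

```python
from fractions import Fraction
import math

# Approximate f(t) = exp(t) on [0, 1] by a Bernstein polynomial, then
# replace the real values f(k/n) by nearby rationals; the sup-error
# barely changes, illustrating the countable dense set P used above.
def bernstein_eval(vals, x):
    n = len(vals) - 1
    return sum(v * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k, v in enumerate(vals))

f = lambda t: math.exp(t)
vals = [f(k / 80) for k in range(81)]                     # real data
rational_vals = [Fraction(v).limit_denominator(10**6)     # rational data
                 for v in vals]

grid = [i / 200 for i in range(201)]
err_real = max(abs(bernstein_eval(vals, x) - f(x)) for x in grid)
err_rat = max(abs(bernstein_eval(rational_vals, x) - f(x)) for x in grid)
print(err_real, err_rat)
```

Since each rational value differs from the real one by at most 10^{-6} and Bernstein evaluation is a convex combination, the two errors agree up to 10^{-6}.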
We draw the consequences for Sobolev spaces. We first note that (finite) product spaces L^p(Ω) × ... × L^p(Ω) are also separable for 1 ≤ p < ∞. Defining now

Ψ: W^{k,p}(Ω) → L^p(Ω)^K, Ψu := (∂^α u)_{0 ≤ ∣α∣ ≤ k},

we find that the subspace Ψ(W^{k,p}(Ω)) of L^p(Ω)^K is closed. Moreover, Ψ is even isometric by definition of the respective norms, in particular it is injective. We will use the following criterion.
Proof:
Let {x_n : n ∈ N} be dense in X and x* ∈ M. Then define y_{n,m} = x* if B_{1/m}(x_n) ∩ M = ∅ and y_{n,m} ∈ B_{1/m}(x_n) ∩ M (arbitrary) otherwise. We claim that Y := {y_{n,m} : n, m ∈ N} ⊂ M is dense. Indeed, for all x ∈ M and N > 0 we can find n = n(N) ∈ N such that ∥x − x_n∥ < 1/(2N). Then we have x ∈ B_{1/2N}(x_n) ∩ M, so there is y_{n,2N} ∈ B_{1/2N}(x_n) ∩ M. Hence,

∥x − y_{n,2N}∥ ≤ ∥x − x_n∥ + ∥x_n − y_{n,2N}∥ < 1/(2N) + 1/(2N) = 1/N.

Since x ∈ M, N ∈ N were arbitrary and y_{n,2N} ∈ Y, which is a countable subset of M, we infer that M is separable. ◻
Proof:
In the case 1 ≤ p < ∞ the space Ψ(W^{k,p}(Ω)) is separable as a closed subspace of L^p(Ω)^K in view of the previous proposition. So if P̃ ⊂ Ψ(W^{k,p}(Ω)) is a countable dense subset, then P := {Ψ^{−1}(p) : p ∈ P̃} is countable and dense in W^{k,p}(Ω).
To prove (ii) we essentially repeat the trick from the proof of Theorem 12.3 (ii). Choose bounded open subsets Ω′ ⊂ R^{N−1}, I ⊂ R such that Ω′ × I ⊂ Ω and a cutoff function χ ∈ C_0^∞(R^N) such that χ(x) = 1 for all x ∈ Ω′ × I. W.l.o.g. 0 ∈ I. For z ∈ I we may then choose r_z > 0 such that I_z := (z − r_z, z + r_z) ⊂ I and

F_z(x) := χ(x) ∫_0^{x_N} ∫_0^{t_1} ⋯ ∫_0^{t_{k−1}} 1_{I_z}(s) ds ⋯ dt_{k−1} where x = (x′, x_N) ∈ Ω.
46: K can be computed in terms of N and k.

47: We do not insist on the fact that M is a subspace. Note that the only issue is that the approximating countable set has to be a subset of M.
Then z ∈ I implies F_z ∈ W^{k,∞}(Ω) (see footnote 48). We define

U_z := {f ∈ W^{k,∞}(Ω) : ∥f − F_z∥_{W^{k,∞}(Ω)} < 1/2}    (z ∈ I).

Then (U_z)_{z∈I} is an uncountable family of pairwise disjoint open and non-empty subsets of W^{k,∞}(Ω). The disjointness follows from
This result and Proposition 12.4 also imply that W_0^{k,p}(Ω) and other subspaces are separable for 1 ≤ p < ∞. Moreover, one can modify the proof in such a way that W_0^{k,∞}(Ω) is seen not to be separable.
13 Reflexivity
Let (X, ∥ ⋅ ∥X ) be a real Banach space. Then its dual space is dened by
48: Here χ ∈ C_0^∞(R^N) implies that all weak partial derivatives of order ≤ k are bounded on Ω.
The notion of a reflexive space has to do with the nature of X′′. More precisely, define J: X → X′′ via (Jf)(g) := g(f) for g ∈ X′. This map is well-defined and linear, satisfies J(0) = 0, and for f ∈ X ∖ {0} we have
Hence, J is a bounded linear operator. In a course on functional analysis one shows that
J is injective (using the Hahn-Banach Theorem), but it need not be surjective.
At first sight it is entirely unclear why such a seemingly artificial property should be important and, even if so, how it can be checked. To get a better feeling for this property:

W^{k,p}(Ω) is reflexive if and only if 1 < p < ∞ (see below).

The sequence spaces l^1, l^∞, c_0 are not reflexive; C([0,1]) is not reflexive.
Its importance comes from the fact that in reflexive Banach spaces, bounded sequences have weakly convergent subsequences, see Corollary 13.8 below. This is the true generalization of the Bolzano-Weierstraß Theorem to infinite-dimensional Banach spaces. This result is the standard tool to prove the existence of minimizers of functionals in the Calculus of Variations. In fact, the minimizers are in most cases constructed as the weak limits of suitable bounded minimizing sequences for a given functional. The latter is often nonlinear, but one can check that a property called weak lower-semicontinuity is sufficient. The reflexivity of the space may be checked by proving the uniform convexity of its norm.
End Lec 17
Definition 13.2 (Uniform Convexity). A normed vector space (X, ∥⋅∥_X) is called uniformly convex if

∀ε > 0 ∃δ > 0 ∀x, y ∈ X: ∥x∥ = ∥y∥ = 1, ∥x − y∥ ≥ ε ⇒ ∥(x + y)/2∥ ≤ 1 − δ.
Lemma 13.3. Let 1<p<∞ and Ω ⊂ RN . Then (Lp (Ω), ∥ ⋅ ∥p ) is uniformly convex.
The same is true for the spaces Lp (Ω)K , 1 < p < ∞, K ∈ N, and subspaces thereof. A proof
can be found in [1, p.41-45]. The proof of the following result is given in the Appendix.
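For p ≥ 2, uniform convexity of L^p is usually derived from Clarkson's first inequality ∥(f+g)/2∥_p^p + ∥(f−g)/2∥_p^p ≤ (∥f∥_p^p + ∥g∥_p^p)/2. The sketch below (random vectors in a finite-dimensional discrete L^p space are my own choice of test) checks this inequality for p = 4:

```python
import numpy as np

# Check Clarkson's first inequality (valid for p >= 2) on R^10 with
# the p-norm, i.e. a discrete L^p space:
# ||(f+g)/2||_p^p + ||(f-g)/2||_p^p <= ( ||f||_p^p + ||g||_p^p ) / 2.
rng = np.random.default_rng(0)
p = 4.0

def norm_p(v):
    return np.sum(np.abs(v) ** p) ** (1 / p)

ok = True
for _ in range(1000):
    f = rng.standard_normal(10)
    g = rng.standard_normal(10)
    lhs = norm_p((f + g) / 2) ** p + norm_p((f - g) / 2) ** p
    rhs = (norm_p(f) ** p + norm_p(g) ** p) / 2
    ok = ok and bool(lhs <= rhs + 1e-9)
print(ok)  # True
```

Applied to unit vectors f, g with ∥f − g∥_p ≥ ε, the inequality gives ∥(f+g)/2∥_p^p ≤ 1 − (ε/2)^p, which is exactly uniform convexity.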
Theorem 13.4 (Milman-Pettis). Assume that (X, ∥⋅∥_X) is a uniformly convex Banach space. Then X is reflexive.

Corollary 13.5. Let 1 < p < ∞ and Ω ⊂ R^N. Then (L^p(Ω), ∥⋅∥_p) and (W^{k,p}(Ω), ∥⋅∥_{k,p}) are reflexive Banach spaces.
Proof:
The statement for L^p(Ω) is a direct consequence of the previous two results. Moreover, choosing Ψ: W^{k,p}(Ω) → L^p(Ω)^K as before, one finds that Ψ(W^{k,p}(Ω)) is a closed subspace of the uniformly convex Banach space L^p(Ω)^K. Hence, (Ψ(W^{k,p}(Ω)), ∥⋅∥_{L^p(Ω)^K}) is uniformly convex. Since Ψ is a linear isometry, this implies that (W^{k,p}(Ω), ∥⋅∥_{k,p}) is uniformly convex as well and hence reflexive by the Milman-Pettis Theorem. ◻
We mention that closed subspaces of reflexive Banach spaces are again reflexive, see [2, Proposition 3.20]. In particular, this is true for W_0^{k,p}(Ω) for k ∈ N, 1 < p < ∞. We finally provide the main motivation why one cares about the reflexivity of Banach spaces, notably of W^{k,p}(Ω). To this end we introduce the following notions of convergence.
A detailed discussion about weak topologies and weak convergence is beyond the scope
of this course given that we are not interested in abstract functional analysis. Instead
some examples:
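A classical example: u_n(x) = sin(nx) converges weakly but not in norm in L^2(0, 2π). The sketch below (the fixed pairing function φ(x) = x is my own choice) shows the norms staying at √π while the pairings decay:

```python
import numpy as np

# Weak but not norm convergence in L^2(0, 2*pi): u_n(x) = sin(n x) has
# ||u_n||_2 = sqrt(pi) for every n, but the pairing <u_n, phi> tends to 0
# by the Riemann-Lebesgue Lemma. Hence u_n -> 0 weakly, not in norm.
x = np.linspace(0, 2 * np.pi, 200001)
dx = x[1] - x[0]
phi = x  # fixed L^2 function to pair against

norms, pairings = [], []
for n in (1, 10, 100):
    un = np.sin(n * x)
    norms.append(np.sqrt(np.sum((un ** 2)[:-1]) * dx))
    pairings.append(np.sum((un * phi)[:-1]) * dx)
print(norms, pairings)
```

The pairing equals −2π/n analytically, so it shrinks by a factor 10 at each step while all three norms agree.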
Theorem 13.7 (Banach-Alaoglu). Let X be a separable Banach space. Then every bounded sequence in X′ has a weak-⋆-convergent subsequence.
Proof:
We mimic the proof of the Ascoli-Arzelà Theorem. Let M := {x_n : n ∈ N} be dense in X and let (f_k)_{k∈N} be any bounded sequence in X′, w.l.o.g. ∥f_k∥_{X′} ≤ 1. One successively defines subsequences of (f_k) such that, after suitable relabeling of the sequence at each step, (f_k(x_1))_k converges, then (f_k(x_2))_k converges, etc. A diagonal subsequence, again denoted by (f_k), converges at every point of M, so we may define

f: M → R, y ↦ lim_{k→∞} f_k(y),

which extends to

F: X → R, x ↦ lim_{y→x, y∈M} f(y).

It remains to show f_k(x) → F(x) for all x ∈ X. To this end choose x ∈ X and ε > 0 arbitrary. First take x_i ∈ M such that ∥x_i − x∥_X ≤ ε/3. Then we can find k_0 sufficiently large such that

sup_{k≥k_0} ∣f_k(x_i) − F(x_i)∣ ≤ ε/3.

This proves f_k(x) → F(x) as k → ∞ for any given x ∈ X, which is the claim. ◻
Corollary 13.8. Let X be a (separable and)⁵¹ reflexive Banach space. Then every bounded sequence in X has a weakly convergent subsequence.
Proof:
Let (x_n) be a bounded sequence in X. Then (Jx_n)_{n∈N} is a bounded sequence in X′′ = (X′)′. Since X′ is separable⁵², Theorem 13.7 provides a subsequence (J(x_{n_j}))_{j∈N} such that J(x_{n_j}) ⇀⋆ T in (X′)′. This means

f(x_{n_j}) = (Jx_{n_j})(f) → T(f) for all f ∈ X′.

Since X is reflexive, we have T = J(x) for some x ∈ X. So we find for any given f ∈ X′

f(x_{n_j}) = (Jx_{n_j})(f) → (Jx)(f) = f(x) as j → ∞.

In other words, x_{n_j} ⇀ x as j → ∞. ◻
Finally, let us mention that L^1(Ω), L^∞(Ω) are not reflexive whenever Ω ⊂ R^N is a nontrivial open set, see [2, p. 101f]. This property carries over to the Sobolev spaces W^{k,1}(Ω), W^{k,∞}(Ω) for k ∈ N.
51: This remains true for non-separable Banach spaces, because one may then consider X̃ := cl(span{x_n : n ∈ N}) instead. This space is now separable by construction and reflexive as a closed subspace of a reflexive Banach space.
52: It is a general fact that Y′ separable implies Y separable. So the separability of X ≃ X′′ implies the separability of X′.
End Lec 18
Appendix
We recall the Riesz-Fischer Theorem that establishes the completeness of Lp (Ω) for
1 ≤ p ≤ ∞. As a byproduct, it gives useful additional information about subsequences of
(convergent) Cauchy sequences in Lp (Ω).
Theorem 13.1 (Riesz-Fischer [6, 20]). Assume that Ω ⊂ R^N and 1 ≤ p ≤ ∞. Then (L^p(Ω), ∥⋅∥_p) is complete. Additionally, for any Cauchy sequence (u_n) ⊂ L^p(Ω) there is a subsequence (u_{n_k}) ⊂ (u_n) and w ∈ L^p(Ω) such that ∣u_{n_k}∣ ≤ w and (u_{n_k}) converges pointwise almost everywhere.
Proof:
We only prove the claim for 1 ≤ p < ∞. Let (u_n) be a Cauchy sequence. Choose a subsequence (u_{n_k}) such that ∥u_{n_k} − u_{n_{k+1}}∥_p ≤ 2^{−k} for all k ∈ N. Then define

w := ∣u_{n_1}∣ + ∑_{k=1}^∞ ∣u_{n_k} − u_{n_{k+1}}∣.
We then have

∣u_{n_k}∣ ≤ ∣u_{n_1}∣ + ∑_{j=1}^{k−1} ∣u_{n_j} − u_{n_{j+1}}∣ ≤ w for all k ∈ N.
Moreover,

∥w∥_p = ∥∣u_{n_1}∣ + ∑_{j=1}^∞ ∣u_{n_j} − u_{n_{j+1}}∣∥_p
= lim_{m→∞} ∥∣u_{n_1}∣ + ∑_{j=1}^m ∣u_{n_j} − u_{n_{j+1}}∣∥_p
≤ lim inf_{m→∞} (∥u_{n_1}∥_p + ∑_{j=1}^m ∥u_{n_j} − u_{n_{j+1}}∥_p)
≤ ∥u_{n_1}∥_p + ∑_{k=1}^∞ 2^{−k} < ∞,
In particular, ∣u_{n_1}∣ + ∑_{j=1}^∞ ∣u_{n_j} − u_{n_{j+1}}∣ ≤ w is finite almost everywhere. So (u_{n_j}(x)) is a Cauchy sequence for almost all x ∈ Ω. Since (R, ∣⋅∣) is complete, this subsequence converges pointwise almost everywhere to some measurable function u satisfying ∣u(x)∣ ≤ w(x) almost everywhere. This proves the "additionally" part that we claimed to hold.
Given ε > 0, choose k such that

∥u_{n_k} − u_l∥_p ≤ ε/2 (l ≥ n_k), ∥u_{n_k} − u∥_p ≤ ε/2.

It follows
Lemma 13.2. Let Ω ⊂ R^N be open and ∅ ⊊ Ω ⊊ R^N. Then there are closed almost disjoint dyadic cubes W_1, W_2, ... with the following properties:

(I) ⋃_{j∈N} W_j = Ω,

(II) diam(W_j) ≤ dist(W_j, Ω^c) ≤ 4 diam(W_j) for all j ∈ N,

(III) W_i ∩ W_j ≠ ∅ implies (1/4) diam(W_i) ≤ diam(W_j) ≤ 4 diam(W_i),

(IV) #{i ∈ N : W_i ∩ W_j ≠ ∅} ≤ 12^N for all j ∈ N.
Furthermore, for any fixed κ ∈ (0, 1/4) there are ϕ_1, ϕ_2, ... ∈ C_0^∞(R^N) such that

(VI) ∣∂^α ϕ_j(x)∣ ≤ C_α diam(W_j)^{−∣α∣} for all α ∈ N_0^N.
Proof:
For k ∈ Z we define the k-th dyadic mesh as S_k := {2^{−k}(z + [0,1]^N) : z ∈ Z^N}. So elements of S_k for ∣k∣ large and k < 0 are large dyadic cubes whereas the cubes from S_k for large k are small ones (to approximate the fine structures of Ω close to the potentially complicated boundary). We use

Ω = ⋃_{k∈Z} Ω_k where Ω_k := {x ∈ Ω : 2^{1−k}√N < dist(x, Ω^c) ≤ 2^{2−k}√N}    (13.1)
The set F := ⋃_{k∈Z} F_k with F_k := {W ∈ S_k : W ∩ Ω_k ≠ ∅} is countable as a countable union of countable sets. Then one can check

Ω_k ⊂ ⋃_{W∈F_k} W ⊂ Ω.    (13.3)

For W ∈ F_k let Ŵ denote the maximal cube in F containing W,⁵³ and set {W_1, W_2, ...} := {Ŵ : W ∈ F_k, k ∈ Z}.
Proof of (I): Let x ∈ Ω. By (13.1) there is some k ∈ Z such that x ∈ Ω_k. By (13.3) there is W ∈ F_k such that x ∈ Ω_k ∩ W. Then diam(W) = √N 2^{−k} < dist(x, Ω^c) by (13.1) implies W ⊂ Ω and thus

x ∈ W ⊂ Ŵ ⊂ ⋃_{j∈N} W_j.
√
Proof of (II): Wj ∈ Fkj ⊂ Skj implies diam(Wj ) = 2−kj N . By denition of Fkj we may
choose x ∈ Wj ∩ Ωkj , whence
√
dist(Wj , Ωc ) ≤ dist(x, Ωc ) ≤ 22−kj N = 4 diam(Wj ).
53: This is done in order not to count subcubes as new cubes; so [0,1] × [0,1] should not be added to the list of cubes if [0,2] × [0,2] is already there. We want to have almost disjoint cubes!
On the other hand, the triangle inequality gives

dist(W_j, Ω^c) ≥ dist(x, Ω^c) − diam(W_j) ≥ 2^{1−k_j}√N − 2^{−k_j}√N = diam(W_j),

using (13.1). So (II) is proved.
Proof of (III): Assume W_i ∩ W_j ≠ ∅. Using (II) twice,

diam(W_j) ≤ dist(W_j, Ω^c) ≤ dist(W_i, Ω^c) + diam(W_i) ≤ 5 diam(W_i).

Since the quotient diam(W_j)/diam(W_i) is a power of 2, this implies diam(W_j) ≤ 4 diam(W_i), and by symmetry also diam(W_i) ≤ 4 diam(W_j).

Proof of (IV): Assume W_i ∩ W_j ≠ ∅. Then W_i ∈ S_{k_i}, W_j ∈ S_{k_j} implies diam(W_i)/diam(W_j) = 2^{−k_i+k_j} and (III) gives 2^{−k_i+k_j} ∈ {1/4, 1/2, 1, 2, 4}. If z_i, z_j are the basepoints of these cubes, we get for all l ∈ {1, ..., N}

Hence,

#{i ∈ N : W_i ∩ W_j ≠ ∅} = #{i ∈ N : 2^{k_i−k_j}((z_i)_l + α_l) − β_l = (z_j)_l for some α_l, β_l ∈ {0,1}, l = 1, ..., N} ≤ (5 ⋅ 2 + 2)^N = 12^N.
Choose ϕ ∈ C_0^∞(R^N) such that ϕ(x) = 1 for x ∈ [0,1]^N and ϕ(x) = 0 if dist(x, [0,1]^N) ≥ κ√N.
(You may deduce the existence of such a function from Theorem 4.3.) If Wj = 2−kj (zj +
[0, 1] ),
N
then we set
ϕj (x) ∶= ϕ(2kj x − zj ) (x ∈ RN )
Then we get ϕ_j(x) = 1 for x ∈ W_j as well as ϕ_j(x) = 0 for dist(x, W_j) ≥ κ √N 2^{−k_j} =
κ diam(W_j). In particular,

supp(ϕ_j) ⊂ ⋃_{W_i ∩ W_j ≠ ∅} W_i.
Furthermore,

∣(∂^α ϕ_j)(x)∣ ≤ 2^{k_j ∣α∣} ∥∂^α ϕ∥_∞ ≤ C_α diam(W_j)^{−∣α∣}.
◻
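In one dimension the rescaling ϕ_j(x) = ϕ(2^{k_j} x − z_j) can be made completely explicit. The cutoff ϕ below is a standard smooth-step construction chosen here for illustration (the notes obtain ϕ from Theorem 4.3 instead); the sketch checks that ϕ_j = 1 on W_j and ϕ_j = 0 at distance κ diam(W_j) from W_j.

```python
import math

kappa = 0.125   # a fixed kappa in (0, 1/4); sqrt(N) = 1 for N = 1

def smooth_step(t):
    # C^infinity step: 0 for t <= 0, 1 for t >= 1
    def e(s):
        return math.exp(-1.0 / s) if s > 0 else 0.0
    return e(t) / (e(t) + e(1.0 - t))

def phi(x):
    # phi = 1 on [0, 1] and phi = 0 where dist(x, [0, 1]) >= kappa
    return smooth_step((x + kappa) / kappa) * smooth_step((1.0 + kappa - x) / kappa)

def phi_j(x, k, z):
    # cutoff adapted to the dyadic cube W_j = 2^-k (z + [0, 1])
    return phi(2.0 ** k * x - z)

# W_j = 2^-3 (5 + [0, 1]) = [0.625, 0.75], diam(W_j) = 1/8
k, z = 3, 5
assert phi_j(0.625, k, z) == 1.0 and phi_j(0.75, k, z) == 1.0   # phi_j = 1 on W_j
far = 0.75 + kappa * 2.0 ** (-k)      # dist(far, W_j) = kappa * diam(W_j)
assert phi_j(far, k, z) == 0.0        # vanishes at distance kappa * diam(W_j)
assert 0.0 < phi_j(0.76, k, z) < 1.0  # transition region in between
```

The chain rule gives ∣ϕ_j^{(α)}∣ ≤ 2^{k_j ∣α∣} ∥ϕ^{(α)}∥_∞, which is exactly the scaling behind (VI).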
13.3 A technical fact about Lipschitz domains
Proof:
For any given a ∈ A and x, y ∈ X the triangle inequality gives d(x, a) ≤ d(x, y) + d(y, a).
Taking the infimum with respect to a ∈ A gives dist(x, A) ≤ d(x, y) + dist(y, A); exchanging
the roles of x and y yields ∣dist(x, A) − dist(y, A)∣ ≤ d(x, y), and the claim follows. ◻
Lemma 13.5 (Urysohn's Lemma). Let A, B ⊂ X be closed disjoint subsets. Then there
is a continuous function f ∶ X → [0, 1] such that f∣_A = 1 and f∣_B = 0.
Proof:
Choose f(x) ∶= dist(x, B)/(dist(x, A) + dist(x, B)); the denominator is positive since A and B are closed and disjoint. ◻
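For a concrete metric space the formula from the proof can be evaluated directly. A minimal sketch on R with finite sets standing in for A and B (the names are ad hoc, not from the notes):

```python
def dist(x, S):
    # distance from a point to a finite set
    return min(abs(x - s) for s in S)

def urysohn(x, A, B):
    # f(x) = dist(x, B) / (dist(x, A) + dist(x, B)); f = 1 on A, f = 0 on B
    dA, dB = dist(x, A), dist(x, B)
    return dB / (dA + dB)

A = [0.0, 0.5]   # two disjoint closed (here: finite) sets
B = [2.0, 3.0]
assert urysohn(0.0, A, B) == 1.0
assert urysohn(2.0, A, B) == 0.0
assert 0.0 < urysohn(1.0, A, B) < 1.0   # interpolates continuously in between
```

Disjointness of the (closed) sets is what keeps the denominator away from zero.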
Proof:
We may w.l.o.g. assume inf_A f = −1, sup_A f = 1, for otherwise consider the continuous
function

x ↦ (1/d) arctan(f(x) − c)   where   c ∶= (1/2)(sup_A f + inf_A f),   d ∶= arctan((1/2)(sup_A f − inf_A f)).
We first construct a continuous function g_0 ∶ X → R such that

∣f(x) − g_0(x)∣ ≤ 2/3 ∀x ∈ A,   ∣g_0(x)∣ ≤ 1/3 ∀x ∈ X.
In fact, Urysohn's Lemma provides a function h ∶ X → [0, 1] satisfying h∣_{f ≤ −1/3} = 0 and
h∣_{f ≥ 1/3} = 1. Then g_0(x) ∶= (2/3)h(x) − 1/3 has the desired properties because of ∣g_0(x)∣ ≤ 1/3
and

Case −1 ≤ f(x) ≤ −1/3 ∶   ∣f(x) − g_0(x)∣ = ∣f(x) + 1/3∣ ≤ 2/3,
Case −1/3 ≤ f(x) ≤ 1/3 ∶   ∣f(x) − g_0(x)∣ ≤ ∣f(x)∣ + ∣g_0(x)∣ ≤ 2/3,
Case 1/3 ≤ f(x) ≤ 1 ∶   ∣f(x) − g_0(x)∣ = ∣f(x) − 1/3∣ ≤ 2/3.
Next apply this preliminary result to the continuous function f̃ ∶= (3/2)(f − g_0) ∶ A → [−1, 1].
We thus obtain a continuous function g̃_1 ∶ X → [−1/3, 1/3] such that ∣f̃(x) − g̃_1(x)∣ ≤ 2/3 for all
x ∈ A. Defining g_1 ∶= (2/3)g̃_1 we obtain

∣f(x) − g_0(x) − g_1(x)∣ ≤ (2/3)^2 for all x ∈ A.
Inductively we obtain a sequence of continuous functions (g_n) that satisfy

∣g_n(x)∣ ≤ (1/3)(2/3)^n ∀x ∈ X,   ∣f(x) − ∑_{i=0}^n g_i(x)∣ ≤ (2/3)^{n+1} ∀x ∈ A.   (13.4)
Define F(x) ∶= ∑_{i=0}^∞ g_i(x). This series converges absolutely and uniformly by the Weierstrass
M-test, so F is continuous with ∣F(x)∣ ≤ ∑_{i=0}^∞ (1/3)(2/3)^i = 1, and F∣_A = f follows from (13.4). ◻
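The iteration can be traced numerically. The sketch below is my own illustration (not from the notes): X is a grid on [−2, 2], A consists of two intervals, the Urysohn step is realized by the distance formula from Lemma 13.5, and the error bound (13.4) is checked after each step.

```python
import math

X = [i / 100.0 for i in range(-200, 201)]                   # sampled "space"
A = [x for x in X if -1.0 <= x <= -0.5 or 0.5 <= x <= 1.0]  # closed set A
f = {x: math.sin(3.0 * x) for x in A}                       # |f| <= 1 on A

def dist(x, S):
    return min(abs(x - s) for s in S)

def urysohn(x, zero_set, one_set):
    # h = 0 on zero_set, h = 1 on one_set (constant if one of them is empty)
    if not zero_set:
        return 1.0
    if not one_set:
        return 0.0
    d0, d1 = dist(x, zero_set), dist(x, one_set)
    return d0 / (d0 + d1)

def step(r):
    # from a remainder r with |r| <= 1 on A, build g with |g| <= 1/3 on X
    # and |r - g| <= 2/3 on A, exactly as in the proof
    zero_set = [x for x in A if r[x] <= -1.0 / 3.0]
    one_set = [x for x in A if r[x] >= 1.0 / 3.0]
    return {x: (2.0 / 3.0) * urysohn(x, zero_set, one_set) - 1.0 / 3.0 for x in X}

remainder = dict(f)
F = {x: 0.0 for x in X}   # partial sums of the extension
scale = 1.0               # (2/3)^n
for n in range(8):
    g = step(remainder)
    F = {x: F[x] + scale * g[x] for x in X}
    remainder = {x: 1.5 * (remainder[x] - g[x]) for x in A}  # rescale to [-1, 1]
    scale *= 2.0 / 3.0
    err = max(abs(f[x] - F[x]) for x in A)
    assert err <= scale + 1e-12   # bound (13.4): |f - sum| <= (2/3)^(n+1) on A
```

After eight steps the maximal error on A is below (2/3)^8 ≈ 0.039, and letting n → ∞ reproduces the limit F of the proof.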
We now provide a proof of Theorem 13.4 that goes back to [15, 18].
Proposition 13.7. Let X be a normed space and f, f_1, . . . , f_n ∈ X′. Then f is a linear
combination of f_1, . . . , f_n if and only if ⋂_{j=1}^n ker(f_j) ⊂ ker(f).
Proof:
We assume w.l.o.g. that {f_1, . . . , f_n} is linearly independent, in particular f_j ≠ 0 for
j = 1, . . . , n. If f = ∑_{j=1}^n α_j f_j, then every x ∈ ⋂_{j=1}^n ker(f_j) satisfies

f(x) = ∑_{j=1}^n α_j f_j(x) = 0,

so the 'only if' part holds.
Now assume ⋂_{j=1}^n ker(f_j) ⊂ ker(f); our aim is to show f = α_1 f_1 + . . . + α_n f_n for some
α_1, . . . , α_n ∈ R. We proceed inductively and start with the case n = 1.
In that case we have ker(f_1) ⊂ ker(f). If f = 0 there is nothing to show, so let f ≠ 0. Then the
spaces ker(f), ker(f_1) both have codimension 1⁵⁴, and we infer ker(f_1) = ker(f). Choosing x* ∈ X
such that f_1(x*) ≠ 0 we obtain f − (f(x*)/f_1(x*)) f_1 ≡ 0, which settles the case n = 1.
Now assume that the claim has been proved for up to n linearly independent functionals
and let f_1, . . . , f_{n+1} be given as required. To apply the induction hypothesis define for
j = 1, . . . , n the functionals g_j ∶= f_j∣_{ker(f_{n+1})} and g ∶= f∣_{ker(f_{n+1})} on the space ker(f_{n+1}).
Then

⋂_{j=1}^n ker(g_j) = ⋂_{j=1}^n (ker(f_j) ∩ ker(f_{n+1})) ⊂ ker(f) ∩ ker(f_{n+1}) = ker(g).
By the induction hypothesis there are α_1, . . . , α_n ∈ R with g = ∑_{j=1}^n α_j g_j, so

x ∈ ker(f_{n+1}) ⇒ f(x) = g(x) = ∑_{j=1}^n α_j g_j(x) = ∑_{j=1}^n α_j f_j(x) ⇒ x ∈ ker(f − ∑_{j=1}^n α_j f_j).
The case n = 1 now gives f − ∑_{j=1}^n α_j f_j = α_{n+1} f_{n+1} for some α_{n+1} ∈ R. This proves
the claim. ◻
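The criterion is easy to test in coordinates: on R³ every functional is a row vector, and the joint kernel is the null space of the stacked matrix. A hand-picked example (mine, not from the notes):

```python
# functionals on R^3 as row vectors: f_j(x) = <a_j, x>
def ev(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

f1 = (1.0, 0.0, 1.0)
f2 = (0.0, 1.0, -1.0)
f = (2.0, 3.0, -1.0)          # f = 2 f1 + 3 f2, so the criterion must hold

# ker(f1) ∩ ker(f2) = span{(-1, 1, 1)}: f1 and f2 vanish there ...
v = (-1.0, 1.0, 1.0)
assert ev(f1, v) == 0.0 and ev(f2, v) == 0.0
# ... and so does f, as Proposition 13.7 predicts
assert ev(f, v) == 0.0

# a functional outside span{f1, f2} does not vanish on the joint kernel
h = (0.0, 0.0, 1.0)
assert ev(h, v) != 0.0
```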
⁵⁴ More precisely: Choose x* ∈ X with f_1(x*) = 1. For arbitrary x ∈ X we have x − f_1(x)x* ∈ ker(f_1) ⊂ ker(f),
hence f(x) = f_1(x)f(x*). Since f ≠ 0 this forces f(x*) ≠ 0, so x ∈ ker(f) implies f_1(x) = 0, i.e. ker(f) ⊂ ker(f_1).
If (ii) is satisfied, then x can be chosen as in (i) with ∥x∥ ≤ M if dim(X) < ∞, or with
∥x∥ ≤ M + ε for any ε > 0 if dim(X) = ∞.
Proof:
(i)→(ii): If such an x exists, then
(ii)→(i): We assume w.l.o.g. that {f_1, . . . , f_n} is linearly independent. Define T(x) ∶=
(f_1(x), . . . , f_n(x)) for x ∈ X. Since f_k is not a linear combination of the other n − 1
functionals, Proposition 13.7 implies ⋂_{j≠k} ker(f_j) ⊄ ker(f_k). In particular there is y_k ∈
⋂_{j≠k} ker(f_j) such that f_k(y_k) = 1. This implies T(y_k) = e_k for all k ∈ {1, . . . , n} and hence
T is surjective. For any c ∈ R^n ∖ {0} we may thus find y ∈ X such that

(f_1(y), . . . , f_n(y)) = T(y) = (c_1, . . . , c_n),   y ∉ ⋂_{j=1}^n ker(f_j).
It therefore remains to find x ∈ y + ⋂_{j=1}^n ker(f_j) such that ∥x∥ can be chosen as required.
We may thus find z ∈ ⋂_{j=1}^n ker(f_j) such that ∥y − z∥ ≤ M if dim(X) < ∞ and ∥y − z∥ ≤ M + ε
if dim(X) = ∞. So the claim follows for x ∶= y − z. ◻
Theorem 13.9 (Milman (1938), Pettis (1939)). Let X be a uniformly convex Banach space.
Then X is reflexive.
Proof:
Let F ∈ X′′ be arbitrary with ∥F∥ = 1. We have to construct x ∈ X with F = Jx, which
will be achieved with Helly's Theorem.
By definition of the norm in X′′ there is a sequence (f_n) ⊂ X′ with ∥f_n∥ = 1 such that
F(f_n) > 1 − 1/n. We then apply Helly's Theorem to c_j ∶= F(f_j), j = 1, . . . , n. In view of
we find x_n ∈ X satisfying

∥x_n∥ ≤ 1 + 1/n,   f_k(x_n) = F(f_k) for all k = 1, . . . , n.
This implies 1 − 1/n < F(f_n) = f_n(x_n) ≤ ∥x_n∥ ≤ 1 + 1/n and thus for m ≥ n

2 − 2/n ≤ F(f_n) + F(f_n) = f_n(x_n) + f_n(x_m) = f_n(x_n + x_m) ≤ ∥x_n + x_m∥ ≤ 2 + 2/n.
Hence,

lim_{m,n→∞} ∥(x_n + x_m)/2∥ = 1,   lim_{n→∞} ∥x_n∥ = 1.
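The passage from these two limits to the Cauchy property is precisely where uniform convexity enters. In the special case of a Hilbert space the step is immediate from the parallelogram law (a sanity check added here, not part of the original argument):

```latex
\|x_n - x_m\|^2 = 2\|x_n\|^2 + 2\|x_m\|^2 - \|x_n + x_m\|^2
\longrightarrow 2 + 2 - 4 = 0 \qquad (m, n \to \infty).
```

In the general uniformly convex case one argues by contradiction: if ∥x_n − x_m∥ ≥ ε along subsequences, then, after normalizing via ∥x_n∥ → 1, the modulus of convexity forces ∥(x_n + x_m)/2∥ ≤ 1 − δ for some δ > 0 depending on ε, contradicting the first limit.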
Hence, (xn ) is a Cauchy sequence in X and thus converges to some x ∈ X. This proves
the existence of x∈X such that
Now fix any f ∈ X′ with ∥f∥_{X′} = 1, define f_0 ∶= f and consider the sequence (f_k)_{k∈N_0} for
f_k as above. Helly's Theorem yields a sequence (y_n) with

∥y_n∥ ≤ 1 + 1/n,   f_k(y_n) = F(f_k) for all k = 0, . . . , n,
which is Cauchy by the arguments presented above. Hence (y_n) converges, and the
uniqueness property proved above implies y_n → x as n → ∞. But this implies
References
[1] R. A. Adams and J. J. F. Fournier. Sobolev spaces, volume 140 of Pure and Applied
Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, second edition,
2003.
[2] H. Brezis. Functional analysis, Sobolev spaces and partial differential equations.
Universitext. Springer, New York, 2011.
[4] A.-P. Calderón and A. Zygmund. Local properties of solutions of elliptic partial
differential equations. Studia Math., 20:171–225, 1961. doi:10.4064/sm-20-2-181-225.
[5] H. Federer and W. H. Fleming. Normal and integral currents. Ann. of Math. (2),
72:458–520, 1960. doi:10.2307/1970227.
[7] W. H. Fleming and R. Rishel. An integral formula for total gradient variation. Arch.
Math. (Basel), 11:218–222, 1960. doi:10.1007/BF01236935.
[8] E. Gagliardo. Proprietà di alcune classi di funzioni in più variabili. Ricerche Mat.,
7:102–137, 1958.
[11] W. Kondrachov. Sur certaines propriétés des fonctions dans l'espace. C. R. (Doklady)
Acad. Sci. URSS (N. S.), 48:535–538, 1945.
[12] P. D. Lax. A short path to the shortest path. Amer. Math. Monthly, 102(2):158–159,
1995. doi:10.2307/2975350.
partial differential equations, Annals of Mathematics Studies, no. 33, pages 167–190.
Princeton University Press, Princeton, N. J., 1954.
[14] N. G. Meyers and J. Serrin. H = W. Proc. Nat. Acad. Sci. U.S.A., 51:1055–1056,
1964. doi:10.1073/pnas.51.6.1055.
[15] D. Milman. On some criteria for the regularity of spaces of type (B). Comptes
Rendus (Doklady) de l'Académie des Sciences de l'URSS, 20:243–246, 1938.
[16] J. Moser. A sharp form of an inequality by N. Trudinger. Indiana Univ. Math. J.,
20:1077–1092, 1970/71. doi:10.1512/iumj.1971.20.20101.
[17] L. Nirenberg. On elliptic partial differential equations. Ann. Scuola Norm. Sup.
Pisa Cl. Sci. (3), 13:115–162, 1959.
[18] B. J. Pettis. A proof that every uniformly convex space is reflexive. Duke Math. J.,
5(2):249–253, 1939. doi:10.1215/S0012-7094-39-00522-3.
[19] F. Rellich. Ein Satz über mittlere Konvergenz. Nachrichten von der Gesellschaft
der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1930:30–35,
1930. URL: http://eudml.org/doc/59297.
[20] F. Riesz. Sur les systèmes orthogonaux de fonctions. Comptes rendus de l'Académie
des sciences, 144:615–619, 1907.
[21] F. Riesz. Sur une espèce de géométrie analytique des systèmes de fonctions sommables.
Comptes rendus de l'Académie des sciences, 144:1409–1411, 1907.
[22] S. Sobolev. Sur un théorème d'analyse fonctionnelle. Rec. Math. [Mat. Sbornik]
N.S., 4:471–497, 1938.
[24] G. Talenti. Best constant in Sobolev inequality. Ann. Mat. Pura Appl. (4), 110:353–372,
1976. doi:10.1007/BF02418013.
[25] N. S. Trudinger. On imbeddings into Orlicz spaces and some applications. J. Math.
Mech., 17:473–483, 1967. doi:10.1512/iumj.1968.17.17028.
[27] W. H. Young. On the multiplication of successions of Fourier constants. Proc. R.
Soc. Lond. A, 87:331–339, 1912. doi:10.1098/rspa.1912.0086.