
Fourier Analysis and Integral Transformations With Applications in Engineering


Foreword

The present book, entitled 'Fourier Analysis and Integral Transforms with
Applications in Engineering', intends to be a useful tool mainly for students
enrolled in a technical university, in both bachelor and master degree
programs, but also for engineers involved in research.
The content is structured in four chapters and annexes, each chapter
presenting definitions, properties and examples, and focusing on partially or
completely solved exercises. If an exercise is not explicitly solved, then it
contains hints and the final answer. An important part of the book is
represented by various applications in engineering, together with numerous
MATLAB and Maple examples. All of this is meant to facilitate a good
understanding of the theoretical notions.
Fourier Analysis is based on the decomposition of periodic functions into a
discrete sum of trigonometric or exponential functions with specific frequencies.
It has multiple applications in electrical engineering, vibration analysis,
acoustics, optics, signal processing, image processing, quantum mechanics,
econometrics, etc.
The integral transforms covered in detail in this book are the Fourier, the
Laplace and the Z transforms. Their definition is motivated by a plethora
of problems that are difficult to solve in their original form in the time
domain. These complicated problems become much easier in the frequency
domain, as they are reduced to simple algebraic equations. The inverse
transform then has the role of producing the solution in the initial time domain.
More information about the topics presented in this book can be found
in [6], [22], [23] and [33]. For exercises and applications one can use [3], [30]
and [27], where the first two references contain problems given at the ’Traian
Lalescu’ Mathematical Contest for Students.
For the use of MATLAB and Maple in mathematical modeling, see [4]
and [1], respectively.
The authors would like to express warm thanks to the referees who read
the material and contributed to its improvement, and to all the colleagues for
their precious suggestions and valuable observations. A very special thanks
goes to Mihaela Pitiş, for allowing us to use an example from her graduation
thesis related to cryptography, and to Laurenţiu Toader, for designing the
figures. Both are former students of the Politehnica University of Bucharest.

Contents

Foreword i

1 Fourier Analysis 1
1.1 The pre-Hilbert space R . . . . . . . . . . . . . . . . . . . . . 2
1.2 Generalized Fourier Series . . . . . . . . . . . . . . . . . . . . 4
1.3 Trigonometric Fourier Series . . . . . . . . . . . . . . . . . . . 8
1.4 Convergence of the Fourier Series . . . . . . . . . . . . . . . . 11
1.5 Fourier Series of Even and Odd Functions . . . . . . . . . . . 13
1.6 Complex Fourier Series . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Signal Fourier Series Representation . . . . . . . . . . . . . . . 22
1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.9 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 52
1.10 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 62

2 Fourier Transform 65
2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2 Properties of the Fourier Transform . . . . . . . . . . . . . . . 74
2.3 The Inversion Formula . . . . . . . . . . . . . . . . . . . . . . 79
2.4 Fourier Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.5 Discrete Fourier Transform (DFT) . . . . . . . . . . . . . . . . 89
2.6 Fast Fourier Transform (FFT) . . . . . . . . . . . . . . . . . . 94
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.8 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 123
2.8.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . 123
2.8.2 Inverse Fourier Transform . . . . . . . . . . . . . . . . 128
2.8.3 Fast Fourier Transform . . . . . . . . . . . . . . . . . . 131
2.9 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 132
2.9.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . 132

2.9.2 Inverse Fourier Transform . . . . . . . . . . . . . . . . 134
2.9.3 Fourier Cosine Transform . . . . . . . . . . . . . . . . 135
2.9.4 Fourier Sine Transform . . . . . . . . . . . . . . . . . . 136
2.9.5 Discrete Transforms . . . . . . . . . . . . . . . . . . . 136

3 Laplace Transform 141


3.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
3.2 Properties of the Laplace Transform . . . . . . . . . . . . . . . 149
3.3 Inverse Laplace Transform . . . . . . . . . . . . . . . . . . . . 168
3.4 Applications of the Laplace Transform . . . . . . . . . . . . . 175
3.4.1 Differential Equations . . . . . . . . . . . . . . . . . . 175
3.4.2 Systems of Differential Equations . . . . . . . . . . . . 179
3.4.3 Integral Equations . . . . . . . . . . . . . . . . . . . . 182
3.4.4 Linear Time-Invariant Control Systems . . . . . . . . . 185
3.4.5 RLC Circuits . . . . . . . . . . . . . . . . . . . . . . . 189
3.4.6 Encryption-Decryption of a Message . . . . . . . . . . 191
3.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
3.6 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 230
3.6.1 Laplace Transform . . . . . . . . . . . . . . . . . . . . 230
3.6.2 Inverse Laplace Transform . . . . . . . . . . . . . . . . 239
3.6.3 Transfer Matrices . . . . . . . . . . . . . . . . . . . . . 240
3.6.4 Partial Fraction Decomposition. Residue . . . . . . . . 245
3.7 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 246
3.7.1 Laplace Transform . . . . . . . . . . . . . . . . . . . . 246
3.7.2 Inverse Laplace Transform . . . . . . . . . . . . . . . . 252
3.7.3 Differential Equations . . . . . . . . . . . . . . . . . . 254
3.7.4 Transfer Matrices/Functions . . . . . . . . . . . . . . . 255

4 Z Transform 257
4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
4.2 Properties of the Z Transform . . . . . . . . . . . . . . . . . . 261
4.3 Determination of the Original . . . . . . . . . . . . . . . . . . 274
4.4 Applications of the Z Transform . . . . . . . . . . . . . . . . . 278
4.4.1 Difference Equations . . . . . . . . . . . . . . . . . . . 278
4.4.2 Discrete-time Control Systems . . . . . . . . . . . . . . 282
4.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
4.6 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 309
4.6.1 Z Transform . . . . . . . . . . . . . . . . . . . . . . . 309

4.6.2 Inverse Z Transform . . . . . . . . . . . . . . . . . . . 314
4.7 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 316
4.7.1 Z Transform . . . . . . . . . . . . . . . . . . . . . . . 316
4.7.2 Inverse Z Transform . . . . . . . . . . . . . . . . . . . 322

A Tables of Integral Transforms 325


A.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . . . . . 325
A.2 Cosine Fourier Transform . . . . . . . . . . . . . . . . . . . . 327
A.3 Sine Fourier Transform . . . . . . . . . . . . . . . . . . . . . . 329
A.4 Laplace Transform . . . . . . . . . . . . . . . . . . . . . . . . 331
A.5 Z Transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333

Bibliography 335

Index 338

Chapter 1

Fourier Analysis

Fourier analysis is the study of the way general functions may be repre-
sented or approximated by sums of simpler trigonometric functions. Fourier
analysis grew from the study of Fourier series, and is named after Jean-
Baptiste Joseph Fourier (1768–1830), who showed that representing a func-
tion as a sum of trigonometric functions greatly simplifies the study of heat
transfer. Fourier introduced the series for the purpose of solving the heat
equation in a metal plate. Although the original motivation was to solve
the heat equation, it later became obvious that the same techniques could
be applied to a wide array of mathematical and physical problems, espe-
cially those involving linear differential equations with constant coefficients,
for which the eigensolutions are sinusoids. Fourier series have many such
applications as in electrical engineering, vibration analysis, acoustics, optics,
signal processing, image processing, quantum mechanics, econometrics.

A Fourier series is an expansion of a periodic function f(x) in terms of an
infinite sum of sines and cosines. Fourier series make use of the orthogonality
relationships of the sine and cosine functions. It is extremely useful as a way
to break up an arbitrary periodic function into a set of simple terms that can
be plugged in, solved individually, and then recombined to obtain the solution
to the original problem or an approximation to it to whatever accuracy is
desired or practical.

This topic is studied for instance in [29], [31] and [32].

1.1 The pre-Hilbert space R
In order to introduce and study Fourier series, one needs a suitable struc-
ture, namely that of a pre-Hilbert space.
We recall that a vector space or a linear space is a nonempty set V
(whose elements are called vectors) on which one has defined two operations,
addition and multiplication by scalars belonging to a field F, subject to
eight axioms. These axioms must be verified by any u, v, w ∈ V and
α, β ∈ F and are the following:

1. u + v = v + u (commutativity);

2. (u + v) + w = u + (v + w) (associativity);

3. there exists 0 ∈ V (the zero vector) such that v + 0 = v;

4. for any vector v ∈ V there exists a vector v′ ∈ V such that v + v′ =
v′ + v = 0. This vector v′ will be denoted by −v;

5. 1 · v = v, where 1 is the multiplicative identity of F;

6. α(βv) = (αβ)v;

7. α(u + v) = αu + αv;

8. (α + β)v = αv + βv.

In the sequel we consider the cases F = R and F = C.

Definition 1.1.1. A pre-Hilbert space V is a vector space over F endowed
with an inner product

(·, ·) : V × V → F,

which verifies the following axioms for any u, v, w ∈ V and α, β ∈ F:

i. (v, v) ≥ 0; (v, v) = 0 ⇔ v = 0;

ii. (u, v) = \overline{(v, u)} (if F = R, then this becomes (u, v) = (v, u));

iii. (αu + βv, w) = α(u, w) + β(v, w).

If F = R, then V is called a real pre-Hilbert space and if F = C, then V
is called a complex pre-Hilbert space.

The advantage of a pre-Hilbert structure is the possibility of defining
various notions.

1. The norm of a vector: ‖v‖ = √(v, v) (see [21, pp. 54-55]);

2. The distance between two vectors: d(u, v) = ‖u − v‖;

3. The angle between two vectors, ∠(u, v), defined by

cos ∠(u, v) = (u, v) / (‖u‖ · ‖v‖);

4. Fundamental (Cauchy) sequences: the sequence (v_n)_{n≥1} ⊂ V is called
a fundamental sequence if for every ε > 0 there exists n(ε) ∈ N* such
that for any n, m ∈ N*, n ≥ n(ε), m ≥ 1, we have ‖v_{n+m} − v_n‖ < ε;

5. Convergent sequences: the sequence (v_n)_{n≥1} ⊂ V is convergent (in
norm) and has the limit v ∈ V if for every ε > 0 there exists n(ε) ∈ N*
such that for every n ∈ N*, n ≥ n(ε), we have ‖v_n − v‖ < ε.
Remark 1.1.2. One can prove that a convergent sequence is a fundamental
one. The converse is not always true.
If the pre-Hilbert space V is complete (any fundamental sequence in V is
convergent), then V is called a Hilbert space.
Remark 1.1.3. The norm of a vector v can be considered a measure of
the magnitude of v. If one approximates a vector v by a vector u, then the
distance d(u, v) can be considered the measure of the approximation error.
We say that two vectors u and v are orthogonal if the angle between
them is α := ∠(u, v) = π/2, i.e. if cos α = 0. Since cos ∠(u, v) = (u, v)/(‖u‖ · ‖v‖) = 0,
one obtains a characterization of orthogonality on a pre-Hilbert space: two
vectors u and v are orthogonal if and only if (u, v) = 0.
Definition 1.1.4. Let V be a pre-Hilbert space. A sequence (v_n)_{n≥1} ⊂ V is
called an orthogonal system in V if

(v_n, v_m) = { 0, m ≠ n;  a_n = ‖v_n‖² > 0, m = n. }

It follows from axiom i. of the inner product that, in an orthogonal
system, we have v_n ≠ 0, for every n ≥ 1.

1.2 Generalized Fourier Series

A. Case F = R

One denotes by R(a, b) the set of real Riemann integrable functions on
the interval [a, b] ⊂ R. Thus,

R(a, b) = { f : [a, b] → R : ∫_a^b f(x) dx ∈ R }.

Proposition 1.2.1. R(a, b) is a real vector space with the following opera-
tions:
1. addition (+) : for f, g ∈ R(a, b), f + g ∈ R(a, b), where (f + g)(x) =
f (x) + g(x), ∀x ∈ [a, b];
2. multiplication by a real scalar (·) : for f ∈ R(a, b) and α ∈ R, α · f ∈
R(a, b), where (α · f )(x) = αf (x), ∀x ∈ [a, b].
Proof. One can easily verify the axioms of a vector space for R(a, b).

Remark 1.2.2. In the space R(a, b), f = g if f(x) = g(x) at all common
continuity points x of f and g and, in this case, ∫_a^b f(x) dx = ∫_a^b g(x) dx.
Therefore, f = 0 if f(x) = 0 at all continuity points x of f and, in this case,
∫_a^b f(x) dx = 0. Conversely, if f(x) ≥ 0 at all continuity points x of f and
∫_a^b f(x) dx = 0, then f = 0.
We also make the observation that f ∈ R(a, b) if and only if f is bounded
and continuous almost everywhere (which means that the Lebesgue measure
of the set of discontinuity points of f is 0, i.e. for every ε > 0 there exists
a sequence of intervals whose union includes the set of discontinuity points
of f and such that the sum of the series of the lengths of these intervals is
smaller than ε). For further details one can consult [21].
Proposition 1.2.3. R(a, b) is a real pre-Hilbert space with the inner product
given by

(f, g) = ∫_a^b f(x) g(x) dx.    (1.1)
Proof. Let us verify the axioms of the inner product (see Definition 1.1.1)
for any f, g, h ∈ R(a, b) and α, β ∈ R.

i. (f, f) = ∫_a^b f²(x) dx ≥ 0, since f²(x) ≥ 0 for every x ∈ [a, b]. If
f = 0, it follows that f²(x) = 0 at all its continuity points x. Hence,
(f, f) = ∫_a^b f²(x) dx = 0. Conversely, if ∫_a^b f²(x) dx = 0, it follows
that f²(x) = 0 at all continuity points x of f. Hence, f = 0 (see
Remark 1.2.2);

ii. (f, g) = ∫_a^b f(x) g(x) dx = ∫_a^b g(x) f(x) dx = (g, f), since the
multiplication of real numbers is commutative;

iii. by linearity of the Riemann integral, one obtains

(αf + βg, h) = ∫_a^b (αf(x) + βg(x)) h(x) dx
             = α ∫_a^b f(x) h(x) dx + β ∫_a^b g(x) h(x) dx
             = α(f, h) + β(g, h).

In this case, one gets the following:

• the norm ‖f‖ = √(f, f) = √( ∫_a^b f²(x) dx );

• the distance d(f, g) = ‖f − g‖ = √( ∫_a^b [f(x) − g(x)]² dx );

• the angle, given by

cos ∠(f, g) = (f, g)/(‖f‖ · ‖g‖) = ∫_a^b f(x) g(x) dx / [ √( ∫_a^b f²(x) dx ) · √( ∫_a^b g²(x) dx ) ].
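These notions are easy to explore numerically. The book's computational examples use MATLAB and Maple; the following is an illustrative Python sketch (not from the book) that approximates the inner product on [a, b] with a midpoint-rule quadrature and derives the norm, distance and angle from it:

```python
import math

def inner(f, g, a, b, n=20000):
    # Midpoint-rule approximation of (f, g) = ∫_a^b f(x) g(x) dx.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

def norm(f, a, b):
    # ‖f‖ = √(f, f)
    return math.sqrt(inner(f, f, a, b))

def dist(f, g, a, b):
    # d(f, g) = ‖f − g‖
    return norm(lambda x: f(x) - g(x), a, b)

def cos_angle(f, g, a, b):
    # Cosine of the angle between f and g.
    return inner(f, g, a, b) / (norm(f, a, b) * norm(g, a, b))

# sin and cos are orthogonal on [0, 2π], so the cosine of their angle is ≈ 0,
# while ‖sin‖ = √π on that interval.
print(abs(cos_angle(math.sin, math.cos, 0.0, 2 * math.pi)) < 1e-9)
print(abs(norm(math.sin, 0.0, 2 * math.pi) - math.sqrt(math.pi)) < 1e-6)
```

Both checks print `True`; the same quadrature idea underlies all the numerical sketches below.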

Main Problem of Fourier Analysis

Let us consider an orthogonal system (f_n)_{n≥1} ⊂ R(a, b), i.e.

(f_n, f_m) = ∫_a^b f_n(x) f_m(x) dx = { 0, n ≠ m;  a_n > 0, n = m. }    (1.2)

Given f ∈ R(a, b), we would like to write f as a series expansion with
respect to the orthogonal system (1.2), i.e. to determine the coefficients
c_n ∈ R, n ∈ N*, such that

f = Σ_{n=1}^∞ c_n f_n.    (1.3)

The series (1.3) is called the generalized Fourier series of f and the c_n are
its generalized Fourier coefficients.
Due to orthogonality, the formal solution of the problem is very simple.
Calculate the inner product (f, f_n) for an arbitrary index n using the
expression of f written as a series. We obtain that

(f, f_n) = ( Σ_{m=1}^∞ c_m f_m, f_n ) = Σ_{m=1}^∞ c_m (f_m, f_n) = c_n (f_n, f_n).

Hence, the generalized Fourier coefficients can be calculated by the following
formula:

c_n = (f, f_n)/(f_n, f_n) = ∫_a^b f(x) f_n(x) dx / ∫_a^b f_n²(x) dx.    (1.4)

The following question remains open and must be answered in each
concrete situation:
Under which conditions and at which points is the series (1.3) convergent,
with sum equal to the given function f?
Without an answer to this question, one can only say that the Fourier
series (1.3) is associated to the function f, and write the sign '∼' instead of
'=', i.e.

f ∼ Σ_{n=1}^∞ c_n f_n.
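Formula (1.4) can be tested numerically. The short Python sketch below (an illustration using a midpoint-rule quadrature, not the book's MATLAB/Maple code) recovers the coefficients of a signal built from the orthogonal system {sin nx} on [0, 2π]:

```python
import math

def inner(f, g, a, b, n=20000):
    # (f, g) = ∫_a^b f(x) g(x) dx, midpoint rule.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * g(a + (k + 0.5) * h) for k in range(n)) * h

def fourier_coeff(f, fn, a, b):
    # Formula (1.4): c_n = (f, f_n) / (f_n, f_n).
    return inner(f, fn, a, b) / inner(fn, fn, a, b)

# f = 3 sin x + 0.5 sin 2x, expanded against sin x and sin 2x on [0, 2π]:
f = lambda x: 3 * math.sin(x) + 0.5 * math.sin(2 * x)
c1 = fourier_coeff(f, math.sin, 0.0, 2 * math.pi)
c2 = fourier_coeff(f, lambda x: math.sin(2 * x), 0.0, 2 * math.pi)
print(round(c1, 6), round(c2, 6))  # the coefficients 3 and 0.5 are recovered
```

Orthogonality is what makes each coefficient computable independently of all the others: the inner product with f_n kills every other term of the series.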

B. Case F = C
One denotes by R_C(a, b) the set of complex integrable functions on the
interval [a, b] ⊂ R (see [5] or [20]). Thus,

R_C(a, b) = { f : [a, b] → C : ∫_a^b f(x) dx ∈ C }.

Similarly to the real case (F = R), one obtains the following results.

Proposition 1.2.4. R_C(a, b) is a complex vector space, with the addition and
the multiplication by scalars α ∈ C defined in a similar manner to those
in Proposition 1.2.1.

Proposition 1.2.5. R_C(a, b) is a complex pre-Hilbert space with the inner
product given by

(f, g) = ∫_a^b f(x) ḡ(x) dx.    (1.5)

Proof. We only prove the second axiom. In this case, axiom ii. of the inner
product becomes (f, g) = \overline{(g, f)}, and it is obviously true since

\overline{(g, f)} = \overline{∫_a^b g(x) f̄(x) dx} = ∫_a^b ḡ(x) f(x) dx = (f, g).

It follows that the generalized Fourier series (1.3) has coefficients (see
(1.4))

c_n = (f, f_n)/(f_n, f_n) = ∫_a^b f(x) f̄_n(x) dx / ∫_a^b |f_n(x)|² dx,    (1.6)

since for z = x + iy ∈ C, z̄ = x − iy and z z̄ = x² + y² = |z|², so

∫_a^b f_n(x) f̄_n(x) dx = ∫_a^b |f_n(x)|² dx.

Remark 1.2.6. This approach to generalized Fourier series holds in any
pre-Hilbert space.

1.3 Trigonometric Fourier Series
Consider the interval [a, b] = [0, T], T > 0. The existence of an orthogonal
system is proven by the following result.

Proposition 1.3.1. The sequence

1, cos ωx, sin ωx, cos 2ωx, sin 2ωx, . . . , cos nωx, sin nωx, . . .    (1.7)

is an orthogonal system in R(0, T), where ω = 2π/T.

Proof. Notice that 1 denotes the function f(x) = 1 = cos(0x), x ∈ [0, T].
Let us calculate the inner products for (1.7). We obtain that

(1, 1) = ∫_0^T dx = x |_0^T = T,


i.e. the magnitude of the pulse signal f(x) = 1 is ‖1‖ = √T.
For n ≥ 0 and m ≥ 1 we have

(cos nωx, cos mωx) = ∫_0^T cos nωx cos mωx dx.

Using the formula cos αx cos βx = [cos(α + β)x + cos(α − β)x]/2, one obtains

(cos nωx, cos mωx) = (1/2) [ ∫_0^T cos(m + n)ωx dx + ∫_0^T cos(m − n)ωx dx ].

We analyze two cases: n ≠ m and n = m.
If n ≠ m, then we get

(cos nωx, cos mωx) = (1/2) [ sin(m + n)ωx/((m + n)ω) |_0^T + sin(m − n)ωx/((m − n)ω) |_0^T ] = 0,

since ω = 2π/T ⇒ ωT = 2π and sin(m ± n)2π = 0.
If n = m, then we have cos(m − n)ωx = cos 0 = 1. Hence,

(cos nωx, cos mωx) = (1/2) [ sin 2nωx/(2nω) |_0^T + x |_0^T ] = T/2.

Therefore,

(cos nωx, cos mωx) = { 0, n ≠ m;  T/2, n = m }

and ‖cos nωx‖ = √(T/2).
Similarly, since sin αx sin βx = [cos(α − β)x − cos(α + β)x]/2, one obtains,
for n ≥ 0 and m ≥ 1,

(sin nωx, sin mωx) = { 0, n ≠ m;  T/2, n = m }

and ‖sin nωx‖ = √(T/2).
2
Also, since sin αx cos βx = [sin(α + β)x + sin(α − β)x]/2, it follows that, for
n ≥ 0 and m ≥ 1,

(sin nωx, cos mωx) = (1/2) [ ∫_0^T sin(n + m)ωx dx + ∫_0^T sin(n − m)ωx dx ].

We analyze again two cases: n ≠ m and n = m.
If n ≠ m, one gets

(sin nωx, cos mωx) = (1/2) [ −cos(n + m)ωx/((n + m)ω) |_0^T − cos(n − m)ωx/((n − m)ω) |_0^T ]
 = −(1/2) [ (cos(n + m)ωT − 1)/((n + m)ω) + (cos(n − m)ωT − 1)/((n − m)ω) ]
 = −(1/2) [ (1 − 1)/((n + m)ω) + (1 − 1)/((n − m)ω) ] = 0,

since ω = 2π/T ⇒ ωT = 2π and cos(n ± m)2π = cos 0 = 1.
If n = m, then

(sin nωx, cos mωx) = (1/2) [ ∫_0^T sin 2nωx dx + ∫_0^T sin 0 dx ] = 0.

Hence, (sin nωx, cos mωx) = 0, for n ≥ 0 and m ≥ 1.

Therefore, the inner product of any two distinct functions in (1.7) is equal to
0. Hence, (1.7) is an orthogonal system in R(0, T).
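Proposition 1.3.1 can also be confirmed numerically; the Python sketch below (midpoint-rule integration; T = 2 is an arbitrary choice of period) checks a few of the inner products:

```python
import math

T = 2.0                 # any period works
w = 2 * math.pi / T     # ω = 2π/T

def inner(f, g, n=20000):
    # (f, g) = ∫_0^T f(x) g(x) dx, midpoint rule.
    h = T / n
    return sum(f((k + 0.5) * h) * g((k + 0.5) * h) for k in range(n)) * h

def cos_n(n):
    return lambda x: math.cos(n * w * x)

def sin_n(n):
    return lambda x: math.sin(n * w * x)

print(abs(inner(cos_n(2), cos_n(3))) < 1e-9)           # distinct cosines: ≈ 0
print(abs(inner(sin_n(1), cos_n(4))) < 1e-9)           # sine vs cosine: ≈ 0
print(abs(inner(cos_n(2), cos_n(2)) - T / 2) < 1e-9)   # ‖cos 2ωx‖² = T/2
```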

Since the orthogonal system (1.7) includes functions of cosine and sine
type, the corresponding Fourier coefficients c_n will be denoted by two symbols,
a_n and b_n, while the coefficient of the pulse function 1 will be denoted by a_0/2.
One associates to any function f ∈ R(0, T) the following trigonometric
Fourier series, which corresponds to the generalized Fourier series (1.3):

f(x) ∼ a_0/2 + Σ_{n=1}^∞ (a_n cos nωx + b_n sin nωx).    (1.8)

The coefficients will be determined by applying formula (1.4) of the
generalized Fourier series, c_n = (f, f_n)/(f_n, f_n), and the inner products
from the proof of Proposition 1.3.1.
For the function f_0 = 1, x ∈ [0, T], the coefficient is

a_0/2 = (f(x), 1)/(1, 1) = (1/T) ∫_0^T f(x) dx.

For f_n = cos nωx and f_n = sin nωx, one obtains

a_n = (f(x), cos nωx)/(cos nωx, cos nωx) = ∫_0^T f(x) cos nωx dx / (T/2),

and

b_n = (f(x), sin nωx)/(sin nωx, sin nωx) = ∫_0^T f(x) sin nωx dx / (T/2),

respectively.

In conclusion, the coefficients of the trigonometric Fourier series (1.8) are
the following:

a_0 = (2/T) ∫_0^T f(x) dx,

a_n = (2/T) ∫_0^T f(x) cos nωx dx, ∀n ≥ 1,    (1.9)

b_n = (2/T) ∫_0^T f(x) sin nωx dx, ∀n ≥ 1.
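Formulas (1.9) are easy to check on a concrete signal. The sketch below (illustrative Python with a midpoint-rule quadrature, not the book's MATLAB/Maple code) computes the coefficients of the sawtooth f(x) = x on [0, 2π), whose known expansion has a_0 = 2π, a_n = 0 and b_n = −2/n:

```python
import math

def integrate(f, a, b, m=20000):
    # Midpoint-rule quadrature of ∫_a^b f(x) dx.
    h = (b - a) / m
    return sum(f(a + (k + 0.5) * h) for k in range(m)) * h

def trig_coeffs(f, T, N):
    # Formulas (1.9) with ω = 2π/T.
    w = 2 * math.pi / T
    a0 = 2 / T * integrate(f, 0.0, T)
    an = [2 / T * integrate(lambda x: f(x) * math.cos(n * w * x), 0.0, T)
          for n in range(1, N + 1)]
    bn = [2 / T * integrate(lambda x: f(x) * math.sin(n * w * x), 0.0, T)
          for n in range(1, N + 1)]
    return a0, an, bn

a0, an, bn = trig_coeffs(lambda x: x, 2 * math.pi, 3)
print(round(a0, 4))                  # ≈ 2π
print([round(v, 4) for v in an])     # ≈ 0 for every n
print([round(v, 4) for v in bn])     # ≈ -2, -1, -2/3
```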
Remark 1.3.2. A usual period in many problems is T = 2π. In this case
we have ω = 2π/T = 1 and the formula of the Fourier series (1.8) becomes

f(x) ∼ a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),    (1.10)

with the following coefficients corresponding to (1.9):

a_0 = (1/π) ∫_0^{2π} f(x) dx,

a_n = (1/π) ∫_0^{2π} f(x) cos nx dx, ∀n ≥ 1,    (1.11)

b_n = (1/π) ∫_0^{2π} f(x) sin nx dx, ∀n ≥ 1.

1.4 Convergence of the Fourier Series


One can write equality instead of '∼' in (1.8) if the Fourier series converges
to f. We will discuss three types of convergence: mean-square, pointwise and
uniform. For the proofs see [21], [29].
Let us consider the sequence (S_N(x))_{N≥1} of partial sums of the Fourier
series (1.8):

S_N(x) = a_0/2 + Σ_{n=1}^N (a_n cos nωx + b_n sin nωx).    (1.12)

Theorem 1.4.1 (Mean-square Convergence). The Fourier series of any
function f ∈ R(0, T) is mean-square convergent to f, i.e. for every ε > 0,
there exists N(ε) ∈ N such that for every N ≥ N(ε) we have ‖f − S_N‖ < ε.

Using the definition of the norm in R(0, T), this means that

√( ∫_0^T (f(x) − S_N(x))² dx ) < ε.

Remark 1.4.2. The Fourier series of f is mean-square convergent to f if
the series Σ_{n=1}^∞ (a_n² + b_n²) is convergent.

In the sequel we will consider functions f : R → R which are periodic of
period T, i.e. f(x + T) = f(x), for every x ∈ R.
The Fourier series (1.8) is pointwise convergent to f if for every ε > 0 and
every x ∈ [0, T] there exists N(ε, x) ∈ N such that for every N ≥ N(ε, x) we
have |f(x) − S_N(x)| < ε.
The Fourier series (1.8) is uniformly convergent to f if for every ε > 0
there exists N(ε) ∈ N such that for every N ≥ N(ε) we have |f(x) − S_N(x)| < ε,
for every x ∈ [0, T].
Remark 1.4.3. If the series Σ_{n=1}^∞ (|a_n| + |b_n|) is convergent, then the
Fourier series of f converges uniformly to f; therefore, f is continuous.
If the Fourier series converges pointwise, but not uniformly, then it
converges to a discontinuous function.
Remark 1.4.4. The implications that relate the three types of convergence
are the following:
Uniform conv. ⇒ Pointwise conv. ⇒ Mean-square conv.
There are some important criteria which provide conditions for the point-
wise convergence of the Fourier series.
Recall that a function f is piecewise continuous on an interval [a, b] if
there exists a finite number of points a = x_0 < x_1 < · · · < x_n = b such that f
is continuous on each interval (x_i, x_{i+1}) and the one-sided limits
f(x_i+) and f(x_{i+1}−) exist, for all i = 0, . . . , n − 1.
Similarly one defines piecewise differentiable functions.
Theorem 1.4.5 (Dirichlet). Consider a function f : R → R which is periodic,
piecewise continuous and piecewise differentiable, with finite one-sided
limits and derivatives. Then the Fourier series associated to f is convergent
at any point x ∈ R and its sum equals [f(x+) + f(x−)]/2.
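Dirichlet's theorem can be observed on the classical square wave of period 2π (f = 1 on (0, π), f = −1 on (−π, 0)), whose Fourier expansion is Σ 4/(nπ) sin nx over odd n. The Python sketch below (illustrative, not from the book) evaluates partial sums at a jump and away from it:

```python
import math

def S(N, x):
    # Partial sum of the square-wave Fourier series over odd n up to N.
    return sum(4 / (n * math.pi) * math.sin(n * x) for n in range(1, N + 1, 2))

# At the jump x = 0, every partial sum equals (f(0+) + f(0-))/2 = 0,
# exactly as Dirichlet's theorem predicts.
print(S(999, 0.0))
# Away from the jump, the partial sums approach f(π/2) = 1.
print(round(S(999, math.pi / 2), 2))
```

Near the jump the partial sums also overshoot by roughly 9% regardless of N, the well-known Gibbs phenomenon.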

It is a standard fact that a function f is continuous at a point x if and
only if f (x+) and f (x−) exist and f (x) = f (x−) = f (x+). In this case the
sum of the series becomes f (x). One obtains the following result.
Corollary 1.4.6. If the function f is periodic and continuous, then, for
every x ∈ R, we have

f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nωx + b_n sin nωx).    (1.13)

Remark 1.4.7. One says that the function f : R → R is standardized if f
has finite one-sided limits at any discontinuity point x and the equality

[f(x+) + f(x−)]/2 = f(x)

holds. So, if the periodic function f is not continuous, but standardized,
then (1.13) also holds for any x ∈ R.

1.5 Fourier Series of Even and Odd Functions


In many problems the signals (the functions f ) have some symmetries,
being odd or even. In these cases, the formulas of the Fourier series and
coefficients are simplified.

Figure 1.1: The Symmetry with respect to Oy

A function f : R → R is called even if f(−x) = f(x), for every x ∈ R.
Then the graph of f is symmetrical with respect to the y-axis (see Figure
1.1) and the area bounded by the graph, the x-axis and the lines x = −a and
x = 0 is equal to that corresponding to x = 0 and x = a (see Figure 1.2);
hence,

∫_{−a}^{a} f(x) dx = 2 ∫_0^a f(x) dx.

Figure 1.2: The Integral of an Even Function f

Figure 1.3: The Symmetry with respect to the Origin

A function f : R → R is called odd if f(−x) = −f(x), for every x ∈ R.
Then the graph of f is symmetrical with respect to O (the origin) (see Figure
1.3) and the area bounded by the graph and the x-axis for limits x = −a
and x = 0 is −A, where A is the area corresponding to x = 0 and x = a (see
Figure 1.4); hence,

∫_{−a}^{a} f(x) dx = 0.

Figure 1.4: The Integral of an Odd Function f


If the functions f and g are even, then their product is even, since
(f g)(−x) = f (−x)g(−x) = f (x)g(x) = (f g)(x).
Similarly, if f and g are odd, their product is even, since
(f g)(−x) = f (−x)g(−x) = [−f (x)][−g(x)] = f (x)g(x) = (f g)(x).
One knows that the cosine is an even function and the sine is an odd
function.
Proposition 1.5.1. If f : R → R is a periodic function of period T, which
is integrable on R, then

∫_a^{a+T} f(x) dx = ∫_0^T f(x) dx, ∀a ∈ R.

Proof. Consider a to be arbitrary, fixed for the entire proof. Then there
exists k ∈ Z such that kT ∈ [a, a + T]. We distinguish between two cases:
1. if a = kT, by the change of variable y = x − kT, one obtains

∫_a^{a+T} f(x) dx = ∫_0^T f(y + kT) dy = ∫_0^T f(y) dy = ∫_0^T f(x) dx,

since f(y + kT) = f(y) by the periodicity of f;
2. if a < kT (see Figure 1.5 and notice the equality of the two hatched
areas, which correspond to the definite integrals below), then one writes

∫_a^{a+T} f(x) dx = ∫_a^{kT} f(x) dx + ∫_{kT}^{a+T} f(x) dx.

Figure 1.5: Integrals of a Periodic Function

By the periodicity of f and using the change of variable y = x + T and
Case 1., one obtains

∫_a^{kT} f(x) dx = ∫_{a+T}^{(k+1)T} f(x) dx;

hence,

∫_a^{a+T} f(x) dx = ∫_{a+T}^{(k+1)T} f(x) dx + ∫_{kT}^{a+T} f(x) dx
 = ∫_{kT}^{(k+1)T} f(x) dx
 = ∫_0^T f(x) dx.
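Proposition 1.5.1 is also easy to verify numerically; the Python sketch below (midpoint-rule quadrature; the test windows are arbitrary) integrates a 2π-periodic function over several windows of length T and obtains the same value each time:

```python
import math

def integrate(f, a, b, m=20000):
    # Midpoint-rule quadrature of ∫_a^b f(x) dx.
    h = (b - a) / m
    return sum(f(a + (k + 0.5) * h) for k in range(m)) * h

T = 2 * math.pi
f = lambda x: math.sin(x) ** 2 + math.cos(3 * x)  # 2π-periodic

# ∫_a^{a+T} f = ∫_0^T f = π for every a (the cos 3x part integrates to 0).
for a in (0.0, 1.3, -7.5):
    print(abs(integrate(f, a, a + T) - math.pi) < 1e-9)
```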

It follows that if f is periodic of period T, then in the formulas (1.9) of
the coefficients one can replace the interval [0, T] by the interval [a, a + T],
for any a ∈ R. For a = −T/2, the coefficients (1.9) become

a_0 = (2/T) ∫_{−T/2}^{T/2} f(x) dx,

a_n = (2/T) ∫_{−T/2}^{T/2} f(x) cos nωx dx, ∀n ≥ 1,    (1.14)

b_n = (2/T) ∫_{−T/2}^{T/2} f(x) sin nωx dx, ∀n ≥ 1.

If the periodic function f is even, one obtains

∫_{−T/2}^{T/2} f(x) dx = 2 ∫_0^{T/2} f(x) dx

and, since f(x) cos nωx is even and f(x) sin nωx is odd, one gets

a_0 = (4/T) ∫_0^{T/2} f(x) dx,

a_n = (4/T) ∫_0^{T/2} f(x) cos nωx dx, ∀n ≥ 1,    (1.15)

b_n = 0, ∀n ≥ 1.

Finally one obtains the Fourier cosine series (from (1.13))

f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos nωx.    (1.16)

If the periodic function f is odd, one obtains

∫_{−T/2}^{T/2} f(x) dx = 0

and, since f(x) cos nωx is odd and f(x) sin nωx is even, one gets

a_0 = a_n = 0, ∀n ≥ 1,

b_n = (4/T) ∫_0^{T/2} f(x) sin nωx dx, ∀n ≥ 1.    (1.17)

Finally one obtains the Fourier sine series (from (1.13))

f(x) = Σ_{n=1}^∞ b_n sin nωx.    (1.18)

Remark 1.5.2. In some situations, for instance if one wants to solve the
boundary value problem for the wave equation or the heat equation, one
needs to expand in Fourier sine series a function f which represents the
initial position of the wave or the initial temperature on a bar of length l.

Based upon Remark 1.5.2, we need to solve the following:

Problem: Given a function f : (0, l] → R, expand f in a Fourier sine
series.
Solution: First we extend f to an odd function f̃ : [−l, l] → R. Hence
(see Figure 1.6),

f̃(x) = { f(x), x ∈ (0, l];  −f(−x), x ∈ [−l, 0);  0, x = 0. }

Figure 1.6: The Extension of f to an Odd Function f̃

In this case, the period is the length of the interval [−l, l], i.e. T = 2l
and ω = 2π/T = π/l. Therefore, the Fourier sine series expansion of the function
f(x), x ∈ (0, l], will be the Fourier sine series (1.18) of f̃(x) restricted to the
interval (0, l], and the Fourier coefficients b_n in (1.17) will contain f(x), since
f̃(x) = f(x) for x ∈ (0, T/2]. Since 4/T = 2/l and T/2 = l, we have

b_n = (2/l) ∫_0^l f(x) sin(nπx/l) dx    (1.19)

and

f(x) = Σ_{n=1}^∞ b_n sin(nπx/l).    (1.20)
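The sine expansion (1.19)-(1.20) can be tried on f(x) = x on (0, 1], whose known coefficients are b_n = 2(−1)^{n+1}/(nπ). A Python sketch (midpoint-rule quadrature, illustrative only):

```python
import math

def sine_coeff(f, l, n, m=20000):
    # Formula (1.19): b_n = (2/l) ∫_0^l f(x) sin(nπx/l) dx, midpoint rule.
    h = l / m
    return (2 / l) * sum(f((k + 0.5) * h) * math.sin(n * math.pi * (k + 0.5) * h / l)
                         for k in range(m)) * h

# f(x) = x on (0, 1]: expected b_n = 2(-1)^(n+1)/(nπ).
for n in range(1, 4):
    exact = 2 * (-1) ** (n + 1) / (n * math.pi)
    print(n, round(sine_coeff(lambda x: x, 1.0, n), 6), round(exact, 6))
```

This is exactly the expansion needed for the initial-condition data of the wave and heat equations mentioned in Remark 1.5.2.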

1.6 Complex Fourier Series


Similarly to the real case, we will use the pre-Hilbert structure of the
space R_C(a, b) to construct complex Fourier series. First of all, we need an
orthogonal system.

Proposition 1.6.1. The sequence

. . . , e^{−inωx}, . . . , e^{−i2ωx}, e^{−iωx}, 1, e^{iωx}, e^{i2ωx}, . . . , e^{inωx}, . . .    (1.21)

is an orthogonal system in R_C(−T/2, T/2), where ω = 2π/T.

Proof. The inner product in R_C(−T/2, T/2) is given by (see (1.5))

(f, g) = ∫_{−T/2}^{T/2} f(x) ḡ(x) dx.    (1.22)

Since e^{iθ} = cos θ + i sin θ and cosine is even and sine is odd, we have
\overline{e^{iθ}} = cos θ − i sin θ = cos(−θ) + i sin(−θ) = e^{−iθ}. Then the inner product
for functions from the sequence (1.21) is the following:

(e^{inωx}, e^{imωx}) = ∫_{−T/2}^{T/2} e^{inωx} e^{−imωx} dx = ∫_{−T/2}^{T/2} e^{i(n−m)ωx} dx,

which divides the problem in two cases: n ≠ m and n = m.

If n ≠ m, then we get

(e^{inωx}, e^{imωx}) = e^{i(n−m)ωx}/(i(n−m)ω) |_{−T/2}^{T/2}
 = (2/((n−m)ω)) · (e^{i(n−m)ωT/2} − e^{−i(n−m)ωT/2})/(2i)
 = (2/((n−m)ω)) · sin(n−m)π = 0,

since sin z = (e^{iz} − e^{−iz})/(2i) (Euler's formula) and ωT/2 = π.
If n = m, then we have

(e^{inωx}, e^{inωx}) = ∫_{−T/2}^{T/2} e^{i·0·ωx} dx = ∫_{−T/2}^{T/2} dx = x |_{−T/2}^{T/2} = T.

Therefore, the inner product is

(e^{inωx}, e^{imωx}) = { 0, n ≠ m;  T, n = m };    (1.23)

hence, the sequence (1.21) is an orthogonal system in R_C(−T/2, T/2).

Now we can solve the Main Problem of Fourier Analysis. Given a function
(signal) f ∈ R_C(−T/2, T/2), write f as a complex Fourier series with respect
to the orthogonal system (1.21), i.e. determine the coefficients c_n ∈ C such
that

f(x) = Σ_{n=−∞}^{∞} c_n e^{inωx}.    (1.24)

From the generalized Fourier coefficients formula of the type (1.4) we
obtain

c_n = (f, f_n)/(f_n, f_n) = (f(x), e^{inωx})/(e^{inωx}, e^{inωx}) = (1/T) ∫_{−T/2}^{T/2} f(x) \overline{e^{inωx}} dx.

Hence,

c_n = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−inωx} dx, n ∈ Z.    (1.25)
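Formula (1.25) can be checked numerically. Since cos ωx = (e^{iωx} + e^{−iωx})/2, its only nonzero complex coefficients should be c_{±1} = 1/2. An illustrative Python sketch (midpoint-rule quadrature over one period, not the book's MATLAB/Maple code):

```python
import cmath
import math

def complex_coeff(f, T, n, m=20000):
    # Formula (1.25): c_n = (1/T) ∫_{-T/2}^{T/2} f(x) e^{-inωx} dx.
    w = 2 * math.pi / T
    h = T / m
    s = sum(f(-T / 2 + (k + 0.5) * h) * cmath.exp(-1j * n * w * (-T / 2 + (k + 0.5) * h))
            for k in range(m))
    return s * h / T

T = 2 * math.pi
f = lambda x: math.cos(x)  # cos ωx with ω = 1

for n in (-1, 0, 1, 2):
    c = complex_coeff(f, T, n)
    print(n, round(c.real, 6), round(c.imag, 6))  # c_{±1} ≈ 1/2, others ≈ 0
```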

Under convergence conditions similar to those in the real case, the equality
(1.24) holds for every x ∈ R for periodic continuous or standardized functions f.

Connections between Real and Complex Fourier Series

It is obvious that R(−T/2, T/2) ⊂ R_C(−T/2, T/2), since R ⊂ C and a
function f : [−T/2, T/2] → R can be considered as having complex values.
Therefore, such a function can be written using both real and complex Fourier
series expansions.
For n = 0 one obtains from (1.25) and (1.9)

c_0 = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−i0ωx} dx = (1/T) ∫_{−T/2}^{T/2} f(x) dx = a_0/2.

For n ≥ 1, again by (1.25) and (1.9) and using the fact that e^{−iθ} = cos θ − i sin θ,
we have

c_n = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−inωx} dx
 = (1/T) ∫_{−T/2}^{T/2} f(x) cos nωx dx − i (1/T) ∫_{−T/2}^{T/2} f(x) sin nωx dx
 = a_n/2 − i b_n/2.

Similarly, for n ≥ 1 and since e^{iθ} = cos θ + i sin θ, one gets

c_{−n} = (1/T) ∫_{−T/2}^{T/2} f(x) e^{inωx} dx = a_n/2 + i b_n/2.

Hence, the real and the complex Fourier coefficients are related by the
formulas

c_0 = a_0/2,
c_n = a_n/2 − i b_n/2,    (1.26)
c_{−n} = a_n/2 + i b_n/2.

Note that for a real function f we get c_{−n} = c̄_n.
In many problems it is more practical to compute the coefficients c_{−n}
using the substitution z = e^{iωx} and the Residue Theorems ([26, Chapter 5]).
This is due to the fact that in c_n this change of variable introduces an nth
order pole; alternatively, one can use the substitution z = e^{−iωx}, both
methods being rather complicated.
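Relations (1.26) are easy to confirm numerically for a sample real signal; below is a Python sketch (midpoint-rule quadrature; f(x) = e^{sin x} is an arbitrary smooth 2π-periodic choice, not an example from the book):

```python
import cmath
import math

T = 2 * math.pi
w = 2 * math.pi / T  # = 1 here

def mid(g, a, b, m=20000):
    # Midpoint-rule quadrature (works for complex-valued g too).
    h = (b - a) / m
    return sum(g(a + (k + 0.5) * h) for k in range(m)) * h

f = lambda x: math.exp(math.sin(x))  # an arbitrary real periodic signal

def a_n(n):
    return 2 / T * mid(lambda x: f(x) * math.cos(n * w * x), -T / 2, T / 2)

def b_n(n):
    return 2 / T * mid(lambda x: f(x) * math.sin(n * w * x), -T / 2, T / 2)

def c_n(n):
    return 1 / T * mid(lambda x: f(x) * cmath.exp(-1j * n * w * x), -T / 2, T / 2)

# Check (1.26): c_n = (a_n - i b_n)/2 and, for real f, c_{-n} = conj(c_n).
n = 3
print(abs(c_n(n) - (a_n(n) - 1j * b_n(n)) / 2) < 1e-9)
print(abs(c_n(-n) - c_n(n).conjugate()) < 1e-9)
```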

1.7 Signal Fourier Series Representation


In the signal theory approach, the signals e^{inωx}, n ∈ Z, belonging to the
orthogonal system (1.21) are called basis functions for the Fourier analysis,
and (1.21) is said to be a set of harmonically related exponentials.
Since e^{i2π} = cos(2π) + i sin(2π) = 1, one gets, for n ≠ 0,

e^{inωx} = e^{i(nωx+2π)} = e^{inω(x + 2π/(nω))},

i.e. e^{inωx} is a periodic function of period 2π/(nω). For n = 1 the common
period of the set (1.21) is T = 2π/ω seconds, where F = 1/T is the frequency
in Hz and ω = 2π/T = 2πF is the angular frequency in rad/s. The Fourier
series expansion formula (1.24) is referred to as the synthesis equation of the
periodic signal f,

f(x) = Σ_{n=−∞}^{∞} c_n e^{inωx},    (SE)

and the formula of the Fourier coefficients (1.25),
\[
c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(x)e^{-in\omega x}\,dx, \quad n \in \mathbb{Z}, \tag{AE}
\]
is called the analysis equation.

The Analysis and the Synthesis Equations Using the Fourier Series
Representations of the Periodic Signal f

Write the coefficient $c_n$ in the exponential form $c_n = A_n e^{i\varphi_n}$. The corresponding term in the synthesis equation (SE) is
\[
c_n e^{in\omega x} = A_n e^{i(n\omega x + \varphi_n)}.
\]
Hence, $A_n = |c_n|$ represents the amplitude (a measure of the strength) and $\varphi_n$ represents the phase of the frequency content of the signal at the angular frequency $n\omega$ rad/s.
By (AE), it follows that the Fourier coefficient $c_n$ may be considered as a measure of the correlation of the signal $f(x)$ with the signal $e^{-in\omega x}$. The set of coefficients $\{c_n : n \in \mathbb{Z}\}$ is called the signal spectrum. It can be represented as a frequency spectrum, with a vertical spectral line drawn at each point $n\omega$ whose endpoint is the value $c_n$ in the complex plane (see Figure 1.7).

Figure 1.7: A Frequency Spectrum

Example 1.7.1. Determine the signal spectrum of the 100 Hz sinewave $f(x) = 4\sin(200\pi x)$, $x \in \mathbb{R}$ (see Figure 1.8).

Figure 1.8: The Graph of the Sinewave f

Solution. It follows that the frequency is $F = 100$ Hz; hence, $T = \frac{1}{F} = 0.01$ s and $\omega = \frac{2\pi}{T} = 200\pi$.
Method 1. We will use Euler's formula $\sin z = \dfrac{e^{iz} - e^{-iz}}{2i}$. Hence,
\[
f(x) = \frac{2}{i}\left(e^{i200\pi x} - e^{-i200\pi x}\right).
\]
Since $\frac{1}{i} = -i$, one gets
\[
f(x) = 2ie^{-i200\pi x} - 2ie^{i200\pi x}.
\]

The complex Fourier series (SE) is
\[
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{in200\pi x}.
\]
One identifies the coefficients of the exponentials in these two formulas and obtains
\[
c_{-1} = 2i, \quad c_1 = -2i, \quad c_n = 0, \; n \in \mathbb{Z}\setminus\{-1, 1\},
\]
with amplitudes $|c_{-1}| = |c_1| = 2$ (see Figure 1.9a).
Method 2 (general). Using formula (AE) of the complex Fourier series, one gets
\[
c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(x)e^{-in\omega x}\,dx
= \frac{1}{0.01}\int_{-0.005}^{0.005} \frac{2}{i}\left(e^{i200\pi x} - e^{-i200\pi x}\right)e^{-in200\pi x}\,dx
\]
\[
= \frac{200}{i}\left(\int_{-0.005}^{0.005} e^{-i(n-1)200\pi x}\,dx - \int_{-0.005}^{0.005} e^{-i(n+1)200\pi x}\,dx\right).
\]
For $n \in \mathbb{Z}\setminus\{-1, 1\}$, one obtains
\[
c_n = -200i\left(-\left.\frac{e^{-i(n-1)200\pi x}}{i(n-1)200\pi}\right|_{-0.005}^{0.005} + \left.\frac{e^{-i(n+1)200\pi x}}{i(n+1)200\pi}\right|_{-0.005}^{0.005}\right).
\]
Computing the first term gives us
\[
\left.e^{-i(n-1)200\pi x}\right|_{-0.005}^{0.005} = e^{-i(n-1)\pi} - e^{i(n-1)\pi} = -2i\sin(n-1)\pi = 0.
\]
Similarly, $\left.e^{-i(n+1)200\pi x}\right|_{-0.005}^{0.005} = 0$. Hence,
\[
c_n = 0 \quad\text{for } n \in \mathbb{Z}\setminus\{-1, 1\}.
\]

For $n = -1$, one has
\[
c_{-1} = \frac{200}{i}\left(\int_{-0.005}^{0.005} e^{400i\pi x}\,dx - \int_{-0.005}^{0.005} dx\right)
= -200i\left(-x\Big|_{-0.005}^{0.005}\right)
= 200i \cdot 0.01 = 2i.
\]
Similarly, one obtains $c_1 = -2i$ (see Figure 1.9b).
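The spectrum found above can also be cross-checked with the FFT. The following sketch is our own verification (in Python/NumPy rather than the book's MATLAB); the sampling rate of 1000 Hz over one second is an arbitrary choice that places the $\pm 100$ Hz components exactly on FFT bins.

```python
import numpy as np

# Spectrum of f(x) = 4*sin(200*pi*x) via the FFT: c_1 = -2i at +100 Hz,
# c_{-1} = +2i at -100 Hz, all other coefficients zero.
fs = 1000                       # sampling frequency in Hz (our choice)
t = np.arange(fs) / fs          # one second = 100 full periods of f
f = 4 * np.sin(200 * np.pi * t)
C = np.fft.fft(f) / fs          # C[k] approximates c at frequency k Hz

assert abs(C[100] - (-2j)) < 1e-9          # c_1  = -2i
assert abs(C[-100] - 2j) < 1e-9            # c_{-1} = 2i
assert max(abs(C[k]) for k in range(101, 500)) < 1e-9  # other c_n = 0
```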

Figure 1.9a: The Amplitude Spectrum

Figure 1.9b: The Frequency Spectrum

Example 1.7.2 ([34]). Determine the frequency spectrum of a periodic train of pulses with amplitude $A = 0.5$, frequency $F = 100$ Hz and a pulse of duration $d = 0.004$ s.

Solution. The pulse period is $T = \frac{1}{F} = 0.01$ s and the angular frequency is $\omega = \frac{2\pi}{T} = 200\pi$ rad/s.
In the interval $\left[-\frac{T}{2}, \frac{T}{2}\right)$ the signal is
\[
f(x) = \begin{cases} A, & x \in \left[-\frac{d}{2}, \frac{d}{2}\right] \\[4pt] 0, & x \in \left[-\frac{T}{2}, -\frac{d}{2}\right) \cup \left(\frac{d}{2}, \frac{T}{2}\right) \end{cases}
\quad\text{(see Figure 1.10a)}.
\]

Applying the analysis equation (AE), for $n \ge 1$, one obtains
\[
c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(x)e^{-in\omega x}\,dx = \frac{1}{0.01}\int_{-d/2}^{d/2} A e^{-in\omega x}\,dx
= \frac{1}{0.01}\int_{-0.002}^{0.002} 0.5\,e^{-in200\pi x}\,dx
\]
\[
= 50\left.\frac{e^{-in200\pi x}}{-in200\pi}\right|_{-0.002}^{0.002}
= -\frac{1}{2n\pi}\cdot\frac{e^{-in0.4\pi} - e^{in0.4\pi}}{2i}
= \frac{\sin(0.4n\pi)}{2n\pi}.
\]
Due to the fact that the function $f$ and the coefficients $c_n$ are real, for $n \ge 1$ one gets
\[
c_{-n} = \overline{c_n} = c_n = \frac{\sin(0.4n\pi)}{2n\pi}.
\]
For $n = 0$, by (AE), one obtains
\[
c_0 = \frac{1}{T}\int_{-T/2}^{T/2} f(x)\,dx = \frac{1}{0.01}\int_{-0.002}^{0.002} 0.5\,dx = 50x\Big|_{-0.002}^{0.002} = 0.2.
\]
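The coefficients just obtained can be confirmed by direct numerical integration. The sketch below is our own check (Python/NumPy, not part of the book's solution); the grid resolution is an arbitrary choice.

```python
import numpy as np

# Check the pulse-train coefficients c_n = sin(0.4*n*pi)/(2*n*pi), c_0 = 0.2
# by a rectangle-rule approximation of the analysis equation (AE).
T, d, A = 0.01, 0.004, 0.5
N = 100000
x = -T / 2 + T * np.arange(N) / N
f = np.where(np.abs(x) <= d / 2, A, 0.0)

def c(n):
    # (1/T) * integral of f(x)*exp(-i*n*omega*x) over one period
    return np.mean(f * np.exp(-1j * n * 200 * np.pi * x))

assert abs(c(0) - 0.2) < 1e-4
for n in range(1, 6):
    assert abs(c(n) - np.sin(0.4 * n * np.pi) / (2 * n * np.pi)) < 1e-4
```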

Remark 1.7.3. This example (Example 1.7.2) and the one that follows (Example 1.7.4) are also solved at the end of this chapter using MATLAB (see Examples 1.9.9 and 1.9.8, respectively). Note that Figures 1.10a and 1.10b below are the same as Figures 1.15 and 1.16, respectively.

Figure 1.10a: The Pulse Train

Figure 1.10b: The Frequency Spectrum

Example 1.7.4. Determine the synthesis of the pulse train from Example 1.7.2, up to the $N$th harmonic.

Solution. We determine the truncated Fourier series from the synthesis equation (SE). We get
\[
f_N(x) = \sum_{n=-N}^{N} c_n e^{in\omega x} = c_0 + \sum_{n=1}^{N} c_n e^{in\omega x} + \sum_{n=1}^{N} c_{-n} e^{-in\omega x}.
\]
Hence,
\[
f_N(x) = 0.2 + \sum_{n=1}^{N} \frac{\sin(0.4n\pi)}{2n\pi}\left(e^{in200\pi x} + e^{-in200\pi x}\right),
\]
so
\[
f_N(x) = 0.2 + \frac{1}{\pi}\sum_{n=1}^{N} \frac{1}{n}\sin(0.4n\pi)\cos(200n\pi x).
\]
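The simplification of the symmetric complex partial sum into the real cosine form above can be verified numerically. The following sketch is our own check (Python/NumPy); the values of $N$ and of the sample points $x$ are arbitrary choices.

```python
import numpy as np

# Verify that the symmetric partial sum of the complex series equals
# 0.2 + (1/pi) * sum_{n=1}^{N} sin(0.4*n*pi)*cos(200*n*pi*x)/n.
def c(n):
    return 0.2 if n == 0 else np.sin(0.4 * n * np.pi) / (2 * n * np.pi)

def fN_complex(x, N):
    return sum(c(n) * np.exp(1j * n * 200 * np.pi * x) for n in range(-N, N + 1))

def fN_real(x, N):
    return 0.2 + sum(np.sin(0.4 * n * np.pi) * np.cos(200 * n * np.pi * x)
                     / (np.pi * n) for n in range(1, N + 1))

for x in (0.0, 0.001, 0.003):
    z = fN_complex(x, 50)
    assert abs(z.imag) < 1e-12           # the symmetric sum is real
    assert abs(z.real - fN_real(x, 50)) < 1e-12
```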

1.8 Exercises
In what follows, the function $f : \mathbb{R} \to \mathbb{R}$ is a periodic one, with period $T$ and $\omega = \frac{2\pi}{T}$.
T
E 1. Determine the trigonometric Fourier expansion of the following even functions:
a) $f : [-1, 1) \to \mathbb{R}$, $f(x) = x^2$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \cos\frac{x}{2}$;
c) $f : [-a, a) \to \mathbb{R}$, $f(x) = \begin{cases} x + a, & x \in [-a, 0] \\ -x + a, & x \in (0, a) \end{cases}$, $a > 0$.
Solution. Since all the functions are even, we are using formulas (1.15),
\[
a_0 = \frac{4}{T}\int_0^{T/2} f(x)\,dx, \qquad
a_n = \frac{4}{T}\int_0^{T/2} f(x)\cos n\omega x\,dx, \;\forall n \ge 1, \qquad
b_n = 0, \;\forall n \ge 1,
\]
and the Fourier cosine series (1.16),
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos n\omega x.
\]

a) As $\frac{T}{2} = 1$, it follows that $T = 2$ and $\omega = \frac{2\pi}{T} = \pi$. For $n \neq 0$, one gets
\[
a_n = \frac{4}{T}\int_0^{T/2} f(x)\cos(n\omega x)\,dx = 2\int_0^1 x^2\cos(n\pi x)\,dx.
\]
Integrating by parts, we get
\[
a_n = 2\left(\left.x^2\cdot\frac{\sin(n\pi x)}{n\pi}\right|_0^1 - \int_0^1 2x\cdot\frac{\sin(n\pi x)}{n\pi}\,dx\right).
\]
Since $\sin(k\pi) = 0$ for every $k \in \mathbb{Z}$, one has $\left.\frac{\sin(n\pi x)}{n\pi}\right|_0^1 = 0$ and
\[
a_n = -\frac{4}{n\pi}\int_0^1 x\sin(n\pi x)\,dx,
\]
and, again integrating by parts, one gets
\[
a_n = -\frac{4}{n\pi}\left(\left.x\cdot\frac{-\cos(n\pi x)}{n\pi}\right|_0^1 + \int_0^1 \frac{\cos(n\pi x)}{n\pi}\,dx\right)
= \frac{-4}{n^2\pi^2}\left(-\cos(n\pi) + 0 + \int_0^1 \cos(n\pi x)\,dx\right).
\]
Since $\cos(k\pi) = (-1)^k$ for every $k \in \mathbb{Z}$, one obtains
\[
a_n = \frac{-4}{n^2\pi^2}\left(-(-1)^n + \left.\frac{\sin(n\pi x)}{n\pi}\right|_0^1\right) = \frac{4(-1)^n}{n^2\pi^2}.
\]
Also,
\[
a_0 = \frac{4}{T}\int_0^{T/2} f(x)\,dx = 2\int_0^1 x^2\,dx = 2\cdot\left.\frac{x^3}{3}\right|_0^1 = \frac{2}{3}.
\]
Therefore, the Fourier cosine series (1.16) is the following:
\[
f(x) = \frac{1}{3} + \sum_{n=1}^{\infty} \frac{4(-1)^n}{n^2\pi^2}\cos(n\pi x);
\]
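The coefficients just obtained can be confirmed numerically. The sketch below is our own check, written in Python/NumPy instead of the book's MATLAB; the midpoint grid size is an arbitrary choice.

```python
import numpy as np

# Check a_0 = 2/3 and a_n = 4*(-1)**n/(n**2*pi**2) for f(x) = x**2 on [-1, 1)
# using a midpoint-rule approximation of (1.15) with T = 2, omega = pi.
N = 100000
x = (np.arange(N) + 0.5) / N    # midpoints of a uniform grid on [0, 1]
dx = 1.0 / N

a_0 = 2 * np.sum(x**2) * dx     # (4/T) * int_0^{T/2} f(x) dx
assert abs(a_0 - 2 / 3) < 1e-8

for n in range(1, 6):
    a_n = 2 * np.sum(x**2 * np.cos(n * np.pi * x)) * dx
    assert abs(a_n - 4 * (-1)**n / (n**2 * np.pi**2)) < 1e-8
```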


b) In this case, T = 2π and ω = = 1. For n ∈ N, one gets
T
Z π
2 x
an = cos cos(nx) dx.
π 0 2

One method to solve this integral is to use integration by parts two times
until we reach the initial integral. An easier way to solve it is using the
cos(A + B) + cos(A − B)
trigonometric identity cos A cos B = . Our solution
2

29
follows the last method, so
x  x 
2
Z π cos + nx + cos − nx
an = 2 2 dx
π 0 2
 x  x  
1 sin + nx
π − nx π
sin
= 2 + 2 
π
 1 0 1 0

+n −n
2 2
π π
   
1 sin + nπ sin − nπ
=
 2 + 2 
π
 1 1 
+n −n
2 2
(−1)n (−1)n
 
2
= + ,
π 1 + 2n 1 − 2n

where we haveused the fact that sin(kπ) = 0 and cos(kπ) = (−1)k , for every
π  π 
k ∈ Z and sin = 1 and cos = 0. Finally, we obtain
2 2
2 (−1)n (−1)n 4(−1)n
 
2 1 − 2n + 1 + 2n
an = + = · (−1)n · = .
π 1 + 2n 1 − 2n π 1 − 4n2 π(1 − 4n2 )
4
Replacing n with 0 into the expression of an , we get a0 = . Therefore, the
π
Fourier cosine series (1.16) is the following:

2 X 4(−1)n
f (x) = + cos(nx);
π n=1 π(1 − 4n2 )

c) The fact that the function $f$ is even is not obvious. But if one makes the computations, one gets
\[
f(-x) = \begin{cases} -x + a, & -x \in (-a, 0) \\ -(-x) + a, & -x \in [0, a) \end{cases}
= \begin{cases} -x + a, & x \in (0, a) \\ x + a, & x \in [-a, 0] \end{cases} = f(x).
\]
In this exercise, $T = 2a$ and $\omega = \frac{2\pi}{T} = \frac{\pi}{a}$. For $n \neq 0$, one obtains
\[
a_n = \frac{2}{a}\int_0^a f(x)\cos\frac{n\pi x}{a}\,dx = \frac{2}{a}\int_0^a (-x + a)\cos\frac{n\pi x}{a}\,dx.
\]
Integrating by parts, one gets
\[
a_n = \frac{2}{a}\left(\left.(-x + a)\cdot\frac{a\sin\frac{n\pi x}{a}}{n\pi}\right|_0^a + \frac{a}{n\pi}\int_0^a \sin\frac{n\pi x}{a}\,dx\right)
= \frac{2}{a}\left(0 - 0 - \frac{a}{n\pi}\cdot\left.\frac{a\cos\frac{n\pi x}{a}}{n\pi}\right|_0^a\right)
= \frac{2a}{n^2\pi^2}\left((-1)^{n+1} + 1\right).
\]
For $n = 0$, one finds $a_0 = \frac{2}{a}\displaystyle\int_0^a f(x)\,dx = \frac{2}{a}\int_0^a (-x + a)\,dx = a$. It follows that the Fourier cosine series (1.16) is the following:
\[
f(x) = \frac{a}{2} + \sum_{n=1}^{\infty} \frac{2a\left[(-1)^{n+1} + 1\right]}{n^2\pi^2}\cos\frac{n\pi x}{a}.
\]

W 1. Determine the trigonometric Fourier expansion of the following even functions:
a) $f : [-1, 1) \to \mathbb{R}$, $f(x) = |x| + a$, $a \in \mathbb{R}$;
b) $f : [-2, 2) \to \mathbb{R}$, $f(x) = \sin^2(\pi x)$;
c) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \begin{cases} x - \sin x, & x \in [-\pi, 0] \\ -x + \sin x, & x \in (0, \pi) \end{cases}$.
Answer. a) For $n \neq 0$, $a_n = \frac{2}{n^2\pi^2}\left[(-1)^n - 1\right]$. Also, $a_0 = 1 + 2a$ and the Fourier cosine series (1.16) is the following:
\[
f(x) = \frac{1 + 2a}{2} + \sum_{n=1}^{\infty} \frac{2\left[(-1)^n - 1\right]}{n^2\pi^2}\cos(n\pi x);
\]
b) One can notice that since $\sin^2 A = \frac{1 - \cos(2A)}{2}$, we immediately find the finite expansion $f(x) = \frac{1}{2} - \frac{1}{2}\cos(2\pi x)$. By computing the coefficients with formulas (1.15), one gets
\[
a_n = \int_0^2 \sin^2(\pi x)\cos\frac{n\pi x}{2}\,dx = \int_0^2 \frac{1 - \cos(2\pi x)}{2}\cos\frac{n\pi x}{2}\,dx
= \frac{1}{2}\left(\int_0^2 \cos\frac{n\pi x}{2}\,dx - \int_0^2 \cos(2\pi x)\cos\frac{n\pi x}{2}\,dx\right).
\]
Using the formula of $f(x)$, one gets $a_n = 0$ for $n \neq 0, 4$, and
\[
a_4 = \int_0^2 \sin^2(\pi x)\cos(2\pi x)\,dx = -\frac{1}{2}
\quad\text{and}\quad
a_0 = \int_0^2 \sin^2(\pi x)\,dx = 1;
\]
c) The Fourier cosine series (1.16) is the following:
\[
f(x) = \frac{2}{\pi} - \frac{\pi}{2} + \frac{4}{\pi}\cos x + \frac{2}{\pi}\sum_{n=2}^{\infty}\left[\frac{(-1)^{n-1} + 1}{n^2} - \frac{(-1)^{n-1} - 1}{1 - n^2}\right]\cos(nx).
\]

E 2. Determine the trigonometric Fourier expansion of the following odd functions:
a) $f : [-2, 2) \to \mathbb{R}$, $f(x) = ax$, $a \neq 0$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \sinh x$;
c) $f : [-3, 3) \to \mathbb{R}$, $f(x) = \begin{cases} -1 - x, & x \in [-3, -1) \\ 0, & x \in [-1, 1] \\ 1 - x, & x \in (1, 3) \end{cases}$.
Solution. Since all the functions are odd, we are using formulas (1.17),
\[
a_0 = a_n = 0, \;\forall n \ge 1, \qquad
b_n = \frac{4}{T}\int_0^{T/2} f(x)\sin n\omega x\,dx, \;\forall n \ge 1,
\]
and the Fourier sine series (1.18),
\[
f(x) = \sum_{n=1}^{\infty} b_n\sin n\omega x.
\]
a) Since $\frac{T}{2} = 2$, it follows that $T = 4$ and $\omega = \frac{2\pi}{T} = \frac{\pi}{2}$. Applying formula (1.17) we get
\[
b_n = \frac{2}{2}\int_0^2 ax\sin\frac{n\pi x}{2}\,dx = a\int_0^2 x\sin\frac{n\pi x}{2}\,dx.
\]

Integrating by parts and using that $\sin(k\pi) = 0$ and $\cos(k\pi) = (-1)^k$ for every $k \in \mathbb{Z}$, one gets
\[
b_n = a\left(\left.\frac{-2x\cos\frac{n\pi x}{2}}{n\pi}\right|_0^2 + \frac{2}{n\pi}\int_0^2 \cos\frac{n\pi x}{2}\,dx\right)
= a\left(\frac{-4\cdot(-1)^n}{n\pi} + \frac{2}{n\pi}\cdot\left.\frac{2\sin\frac{n\pi x}{2}}{n\pi}\right|_0^2\right)
= a\cdot\frac{4\cdot(-1)^{n+1}}{n\pi}.
\]
It follows that the Fourier sine series is the following:
\[
f(x) = a\sum_{n=1}^{\infty} \frac{4\cdot(-1)^{n+1}}{n\pi}\sin\frac{n\pi x}{2};
\]

b) We need to remember the following definitions:
\[
\sinh(x) = \frac{e^x - e^{-x}}{2} \quad\text{and}\quad \cosh(x) = \frac{e^x + e^{-x}}{2}.
\]
It follows that $(\sinh(x))' = \cosh(x)$ and $(\cosh(x))' = \sinh(x)$. The function $\sinh$ is an odd function, since $\sinh(-x) = \frac{e^{-x} - e^x}{2} = -\frac{e^x - e^{-x}}{2} = -\sinh(x)$ for every $x \in \mathbb{R}$.
Since $\frac{T}{2} = \pi$, it follows that $T = 2\pi$ and $\omega = \frac{2\pi}{T} = 1$. Therefore, for $n \ge 1$, one gets $b_n = \frac{2}{\pi}\displaystyle\int_0^{\pi} \sinh(x)\sin(nx)\,dx$. We will use integration by parts twice.
First one obtains
\[
b_n = \frac{2}{\pi}\int_0^{\pi} (\cosh(x))'\sin(nx)\,dx
= \frac{2}{\pi}\left(\cosh(x)\sin(nx)\Big|_0^{\pi} - \int_0^{\pi} \cosh(x)\,n\cos(nx)\,dx\right)
= \frac{-2n}{\pi}\int_0^{\pi} \cosh(x)\cos(nx)\,dx.
\]
Then
\[
b_n = \frac{-2n}{\pi}\int_0^{\pi} (\sinh(x))'\cos(nx)\,dx
= \frac{-2n}{\pi}\left(\sinh(x)\cos(nx)\Big|_0^{\pi} + \int_0^{\pi} \sinh(x)\,n\sin(nx)\,dx\right)
\]
\[
= \frac{-2n}{\pi}\left(\sinh\pi\cdot(-1)^n + n\int_0^{\pi} \sinh(x)\sin(nx)\,dx\right)
= \frac{-2n}{\pi}\left(\sinh\pi\cdot(-1)^n + \frac{n\pi}{2}\,b_n\right)
= \frac{-2n}{\pi}\sinh\pi\cdot(-1)^n - n^2 b_n.
\]
In conclusion, $(n^2 + 1)b_n = \frac{-2n}{\pi}\sinh\pi\cdot(-1)^n$ and
\[
b_n = \frac{2n(-1)^{n+1}\sinh\pi}{\pi(n^2 + 1)}.
\]
The Fourier sine series (1.18) is the following:
\[
f(x) = \sum_{n=1}^{\infty} \frac{2n(-1)^{n+1}\sinh\pi}{\pi(n^2 + 1)}\sin(nx);
\]

c) Let us verify first that the function $f$ is indeed odd. One obtains
\[
f(-x) = \begin{cases} -1 + x, & -x \in [-3, -1) \\ 0, & -x \in [-1, 1] \\ 1 + x, & -x \in (1, 3) \end{cases}
= \begin{cases} -1 + x, & x \in (1, 3] \\ 0, & x \in [-1, 1] \\ 1 + x, & x \in [-3, -1) \end{cases}
= \begin{cases} -(1 - x), & x \in (1, 3] \\ 0, & x \in [-1, 1] \\ -(-1 - x), & x \in [-3, -1) \end{cases} = -f(x),
\]
for every $x \in [-3, 3)$.
Since $T = 6$ and $\omega = \frac{2\pi}{T} = \frac{\pi}{3}$, it follows that, for $n \ge 1$, the coefficients are the following:
\[
b_n = \frac{2}{3}\int_0^3 f(x)\sin\frac{n\pi x}{3}\,dx = \frac{2}{3}\left(\int_0^1 0\,dx + \int_1^3 (1 - x)\sin\frac{n\pi x}{3}\,dx\right).
\]
Integrating by parts, one finds
\[
b_n = \frac{2}{n\pi}\left(\left.-(1 - x)\cos\frac{n\pi x}{3}\right|_1^3 - \int_1^3 \cos\frac{n\pi x}{3}\,dx\right)
= \frac{2}{n\pi}\left(2(-1)^n - \left.\frac{3}{n\pi}\sin\frac{n\pi x}{3}\right|_1^3\right)
= \frac{2}{n\pi}\left(2(-1)^n + \frac{3}{n\pi}\sin\frac{n\pi}{3}\right).
\]
The Fourier sine series (1.18) is the following:
\[
f(x) = \sum_{n=1}^{\infty} \frac{2}{n\pi}\left(2(-1)^n + \frac{3}{n\pi}\sin\frac{n\pi}{3}\right)\sin\frac{n\pi x}{3}.
\]

W 2. Determine the trigonometric Fourier expansion of the following odd functions:
a) $f : \left[-\frac{\pi}{2}, \frac{\pi}{2}\right) \to \mathbb{R}$, $f(x) = x\cos x$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x^3$;
c) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \begin{cases} 1, & x \in \left[-\pi, -\frac{\pi}{2}\right] \\ 0, & x \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \\ -1, & x \in \left[\frac{\pi}{2}, \pi\right) \end{cases}$.
Answer. a) One obtains
\[
b_n = \frac{4}{\pi}\int_0^{\pi/2} x\cos x\sin(2nx)\,dx
= \frac{4}{\pi}\int_0^{\pi/2} x\cdot\frac{\sin(2nx + x) + \sin(2nx - x)}{2}\,dx
\]
\[
= \frac{2}{\pi}\left(\left.\frac{-x\cos(2n+1)x}{2n+1}\right|_0^{\pi/2} + \int_0^{\pi/2} \frac{\cos(2n+1)x}{2n+1}\,dx\right)
+ \frac{2}{\pi}\left(\left.\frac{-x\cos(2n-1)x}{2n-1}\right|_0^{\pi/2} + \int_0^{\pi/2} \frac{\cos(2n-1)x}{2n-1}\,dx\right)
\]
\[
= \frac{2}{\pi}\left(\left.\frac{\sin(2n+1)x}{(2n+1)^2}\right|_0^{\pi/2} + \left.\frac{\sin(2n-1)x}{(2n-1)^2}\right|_0^{\pi/2}\right).
\]
Hence,
\[
b_n = \frac{16n(-1)^{n+1}}{\pi(4n^2 - 1)^2},
\quad\text{so}\quad
f(x) = \sum_{n=1}^{\infty} \frac{16n(-1)^{n+1}}{\pi(4n^2 - 1)^2}\sin(2nx);
\]
b) One gets
\[
b_n = 2(-1)^n\left(\frac{6}{n^3} - \frac{\pi^2}{n}\right)
\quad\text{and}\quad
f(x) = \sum_{n=1}^{\infty} 2(-1)^n\left(\frac{6}{n^3} - \frac{\pi^2}{n}\right)\sin(nx);
\]
c) One obtains
\[
b_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin(nx)\,dx = -\frac{2}{\pi}\int_{\pi/2}^{\pi} \sin(nx)\,dx
= \frac{2}{n\pi}\left[(-1)^n - \cos\frac{n\pi}{2}\right]
\]
and $f(x) = \displaystyle\sum_{n=1}^{\infty} \frac{2}{n\pi}\left[(-1)^n - \cos\frac{n\pi}{2}\right]\sin(nx)$.

E 3. Expand in trigonometric Fourier series the following functions:
a) $f : [-2, 2) \to \mathbb{R}$, $f(x) = xe^x$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \dfrac{\sin x + 1}{5 - 4\cos x}$.
Solution. a) The present function is neither odd nor even, so we must find the coefficients of the trigonometric Fourier series using formulas (1.14),
\[
a_0 = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\,dx, \qquad
a_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\cos n\omega x\,dx, \;\forall n \ge 1, \qquad
b_n = \frac{2}{T}\int_{-T/2}^{T/2} f(x)\sin n\omega x\,dx, \;\forall n \ge 1.
\]
It follows that $T = 4$ and $\omega = \frac{\pi}{2}$.
One may try to compute the coefficients $a_0$, $a_n$ and $b_n$ one by one, but the integrals are quite complicated. A simpler way is to compute $a_n + ib_n$. So, for $n \ge 0$,
\[
a_n + ib_n = \frac{1}{2}\int_{-2}^2 xe^x\left[\cos\frac{n\pi x}{2} + i\sin\frac{n\pi x}{2}\right]dx
= \frac{1}{2}\int_{-2}^2 xe^x e^{\frac{in\pi x}{2}}\,dx = \frac{1}{2}\int_{-2}^2 xe^{x + \frac{in\pi x}{2}}\,dx.
\]

Integrating by parts, one gets
\[
a_n + ib_n = \frac{1}{2}\left(\left.\frac{xe^{x + \frac{in\pi x}{2}}}{1 + \frac{in\pi}{2}}\right|_{-2}^2 - \int_{-2}^2 \frac{e^{x + \frac{in\pi x}{2}}}{1 + \frac{in\pi}{2}}\,dx\right)
= \frac{1}{2}\left(\frac{2e^{2 + in\pi} + 2e^{-2 - in\pi}}{1 + \frac{in\pi}{2}} - \left.\frac{e^{x + \frac{in\pi x}{2}}}{\left(1 + \frac{in\pi}{2}\right)^2}\right|_{-2}^2\right).
\]
But $e^{in\pi} = \cos(n\pi) + i\sin(n\pi) = (-1)^n$ and $e^{-in\pi} = \cos(n\pi) - i\sin(n\pi) = (-1)^n$, so
\[
a_n + ib_n = \frac{2}{2 + in\pi}\left(e^2(-1)^n + e^{-2}(-1)^n\right) - \frac{1}{2}\cdot\frac{e^2(-1)^n - e^{-2}(-1)^n}{\left(1 + \frac{in\pi}{2}\right)^2}.
\]
Replacing in this expression $e^2 + e^{-2}$ with $2\cosh 2$ and $e^2 - e^{-2}$ with $2\sinh 2$, it follows that
\[
a_n + ib_n = \frac{4(-1)^n\cosh 2}{2 + in\pi} - \frac{4(-1)^n\sinh 2}{(2 + in\pi)^2}
= 4(-1)^n\left(\frac{(2 - in\pi)\cosh 2}{4 + n^2\pi^2} - \frac{(2 - in\pi)^2\sinh 2}{(4 + n^2\pi^2)^2}\right)
\]
\[
= \frac{4(-1)^n}{4 + n^2\pi^2}\left((2 - in\pi)\cosh 2 - \frac{(4 - 4in\pi - n^2\pi^2)\sinh 2}{4 + n^2\pi^2}\right),
\]
so
\[
a_n = \frac{4(-1)^n}{4 + n^2\pi^2}\left(2\cosh 2 - \frac{(4 - n^2\pi^2)\sinh 2}{4 + n^2\pi^2}\right)
\]
and
\[
b_n = \frac{4(-1)^n}{4 + n^2\pi^2}\left(-n\pi\cosh 2 + \frac{4n\pi\sinh 2}{4 + n^2\pi^2}\right).
\]
For $n = 0$, we obtain $a_0 = 2\cosh 2 - \sinh 2$ (which agrees with the direct computation $a_0 = \frac{1}{2}\int_{-2}^2 xe^x\,dx = \frac{1}{2}\left(e^2 + 3e^{-2}\right)$);
b) When the function $f$ is a rational function $R(\sin x, \cos x)$, it is convenient to compute $a_n + ib_n$, since the integral can be transformed into a complex contour integral that is solved using residues (see [26, Chapter 5, Section 5.4]).
For $n \ge 0$, one gets
\[
a_n + ib_n = \frac{1}{\pi}\int_{-\pi}^{\pi} \frac{\sin x + 1}{5 - 4\cos x}\left(\cos(nx) + i\sin(nx)\right)dx
= \frac{1}{\pi}\int_{-\pi}^{\pi} \frac{\sin x + 1}{5 - 4\cos x}\,e^{inx}\,dx.
\]
Using the substitution $e^{ix} = z$, it follows that
\[
a_n + ib_n = \frac{1}{\pi}\int_{|z|=1} \frac{\dfrac{z^2 - 1}{2iz} + 1}{5 - 4\cdot\dfrac{z^2 + 1}{2z}}\cdot z^n\,\frac{dz}{iz}
= \frac{1}{2\pi}\int_{|z|=1} \frac{(z + i)^2 z^{n-1}}{2z^2 - 5z + 2}\,dz.
\]
For $n \ge 1$, let us denote by $g(z)$ the fraction $\dfrac{(z + i)^2 z^{n-1}}{2z^2 - 5z + 2}$. The singular points of $g$ are $z_1 = 2$ and $z_2 = \frac{1}{2}$, both first order poles, but only $z_2$ lies inside the circle $|z| = 1$. Therefore,
\[
a_n + ib_n = \frac{1}{2\pi}\cdot 2\pi i\cdot\mathrm{res}\left(g, \frac{1}{2}\right)
= i\lim_{z\to\frac{1}{2}}\left(z - \frac{1}{2}\right)\frac{(z + i)^2 z^{n-1}}{2(z - 2)\left(z - \frac{1}{2}\right)}
= \left(\frac{1}{2}\right)^{n-1}\left(\frac{1}{3} + \frac{i}{4}\right).
\]
It follows that $a_n = \dfrac{2}{3}\left(\dfrac{1}{2}\right)^n$ and $b_n = \left(\dfrac{1}{2}\right)^{n+1}$.
For $n = 0$, $g(z) = \dfrac{(z + i)^2}{z(2z^2 - 5z + 2)}$. Now the singular points of $g$ are $z_1 = 2$, $z_2 = \frac{1}{2}$ and $z_3 = 0$, all first order poles, but only $z_2$ and $z_3$ lie inside the circle $|z| = 1$. Therefore, we get that
\[
a_0 + ib_0 = \frac{1}{2\pi}\cdot 2\pi i\cdot\left(\mathrm{res}\left(g, \frac{1}{2}\right) + \mathrm{res}(g, 0)\right)
= i\left(\frac{\left(\frac{1}{2} + i\right)^2}{-\frac{3}{2}} - \frac{1}{2}\right)
= i\left(\frac{1}{2} - \frac{2i}{3} - \frac{1}{2}\right) = \frac{2}{3}.
\]
In conclusion, the trigonometric Fourier expansion of $f$ is the following:
\[
f(x) = \frac{1}{3} + \sum_{n\ge 1}\left[\frac{2}{3}\left(\frac{1}{2}\right)^n\cos(nx) + \left(\frac{1}{2}\right)^{n+1}\sin(nx)\right].
\]

W 3. Expand in trigonometric Fourier series the following functions:
a) $f : [-1, 1) \to \mathbb{R}$, $f(x) = \begin{cases} 1, & x \in [-1, 0) \\ 0, & x \in [0, 1) \end{cases}$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \dfrac{\pi}{2\sinh\pi}\,e^x$;
c) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \dfrac{1}{2 + \sin x}$;
d) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \dfrac{1}{(13 - 12\cos x)^2}$.
Answer. a) $f(x) = \dfrac{1}{2} + \displaystyle\sum_{n\ge 1} \frac{(-1)^n - 1}{n\pi}\sin(n\pi x)$;
b) $f(x) = \dfrac{1}{2} + \displaystyle\sum_{n\ge 1}\left[\frac{(-1)^n}{1 + n^2}\cos(nx) + \frac{n(-1)^{n+1}}{1 + n^2}\sin(nx)\right]$;
c) One gets
\[
a_n + ib_n = \frac{1}{\pi}\int_{-\pi}^{\pi} \frac{e^{inx}}{2 + \sin x}\,dx = \frac{2}{\pi}\int_{|z|=1} \frac{z^n}{z^2 + 4iz - 1}\,dz
= \frac{2}{\pi}\cdot 2\pi i\cdot\mathrm{res}\left(\frac{z^n}{z^2 + 4iz - 1},\, i(-2 + \sqrt{3})\right)
= \frac{2\cdot i^n}{\sqrt{3}}\,(-2 + \sqrt{3})^n.
\]
But $i^n = \left(\cos\frac{\pi}{2} + i\sin\frac{\pi}{2}\right)^n = \cos\frac{n\pi}{2} + i\sin\frac{n\pi}{2}$, so
\[
f(x) = \frac{1}{\sqrt{3}} + \frac{2}{\sqrt{3}}\sum_{n\ge 1}(-2 + \sqrt{3})^n\left(\cos\frac{n\pi}{2}\cos nx + \sin\frac{n\pi}{2}\sin nx\right);
\]
d) $f(x) = \dfrac{13}{125} + \displaystyle\sum_{n\ge 1} \frac{2}{125}\left(\frac{2}{3}\right)^n (5n + 13)\cos nx$.

E 4. Expand in complex Fourier series the following functions:
a) $f : [-1, 1) \to \mathbb{R}$, $f(x) = x + e^x$;
b) $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \cosh x$.
Solution. We use formula (1.25),
\[
c_n = \frac{1}{T}\int_{-T/2}^{T/2} f(x)e^{-in\omega x}\,dx, \quad n \in \mathbb{Z},
\]
and the complex Fourier expansion (1.24),
\[
f(x) = \sum_{n=-\infty}^{\infty} c_n e^{in\omega x}.
\]
a) One gets
\[
c_n = \frac{1}{2}\int_{-1}^1 (x + e^x)e^{-in\pi x}\,dx
= \frac{1}{2}\left(\int_{-1}^1 xe^{-in\pi x}\,dx + \int_{-1}^1 e^{x - in\pi x}\,dx\right)
= \frac{1}{2}\left(\left.\frac{xe^{-in\pi x}}{-in\pi}\right|_{-1}^1 + \int_{-1}^1 \frac{e^{-in\pi x}}{in\pi}\,dx + \left.\frac{e^{x - in\pi x}}{1 - in\pi}\right|_{-1}^1\right).
\]
Hence,
\[
c_n = \frac{1}{2}\left(\frac{e^{-in\pi} + e^{in\pi}}{-in\pi} - \left.\frac{e^{-in\pi x}}{(in\pi)^2}\right|_{-1}^1 + \frac{e^{1 - in\pi} - e^{-1 + in\pi}}{1 - in\pi}\right).
\]
Since $e^{in\pi} = \cos n\pi + i\sin n\pi = (-1)^n = e^{-in\pi}$ (so the middle term vanishes), it follows that, for $n \neq 0$,
\[
c_n = \frac{1}{2}\left(-\frac{2(-1)^n}{in\pi} + \frac{(-1)^n(e - e^{-1})}{1 - in\pi}\right)
= \frac{(-1)^n}{2}\left(-\frac{2}{in\pi} + \frac{2\sinh 1}{1 - in\pi}\right)
= (-1)^n\left(\frac{\sinh 1\,(1 + in\pi)}{1 + n^2\pi^2} + \frac{i}{n\pi}\right)
\]
and $c_0 = \frac{1}{2}\displaystyle\int_{-1}^1 (x + e^x)\,dx = \sinh 1$; so, using (1.24), we get
\[
f(x) = \sinh 1 + \sum_{\substack{n=-\infty \\ n\neq 0}}^{\infty} (-1)^n\left(\frac{\sinh 1\,(1 + in\pi)}{1 + n^2\pi^2} + \frac{i}{n\pi}\right)e^{in\pi x};
\]

b) One obtains
\[
c_n = \frac{1}{2\pi}\int_{-\pi}^{\pi} \cosh x\,e^{-inx}\,dx = \frac{1}{2\pi}\int_{-\pi}^{\pi} \frac{e^x + e^{-x}}{2}\,e^{-inx}\,dx
= \frac{1}{4\pi}\left(\int_{-\pi}^{\pi} e^{x - inx}\,dx + \int_{-\pi}^{\pi} e^{-x - inx}\,dx\right)
\]
\[
= \frac{1}{4\pi}\left(\left.\frac{e^{x - inx}}{1 - in}\right|_{-\pi}^{\pi} - \left.\frac{e^{-x - inx}}{1 + in}\right|_{-\pi}^{\pi}\right)
= \frac{1}{4\pi}\left(\frac{e^{\pi}(-1)^n - e^{-\pi}(-1)^n}{1 - in} - \frac{e^{-\pi}(-1)^n - e^{\pi}(-1)^n}{1 + in}\right)
\]
\[
= \frac{(-1)^n}{4\pi}\left(\frac{2\sinh\pi}{1 - in} + \frac{2\sinh\pi}{1 + in}\right) = \frac{(-1)^n\sinh\pi}{\pi(1 + n^2)}.
\]
In conclusion, the complex Fourier expansion of $f$ is the following:
\[
f(x) = \sum_{n=-\infty}^{\infty} \frac{(-1)^n\sinh\pi}{\pi(1 + n^2)}\,e^{inx}.
\]

W 4. Expand in complex Fourier series the following functions:
a) $f : [-2, 2) \to \mathbb{R}$, $f(x) = \begin{cases} x, & x \in [-2, 0) \\ 0, & x \in [0, 2) \end{cases}$;
b) $f : \left[-\frac{\pi}{2}, \frac{\pi}{2}\right) \to \mathbb{R}$, $f(x) = e^{ax}$, $a \in \mathbb{R}$.
Answer. a) For $n \neq 0$, $c_n = \dfrac{(-1)^n i}{n\pi} + \dfrac{1 - (-1)^n}{n^2\pi^2}$ and $c_0 = -\dfrac{1}{2}$;
b) $f(x) = \displaystyle\sum_{n=-\infty}^{\infty} \frac{2(-1)^n(a + 2in)\sinh\left(\frac{a\pi}{2}\right)}{\pi(a^2 + 4n^2)}\,e^{2inx}$.

E 5. Expand in trigonometric Fourier sine series the following functions:
a) $f : [0, \pi) \to \mathbb{R}$, $f(x) = \sin^2 x$;
b) $f : [0, 2) \to \mathbb{R}$, $f(x) = \begin{cases} 1, & x \in [0, 1) \\ -1, & x \in [1, 2) \end{cases}$.
Solution. Since both functions are defined on an interval $[0, l)$, for determining the Fourier coefficients we use formula (1.20) and then the expansion (1.19).
a) For $n \ge 1$,
\[
b_n = \frac{2}{\pi}\int_0^{\pi} \sin^2 x\sin(nx)\,dx = \frac{2}{\pi}\int_0^{\pi} \frac{1 - \cos(2x)}{2}\sin(nx)\,dx
= \frac{1}{\pi}\left(\int_0^{\pi} \sin(nx)\,dx - \int_0^{\pi} \cos(2x)\sin(nx)\,dx\right).
\]
Now we use the formula $\sin a\cos b = \frac{\sin(a + b) + \sin(a - b)}{2}$ (for the second integral) and we get
\[
b_n = \frac{1}{\pi}\left(\left.\frac{-\cos(nx)}{n}\right|_0^{\pi} - \frac{1}{2}\int_0^{\pi} \sin(nx + 2x)\,dx - \frac{1}{2}\int_0^{\pi} \sin(nx - 2x)\,dx\right)
\]
\[
= \frac{1}{\pi}\left(-\frac{(-1)^n - 1}{n} + \left.\frac{\cos(nx + 2x)}{2(n + 2)}\right|_0^{\pi} + \left.\frac{\cos(nx - 2x)}{2(n - 2)}\right|_0^{\pi}\right).
\]
We now need to impose the condition $n \neq 2$; the coefficient $b_2$ will be determined separately. So, for $n \neq 2$,
\[
b_n = \frac{1}{\pi}\left(-\frac{(-1)^n - 1}{n} + \frac{(-1)^n - 1}{2(n + 2)} + \frac{(-1)^n - 1}{2(n - 2)}\right) = \frac{4\left[(-1)^n - 1\right]}{n\pi(n^2 - 4)}.
\]
On the other hand,
\[
b_2 = \frac{2}{\pi}\int_0^{\pi} \sin^2 x\sin(2x)\,dx = \frac{1}{\pi}\left(\int_0^{\pi} \sin(2x)\,dx - \int_0^{\pi} \cos(2x)\sin(2x)\,dx\right)
= \frac{1}{\pi}\left(\left.\frac{-\cos(2x)}{2}\right|_0^{\pi} - \int_0^{\pi} \frac{\sin(4x)}{2}\,dx\right) = \frac{1}{\pi}\left(0 + \left.\frac{\cos(4x)}{8}\right|_0^{\pi}\right) = 0.
\]
It follows that the trigonometric Fourier sine series for $f$ is the following:
\[
f(x) = b_1\sin x + b_2\sin(2x) + \sum_{n\ge 3} b_n\sin(nx)
= \frac{8}{3\pi}\sin x + \sum_{n\ge 3} \frac{4\left[(-1)^n - 1\right]}{n\pi(n^2 - 4)}\sin(nx);
\]
b) In this exercise
\[
b_n = \int_0^2 f(x)\sin\frac{n\pi x}{2}\,dx = \int_0^1 \sin\frac{n\pi x}{2}\,dx - \int_1^2 \sin\frac{n\pi x}{2}\,dx
= \frac{2}{n\pi}\left[1 + (-1)^n - 2\cos\frac{n\pi}{2}\right],
\]
so $f(x) = \displaystyle\sum_{n=1}^{\infty} \frac{2}{n\pi}\left[1 + (-1)^n - 2\cos\frac{n\pi}{2}\right]\sin\frac{n\pi x}{2}$.

W 5. Expand in trigonometric Fourier sine series the following functions:
a) $f : [0, 1) \to \mathbb{R}$, $f(x) = e^{2x}$;
b) $f : [0, 2\pi) \to \mathbb{R}$, $f(x) = \begin{cases} \sin x, & x \in [0, \pi) \\ \cos x, & x \in [\pi, 2\pi) \end{cases}$.
Answer. a) To compute the coefficients $b_n$, one can use integration by parts two times and gets
\[
b_n = -\frac{n\pi}{2}\left[e^2(-1)^n - 1\right] - \frac{n^2\pi^2}{4}\,b_n.
\]
It follows that
\[
f(x) = \sum_{n=1}^{\infty} \frac{-2n\pi\left[e^2(-1)^n - 1\right]}{4 + n^2\pi^2}\sin(n\pi x);
\]
b) For $n \in \mathbb{N}^*\setminus\{2\}$,
\[
b_n = \frac{1}{\pi(4 - n^2)}\left[4\sin\frac{n\pi}{2} + 2n\left((-1)^n + \cos\frac{n\pi}{2}\right)\right],
\]
and $b_2 = \frac{1}{2}$. Therefore,
\[
f(x) = b_1\sin\frac{x}{2} + b_2\sin x + \sum_{n\ge 3} b_n\sin\frac{nx}{2}
= \frac{2}{3\pi}\sin\frac{x}{2} + \frac{1}{2}\sin x
+ \sum_{n\ge 3} \frac{1}{\pi(4 - n^2)}\left[4\sin\frac{n\pi}{2} + 2n\left((-1)^n + \cos\frac{n\pi}{2}\right)\right]\sin\frac{nx}{2}.
\]

E 6. Expand in trigonometric Fourier cosine series the following functions:
a) $f : [0, \pi) \to \mathbb{R}$, $f(x) = x + \cos(2x)$;
b) $f : [0, 4) \to \mathbb{R}$, $f(x) = \begin{cases} x, & x \in [0, 1) \\ |x - 3|, & x \in [1, 4) \end{cases}$.
Solution. We need to apply a method analogous to the one involving formulas (1.19) and (1.20).

Problem: Given a function $f : (0, l] \to \mathbb{R}$, expand $f$ in Fourier cosine series.

Solution: First we extend $f$ to an even function $\tilde{f} : [-l, l] \to \mathbb{R}$. Hence,
\[
\tilde{f}(x) = \begin{cases} f(x), & x \in (0, l] \\ f(-x), & x \in [-l, 0) \\ 0, & x = 0 \end{cases}.
\]
In this case, the period is the length of the interval $[-l, l]$, i.e. $T = 2l$ and $\omega = \frac{\pi}{l}$. Therefore, the Fourier cosine series expansion of the function $f(x)$, $x \in (0, l]$, will be the Fourier cosine series (1.16) of $\tilde{f}(x)$ restricted to the interval $(0, l]$, and the Fourier coefficients $a_0$ and $a_n$ (1.15) will contain $f(x)$, since $\tilde{f}(x) = f(x)$ for $x \in \left(0, \frac{T}{2}\right]$. Since $\frac{4}{T} = \frac{2}{l}$ and $\frac{T}{2} = l$, one obtains
\[
a_0 = \frac{2}{l}\int_0^l f(x)\,dx, \qquad a_n = \frac{2}{l}\int_0^l f(x)\cos\frac{n\pi x}{l}\,dx \tag{1.27}
\]

and
\[
f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} a_n\cos\frac{n\pi x}{l}. \tag{1.28}
\]
a) One gets
\[
a_n = \frac{2}{\pi}\int_0^{\pi} (x + \cos(2x))\cos(nx)\,dx
= \frac{2}{\pi}\left(\int_0^{\pi} x\cos(nx)\,dx + \int_0^{\pi} \cos(2x)\cos(nx)\,dx\right)
\]
\[
= \frac{2}{\pi}\left(\left.\frac{x\sin(nx)}{n}\right|_0^{\pi} - \int_0^{\pi} \frac{\sin(nx)}{n}\,dx\right)
+ \frac{2}{\pi}\int_0^{\pi} \frac{\cos(nx + 2x) + \cos(nx - 2x)}{2}\,dx
\]
\[
= \frac{2}{\pi}\left(\left.\frac{\cos(nx)}{n^2}\right|_0^{\pi} + \left.\frac{\sin(nx + 2x)}{2(n + 2)}\right|_0^{\pi} + \left.\frac{\sin(nx - 2x)}{2(n - 2)}\right|_0^{\pi}\right).
\]
Now one must impose the conditions for the denominators to be different from $0$, thus $n \neq \pm 2$ and $n \neq 0$. Since $n$ is a positive integer, one is left only with $n \neq 0$ and $n \neq 2$. It follows that
\[
a_n = \frac{2}{\pi}\cdot\frac{(-1)^n - 1}{n^2}.
\]
Also,
\[
a_0 = \frac{2}{\pi}\int_0^{\pi} (x + \cos(2x))\,dx = \pi,
\]
and
\[
a_2 = \frac{2}{\pi}\int_0^{\pi} (x + \cos(2x))\cos(2x)\,dx
= \frac{2}{\pi}\left(\int_0^{\pi} x\cos(2x)\,dx + \int_0^{\pi} \cos^2(2x)\,dx\right)
= \frac{2}{\pi}\left(\left.\frac{x\sin(2x)}{2}\right|_0^{\pi} - \int_0^{\pi} \frac{\sin(2x)}{2}\,dx + \int_0^{\pi} \frac{1 + \cos(4x)}{2}\,dx\right) = 1.
\]
In conclusion,
\[
f(x) = \frac{a_0}{2} + a_1\cos x + a_2\cos(2x) + \sum_{n\ge 3} a_n\cos(nx)
= \frac{\pi}{2} - \frac{4}{\pi}\cos x + \cos(2x) + \sum_{n\ge 3} \frac{2}{\pi}\cdot\frac{(-1)^n - 1}{n^2}\cos(nx);
\]

b) One gets
\[
a_n = \frac{1}{2}\int_0^4 f(x)\cos\frac{n\pi x}{4}\,dx
= \frac{1}{2}\left(\int_0^1 x\cos\frac{n\pi x}{4}\,dx + \int_1^4 |x - 3|\cos\frac{n\pi x}{4}\,dx\right)
\]
\[
= \frac{1}{2}\int_0^1 x\cos\frac{n\pi x}{4}\,dx
+ \frac{1}{2}\left(\int_1^3 (3 - x)\cos\frac{n\pi x}{4}\,dx + \int_3^4 (x - 3)\cos\frac{n\pi x}{4}\,dx\right).
\]
For $n \neq 0$, using integration by parts, one obtains
\[
a_n = -\frac{2}{n\pi}\sin\frac{n\pi}{4} + \frac{8}{n^2\pi^2}\left[2\cos\frac{n\pi}{4} - 2\cos\frac{3n\pi}{4} + (-1)^n - 1\right]
\]
and $a_0 = \frac{1}{2}\displaystyle\int_0^4 f(x)\,dx = \frac{3}{2}$. Therefore,
\[
f(x) = \frac{a_0}{2} + \sum_{n\ge 1} a_n\cos\frac{n\pi x}{4} = \frac{3}{4} + \sum_{n\ge 1} a_n\cos\frac{n\pi x}{4},
\]
where the $a_n$'s were previously determined.

W 6. Expand in trigonometric Fourier cosine series the following functions:
a) $f : [0, 1) \to \mathbb{R}$, $f(x) = (x - 1)^2$;
b) $f : [0, 2\pi) \to \mathbb{R}$, $f(x) = \begin{cases} 0, & x \in [0, \pi) \\ \cos x, & x \in [\pi, 2\pi) \end{cases}$.
Answer. For both functions one uses formulas (1.27) and (1.28) from E 6.
a) $f(x) = \dfrac{1}{3} + \displaystyle\sum_{n\ge 1} \frac{4}{n^2\pi^2}\cos(n\pi x)$;
b) $a_n = \dfrac{1}{\pi}\displaystyle\int_{\pi}^{2\pi} \cos x\cos\frac{nx}{2}\,dx = \frac{2n}{\pi(n^2 - 4)}\sin\frac{n\pi}{2}$ for $n \neq 0, 2$, $a_2 = \dfrac{1}{2}$ and $a_0 = 0$. In conclusion,
\[
f(x) = \frac{a_0}{2} + a_1\cos\frac{x}{2} + a_2\cos x + \sum_{n\ge 3} a_n\cos\frac{nx}{2}
= -\frac{2}{3\pi}\cos\frac{x}{2} + \frac{1}{2}\cos x + \sum_{n\ge 3} \frac{2n}{\pi(n^2 - 4)}\sin\frac{n\pi}{2}\cos\frac{nx}{2}.
\]

E 7. Using the trigonometric Fourier expansion of the specified functions, find the sum of the following series:
a) $\displaystyle\sum_{n\ge 0} \frac{(-1)^n}{2n + 1}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x$;
b) $\displaystyle\sum_{n\ge 1} \frac{1}{n^2 + a^2}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = e^{ax}$, $a \neq 0$.
Solution. a) The function $f$ is odd, so $a_n = 0$ for $n \in \mathbb{N}$. For the other coefficients one gets
\[
b_n = \frac{2}{\pi}\int_0^{\pi} x\sin(nx)\,dx = \frac{2}{\pi}\left(\left.\frac{-x\cos(nx)}{n}\right|_0^{\pi} + \int_0^{\pi} \frac{\cos(nx)}{n}\,dx\right)
= \frac{2}{\pi}\left(\frac{-\pi(-1)^n}{n} + \left.\frac{\sin(nx)}{n^2}\right|_0^{\pi}\right) = \frac{2(-1)^{n+1}}{n}.
\]
The Fourier trigonometric expansion of $f$ is
\[
f(x) = \sum_{n\ge 1} \frac{2(-1)^{n+1}}{n}\sin(nx),
\]
for every $x \in (-\pi, \pi)$.
When one computes $f\left(\frac{\pi}{2}\right)$, one obtains
\[
\frac{\pi}{2} = \sum_{n\ge 1} \frac{2(-1)^{n+1}}{n}\sin\frac{n\pi}{2}
= \sum_{\substack{n=2k \\ k\ge 1}} \frac{-2}{2k}\sin(k\pi) + \sum_{\substack{n=2k+1 \\ k\ge 0}} \frac{2}{2k + 1}\sin\left(k\pi + \frac{\pi}{2}\right)
= 0 + \sum_{k\ge 0} \frac{2(-1)^k}{2k + 1},
\]
so
\[
\sum_{n\ge 0} \frac{(-1)^n}{2n + 1} = \frac{\pi}{4};
\]
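The result can be illustrated numerically with partial sums of the Leibniz series. The short sketch below is our own check, not part of the solution; for an alternating series with decreasing terms, the truncation error is bounded by the first omitted term.

```python
import math

# Partial sums of sum_{n>=0} (-1)**n / (2n+1) approach pi/4.
def leibniz(N):
    return sum((-1)**n / (2 * n + 1) for n in range(N))

N = 10**5
# alternating series: |S - S_N| <= first omitted term = 1/(2N+1)
assert abs(leibniz(N) - math.pi / 4) < 1 / (2 * N + 1)
```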

b) In this exercise
\[
a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} e^{ax}\,dx = \frac{1}{\pi}\left.\frac{e^{ax}}{a}\right|_{-\pi}^{\pi} = \frac{1}{\pi}\cdot\frac{e^{a\pi} - e^{-a\pi}}{a} = \frac{2\sinh(a\pi)}{\pi a},
\]
and for $n \ge 1$, one obtains
\[
a_n + ib_n = \frac{1}{\pi}\int_{-\pi}^{\pi} e^{ax}e^{inx}\,dx = \frac{1}{\pi}\left.\frac{e^{ax + inx}}{a + in}\right|_{-\pi}^{\pi} = \frac{1}{\pi}\cdot\frac{e^{a\pi + in\pi} - e^{-a\pi - in\pi}}{a + in}
= \frac{(a - in)\left(e^{a\pi}(-1)^n - e^{-a\pi}(-1)^n\right)}{\pi(a^2 + n^2)}
= \frac{(-1)^n(a - in)\,2\sinh(a\pi)}{\pi(a^2 + n^2)}.
\]
Hence,
\[
a_n = \frac{(-1)^n\,2a\sinh(a\pi)}{\pi(a^2 + n^2)} \quad\text{and}\quad b_n = \frac{(-1)^{n+1}\,2n\sinh(a\pi)}{\pi(a^2 + n^2)}.
\]
The Fourier trigonometric expansion of $f$ is
\[
f(x) = \frac{\sinh(a\pi)}{\pi a} + \sum_{n\ge 1} \frac{(-1)^n\,2\sinh(a\pi)}{\pi(a^2 + n^2)}\left(a\cos(nx) - n\sin(nx)\right),
\]
for every $x \in (-\pi, \pi)$.
At $x = -\pi$ the periodic extension of $f$ has a jump discontinuity, so the series converges there to the mean value $\frac{e^{-a\pi} + e^{a\pi}}{2} = \cosh(a\pi)$. On the other hand, substituting $x = -\pi$ in the series and using $\cos(-n\pi) = (-1)^n$ and $\sin(-n\pi) = 0$,
\[
\cosh(a\pi) = \frac{\sinh(a\pi)}{\pi a} + \sum_{n\ge 1} \frac{(-1)^n\,2\sinh(a\pi)}{\pi(a^2 + n^2)}\,a\,(-1)^n
= \frac{\sinh(a\pi)}{\pi a} + \sum_{n\ge 1} \frac{2a\sinh(a\pi)}{\pi(a^2 + n^2)}.
\]
In conclusion,
\[
\cosh(a\pi) = \frac{\sinh(a\pi)}{\pi a} + \frac{2a\sinh(a\pi)}{\pi}\sum_{n\ge 1} \frac{1}{a^2 + n^2}
\quad\text{and}\quad
\sum_{n\ge 1} \frac{1}{a^2 + n^2} = \frac{\pi a\cosh(a\pi) - \sinh(a\pi)}{2a^2\sinh(a\pi)}.
\]

W 7. Using the trigonometric Fourier expansion of the specified functions, find the sum of the following series:
a) $\displaystyle\sum_{n\ge 1} \frac{(-1)^n - 1}{n^2}$, $f : [-1, 1) \to \mathbb{R}$, $f(x) = |x| + a$, $a \in \mathbb{R}$;
b) $\displaystyle\sum_{n\ge 0} \frac{1}{(2n + 1)^2}$, $f : [-2\pi, 2\pi) \to \mathbb{R}$, $f(x) = x + |x|$.
Answer. a) The function $f$ is even, so $b_n = 0$ for $n \ge 1$. For $n \neq 0$, $a_n = \frac{2}{n^2\pi^2}\left[(-1)^n - 1\right]$, $a_0 = 1 + 2a$, and the Fourier cosine series (1.16) is the following:
\[
f(x) = \frac{1 + 2a}{2} + \sum_{n=1}^{\infty} \frac{2\left[(-1)^n - 1\right]}{n^2\pi^2}\cos(n\pi x).
\]
When one replaces $x$ with $-1$, one obtains on the left hand side $1 + a$ and on the right hand side $\frac{1 + 2a}{2} + \displaystyle\sum_{n=1}^{\infty} \frac{2\left[(-1)^n - 1\right]}{n^2\pi^2}\cos(-n\pi)$, which is equivalent to
\[
\frac{1}{2} = -\frac{2}{\pi^2}\sum_{n\ge 1} \frac{(-1)^n - 1}{n^2},
\quad\text{so}\quad
\sum_{n\ge 1} \frac{(-1)^n - 1}{n^2} = -\frac{\pi^2}{4};
\]
b) $a_0 = 2\pi$, $a_n = \dfrac{4\left((-1)^n - 1\right)}{n^2\pi}$ and $b_n = \dfrac{4(-1)^{n+1}}{n}$, for $n \ge 1$. The trigonometric Fourier expansion of $f$ is
\[
f(x) = \pi + \sum_{n\ge 1}\left[\frac{4\left((-1)^n - 1\right)}{n^2\pi}\cos\frac{nx}{2} + \frac{4(-1)^{n+1}}{n}\sin\frac{nx}{2}\right].
\]
When $x = 0$ we get that $\pi + \displaystyle\sum_{n\ge 1} \frac{4\left((-1)^n - 1\right)}{n^2\pi} = 0$, which implies that
\[
\pi + \sum_{\substack{n=2k \\ k\ge 1}} 0 + \sum_{\substack{n=2k+1 \\ k\ge 0}} \frac{-8}{(2k + 1)^2\pi} = 0.
\]
In conclusion,
\[
\sum_{n\ge 0} \frac{1}{(2n + 1)^2} = \frac{\pi^2}{8}.
\]

Theorem 1.8.1 (Parseval's Formula). If a function $f : [-l, l) \to \mathbb{R}$ admits a trigonometric Fourier expansion and $a_0$, $a_n$ and $b_n$ are the trigonometric Fourier expansion coefficients, then
\[
\frac{a_0^2}{2} + \sum_{n\ge 1}\left(a_n^2 + b_n^2\right) = \frac{1}{l}\int_{-l}^{l} f^2(x)\,dx.
\]

E 8. Find the sum of the following series, using the trigonometric Fourier expansion of the specified functions and Theorem 1.8.1 (Parseval's Formula):
a) $\displaystyle\sum_{n\ge 1} \frac{1}{n^2}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x$;
b) $\displaystyle\sum_{n\ge 0} \frac{1}{(2n + 1)^4}$, $f(x) = \begin{cases} 0, & x \in [-\pi, 0) \\ x, & x \in [0, \pi) \end{cases}$.
Solution. a) The function $f$ is odd, so $a_n = 0$ for $n \in \mathbb{N}$. One obtains
\[
b_n = \frac{2}{\pi}\int_0^{\pi} x\sin(nx)\,dx = \frac{2}{\pi}\left(\left.\frac{-x\cos(nx)}{n}\right|_0^{\pi} + \int_0^{\pi} \frac{\cos(nx)}{n}\,dx\right)
= \frac{2}{\pi}\left(\frac{-\pi(-1)^n}{n} + \left.\frac{\sin(nx)}{n^2}\right|_0^{\pi}\right) = \frac{2(-1)^{n+1}}{n}.
\]
From Theorem 1.8.1 (Parseval's Formula), one gets
\[
\frac{a_0^2}{2} + \sum_{n\ge 1}\left(a_n^2 + b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi} x^2\,dx.
\]
This implies that $\displaystyle\sum_{n\ge 1}\left(\frac{2(-1)^{n+1}}{n}\right)^2 = \frac{2\pi^2}{3}$, so
\[
\sum_{n\ge 1} \frac{4}{n^2} = \frac{2\pi^2}{3} \quad\text{and}\quad \sum_{n\ge 1} \frac{1}{n^2} = \frac{\pi^2}{6};
\]

b) In this exercise
\[
a_0 = \frac{1}{\pi}\int_0^{\pi} x\,dx = \frac{\pi}{2}, \qquad
a_n = \frac{1}{\pi}\int_0^{\pi} x\cos(nx)\,dx = \frac{1}{\pi}\cdot\frac{(-1)^n - 1}{n^2}
\]
and
\[
b_n = \frac{1}{\pi}\int_0^{\pi} x\sin(nx)\,dx = \frac{(-1)^{n+1}}{n}, \quad\text{for } n \ge 1.
\]
From Theorem 1.8.1 (Parseval's Formula), one obtains
\[
\frac{\pi^2}{8} + \sum_{n\ge 1}\left(\frac{2\left(1 - (-1)^n\right)}{\pi^2 n^4} + \frac{1}{n^2}\right) = \frac{1}{\pi}\int_0^{\pi} x^2\,dx,
\]
which is equivalent to
\[
\frac{\pi^2}{8} + \frac{2}{\pi^2}\sum_{n\ge 1} \frac{1 - (-1)^n}{n^4} + \sum_{n\ge 1} \frac{1}{n^2} = \frac{\pi^2}{3}.
\]
Using the fact that $\sum_{n\ge 1} \frac{1}{n^2} = \frac{\pi^2}{6}$ (from a)), we get that
\[
\frac{2}{\pi^2}\sum_{n\ge 1} \frac{1 - (-1)^n}{n^4} = \frac{\pi^2}{24}
\;\Longleftrightarrow\;
\frac{2}{\pi^2}\left(\sum_{\substack{n=2k \\ k\ge 1}} 0 + \sum_{\substack{n=2k+1 \\ k\ge 0}} \frac{2}{(2k + 1)^4}\right) = \frac{\pi^2}{24}
\]
and
\[
\sum_{n\ge 0} \frac{1}{(2n + 1)^4} = \frac{\pi^4}{96}.
\]

W 8. Find the sum of the following series, using the trigonometric Fourier expansion of the specified functions and Theorem 1.8.1 (Parseval's Formula):
a) $\displaystyle\sum_{n\ge 1} \frac{1}{n^4}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x^2$;
b) $\displaystyle\sum_{n\ge 1} \frac{1}{(n^2 + a^2)^2}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \cosh(ax)$, $a \neq 0$.
Answer. a) $\displaystyle\sum_{n\ge 1} \frac{1}{n^4} = \frac{\pi^4}{90}$;
b) Since $f$ is even, $b_n = 0$ for $n \ge 1$. We obtain that $a_0 = \dfrac{2\sinh(a\pi)}{\pi a}$, $a_n = \dfrac{2a\sinh(a\pi)(-1)^n}{\pi(n^2 + a^2)}$ and
\[
\sum_{n\ge 1} \frac{1}{(n^2 + a^2)^2} = \frac{2a\pi\sinh(2a\pi) + 4a^2\pi^2 - 8\sinh^2(a\pi)}{16a^4\sinh^2(a\pi)}.
\]

1.9 MATLAB Applications


Example 1.9.1. Expand in Fourier series the following function:
\[
f(x) = \begin{cases} 0, & x \in (-\pi, 0) \\ 1, & x \in (0, \pi) \end{cases}.
\]
Solution. The period is $T = 2\pi$; hence, $\omega = \frac{2\pi}{T} = 1$.
>> syms n positive;
syms x;
f = 1; % defines the signal f
g = f * cos(n*x);
h = f * sin(n*x);
a0 = int(f, x, 0, pi)/pi;
an = (int(g, x, 0, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, 0, pi)/pi;
The answers are the following:

• a0 = 1;

• an = sin(pi*n)/(n*pi) (hence, an = 0);

• bn = (2* sin((pi*n)/2)ˆ2)/(n*pi).

We obtain
\[
f(x) = \frac{1}{2} + \sum_{n=1}^{\infty} \frac{2\sin^2\frac{n\pi}{2}}{n\pi}\sin(nx).
\]

Example 1.9.2. Determine the Fourier coefficients of the periodic function


f (x) = sin(x), x ∈ (−π, π) and explain the result.
Solution. Since the function f is odd, we get that a0 = an = 0, so one
computes only bn .
>> syms n positive;
syms x;
f = sin(x); % defines the signal f
g = f * sin(n*x);
bn = int(g, x, −pi, pi)/pi; % computes the Fourier coefficients
The answer is the following:

• bn = piecewise(n == 1, 1, n ~= 1, −(2* sin(pi*n))/(pi*(nˆ2 − 1))).

Actually, $b_1 = 1$ and $b_n = 0$ for $n \neq 1$, since $\sin(n\pi) = 0$. Therefore, the Fourier series reduces to the trivial one, namely $\sin(x) = \sin(x)$, since $\sin(x)$ is orthogonal to the other functions from the orthogonal system.

Example 1.9.3. Determine the Fourier coefficients of the periodic function


f (x) = 1, x ∈ (−π, π) and explain the result.
Solution.
>> syms n positive;
syms x;
f = 1; % defines the signal f
g = f * cos(n*x);
h = f * sin(n*x);
a0 = int(f, x, −pi, pi)/pi;
an = (int(g, x, −pi, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, −pi, pi)/pi;
The answers are the following:

• a0 = 2;

• an = (2* sin(pi*n))/(n*pi) (hence, an = 0);

• bn = 0.

We obtain
\[
f(x) = \frac{a_0}{2} = 1.
\]
Example 1.9.4. Determine the Fourier coefficients of the periodic function
f (x) = ex , x ∈ (−π, π) and explain the result.
Solution.
>> syms n positive;
syms x;
f = exp(x); % defines the signal f
g = f * cos(n*x);
h = f * sin(n*x);
a0 = int(f, x, −pi, pi)/pi;
an = (int(g, x, −pi, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, −pi, pi)/pi;
The answers are the following:

• a0 = (2* sinh(pi))/pi;

• an = −((exp(−pi)*(cos(pi*n) − n* sin(pi*n)))/(nˆ2 + 1) − (exp(pi)*(cos(pi*n) + n* sin(pi*n)))/(nˆ2 + 1))/pi;

• bn = ((exp(−pi)*(sin(pi*n) + n* cos(pi*n)))/(nˆ2 + 1) + (exp(pi)*(sin(pi*n) − n* cos(pi*n)))/(nˆ2 + 1))/pi.

We can further simplify the results, using the equalities sin(nπ) = 0 and
cos(nπ) = (−1)n .

Example 1.9.5. Expand in Fourier series the periodic function $f(x) = \frac{\pi}{2} - x$, $x \in (0, \pi)$. Plot the $N$th truncated Fourier series.

Solution. The period is $T = \pi$; hence, $\omega = \frac{2\pi}{T} = 2$. One uses the formulas (1.9) for the coefficients.
the coefficients (1.9).
>> syms n natural;
syms x;
f = pi/2 − x; % defines the signal f
g = f * cos(2*n*x);
h = f * sin(2*n*x);
a0 = 2*int(f, x, 0, pi)/pi;
an = 2*(int(g, x, 0, pi)/pi); % computes the Fourier coefficients
bn = 2*int(h, x, 0, pi)/pi;
The answers are the following:

• a0 = 0;

• an = (sin(pi*n)ˆ2 − (n*pi* sin(2*pi*n))/2)/(nˆ2*pi);

• bn = −(cos(pi*n)* sin(pi*n) − n*pi* cos(pi*n)ˆ2)/(nˆ2*pi).

Hence,
\[
a_n = 0 \quad\text{and}\quad b_n = \frac{1}{n};
\]
thus, one obtains
\[
f(x) = \sum_{n=1}^{\infty} \frac{1}{n}\sin(2nx).
\]
>> X = 10;
N = 10;
x = [0 : 4/X : X];
f s = zeros(1, numel(x));
for n = 1 : 1 : N
f s = f s + sin(2*n*x)/n;
end;
figure;
plot(x, f s);
grid on;

Figure 1.11a: The 10th Truncated Series

Figure 1.11b: The 1000th Truncated Series

Above we have the plots of the 10th and the 1000th truncated Fourier
series (see Figures 1.11a and 1.11b, respectively).

Example 1.9.6. Determine the Fourier coefficients of the periodic function


f (x) = x, x ∈ (−π, π). Plot the signal and the N th truncated Fourier series.
Solution.
>> syms n positive;
syms x;
f = x; % defines the signal f
g = f * sin(n*x);
bn = int(g, x, −pi, pi)/pi; % computes the Fourier coefficients
The answer is the following:

• (2*(sin(pi*n) − n*pi* cos(pi*n)))/(nˆ2*pi).

Hence, $b_n = \dfrac{2(-1)^{n+1}}{n}$. Thus, we obtain
\[
f(x) = \sum_{n=1}^{\infty} \frac{2(-1)^{n+1}}{n}\sin(nx).
\]

Figure 1.12: The Signal and the 2nd Truncated Fourier Series

>> f = x; % initial signal
x = linspace(−pi, pi);
plot(x, f ); % plots the initial signal
hold
N = 2; % values N = 2, 5, 100
f s = 0;
for n = 1 : 1 : N % loop for the series index n
f s = f s + 2*(−1)ˆ(n+1)* sin(n*x)/n; % the nth term of the series of f(x) = x
end;
x = linspace(−pi, pi);
plot(x, f s); % plots the N th truncated series
In Figure 1.12 we have the signal f (in blue) and the N th truncated Fourier series for N = 2 (in red).

Example 1.9.7. Plot the periodic function $f(t) = \begin{cases} -1, & t \in (-1, 0) \\ 1, & t \in (0, 1) \end{cases}$ and its truncated Fourier series.

Solution. The period is $T = 2$; hence, $\omega = \frac{2\pi}{T} = \pi$. Since the function $f$ is odd (see Figure 1.13), we get that $a_0 = a_n = 0$, so one computes only $b_n$. One obtains $b_n = \frac{2}{n\pi}\left(1 - (-1)^n\right)$. Thus, the Fourier series expansion is the following:
\[
f(x) = \sum_{n=1}^{\infty} \frac{2}{n\pi}\left(1 - (-1)^n\right)\sin(n\pi x).
\]

Figure 1.13: The Graph of the Function and the 13th Truncated Fourier
Series

>> clf;
N = 13;
c = 0;
x = -4 : 0.01 : 4;
fs = c*ones(size(x));
for n = 1 : 2 : N
bn = 2*(1 - (-1)^n)/(n*pi);
fs = fs + bn*sin(n*pi*x);
end;
plot([-4 -3 -3 -2 -2 -1 -1 0 0 1 1 2 2 3 3 4], ...
[1 1 -1 -1 1 1 -1 -1 1 1 -1 -1 1 1 -1 -1]);
hold;
plot(x, fs);
xlabel('t (seconds)'); ylabel('y(t)');
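The coefficient formula used in the loop above can be cross-checked numerically. The following Python sketch (an illustrative addition, not one of the book's MATLAB listings; the midpoint rule and the sample count are ad hoc choices) integrates f(t) sin(nπt) over one period and compares the result with bn = 2(1 − (−1)^n)/(nπ):

```python
import math

# For the odd square wave f(t) = -1 on (-1, 0), 1 on (0, 1), period T = 2,
# the coefficient bn = (2/T) * (integral over one period of f(t) sin(n pi t) dt)
# should reproduce the closed form 2*(1 - (-1)^n)/(n*pi).
def bn_numeric(n, samples=100_000):
    h = 1.0 / samples
    # midpoint rule on (0, 1); by oddness the full-period integral is twice this
    s = sum(math.sin(n * math.pi * (k + 0.5) * h) for k in range(samples))
    return 2.0 * s * h

for n in range(1, 8):
    exact = 2.0 * (1 - (-1) ** n) / (n * math.pi)
    assert abs(bn_numeric(n) - exact) < 1e-6
print("bn matches 2*(1-(-1)^n)/(n*pi) for n = 1..7")
```

The even coefficients come out numerically zero, which is why the MATLAB loop above only steps through odd n.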
Example 1.9.8. Write a MATLAB program to generate the synthesized pulse train from Example 1.7.4 composed of Nh harmonics, if there are 5 cycles in an array of 500 samples (see [34]).
Solution.
>> Nh = 10; Nc = 4; Ns = 500;
f(1 : Ns) = 0.2; j = 1 : Ns;
for n = 1 : Nh
x(j) = (sin(0.4*pi*n)/(pi*n))*cos(n*2*pi*Nc*j/Ns);
f = f + x;
plot(f); pause;
end;

Figure 1.14a: n = 1

Figure 1.14b: n = 2

Figure 1.14c: n = 5

The program provides the pulse trains for n harmonics, n = 1, 2, . . . , 10. Figures 1.14a, 1.14b and 1.14c represent the waveform for n = 1, 2 and 5, respectively. The complete waveform, namely the one for n = Nh = 10, is shown in Figure 1.14d.

Figure 1.14d: n = Nh = 10

Example 1.9.9. Write MATLAB programs to plot the pulse train and the
frequency spectrum analyzed in Example 1.7.2.

Solution.
The Pulse Train
x = [−18, −16, −16, −14, −14, −6, −6, −4, −4, 4, 4, 6, 6, 14, 14, 16, 16, 18];
y = [0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5];
line(x, y);

Figure 1.15: The Pulse Train

The Frequency Spectrum
>> figure;
n = linspace(0, 10, 11);
f = sin(0.4*pi*n)./(2*pi*n);
f(1) = 0.2; % the limit value at n = 0 (the formula gives 0/0 there)
stem(n, f);

Figure 1.16: The Frequency Spectrum of the Pulse Train

1.10 Maple Applications


Example 1.10.1. Determine the Fourier coefficients of the function f(x) = x, x ∈ [0, 2π].

n := 'n': a := 'a': b := 'b':

a := proc(n) option remember; 1/Pi * int(x*cos(n*x), x = 0..2*Pi); end:
b := proc(n) option remember; 1/Pi * int(x*sin(n*x), x = 0..2*Pi); end:
simplify(a(n)); simplify(b(n));

(2(−1 + cos(πn)² + 2πn sin(πn) cos(πn)))/(πn²)

(2(sin(πn) cos(πn) − 2πn cos(πn)² + πn))/(πn²)

Using the equalities sin(nπ) = 0 and cos(nπ) = (−1)^n, it follows that the coefficients are an = 0 and bn = −2/n.

Chapter 2

Fourier Transform

The Fourier transform is an integral transform that decomposes functions


depending on space or time into functions depending on spatial or temporal
frequency, such as the expression of a musical chord in terms of the volumes
and frequencies of its constituent notes.
One motivation for the Fourier transform comes from the study of Fourier
series. In the study of Fourier series, complicated but periodic functions are
written as the sum of simple waves mathematically represented by sines and
cosines. The Fourier transform is a generalization of the complex Fourier
series that results when the period of the represented function is lengthened
and allowed to approach infinity.
Perhaps the most important use of the Fourier transform is to solve partial differential equations. Many of the equations of the mathematical physics
of the nineteenth century can be treated this way. Other applications include
nuclear magnetic resonance (NMR) and other kinds of spectroscopy (e.g. infrared), magnetic resonance imaging (MRI) and mass spectrometry, quantum
mechanics, and signal processing.

2.1 Definition
Definition 2.1.1. A function f : R → C is called absolutely integrable on R
if the integral of the absolute value of f on R is bounded, i.e.
∫_{−∞}^{∞} |f(x)| dx < ∞.    (2.1)

One denotes by L1 the set of absolutely integrable (on R) functions. It
follows that L1 is a normed linear space with the following operations:

ˆ ’+’ (addition): for f, g ∈ L1 , f +g is defined by (f +g)(x) = f (x)+g(x),


for every x ∈ R;

ˆ ’·’ (multiplication by scalars α ∈ C): for f ∈ L1 , α · f is defined by


(α · f )(x) = αf (x), for every x ∈ R,
and the norm ‖f‖₁ = ∫_{−∞}^{∞} |f(x)| dx.
By (2.1), the area bounded by the graph of |f| and the x axis is finite (see Figure 2.1). Hence,

lim_{x→±∞} f(x) = 0.    (2.2)

Figure 2.1: The Representation of a |f (x)| Function

Definition 2.1.2. The Fourier transform of a function f ∈ L1 is the function fb = F[f] defined by

F[f](ω) = fb(ω) = ∫_{−∞}^{∞} f(x) e^{iωx} dx, ω ∈ R.    (2.3)

The improper integral ∫_{−∞}^{∞} f(x) e^{iωx} dx is called the Fourier Integral.

Proposition 2.1.3. The improper integral in (2.3) is absolutely and uni-


formly convergent on R.

Proof. Since |e^{iωx}| = 1, one obtains

|∫_{−∞}^{∞} f(x) e^{iωx} dx| ≤ ∫_{−∞}^{∞} |f(x)| |e^{iωx}| dx = ∫_{−∞}^{∞} |f(x)| dx < ∞,

for every ω ∈ R.

It follows that the function fb(ω) is defined and continuous for every ω ∈ R.

The function fb is called the image of f or the signal in the frequency


domain, while f is called the preimage of fb or the signal in the time domain.
The operator F defined in (2.3) is called the Fourier transform of f and eiωx
is known as the Fourier kernel .
Example 2.1.4. Determine the Fourier transform of f(x) = 1/(x² + a²), a > 0.
Solution. Let us first prove that f ∈ L1. One gets

∫_{−∞}^{∞} |f(x)| dx = ∫_{−∞}^{∞} 1/(x² + a²) dx = (1/a) arctan(x/a) |_{−∞}^{∞} = (1/a)(π/2 + π/2) = π/a < ∞.
Hence, f ∈ L1.
The Fourier transform (2.3) is fb(ω) = ∫_{−∞}^{∞} e^{iωx}/(x² + a²) dx.
For ω = 0, the previous computation shows that fb(0) = π/a.
For ω > 0, we associate the complex line integral over the closed contour Γ in Figure 2.2 with R > a; so,

I = ∮_Γ e^{iωz}/(z² + a²) dz.

The function e^{iωz}/(z² + a²) has a simple pole z = ia in the domain bounded by Γ. Hence, one obtains from the Residue Theorem ([26, Theorem 5.1.1]) the value of I. We get that

I = 2πi · res(e^{iωz}/(z² + a²), ia) = 2πi · (e^{iωz}/(z² + a²)′)|_{z=ia} = 2πi · e^{iω·ia}/(2ia) = (π/a) e^{−ωa}.

Figure 2.2: The Closed Contour Γ = AB ∪ BCA

Since Γ = AB ∪ BCA and the parametric representation of AB is z = x, x ∈ [−R, R], one gets

I = ∮_Γ e^{iωz}/(z² + a²) dz = ∫_{AB} e^{iωz}/(z² + a²) dz + ∫_{BCA} e^{iωz}/(z² + a²) dz
 = ∫_{−R}^{R} e^{iωx}/(x² + a²) dx + ∫_{BCA} e^{iωz}/(z² + a²) dz = (π/a) e^{−ωa}.    (∗)
For z in BCA it follows that |z| = R. Hence, |z² + a²| ≥ |z|² − |a|² = R² − a² and 1/|z² + a²| ≤ 1/(R² − a²) → 0 when R → ∞, i.e. 1/(z² + a²) uniformly tends to 0 when |z| → ∞. By Jordan's Second Lemma ([26, Lemma 5.3.2]), one obtains

lim_{R→∞} ∫_{BCA} e^{iωz}/(z² + a²) dz = 0.

Now using (∗), as R → ∞, one gets

fb(ω) = ∫_{−∞}^{∞} e^{iωx}/(x² + a²) dx = (π/a) e^{−ωa}, for ω > 0.
For ω < 0, consider the closed contour Γ in Figure 2.3. Similarly, one obtains

∮_Γ e^{iωz}/(z² + a²) dz = 2πi · res(e^{iωz}/(z² + a²), −ia) = 2πi · e^{iω(−ia)}/(−2ia) = −(π/a) e^{ωa}.

Figure 2.3: The Closed Contour Γ = BA ∪ ACB

Taking into account the trigonometric sense, one gets

∮_Γ e^{iωz}/(z² + a²) dz = ∫_{R}^{−R} e^{iωx}/(x² + a²) dx + ∫_{ACB} e^{iωz}/(z² + a²) dz
 = −∫_{−R}^{R} e^{iωx}/(x² + a²) dx + ∫_{ACB} e^{iωz}/(z² + a²) dz = −(π/a) e^{ωa}.
The limit when R → ∞, again by Jordan's Second Lemma ([26, Lemma 5.3.2]), gives the following:

−∫_{−∞}^{∞} e^{iωx}/(x² + a²) dx = −(π/a) e^{ωa}, i.e. ∫_{−∞}^{∞} e^{iωx}/(x² + a²) dx = (π/a) e^{ωa}.

Hence, fb(ω) = (π/a) e^{ωa}, for ω < 0.
In conclusion,

fb(ω) = (π/a) e^{−a|ω|}, ω ∈ R.
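The closed-form result fb(ω) = (π/a) e^{−a|ω|} can also be verified without contour integration. The Python sketch below (an illustrative addition, not part of the book's text; the truncation length L and the tolerance are ad hoc choices) approximates the Fourier integral by a midpoint sum, using the evenness of 1/(x² + a²):

```python
import math

# The Fourier integral of f(x) = 1/(x^2 + a^2) with the book's kernel
# e^{i w x} reduces, by evenness, to 2 * integral_0^L cos(w x)/(x^2 + a^2) dx
# for a large truncation L; it should match (pi/a) * e^{-a|w|}.
def fhat(w, a, L=2000.0, n=200_000):
    h = L / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += math.cos(w * x) / (x * x + a * a)
    return 2.0 * s * h

a = 2.0
for w in (-1.0, 0.0, 0.5):
    exact = (math.pi / a) * math.exp(-a * abs(w))
    assert abs(fhat(w, a) - exact) < 3e-3
print("F[1/(x^2+a^2)](w) matches (pi/a)*exp(-a*|w|)")
```

The tolerance absorbs the tail of the truncated integral, which decays only like 1/L.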
a
Using similar ideas, we can extend the method to the following case. Let f(x) = P(x)/Q(x), where P and Q are real polynomials with Q(x) ≠ 0 for x ∈ R and deg P ≤ deg Q − 2. Consider a1, a2, . . . , an to be the roots of Q(z) located in the upper half plane. Then the conjugates ā1, ā2, . . . , ān are the roots located in the lower half plane (see Figure 2.4).

Figure 2.4: The Roots of Q(x) in the Upper and Lower Half Plane

One obtains

fb(ω) = F[P/Q](ω) =
  { 2πi Σ_{j=1}^{n} res((P(z)/Q(z)) e^{iωz}, a_j),   ω ≥ 0
  { −2πi Σ_{j=1}^{n} res((P(z)/Q(z)) e^{iωz}, ā_j),  ω < 0.    (2.4)
Example 2.1.5. Compute F[e^{−a²x²}](ω) = ∫_{−∞}^{∞} e^{−a²x²} e^{iωx} dx, for a > 0.
Solution. We will make use of Euler's integral ∫_{−∞}^{∞} e^{−t²} dt = Γ(1/2) = √π. Using the substitution t = ax, one obtains

∫_{−∞}^{∞} e^{−a²x²} dx = (1/a) ∫_{−∞}^{∞} e^{−t²} dt = √π/a < ∞.
Hence, f(x) = e^{−a²x²} is absolutely integrable on R.
It follows that F[e^{−a²x²}](ω) = ∫_{−∞}^{∞} e^{−(a²x² − iωx)} dx. Using the fact that

a²x² − iωx = (ax)² − 2ax · (iω/(2a)) + (iω/(2a))² − (iω/(2a))² = (ax − iω/(2a))² + ω²/(4a²),

we get

F[e^{−a²x²}](ω) = ∫_{−∞}^{∞} e^{−[(ax − iω/(2a))² + ω²/(4a²)]} dx = e^{−ω²/(4a²)} ∫_{−∞}^{∞} e^{−(ax − iω/(2a))²} dx.


Substitute now ax − iω/(2a) by z. Then (see Figure 2.5)

∫_{−∞}^{∞} e^{−(ax − iω/(2a))²} dx = (1/a) ∫_{AB} e^{−z²} dz =: (1/a) I.
−∞ AB a

Figure 2.5: The Closed Contour Γ = AB ∪ DC


In the above picture, CD represents the real axis. The parallel lines AB and CD meet at the point at infinity, hence Γ = AB ∪ DC is a closed contour. The function e^{−z²} is analytic on the domain ∆ bounded by Γ and, by Cauchy's Fundamental Theorem ([26, Theorem 4.2.1]), one gets ∮_Γ e^{−z²} dz = 0. Then

0 = ∫_{AB} e^{−z²} dz + ∫_{DC} e^{−z²} dz = ∫_{AB} e^{−z²} dz − ∫_{CD} e^{−z²} dz;

hence,

I = ∫_{AB} e^{−z²} dz = ∫_{CD} e^{−z²} dz = ∫_{−∞}^{∞} e^{−z²} dz = √π.

One obtains fb(ω) = e^{−ω²/(4a²)} · (I/a) = (√π/a) e^{−ω²/(4a²)}.
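The Gaussian pair computed in Example 2.1.5 is easy to check numerically as well. A Python sketch (illustrative, not from the book; the grid parameters are arbitrary):

```python
import math

# The transform of the Gaussian e^{-a^2 x^2} with kernel e^{i w x} should
# equal (sqrt(pi)/a) * e^{-w^2/(4 a^2)}.  Only cos(w x) contributes, since
# the Gaussian is even.
def gauss_hat(w, a, L=15.0, n=60_000):
    h = 2 * L / n
    return sum(math.exp(-a * a * x * x) * math.cos(w * x)
               for x in (-L + (k + 0.5) * h for k in range(n))) * h

a = 1.5
for w in (0.0, 1.0, 3.0):
    exact = (math.sqrt(math.pi) / a) * math.exp(-w * w / (4 * a * a))
    assert abs(gauss_hat(w, a) - exact) < 1e-9
print("F[e^{-a^2 x^2}](w) matches (sqrt(pi)/a) e^{-w^2/(4a^2)}")
```

The agreement is essentially to machine precision because the midpoint rule converges extremely fast for rapidly decaying analytic integrands.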
Remark 2.1.6. The absolute integrability is a sufficient condition for the
existence of the Fourier transform, but it is not necessary. Now we give a
more general definition. The Fourier transform of a function f : R → C is
the integral below, defined in the sense of the principal value, if it exists:
fb(ω) = F[f](ω) = ∫_{−∞}^{∞} f(x) e^{iωx} dx = lim_{l→∞} ∫_{−l}^{l} f(x) e^{iωx} dx.

Example 2.1.7. Consider the function sinc (the cardinal sine)

f(x) = sinc(x) = { (sin x)/x, x ≠ 0;  1, x = 0 },

which is not absolutely integrable (Figure 2.6). Compute fb(ω).

Figure 2.6: The Cardinal Sine

Solution. We will use the Dirichlet integral ∫_{0}^{∞} (sin t)/t dt = π/2. Using the substitution u = −t, one obtains

∫_{−∞}^{0} (sin t)/t dt = ∫_{∞}^{0} ((sin(−u))/(−u)) (−du) = ∫_{0}^{∞} (sin u)/u du = π/2.

Therefore, ∫_{−∞}^{∞} (sin t)/t dt = π, i.e. f is integrable on R.
Similarly, by the substitution u = at,

∫_{0}^{∞} (sin(at))/t dt = ∫_{0}^{∞} (sin u)/u du = π/2, if a > 0,

and

∫_{0}^{∞} (sin(at))/t dt = ∫_{0}^{−∞} (sin u)/u du = −∫_{0}^{∞} (sin u)/u du = −π/2, if a < 0.

Now we start computing the Fourier transform of f. We obtain the following:

fb(ω) = F[f](ω) = ∫_{−∞}^{∞} ((sin x)/x) e^{iωx} dx = lim_{l→∞} ∫_{−l}^{l} ((sin x)/x) e^{iωx} dx
 = lim_{l→∞} ∫_{−l}^{l} ((sin x)/x)(cos(ωx) + i sin(ωx)) dx
 = lim_{l→∞} ∫_{−l}^{l} ((sin x)/x) cos(ωx) dx + i lim_{l→∞} ∫_{−l}^{l} ((sin x)/x) sin(ωx) dx.

As the function ((sin x)/x) cos(ωx) is even and the function ((sin x)/x) sin(ωx) is odd, it follows that

fb(ω) = lim_{l→∞} 2 ∫_{0}^{l} ((sin x)/x) cos(ωx) dx = 2 ∫_{0}^{∞} ((sin x) cos(ωx))/x dx
 = ∫_{0}^{∞} (sin((1 + ω)x))/x dx + ∫_{0}^{∞} (sin((1 − ω)x))/x dx.
Let us denote ∫_{0}^{∞} (sin((1 + ω)x))/x dx by I1 and ∫_{0}^{∞} (sin((1 − ω)x))/x dx by I2. By the previous computations, we have the following situations:
- if ω < −1, then 1 + ω < 0 and 1 − ω > 0, hence I1 = −π/2, I2 = π/2 and fb(ω) = 0;
- if ω = −1, then I1 = 0 and I2 = π/2, hence fb(ω) = π/2;
- if −1 < ω < 1, then 1 + ω > 0 and 1 − ω > 0, hence I1 = I2 = π/2 and fb(ω) = π;
- if ω = 1, then I1 = π/2 and I2 = 0, hence fb(ω) = π/2;
- if ω > 1, then 1 + ω > 0 and 1 − ω < 0, hence I1 = π/2, I2 = −π/2 and fb(ω) = 0.
Therefore,

fb(ω) = { π, ω ∈ (−1, 1);  0, |ω| > 1;  π/2, ω = ±1 }.

The function f is not absolutely integrable and its Fourier transform has
discontinuities at ±1 (see Figure 2.7).
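The piecewise result can be confirmed numerically. A Python sketch (illustrative, not from the book; the truncation length and tolerance are ad hoc, since the integral converges only slowly):

```python
import math

# The transform of sinc equals pi on (-1, 1) and 0 for |w| > 1.  Since
# sinc(x) cos(w x) is even, fb(w) = 2 * integral_0^L (sin x / x) cos(w x) dx,
# truncated at a large L.
def sinc_hat(w, L=2000.0, n=200_000):
    h = L / n
    s = 0.0
    for k in range(n):
        x = (k + 0.5) * h
        s += math.sin(x) / x * math.cos(w * x)
    return 2.0 * s * h

assert abs(sinc_hat(0.5) - math.pi) < 5e-3   # inside (-1, 1)
assert abs(sinc_hat(2.0)) < 5e-3             # outside [-1, 1]
print("sinc transform: pi inside (-1,1), 0 outside")
```

The slow 1/L decay of the truncation error reflects the fact that sinc is not absolutely integrable.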

Figure 2.7: The Fourier Representation of the Cardinal Sine

2.2 Properties of the Fourier Transform


In the sequel, for the sake of simplicity, we will assume that all the Fourier
transforms involved exist and some interchanges of the order of integration
are allowed.
A class of functions that have this property is the space S of rapidly decreasing functions (indefinitely differentiable functions f : R → C which satisfy sup_{x∈R} |x^k f^{(n)}(x)| < ∞, for every k, n ∈ N). Moreover, the restriction to S of the Fourier transform F is a bijection F : S → S, i.e. for every f ∈ S, fb = F[f] ∈ S and the correspondence f ↦ fb is a bijection.
Theorem 2.2.1 (Linearity). For every f, g ∈ L1 and α, β ∈ C,

F[αf + βg](ω) = αF[f](ω) + βF[g](ω).    (2.5)

Proof. Since the integral is linear, one obtains

F[αf + βg](ω) = ∫_{−∞}^{∞} [αf(x) + βg(x)] e^{iωx} dx = α ∫_{−∞}^{∞} f(x) e^{iωx} dx + β ∫_{−∞}^{∞} g(x) e^{iωx} dx = αF[f](ω) + βF[g](ω).
Theorem 2.2.2 (Similarity or Change of the Time Scale). For every f ∈ L1 and a > 0, one gets

F[f(ax)](ω) = (1/a) fb(ω/a).    (2.6)

Proof. By definition, one obtains

F[f(ax)](ω) = ∫_{−∞}^{∞} f(ax) e^{iωx} dx.

Now we restore the signal using the substitution t = ax. It follows that

F[f(ax)](ω) = ∫_{−∞}^{∞} f(t) e^{iω(t/a)} (1/a) dt = (1/a) ∫_{−∞}^{∞} f(t) e^{i(ω/a)t} dt = (1/a) fb(ω/a).

Theorem 2.2.3 (Time Delay). For every f ∈ L1 and a ∈ R,

F[f(x − a)](ω) = e^{iωa} F[f](ω).    (2.7)

Proof. Using the substitution t = x − a, one obtains

F[f(x − a)](ω) = ∫_{−∞}^{∞} f(x − a) e^{iωx} dx = ∫_{−∞}^{∞} f(t) e^{iω(t+a)} dt = e^{iωa} ∫_{−∞}^{∞} f(t) e^{iωt} dt = e^{iωa} F[f](ω).

Example 2.2.4. Determine F[f(x − 3)](ω) if f(x) = 1/(x² + 4).
Solution. By Example 2.1.4, the image of f(x) = 1/(x² + 4) is (π/2) e^{−2|ω|}. Then

F[f(x − 3)](ω) = F[1/((x − 3)² + 4)](ω) = (π/2) e^{−2|ω|} e^{3iω}.
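The Time Delay rule itself is easy to test numerically. In the Python sketch below (an illustrative check, not from the book), the signal e^{−x²} is a hypothetical choice, picked only because it decays fast:

```python
import cmath, math

# Check F[f(x - a)](w) = e^{i w a} * F[f](w) with a midpoint approximation
# of the Fourier integral (the book's kernel e^{i w x}).
def transform(g, w, L=20.0, n=40_000):
    h = 2 * L / n
    return sum(g(x) * cmath.exp(1j * w * x)
               for x in (-L + (k + 0.5) * h for k in range(n))) * h

f = lambda x: math.exp(-x * x)
a, w = 3.0, 1.2
lhs = transform(lambda x: f(x - a), w)
rhs = cmath.exp(1j * w * a) * transform(f, w)
assert abs(lhs - rhs) < 1e-6
print("time-delay rule verified numerically")
```

The two sides agree because shifting the signal only multiplies its image by the unimodular factor e^{iωa}.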

Theorem 2.2.5 (Translation). For every f ∈ L1 and a ∈ R, one obtains

F[e^{iax} f(x)](ω) = fb(ω + a).    (2.8)

Proof. Using the definition of the Fourier transform, we get that

F[e^{iax} f(x)](ω) = ∫_{−∞}^{∞} e^{iax} f(x) e^{iωx} dx = ∫_{−∞}^{∞} e^{i(ω+a)x} f(x) dx = fb(ω + a).

Example 2.2.6. Compute F[e^{5ix}/(x² + 16)](ω).
Solution. Again by Example 2.1.4, one obtains

F[e^{i5x}/(x² + 16)](ω) = (π/4) e^{−4|ω+5|}.

Theorem 2.2.7 (Differentiation of the Preimage). For every f ∈ L1 such that f′ ∈ L1,

F[f′(x)](ω) = −iω fb(ω).    (2.9)

Proof. Using the definition of the Fourier transform,

F[f′(x)](ω) = ∫_{−∞}^{∞} f′(x) e^{iωx} dx.

Now we integrate by parts in order to restore the signal. It follows that

F[f′(x)](ω) = f(x) e^{iωx} |_{−∞}^{∞} − ∫_{−∞}^{∞} f(x) iω e^{iωx} dx = −iω ∫_{−∞}^{∞} f(x) e^{iωx} dx = −iω fb(ω),

since f is absolutely integrable and lim_{x→±∞} |f(x) e^{iωx}| = lim_{x→±∞} |f(x)| = 0.

Now we are going to prove a generalization of Theorem 2.2.7 (Differentiation of the Preimage). For every f ∈ L1 such that f′, f″, . . . , f^{(n)} ∈ L1, n ∈ N*,

F[f^{(n)}(x)](ω) = (−iω)^n fb(ω), n ∈ N*.    (2.10)

Proof. The statement will be proved by induction. Step n = 1 is true from
formula (2.9). Assume now that (2.10) is correct for n ∈ N∗ . We have to
show that formula (2.10) is true for n + 1. Using (2.9), one gets

F[f (n+1) (x)](ω) = F[(f (n) )0 (x)](ω) = −iωF[f (n) (x)](ω).

This implies that

F[f (n+1) (x)](ω) = (−iω)(−iω)n fb(ω) = (−iω)n+1 fb(ω),

which proves step n + 1. Hence, the proof is done.

Theorem 2.2.8 (Differentiation of the Image).

F[ixf(x)](ω) = [fb(ω)]′.    (2.11)

Proof. By the Leibniz rule of differentiation under the integral sign, one obtains

[fb(ω)]′ = (∫_{−∞}^{∞} f(x) e^{iωx} dx)′_ω = ∫_{−∞}^{∞} f(x) (e^{iωx})′_ω dx = ∫_{−∞}^{∞} f(x) ix e^{iωx} dx = F[ixf(x)](ω).

Now we are going to prove the following generalization of Theorem 2.2.8


(Differentiation of the Image):

F[(ix)n f (x)](ω) = fb(n) (ω), n ∈ N∗ . (2.12)


Proof. We are going to show this by induction. Step n = 1 is just formula (2.11). Now we assume that (2.12) is true for n ∈ N* and we have to prove it for n + 1. From (2.11), it follows that

F[(ix)^{n+1} f(x)](ω) = F[ix · (ix)^n f(x)](ω) = (F[(ix)^n f(x)](ω))′ = (fb^{(n)}(ω))′ = fb^{(n+1)}(ω),

which proves step n + 1. Therefore, formula (2.12) is true for all n ∈ N*.

Definition 2.2.9. The convolution of two functions f, g ∈ L1 is the function f ∗ g defined by

(f ∗ g)(x) = ∫_{−∞}^{∞} f(t) g(x − t) dt.    (2.13)

Theorem 2.2.10 (Convolution). For every f, g ∈ L1, we have

F[f ∗ g](ω) = F[f](ω) F[g](ω).    (2.14)

Proof. Let us prove first that f ∗ g ∈ L1. One gets

∫_{−∞}^{∞} |(f ∗ g)(x)| dx = ∫_{−∞}^{∞} |∫_{−∞}^{∞} f(t) g(x − t) dt| dx ≤ ∫_{−∞}^{∞} (∫_{−∞}^{∞} |f(t)| |g(x − t)| dt) dx.

From Fubini's Theorem (see [9]) on the interchange of integrals, it follows that

∫_{−∞}^{∞} (∫_{−∞}^{∞} |f(t)| |g(x − t)| dt) dx = ∫_{−∞}^{∞} |f(t)| (∫_{−∞}^{∞} |g(x − t)| dx) dt = ∫_{−∞}^{∞} |f(t)| ‖g‖₁ dt = ‖g‖₁ ‖f‖₁ < ∞.

Hence, ‖f ∗ g‖₁ = ∫_{−∞}^{∞} |(f ∗ g)(x)| dx < ∞, i.e. f ∗ g ∈ L1.
Using the definition of the convolution,

F[f ∗ g](ω) = ∫_{−∞}^{∞} (f ∗ g)(x) e^{iωx} dx = ∫_{−∞}^{∞} (∫_{−∞}^{∞} f(t) g(x − t) dt) e^{iωx} dx,

and, interchanging the order of the integration, one obtains

F[f ∗ g](ω) = ∫_{−∞}^{∞} f(t) (∫_{−∞}^{∞} g(x − t) e^{iωx} dx) dt.

From (2.7), one gets

F[f ∗ g](ω) = ∫_{−∞}^{∞} f(t) e^{iωt} gb(ω) dt = gb(ω) ∫_{−∞}^{∞} f(t) e^{iωt} dt = gb(ω) fb(ω).
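The Convolution Theorem can also be checked numerically. The following Python sketch (illustrative, not from the book; the Gaussians, the grid size and the test frequency are arbitrary choices) discretizes f(x) = e^{−x²} and g(x) = e^{−2x²}, computes f ∗ g by direct summation and compares the transforms:

```python
import cmath, math

# Midpoint discretization of f, g on [-L, L]; all truncated tails are
# negligibly small for these Gaussians.
L, n = 8.0, 1000
h = 2 * L / n
xs = [-L + (k + 0.5) * h for k in range(n)]
f = [math.exp(-x * x) for x in xs]
g = [math.exp(-2 * x * x) for x in xs]

def ft(vals, w):
    # Fourier integral with the book's kernel e^{i w x}
    return sum(v * cmath.exp(1j * w * x) for v, x in zip(vals, xs)) * h

# direct computation of (f * g)(x) = integral of f(t) g(x - t) dt
conv = [h * sum(f[j] * math.exp(-2 * (x - xs[j]) ** 2) for j in range(n))
        for x in xs]

w = 0.8
assert abs(ft(conv, w) - ft(f, w) * ft(g, w)) < 1e-6
print("F[f*g](w) = F[f](w) F[g](w) verified numerically")
```

Replacing the O(n²) convolution loop by an FFT-based product is exactly the speedup discussed in Section 2.6.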

After we introduce the inversion formula, we can prove two more properties of the Fourier transform.

2.3 The Inversion Formula


An integral transform is useful if we can recover the signal in the time
domain from its image. This is the role of the inversion formula.
Theorem 2.3.1 (Inversion Formula). If a function f ∈ L1 is continuous at the point x ∈ R, then

f(x) = (1/(2π)) ∫_{−∞}^{∞} fb(ω) e^{−iωx} dω.    (2.15)
Proof. Consider the integral I(a) = ∫_{−∞}^{∞} e^{−a²ω²} fb(ω) e^{−iωx} dω, a > 0, which approaches the integral (2.15) as a → 0. Replace fb(ω) by (2.3) (with y instead of x) and change the integration order:

I(a) = ∫_{−∞}^{∞} e^{−a²ω²} (∫_{−∞}^{∞} f(y) e^{iωy} dy) e^{−iωx} dω = ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} e^{−a²ω² + iω(y−x)} dω) dy.

In Example 2.1.5, we obtained F[e^{−a²x²}](ω) = ∫_{−∞}^{∞} e^{−a²x²} e^{iωx} dx = (√π/a) e^{−ω²/(4a²)}, for a > 0. Substituting x by ω and ω by y − x, one obtains that the inner integral is ∫_{−∞}^{∞} e^{−a²ω²} e^{iω(y−x)} dω = (√π/a) e^{−(y−x)²/(4a²)}. Therefore,

I(a) = (√π/a) ∫_{−∞}^{∞} f(y) e^{−(y−x)²/(4a²)} dy.

Using the substitution (y − x)/(2a) = t ⟺ y = 2at + x, which implies that dy = 2a dt, we obtain that

I(a) = (√π/a) ∫_{−∞}^{∞} f(2at + x) e^{−t²} · 2a dt = 2√π ∫_{−∞}^{∞} f(2at + x) e^{−t²} dt.

Now we take the limit a → 0. Since f is continuous at x, one obtains lim_{a→0} f(2at + x) = f(x). Hence,

lim_{a→0} I(a) = 2√π ∫_{−∞}^{∞} f(x) e^{−t²} dt = 2√π f(x) ∫_{−∞}^{∞} e^{−t²} dt = 2√π f(x) · √π = 2πf(x).

But lim_{a→0} I(a) = ∫_{−∞}^{∞} fb(ω) e^{−iωx} dω, so

f(x) = (1/(2π)) ∫_{−∞}^{∞} fb(ω) e^{−iωx} dω.

Theorem 2.3.2 (Product). For every f, g ∈ L1,

F[f(x)g(x)](ω) = (1/(2π)) ∫_{−∞}^{∞} fb(u) gb(ω − u) du.    (2.16)

Proof. We will use formula (2.15) (Inversion Formula) for f(x). We get

F[f(x)g(x)](ω) = ∫_{−∞}^{∞} f(x) g(x) e^{iωx} dx = ∫_{−∞}^{∞} ((1/(2π)) ∫_{−∞}^{∞} fb(u) e^{−iux} du) g(x) e^{iωx} dx.

Interchanging the integrals, one obtains

F[f(x)g(x)](ω) = (1/(2π)) ∫_{−∞}^{∞} fb(u) (∫_{−∞}^{∞} g(x) e^{i(ω−u)x} dx) du = (1/(2π)) ∫_{−∞}^{∞} fb(u) gb(ω − u) du.
Theorem 2.3.3 (Plancherel's Theorem). For every f, g ∈ L1,

∫_{−∞}^{∞} f(x) \overline{g(x)} dx = (1/(2π)) ∫_{−∞}^{∞} fb(u) \overline{gb(u)} du,    (2.17)

where the bar denotes complex conjugation.

Proof. We first notice that \overline{gb(−ω)} = F[\overline{g}(x)](ω). Indeed, since \overline{e^{iθ}} = cos θ − i sin θ = cos(−θ) + i sin(−θ) = e^{−iθ}, one obtains

\overline{gb(−ω)} = \overline{∫_{−∞}^{∞} g(x) e^{−iωx} dx} = ∫_{−∞}^{∞} \overline{g(x)} e^{iωx} dx = F[\overline{g}(x)](ω).

Let us write (2.16) for \overline{g}(x) and ω = 0. We get that

(∫_{−∞}^{∞} f(x) \overline{g(x)} e^{iωx} dx)|_{ω=0} = ((1/(2π)) ∫_{−∞}^{∞} fb(u) \overline{gb(−ω + u)} du)|_{ω=0};

hence,

∫_{−∞}^{∞} f(x) \overline{g(x)} dx = (1/(2π)) ∫_{−∞}^{∞} fb(u) \overline{gb(u)} du.

Corollary 2.3.4 (Parseval's Formula). For every f ∈ L1,

∫_{−∞}^{∞} |f(x)|² dx = (1/(2π)) ∫_{−∞}^{∞} |fb(ω)|² dω.    (2.18)

The last formula represents the total power content of the signal f.
Proof. Since for z = a + ib one gets z z̄ = a² + b² = |z|², it follows that f(x) \overline{f(x)} = |f(x)|² and (2.18) is obtained from (2.17) for g = f.
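Parseval's Formula can be illustrated with the Gaussian of Example 2.1.5, whose transform is known in closed form. A Python sketch (illustrative, not from the book; the value of a and the grid parameters are arbitrary):

```python
import math

# Check integral of |f|^2 = (1/(2 pi)) * integral of |fb|^2 for
# f(x) = e^{-a^2 x^2}, with fb(w) = (sqrt(pi)/a) e^{-w^2/(4 a^2)}.
def midpoint(fun, L, n):
    h = 2 * L / n
    return sum(fun(-L + (k + 0.5) * h) for k in range(n)) * h

a = 1.3
lhs = midpoint(lambda x: math.exp(-a * a * x * x) ** 2, 12.0, 50_000)
fb = lambda w: (math.sqrt(math.pi) / a) * math.exp(-w * w / (4 * a * a))
rhs = midpoint(lambda w: fb(w) ** 2, 40.0, 50_000) / (2 * math.pi)
assert abs(lhs - rhs) < 1e-9
print("total power agrees in the time and frequency domains")
```

Both sides equal √(π/2)/a here, matching the interpretation of (2.18) as conservation of the total power content.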

Spectral Analysis

The formula (2.15) (Inversion Formula) can be written (by e^{−iθ} = cos θ − i sin θ) as

f(x) = (1/(2π)) ∫_{−∞}^{∞} fb(ω)(cos(ωx) − i sin(ωx)) dω.

The formula can be interpreted as the expression of the signal f(x) as a linear combination of simple harmonic functions cos(ωx) and sin(ωx). It follows that the frequency content of the signal f(x) is spread over a continuous range of frequencies with the amplitude fb(ω) of any frequency ω ∈ R. One considers |f(x)|² as a measure of the intensity of the signal f at a moment x and |fb(ω)|² as the measure of the intensity at the frequency ω.
Formula (2.18) from Corollary 2.3.4 (Parseval's Formula) has the following interpretation: if |f(x)|² dx is the power content of f(x) in the interval [x, x + dx], then (1/(2π))|fb(ω)|² dω is the power content in the frequency range ω to ω + dω. Hence, ∫_{−∞}^{∞} |f(x)|² dx = (1/(2π)) ∫_{−∞}^{∞} |fb(ω)|² dω represents the total power content of the signal f.
Theorem 2.3.5. If x is a discontinuity of the first kind of the function f ∈ L1 (i.e. the one sided limits f(x−) and f(x+) are finite), then

(f(x−) + f(x+))/2 = (1/(2π)) ∫_{−∞}^{∞} fb(ω) e^{−iωx} dω.    (2.19)
Proof. We modify the proof of Theorem 2.3.1 (Inversion Formula) by writing

I(a) = 2√π ∫_{−∞}^{∞} f(2at + x) e^{−t²} dt = 2√π (∫_{−∞}^{0} f(2at + x) e^{−t²} dt + ∫_{0}^{∞} f(2at + x) e^{−t²} dt).

Let us denote ∫_{−∞}^{0} f(2at + x) e^{−t²} dt by I1 and ∫_{0}^{∞} f(2at + x) e^{−t²} dt by I2. Since in I1, t ∈ (−∞, 0), it implies that 2at + x < x, hence lim_{a→0} f(2at + x) = f(x−). Similarly, in I2, t ∈ (0, ∞), which implies that 2at + x > x and lim_{a→0} f(2at + x) = f(x+). As a → 0, we get that

∫_{−∞}^{∞} fb(ω) e^{−iωx} dω = lim_{a→0} I(a) = 2√π (f(x−) ∫_{−∞}^{0} e^{−t²} dt + f(x+) ∫_{0}^{∞} e^{−t²} dt).

From Euler's integral ∫_{−∞}^{∞} e^{−t²} dt = √π, since e^{−t²} is an even function, it follows that ∫_{−∞}^{0} e^{−t²} dt = ∫_{0}^{∞} e^{−t²} dt = √π/2. So,

lim_{a→0} I(a) = 2√π (f(x−) + f(x+)) · √π/2 = π(f(x−) + f(x+));

hence,

π(f(x−) + f(x+)) = ∫_{−∞}^{∞} fb(ω) e^{−iωx} dω.

Remark 2.3.6. Remember that a function f that has only discontinuities of the first kind (jump discontinuities) is said to be standardized if the equality

f(x) = (f(x−) + f(x+))/2

holds for every x ∈ R.
If f is continuous at x, then f(x−) = f(x+) = f(x), so obviously the previous equality holds.
In the sequel we will consider only standardized functions f, hence formula (2.19) has the form of (2.15), i.e. formula (2.15) is true for every x ∈ R.
We also notice that the kernel of the inverse transform F⁻¹ defined by F⁻¹[fb] = f is e^{−iωx}, while the kernel of F is e^{iωx}.

2.4 Fourier Integral


We replace in formula (2.15) (Inversion Formula) the image fb(ω) by its
definition (2.3) (with y instead of x). Changing the order of the integration,
one obtains the so called Complex Fourier Integral :
Z ∞ Z ∞ 
1
f (x) = f (y)e dy e−iωx dω.
iωy
(2.20)
2π −∞ −∞

We will obtain other representations of the function f(x) as Fourier integrals. Interchanging the order of integration and factoring out f(y), we get

f(x) = (1/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} e^{iω(y−x)} dω) dy.

Since e^{iθ} = cos θ + i sin θ, the right hand side splits into the following sum:

f(x) = (1/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} cos ω(y − x) dω) dy + (i/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} sin ω(y − x) dω) dy.    (2.21)

Denoting ∫_{−∞}^{∞} cos ω(y − x) dω by I1 and ∫_{−∞}^{∞} sin ω(y − x) dω by I2, since cos ω(y − x) and sin ω(y − x) are even and odd functions with respect to ω, respectively, one gets

I1 = 2 ∫_{0}^{∞} cos ω(y − x) dω and I2 = 0.

Hence, interchanging the order of integration, formula (2.20) reduces to the representation of f as a Real Fourier Integral:

f(x) = (1/π) ∫_{0}^{∞} (∫_{−∞}^{∞} f(y) cos ω(y − x) dy) dω.    (2.22)

But cos ω(y − x) = cos ωy cos ωx + sin ωy sin ωx and, taking into account that cos ωx and sin ωx are constants with respect to the variable of integration y, the integral in (2.22) splits as

f(x) = (1/π) ∫_{0}^{∞} cos ωx (∫_{−∞}^{∞} f(y) cos ωy dy) dω + (1/π) ∫_{0}^{∞} sin ωx (∫_{−∞}^{∞} f(y) sin ωy dy) dω.    (2.23)

The Fourier Integral for Even Functions

If f is even, i.e. f(−x) = f(x), for every x ∈ R, then, since cos ωy is even,

∫_{−∞}^{∞} f(y) cos ωy dy = 2 ∫_{0}^{∞} f(y) cos ωy dy;

since sin ωy is odd, we get that

∫_{−∞}^{∞} f(y) sin ωy dy = 0.

Therefore, formula (2.23) becomes the following:

f(x) = (2/π) ∫_{0}^{∞} cos ωx (∫_{0}^{∞} f(y) cos ωy dy) dω.    (2.24)

The Fourier Integral for Odd Functions

If f is odd, i.e. f(−x) = −f(x), for every x ∈ R, then, since cos ωy is even,

∫_{−∞}^{∞} f(y) cos ωy dy = 0;

since sin ωy is odd,

∫_{−∞}^{∞} f(y) sin ωy dy = 2 ∫_{0}^{∞} f(y) sin ωy dy.

Therefore, formula (2.23) becomes the following:

f(x) = (2/π) ∫_{0}^{∞} sin ωx (∫_{0}^{∞} f(y) sin ωy dy) dω.    (2.25)

The Fourier Cosine and Sine Transforms

Let f : (0, ∞) → C be a function in L1(0, ∞). By considering the extensions of f to an even function and to an odd function, formulas (2.24) and (2.25), respectively, will hold for these extensions to R. Hence, they will hold for f(x), x ∈ (0, ∞).

Definition 2.4.1. The functions

Fc[f](ω) = fbc(ω) = ∫_{0}^{∞} f(y) cos ωy dy    (2.26)

and

Fs[f](ω) = fbs(ω) = ∫_{0}^{∞} f(y) sin ωy dy    (2.27)

are called the Fourier Cosine Transform and Fourier Sine Transform of f, respectively.

By replacing the inner integrals from formulas (2.24) and (2.25) by fbc(ω) and fbs(ω), respectively, one obtains the Inverse Formulas of the Fourier Cosine and Sine Transforms:

Fc⁻¹[fbc](x) = f(x) = (2/π) ∫_{0}^{∞} fbc(ω) cos ωx dω    (2.28)

and

Fs⁻¹[fbs](x) = f(x) = (2/π) ∫_{0}^{∞} fbs(ω) sin ωx dω.    (2.29)
Next we are going to provide some examples and solve a couple of integral
equations with kernels cos ωx and sin ωx.
Example 2.4.2. Solve the following equation:

∫_{0}^{∞} f(x) cos ωx dx = { e^ω, ω ∈ (0, 1);  0, ω ≥ 1 }.

Solution. One gets fbc(ω) = e^ω for ω ∈ (0, 1) and fbc(ω) = 0 for ω ≥ 1. Using the Inversion Formula (2.28), one obtains

f(x) = (2/π) ∫_{0}^{∞} fbc(ω) cos ωx dω = (2/π) ∫_{0}^{1} e^ω cos ωx dω
 = (2/π) ∫_{0}^{1} e^ω (e^{iωx} + e^{−iωx})/2 dω = (1/π) (∫_{0}^{1} e^{ω(1+ix)} dω + ∫_{0}^{1} e^{ω(1−ix)} dω)
 = (1/π) ((e^{1+ix} − 1)/(1 + ix) + (e^{1−ix} − 1)/(1 − ix))
 = (1/π) · ((1 − ix)(e^{1+ix} − 1) + (1 + ix)(e^{1−ix} − 1))/(1 + x²)
 = (1/π) · (e(1 − ix)(cos x + i sin x) + e(1 + ix)(cos x − i sin x) − 2)/(1 + x²).

In conclusion,

f(x) = (2/π) · (e cos x + e x sin x − 1)/(1 + x²).
Example 2.4.3. Solve the following equation:

∫_{0}^{∞} f(x) sin ωx dx = ω/(ω² + 1).

Solution. As fbs(ω) = ω/(ω² + 1), using the Inversion Formula (2.29), one gets (for x > 0)

f(x) = (2/π) ∫_{0}^{∞} fbs(ω) sin ωx dω = (2/π) ∫_{0}^{∞} (ω/(ω² + 1)) sin ωx dω.

We will use residues for solving the integral. Due to the fact that the function (ω/(ω² + 1)) sin ωx is even with respect to the variable ω, it follows that

f(x) = (1/π) ∫_{−∞}^{∞} (ω/(ω² + 1)) sin ωx dω = (1/π) Im(∫_{−∞}^{∞} (ω/(ω² + 1)) e^{iωx} dω)
 = (1/π) Im(2πi · res(z e^{ixz}/(z² + 1), i)) = (1/π) Im(πi e^{−x})
 = e^{−x}.
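Indeed, the sine transform of e^{−x} is ∫_{0}^{∞} e^{−y} sin ωy dy = ω/(ω² + 1), which can be checked numerically. A Python sketch (illustrative, not from the book; the truncation length is arbitrary):

```python
import math

# Midpoint approximation of the Fourier sine transform of e^{-x}; the
# closed form is w/(w^2 + 1), confirming that f(x) = e^{-x} solves the
# integral equation of Example 2.4.3.
def fs_hat(w, L=60.0, n=200_000):
    h = L / n
    return sum(math.exp(-(k + 0.5) * h) * math.sin(w * (k + 0.5) * h)
               for k in range(n)) * h

for w in (0.3, 1.0, 4.0):
    assert abs(fs_hat(w) - w / (w * w + 1)) < 1e-6
print("Fs[e^{-x}](w) = w/(w^2+1) verified numerically")
```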

Remark 2.4.4. We have shown that I2 and the second integral in (2.21) are equal to 0. Then formula (2.21) remains true with −I2 = 0, i.e.

f(x) = (1/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} cos ω(y − x) dω) dy − (i/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} sin ω(y − x) dω) dy.

Since e^{−iθ} = cos θ − i sin θ, this representation of f can be written as

f(x) = (1/(2π)) ∫_{−∞}^{∞} f(y) (∫_{−∞}^{∞} e^{−iω(y−x)} dω) dy.

By changing the order of integration, one obtains

f(x) = (1/(2π)) ∫_{−∞}^{∞} e^{iωx} (∫_{−∞}^{∞} f(y) e^{−iωy} dy) dω.    (2.30)

Let us just use the variable x instead of y. We get the following formula:

fb(ω) = ∫_{−∞}^{∞} f(x) e^{−iωx} dx.    (2.31)

Then (2.30) can be written as the inverse formula for this Fourier transform:

f(x) = (1/(2π)) ∫_{−∞}^{∞} fb(ω) e^{iωx} dω.

Therefore, if one takes e^{−iωx} as the kernel of the direct formula of the Fourier transform, then the inversion formula kernel is e^{iωx}.

The difference between the two approaches is ±I2 = 0, hence both definitions (2.3) and (2.31) are valid. Their properties are similar, with small differences. For instance, the differentiation of the original becomes F[f′(x)](ω) = iω fb(ω) (for the definition given by (2.31)).
Similarly, one can modify the kernel e^{iωx} and one obtains other forms of the Fourier transform. A frequently used transform is the following:

F[f](ω) = fb(ω) = ∫_{−∞}^{∞} f(x) e^{−2πiωx} dx.    (2.32)

By a slight modification in the proof of Theorem 2.3.1, one can show that the inverse formula has the form

F⁻¹[fb](x) = f(x) = ∫_{−∞}^{∞} fb(ω) e^{2πiωx} dω.    (2.33)

In this case some of the properties remain unchanged (linearity, similarity, convolution), while others suffer minor changes. For instance:

1. time-delay becomes

F[f(x − a)](ω) = e^{−2πiωa} fb(ω), a ∈ R;

2. translation becomes

F[e^{2πiax} f(x)](ω) = fb(ω − a), a ∈ R;

3. differentiation of the preimage becomes

F[f′(x)](ω) = 2πiω fb(ω),

F[f^{(n)}(x)](ω) = (2πiω)^n fb(ω), n ∈ N*;

4. differentiation of the image becomes

F[2πixf(x)](ω) = −fb′(ω),

F[(−2πix)^n f(x)](ω) = fb^{(n)}(ω), n ∈ N*.

2.5 Discrete Fourier Transform (DFT)
Consider the form (2.31) of the Fourier transform of a continuous signal f ∈ L1. Let us restrict ω to the interval [0, 2π] and consider N samples

ω : 0, 2π/N, 4π/N, . . . , 2kπ/N, . . . , (N − 1)2π/N.

Then

fb(2kπ/N) = ∫_{−∞}^{∞} f(x) e^{−i(2π/N)kx} dx, k = 0, N − 1.    (2.34)

For x take N samples: 0, 1, 2, . . . , n, . . . , N − 1; denote the values of the signal f(x) at the points n by fn (Figure 2.8), hence the signal is represented by the vector f = (f0, f1, . . . , f_{N−1})^T.

Figure 2.8: The Values of the Signal f (x) at the Points n

Formula (2.34) suggests the form of the DFT.

Definition 2.5.1. If f is an N-dimensional vector and Fd : C^N → C^N is an operator such that

Fd[f]k = fbk = Σ_{n=0}^{N−1} fn e^{−i(2π/N)kn},    (2.35)

then the vector Fd[f] = fb = (fb0, fb1, . . . , fb_{N−1})^T is called the Discrete Fourier Transform of f.

Denoting e^{−i2π/N} by w, formula (2.35) becomes

Fd[f]k = fbk = Σ_{n=0}^{N−1} fn w^{kn}, k = 0, N − 1.    (2.36)

Notice that w = cos(2π/N) − i sin(2π/N) and that fbk = f0 + f1 w^k + · · · + fn w^{kn} + · · · + f_{N−1} w^{k(N−1)}.
Consider the matrix

W = [w^{kn}]_{k,n=0,...,N−1} =

[ 1  1        1          · · ·  1           · · ·  1               ]
[ 1  w        w²         · · ·  w^n         · · ·  w^{N−1}         ]
[ 1  w²       w⁴         · · ·  w^{2n}      · · ·  w^{2(N−1)}      ]
[ · · ·                                                            ]
[ 1  w^k      w^{2k}     · · ·  w^{kn}      · · ·  w^{k(N−1)}      ]
[ · · ·                                                            ]
[ 1  w^{N−1}  w^{2(N−1)} · · ·  w^{(N−1)n}  · · ·  w^{(N−1)(N−1)}  ]

Using (2.36), we obtain the following formula for the DFT:

Fd[f] = fb = W f.    (2.37)

Proposition 2.5.2. One gets

W · W̄ = N I_N.    (2.38)

Proof. Since \overline{e^{iθ}} = e^{−iθ}, it follows that w̄ = \overline{e^{−i2π/N}} = e^{i2π/N} = w^{−1}.
Let us multiply the k-th row of W by the n-th column of W̄, 1 ≤ k, n ≤ N. One obtains

Σ_{l=0}^{N−1} w^{kl} w̄^{ln} = Σ_{l=0}^{N−1} w^{kl} w^{−ln} = Σ_{l=0}^{N−1} w^{(k−n)l}.
If k = n, then this value is Σ_{l=0}^{N−1} w^0 = 1 + 1 + · · · + 1 = N. If k ≠ n, then we have a geometric progression with ratio q = w^{k−n}, which has the sum Σ_{l=0}^{N−1} q^l = (1 − q^N)/(1 − q). Hence, the value is

Σ_{l=0}^{N−1} w^{(k−n)l} = (1 − w^{(k−n)N})/(1 − w^{k−n}) = (1 − 1)/(1 − w^{k−n}) = 0,

since w^{(k−n)N} = e^{−i(2π/N)(k−n)N} = e^{−i2π(k−n)} = cos(2π(k − n)) − i sin(2π(k − n)) = 1. Therefore, the product W · W̄ is a matrix with N on the main diagonal and 0 elsewhere, i.e. W · W̄ = N I_N.

Corollary 2.5.3. One obtains

W^{−1} = (1/N) W̄.    (2.39)

Proof. Indeed, by formula (2.38), W · ((1/N) W̄) = (1/N) W W̄ = (1/N) N I_N = I_N.

Using (2.37), we get the inversion formula f = W^{−1} fb, which can be written as

Fd^{−1}[fb] = (1/N) W̄ fb.    (2.40)
Component-wise, this is equivalent to fn = (1/N) Σ_{k=0}^{N−1} w̄^{kn} fbk, i.e.

fn = (1/N) Σ_{k=0}^{N−1} fbk w^{−kn}, n = 0, N − 1.    (2.41)
Example 2.5.4. Consider the case N = 4. Then w = e^{−i2π/4} = e^{−iπ/2} = cos(π/2) − i sin(π/2) = −i. Therefore, w = −i, w² = −1, w³ = i, w⁴ = 1, w^{4q+r} = (w⁴)^q w^r = w^r, ∀q ∈ Z, r ∈ {0, 1, 2, 3}. The Vandermonde matrix is

W = [ 1  1   1   1  ]   [ 1  1   1   1  ]
    [ 1  w   w²  w³ ] = [ 1  −i  −1  i  ]
    [ 1  w²  w⁴  w⁶ ]   [ 1  −1  1   −1 ]
    [ 1  w³  w⁶  w⁹ ]   [ 1  i   −1  −i ]
Its inverse is

W^{−1} = (1/4) W̄ = (1/4) [ 1  1   1   1  ]
                         [ 1  i   −1  −i ]
                         [ 1  −1  1   −1 ]
                         [ 1  −i  −1  i  ].

Consider the signal f = (1, 1, 0, −1)^T (see Figure 2.9).

Figure 2.9: The Representation of the Signal f

One obtains

Fd[f] = fb = W f = [ 1  1   1   1  ] [ 1  ]   [ 1      ]
                   [ 1  −i  −1  i  ] [ 1  ] = [ 1 − 2i ]
                   [ 1  −1  1   −1 ] [ 0  ]   [ 1      ]
                   [ 1  i   −1  −i ] [ −1 ]   [ 1 + 2i ]

The DFT inverse of fb is

Fd^{−1}[fb] = W^{−1} fb = (1/4) [ 1  1   1   1  ] [ 1      ]   [ 1  ]
                               [ 1  i   −1  −i ] [ 1 − 2i ] = [ 1  ] = f.
                               [ 1  −1  1   −1 ] [ 1      ]   [ 0  ]
                               [ 1  −i  −1  i  ] [ 1 + 2i ]   [ −1 ]
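The computations of Example 2.5.4 can be reproduced in a few lines. The Python sketch below (an illustrative addition, not from the book) builds W from w = e^{−2πi/N} and checks both the forward formula (2.37) and the inversion (2.40):

```python
import cmath

# DFT matrix for N = 4 with w = e^{-2*pi*i/N}; W[k][n] = w^{k n}.
N = 4
w = cmath.exp(-2j * cmath.pi / N)
W = [[w ** (k * n) for n in range(N)] for k in range(N)]
f = [1, 1, 0, -1]

# forward transform fb = W f
fb = [sum(W[k][n] * f[n] for n in range(N)) for k in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(fb, [1, 1 - 2j, 1, 1 + 2j]))

# inversion f = (1/N) * conj(W) * fb
finv = [sum(W[k][n].conjugate() * fb[k] for k in range(N)) / N
        for n in range(N)]
assert all(abs(a - b) < 1e-12 for a, b in zip(finv, f))
print("W f = (1, 1-2i, 1, 1+2i), and (1/N) conj(W) fb recovers f")
```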

Proposition 2.5.5. Properties of the DFT are presented below.

1. Linearity: Fd [αf + βg] = αFd [f ] + βFd [g], for every f, g ∈ CN and
α, β ∈ C.

Proof. By (2.37),

Fd [αf + βg] = W (αf + βg) = αW f + βW g = αFd [f ] + βFd [g].

2. Periodicity: fbk+N = fbk , ∀k = 0, N − 1.

Proof. We obtain that

fb_{k+N} = Σ_{n=0}^{N−1} fn e^{−i(2π/N)(k+N)n} = Σ_{n=0}^{N−1} fn e^{−i(2π/N)kn} e^{−i2πn} = Σ_{n=0}^{N−1} fn e^{−i(2π/N)kn} · 1 = fbk.

3. Translation (shift): Fd[e^{i(2π/N)nm} fn]_k = fb_{k−m}.

Proof. One gets

Fd[e^{i(2π/N)nm} fn]_k = Σ_{n=0}^{N−1} e^{i(2π/N)nm} fn e^{−i(2π/N)kn} = Σ_{n=0}^{N−1} fn e^{−i(2π/N)(k−m)n} = fb_{k−m}.

4. Plancherel's Theorem: Σ_{n=0}^{N−1} fn \overline{gn} = (1/N) Σ_{k=0}^{N−1} fbk \overline{gbk}.

5. Parseval's Formula: Σ_{n=0}^{N−1} |fn|² = (1/N) Σ_{k=0}^{N−1} |fbk|².

2.6 Fast Fourier Transform (FFT)

The FFT represents algorithms which apply iterations of DFTs in order to significantly reduce the number of operations. For instance, the Cooley–Tukey FFT algorithm recursively breaks down a DFT of size N = N1 N2 into N1 DFTs of size N2.
We will denote the number w of a DFT of size N by wN, hence wN = e^{−i2π/N}.
Consider the case of $N$ an even number and split the DFT of a signal $f = (f_n)_{n=\overline{0,N-1}}$, given by
$$\widehat{f}_k = \sum_{n=0}^{N-1} f_n e^{-i\frac{2\pi}{N}kn},$$
into two sums of $\frac{N}{2}$ terms, one for $n$ even ($n = 2m$) and the other for $n$ odd ($n = 2m+1$):
$$\widehat{f}_k = \sum_{m=0}^{\frac{N}{2}-1} f_{2m}\,w_N^{2mk} + \sum_{m=0}^{\frac{N}{2}-1} f_{2m+1}\,w_N^{(2m+1)k}.$$
Since $w_N^{2l} = e^{-i\frac{2\pi}{N}2l} = e^{-i\frac{2\pi}{N/2}l} = w_{N/2}^{l}$, one can write
$$\widehat{f}_k = \sum_{m=0}^{\frac{N}{2}-1} f_{2m}\,w_{N/2}^{mk} + w_N^k\sum_{m=0}^{\frac{N}{2}-1} f_{2m+1}\,w_{N/2}^{mk}. \qquad (2.42)$$

The two sums represent DFTs of the even and odd entries of the signal $f$, respectively. Let us denote them by $(\widehat{e}_k)_{k=\overline{0,N-1}}$ and $(\widehat{o}_k)_{k=\overline{0,N-1}}$, respectively. Then (2.42) can be written as
$$\widehat{f}_k = \widehat{e}_k + w_N^k\,\widehat{o}_k, \qquad k = \overline{0, N-1}. \qquad (2.43)$$
Since $\widehat{e}_k$ and $\widehat{o}_k$ are periodic with period $\frac{N}{2}$, notice that only the values $\widehat{e}_k$ and $\widehat{o}_k$ for $k = \overline{0, \frac{N}{2}-1}$ are necessary.
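Relation (2.43) is precisely the recursion behind the radix-2 FFT. A minimal recursive sketch (plain Python, assuming the input length is a power of 2; the function name fft is ours):

```python
import cmath

def fft(f):
    """Radix-2 decimation-in-time FFT via relation (2.43); len(f) must be a power of 2."""
    N = len(f)
    if N == 1:
        return list(f)
    e = fft(f[0::2])                 # DFT of the even-indexed entries
    o = fft(f[1::2])                 # DFT of the odd-indexed entries
    w = cmath.exp(-2j * cmath.pi / N)
    out = [0] * N
    for k in range(N // 2):
        t = (w ** k) * o[k]          # twiddle factor w_N^k * o_hat_k
        out[k] = e[k] + t            # f_hat_k       = e_hat_k + w_N^k o_hat_k
        out[k + N // 2] = e[k] - t   # f_hat_{k+N/2} = e_hat_k - w_N^k o_hat_k
    return out
```

For $f = (1, 1, 0, -1)$ this reproduces the spectrum $(1, 1-2i, 1, 1+2i)$ of Example 2.5.4; the line for out[k + N//2] uses the fact that $w_N^{k+N/2} = -w_N^k$ together with the $\frac{N}{2}$-periodicity of $\widehat{e}$ and $\widehat{o}$.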

Example 2.6.1. Consider $N = 2$. Then $w_2 = e^{-i\frac{2\pi}{2}} = e^{-i\pi} = -1$. The equalities (2.43) are given by
$$\begin{cases}\widehat{f}_0 = \widehat{e}_0 + w_2^0\,\widehat{o}_0\\ \widehat{f}_1 = \widehat{e}_1 + w_2^1\,\widehat{o}_1\end{cases};$$
hence,
$$\begin{cases}\widehat{f}_0 = \widehat{e}_0 + \widehat{o}_0\\ \widehat{f}_1 = \widehat{e}_1 - \widehat{o}_1\end{cases} \qquad (2.44)$$
Relations (2.44) are represented by the following butterfly diagram (see Fig-
ure 2.10):

Figure 2.10: The butterfly diagram

Example 2.6.2. Consider $N = 4$. Then $w_4 = e^{-i\frac{2\pi}{4}} = e^{-i\frac{\pi}{2}} = -i$. One obtains that $\widehat{e}_k$ and $\widehat{o}_k$ are periodic with period $\frac{N}{2} = 2$. Hence, $\widehat{e}_2 = \widehat{e}_0$, $\widehat{e}_3 = \widehat{e}_1$, $\widehat{o}_2 = \widehat{o}_0$ and $\widehat{o}_3 = \widehat{o}_1$. The equalities (2.43) are given by
$$\begin{cases}\widehat{f}_0 = \widehat{e}_0 + w_4^0\,\widehat{o}_0\\ \widehat{f}_1 = \widehat{e}_1 + w_4^1\,\widehat{o}_1\\ \widehat{f}_2 = \widehat{e}_2 + w_4^2\,\widehat{o}_2\\ \widehat{f}_3 = \widehat{e}_3 + w_4^3\,\widehat{o}_3\end{cases};$$
hence,
$$\begin{cases}\widehat{f}_0 = \widehat{e}_0 + \widehat{o}_0\\ \widehat{f}_1 = \widehat{e}_1 - i\,\widehat{o}_1\\ \widehat{f}_2 = \widehat{e}_0 - \widehat{o}_0\\ \widehat{f}_3 = \widehat{e}_1 + i\,\widehat{o}_1\end{cases} \qquad (2.45)$$

Relations (2.45) are represented by the following diagram (see Figure 2.11):

Figure 2.11: The Combination of 2 Butterfly Operations

Consider the case $N = 2^p$, for some $p \in \mathbb{N}^*$. Then this process can be repeated. After the first iteration, which involves two $\frac{N}{2}$-point transforms (DFTs), we continue by breaking them down into $\frac{N}{4}$-point transforms etc., until we get to 2-point transforms (as in (2.44)). These $p$ iterations form the so-called decimation-in-time (DIT) algorithm. An equivalent decimation-in-frequency algorithm also exists.
The advantages of the FFT with respect to the simple DFT result from the following evaluation. The DFT requires $N^2$ multiplications by complex numbers. At each iteration of the FFT, the number of complex multiplications is $\frac{N}{2}$ (multiplications by $w_N^0 = 1$ and $w_N^{N/2} = e^{-i\pi} = -1$ being simple additions or subtractions). The number of iterations is $p = \log_2 N$. Therefore, the FFT requires $\frac{N}{2}\log_2 N$ multiplications. To compare, we consider the following cases:

    N               DFT: N^2        FFT: (N/2) log_2 N
    2^5  = 32       1024            16 · 5 = 80
    2^10 = 1024     1.048.576       512 · 10 = 5120
    2^15 = 32768    2^30            16384 · 15 = 245.760
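The operation counts in the table can be recomputed directly; a small sketch (plain Python, function names are ours):

```python
import math

def dft_mults(N):
    """Complex multiplications used by the direct DFT."""
    return N * N

def fft_mults(N):
    """Complex multiplications used by the radix-2 FFT: N/2 per iteration, log2(N) iterations."""
    return (N // 2) * int(math.log2(N))

for N in (2 ** 5, 2 ** 10, 2 ** 15):
    print(N, dft_mults(N), fft_mults(N))
```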
Example 2.6.3. If $N = 2^3 = 8$, then the 3 iterations are indicated in Figure 2.12.

Example 2.6.4. Consider the Example 2.5.4. We have $N = 4$, $w = e^{-i\frac{2\pi}{4}} = -i$ and the signal $f = \begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix}$.

Figure 2.12: The Combination of 4 Butterfly Operations

Iteration 1 (see Figure 2.11): $\frac{N}{2} = 2$-point DFTs of the inputs $f_0 = 1$, $f_2 = 0$ and $f_1 = 1$, $f_3 = -1$. One obtains (see Figure 2.10, i.e. (2.44))
$$\widehat{e}_0 = f_0 + f_2 = 1, \qquad \widehat{e}_1 = f_0 - f_2 = 1,$$
$$\widehat{o}_0 = f_1 + f_3 = 0, \qquad \widehat{o}_1 = f_1 - f_3 = 2.$$
Iteration 2 (see (2.45)):
$$\widehat{f}_0 = \widehat{e}_0 + \widehat{o}_0 = 1, \qquad \widehat{f}_1 = \widehat{e}_1 - i\widehat{o}_1 = 1 - 2i,$$
$$\widehat{f}_2 = \widehat{e}_0 - \widehat{o}_0 = 1, \qquad \widehat{f}_3 = \widehat{e}_1 + i\widehat{o}_1 = 1 + 2i.$$

These are the same results as in Example 2.5.4.

2.7 Exercises
E 9. Determine the Fourier transform $\widehat{f}(\omega)$ (Definition 2.1.2) of the following functions:
a) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases} 1, & x\in(a,b)\\ 0, & x\in\mathbb{R}\setminus(a,b)\end{cases}$, $a, b\in\mathbb{R}$, $a < b$;
b) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = e^{-4|x+2|}$, where $|\cdot|$ is the absolute value;
c) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{x^2-a^2}{(x^2+a^2)^2}$, $a > 0$.
Solution. a) We will use formula (2.3). One gets
$$\widehat{f}(\omega) = \int_{-\infty}^{a} 0\cdot e^{i\omega x}\,dx + \int_a^b 1\cdot e^{i\omega x}\,dx + \int_b^{\infty} 0\cdot e^{i\omega x}\,dx = \int_a^b e^{i\omega x}\,dx = \left.\frac{e^{i\omega x}}{i\omega}\right|_a^b = \frac{e^{i\omega b}-e^{i\omega a}}{i\omega}.$$
Since $\omega$ appears above in a denominator, we need to impose the condition $\omega\neq 0$. But $\widehat{f}(0) = \int_a^b e^{i\cdot 0\cdot x}\,dx = \int_a^b dx = b-a$. It follows that
$$\widehat{f}(\omega) = \begin{cases}\dfrac{e^{i\omega b}-e^{i\omega a}}{i\omega}, & \omega\in\mathbb{R}^*\\[2mm] b-a, & \omega = 0\end{cases};$$

x + 2, x ≥ −2
b) It is obvious that |x + 2| = . We will use again
−x − 2, x < −2
formula (2.3). One obtains
Z ∞ Z −2 Z ∞
f (ω) =
b iωx
f (x)e dx = e4(x+2) iωx
e dx + e−4(x+2) eiωx dx
−∞ −∞ −2
Z −2 Z ∞
= e8+4x+iωx dx + e−8−4x+iωx dx
−∞ −2
e8+4x+iωx −2 e−8−4x+iωx ∞ e−2iω
= + = −
4 + iω −∞ −4 + iω −2 4 + iω
1 1 e−2iω
− lim e8+4x+iωx + lim e−8−4x+iωx − .
4 + iω x→−∞ −4 + iω x→∞ −4 + iω

98
The first limit is equal to lim e8+4x (cos ωx+i sin ωx) = lim e8+4x cos ωx+
x→−∞ x→−∞
i lim e8+4x sin ωx. As e−∞ = 0 and cos ωx, sin ωx ∈ [−1, 1], it follows that
x→−∞
lim e8+4x+iωx = 0. Similarly, lim e−8−4x+iωx = 0. Hence,
x→−∞ x→+∞

e−2iω e−2iω
 
1 1 8
fb(ω) = − = e−2iω − = e−2iω ;
4 + iω −4 + iω 4 + iω −4 + iω 16 + ω 2

c) Since $f(x) = \dfrac{P}{Q}$, where $P, Q\in\mathbb{R}[X]$, $Q(x)\neq 0$ for every $x\in\mathbb{R}$ and $\deg P\leq\deg Q - 2$, we use formula (2.4). Let us denote $\dfrac{(z^2-a^2)e^{i\omega z}}{(z^2+a^2)^2}$ by $g(z)$, $z\in\mathbb{C}$.
Case 1: $\omega\geq 0$. Then $z = ai$ is the only complex root of the polynomial $Q(x) = (x^2+a^2)^2$ located in the upper half plane (since $\mathrm{Im}(ai) = a > 0$) and it is a pole of order 2 of $g$. Hence,
$$\mathrm{res}\,(g, ai) = \lim_{z\to ai}\left(\frac{(z^2-a^2)e^{i\omega z}}{(z+ai)^2}\right)'$$
$$= \lim_{z\to ai}\frac{[2ze^{i\omega z} + (z^2-a^2)i\omega e^{i\omega z}](z+ai)^2 - (z^2-a^2)e^{i\omega z}\,2(z+ai)}{(z+ai)^4}$$
$$= \lim_{z\to ai}\frac{[2ze^{i\omega z} + (z^2-a^2)i\omega e^{i\omega z}](z+ai) - 2(z^2-a^2)e^{i\omega z}}{(z+ai)^3}$$
$$= \frac{e^{-a\omega}(-4a^2 + 4a^3\omega + 4a^2)}{-8a^3 i} = \frac{\omega}{-2i}\,e^{-a\omega}.$$
In conclusion, we get that $\widehat{f}(\omega) = 2\pi i\cdot\mathrm{res}\,(g, ai) = -\pi\omega e^{-a\omega}$.
Case 2: $\omega < 0$. Then $\overline{ai} = -ai$ is a pole of order 2 of $g$. Using the same type of computations we obtain that $\mathrm{res}\,(g, -ai) = \dfrac{-\omega}{2i}\,e^{a\omega}$ and $\widehat{f}(\omega) = -2\pi i\cdot\mathrm{res}\,(g, -ai) = \pi\omega e^{a\omega}$.
Therefore,
$$\widehat{f}(\omega) = \begin{cases} -\pi\omega e^{-a\omega}, & \omega\geq 0\\ \pi\omega e^{a\omega}, & \omega < 0\end{cases}.$$
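The closed form obtained in a) can be cross-checked by numerical quadrature; the sketch below (plain Python, composite trapezoidal rule; the parameter choices a = 0, b = 2, ω = 1.5 are ours) compares the two.

```python
import cmath

def fhat_rect(omega, a, b):
    """Closed form from a): (e^{i w b} - e^{i w a}) / (i w), with fhat(0) = b - a."""
    if omega == 0:
        return b - a
    return (cmath.exp(1j * omega * b) - cmath.exp(1j * omega * a)) / (1j * omega)

def fhat_numeric(omega, a, b, n=20000):
    """Trapezoidal approximation of the integral of e^{i w x} over (a, b)."""
    h = (b - a) / n
    xs = [a + k * h for k in range(n + 1)]
    ys = [cmath.exp(1j * omega * x) for x in xs]
    return h * (sum(ys) - (ys[0] + ys[-1]) / 2)

omega, a, b = 1.5, 0.0, 2.0
# The two values agree to high accuracy.
```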

W 9. Determine the Fourier transform $\widehat{f}(\omega)$ (Definition 2.1.2) of the following functions:
a) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases} e^{-x}, & x\in(0,1)\\ 0, & x\in\mathbb{R}\setminus(0,1)\end{cases}$;
b) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases}\sin x, & x\in(0,\pi)\\ 0, & x\in\mathbb{R}\setminus(0,\pi)\end{cases}$;
c) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = e^{-|3x+2|}$, where $|\cdot|$ is the absolute value;
d) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = e^{-16x^2-5}$;
e) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{x+3}{x^4+5x^2+4}$;
f) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{1}{(x^2+1)^3}$;
g) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{1}{x^4+1}$.
Answer. a) One obtains
$$\widehat{f}(\omega) = \int_0^1 e^{-x}e^{i\omega x}\,dx = \int_0^1 e^{x(-1+i\omega)}\,dx = \left.\frac{e^{x(-1+i\omega)}}{-1+i\omega}\right|_0^1 = \frac{(1+i\omega)\left(1-e^{-1+i\omega}\right)}{1+\omega^2};$$
b) One gets
$$\widehat{f}(\omega) = \int_0^{\pi}\sin x\,e^{i\omega x}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{i\omega x}\,dx.$$
Thus,
$$\widehat{f}(\omega) = \frac{1}{2i}\left(\int_0^{\pi}e^{ix+i\omega x}\,dx - \int_0^{\pi}e^{-ix+i\omega x}\,dx\right) = \frac{1}{2i}\left(\left.\frac{e^{ix(1+\omega)}}{i(1+\omega)}\right|_0^{\pi} - \left.\frac{e^{ix(-1+\omega)}}{i(-1+\omega)}\right|_0^{\pi}\right)$$
$$= -\frac{1}{2}\left(\frac{e^{i\pi(1+\omega)}-1}{1+\omega} - \frac{e^{i\pi(-1+\omega)}-1}{-1+\omega}\right).$$
Since $e^{i\pi} = \cos\pi + i\sin\pi = -1$, it follows that
$$\widehat{f}(\omega) = \frac{1}{2}\left(\frac{e^{i\pi\omega}+1}{1+\omega} + \frac{e^{i\pi\omega}+1}{1-\omega}\right) = \frac{e^{i\pi\omega}+1}{1-\omega^2},$$
for $\omega\in\mathbb{R}\setminus\{\pm1\}$.
But
$$\widehat{f}(1) = \int_0^{\pi}\sin x\,e^{ix}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{ix}\,dx = \frac{1}{2i}\left(\int_0^{\pi}e^{2ix}\,dx - \int_0^{\pi}dx\right) = -\frac{\pi}{2i}$$
and
$$\widehat{f}(-1) = \int_0^{\pi}\sin x\,e^{-ix}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{-ix}\,dx = \frac{1}{2i}\left(\int_0^{\pi}dx - \int_0^{\pi}e^{-2ix}\,dx\right) = \frac{\pi}{2i}.$$
Hence,
$$\widehat{f}(\omega) = \begin{cases}\dfrac{e^{i\pi\omega}+1}{1-\omega^2}, & \omega\in\mathbb{R}\setminus\{\pm1\}\\[2mm] -\dfrac{\pi}{2i}, & \omega = 1\\[2mm] \dfrac{\pi}{2i}, & \omega = -1\end{cases};$$
c) As in Exercise 9 b),
$$\widehat{f}(\omega) = \int_{-\infty}^{-\frac{2}{3}} e^{3x+2}e^{i\omega x}\,dx + \int_{-\frac{2}{3}}^{\infty} e^{-3x-2}e^{i\omega x}\,dx = \frac{6}{9+\omega^2}\,e^{-\frac{2i\omega}{3}};$$
d) One obtains $\widehat{f}(\omega) = \dfrac{\sqrt\pi}{4}\,e^{-5-\frac{\omega^2}{64}}$ (see Example 2.1.5);

e) Denote $\dfrac{(z+3)e^{i\omega z}}{(z^2+1)(z^2+4)}$ by $g(z)$. Hence (see Exercise 9 c)),
$$\widehat{f}(\omega) = \begin{cases} 2\pi i\,(\mathrm{res}\,(g,i) + \mathrm{res}\,(g,2i)), & \omega\geq 0\\ -2\pi i\,(\mathrm{res}\,(g,-i) + \mathrm{res}\,(g,-2i)), & \omega < 0\end{cases}$$
$$= \begin{cases} 2\pi i\left(\lim\limits_{z\to i}\dfrac{(z+3)e^{i\omega z}}{(z+i)(z^2+4)} + \lim\limits_{z\to 2i}\dfrac{(z+3)e^{i\omega z}}{(z^2+1)(z+2i)}\right), & \omega\geq 0\\[3mm] -2\pi i\left(\lim\limits_{z\to -i}\dfrac{(z+3)e^{i\omega z}}{(z-i)(z^2+4)} + \lim\limits_{z\to -2i}\dfrac{(z+3)e^{i\omega z}}{(z^2+1)(z-2i)}\right), & \omega < 0\end{cases}$$
$$= \begin{cases}\dfrac{\pi}{3}\left((i+3)e^{-\omega} - \dfrac{(2i+3)e^{-2\omega}}{2}\right), & \omega\geq 0\\[3mm] \dfrac{\pi}{3}\left((-i+3)e^{\omega} - \dfrac{(-2i+3)e^{2\omega}}{2}\right), & \omega < 0\end{cases};$$

f) Denote $\dfrac{e^{i\omega z}}{(z^2+1)^3}$ by $g(z)$. Hence (see Exercise 9 c)),
$$\widehat{f}(\omega) = \begin{cases} 2\pi i\cdot\mathrm{res}\,(g,i), & \omega\geq 0\\ -2\pi i\cdot\mathrm{res}\,(g,-i), & \omega < 0\end{cases}.$$
It follows that $\pm i$ are poles of order 3 of $g$. Thus,
$$\widehat{f}(\omega) = \begin{cases} 2\pi i\cdot\dfrac{1}{2!}\lim\limits_{z\to i}\left(\dfrac{e^{i\omega z}}{(z+i)^3}\right)'', & \omega\geq 0\\[3mm] -2\pi i\cdot\dfrac{1}{2!}\lim\limits_{z\to -i}\left(\dfrac{e^{i\omega z}}{(z-i)^3}\right)'', & \omega < 0\end{cases} = \begin{cases}\dfrac{\pi}{8}\,e^{-\omega}(\omega^2+3\omega+3), & \omega\geq 0\\[3mm] \dfrac{\pi}{8}\,e^{\omega}(\omega^2-3\omega+3), & \omega < 0\end{cases};$$
g) We solve the equation $x^4+1 = 0$. So $x^4 = -1 = \cos\pi + i\sin\pi$; thus,
$$x = \sqrt[4]{-1} = \sqrt[4]{1}\left(\cos\frac{\pi+2k\pi}{4} + i\sin\frac{\pi+2k\pi}{4}\right), \qquad k\in\{0,1,2,3\}.$$
In conclusion, the complex roots are $\frac{1}{\sqrt2}(1+i)$, $\frac{1}{\sqrt2}(1-i)$, $\frac{1}{\sqrt2}(-1+i)$ and $\frac{1}{\sqrt2}(-1-i)$.
Let us denote $\dfrac{e^{i\omega z}}{z^4+1}$ by $g(z)$. Hence,
$$\widehat{f}(\omega) = \begin{cases} 2\pi i\left(\mathrm{res}\left(g,\tfrac{1}{\sqrt2}(1+i)\right) + \mathrm{res}\left(g,\tfrac{1}{\sqrt2}(-1+i)\right)\right), & \omega\geq 0\\[2mm] -2\pi i\left(\mathrm{res}\left(g,\tfrac{1}{\sqrt2}(1-i)\right) + \mathrm{res}\left(g,\tfrac{1}{\sqrt2}(-1-i)\right)\right), & \omega < 0\end{cases}$$
$$= \begin{cases}\dfrac{\pi}{\sqrt2}\left(\dfrac{e^{\frac{\omega}{\sqrt2}(i-1)}}{1+i} + \dfrac{e^{\frac{\omega}{\sqrt2}(-i-1)}}{1-i}\right), & \omega\geq 0\\[3mm] \dfrac{\pi}{\sqrt2}\left(\dfrac{e^{\frac{\omega}{\sqrt2}(1+i)}}{1-i} + \dfrac{e^{\frac{\omega}{\sqrt2}(1-i)}}{1+i}\right), & \omega < 0\end{cases}.$$

E 10. Determine the cosine Fourier transform $\widehat{f}_c(\omega)$ (formula (2.26)) of the following functions:
a) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} e^{ax}, & x\in(0,a)\\ 0, & x\geq a\end{cases}$, $a\in\mathbb{R}$;
b) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases}\cos x, & x\in(0,\pi)\\ x, & x\in[\pi,2\pi]\\ 0, & x > 2\pi\end{cases}$;
c) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{1}{(x^2+1)(x^4+5x^2+4)}$.
Solution. We use formula (2.26) throughout the whole exercise.
a) One gets
$$\widehat{f}_c(\omega) = \int_0^a e^{ax}\cos(\omega x)\,dx + \int_a^{\infty} 0\cdot\cos(\omega x)\,dx = \int_0^a e^{ax}\cos(\omega x)\,dx.$$
One way to solve this integral is to integrate by parts two times until the initial integral is reached. A better method is to replace into the integral Euler's formula for cosine, $\cos x = \dfrac{e^{ix}+e^{-ix}}{2}$, $x\in\mathbb{R}$. It follows that
$$\widehat{f}_c(\omega) = \int_0^a e^{ax}\,\frac{e^{i\omega x}+e^{-i\omega x}}{2}\,dx = \frac{1}{2}\left(\int_0^a e^{ax+i\omega x}\,dx + \int_0^a e^{ax-i\omega x}\,dx\right)$$
$$= \frac{1}{2}\left(\left.\frac{e^{ax+i\omega x}}{a+i\omega}\right|_0^a + \left.\frac{e^{ax-i\omega x}}{a-i\omega}\right|_0^a\right) = \frac{1}{2}\left(\frac{e^{a^2+i\omega a}-1}{a+i\omega} + \frac{e^{a^2-i\omega a}-1}{a-i\omega}\right)$$
$$= \frac{1}{2}\cdot\frac{(a-i\omega)(e^{a^2+i\omega a}-1) + (a+i\omega)(e^{a^2-i\omega a}-1)}{a^2+\omega^2}.$$
Since $e^{a^2+i\omega a} = e^{a^2}(\cos\omega a + i\sin\omega a)$ and $e^{a^2-i\omega a} = e^{a^2}(\cos\omega a - i\sin\omega a)$, it follows that
$$\widehat{f}_c(\omega) = \frac{1}{2}\cdot\frac{2ae^{a^2}\cos\omega a + 2\omega e^{a^2}\sin\omega a - 2a}{a^2+\omega^2} = \frac{e^{a^2}(a\cos\omega a + \omega\sin\omega a) - a}{a^2+\omega^2};$$

b) One obtains
$$\widehat{f}_c(\omega) = \int_0^{\pi}\cos x\cos(\omega x)\,dx + \int_{\pi}^{2\pi} x\cos(\omega x)\,dx + 0.$$
Let us denote $\displaystyle\int_0^{\pi}\cos x\cos(\omega x)\,dx$ by $I_1(\omega)$ and $\displaystyle\int_{\pi}^{2\pi} x\cos(\omega x)\,dx$ by $I_2(\omega)$, $\omega > 0$.
For the computation of $I_1(\omega)$ one can integrate by parts two times until the initial integral is reached. An easier way to solve it is to use the trigonometric formula $\cos\alpha\cos\beta = \dfrac{\cos(\alpha+\beta)+\cos(\alpha-\beta)}{2}$. Following this method, one obtains
$$I_1(\omega) = \int_0^{\pi}\frac{\cos(x+\omega x)+\cos(x-\omega x)}{2}\,dx = \frac{1}{2}\left(\int_0^{\pi}\cos(x+\omega x)\,dx + \int_0^{\pi}\cos(x-\omega x)\,dx\right)$$
$$= \frac{1}{2}\left(\left.\frac{\sin(x+\omega x)}{1+\omega}\right|_0^{\pi} + \left.\frac{\sin(x-\omega x)}{1-\omega}\right|_0^{\pi}\right) = \frac{1}{2}\left(\frac{\sin(\pi+\omega\pi)}{1+\omega} + \frac{\sin(\pi-\omega\pi)}{1-\omega}\right).$$
Since $\sin(\pi+\alpha) = -\sin\alpha$ and $\sin(\pi-\alpha) = \sin\alpha$, it follows that for $\omega\neq\pm1$ we have
$$I_1(\omega) = \frac{1}{2}\left(-\frac{\sin\omega\pi}{1+\omega} + \frac{\sin\omega\pi}{1-\omega}\right) = \frac{\sin\omega\pi}{2}\left(-\frac{1}{1+\omega} + \frac{1}{1-\omega}\right) = \frac{\omega\sin\omega\pi}{1-\omega^2}.$$
But $\omega > 0$, so the previous expression makes sense for $\omega\neq 1$. On the other hand,
$$I_1(1) = \int_0^{\pi}\cos x\cos(1\cdot x)\,dx = \int_0^{\pi}\cos^2 x\,dx = \int_0^{\pi}\frac{1+\cos 2x}{2}\,dx = \frac{1}{2}\left(x\Big|_0^{\pi} + \left.\frac{\sin 2x}{2}\right|_0^{\pi}\right) = \frac{\pi}{2}.$$
For the computation of $I_2(\omega)$ we proceed by integrating by parts. Therefore,
$$I_2(\omega) = x\,\frac{\sin\omega x}{\omega}\Big|_{\pi}^{2\pi} - \int_{\pi}^{2\pi}\frac{\sin\omega x}{\omega}\,dx = \frac{2\pi\sin 2\omega\pi - \pi\sin\pi\omega}{\omega} + \left.\frac{\cos\omega x}{\omega^2}\right|_{\pi}^{2\pi}$$
$$= \frac{2\pi\sin 2\omega\pi - \pi\sin\pi\omega}{\omega} + \frac{\cos 2\pi\omega - \cos\pi\omega}{\omega^2}.$$
We obtained
$$\widehat{f}_c(\omega) = I_1(\omega)+I_2(\omega) = \frac{\omega\sin\omega\pi}{1-\omega^2} + \frac{2\pi\sin 2\omega\pi - \pi\sin\pi\omega}{\omega} + \frac{\cos 2\pi\omega - \cos\pi\omega}{\omega^2},$$
for $\omega\in(0,\infty)\setminus\{1\}$, and $\widehat{f}_c(1) = I_1(1)+I_2(1) = \dfrac{\pi}{2} + \dfrac{0-0+1-(-1)}{1} = \dfrac{\pi}{2}+2$;
c) One gets
$$\widehat{f}_c(\omega) = \int_0^{\infty}\frac{\cos\omega x}{(x^2+1)(x^4+5x^2+4)}\,dx = \int_0^{\infty}\frac{\cos\omega x}{(x^2+1)^2(x^2+4)}\,dx.$$
Since we are dealing with the integral of an even function, one can write
$$\widehat{f}_c(\omega) = \frac{1}{2}\int_{-\infty}^{\infty}\frac{\cos\omega x}{(x^2+1)^2(x^2+4)}\,dx.$$
The integral is now the real part of the integral $I = \displaystyle\int_{-\infty}^{\infty}\frac{e^{i\omega x}}{(x^2+1)^2(x^2+4)}\,dx$. In order to compute $I$ we use residues. Consider the complex function $g(z) = \dfrac{e^{i\omega z}}{(z^2+1)^2(z^2+4)}$, $z\in\mathbb{C}$. So $I = 2\pi i\,(\mathrm{res}\,(g,i)+\mathrm{res}\,(g,2i))$ (the method used here is included in [26, p. 150]). Since $i$ is a pole of order 2 and $2i$ is a pole of order 1 of $g$, it follows that
$$\mathrm{res}\,(g,i) = \lim_{z\to i}\left(\frac{e^{i\omega z}}{(z+i)^2(z^2+4)}\right)' = \frac{e^{-\omega}(3\omega+1)}{i\cdot 36}$$
and
$$\mathrm{res}\,(g,2i) = \lim_{z\to 2i}\frac{e^{i\omega z}}{(z^2+1)^2(z+2i)} = \frac{e^{-2\omega}}{36i}.$$
Hence,
$$\widehat{f}_c(\omega) = \frac{\pi}{36}\left(e^{-2\omega} + e^{-\omega}(3\omega+1)\right).$$
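The closed form for $\widehat{f}_c$ just obtained can be cross-checked numerically; a sketch (plain Python, trapezoidal rule on a truncated interval; the truncation point X = 100 is our choice, and since the integrand decays like $1/x^6$ the tail is negligible):

```python
import math

def fc_closed(w):
    """Closed form from c): (pi/36) * (e^{-2w} + e^{-w} (3w + 1))."""
    return math.pi / 36 * (math.exp(-2 * w) + math.exp(-w) * (3 * w + 1))

def fc_numeric(w, X=100.0, n=100_000):
    """Trapezoidal approximation of the cosine transform of 1/((x^2+1)^2 (x^2+4))."""
    h = X / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        y = math.cos(w * x) / ((x * x + 1) ** 2 * (x * x + 4))
        total += y if 0 < k < n else y / 2
    return h * total
```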

W 10. Determine the cosine Fourier transform $\widehat{f}_c(\omega)$ (formula (2.26)) of the following functions:
a) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} 1, & x\in(0,a)\\ 0, & x\geq a\end{cases}$, $a\in\mathbb{R}$;
b) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases}\sin 2x, & x\in(0,\pi)\\ 0, & x\geq\pi\end{cases}$;
c) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} 1-x^2, & x\in(0,1)\\ 0, & x\geq 1\end{cases}$;
d) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{x^2-1}{(x^2+4)^2}$;
e) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{1}{x^4+7x^2+10}$.

Answer. a) One gets $\widehat{f}_c(\omega) = \displaystyle\int_0^a\cos\omega x\,dx = \frac{\sin\omega a}{\omega}$;
b) If $\omega\neq 2$, then
$$\widehat{f}_c(\omega) = \int_0^{\pi}\sin 2x\cos\omega x\,dx = \int_0^{\pi}\frac{\sin(2x+\omega x)+\sin(2x-\omega x)}{2}\,dx = \frac{2(1-\cos\omega\pi)}{4-\omega^2},$$
and $\widehat{f}_c(2) = \displaystyle\int_0^{\pi}\sin 2x\cos 2x\,dx = \frac{1}{2}\int_0^{\pi}\sin 4x\,dx = 0$;
c) One obtains $\widehat{f}_c(\omega) = \displaystyle\int_0^1 (1-x^2)\cos\omega x\,dx$. Performing two times integration by parts, one finds
$$\widehat{f}_c(\omega) = \frac{2}{\omega^2}\left(-\cos\omega + \frac{\sin\omega}{\omega}\right);$$
d) One gets $\widehat{f}_c(\omega) = \dfrac{\pi}{32}\,e^{-2\omega}(3-10\omega)$ (see Exercise 10 c));
e) One obtains $\widehat{f}_c(\omega) = \dfrac{\pi}{6}\left(\dfrac{e^{-\sqrt2\,\omega}}{\sqrt2} - \dfrac{e^{-\sqrt5\,\omega}}{\sqrt5}\right)$ (see Exercise 10 c)).

E 11. Determine the sine Fourier transform $\widehat{f}_s(\omega)$ (formula (2.27)) of the following functions:
a) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} 2\cos^2 x, & x\in(0,\pi)\\ 0, & x\geq\pi\end{cases}$;
b) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{2x}{x^2+a^2}$, $a\in\mathbb{R}^*$;
c) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} [x], & x\in(0,n)\\ 0, & x\geq n\end{cases}$, where $n\in\mathbb{N}^*$ and $[\cdot]$ denotes the greatest integer function.
Solution. We use formula (2.27) throughout the whole exercise.
a) One obtains $\widehat{f}_s(\omega) = \displaystyle\int_0^{\infty} f(x)\sin\omega x\,dx = \int_0^{\pi} 2\cos^2 x\sin\omega x\,dx$. Since $2\cos^2 x = 1+\cos(2x)$, one has
$$\widehat{f}_s(\omega) = \int_0^{\pi}\sin\omega x\,dx + \int_0^{\pi}\cos(2x)\sin\omega x\,dx.$$
Using the trigonometric formula $\sin a\cos b = \dfrac{\sin(a+b)+\sin(a-b)}{2}$, one gets
$$\widehat{f}_s(\omega) = \left.\frac{-\cos\omega x}{\omega}\right|_0^{\pi} + \frac{1}{2}\int_0^{\pi}\left(\sin(2x+\omega x) + \sin(\omega x-2x)\right)dx$$
$$= \frac{1-\cos\omega\pi}{\omega} + \frac{1}{2}\left(\left.\frac{-\cos(\omega x+2x)}{\omega+2}\right|_0^{\pi} + \left.\frac{-\cos(\omega x-2x)}{\omega-2}\right|_0^{\pi}\right).$$
Thus,
$$\widehat{f}_s(\omega) = \frac{1-\cos\omega\pi}{\omega} + \frac{1}{2}\left(\frac{1-\cos(\omega\pi+2\pi)}{\omega+2} + \frac{1-\cos(\omega\pi-2\pi)}{\omega-2}\right).$$
As $2\pi$ is one of the periods of the cosine function (in fact, it is its fundamental period), one has
$$\widehat{f}_s(\omega) = \frac{1-\cos\omega\pi}{\omega} + \frac{1-\cos\omega\pi}{2}\left(\frac{1}{\omega+2} + \frac{1}{\omega-2}\right) = \frac{1-\cos\omega\pi}{\omega} + \frac{\omega(1-\cos\omega\pi)}{\omega^2-4} = (1-\cos\omega\pi)\cdot\frac{2\omega^2-4}{\omega(\omega^2-4)}.$$
But the denominator has to be different from 0. Since $\omega > 0$, it follows that the previous expression is valid for $\omega\neq 2$. So we need to calculate separately $\widehat{f}_s(2)$. We obtain
$$\widehat{f}_s(2) = \int_0^{\pi} 2\cos^2 x\sin 2x\,dx = \int_0^{\pi} 2\cos^2 x\cdot 2\sin x\cos x\,dx = 4\int_0^{\pi}\cos^3 x\sin x\,dx.$$
Using the substitution $\cos x = t$, one has
$$\widehat{f}_s(2) = 4\int_{-1}^{1} t^3\,dt = \left. t^4\right|_{-1}^{1} = 0.$$
Hence,
$$\widehat{f}_s(\omega) = \begin{cases}(1-\cos\omega\pi)\cdot\dfrac{2\omega^2-4}{\omega(\omega^2-4)}, & \omega\in(0,\infty)\setminus\{2\}\\[2mm] 0, & \omega = 2\end{cases};$$

b) One gets $\widehat{f}_s(\omega) = \displaystyle\int_0^{\infty}\frac{2x}{x^2+a^2}\sin\omega x\,dx$. But $\dfrac{2x}{x^2+a^2}\sin\omega x$ is an even function, so
$$\widehat{f}_s(\omega) = \frac{1}{2}\int_{-\infty}^{\infty}\frac{2x}{x^2+a^2}\sin\omega x\,dx = \mathrm{Im}\left(\int_{-\infty}^{\infty}\frac{xe^{i\omega x}}{x^2+a^2}\,dx\right).$$
In order to compute $\displaystyle\int_{-\infty}^{\infty}\frac{xe^{i\omega x}}{x^2+a^2}\,dx$ we need residues. Consider the function $g(z) = \dfrac{ze^{i\omega z}}{z^2+a^2}$, $z\in\mathbb{C}$. The singular points of $g$ are $\pm ai$. Since we need only the points in the upper half plane, we need to distinguish between the following two cases depending on $a$:
• if $a > 0$, then $ai$ is a pole of order 1 of $g$ and
$$\mathrm{res}\,(g,ai) = \left.\frac{ze^{i\omega z}}{2z}\right|_{z=ai} = \frac{e^{-a\omega}}{2}.$$
Therefore,
$$\int_{-\infty}^{\infty}\frac{xe^{i\omega x}}{x^2+a^2}\,dx = 2\pi i\cdot\mathrm{res}\,(g,ai) = \pi ie^{-a\omega}$$
and $\widehat{f}_s(\omega) = \pi e^{-a\omega}$;
• if $a < 0$, then $-ai$ is a pole of order 1 of $g$ and
$$\mathrm{res}\,(g,-ai) = \left.\frac{ze^{i\omega z}}{2z}\right|_{z=-ai} = \frac{e^{a\omega}}{2}.$$
Therefore,
$$\int_{-\infty}^{\infty}\frac{xe^{i\omega x}}{x^2+a^2}\,dx = 2\pi i\cdot\mathrm{res}\,(g,-ai) = \pi ie^{a\omega}$$
and $\widehat{f}_s(\omega) = \pi e^{a\omega}$.
It follows that $\widehat{f}_s(\omega) = \pi e^{-|a|\omega}$, for every $a\in\mathbb{R}^*$.

c) Since
$$f(x) = \begin{cases} 0, & x\in(0,1)\\ 1, & x\in[1,2)\\ \cdots & \cdots\\ k-1, & x\in[k-1,k)\\ \cdots & \cdots\\ n-1, & x\in[n-1,n)\\ 0, & x\geq n\end{cases},$$
it follows that
$$\widehat{f}_s(\omega) = \int_0^{\infty} f(x)\sin\omega x\,dx = \sum_{k=0}^{n-1}\int_k^{k+1} k\sin\omega x\,dx + \int_n^{\infty} 0\cdot\sin\omega x\,dx = \sum_{k=0}^{n-1} k\cdot\left.\frac{-\cos\omega x}{\omega}\right|_k^{k+1}.$$
Thus,
$$\widehat{f}_s(\omega) = \frac{1}{\omega}\sum_{k=0}^{n-1} k\left(\cos(\omega k)-\cos(\omega(k+1))\right)$$
$$= \frac{1}{\omega}\big(\cos\omega - \cos(2\omega) + 2\cos(2\omega) - 2\cos(3\omega) + 3\cos(3\omega) - 3\cos(4\omega) + \cdots + (n-1)\cos((n-1)\omega) - (n-1)\cos(n\omega)\big)$$
$$= \frac{1}{\omega}\big(\underbrace{\cos\omega + \cos(2\omega) + \cdots + \cos((n-1)\omega)}_{S} - (n-1)\cos(n\omega)\big)$$
$$= \frac{1}{\omega}\left(\frac{\sin\dfrac{2n-1}{2}\omega}{2\sin\dfrac{\omega}{2}} - \frac{1}{2} - (n-1)\cos(n\omega)\right),$$
where the computation of $S$ can be found in [18, p. 77].
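The closed form obtained in a) can be verified numerically; a sketch (plain Python, trapezoidal rule; function names are ours). Note that at $\omega = 1$ the closed form gives $4/3$, which matches the direct evaluation of $\int_0^{\pi} 2\cos^2 x\sin x\,dx$.

```python
import math

def fs_closed(w):
    """Closed form from a): (1 - cos(w*pi)) * (2w^2 - 4) / (w (w^2 - 4)), valid for w != 2."""
    return (1 - math.cos(w * math.pi)) * (2 * w * w - 4) / (w * (w * w - 4))

def fs_numeric(w, n=100_000):
    """Trapezoidal approximation of the sine transform of 2 cos^2 x on (0, pi)."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        y = 2 * math.cos(x) ** 2 * math.sin(w * x)
        total += y if 0 < k < n else y / 2
    return h * total
```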

W 11. Determine the sine Fourier transform $\widehat{f}_s(\omega)$ (formula (2.27)) of the following functions:
a) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} 1, & x\in(0,a)\\ 0, & x\geq a\end{cases}$, $a\in\mathbb{R}$;
b) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases}\sinh 2x, & x\in(0,\pi)\\ 0, & x\geq\pi\end{cases}$;
c) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \begin{cases} 1-x, & x\in(0,1)\\ 0, & x\geq 1\end{cases}$;
d) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{x^3}{(x^2+4)^2}$;
e) $f : (0,\infty)\to\mathbb{R}$, $f(x) = \dfrac{ax}{x^4+7x^2+10}$, $a\in\mathbb{R}^*$.
Answer. a) $\widehat{f}_s(\omega) = \displaystyle\int_0^a\sin\omega x\,dx = \frac{1-\cos\omega a}{\omega}$;
b) One gets
$$\widehat{f}_s(\omega) = \int_0^{\pi}\sinh 2x\sin\omega x\,dx = \int_0^{\pi}\frac{e^{2x}-e^{-2x}}{2}\cdot\frac{e^{i\omega x}-e^{-i\omega x}}{2i}\,dx.$$
Thus,
$$\widehat{f}_s(\omega) = \frac{1}{4i}\left(\int_0^{\pi}e^{2x+i\omega x}\,dx - \int_0^{\pi}e^{2x-i\omega x}\,dx\right) - \frac{1}{4i}\left(\int_0^{\pi}e^{-2x+i\omega x}\,dx - \int_0^{\pi}e^{-2x-i\omega x}\,dx\right)$$
$$= \frac{1}{4i}\left(\frac{e^{2\pi+i\omega\pi}-1}{2+i\omega} - \frac{e^{2\pi-i\omega\pi}-1}{2-i\omega} - \frac{e^{-2\pi+i\omega\pi}-1}{-2+i\omega} + \frac{e^{-2\pi-i\omega\pi}-1}{-2-i\omega}\right).$$
Since $e^{a+ib} = e^a(\cos b + i\sin b)$, $a, b\in\mathbb{R}$, one gets
$$\widehat{f}_s(\omega) = \frac{2\sin\omega\pi\cosh 2\pi - \omega\cos\omega\pi\sinh 2\pi}{4+\omega^2};$$
c) $\widehat{f}_s(\omega) = \displaystyle\int_0^1 (1-x)\sin\omega x\,dx = \frac{1}{\omega} - \frac{\sin\omega}{\omega^2}$;
d) One obtains
$$\int_{-\infty}^{\infty}\frac{x^3 e^{i\omega x}}{(x^2+4)^2}\,dx = 2\pi i\cdot\mathrm{res}\left(\frac{z^3 e^{i\omega z}}{(z^2+4)^2}, 2i\right) = 2\pi i\lim_{z\to 2i}\left(\frac{z^3 e^{i\omega z}}{(z+2i)^2}\right)' = \pi ie^{-2\omega}(1-\omega).$$
Therefore, $\widehat{f}_s(\omega) = \dfrac{\pi}{2}\,e^{-2\omega}(1-\omega)$;
e) $\widehat{f}_s(\omega) = \dfrac{\pi a}{6}\left(e^{-\sqrt2\,\omega} - e^{-\sqrt5\,\omega}\right)$.

E 12. Solve the following integral equations:
a) $\displaystyle\int_0^{\infty} f(x)\cos\omega x\,dx = \begin{cases} a\omega-b, & \omega\in\left(0,\frac{b}{a}\right)\\ 0, & \omega\geq\frac{b}{a}\end{cases}$, $a, b\in\mathbb{R}^*$, $ab > 0$;
b) $\displaystyle\int_0^{\infty} f(x)\sin\omega x\,dx = \begin{cases}\cos n\omega, & \omega\in(0,\pi)\\ 0, & \omega\geq\pi\end{cases}$, $n\in\mathbb{N}^*$;
c) $\displaystyle\int_0^{\infty} f(x)\cos\omega x\,dx = \frac{1}{\omega^4+13\omega^2+36}$, $\omega > 0$;
d) $\displaystyle\int_0^{\infty} f(x)\sin\omega x\,dx = \frac{2\omega}{(\omega^2+1)^2}$, $\omega > 0$.
Solution. We use in all exercises either the inversion formula (2.28) or the inversion formula (2.29).
a) Since $\int_0^{\infty} f(x)\cos\omega x\,dx = \widehat{f}_c(\omega)$, using formula (2.28), we get that $\mathcal{F}_c^{-1}[\widehat{f}_c](x) = f(x) = \dfrac{2}{\pi}\displaystyle\int_0^{\infty}\widehat{f}_c(\omega)\cos\omega x\,d\omega$. It follows that the unknown of the integral equation is $f(x) = \dfrac{2}{\pi}\displaystyle\int_0^{b/a}(a\omega-b)\cos\omega x\,d\omega$. Integrating by parts, one obtains
$$f(x) = \frac{2}{\pi}\left(\left.(a\omega-b)\frac{\sin\omega x}{x}\right|_0^{b/a} - \int_0^{b/a} a\,\frac{\sin\omega x}{x}\,d\omega\right) = -\frac{2a}{\pi x}\int_0^{b/a}\sin\omega x\,d\omega$$
$$= \frac{2a}{\pi x}\cdot\left.\frac{\cos\omega x}{x}\right|_0^{b/a} = \frac{2a}{\pi x^2}\left(\cos\left(x\cdot\frac{b}{a}\right) - 1\right);$$
b) Since $\int_0^{\infty} f(x)\sin\omega x\,dx = \widehat{f}_s(\omega)$, using formula (2.29), we get that $\mathcal{F}_s^{-1}[\widehat{f}_s](x) = f(x) = \dfrac{2}{\pi}\displaystyle\int_0^{\infty}\widehat{f}_s(\omega)\sin\omega x\,d\omega$. It follows that the unknown of the integral equation is $f(x) = \dfrac{2}{\pi}\displaystyle\int_0^{\pi}\cos n\omega\sin\omega x\,d\omega$. By transforming the trigonometric product into a sum, one obtains
$$f(x) = \frac{2}{\pi}\int_0^{\pi}\frac{\sin(n\omega+\omega x) + \sin(\omega x-n\omega)}{2}\,d\omega = \frac{1}{\pi}\left(\left.\frac{-\cos(n\omega+\omega x)}{n+x}\right|_0^{\pi} + \left.\frac{-\cos(\omega x-n\omega)}{x-n}\right|_0^{\pi}\right)$$
$$= \frac{1}{\pi}\left(\frac{-\cos(n\pi+\pi x)+1}{n+x} + \frac{-\cos(\pi x-n\pi)+1}{x-n}\right).$$
Thus,
$$f(x) = \frac{1}{\pi}\left(\frac{(-1)^{n+1}\cos\pi x+1}{n+x} + \frac{(-1)^{n+1}\cos\pi x+1}{x-n}\right) = \frac{2x\left((-1)^{n+1}\cos\pi x+1\right)}{\pi(x^2-n^2)}.$$
Since $x > 0$, it follows that the previous expression makes sense for $x\neq n$. For $x = n$ we have to do a separate computation. We get that
$$f(n) = \frac{2}{\pi}\int_0^{\pi}\cos n\omega\sin n\omega\,d\omega = \frac{1}{\pi}\int_0^{\pi}\sin 2n\omega\,d\omega = -\frac{1}{\pi}\cdot\left.\frac{\cos 2n\omega}{2n}\right|_0^{\pi} = 0;$$

c) One obtains
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\widehat{f}_c(\omega)\cos\omega x\,d\omega = \frac{2}{\pi}\int_0^{\infty}\frac{\cos\omega x}{\omega^4+13\omega^2+36}\,d\omega.$$
As $\dfrac{\cos\omega x}{\omega^4+13\omega^2+36}$ is an even function with respect to the $\omega$ variable, we get
$$\int_0^{\infty}\frac{\cos\omega x}{\omega^4+13\omega^2+36}\,d\omega = \frac{1}{2}\int_{-\infty}^{\infty}\frac{\cos\omega x}{\omega^4+13\omega^2+36}\,d\omega.$$
Also $\dfrac{\sin\omega x}{\omega^4+13\omega^2+36}$ is an odd function with respect to the $\omega$ variable. Therefore,
$$\int_{-\infty}^{\infty}\frac{\sin\omega x}{\omega^4+13\omega^2+36}\,d\omega = 0.$$
Putting together the previous facts, one gets
$$f(x) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\cos\omega x + i\sin\omega x}{\omega^4+13\omega^2+36}\,d\omega = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{e^{i\omega x}}{\omega^4+13\omega^2+36}\,d\omega.$$
Consider now the complex function $g(z) = \dfrac{e^{izx}}{z^4+13z^2+36}$, $z\in\mathbb{C}$. The singular points of $g$ are $\pm 2i$ and $\pm 3i$, all being poles of order 1. We have the following computation (using residues):
$$\int_{-\infty}^{\infty}\frac{e^{i\omega x}}{\omega^4+13\omega^2+36}\,d\omega = 2\pi i\,(\mathrm{res}\,(g,2i) + \mathrm{res}\,(g,3i)) = 2\pi i\left(\left.\frac{e^{izx}}{4z^3+26z}\right|_{z=2i} + \left.\frac{e^{izx}}{4z^3+26z}\right|_{z=3i}\right)$$
$$= 2\pi i\left(\frac{e^{-2x}}{20i} + \frac{e^{-3x}}{-30i}\right) = \frac{\pi}{5}\left(\frac{e^{-2x}}{2} - \frac{e^{-3x}}{3}\right).$$
It follows that
$$f(x) = \frac{1}{\pi}\cdot\frac{\pi}{5}\left(\frac{e^{-2x}}{2} - \frac{e^{-3x}}{3}\right) = \frac{1}{5}\left(\frac{e^{-2x}}{2} - \frac{e^{-3x}}{3}\right).$$

d) One gets
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\widehat{f}_s(\omega)\sin\omega x\,d\omega = \frac{2}{\pi}\int_0^{\infty}\frac{2\omega}{(\omega^2+1)^2}\sin\omega x\,d\omega.$$
As $\dfrac{2\omega}{(\omega^2+1)^2}\sin\omega x$ is an even function with respect to the $\omega$ variable, we get
$$f(x) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{2\omega}{(\omega^2+1)^2}\sin\omega x\,d\omega.$$
Consider the function $g(z) = \dfrac{2ze^{izx}}{(z^2+1)^2}$, $z\in\mathbb{C}$. It is obvious that $i$ is the only singular point of $g$ situated in the upper half plane and it is a pole of order 2. Since
$$\mathrm{res}\,(g,i) = \lim_{z\to i}\left(\frac{2ze^{izx}}{(z+i)^2}\right)' = \frac{xe^{-x}}{2},$$
it follows that
$$\int_{-\infty}^{\infty}\frac{2\omega e^{i\omega x}}{(\omega^2+1)^2}\,d\omega = 2\pi i\cdot\mathrm{res}\,(g,i) = \pi ixe^{-x}$$
and
$$f(x) = \frac{1}{\pi}\cdot\mathrm{Im}\left(\int_{-\infty}^{\infty}\frac{2\omega e^{i\omega x}}{(\omega^2+1)^2}\,d\omega\right) = xe^{-x}.$$
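Solution d) can also be verified in the forward direction: the sine transform of $f(x) = xe^{-x}$ should reproduce the right-hand side $\dfrac{2\omega}{(\omega^2+1)^2}$. A sketch (plain Python, trapezoidal rule; the truncation X = 60 is our choice, and the exponential decay makes the tail negligible):

```python
import math

def lhs_numeric(w, X=60.0, n=120_000):
    """Trapezoidal approximation of the sine transform of f(x) = x e^{-x}."""
    h = X / n
    total = 0.0
    for k in range(n + 1):
        x = k * h
        y = x * math.exp(-x) * math.sin(w * x)
        total += y if 0 < k < n else y / 2
    return h * total

def rhs_closed(w):
    """Right-hand side of the integral equation: 2w / (w^2 + 1)^2."""
    return 2 * w / (w * w + 1) ** 2
```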

W 12. Solve the following integral equations:
a) $\displaystyle\int_0^{\infty} f(x)\sin\omega x\,dx = \begin{cases} e^{a\omega-b}, & \omega\in\left(0,\frac{b}{a}\right)\\ 0, & \omega\geq\frac{b}{a}\end{cases}$, $a, b\in\mathbb{R}^*$, $ab > 0$;
b) $\displaystyle\int_0^{\infty} f(x)\cos\omega x\,dx = \begin{cases}\sin(2\pi\omega), & \omega\in(0,1)\\ 0, & \omega\geq 1\end{cases}$;
c) $\displaystyle\int_0^{\infty} f(x)\cos\omega x\,dx = \frac{1}{(\omega^2+4)^2}$, $\omega > 0$;
d) $\displaystyle\int_0^{\infty} f(x)\sin\omega x\,dx = \frac{\omega^3}{\omega^4+1}$, $\omega > 0$.
Answer. a) One gets
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\widehat{f}_s(\omega)\sin\omega x\,d\omega = \frac{2}{\pi}\int_0^{b/a} e^{a\omega-b}\sin\omega x\,d\omega.$$
We denote $\displaystyle\int_0^{b/a} e^{a\omega-b}\sin\omega x\,d\omega$ by $I$ and one can solve it integrating by parts twice. We get
$$I = \frac{\sin\left(\frac{b}{a}x\right)}{a} - \frac{x}{a^2}\left(\cos\left(\frac{b}{a}x\right) - e^{-b}\right) - \frac{x^2}{a^2}\,I,$$
so
$$I = \frac{a\sin\left(\frac{b}{a}x\right) - x\cos\left(\frac{b}{a}x\right) + xe^{-b}}{x^2+a^2}$$
and
$$f(x) = \frac{2\left(a\sin\left(\frac{b}{a}x\right) - x\cos\left(\frac{b}{a}x\right) + xe^{-b}\right)}{\pi(x^2+a^2)};$$
b) One obtains
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\widehat{f}_c(\omega)\cos\omega x\,d\omega = \frac{2}{\pi}\int_0^1\sin 2\pi\omega\cos\omega x\,d\omega = \frac{1}{\pi}\int_0^1\left(\sin(2\pi\omega+\omega x) + \sin(2\pi\omega-\omega x)\right)d\omega.$$
Thus,
$$f(x) = \frac{4(1-\cos x)}{4\pi^2-x^2}\quad(\text{if } x\neq 2\pi)$$
and
$$f(2\pi) = \frac{2}{\pi}\int_0^1\sin 2\pi\omega\cos 2\pi\omega\,d\omega = \frac{1}{\pi}\int_0^1\sin 4\pi\omega\,d\omega = 0;$$
c) One gets
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\frac{\cos\omega x}{(\omega^2+4)^2}\,d\omega = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\cos\omega x}{(\omega^2+4)^2}\,d\omega = \frac{1}{\pi}\,\mathrm{Re}\int_{-\infty}^{\infty}\frac{e^{i\omega x}}{(\omega^2+4)^2}\,d\omega$$
$$= \frac{1}{\pi}\cdot 2\pi i\cdot\mathrm{res}\left(\frac{e^{izx}}{(z^2+4)^2}, 2i\right) = \frac{e^{-2x}(2x+1)}{16};$$
d) Hint: $z^4+1 = 0 \Leftrightarrow z^4+2z^2+1-2z^2 = 0 \Leftrightarrow (z^2+1)^2 - (\sqrt2\,z)^2 = 0$. Hence, the roots are $\dfrac{\sqrt2}{2}(1\pm i)$ and $\dfrac{\sqrt2}{2}(-1\pm i)$. We get
$$f(x) = e^{-\frac{\sqrt2}{2}x}\cos\left(\frac{\sqrt2}{2}x\right).$$

E 13. Represent the following functions as a Fourier integral:
a) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases} a, & |x| < b\\ 0, & |x|\geq b\end{cases}$, $a, b\in\mathbb{R}$, $b > 0$;
b) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases} e^x, & |x| < 1\\ 0, & |x|\geq 1\end{cases}$;
c) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{1}{x^2+x+1}$.
Solution. We use formula (2.23), and just because we need to make a simplifying notation, we repeat it once more. It is the following:
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\cos\omega x\left(\int_{-\infty}^{\infty} f(y)\cos\omega y\,dy\right)d\omega + \frac{1}{\pi}\int_0^{\infty}\sin\omega x\left(\int_{-\infty}^{\infty} f(y)\sin\omega y\,dy\right)d\omega.$$
Let us use from now on the following notation:
$$A(\omega) := \int_{-\infty}^{\infty} f(y)\cos\omega y\,dy \qquad\text{and}\qquad B(\omega) := \int_{-\infty}^{\infty} f(y)\sin\omega y\,dy.$$
a) One obtains $A(\omega) = \displaystyle\int_{-b}^{b} a\cos\omega y\,dy = a\left.\frac{\sin\omega y}{\omega}\right|_{-b}^{b} = \frac{2a\sin\omega b}{\omega}$ and $B(\omega) = \displaystyle\int_{-b}^{b} a\sin\omega y\,dy = 0$ (since the function inside the integral is odd with respect to the $y$ variable). It follows that
$$f(x) = \frac{2a}{\pi}\int_0^{\infty}\frac{\sin\omega b}{\omega}\cos\omega x\,d\omega;$$

b) One gets $A(\omega) = \displaystyle\int_{-1}^{1} e^y\cos\omega y\,dy$ and $B(\omega) = \displaystyle\int_{-1}^{1} e^y\sin\omega y\,dy$. The easiest way to compute the integrals is to organize the computations in the following way:
$$A(\omega)+iB(\omega) = \int_{-1}^{1} e^y e^{i\omega y}\,dy = \int_{-1}^{1} e^{y+i\omega y}\,dy = \left.\frac{e^{y+i\omega y}}{1+i\omega}\right|_{-1}^{1} = \frac{e^{1+i\omega}-e^{-1-i\omega}}{1+i\omega} = \frac{(1-i\omega)(e^{1+i\omega}-e^{-1-i\omega})}{1+\omega^2}.$$
Since, for $z = a+ib\in\mathbb{C}$,
$$e^z - e^{-z} = e^a(\cos b+i\sin b) - e^{-a}(\cos b-i\sin b) = \cos b\,(e^a-e^{-a}) + i\sin b\,(e^a+e^{-a}) = 2\cos b\sinh a + 2i\sin b\cosh a,$$
it follows that $A(\omega)+iB(\omega) = \dfrac{(1-i\omega)(2\cos\omega\sinh 1 + 2i\sin\omega\cosh 1)}{1+\omega^2}$ and
$$A(\omega) = \frac{2(\cos\omega\sinh 1 + \omega\sin\omega\cosh 1)}{1+\omega^2}, \qquad B(\omega) = \frac{2(\sin\omega\cosh 1 - \omega\cos\omega\sinh 1)}{1+\omega^2}.$$
Therefore,
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\frac{\cos\omega\sinh 1 + \omega\sin\omega\cosh 1}{1+\omega^2}\cos\omega x\,d\omega + \frac{2}{\pi}\int_0^{\infty}\frac{\sin\omega\cosh 1 - \omega\cos\omega\sinh 1}{1+\omega^2}\sin\omega x\,d\omega;$$

c) One obtains $A(\omega) = \displaystyle\int_{-\infty}^{\infty}\frac{\cos\omega y}{y^2+y+1}\,dy$ and $B(\omega) = \displaystyle\int_{-\infty}^{\infty}\frac{\sin\omega y}{y^2+y+1}\,dy$. Using the same idea as before,
$$A(\omega)+iB(\omega) = \int_{-\infty}^{\infty}\frac{e^{i\omega y}}{y^2+y+1}\,dy = 2\pi i\cdot\mathrm{res}\left(\frac{e^{i\omega z}}{z^2+z+1}, \varepsilon\right),$$
where $\varepsilon = \dfrac{-1+i\sqrt3}{2}$ is a pole of order 1 of the function $\dfrac{e^{i\omega z}}{z^2+z+1}$, $z\in\mathbb{C}$. So
$$A(\omega)+iB(\omega) = 2\pi i\left.\frac{e^{i\omega z}}{2z+1}\right|_{z=\varepsilon} = \frac{2\pi}{\sqrt3}\,e^{-\frac{\omega\sqrt3}{2}-i\frac{\omega}{2}} = \frac{2\pi}{\sqrt3}\,e^{-\frac{\omega\sqrt3}{2}}\left(\cos\frac{\omega}{2} - i\sin\frac{\omega}{2}\right)$$
and $A(\omega) = \dfrac{2\pi}{\sqrt3}\,e^{-\frac{\omega\sqrt3}{2}}\cos\dfrac{\omega}{2}$, $B(\omega) = -\dfrac{2\pi}{\sqrt3}\,e^{-\frac{\omega\sqrt3}{2}}\sin\dfrac{\omega}{2}$. It follows that
$$f(x) = \frac{2}{\sqrt3}\int_0^{\infty} e^{-\frac{\omega\sqrt3}{2}}\left(\cos\frac{\omega}{2}\cos\omega x - \sin\frac{\omega}{2}\sin\omega x\right)d\omega = \frac{2}{\sqrt3}\int_0^{\infty} e^{-\frac{\omega\sqrt3}{2}}\cos\left(\frac{\omega}{2}+\omega x\right)d\omega.$$

W 13. Represent the following functions as a Fourier integral:
a) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases} ax-b, & |x| < \frac{b}{a}\\ 0, & |x|\geq\frac{b}{a}\end{cases}$, $a, b\in\mathbb{R}^*$, $ab > 0$;
b) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \begin{cases}\sin x, & |x| < \pi\\ 0, & |x|\geq\pi\end{cases}$;
c) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{1}{(2x^2+1)^2}$;
d) $f : \mathbb{R}\to\mathbb{R}$, $f(x) = \dfrac{1}{2x^2+2x+1}$.
Answer. We use again formula (2.23) and the notations from the previous exercise (Exercise 13).
a) $A(\omega) = -\dfrac{2b\sin\left(\frac{b}{a}\omega\right)}{\omega}$ and $B(\omega) = -\dfrac{2b\cos\left(\frac{b}{a}\omega\right)}{\omega} + \dfrac{2a}{\omega^2}\sin\left(\frac{b}{a}\omega\right)$;
b) $A(\omega) = 0$ and
$$B(\omega) = \int_{-\pi}^{\pi}\sin y\sin\omega y\,dy = 2\int_0^{\pi}\sin y\sin\omega y\,dy = \int_0^{\pi}\left(\cos(y-\omega y) - \cos(y+\omega y)\right)dy = \frac{2\sin\omega\pi}{1-\omega^2}\ (\text{if }\omega\neq 1).$$
Also $B(1) = 2\displaystyle\int_0^{\pi}\sin^2 y\,dy = \int_0^{\pi}(1-\cos 2y)\,dy = \pi$. Therefore,
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\frac{\sin\omega\pi}{1-\omega^2}\sin\omega x\,d\omega;$$
c) $B(\omega) = 0$ and
$$A(\omega) = \int_{-\infty}^{\infty}\frac{\cos\omega y}{(2y^2+1)^2}\,dy = \mathrm{Re}\left(\int_{-\infty}^{\infty}\frac{e^{i\omega y}}{(2y^2+1)^2}\,dy\right) = \mathrm{Re}\left(2\pi i\cdot\mathrm{res}\left(\frac{e^{i\omega z}}{(2z^2+1)^2}, \frac{i}{\sqrt2}\right)\right) = \frac{\pi}{4\sqrt2}\,e^{-\frac{\omega}{\sqrt2}}(\omega\sqrt2+2).$$
Therefore,
$$f(x) = \frac{1}{4\sqrt2}\int_0^{\infty} e^{-\frac{\omega}{\sqrt2}}(\omega\sqrt2+2)\cos\omega x\,d\omega;$$
d) One gets
$$A(\omega)+iB(\omega) = \int_{-\infty}^{\infty}\frac{e^{i\omega y}}{2y^2+2y+1}\,dy = 2\pi i\cdot\mathrm{res}\left(\frac{e^{i\omega z}}{2z^2+2z+1}, \frac{-1+i}{2}\right) = \pi e^{-\frac{\omega}{2}-i\frac{\omega}{2}} = \pi e^{-\frac{\omega}{2}}\left(\cos\frac{\omega}{2} - i\sin\frac{\omega}{2}\right),$$
so $A(\omega) = \pi e^{-\frac{\omega}{2}}\cos\dfrac{\omega}{2}$ and $B(\omega) = -\pi e^{-\frac{\omega}{2}}\sin\dfrac{\omega}{2}$. Therefore,
$$f(x) = \int_0^{\infty} e^{-\frac{\omega}{2}}\cos\left(\frac{\omega}{2}+\omega x\right)d\omega.$$

E 14. Consider the function $f : (0,\infty)\to(0,\infty)$, $f(x) = \dfrac{1}{\sqrt x}$.
a) Prove that $\dfrac{1}{\sqrt x} = \dfrac{2}{\sqrt\pi}\displaystyle\int_0^{\infty} e^{-xu^2}\,du$.
b) Compute $\widehat{f}_c(\omega)$ (using a)).

Solution. a) Employing the substitution $u\sqrt{x} = t$, one gets
$$\int_0^{\infty} e^{-xu^2}\,du = \int_0^{\infty} e^{-t^2}\,\frac{dt}{\sqrt x} = \frac{1}{\sqrt x}\int_0^{\infty} e^{-t^2}\,dt = \frac{\sqrt\pi}{2\sqrt x},$$
which proves the identity;
b) One obtains
$$\widehat{f}_c(\omega) = \int_0^{\infty}\frac{\cos\omega x}{\sqrt x}\,dx = \int_0^{\infty}\left(\frac{2}{\sqrt\pi}\int_0^{\infty} e^{-xu^2}\,du\right)\cos\omega x\,dx = \frac{2}{\sqrt\pi}\int_0^{\infty}\left(\int_0^{\infty} e^{-xu^2}\cos\omega x\,dx\right)du.$$
Let us denote $\displaystyle\int_0^{\infty} e^{-xu^2}\cos\omega x\,dx$ by $I(u)$. We are going to solve it integrating by parts twice. We obtain
$$I(u) = \left.e^{-xu^2}\,\frac{\sin\omega x}{\omega}\right|_0^{\infty} + \frac{u^2}{\omega}\int_0^{\infty} e^{-xu^2}\sin\omega x\,dx = 0 + \frac{u^2}{\omega}\int_0^{\infty} e^{-xu^2}\sin\omega x\,dx$$
$$= \frac{u^2}{\omega}\left(\left.-e^{-xu^2}\,\frac{\cos\omega x}{\omega}\right|_0^{\infty} - \frac{u^2}{\omega}\int_0^{\infty} e^{-xu^2}\cos\omega x\,dx\right) = \frac{u^2}{\omega}\left(\frac{1}{\omega} - \frac{u^2}{\omega}\,I(u)\right) = \frac{u^2}{\omega^2} - \frac{u^4}{\omega^2}\,I(u),$$
so $I(u) = \dfrac{u^2}{u^4+\omega^2}$. Therefore, $\widehat{f}_c(\omega) = \dfrac{2}{\sqrt\pi}\displaystyle\int_0^{\infty}\frac{u^2}{u^4+\omega^2}\,du$. One way to calculate this last integral is to decompose it in simple fractions. We suggest a different approach, namely the following:
$$\int_0^{\infty}\frac{u^2}{u^4+\omega^2}\,du = \frac{1}{2}\left(\int_0^{\infty}\frac{u^2+\omega}{u^4+\omega^2}\,du + \int_0^{\infty}\frac{u^2-\omega}{u^4+\omega^2}\,du\right).$$
Let us denote $\displaystyle\int_0^{\infty}\frac{u^2+\omega}{u^4+\omega^2}\,du$ by $I_1(\omega)$ and $\displaystyle\int_0^{\infty}\frac{u^2-\omega}{u^4+\omega^2}\,du$ by $I_2(\omega)$. We get
$$I_1(\omega) = \int_0^{\infty}\frac{1+\dfrac{\omega}{u^2}}{u^2+\dfrac{\omega^2}{u^2}}\,du = \int_0^{\infty}\frac{\left(u-\dfrac{\omega}{u}\right)'}{\left(u-\dfrac{\omega}{u}\right)^2+2\omega}\,du.$$
Now we use the substitution $u-\dfrac{\omega}{u} = t$. One gets
$$I_1(\omega) = \int_{-\infty}^{\infty}\frac{dt}{t^2+2\omega} = \frac{1}{\sqrt{2\omega}}\left.\arctan\frac{t}{\sqrt{2\omega}}\right|_{-\infty}^{\infty} = \frac{1}{\sqrt{2\omega}}\left(\frac{\pi}{2}+\frac{\pi}{2}\right) = \frac{\pi}{\sqrt{2\omega}}.$$
On the other hand,
$$I_2(\omega) = \int_0^{\infty}\frac{u^2-\omega}{u^4+\omega^2}\,du = \int_0^{\infty}\frac{\left(u+\dfrac{\omega}{u}\right)'}{\left(u+\dfrac{\omega}{u}\right)^2-2\omega}\,du,$$
and using the substitution $u+\dfrac{\omega}{u} = y$, one finds $I_2(\omega) = 0$. It follows that
$$\widehat{f}_c(\omega) = \frac{2}{\sqrt\pi}\cdot\frac{1}{2}\cdot\frac{\pi}{\sqrt{2\omega}} = \sqrt{\frac{\pi}{2\omega}}.$$
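The identity in a) (with the constant $\frac{2}{\sqrt\pi}$) is easy to confirm numerically; a sketch (plain Python, trapezoidal rule; the truncation and step size are our choices):

```python
import math

def inv_sqrt_via_gaussian(x, U=60.0, n=120_000):
    """Approximates (2/sqrt(pi)) * integral_0^U e^{-x u^2} du by the trapezoidal rule."""
    h = U / n
    total = 0.0
    for k in range(n + 1):
        u = k * h
        y = math.exp(-x * u * u)
        total += y if 0 < k < n else y / 2
    return 2 / math.sqrt(math.pi) * h * total
```

For x = 2 the result should be close to $1/\sqrt2$, and for x = 0.25 close to 2.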

W 14. Calculate $\widehat{f}_s(\omega)$ for $f : (0,\infty)\to(0,\infty)$, $f(x) = \dfrac{1}{\sqrt x}$.

Answer. Similarly to the computation of $\widehat{f}_c(\omega)$ from Exercise 14, we obtain that
$$\widehat{f}_s(\omega) = \sqrt{\frac{\pi}{2\omega}}.$$

E 15. Calculate the Fourier transform of the function
$$h(x) = \begin{cases}\dfrac{\sin^2 x}{x^2}, & x\neq 0\\[2mm] 1, & x = 0\end{cases}.$$
 x 1, x = 0

Solution. We use formula (2.16) for the Fourier transform of the product of two functions $f$ and $g$. The formula is $\mathcal{F}[fg](\omega) = \dfrac{1}{2\pi}\displaystyle\int_{-\infty}^{\infty}\widehat{f}(u)\,\widehat{g}(\omega-u)\,du$ and we are taking
$$f(x) = g(x) = \begin{cases}\dfrac{\sin x}{x}, & x\neq 0\\[2mm] 1, & x = 0\end{cases}.$$
From Example 2.1.7 (the Cardinal Sine), we know that
$$\widehat{f}(u) = \begin{cases}\pi, & u\in(-1,1)\\ 0, & |u| > 1\\ \frac{\pi}{2}, & u = \pm 1\end{cases}.$$
It follows that
$$\widehat{g}(\omega-u) = \begin{cases}\pi, & \omega-u\in(-1,1)\\ 0, & |\omega-u| > 1\\ \frac{\pi}{2}, & \omega-u = \pm 1\end{cases}$$
which is equivalent to
$$\widehat{g}(\omega-u) = \begin{cases}\pi, & u\in(\omega-1, \omega+1)\\ 0, & u\in\mathbb{R}\setminus[\omega-1, \omega+1]\\ \frac{\pi}{2}, & u = \omega\pm 1\end{cases}.$$
Therefore, we have the following cases:
• if $\omega\in[-2,0]$, then $\mathcal{F}[h](\omega) = \dfrac{1}{2\pi}\displaystyle\int_{-1}^{\omega+1}\pi^2\,du = \dfrac{\pi}{2}(\omega+2)$;
• if $\omega\in[0,2]$, then $\mathcal{F}[h](\omega) = \dfrac{1}{2\pi}\displaystyle\int_{\omega-1}^{1}\pi^2\,du = \dfrac{\pi}{2}(2-\omega)$;
• if $\omega\in\mathbb{R}\setminus[-2,2]$, then $\mathcal{F}[h](\omega) = 0$.
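The resulting triangle-shaped transform can be checked by direct numerical integration; since h is even, the transform reduces to a cosine integral. A sketch (plain Python, trapezoidal rule; the truncation X = 2000 is our choice, and the $1/x^2$ decay of the integrand keeps the tail error around $10^{-3}$):

```python
import math

def sinc(x):
    """Cardinal sine with the removable singularity filled in at 0."""
    return 1.0 if x == 0 else math.sin(x) / x

def h_hat_numeric(w, X=2000.0, n=400_000):
    """Trapezoidal approximation of the integral of sinc(x)^2 * cos(w x) over [-X, X]."""
    h = 2 * X / n
    total = 0.0
    for k in range(n + 1):
        x = -X + k * h
        y = sinc(x) ** 2 * math.cos(w * x)
        total += y if 0 < k < n else y / 2
    return h * total
```

At $\omega = 1$ the value should be close to $\frac{\pi}{2}(2-1) = \frac{\pi}{2}$, and for $|\omega| > 2$ close to 0.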

2.8 MATLAB Applications


The Fourier transform and the inverse Fourier transform are calculated
with the functions fourier and ifourier.
The Fast Fourier Transform and the inverse Fast Fourier Transform are
calculated with the functions fft and ifft.

2.8.1 Fourier Transform


The syntax is one of the following:

1. F = fourier(f ). This returns the Fourier transform F of the symbolic preimage f . The default time variable is x and the default frequency variable is w. The definition that is used is similar to formula (2.3) in Definition 2.1.2, but with the kernel $e^{-iwx}$ (instead of $e^{iwx}$, see Remark 2.4.4):
$$F(w) = \int_{-\infty}^{\infty} f(x)e^{-iwx}\,dx.$$
If f = f (w), then fourier returns F as a function of v, namely F = F (v);

2. F = fourier(f, v). This returns the Fourier transform F as a function


of the parameter v instead of the default variable w;

3. F = fourier(f, u, v). This returns the Fourier transform F as a func-


tion of the variable v instead of the default variable w and considers
that f is a function of the variable u instead of the default variable x.

Exponentials
Example 2.8.1.
>> syms x;
f = exp(−xˆ2);
F = fourier(f );
The answer is F = piˆ(1/2)* exp(−wˆ2/4).

Example 2.8.2.
>> syms u v;
f = exp(−9*uˆ2);
F = fourier(f, u, v);
The answer is F = (piˆ(1/2)* exp(−vˆ2/36))/3.

Example 2.8.3.
>> syms x;
f = 1/(1 + xˆ2);
F = fourier(f );
The answer is F = pi* exp(−abs(w)).

Example 2.8.4. The fourier command can also be applied to an expression.
>> F = fourier(1/(xˆ2 + 4), x, w);
The answer is F = (pi* exp(−2*abs(w)))/2.

Example 2.8.5.
>> syms x;
f = 1/(aˆ2 + xˆ2);
F = fourier(f );
The answer is F = (pi* exp(−abs(w)*(aˆ2)ˆ(1/2)))/(aˆ2)ˆ(1/2).

Example 2.8.6.
>> syms x;
f = (2 − i*x)ˆ(−3);
F = fourier(f );
The answer is F = (wˆ2*pi* exp(−2*w))/2 + (wˆ2*pi* exp(−2*w)*
sign(w))/2.

Example 2.8.7.
>> syms x;
f = (5 − i*x)ˆ(−4);
F = fourier(f );
The answer is F = (wˆ3*pi* exp(−5*w))/6 + (wˆ3*pi* exp(−5*w)*
sign(w))/6.

Example 2.8.8.
>> syms x;
f = exp(−3*abs(x));
F = fourier(f );
The answer is F = 6/(wˆ2 + 9).

Example 2.8.9.
>> syms u w a;
f = exp(−a*abs(u));
F = fourier(f, u, w);
The answer is F = (2*a)/(aˆ2 + wˆ2).

Heaviside’s Step Function
Example 2.8.10.
>> syms x;
f = heaviside(x) − heaviside(x − 1);
F = fourier(f );
The answer is F = (sin(w) + cos(w)*1i)/w − 1i/w.
Example 2.8.11.
>> syms x;
f = heaviside(x) − heaviside(x − 1) + heaviside(−x) − heaviside(−x − 1);
F = fourier(f );
The answer is F = −(cos(w)*1i − sin(w))/w + (sin(w) + cos(w)*1i)/w.
Another way that this can be done is the following:
>> syms x;
f = heaviside(x) − heaviside(x − 1) + heaviside(−x) − heaviside(−x − 1);
F = simplify(fourier(f ));
The answer is F = (2* sin(w))/w.
Example 2.8.12. If the transformation variable is not specified, then the
fourier command uses the variable x. Compare the following two examples:
i. >> syms x y z;
f = exp(−xˆ2)* exp(−yˆ2);
F = fourier(f, z);

The answer is F = piˆ(1/2)* exp(−yˆ2)* exp(−zˆ2/4);


ii. >> syms x y z;
f = exp(−xˆ2)* exp(−yˆ2);
F = fourier(f, y, z);

The answer is F = piˆ(1/2)* exp(−xˆ2)* exp(−zˆ2/4);


Example 2.8.13. If f = f (w), then the fourier command returns F as a
function of v, namely F = F (v).
>> syms w;
f = w* exp(−wˆ2);
F = fourier(f );
The answer is F = −(v*piˆ(1/2)* exp(−vˆ2/4)*1i)/2.

Example 2.8.14. The fourier command returns F even if the function f
is not absolutely integrable.
>> syms x;
f = x/(xˆ2 + 1);
F = fourier(f );
The answer is F = −pi* exp(−abs(w))*sign(w)*1i.

Dirac’s Delta Function


Example 2.8.15.
>> f = xˆ4;
F = fourier(f );
The answer is F = 2*pi*dirac(4, w).
Example 2.8.16.
>> syms t0;
f = 3*heaviside(t − t0);
F = fourier(f );
The answer is F = 3* exp(−t0*w*1i)*(pi*dirac(w) − 1i/w).
Some properties of the Fourier transform can also be obtained using the
fourier command.

Differentiation of the Original and Generalizations


Example 2.8.17.
>> syms f (x) w;
fourier(diff(f (x)));
ans = w*fourier(f (x), x, w)*1i.
Example 2.8.18.
>> syms f (t) w;
fourier(diff(diff(f (t))));
ans = −wˆ2*fourier(f (t), t, w).
Example 2.8.19.
>> syms f (t) w;
fourier(diff(diff(diff(f (t)))));
ans = −wˆ3*fourier(f (t), t, w)*1i.

Translation
Theorem 2.2.5 (Translation) and formula (2.8) (with ω − a instead of
ω + a) can be used together with the fourier command.
Example 2.8.20.
>> syms f (x) w;
fourier(exp(i*x*3)*f (x));
ans = fourier(f (x), x, w − 3).
The fourier command can be applied to matrices.

Matrices
Example 2.8.21.
>> syms x y v w z;
m = [exp(−2*xˆ2) 1/(yˆ2 + 16); exp(−3*x)*heaviside(x) 5];
M = fourier(m, [x y; x y], [w z; v w]);
The answer is
M = [(2ˆ(1/2)*piˆ(1/2)* exp(−wˆ2/8))/2, (pi* exp(−4*abs(z)))/4]
[ 1/(3 + v*1i), 10*pi*dirac(w)].

2.8.2 Inverse Fourier Transform


The syntax is one of the following:
1. f = ifourier(F). This returns the inverse Fourier transform of the image
F with default independent variable w. The default variable of the preimage
f is x. The definition that is used is similar to formula (2.15) in
Theorem 2.3.1 (different kernels):
f (x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F (w)e^{iwx} \, dw;

2. f = ifourier(F, u). This returns the inverse Fourier transform f as a
function of the parameter u instead of the default variable x;
3. f = ifourier(F, v, u). This returns the inverse Fourier transform f
as a function of the variable u instead of the default variable x and
considers that F is a function of the variable v instead of the default
variable w.

Example 2.8.22.
>> F = exp(−wˆ2/4);
f = ifourier(F );
The answer is f = exp(−xˆ2)/piˆ(1/2).
Example 2.8.23. If F = F (x), then the ifourier command returns f as a
function of t, namely f = f (t).
>> F = −x* exp(−xˆ2/4)*1i;
f = ifourier(F );
The answer is f = (2*t* exp(−tˆ2))/piˆ(1/2).
Example 2.8.24.
>> syms x real;
g = pi* exp(−abs(x));
ifourier(g, z);
ans = 1/(zˆ2 + 1).
Example 2.8.25.
>> syms w u;
syms a positive;
F = exp(−wˆ2/(4*aˆ2));
f = ifourier(F, w, u);
The answer is f = (a* exp(−aˆ2*uˆ2))/piˆ(1/2).
As before, some properties of the Fourier transform can be obtained using
the ifourier command.

Differentiation of the Image


Example 2.8.26.
>> syms F (w) w x;
ifourier(diff(F (w)));
ans = −(x*fourier(F (w), w, −x)*1i)/(2*pi).
Example 2.8.27. Sometimes the simplify command is necessary.
>> f = 1/(xˆ4 + 1);
F = fourier(f );
The answer is F = pi* sin(pi/4 + (2ˆ(1/2)*abs(w))/2)* exp(−(2ˆ(1/2)*
abs(w))/2). Now compare the following:

i. g = ifourier(F );

The answer is g = (2ˆ(1/2)*(−1/8 − 1i/8))/(x*1i − 2ˆ(1/2)*(1/2 +


1i/2)) + (2ˆ(1/2)*(−1/8 + 1i/8))/(x*1i − 2ˆ(1/2)*(1/2 − 1i/2)) +
(2ˆ(1/2)*(1/8 − 1i/8))/(x*1i + 2ˆ(1/2)*(1/2 − 1i/2)) + (2ˆ(1/2)*(1/8 +
1i/8))/(x*1i + 2ˆ(1/2)*(1/2 + 1i/2));

ii. h = simplify(ifourier(F ));

The answer is h = 1/(xˆ4 + 1).

Translation
Theorem 2.2.3 (Time Delay) and formula (2.7) (with x + a instead of
x − a) can be used together with the ifourier command.
Example 2.8.28.
>> syms F (w) x;
ifourier(exp(i*w*5)*F (w));
ans = fourier(F (w), w, −x − 5)/(2*pi).
If the ifourier command cannot find an explicit representation of the
transform, then it returns results in terms of the direct Fourier transform.
Example 2.8.29.
>> syms F (w) t;
f = ifourier(F, w, t);
The answer is f = fourier(F (w), w, −t)/(2*pi).
The ifourier command can also be applied to matrices.

Matrices
Example 2.8.30.
>> syms x y v w z;
M = [exp(−wˆ2/8), (pi* exp(−4*abs(z)))/4; 1/(3 + v*1i), 10*pi*dirac(w)];
m = ifourier(M, [w z; v w], [x y; x y]);
The answer is
m = [(2ˆ(1/2)* exp(−2*xˆ2))/piˆ(1/2), 1/(yˆ2 + 16)]
[ (exp(−3*x)*(sign(x) + 1))/2, 5].

2.8.3 Fast Fourier Transform
The syntax is the following: F = fft(f ). This computes the discrete
Fourier transform (DFT) of f using a fast Fourier transform (FFT)
algorithm. If

• f is a vector, then fft(f ) returns the Fourier transform of the vector;

• f is a matrix, then fft(f ) treats the columns of f as vectors and
returns the Fourier transform of each column.

Example 2.8.31.
>> f = [1 1 0 − 1];
F = fft(f );

F = 1.0000 + 0.0000i 1.0000 − 2.0000i 1.0000 + 0.0000i 1.0000 + 2.0000i.

f 1 = ifft(F );

f1 = 1 1 0 − 1.
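The four-point transform above can also be cross-checked outside MATLAB. The following Python sketch (the helper dft is our own illustrative definition, not a toolbox function) evaluates the same unnormalized DFT sum directly:

```python
import cmath

def dft(x):
    # Unnormalized DFT, the same convention as MATLAB's fft:
    # F[k] = sum_n x[n] * exp(-2*pi*i*k*n/N).
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

F = dft([1, 1, 0, -1])
print(F)  # approximately [1, 1-2j, 1, 1+2j], matching the fft output above
```

Since MATLAB's fft uses the same convention, the printed values agree with F above up to rounding.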

Example 2.8.32.
>> F s = 10; % Sampling frequency
t = −0.5 : 1/F s : 0.5;
f = 1/(4*sqrt(2*pi*0.1))* exp(−t.ˆ2/(2*0.1));
F = fft(f ); % Convert a Gaussian pulse from the time domain to the fre-
quency domain

F =

Columns 1 through 3: 2.2982+0.0000i −0.5934−0.1742i −0.0402−0.0258i

Columns 4 through 6: −0.0142−0.0164i −0.0040−0.0087i −0.0004−0.0027i

Columns 7 through 9: −0.0004+0.0027i −0.0040+0.0087i −0.0142+0.0164i

Columns 10 through 11: −0.0402+0.0258i −0.5934+0.1742i

f = ifft(F ); % computes the inverse Fast Fourier Transform

f=

Columns 1 through 5

0.0904 0.1417 0.2011 0.2582 0.3000

Columns 6 through 11

0.3154 0.3000 0.2582 0.2011 0.1417 0.0904
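A quick sanity check for Example 2.8.32: the first (DC) entry of an unnormalized DFT is simply the sum of the samples. The short Python sketch below (standard library only; the sampling values are copied from the example) reproduces the value 2.2982:

```python
import math

Fs = 10                                   # sampling frequency, as in the example
ts = [-0.5 + n / Fs for n in range(11)]   # the 11 sample points -0.5:1/Fs:0.5
f = [1 / (4 * math.sqrt(2 * math.pi * 0.1)) * math.exp(-t**2 / (2 * 0.1))
     for t in ts]
F0 = sum(f)   # the DC bin of the DFT equals the plain sum of the samples
print(round(F0, 4))  # 2.2982, the real part of the first entry of F above
```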

2.9 Maple Applications


2.9.1 Fourier Transform
The command with(inttrans) allows the use of the command fourier,
which is the standard one. The syntax is F = fourier(f(t), t, w) and it
returns the Fourier transform F (w) of the symbolic preimage f (t). The
definition used is similar to formula (2.3) in Definition 2.1.2 and it is the
following:

F (w) = \int_{-\infty}^{\infty} f (t)e^{-iwt} \, dt.

The minus sign in the exponent is just a convention, but we will see that it
also impacts the formula for the inverse Fourier transform.
The fourier command recognizes derivatives (diff or Diff ), integrals
(int or Int), the Dirac delta (or unit-impulse) function as Dirac(t) and
Heaviside’s step function as Heaviside(t).

Heaviside’s Step Function


Example 2.9.1.
> with(inttrans):
 
F = fourier\left(\frac{1}{t^2 + 1}, t, w\right)

F = \pi\left(e^{-w}\, Heaviside(w) + e^{w}\, Heaviside(-w)\right)




Example 2.9.2.
> with(inttrans):
 
fourier\left(\frac{5}{x^2 + 3}, x, w\right)

\frac{5\sqrt{3}\,\pi}{3}\left(e^{\sqrt{3} w}\, Heaviside(-w) + e^{-\sqrt{3} w}\, Heaviside(w)\right)

Exponentials
Example 2.9.3.
> with(inttrans):
fourier(exp(-x^2), x, w)

e^{-\frac{w^2}{4}}\sqrt{\pi}
Example 2.9.4.
> with(inttrans):
assume(0 < a):
fourier(exp(-a^2 \cdot x^2), x, w)

e^{-\frac{w^2}{4 a{\sim}^2}}\sqrt{\frac{\pi}{a{\sim}^2}}
Example 2.9.5.
> with(inttrans):
assume(0 < a):
fourier(x \cdot exp(-a^2 \cdot x^2), x, w)

-\frac{1}{2}\,\frac{I w\, e^{-\frac{w^2}{4 a{\sim}^2}}\sqrt{\pi}}{a{\sim}^3}
Example 2.9.6.
> with(inttrans):
assume(0 < a):
fourier(x^3 \cdot exp(-a^2 \cdot x^2), x, w)

\frac{1}{8}\,\frac{I w\, e^{-\frac{w^2}{4 a{\sim}^2}}\sqrt{\pi}\,\left(-6 a{\sim}^2 + w^2\right)}{a{\sim}^7}

Example 2.9.7.
> with(inttrans):
F = fourier(x \cdot exp(-5 \cdot x) \cdot Heaviside(x), x, w)

F = \frac{1}{(5 + I w)^2}

Differentiation of the Preimage


Example 2.9.8.
> with(inttrans):
fourier(diff(f (t), t), t, w)

I w\, fourier(f (t), t, w)

Example 2.9.9.
> with(inttrans):
fourier(diff(diff(f (t), t), t), t, w)

-w^2\, fourier(f (t), t, w)

Convolution
Example 2.9.10.
> with(inttrans):
F = fourier(int(g(t) \cdot h(x - t), t = -\infty..\infty), x, w)

F = fourier(g(x), x, w) \cdot fourier(h(x), x, w)
Example 2.9.11.
> with(inttrans):
fourier\left(\frac{d^2}{dt^2} y(t) + y(t) = \sin(t), t, s\right)

-(s - 1)(s + 1)\, fourier(y(t), t, s) = I\pi\,(Dirac(s + 1) - Dirac(s - 1))

2.9.2 Inverse Fourier Transform


The syntax is f = invfourier(F, w, t) and it returns the inverse Fourier
transform f (t) of F (w). The definition used is similar to formula (2.15) in
Theorem 2.3.1 (Inversion Formula) and it is the following:
f (t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F (w)e^{Itw} \, dw.

Example 2.9.12.
> with(inttrans):  
f (t) = invfourier\left(\frac{1}{(3 + I w)^2}, w, t\right)

f (t) = t\,e^{-3t}\, Heaviside(t)
Example 2.9.13.
> with(inttrans):
assume(0 < a):  
f (x) = invfourier\left(e^{-\frac{w^2}{4 a{\sim}^2}}\sqrt{\frac{\pi}{a{\sim}^2}}, w, x\right)

f (x) = e^{-a{\sim}^2 x^2}
Example 2.9.14.
> with(inttrans):
f (x) = invfourier(w\,e^{-3w}\, Heaviside(w), w, x)

f (x) = \frac{1}{2(-3 + I x)^2\,\pi}

2.9.3 Fourier Cosine Transform


The syntax is F = fouriercos(f, t, s) and it returns the Fourier cosine
transform F (s) of the symbolic preimage f (t). The following normalized
form of formula (2.26) is used as its definition:
F (s) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f (t)\cos(st) \, dt.
The Fourier cosine transform is self-inverting (since the normalized forms of
formulas (2.26) and (2.28) are similar).
Example 2.9.15.
> with(inttrans):
 
fouriercos\left(\frac{1}{t^2 + 1}, t, s\right)

\frac{1}{2}\sqrt{2}\sqrt{\pi}\, e^{-s}

2.9.4 Fourier Sine Transform
The syntax is F = fouriersin(f, t, s) and it returns the Fourier sine
transform F (s) of the symbolic preimage f (t). The following normalized
form of formula (2.27) is used as its definition:
F (s) = \sqrt{\frac{2}{\pi}} \int_0^{\infty} f (t)\sin(st) \, dt.
The Fourier sine transform is self-inverting (since the normalized forms of
formulas (2.27) and (2.29) are similar).
Example 2.9.16.
> with(inttrans):
 
fouriersin\left(\frac{t}{t^2 + 1}, t, s\right)

\frac{1}{2}\sqrt{2}\sqrt{\pi}\, e^{-s}

2.9.5 Discrete Transforms


The command with(DiscreteTransforms) allows the use of the com-
mand FourierTransform and InverseFourierTransform. The syntax is
one of the following:
1. FourierTransform(Z [, options]);
2. FourierTransform(X, Y [, options]).
The FourierTransform and InverseFourierTransform commands com-
pute the discrete forward and inverse Fourier transform of the numerical
input data. The input data Z is interpreted as a complex array, while the
inputs X, Y are interpreted as the real and imaginary parts of the data,
respectively.
Options indicate the way in which the Fourier or inverse Fourier trans-
form are performed. Here are some possibilities:
• algorithm = alg. Here alg can be set to mintime (default), minstorage
or DFT. The mintime algorithm is an efficient implementation of the
twiddle factor FFT algorithm from Brigham and the minstorage algorithm
is a variant of it. The DFT algorithm is simply the O(N^2) discrete
Fourier transform;

• padding = num. Here num specifies the maximum amount of zero padding
that can be used to compute the Fourier transform more efficiently;

• inplace = bool. Here bool can have the value true or false and
specifies whether the output overwrites the input. By default this is
false;

• workingstorage = data. Here data must be a complex valued rectangular
Vector or Matrix with between m*n and 2*m*n entries (where m is the
number of transforms being computed and n is the data length of the
transform);

• normalization = symmetric (default), none or full. Three normalizations
are in common use. With symmetric normalization (the default), the data
is multiplied by 1/\sqrt{n} in both transform directions. The other two
cases correspond to performing no normalization on the forward transform
and the 1/n normalization on the inverse transform, and the reverse of
this. Specifying normalization = none on a transform will prevent
normalization, while specifying normalization = full will perform the
1/n normalization.

Efficiency
The most efficient transform can be obtained when the data length(s)
are highly composite, the input data has datatype = complex8 and the
computation is being performed inplace.

The evalf command numerically evaluates expressions involving constants
(e.g., π, e, γ) and mathematical functions (e.g., exp, ln, sin, arctan,
cosh, GAMMA). The number of significant digits can be restricted to n
with the command evalf[n](f ).

Example 2.9.17.
    
> z := Vector(4, k -> evalf[15](3 + exp(I*2*Pi*k/4)), datatype = complex8)

z = \begin{bmatrix} 3. + 1.\,I \\ 2. + 0.\,I \\ 3. - 1.\,I \\ 4. + 0.\,I \end{bmatrix}

with(DiscreteTransforms):
FourierTransform(z, inplace = true)

\begin{bmatrix} 6. + 0.\,I \\ 0. + 2.\,I \\ 0. + 0.\,I \\ 0. + 0.\,I \end{bmatrix}

InverseFourierTransform(z, inplace = true)

\begin{bmatrix} 3. + 1.\,I \\ 2. + 0.\,I \\ 3. - 1.\,I \\ 4. + 0.\,I \end{bmatrix}
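With symmetric normalization and n = 4, the forward transform is the plain DFT sum scaled by 1/√4 = 1/2. The Python sketch below (the function dft_sym is our own illustration of this convention, not a Maple command) reproduces the output of Example 2.9.17:

```python
import cmath

def dft_sym(x):
    # Forward DFT with symmetric normalization: the plain DFT sum
    # scaled by 1/sqrt(N), as in the DiscreteTransforms default.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            / N ** 0.5 for k in range(N)]

F = dft_sym([3 + 1j, 2 + 0j, 3 - 1j, 4 + 0j])
print(F)  # approximately [6, 2j, 0, 0], matching the Maple output
```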
Example 2.9.18.
> X, Y := Vector(5, k -> evalf(cos(k/4)), datatype = float8),
          Vector(5, k -> evalf(sin(k/3)), datatype = float8)

X = \begin{bmatrix} 0.968912421700000 \\ 0.877582561900000 \\ 0.731688868900000 \\ 0.540302305900000 \\ 0.315322362400000 \end{bmatrix}, \qquad
Y = \begin{bmatrix} 0.327194696800000 \\ 0.618369803100000 \\ 0.841470984800000 \\ 0.971937901300000 \\ 0.995407957700000 \end{bmatrix}

X1, Y1 := FourierTransform(X, Y)

X1 = \begin{bmatrix} 1.53564585484536 \\ -0.0567036805932297 \\ 0.133879158532305 \\ 0.221118058877586 \\ 0.332614647503121 \end{bmatrix}, \qquad
Y1 = \begin{bmatrix} 1.67901037959404 \\ -0.576205462152703 \\ -0.253334690322848 \\ -0.120540144863056 \\ 0.00269950166679971 \end{bmatrix}

InverseFourierTransform(X1, Y1, inplace = true)

\begin{bmatrix} 0.968912421700000 \\ 0.877582561900000 \\ 0.731688868900000 \\ 0.540302305900000 \\ 0.315322362400000 \end{bmatrix}, \qquad
\begin{bmatrix} 0.327194696800000 \\ 0.618369803100000 \\ 0.841470984800000 \\ 0.971937901300000 \\ 0.995407957700000 \end{bmatrix}

Chapter 3

Laplace Transform

The Laplace transform was pioneered by the French scholar Pierre-Simon


Laplace (1749–1827), who made important contributions to mathematics,
engineering, physics, astronomy and philosophy. For example, he played
an essential role in the development of astronomy (in a five-volume treatise
on celestial mechanics), statistics (the so-called Bayesian interpretation of
probability) or mathematics (Laplace’s equation, the Laplacian differential
operator).
It is the most used integral transform thanks to its important applications
in mathematics, physics, engineering, probability theory, and systems and
control theory. It is related to the Fourier transform, but whereas the
Fourier transform resolves a signal (a function) into its modes of vibration,
the Laplace transform resolves a signal into its moments.
In mathematics, the Laplace transform is used for solving differential,
integral or integro-differential equations and systems of such equations. Us-
ing the Laplace transform, these equations or systems are transformed into
elementary algebraic equations or systems, which can be easily solved. In
physics and engineering, the Laplace transform is used for mathematical
models, e.g. for the analysis of linear time-invariant (LTI) systems such
as electrical circuits, harmonic oscillators, optical devices, and mechanical
systems. In this analysis, the Laplace transform is interpreted as a transfor-
mation from the time-domain, in which inputs and outputs are functions of
time, to the frequency-domain, where the Laplace transforms of these inputs
and outputs are functions of complex angular frequency. In the frequency
domain, the input-output map of systems is simply described by a matrix of
rational functions, called the transfer matrix (or function). This approach

simplifies the process of analyzing the behavior of the system, or allows the
synthesis of a new system of a lesser dimension.
Various facets of the Laplace transform are presented in [8], [24, Chap-
ter 9], [28] and [33].

3.1 Definition
Definition 3.1.1. Consider a function f : [0, ∞) → R(C). The function
F : C → C defined by
F (s) = L[f (t)](s) = \int_0^{\infty} f (t)e^{-st} \, dt        (3.1)

is called the Laplace transform (or the image) of the function f (t) or the
signal in the frequency domain, provided the Riemann improper integral
exists. At the same time f is called the preimage of F or the signal in the
time domain.
The operator L which associates with the function f (t) its Laplace
transform F (s) is called the Laplace transform. The improper integral
\int_0^{\infty} f (t)e^{-st} \, dt is called the Laplace integral.
Let us consider the functions f (t) which are absolutely integrable on
[0, ∞), i.e. \int_0^{\infty} |f (t)| \, dt < \infty, for which F (a) exists for
some a ∈ [0, ∞). Then one can show that F (s) is an analytic function of
s ∈ C in the half plane Re(s) > a (see [7]). In the sequel we will use a
large class of functions
for which the Laplace transform exists. For this we need to introduce the
following definitions.
Definition 3.1.2. A function f has a jump discontinuity at a point a if both
of the one-sided limits \lim_{t \to a, \, t < a} f (t) = f (a-) and
\lim_{t \to a, \, t > a} f (t) = f (a+) exist, are finite numbers and
f (a-) \neq f (a+).
A function f is piecewise continuous on [0, ∞) if it is continuous on any
finite subinterval of [0, ∞) except at finitely many jump discontinuities.
Definition 3.1.3. One says that a function f has exponential order α if
there exist constants M > 0 and α ∈ R such that, for some t_0 ≥ 0,

|f (t)| \le M e^{αt}, \quad \forall t \ge t_0.

The smallest exponential order α is called the index of growth of f and it is
denoted by σ or σf .
Definition 3.1.4. A function f : R → R(C) is called an original function if
f satisfies the following conditions:
i) f (t) = 0, ∀t < 0;

ii) f (t) is piecewise continuous on [0, ∞);

iii) f (t) has an index of growth σ.


Due to condition i), Definition 3.1.1 can be extended to such functions
defined on R.
Example 3.1.5. The function f : R → C,
f (t) = \begin{cases} e^{at}, & t \ge 0 \\ 0, & t < 0, \end{cases} \qquad a ∈ C,
is an original function.
Solution. Condition i) is obviously satisfied from the definition of the
function f .
Since \lim_{t \to 0, \, t > 0} f (t) = \lim_{t \to 0, \, t > 0} e^{at} = 1 and
\lim_{t \to 0, \, t < 0} f (t) = 0, it follows that 0 is a first order
discontinuity point of f , so f is piecewise continuous on [0, ∞). Thus,
condition ii) is also satisfied.
For the third condition, let us consider a = α + iβ, α, β ∈ R. If t ≥ 0,
then

|f (t)| = |e^{at}| = |e^{αt + iβt}| = e^{αt} \cdot |e^{iβt}| = e^{αt},

since |e^{iβt}| = |\cos(βt) + i\sin(βt)| = \sqrt{\cos^2(βt) + \sin^2(βt)} = 1.
Hence, one can consider M = 1 and the growth index σ = α, if α ≥ 0, and
σ = 0, if α < 0. Thus, condition iii) is satisfied as well.
Theorem 3.1.6 (Existence). Let f : R → R(C) be an original function
and consider σ its exponential order. Then the function F (s) given by (3.1)
exists for any s ∈ C with Re(s) > σ and the improper integral is absolutely
convergent.
Proof. By Definition 3.1.4 iii), |f (t)| ≤ M1 eσt , t ≥ t0 for some M1 > 0. Since
f is piecewise continuous on [0, t0 ], it is bounded on [0, t0 ] by a number M2
(which is the largest bound from the bounds of f on each of the continuity
subintervals). Hence, |f (t)| ≤ M2 for t ∈ [0, t0 ].

If σ ≥ 0, then |f (t)| ≤ M2 ≤ M2 eσt for t ∈ [0, t0 ]. If σ < 0, then
σ(t − t0 ) ≥ 0 for t ∈ [0, t0 ]. Hence, |f (t)| ≤ M2 ≤ M2 eσ(t−t0 ) = M2 e−σt0 eσt .
Choosing M3 = M2 e−σt0 , it follows that |f (t)| ≤ M3 eσt for t ∈ [0, t0 ]. If one
denotes M = max(M1 , M2 ) in the case σ ≥ 0 and M = max(M1 , M3 ) in the
case σ < 0, one gets
|f (t)| ≤ M eσt ,
for t ∈ [0, ∞).
Now, for any b > 0,
\left| \int_0^b f (t)e^{-st} \, dt \right| \le \int_0^b |f (t)e^{-st}| \, dt
and, since |e^z| = e^{Re(z)}, z ∈ C, one obtains

\int_0^b |f (t)e^{-st}| \, dt \le M \int_0^b e^{σt} \cdot e^{-Re(s)t} \, dt
  = M \int_0^b e^{-(Re(s) - σ)t} \, dt
  = \frac{M}{-(Re(s) - σ)} \cdot e^{-(Re(s) - σ)t} \Big|_0^b
  = \frac{M}{Re(s) - σ} \cdot \left(1 - e^{-(Re(s) - σ)b}\right).

Since Re(s) - σ > 0, it follows that \lim_{b \to \infty} e^{-(Re(s) - σ)b} = 0.
Hence,

\int_0^{\infty} |f (t)e^{-st}| \, dt
  = \lim_{b \to \infty} \int_0^b |f (t)e^{-st}| \, dt
  \le \frac{M}{Re(s) - σ} < \infty.

This proves that the improper integral in (3.1) is absolutely convergent and
hence, convergent, i.e. the Laplace transform F (s) exists for any s ∈ C with
Re(s) > σ.

Corollary 3.1.7. If f (t) is an original function, then F (s) → 0, as
Re(s) → ∞.

Proof. From the proof of Theorem 3.1.6 (Existence),

|F (s)| \le \int_0^{\infty} |f (t)e^{-st}| \, dt \le \frac{M}{Re(s) - σ}
  \to 0, \quad \text{as } Re(s) \to \infty.

Remark 3.1.8. The converse is not true. There are functions which have
no index of growth, but their Laplace transform exists. For example, the
function f (t) = 2te^{t^2} \cos\left(e^{t^2}\right) (see [28, Example 1.14]).
On the other hand, the function f (t) = \frac{1}{\sqrt{t}} has a non-finite
right-hand limit at 0, since \lim_{t \to 0, \, t > 0} \frac{1}{\sqrt{t}} = \infty;
hence 0 is not a jump discontinuity of f and f is not piecewise continuous,
but it still has the Laplace transform F (s) = \sqrt{\frac{\pi}{s}} (see
Example 3.2.33 b)).
Now we will compute the Laplace transform of some common functions
using the definition.
Example 3.1.9 (Heaviside’s Step Function). Consider the function

h(t) = \begin{cases} 0, & t < 0 \\ \frac{1}{2}, & t = 0 \\ 1, & t > 0. \end{cases}

Then h(t) is an original function with M = 1 and σ = 0. We obtain

Figure 3.1: Heaviside’s Step Function

H(s) = L[h(t)](s) = \int_0^{\infty} h(t)e^{-st} \, dt
     = \int_0^{\infty} e^{-st} \, dt
     = \frac{e^{-st}}{-s} \Big|_0^{\infty}
     = \lim_{t \to \infty} \frac{e^{-st}}{-s} + \frac{1}{s}.

From Theorem 3.1.6 (Existence), s = s_1 + is_2 ∈ C with Re(s) = s_1 > σ = 0.
Therefore,

\lim_{t \to \infty} |e^{-st}| = \lim_{t \to \infty} e^{-s_1 t} = 0,

so \lim_{t \to \infty} \frac{e^{-st}}{-s} = 0. Hence,
H(s) = L[h(t)](s) = \frac{1}{s} for every s ∈ C with Re(s) > 0.

Example 3.1.10 (Exponential Function). Consider the function

f (t) = \begin{cases} e^{at}, & t \ge 0 \\ 0, & t < 0, \end{cases} \qquad a ∈ C.

We proved in Example 3.1.5 that this is an original function with M = 1 and
σ = Re(a), if Re(a) > 0, respectively σ = 0, if Re(a) ≤ 0. We obtain that

F (s) = L[f (t)](s) = \int_0^{\infty} f (t)e^{-st} \, dt
      = \int_0^{\infty} e^{at - st} \, dt
      = \frac{e^{at - st}}{a - s} \Big|_0^{\infty}
      = \lim_{t \to \infty} \frac{e^{at - st}}{a - s} - \frac{1}{a - s}.

We analyze the following two possible situations:

1. if Re(a) > 0, from Theorem 3.1.6 (Existence), s = s_1 + is_2 ∈ C with
   Re(s) = s_1 > σ = Re(a), so Re(a) - s_1 < 0. Therefore,

   \lim_{t \to \infty} |e^{at - st}| = \lim_{t \to \infty} e^{(Re(a) - s_1)t} = 0,

   so \lim_{t \to \infty} \frac{e^{at - st}}{a - s} = 0;

2. if Re(a) ≤ 0, from Theorem 3.1.6 (Existence), s = s_1 + is_2 ∈ C with
   Re(s) = s_1 > σ = 0, so Re(a) - s_1 < 0. Therefore,

   \lim_{t \to \infty} |e^{at - st}| = \lim_{t \to \infty} e^{(Re(a) - s_1)t} = 0,

   so \lim_{t \to \infty} \frac{e^{at - st}}{a - s} = 0.

It follows that F (s) = L[f (t)](s) = \frac{1}{s - a}. We have now a first
important Laplace transform, namely

L[e^{at}](s) = \frac{1}{s - a}, \quad a ∈ C, \ t \ge 0.        (3.2)
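Formula (3.2) can be cross-checked numerically by truncating the Laplace integral at a large T and applying a simple quadrature. In the Python sketch below (our own crude trapezoidal scheme; the values a = 2 and s = 5 are assumed for illustration), the result agrees with 1/(s − a) = 1/3:

```python
import math

def laplace_num(f, s, T=60.0, n=200000):
    # Crude trapezoidal approximation of the Laplace integral on [0, T];
    # the tail beyond T is negligible when f(t)*exp(-s*t) decays fast.
    h = T / n
    total = 0.5 * (f(0) + f(T) * math.exp(-s * T))
    total += sum(f(k * h) * math.exp(-s * k * h) for k in range(1, n))
    return total * h

a, s = 2.0, 5.0
approx = laplace_num(lambda t: math.exp(a * t), s)
print(approx, 1 / (s - a))  # both approximately 0.3333
```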

Example 3.1.11 (Power Function). Consider the function

f (t) = \begin{cases} t^n, & t \ge 0 \\ 0, & t < 0, \end{cases} \qquad n ∈ N.

The power function is continuous for n ∈ N^* and has a jump discontinuity at
0, if n = 0. The Taylor expansion of the function e^t around 0 is
e^t = \sum_{n=0}^{\infty} \frac{t^n}{n!} for every t ∈ R. It follows that for
t ≥ 0, e^t > \frac{t^n}{n!} or, equivalently, t^n < n! \cdot e^t. So, for
t ≥ 0, one has |t^n| = t^n < n! \cdot e^t and the power function is an
original function with M = n! and σ = 1.
Let us now compute the Laplace transform of this function using integration
by parts. We obtain that

F_n(s) = \int_0^{\infty} f (t)e^{-st} \, dt
       = \int_0^{\infty} t^n e^{-st} \, dt
       = \int_0^{\infty} t^n \cdot \left(\frac{e^{-st}}{-s}\right)'_t \, dt
       = t^n \cdot \frac{e^{-st}}{-s} \Big|_0^{\infty}
         - \int_0^{\infty} n \cdot t^{n-1} \cdot \frac{e^{-st}}{-s} \, dt
       = \lim_{t \to \infty} t^n \cdot \frac{e^{-st}}{-s} - 0
         + \frac{n}{s} \int_0^{\infty} t^{n-1} e^{-st} \, dt
       = \lim_{t \to \infty} t^n \cdot \frac{e^{-st}}{-s}
         + \frac{n}{s} \cdot F_{n-1}(s).

From Theorem 3.1.6 (Existence), s = s_1 + is_2 ∈ C with Re(s) = s_1 > σ = 1.
On the other hand, for t ≥ 0, we get that

\lim_{t \to \infty} |t^n e^{-st}| = \lim_{t \to \infty} t^n \cdot e^{-s_1 t}
  \le \lim_{t \to \infty} n! \, e^t \cdot e^{-s_1 t}
  = \lim_{t \to \infty} n! \cdot e^{(1 - s_1)t} = 0.

Hence, one obtains the recurrence F_n(s) = \frac{n}{s} \cdot F_{n-1}(s),
n ∈ N^*. It follows that

F_n(s) = \frac{n}{s} \cdot F_{n-1}(s)
       = \frac{n}{s} \cdot \frac{n - 1}{s} \cdot F_{n-2}(s) = \cdots
       = \frac{n!}{s^n} F_0(s).

But F_0(s) = \int_0^{\infty} t^0 e^{-st} \, dt
= \int_0^{\infty} e^{-st} \, dt = \frac{1}{s} (see Example 3.1.9 (Heaviside’s
Step Function)). Therefore,

F_n(s) = \frac{n!}{s^n} \cdot \frac{1}{s} = \frac{n!}{s^{n+1}},

for every n ∈ N and s ∈ C, with Re(s) > 1. We have now a second important
Laplace transform, namely

L[t^n](s) = \frac{n!}{s^{n+1}}, \quad n ∈ N, \ t \ge 0.        (3.3)

More generally,

L[t^α](s) = \frac{Γ(α + 1)}{s^{α+1}}, \quad α > -1, \ t \ge 0,

where Γ is Euler’s Gamma function, i.e.
Γ(α) = \int_0^{\infty} x^{α-1} e^{-x} \, dx, α > 0.
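The same kind of numerical cross-check works for formula (3.3). With the assumed values n = 3 and s = 2, the exact transform is 3!/2^4 = 0.375, and a Riemann sum over a truncated interval reproduces it (Python, standard library only):

```python
import math

# Numerical check of L[t^n](s) = n!/s^(n+1) for the assumed values n = 3, s = 2.
n_pow, s = 3, 2.0
T, N = 60.0, 200000
h = T / N
# The integrand vanishes at t = 0 and is negligible at t = T, so a plain
# Riemann sum over the interior nodes suffices here.
integral = h * sum((k * h) ** n_pow * math.exp(-s * k * h) for k in range(1, N))
print(integral)  # approximately 0.375
```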

Now let us consider three operations with original functions.


Theorem 3.1.12. If f, g : R → C are original functions and c ∈ C, then
the functions c · f , f + g and f · g are original functions.
Proof. Obviously, from Definition 3.1.4, it follows that the three functions
are equal to 0 for negative time t and are piecewise continuous. We only need
to prove property iii).
Since f and g are original functions, there exist σ_1, σ_2 \ge 0 and
M_1, M_2 > 0 such that |f (t)| \le M_1 e^{σ_1 t}, |g(t)| \le M_2 e^{σ_2 t},
\forall t \ge 0. This implies that

|cf (t)| \le |c| M_1 e^{σ_1 t}, \quad \forall t \ge 0;

hence, cf (t) has the same index of growth σ_1.
One also has |(f + g)(t)| \le |f (t)| + |g(t)| \le M_1 e^{σ_1 t} + M_2 e^{σ_2 t},
\forall t \ge 0. By denoting σ_0 = \max(σ_1, σ_2) and M = M_1 + M_2, one
obtains

|(f + g)(t)| \le M_1 e^{σ_0 t} + M_2 e^{σ_0 t} = (M_1 + M_2) e^{σ_0 t}
             = M e^{σ_0 t}, \quad \forall t \ge 0.

Similarly, |(f g)(t)| = |f (t)| \cdot |g(t)| \le M_1 \cdot M_2 \cdot
e^{(σ_1 + σ_2)t}. If we denote σ = σ_1 + σ_2 and M = M_1 \cdot M_2, we get
that

|(f g)(t)| \le M e^{σt}, \quad \forall t \ge 0.

Example 3.1.13. The function
f_1(t) = \begin{cases} 2e^{3t}, & t \ge 0 \\ 0, & t < 0 \end{cases}
is an original function, being obtained from the multiplication of the
constant c = 2 with the exponential function for a = 3 (see Example 3.1.10
(Exponential Function)). Hence, for f_1, one has M = 2 and σ = 3.

Example 3.1.14. The function
f_2(t) = \begin{cases} e^{-3t} + t^2, & t \ge 0 \\ 0, & t < 0 \end{cases}
is an original function since it is the sum of an exponential function with
a = -3 (M_1 = 1 and σ_1 = 0; see Example 3.1.10 (Exponential Function)) and
a power function for n = 2 (M_2 = 2! = 2 and σ_2 = 1; see Example 3.1.11
(Power Function)). Hence, for the function f_2, one has
σ = \max(σ_1, σ_2) = \max(0, 1) = 1 and M = M_1 + M_2 = 1 + 2 = 3.

Example 3.1.15. The function
f_3(t) = \begin{cases} t^7 e^{(1+i)t}, & t \ge 0 \\ 0, & t < 0 \end{cases}
is obtained by the multiplication of two original functions: an exponential
function for a = 1 + i (M_1 = 1 and σ_1 = Re(1 + i) = 1; see Example 3.1.10
(Exponential Function)) and a power function for n = 7 (M_2 = 7! and σ_2 = 1;
see Example 3.1.11 (Power Function)). Therefore, the original function f_3
has the growth index σ = σ_1 + σ_2 = 2 and M = M_1 \cdot M_2 = 7!.

3.2 Properties of the Laplace Transform


The Laplace transform has a number of properties that make it useful in
analyzing linear dynamical systems. The most significant advantage is that
differentiation and integration become (mainly) multiplication and division
by the s variable, respectively, while convolution becomes simply the product
of the transforms. These properties change integral equations and differential
equations to algebraic equations, which are much easier to solve. Once solved,
the inverse Laplace transform reverts back to the time domain.
In the sequel we shall take a closer look at these properties and we will
provide examples of how they can be used to find the Laplace transforms of
functions without computing the integrals. In the theorems below we assume
that f (t) and g(t) are original functions with corresponding indices of growth
σf and σg , respectively.

Theorem 3.2.1 (Linearity). If f (t) and g(t) are original functions and a, b ∈
C are arbitrary constants, then the following equality holds for s ∈ C and
Re(s) > max(σf , σg ):

L[af (t) + bg(t)](s) = aL[f (t)](s) + bL[g(t)](s). (3.4)

Proof. Since the integral is linear,

L[af (t) + bg(t)](s) = \int_0^{\infty} (af (t) + bg(t)) e^{-st} \, dt
  = a \int_0^{\infty} f (t)e^{-st} \, dt + b \int_0^{\infty} g(t)e^{-st} \, dt
  = a L[f (t)](s) + b L[g(t)](s).

Example 3.2.2 (Sine Function). For ω ∈ R and t ≥ 0,

L[\sin(ωt)](s) = L\left[\frac{e^{iωt} - e^{-iωt}}{2i}\right](s)
  \overset{(3.4)}{=} \frac{1}{2i}\left(L[e^{iωt}](s) - L[e^{-iωt}](s)\right)
  \overset{(3.2)}{=} \frac{1}{2i}\left(\frac{1}{s - iω} - \frac{1}{s + iω}\right)
  = \frac{1}{2i} \cdot \frac{2iω}{s^2 + ω^2} = \frac{ω}{s^2 + ω^2}.

In conclusion,

L[\sin(ωt)](s) = \frac{ω}{s^2 + ω^2}.        (3.5)

Example 3.2.3 (Cosine Function). For ω ∈ R and t ≥ 0,

L[\cos(ωt)](s) = L\left[\frac{e^{iωt} + e^{-iωt}}{2}\right](s)
  \overset{(3.4)}{=} \frac{1}{2}\left(L[e^{iωt}](s) + L[e^{-iωt}](s)\right)
  \overset{(3.2)}{=} \frac{1}{2}\left(\frac{1}{s - iω} + \frac{1}{s + iω}\right)
  = \frac{1}{2} \cdot \frac{2s}{s^2 + ω^2} = \frac{s}{s^2 + ω^2}.

In conclusion,

L[\cos(ωt)](s) = \frac{s}{s^2 + ω^2}.        (3.6)
Example 3.2.4 (Hyperbolic Cosine and Sine Functions). For ω ∈ R and t ≥ 0,
the hyperbolic cosine function \cosh(ωt) = \frac{e^{ωt} + e^{-ωt}}{2}
describes the curve of a hanging cable between two supports. It follows that

L[\cosh(ωt)](s) = L\left[\frac{e^{ωt} + e^{-ωt}}{2}\right](s)
  \overset{(3.4)}{=} \frac{1}{2}\left(L[e^{ωt}](s) + L[e^{-ωt}](s)\right)
  \overset{(3.2)}{=} \frac{1}{2}\left(\frac{1}{s - ω} + \frac{1}{s + ω}\right)
  = \frac{1}{2} \cdot \frac{2s}{s^2 - ω^2} = \frac{s}{s^2 - ω^2}.

In conclusion,

L[\cosh(ωt)](s) = \frac{s}{s^2 - ω^2}.

Similarly,

L[\sinh(ωt)](s) = L\left[\frac{e^{ωt} - e^{-ωt}}{2}\right](s)
  = \frac{ω}{s^2 - ω^2}.
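Formulas (3.5) and (3.6) can be spot-checked in the same numerical fashion. In the Python sketch below, ω = 3 and s = 2 are assumed values, so the exact transforms are 3/13 and 2/13:

```python
import math

w, s = 3.0, 2.0
T, N = 80.0, 400000
h = T / N
# Trapezoidal sums for the Laplace integrals of sin(wt) and cos(wt);
# both integrands decay like e^(-2t), so truncating at T = 80 is safe.
# For sin, the integrand is 0 at t = 0, so no endpoint term is needed;
# for cos, the t = 0 endpoint contributes 0.5 * cos(0) = 0.5.
L_sin = h * sum(math.sin(w * k * h) * math.exp(-s * k * h) for k in range(1, N))
L_cos = h * (0.5 + sum(math.cos(w * k * h) * math.exp(-s * k * h)
                       for k in range(1, N)))
print(L_sin, w / (s**2 + w**2))  # both approximately 3/13
print(L_cos, s / (s**2 + w**2))  # both approximately 2/13
```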
Theorem 3.2.5 (Similarity or Change of Time Scale). If a > 0, then the
following equality holds for Re(s) > a \cdot σ_f:

L[f (at)](s) = \frac{1}{a} L[f (t)]\left(\frac{s}{a}\right).        (3.7)

Proof. By the change of variable u = at ⇒ du = a \, dt we obtain the
following:

L[f (at)](s) = \int_0^{\infty} f (at)e^{-st} \, dt
  = \int_0^{\infty} f (u)e^{-s \cdot \frac{u}{a}} \cdot \frac{1}{a} \, du
  = \frac{1}{a} \int_0^{\infty} f (u)e^{-\frac{s}{a} u} \, du
  = \frac{1}{a} L[f (t)]\left(\frac{s}{a}\right).

Example 3.2.6. Let us compute the Laplace transform of the function
g(t) = 2t + e^{2t}. We use formula (3.7) for a = 2 and f (t) = t + e^t,
hence g(t) = f (2t). It follows that

L[f (2t)](s) \overset{(3.7)}{=} \frac{1}{2} L[f (t)]\left(\frac{s}{2}\right)
  = \frac{1}{2} L[t + e^t]\left(\frac{s}{2}\right)
  \overset{(3.4)}{=} \frac{1}{2}\left(L[t]\left(\frac{s}{2}\right)
      + L[e^t]\left(\frac{s}{2}\right)\right)
  \overset{(3.2),(3.3)}{=} \frac{1}{2}\left(\frac{1}{(s/2)^2}
      + \frac{1}{s/2 - 1}\right)
  = \frac{1}{2}\left(\frac{4}{s^2} + \frac{2}{s - 2}\right)
  = \frac{2}{s^2} + \frac{1}{s - 2}.
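A numerical spot check of this example at the assumed point s = 3, where the formula gives 2/9 + 1 ≈ 1.2222 (Python, standard library only):

```python
import math

s = 3.0
T, N = 80.0, 400000
h = T / N
g = lambda t: (2 * t + math.exp(2 * t)) * math.exp(-s * t)
# Trapezoidal rule; the integrand decays like e^(-t), so T = 80 suffices.
integral = h * (0.5 * g(0) + sum(g(k * h) for k in range(1, N)) + 0.5 * g(T))
print(integral, 2 / s**2 + 1 / (s - 2))  # both approximately 1.2222
```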

Theorem 3.2.7 (Time Delay). If a > 0, then

L[f (t − a)](s) = e−sa · L[f (t)](s), ∀s ∈ C, Re(s) > σf . (3.8)

Proof. Knowing that f (t - a) = 0 for t < a, by the substitution u = t - a,
it follows that

L[f (t - a)](s) = \int_0^{\infty} f (t - a)e^{-st} \, dt
  = \int_a^{\infty} f (t - a)e^{-st} \, dt
  = \int_0^{\infty} f (u)e^{-s(u + a)} \, du
  = e^{-sa} \int_0^{\infty} f (u)e^{-su} \, du
  = e^{-sa} \cdot L[f (t)](s).

Example 3.2.8. Now we show how we can explicitly use Theorem 3.2.7
(Time Delay) and formula (3.8).

1. L[\sin(t - 2)](s) \overset{(3.8)}{=} e^{-2s} \cdot L[\sin t](s)
   \overset{(3.5)}{=} e^{-2s} \cdot \frac{1}{s^2 + 1};

2. L[\cos(2t - 2)](s) = L[\cos(2(t - 1))](s)
   \overset{(3.7)}{=} \frac{1}{2} \cdot L[\cos(t - 1)]\left(\frac{s}{2}\right).
   Using now the time-delay formula (3.8) for a = 1, f (t) = \cos t and
   s → \frac{s}{2}, one obtains

   L[\cos(2t - 2)](s) = \frac{1}{2} \cdot e^{-\frac{s}{2}}
       \cdot L[\cos t]\left(\frac{s}{2}\right)
   \overset{(3.6)}{=} \frac{1}{2} \cdot e^{-\frac{s}{2}}
       \cdot \frac{s/2}{(s/2)^2 + 1}
   = e^{-\frac{s}{2}} \cdot \frac{s}{s^2 + 4}.
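Item 1 can be verified numerically as well, remembering that the delayed original sin(t − 2) is taken to be 0 for t < 2. With the assumed value s = 1, the exact answer is e^{-2}/2 ≈ 0.0677 (Python sketch):

```python
import math

s = 1.0
T, N = 80.0, 400000
h = T / N
# The shifted original vanishes for t < 2, so the integral starts at t = 2.
integral = h * sum(math.sin(k * h - 2) * math.exp(-s * k * h)
                   for k in range(1, N) if k * h >= 2)
print(integral, math.exp(-2 * s) / (s**2 + 1))  # both approximately 0.0677
```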

Theorem 3.2.9 (Second Time Delay). If a > 0, then ∀s ∈ C with Re(s) > σ_f
the following holds:

L[f (t + a)](s) = e^{sa}\left(L[f (t)](s)
    - \int_0^a f (t)e^{-st} \, dt\right).        (3.9)

Proof. By Definition 3.1.1 (Laplace Transform) and using the substitution
u = t + a, one obtains

L[f (t + a)](s) = \int_0^{\infty} f (t + a)e^{-st} \, dt
  = \int_a^{\infty} f (u)e^{-s(u - a)} \, du
  = e^{sa} \int_a^{\infty} f (u)e^{-su} \, du
  = e^{sa}\left(\int_0^{\infty} f (u)e^{-su} \, du
      - \int_0^a f (u)e^{-su} \, du\right)
  = e^{sa}\left(L[f (t)](s) - \int_0^a f (t)e^{-st} \, dt\right).

Example 3.2.10. Now we use Theorem 3.2.9 (Second Time Delay) and
formula (3.9).

L[e^{t+3}](s) \overset{(3.9)}{=} e^{3s}\left(L[e^t](s)
    - \int_0^3 e^t \cdot e^{-st} \, dt\right)
  \overset{(3.2)}{=} e^{3s}\left(\frac{1}{s - 1}
    - \int_0^3 e^{t - st} \, dt\right)
  = e^{3s}\left(\frac{1}{s - 1} - \frac{e^{t - st}}{1 - s}\Big|_0^3\right)
  = e^{3s}\left(\frac{1}{s - 1} - \frac{e^{3 - 3s}}{1 - s}
    + \frac{1}{1 - s}\right).

Notice that actually the function f (t) = e^{t+3} is identically 0 for
t + 3 < 0.
Theorem 3.2.11 (Translation or Frequency Shifting). If a ∈ C, then ∀s ∈ C
with Re(s) > σ_f + Re(a) the following holds:

L[e^{at} \cdot f (t)](s) = L[f (t)](s - a).        (3.10)

Proof. We make the following computations:

L[e^{at} \cdot f (t)](s) = \int_0^{\infty} e^{at} \cdot f (t) \cdot e^{-st} \, dt
  = \int_0^{\infty} f (t) \cdot e^{-(s - a)t} \, dt
  = L[f (t)](s - a).

Example 3.2.12. For the following functions we employ Theorem 3.2.11
(Translation or Frequency Shifting) and formula (3.10).

1. L[e^t \cdot t](s) \overset{(3.10)}{=} L[t](s - 1)
   \overset{(3.3)}{=} \frac{1}{s^2}\Big|_{s \to s - 1} = \frac{1}{(s - 1)^2};

2. L[e^{-t}(\cos t + \sinh t)](s) \overset{(3.10)}{=}
   L[\cos t + \sinh t](s + 1). Using Theorem 3.2.1 (Linearity) and formula
   (3.4), the last Laplace transform is equal to
   L[\cos t](s + 1) + L[\sinh t](s + 1), which in turn gives us
   \frac{s + 1}{(s + 1)^2 + 1} + \frac{1}{(s + 1)^2 - 1} (see Examples 3.2.3
   (Cosine Function) and 3.2.4 (Hyperbolic Sine Function)). In conclusion,

   L[e^{-t}(\cos t + \sinh t)](s) = \frac{s + 1}{(s + 1)^2 + 1}
       + \frac{1}{(s + 1)^2 - 1}.
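Item 1 of this example can be checked numerically at the assumed point s = 3, where 1/(s − 1)^2 = 0.25 (Python, standard library only):

```python
import math

s = 3.0
T, N = 80.0, 400000
h = T / N
# Riemann sum for the Laplace integral of t*e^t; the integrand here is
# t*e^(-2t), which vanishes at t = 0 and is negligible at t = T.
integral = h * sum((k * h) * math.exp(k * h) * math.exp(-s * k * h)
                   for k in range(1, N))
print(integral, 1 / (s - 1) ** 2)  # both approximately 0.25
```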

Theorem 3.2.13 (Differentiation of the Original). Let f (t) be a
differentiable original function with derivative f'(t). Then

L[f'(t)](s) = s \cdot L[f (t)](s) - f (0^+),        (3.11)

where f (0^+) = \lim_{t \to 0, \, t > 0} f (t) (the right-hand limit of f at
t = 0).

Proof. If we integrate by parts, we obtain

L[f'(t)](s) = \int_0^{\infty} f'(t)e^{-st} \, dt
  = \lim_{t \to \infty} f (t)e^{-st} - \lim_{t \to 0, \, t > 0} f (t)e^{-st}
    + s \int_0^{\infty} f (t)e^{-st} \, dt
  = s \cdot L[f (t)](s) - f (0^+).

This is due to the fact that

|f (t)e^{-st}| = |f (t)| \cdot |e^{-st}| \le M e^{σ_f t} \cdot e^{-Re(s)t}
  = M e^{(σ_f - Re(s))t} \to 0,

when t → ∞, since Re(s) > σ_f.

Theorem 3.2.14 (General Differentiation of the Original). If the function f
and its derivatives f', f'', \ldots, f^{(n)} (n ∈ N^*) are original
functions, then

L[f^{(n)}(t)](s) = s^n L[f (t)](s) - s^{n-1} f (0^+) - s^{n-2} f'(0^+)
    - \cdots - f^{(n-1)}(0^+).        (3.12)

Proof. We will prove the above statement by induction. If n = 1, then
formula (3.12) is true in virtue of Theorem 3.2.13 (Differentiation of the
Original).
Assume now that formula (3.12) is true for n - 1, i.e.

L[f^{(n-1)}(t)](s) = s^{n-1} L[f (t)](s) - s^{n-2} f (0^+) - \cdots
    - f^{(n-2)}(0^+).        (*)

Then

L[f^{(n)}(t)](s) = L[(f^{(n-1)})'(t)](s)
  \overset{(3.11)}{=} s L[f^{(n-1)}(t)](s) - f^{(n-1)}(0^+)
  \overset{(*)}{=} s\left(s^{n-1} L[f (t)](s) - s^{n-2} f (0^+) - \cdots
      - f^{(n-2)}(0^+)\right) - f^{(n-1)}(0^+)
  = s^n L[f (t)](s) - s^{n-1} f (0^+) - \cdots - s f^{(n-2)}(0^+)
      - f^{(n-1)}(0^+);

hence, formula (3.12) is true for any n ≥ 1.

Example 3.2.15. Now we provide examples on how to use Theorem 3.2.13
(Differentiation of the Original) and formula (3.11) or their
generalizations, namely Theorem 3.2.14 (General Differentiation of the
Original) and formula (3.12).

1. L[e^t(\cos t - \sin t)](s) = L[(e^t \cos t)'](s)
   \overset{(3.11)}{=} s L[e^t \cos t](s) - \lim_{t \to 0, \, t > 0} e^t \cos t
   = s L[e^t \cos t](s) - 1
   \overset{(3.10)}{=} s \cdot L[\cos t](s - 1) - 1
   = \frac{s(s - 1)}{(s - 1)^2 + 1} - 1 (thanks to (3.6));

2. L[(\sinh(2t))''](s) \overset{(3.12)}{=} s^2 L[\sinh(2t)](s)
   - s \lim_{t \to 0, \, t > 0} \sinh(2t)
   - \lim_{t \to 0, \, t > 0} (\sinh(2t))'
   = s^2 L[\sinh(2t)](s) - s \cdot 0 - \lim_{t \to 0, \, t > 0} 2\cosh(2t)
   = \frac{2s^2}{s^2 - 4} - 2 (due to Example 3.2.4).
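Item 2 above can be verified numerically: (sinh 2t)'' = 4 sinh 2t, and at the assumed point s = 3 the formula gives 2·9/5 − 2 = 1.6 (Python sketch):

```python
import math

s = 3.0
T, N = 80.0, 400000
h = T / N
# Laplace integral of (sinh 2t)'' = 4*sinh(2t); the product with e^(-3t)
# decays like e^(-t), so truncating at T = 80 is harmless.
integral = h * sum(4 * math.sinh(2 * k * h) * math.exp(-s * k * h)
                   for k in range(1, N))
print(integral, 2 * s**2 / (s**2 - 4) - 2)  # both approximately 1.6
```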
In the following three theorems we will use the formula for the derivative
of an integral with respect to a parameter p:

I'(p) = \left(\int_{a(p)}^{b(p)} f (p, t) \, dt\right)'
      = \int_{a(p)}^{b(p)} \frac{\partial f}{\partial p}(p, t) \, dt
        + f (p, b(p)) \cdot b'(p)
        - f (p, a(p)) \cdot a'(p).        (3.13)

We denote the three terms of this formula by A, B and C, respectively.

Theorem 3.2.16 (Differentiation of the Image). For Re(s) > σ_f,

F'(s) = L[-t f (t)](s).        (3.14)

Proof. We use differentiation of an improper integral with respect to the
parameter s. Since a(s) = 0 and b(s) = ∞, hence a' = b' = 0, formula (3.13)
is reduced to the A term. For the proof of this formula for a complex
variable s, one should consult [28, Theorem 3.3].
We obtain

F'(s) = \left(\int_0^{\infty} f (t)e^{-st} \, dt\right)'
  = \int_0^{\infty} \frac{\partial (f (t)e^{-st})}{\partial s} \, dt
  = \int_0^{\infty} f (t)e^{-st} \cdot (-t) \, dt
  = \int_0^{\infty} (-t f (t)) e^{-st} \, dt
  = L[-t f (t)](s).

Remark 3.2.17. Formulas (3.11) and (3.14) indicate that the operation corresponding to differentiation in the time domain and in the frequency domain, respectively, is multiplication by the variable in the other domain (with some corrections: the term $-f(0^+)$ in (3.11) and the multiplication by $-t$ in (3.14)). This property allows us to transform linear differential equations into polynomial algebraic equations and to solve them more easily.

Theorem 3.2.18 (General Differentiation of the Image). For $\operatorname{Re}(s) > \sigma_f$,
$$F^{(n)}(s) = \mathcal{L}[(-t)^n f(t)](s), \quad n \in \mathbb{N}^*. \tag{3.15}$$

Proof. We will prove the above statement by induction. If $n = 1$, then formula (3.15) is true in virtue of Theorem 3.2.16 (Differentiation of the Image).
Assume now that formula (3.15) is true for $n-1$. Then, since $(e^{-st})'_s = -te^{-st}$,
$$F^{(n)}(s) = \left(F^{(n-1)}(s)\right)' = \left(\mathcal{L}[(-t)^{n-1}f(t)](s)\right)' = \left(\int_0^\infty (-t)^{n-1}f(t)e^{-st}\,dt\right)'$$
$$= \int_0^\infty (-t)^{n-1}f(t)e^{-st}\cdot(-t)\,dt = \int_0^\infty (-t)^n f(t)e^{-st}\,dt = \mathcal{L}[(-t)^n f(t)](s).$$
By induction, it follows that formula (3.15) is true for every $n \ge 1$.

Example 3.2.19. This example shows how Theorem 3.2.18 (General Differentiation of the Image) and formula (3.15) are explicitly used.

1. $\mathcal{L}[te^{-t}](s) \overset{(3.15)}{=} (-1)\cdot(\mathcal{L}[e^{-t}](s))' \overset{(3.2)}{=} -\left(\dfrac{1}{s+1}\right)' = \dfrac{1}{(s+1)^2}$;

2. $\mathcal{L}[t^2e^{-t}](s) \overset{(3.15)}{=} (-1)^2\cdot(\mathcal{L}[e^{-t}](s))'' \overset{(3.2)}{=} \left(\dfrac{1}{s+1}\right)'' = \dfrac{2}{(s+1)^3}$;

3. $\mathcal{L}[t^3e^{-t}](s) \overset{(3.15)}{=} (-1)^3\cdot(\mathcal{L}[e^{-t}](s))''' \overset{(3.2)}{=} -\left(\dfrac{1}{s+1}\right)''' = \dfrac{6}{(s+1)^4}$;

4. In general, for $n \in \mathbb{N}^*$,
$$\mathcal{L}[t^n e^{-t}](s) \overset{(3.15)}{=} (-1)^n\cdot(\mathcal{L}[e^{-t}](s))^{(n)} \overset{(3.2)}{=} (-1)^n\cdot\left(\frac{1}{s+1}\right)^{(n)} = \frac{n!}{(s+1)^{n+1}}.$$
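The closed forms above are easy to confirm numerically. The following sketch (Python, standard library only; the truncation length $T$ and grid size are arbitrary choices) approximates the Laplace integral with Simpson's rule and compares it with $n!/(s+1)^{n+1}$ at $s = 2$.

```python
import math

def laplace_num(f, s, T=60.0, n=20000):
    # Simpson's rule approximation of the truncated Laplace integral of f on [0, T]
    h = T / n
    acc = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        acc += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return acc * h / 3

s = 2.0
for p in (1, 2, 3):
    numeric = laplace_num(lambda t, p=p: t**p * math.exp(-t), s)
    exact = math.factorial(p) / (s + 1) ** (p + 1)
    print(p, numeric, exact)
```

For $p = 1$ both columns agree with $1/9$ to machine-level accuracy; the same helper is reused for the other checks in this section.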

Theorem 3.2.20 (Integration of the Original). If $f(t)$ is an original function, then $g(t) = \displaystyle\int_0^t f(x)\,dx$ is an original function and
$$\mathcal{L}[g(t)](s) = \mathcal{L}\left[\int_0^t f(x)\,dx\right](s) = \frac{1}{s}\mathcal{L}[f(t)](s). \tag{3.16}$$

Proof. We must first show that $g(t)$ is an original function. The function verifies $g(t) = 0$, $\forall t \le 0$ (since $f(t)$ has this property and $g(t)$ is continuous for $t > 0$). Moreover,
$$|g(t)| = \left|\int_0^t f(x)\,dx\right| \le \int_0^t |f(x)|\,dx \le \int_0^t Me^{\alpha x}\,dx = M\cdot\frac{e^{\alpha t}-1}{\alpha} < \frac{M}{\alpha}e^{\alpha t}, \quad \forall t \ge 0.$$
We differentiate $g(t)$ by applying formula (3.13) reduced to the B term (since $f(x)$ is constant with respect to $t$ and $a = 0$, hence their derivatives are equal to $0$) and we obtain that $g'(t) = f(t)$ at the continuity points of the function $f$.
Obviously, $g(0) = \displaystyle\int_0^0 f(x)\,dx = 0$ and by Theorem 3.2.13 (Differentiation of the Original),
$$\mathcal{L}[f(t)](s) = \mathcal{L}[g'(t)](s) = s\mathcal{L}[g(t)](s) - g(0),$$
which implies
$$\mathcal{L}[g(t)](s) = \frac{1}{s}\mathcal{L}[f(t)](s).$$
s

Example 3.2.21. This example shows how Theorem 3.2.20 (Integration of the Original) and formula (3.16) are explicitly used.

1. $\mathcal{L}\left[\displaystyle\int_0^t \cos(2x)\,dx\right](s) \overset{(3.16)}{=} \dfrac{1}{s}\cdot\mathcal{L}[\cos(2t)](s) = \dfrac{1}{s}\cdot\dfrac{s}{s^2+4} = \dfrac{1}{s^2+4}$;

2. $\mathcal{L}\left[\displaystyle\int_0^t e^x\sin x\,dx\right](s) \overset{(3.16)}{=} \dfrac{1}{s}\cdot\mathcal{L}[e^t\sin t](s) \overset{(3.10)}{=} \dfrac{1}{s}\cdot\mathcal{L}[\sin t](s-1) = \dfrac{1}{s[(s-1)^2+1]}$.
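The first item can be cross-checked directly: $\int_0^t \cos(2x)\,dx = \frac{1}{2}\sin(2t)$, so its transform should equal $1/(s^2+4)$. A minimal numerical sketch (Python, standard library; Simpson's rule as before):

```python
import math

def laplace_num(f, s, T=60.0, n=20000):
    # Simpson's rule approximation of the truncated Laplace integral
    h = T / n
    acc = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        acc += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return acc * h / 3

s = 2.0
g = lambda t: math.sin(2 * t) / 2          # g(t) = integral of cos(2x) over [0, t]
print(laplace_num(g, s), 1 / (s * s + 4))  # both close to 0.125
```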
Theorem 3.2.22 (Integration of the Image). Consider an original function $f(t)$ and its image $F(s)$. Then
$$\int_s^\infty F(x)\,dx = \mathcal{L}\left[\frac{f(t)}{t}\right](s), \tag{3.17}$$
if the integral is convergent.

Proof. We define $G(s) := \displaystyle\int_s^\infty F(x)\,dx$. Again, by applying formula (3.13) reduced to the C term, one obtains $G'(s) = -F(s)$.
Let $g(t)$ be the original of $G(s)$, so $G(s) = \mathcal{L}[g(t)](s)$. We have by Theorem 3.2.16 (Differentiation of the Image) that
$$G'(s) = \mathcal{L}[-t\cdot g(t)](s) = -F(s) = -\mathcal{L}[f(t)](s).$$
Therefore, $t\cdot g(t) = f(t)$, so $g(t) = \dfrac{f(t)}{t}$ and $G(s) = \mathcal{L}[g(t)](s) = \mathcal{L}\left[\dfrac{f(t)}{t}\right](s)$.

Example 3.2.23. This example shows how Theorem 3.2.22 (Integration of the Image) and formula (3.17) are explicitly used.
$$\mathcal{L}\left[\frac{\sin t}{t}\right](s) \overset{(3.17)}{=} \int_s^\infty \mathcal{L}[\sin t](x)\,dx = \int_s^\infty \frac{1}{x^2+1}\,dx = \arctan x\Big|_s^\infty = \frac{\pi}{2} - \arctan s.$$
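Numerically, at $s = 1$ the right hand side is $\pi/2 - \arctan 1 = \pi/4$. A quick check with the same Simpson-based sketch (Python, standard library; $\sin t/t$ is extended by its limit $1$ at $t = 0$):

```python
import math

def laplace_num(f, s, T=80.0, n=40000):
    # Simpson's rule approximation of the truncated Laplace integral
    h = T / n
    acc = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        acc += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return acc * h / 3

sinc = lambda t: math.sin(t) / t if t else 1.0  # sin(t)/t, value 1 at t = 0
s = 1.0
print(laplace_num(sinc, s), math.pi / 2 - math.atan(s))  # both ~ pi/4
```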

Example 3.2.24. This example shows how Theorem 3.2.22 (Integration of the Image) and formula (3.17) can fail.
$$\mathcal{L}\left[\frac{\cos t}{t}\right](s) \overset{(3.17)}{=} \int_s^\infty \mathcal{L}[\cos t](x)\,dx = \int_s^\infty \frac{x}{x^2+1}\,dx.$$
The last integral is divergent: the integrand is of the form $P(x)/Q(x)$ with $\deg P = 1$ and $\deg Q = 2$, so it does not satisfy the convergence condition $\deg P \le \deg Q - 2$ fulfilled in the previous example.
Corollary 3.2.25. If the integral in Theorem 3.2.22 (Integration of the Image, formula (3.17)) converges as $s \to 0$, then
$$\int_0^\infty F(x)\,dx = \int_0^\infty \frac{f(t)}{t}\,dt. \tag{3.18}$$

Proof. Indeed, from Theorem 3.2.22 (Integration of the Image) and the hypothesis, we get
$$\int_0^\infty F(x)\,dx = \lim_{s\to 0}\int_s^\infty F(x)\,dx = \mathcal{L}\left[\frac{f(t)}{t}\right](0) = \lim_{s\to 0}\int_0^\infty \frac{f(t)}{t}\cdot e^{-st}\,dt = \int_0^\infty \lim_{s\to 0}\frac{f(t)}{t}\cdot e^{-st}\,dt = \int_0^\infty \frac{f(t)}{t}\,dt.$$

Formulas (3.16) and (3.17) emphasize the correspondence between integration in a domain and division by the variable in the other domain. This property is useful for solving integral equations. Another method of solving some integral equations refers to convolution.

Convolution

Consider two original functions $f$ and $g$. If $f, g \in L^1$ (see Definition 2.1.1), then the general definition of their convolution is given by the integral
$$(f * g)(t) = \int_{-\infty}^\infty f(x)g(t-x)\,dx. \tag{3.19a}$$
Since $f$ is an original function, it verifies $f(x) = 0$ for $x < 0$ and
$$\int_{-\infty}^\infty f(x)g(t-x)\,dx = \int_{-\infty}^0 f(x)g(t-x)\,dx + \int_0^\infty f(x)g(t-x)\,dx.$$
Hence, the convolution can be written as
$$(f * g)(t) = \int_0^\infty f(x)g(t-x)\,dx. \tag{3.19b}$$
But $g(t-x) = 0$ for $t - x < 0$, i.e. for $x > t$, hence formula (3.19b) becomes
$$(f * g)(t) = \int_0^t f(x)g(t-x)\,dx. \tag{3.19c}$$
This formula is valid without the assumption $f, g \in L^1$, since the original functions are piecewise continuous. It follows that for original functions, one can use any of the three formulas of the convolution.
Theorem 3.2.26 (Convolution). If $f$ and $g$ are original functions, then
$$\mathcal{L}[(f * g)(t)](s) = F(s)\cdot G(s). \tag{3.20}$$

Proof. We will use formula (3.19a) for the convolution. By changing the order of integration, we have
$$\mathcal{L}[(f*g)(t)](s) = \int_0^\infty (f*g)(t)\cdot e^{-st}\,dt = \int_0^\infty\left(\int_0^\infty f(x)g(t-x)\,dx\right)e^{-st}\,dt = \int_0^\infty f(x)\left(\int_0^\infty g(t-x)e^{-st}\,dt\right)dx.$$
Using the substitution $y = t - x$, the last double integral becomes
$$\int_0^\infty f(x)\left(\int_{-x}^\infty g(y)e^{-s(y+x)}\,dy\right)dx = \int_0^\infty f(x)\left(\int_{-x}^0 g(y)e^{-s(y+x)}\,dy + \int_0^\infty g(y)e^{-s(y+x)}\,dy\right)dx.$$
Since $g(y) = 0$ for $y < 0$, the integral on $(-x, 0)$ is equal to $0$ and the double integral splits into the product of the following two integrals:
$$\int_0^\infty f(x)\left(\int_0^\infty g(y)e^{-s(y+x)}\,dy\right)dx = \int_0^\infty f(x)e^{-sx}\,dx \cdot \int_0^\infty g(y)e^{-sy}\,dy = F(s)\cdot G(s).$$
Due to the hypothesis that $f$ and $g$ are original functions, the Laplace integrals of $f$ and $g$ are absolutely convergent and hence, in view of the preceding calculation, the integral $\int_0^\infty f(x)\int_0^\infty g(t-x)e^{-st}\,dt\,dx$ is absolutely convergent. This fact allowed us to change the integration order.

Example 3.2.27. Now we present how Theorem 3.2.26 (Convolution) and formula (3.20) work.
$$\mathcal{L}[te^{-t} * \sin t](s) = \mathcal{L}[te^{-t}](s)\cdot\mathcal{L}[\sin t](s) = \frac{1}{(s+1)^2}\cdot\frac{1}{s^2+1}.$$
The last equality is due to the first calculation in Example 3.2.19.
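Formula (3.20) can also be verified numerically: compute the convolution by formula (3.19c) with one quadrature, transform it with another, and compare against the product of the two images. A sketch under the usual truncation assumptions (Python, standard library):

```python
import math

def simpson(f, a, b, n=200):
    # composite Simpson's rule on [a, b] (n must be even)
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(a + k * h)
    return acc * h / 3

def conv(t):
    # (t e^{-t} * sin t)(t) via formula (3.19c)
    if t == 0:
        return 0.0
    return simpson(lambda x: x * math.exp(-x) * math.sin(t - x), 0.0, t)

s = 2.0
lhs = simpson(lambda t: conv(t) * math.exp(-s * t), 0.0, 40.0, n=2000)
rhs = 1 / ((s + 1) ** 2 * (s * s + 1))
print(lhs, rhs)  # both ~ 1/45
```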

Theorem 3.2.28 (Initial Value). If the original function $f(t)$ is continuous and piecewise smooth on $[0, \infty)$, then
$$f(0^+) = \lim_{s\to\infty} sF(s), \quad s \in \mathbb{R}. \tag{3.21}$$

Proof. By Theorem 3.2.13 (Differentiation of the Original),
$$\mathcal{L}[f'(t)](s) = sF(s) - f(0^+)$$
and formula (3.21) follows by Theorem 3.1.6 (Existence) and Corollary 3.1.7, since $\mathcal{L}[f'(t)](s) \to 0$ as $\operatorname{Re}(s) \to \infty$.

Theorem 3.2.29 (Final Value). If $\lim\limits_{t\to\infty} f(t)$ exists, then
$$\lim_{t\to\infty} f(t) = \lim_{s\to 0} sF(s), \quad s \in \mathbb{R}. \tag{3.22}$$

Proof. Since $f(t)$ is piecewise continuous and $\lim\limits_{t\to\infty} f(t)$ exists, it follows that $f(t)$ is bounded on $[0,\infty)$. Hence, there exists $M > 0$ such that $|f(t)| \le M = M\cdot e^{0\cdot t}$, $\forall t \ge 0$. This implies that the growth index of $f$ is $\sigma_f \le 0$ and its Laplace transform exists for $\operatorname{Re}(s) > 0$.
In virtue of Theorem 3.2.13 (Differentiation of the Original) we obtain that $\mathcal{L}[f'(t)](s) = sF(s) - f(0^+)$ and
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{s\to 0} sF(s) - f(0^+). \tag{$\star$}$$
But, by Definition 3.1.1 (Laplace Transform),
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{s\to 0}\int_0^\infty f'(t)e^{-st}\,dt = \int_0^\infty f'(t)\left(\lim_{s\to 0} e^{-st}\right)dt = \int_0^\infty f'(t)\,dt = f(t)\Big|_0^\infty = \lim_{t\to\infty} f(t) - \lim_{t\to 0^+} f(t);$$
hence,
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{t\to\infty} f(t) - f(0^+). \tag{$\star\star$}$$
Formula (3.22) follows from ($\star$) and ($\star\star$).

Theorem 3.2.30 (Laplace Transform of Periodic Functions). If the original function $f(t)$ is periodic with period $T > 0$ for $t \ge 0$, then
$$\mathcal{L}[f(t)](s) = \frac{1}{1-e^{-sT}}\int_0^T f(t)e^{-st}\,dt. \tag{3.23}$$

Proof. By periodicity, $f(x+T) = f(x)$ for $x \ge 0$. We use the definition of the Laplace transform and we split the interval $[0,\infty)$ into $[0,T)\cup[T,\infty)$. Thus,
$$F(s) = \int_0^\infty f(t)e^{-st}\,dt = \int_0^T f(t)e^{-st}\,dt + \int_T^\infty f(t)e^{-st}\,dt.$$
In the last integral, we substitute the integration variable $t = x + T$ and we obtain that
$$\int_T^\infty f(t)e^{-st}\,dt = \int_0^\infty f(x+T)e^{-s(x+T)}\,dx = e^{-sT}\int_0^\infty f(x)e^{-sx}\,dx = e^{-sT}\cdot F(s).$$
This implies that $F(s) = \displaystyle\int_0^T f(t)e^{-st}\,dt + e^{-sT}\cdot F(s)$; hence,
$$\mathcal{L}[f(t)](s) = F(s) = \frac{1}{1-e^{-sT}}\int_0^T f(t)e^{-st}\,dt.$$
Example 3.2.31. Now we give two examples in which Theorem 3.2.30 and formula (3.23) are used.

1. The function $\sin t$ is periodic with period $T = 2\pi$. Then
$$\mathcal{L}[\sin t](s) \overset{(3.23)}{=} \frac{1}{1-e^{-2\pi s}}\int_0^{2\pi}\sin t\cdot e^{-st}\,dt.$$
We get that
$$I := \int_0^{2\pi}\sin t\, e^{-st}\,dt = \int_0^{2\pi}\frac{e^{it}-e^{-it}}{2i}\cdot e^{-st}\,dt = \frac{1}{2i}\left(\int_0^{2\pi} e^{t(i-s)}\,dt - \int_0^{2\pi} e^{-t(i+s)}\,dt\right)$$
$$= \frac{1}{2i}\left(\frac{e^{t(i-s)}}{i-s}\Big|_0^{2\pi} + \frac{e^{-t(i+s)}}{i+s}\Big|_0^{2\pi}\right) = \frac{1}{2i}\left(\frac{e^{2\pi(i-s)}-1}{i-s} + \frac{e^{-2\pi(i+s)}-1}{i+s}\right).$$
Since $e^{2\pi i} = e^{-2\pi i} = 1$, it follows that
$$I = \frac{1}{2i}\left(\frac{e^{-2\pi s}-1}{i-s} + \frac{e^{-2\pi s}-1}{i+s}\right) = \frac{e^{-2\pi s}-1}{2i}\left(\frac{1}{i-s}+\frac{1}{i+s}\right) = \frac{e^{-2\pi s}-1}{2i}\cdot\frac{2i}{-s^2-1} = \frac{1-e^{-2\pi s}}{s^2+1}.$$
In conclusion, $\mathcal{L}[\sin t](s) = \dfrac{1}{1-e^{-2\pi s}}\cdot I = \dfrac{1}{s^2+1}$;

2. Let us consider the function $f(t) = \begin{cases} 1, & t \in [0,1) \\ 0, & t \in [1,2) \end{cases}$, which is periodic with period $T = 2$. Then
$$\mathcal{L}[f(t)](s) \overset{(3.23)}{=} \frac{1}{1-e^{-2s}}\int_0^2 f(t)e^{-st}\,dt = \frac{1}{1-e^{-2s}}\left(\int_0^1 e^{-st}\,dt + 0\right)$$
$$= \frac{1}{1-e^{-2s}}\cdot\frac{e^{-st}}{-s}\Big|_0^1 = \frac{e^{-s}-1}{s(e^{-2s}-1)} = \frac{e^{-s}-1}{s(e^{-s}-1)(e^{-s}+1)} = \frac{1}{s(e^{-s}+1)}.$$
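The square-wave transform in item 2 can be checked numerically by integrating $e^{-st}$ piece by piece over the intervals where $f$ equals $1$, which avoids placing quadrature nodes on the jumps. A sketch (Python, standard library; truncating after 30 periods is an assumption justified by the factor $e^{-2ks}$):

```python
import math

def simpson(f, a, b, n=200):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(a + k * h)
    return acc * h / 3

s = 1.0
# f(t) = 1 on [2k, 2k+1) and 0 on [2k+1, 2k+2): integrate only the "on" pieces
numeric = sum(simpson(lambda t: math.exp(-s * t), 2 * k, 2 * k + 1) for k in range(30))
exact = 1 / (s * (1 + math.exp(-s)))
print(numeric, exact)
```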

Theorem 3.2.32 (Laplace Transform of Power Series). Assume that the power series $\displaystyle\sum_{n=0}^\infty a_n t^n$, $a_n \in \mathbb{R}$, is convergent and has the sum $f(t)$. If there exist $M > 0$ and $b > 0$ such that
$$|a_n| \le M\cdot\frac{b^n}{n!}, \quad n \in \mathbb{N}, \tag{3.24}$$
then, for $\operatorname{Re}(s) > b$,
$$\mathcal{L}[f(t)](s) = \sum_{n=0}^\infty a_n\mathcal{L}[t^n](s) = \sum_{n=0}^\infty a_n\cdot\frac{n!}{s^{n+1}}. \tag{3.25}$$

Proof. Consider the restriction of the Laplace transform to the real axis, namely
$$\mathcal{L}[f(t)](x) = \int_0^\infty f(t)e^{-xt}\,dt, \quad x \in \mathbb{R}.$$
We use the following properties:

1. if $f(t) \ge 0$ for every $t \in [0,\infty)$, then $f(t)e^{-xt} \ge 0$; hence, $\mathcal{L}[f(t)](x) \ge 0$. Therefore, if $g(t) \le h(t)$ for every $t \in [0,\infty)$, we obtain that $\mathcal{L}[g(t)](x) \le \mathcal{L}[h(t)](x)$ (take $f = h - g$);

2. if $s = x + iy$, then, since $|e^{-st}| = e^{-\operatorname{Re}(s)t}$,
$$|\mathcal{L}[f(t)](s)| = \left|\int_0^\infty f(t)e^{-st}\,dt\right| \le \int_0^\infty |f(t)|e^{-\operatorname{Re}(s)t}\,dt = \int_0^\infty |f(t)|e^{-xt}\,dt.$$
Hence,
$$|\mathcal{L}[f(t)](s)| \le \mathcal{L}[|f(t)|](x); \tag{3.26}$$

3. using the exponential series $e^x = \displaystyle\sum_{n=0}^\infty \frac{x^n}{n!}$ and then applying formula (3.24), one gets that
$$\left|f(t) - \sum_{k=0}^n a_k t^k\right| = \left|\sum_{k=0}^\infty a_k t^k - \sum_{k=0}^n a_k t^k\right| = \left|\sum_{k=n+1}^\infty a_k t^k\right| \le M\sum_{k=n+1}^\infty \frac{(bt)^k}{k!} = M\left(\sum_{k=0}^\infty \frac{(bt)^k}{k!} - \sum_{k=0}^n \frac{(bt)^k}{k!}\right).$$
Hence,
$$\left|f(t) - \sum_{k=0}^n a_k t^k\right| \le M\left(e^{bt} - \sum_{k=0}^n \frac{(bt)^k}{k!}\right). \tag{3.27}$$

Using formulas (3.26) and (3.27) and also property 1 from above, we have
$$\left|\mathcal{L}[f(t)](s) - \sum_{k=0}^n a_k\mathcal{L}[t^k](s)\right| \le \mathcal{L}\left[\left|f(t) - \sum_{k=0}^n a_k t^k\right|\right](x) \le M\left(\mathcal{L}[e^{bt}](x) - \sum_{k=0}^n \frac{b^k}{k!}\mathcal{L}[t^k](x)\right)$$
$$= M\left(\frac{1}{x-b} - \sum_{k=0}^n \frac{b^k}{k!}\cdot\frac{k!}{x^{k+1}}\right) = M\left(\frac{1}{x-b} - \frac{1}{x}\sum_{k=0}^n\left(\frac{b}{x}\right)^k\right) = M\left(\frac{1}{x-b} - \frac{1}{x}\cdot\frac{1-\left(\frac{b}{x}\right)^{n+1}}{1-\frac{b}{x}}\right).$$
Bear in mind that above we have used the sum of the geometric progression, namely $\sum_{k=0}^n q^k = \dfrac{1-q^{n+1}}{1-q}$. Finally, since $\dfrac{1}{x}\cdot\dfrac{1}{1-\frac{b}{x}} = \dfrac{1}{x-b}$,
$$\left|\mathcal{L}[f(t)](s) - \sum_{k=0}^n a_k\mathcal{L}[t^k](s)\right| \le M\left(\frac{1}{x-b} - \frac{1}{x-b} + \frac{1}{x-b}\cdot\left(\frac{b}{x}\right)^{n+1}\right) = \frac{M}{x-b}\cdot\left(\frac{b}{x}\right)^{n+1}.$$
Since $\operatorname{Re}(s) > b$, it follows that $x > b \iff \dfrac{b}{x} < 1$, which implies that $\lim\limits_{n\to\infty}\left(\dfrac{b}{x}\right)^{n+1} = 0$. Hence, $\lim\limits_{n\to\infty}\left|\mathcal{L}[f(t)](s) - \displaystyle\sum_{k=0}^n a_k\mathcal{L}[t^k](s)\right| = 0$, so
$$\mathcal{L}[f(t)](s) = \sum_{n=0}^\infty a_n\mathcal{L}[t^n](s) = \sum_{n=0}^\infty a_n\cdot\frac{n!}{s^{n+1}}.$$

Example 3.2.33. Now we show how Theorem 3.2.32 (Laplace Transform of Power Series) and formula (3.25) are used.

a) The function $f(t) = e^{-t^2}$, $t \ge 0$, is continuous on $[0,\infty)$ and the limit at $t = \infty$ exists: $\lim\limits_{t\to\infty} f(t) = 0$. Therefore, $f(t)$ is bounded on $[0,\infty)$. Indeed, $|f(t)| = |e^{-t^2}| \le 1 = 1\cdot e^{0\cdot t}$, i.e. its growth index is $\sigma_f \le 0$. Being an original function, the Laplace transform $\mathcal{L}\left[e^{-t^2}\right](s)$ exists. It is
$$F(s) = \frac{\sqrt{\pi}}{2}\,e^{(s/2)^2}\operatorname{erfc}\left(\frac{s}{2}\right), \quad\text{where } \operatorname{erfc}(s) = \frac{2}{\sqrt{\pi}}\int_s^\infty e^{-u^2}\,du.$$
Using the exponential series $e^z = \displaystyle\sum_{n=0}^\infty \frac{z^n}{n!}$, the power series expansion of $e^{-t^2}$ is $\displaystyle\sum_{n=0}^\infty \frac{(-1)^n t^{2n}}{n!}$. Since $\mathcal{L}[t^k](s) = \dfrac{k!}{s^{k+1}}$, the term by term Laplace transform of the series is
$$\mathcal{L}\left[\sum_{n=0}^\infty \frac{(-1)^n t^{2n}}{n!}\right](s) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\mathcal{L}\left[t^{2n}\right](s) = \sum_{n=0}^\infty (-1)^n\cdot\frac{(2n)!}{n!}\cdot\frac{1}{s^{2n+1}}.$$
But because $\left|(-1)^n\cdot\dfrac{(2n)!}{n!}\right| = (2n)(2n-1)\cdot\ldots\cdot(n+1) \to \infty$, the transformed series is divergent for any $s \in \mathbb{C}$, so we can see that condition (3.24) is not verified. Therefore, the method of the Laplace transform of power series fails in this case.

b) Consider the function $f(t) = \dfrac{e^{-at}}{\sqrt{\pi t}}$, $a \in \mathbb{C}$, $a \ne 0$. It follows that
$$e^{-at} = \sum_{n=0}^\infty \frac{(-a)^n}{n!}t^n = \sum_{n=0}^\infty \frac{(-1)^n a^n}{n!}t^n,$$
so $f(t) = \dfrac{e^{-at}}{\sqrt{\pi t}} = \dfrac{1}{\sqrt{\pi}}\displaystyle\sum_{n=0}^\infty \frac{(-1)^n a^n}{n!}t^{n-\frac12}$. Since $\left|\dfrac{(-1)^n a^n}{n!}\right| \le \dfrac{|a|^n}{n!}$, condition (3.24) is verified. Hence,
$$\mathcal{L}[f(t)](s) \overset{(3.25)}{=} \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^n a^n}{n!}\mathcal{L}\left[t^{n-\frac12}\right](s) = \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^n a^n}{n!}\cdot\frac{\Gamma\left(n+\frac12\right)}{s^{n+\frac12}}.$$
Using the properties of the $\Gamma$ function, it follows that
$$\Gamma\left(n+\frac12\right) = \left(n-\frac12\right)\cdots\frac12\,\Gamma\left(\frac12\right) = \frac{(2n-1)\cdots 1}{2^n}\sqrt{\pi},$$
so
$$\mathcal{L}[f(t)](s) = \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^n a^n}{n!}\cdot\frac{(2n-1)\cdots 1}{2^n}\sqrt{\pi}\cdot\frac{1}{s^{n+\frac12}} = \frac{1}{\sqrt{s}}\sum_{n=0}^\infty \frac{(-1)^n(2n-1)\cdots 1}{n!\cdot 2^n}\left(\frac{a}{s}\right)^n.$$
Since we know that the expansion of $(1+z)^{-\frac12}$ is
$$\sum_{n=0}^\infty \frac{-\frac12\cdots\left(-\frac{2n-1}{2}\right)}{n!}z^n = \sum_{n=0}^\infty \frac{(-1)^n(2n-1)\cdots 1}{n!\cdot 2^n}z^n,$$
it follows that $\mathcal{L}[f(t)](s) = \dfrac{1}{\sqrt{s}}\cdot\left(1+\dfrac{a}{s}\right)^{-\frac12} = \dfrac{1}{\sqrt{s+a}}$.

3.3 Inverse Laplace Transform
The Laplace transform is used to transform some (difficult) problems into
simple algebraic ones. We obtain the solution in the frequency domain, but
we must return it to the time domain and obtain the corresponding original
solution.

Theorem 3.3.1. Let $f(t)$ be an original function with the growth index $\sigma$ and the Laplace transform $F(s)$. Then
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} F(s)e^{st}\,ds, \quad a > \sigma,\ t \in \mathbb{R}. \tag{3.28}$$

Proof. Consider the function $g(t) = e^{-at}\cdot\dfrac{f(t^-)+f(t^+)}{2}$. Obviously, if $f$ is continuous at the point $t$, then $f(t^-) = f(t^+) = f(t)$, which implies that $g(t) = e^{-at}f(t)$ (for $t \ge 0$) and $g(t) = 0$ (for $t < 0$). The function $g(t)$ is absolutely integrable, since, for $t > 0$,
$$|g(t)| = e^{-at}|f(t)| \le e^{-at}\cdot Me^{\sigma t} = Me^{-(a-\sigma)t}$$
and $a > \sigma$ implies that $\lim\limits_{t\to\infty} e^{-(a-\sigma)t} = 0$. Then
$$\int_0^\infty e^{-(a-\sigma)t}\,dt = -\frac{1}{a-\sigma}e^{-(a-\sigma)t}\Big|_0^\infty = \frac{1}{a-\sigma} > 0.$$
Writing the Fourier integral for $g(t)$ (see formula (2.20)), one obtains
$$g(t) = \frac{1}{2\pi}\int_{-\infty}^\infty\left(\int_{-\infty}^\infty g(x)e^{i\omega x}\,dx\right)e^{-i\omega t}\,d\omega.$$
Since $g(x) = 0$ for $x < 0$, the expression becomes
$$e^{-at}\cdot\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi}\int_{-\infty}^\infty\left(\int_0^\infty e^{-ax}f(x)e^{i\omega x}\,dx\right)e^{-i\omega t}\,d\omega.$$
We multiply by $e^{at}$ and we obtain that
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi}\int_{-\infty}^\infty\left(\int_0^\infty f(x)e^{-(a-i\omega)x}\,dx\right)e^{(a-i\omega)t}\,d\omega.$$
Consider the substitution $s = a - i\omega$ (formally, $ds = -i\,d\omega$, so $\omega = \infty \Rightarrow s = a - i\infty$ and $\omega = -\infty \Rightarrow s = a + i\infty$). It follows that
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi}\int_{a+i\infty}^{a-i\infty}\left(\int_0^\infty f(x)e^{-sx}\,dx\right)e^{st}\cdot\left(-\frac{ds}{i}\right) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}\left(\int_0^\infty f(x)e^{-sx}\,dx\right)e^{st}\,ds.$$
Since $F(s) = \displaystyle\int_0^\infty f(x)e^{-sx}\,dx$, it follows that
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} F(s)e^{st}\,ds.$$

Remark 3.3.2. We make the following observations in connection with formula (3.28):

- if $t < 0$, then the left hand side of formula (3.28) becomes $0$;

- if $f$ is continuous at the point $t$, then the left hand side of formula (3.28) becomes $f(t)$;

- if $f$ is standardized (see Remark 2.3.6), i.e. $\dfrac{f(t^-)+f(t^+)}{2} = f(t)$ at the discontinuity points, then formula (3.28) becomes the Mellin-Fourier formula, namely
$$f(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty} F(s)e^{st}\,ds, \quad a > \sigma. \tag{3.29}$$
Formula (3.29) is a representation of the inverse Laplace transform $\mathcal{L}^{-1}$. This operator is defined by $\mathcal{L}^{-1}[F(s)](t) = f(t)$ and recovers the time domain signal from the frequency domain.

One can derive from the Mellin-Fourier formula a practical method to determine the signal $f(t)$ using residues. To achieve this goal, one needs the following result.

Lemma 3.3.3 (Jordan's Lemma). Let $C_n$ be the half-circles of radius $R_n$ centered at $s_0 \in \mathbb{C}$ (see Figure 3.2) such that $R_1 < R_2 < \cdots < R_n < \cdots$ and $\lim\limits_{n\to\infty} R_n = \infty$; so each half-circle can be written as follows:
$$C_n = \{s \in \mathbb{C} : |s - s_0| = R_n,\ \operatorname{Re}(s) \le \operatorname{Re}(s_0)\}.$$
If the function $F(s)$ is continuous on $C_n$, $n \in \mathbb{N}^*$, and it converges uniformly to $0$ on $C_n$ as $n \to \infty$, then
$$\lim_{n\to\infty}\int_{C_n} F(s)e^{st}\,ds = 0, \quad \forall t > 0.$$

Figure 3.2: The Half-circles $C_n$, $n \in \mathbb{N}^*$

Proof. See Lemma 1.4.1 in [11].

The formula for determining the original function $f(t)$ is given by formula (3.30) below. We denote by $l_a$ the vertical line from $a - i\infty$ to $a + i\infty$, so
$$l_a := \{z = a + iy : y \in \mathbb{R}\}.$$

Theorem 3.3.4. Let $F(s)$ be an analytic function on $\mathbb{C}$, except for a finite number of isolated singular points $s_1, s_2, \dots, s_k$ belonging to the half plane $\operatorname{Re}(s) < \sigma$ (for some $\sigma \in \mathbb{R}$). We define the circles $C_n$ by
$$C_n := \{s \in \mathbb{C} : |s| = R_n\}, \quad R_1 < R_2 < \cdots < R_n < \cdots, \quad \lim_{n\to\infty} R_n = \infty.$$
Assume that $F(s) \xrightarrow[s\to\infty]{u} 0$ with respect to $s \in C_n$, $n \in \mathbb{N}^*$, and $\displaystyle\int_{l_a} F(s)\,ds$ is absolutely convergent for any $a > \sigma$. Then
$$f(t) = \sum_{j=1}^k \operatorname{res}(F(s)e^{st}, s_j). \tag{3.30}$$

Proof. Take $n \in \mathbb{N}^*$ to be sufficiently large such that $R_n > \max\limits_{j=\overline{1,k}}\{|s_j|, |a|\}$ and let $r_n = \sqrt{R_n^2 - a^2}$. Consider the points $A_n(a - ir_n)$ and $B_n(a + ir_n)$, the vertical segment $A_nB_n$ and the arc $\gamma_n \subset C_n$ with endpoints $A_n$ and $B_n$ included in the half-plane $\operatorname{Re}(s) < a$. We denote by $\Gamma_n$ the closed contour $\gamma_n \cup A_nB_n$ (see Figure 3.3).

Figure 3.3: The Closed Contour $\Gamma_n$ for the Mellin-Fourier Theorem

By [26, Theorem 5.1.1] (Residue Theorem),
$$\frac{1}{2\pi i}\oint_{\Gamma_n} F(s)e^{st}\,ds = \sum_{j=1}^k \operatorname{res}(F(s)e^{st}, s_j). \tag{3.31}$$
We decompose the complex line integral in
$$\frac{1}{2\pi i}\int_{\gamma_n} F(s)e^{st}\,ds + \frac{1}{2\pi i}\int_{a-ir_n}^{a+ir_n} F(s)e^{st}\,ds = \sum_{j=1}^k \operatorname{res}(F(s)e^{st}, s_j). \tag{3.32}$$

If we let $n \to \infty$ in formula (3.32), then, using Lemma 3.3.3 (Jordan's Lemma), we get that
$$\lim_{n\to\infty}\int_{\gamma_n} F(s)e^{st}\,ds = 0.$$
Bear in mind that the segment $A_nB_n$ becomes $l_a$ (the vertical line from $a - i\infty$ to $a + i\infty$) and that the right hand member of (3.32) remains constant, since by the assumption $R_n > \max\limits_{j=\overline{1,k}} |s_j|$, no isolated singular point appears as $n \to \infty$. Therefore, when $n \to \infty$, the equality (3.32) implies formula (3.30).
By applying formula (5.5) in [26] (for the computation of the residues at poles), one obtains from (3.30) formula (3.33) below.

Corollary 3.3.5. If $F(s)$ satisfies the assumptions of Theorem 3.3.4 and its isolated singular points $s_j$ are poles of order $n_j \ge 1$, $j = \overline{1,k}$, then
$$f(t) = \sum_{j=1}^k \frac{1}{(n_j-1)!}\lim_{s\to s_j}\left[(s-s_j)^{n_j}F(s)e^{st}\right]^{(n_j-1)}, \quad t \ge 0. \tag{3.33}$$
We just mention that in the case of a simple pole, i.e. $n_j = 1$, the corresponding term in (3.33) is
$$\lim_{s\to s_j}\left[(s-s_j)F(s)e^{st}\right].$$

Example 3.3.6. Consider the function $F(s) = \dfrac{e^{-s}}{(s^2+1)^2}$. We take $G(s) = F(s)e^{st} = \dfrac{e^{st-s}}{(s^2+1)^2}$. The isolated singular points of $G$ are $\pm i$, both being poles of order $2$. Now we use formula (3.30), so we need to compute the residues at these points. Since
$$\operatorname{res}(G, i) = \lim_{s\to i}\left(\frac{e^{st-s}}{(s+i)^2}\right)' = \lim_{s\to i}\frac{e^{st-s}[(t-1)(s+i)-2]}{(s+i)^3} = \frac{e^{-i+it}[2i(t-1)-2]}{-8i},$$
$$\operatorname{res}(G, -i) = \overline{\operatorname{res}(G, i)} = \frac{e^{i-it}[-2i(t-1)-2]}{8i},$$
it follows that
$$f(t) = \operatorname{res}(G, i) + \operatorname{res}(G, -i) = \frac{e^{-i+it}[2i(t-1)-2]}{-8i} + \frac{e^{i-it}[-2i(t-1)-2]}{8i} = -\frac{2i(t-1)}{8i}\left(e^{-i+it} + e^{i-it}\right) + \frac{2}{8i}\left(e^{-i+it} - e^{i-it}\right).$$
But
$$e^{-i+it} = e^{i(t-1)} = \cos(t-1) + i\sin(t-1)$$
and
$$e^{i-it} = e^{-i(t-1)} = \cos(t-1) - i\sin(t-1).$$
Hence,
$$f(t) = -\frac14(t-1)\cdot 2\cos(t-1) + \frac{1}{4i}\cdot 2i\sin(t-1) = \frac{\sin(t-1)}{2} - \frac{(t-1)\cos(t-1)}{2}.$$
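The result can be cross-checked in the forward direction: the factor $e^{-s}$ only shifts the argument by $1$, so without it the original of $1/(s^2+1)^2$ should be $(\sin t - t\cos t)/2$. A numerical confirmation of this unshifted pair (Python, standard library; Simpson's rule on a truncated interval):

```python
import math

def laplace_num(f, s, T=60.0, n=20000):
    # Simpson's rule approximation of the truncated Laplace integral
    h = T / n
    acc = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        acc += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return acc * h / 3

g = lambda t: (math.sin(t) - t * math.cos(t)) / 2
for s in (1.0, 2.0, 3.0):
    print(s, laplace_num(g, s), 1 / (s * s + 1) ** 2)
```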
These formulas can be applied in the case of rational functions
$$F(s) = \frac{P(s)}{Q(s)},$$
where $P$ and $Q$ are polynomials such that $\deg(P) < \deg(Q)$. Hence, the poles of $F$ are the roots of $Q$.
In the case of simple poles, the conditions of Theorem 3.3.4 hold by formula (5.7) in [26]. One obtains from (3.30) the following formula:
$$f(t) = \sum_{j=1}^k \frac{P(s_j)}{Q'(s_j)}e^{s_j t}, \quad t \ge 0. \tag{3.34}$$

The general method to determine the original of a rational function is based on partial fraction decomposition and identification of the originals of each partial fraction.
Using the following known results:

1. $\mathcal{L}[h(t)](s) = \dfrac{1}{s}$, $h(t) = 1$, $t > 0$,

2. $\mathcal{L}[t^k](s) = \dfrac{k!}{s^{k+1}}$, $k \in \mathbb{N}$,

3. $\mathcal{L}[e^{\lambda t}f(t)](s) = F(s-\lambda)$, $\lambda \in \mathbb{C}$,

one obtains Proposition 3.3.7 below.

Proposition 3.3.7. If the rational function $F(s) = \dfrac{P(s)}{Q(s)}$ has the partial fraction decomposition
$$F(s) = \frac{A}{s} + \sum_{j=1}^n \frac{A_j}{(s-s_j)^{k_j}}, \quad n, k_j \in \mathbb{N}^*,\ j = \overline{1,n},$$
then the original of $F(s)$ is
$$f(t) = A + \sum_{j=1}^n \frac{A_j}{(k_j-1)!}e^{s_j t}t^{k_j-1}, \quad t \ge 0. \tag{3.35}$$

A similar method can be applied to the Laurent series expansion of the Laplace transform $F(s)$.

Proposition 3.3.8. If $F(s)$ is analytic on $\mathbb{C}$, $F(\infty) = 0$ and $F(s)$ has the Laurent series expansion around the point at infinity given by
$$F(s) = \sum_{n=0}^\infty a_n s^{-n-1},$$
then its original is the function
$$f(t) = \sum_{n=0}^\infty \frac{a_n}{n!}t^n. \tag{3.36}$$

Proof. One can prove that the series in formula (3.36) is convergent, $\forall t \ge 0$ (see [11, Theorem 1.4.15]).
Since $\mathcal{L}[t^k](s) = \dfrac{k!}{s^{k+1}}$, $k \in \mathbb{N}$, and using Theorem 3.2.1 (Linearity), it follows that
$$\mathcal{L}[f(t)](s) = \mathcal{L}\left[\sum_{n=0}^\infty \frac{a_n}{n!}t^n\right](s) = \sum_{n=0}^\infty \frac{a_n}{n!}\mathcal{L}[t^n](s) = \sum_{n=0}^\infty \frac{a_n}{n!}\cdot\frac{n!}{s^{n+1}} = F(s).$$
Example 3.3.9. Now we are going to make some explicit computations using partial fraction decomposition.

1. Let $F(s) = \dfrac{1}{s^2-3s+2}$. Then
$$F(s) = \frac{1}{(s-1)(s-2)} = -\frac{1}{s-1} + \frac{1}{s-2},$$
so $\mathcal{L}^{-1}[F(s)](t) = -e^t + e^{2t}$, $t \ge 0$;

2. Let $F(s) = \dfrac{3s^2+3s+3}{s^3-3s+2}$. Then
$$F(s) = \frac{3s^2+3s+3}{(s+2)(s-1)^2} = \frac{1}{s+2} + \frac{2}{s-1} + \frac{3}{(s-1)^2},$$
so $\mathcal{L}^{-1}[F(s)](t) = e^{-2t} + 2e^t + 3te^t$, $t \ge 0$.
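Partial fraction decompositions like these can be validated exactly by evaluating both sides at a few rational sample points with exact arithmetic. A minimal sketch (Python, standard library `fractions` module):

```python
from fractions import Fraction

# check the two decompositions above at several rational sample points
for s in (Fraction(5), Fraction(7, 2), Fraction(-3)):
    lhs1 = Fraction(1) / (s * s - 3 * s + 2)
    rhs1 = -Fraction(1) / (s - 1) + Fraction(1) / (s - 2)
    assert lhs1 == rhs1
    lhs2 = (3 * s * s + 3 * s + 3) / (s**3 - 3 * s + 2)
    rhs2 = Fraction(1) / (s + 2) + Fraction(2) / (s - 1) + Fraction(3) / (s - 1) ** 2
    assert lhs2 == rhs2
print("decompositions verified")
```

Since a rational identity that holds at more sample points than its degree must hold everywhere, a handful of exact evaluations is already conclusive.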

3.4 Applications of the Laplace Transform

3.4.1 Differential Equations

Consider the $n$th order linear differential equation (DE) with constant coefficients
$$a_n y^{(n)} + a_{n-1}y^{(n-1)} + \cdots + a_2y'' + a_1y' + a_0y = f(t), \tag{3.37}$$
where $a_i \in \mathbb{R}$, $i = \overline{0,n}$, $a_n \ne 0$ and the unknown is $y = y(t)$.
Assume that $f(t)$ is a given continuous original function. One can prove that in this situation any solution $y = y(t)$ is a continuous original function (see [28, Theorem A.6]).
Let $F(s) = \mathcal{L}[f(t)](s)$ and $Y(s) = \mathcal{L}[y(t)](s)$. By Theorem 3.2.14 (General Differentiation of the Original), one obtains the following:

- $\mathcal{L}[y'(t)](s) = sY(s) - y(0)$;
- $\mathcal{L}[y''(t)](s) = s^2Y(s) - sy(0) - y'(0)$;
- $\dots$
- $\mathcal{L}[y^{(n-1)}(t)](s) = s^{n-1}Y(s) - s^{n-2}y(0) - s^{n-3}y'(0) - \cdots - sy^{(n-3)}(0) - y^{(n-2)}(0)$;
- $\mathcal{L}[y^{(n)}(t)](s) = s^nY(s) - s^{n-1}y(0) - s^{n-2}y'(0) - \cdots - sy^{(n-2)}(0) - y^{(n-1)}(0)$.

One applies the Laplace transform to the DE (3.37) and by Theorem 3.2.1 (Linearity), one transforms it into the following algebraic equation:
$$(a_ns^n + \cdots + a_1s + a_0)Y(s) - y(0)(a_ns^{n-1} + \cdots + a_2s + a_1) - y'(0)(a_ns^{n-2} + \cdots + a_3s + a_2) - \cdots - y^{(n-2)}(0)(a_ns + a_{n-1}) - y^{(n-1)}(0)a_n = F(s).$$
Let us denote by $p(s)$ the coefficient of $Y(s)$. Hence,
$$p(s) = a_ns^n + a_{n-1}s^{n-1} + \cdots + a_1s + a_0$$
is the characteristic polynomial of the DE (3.37).
We also define for any polynomial the operator $\delta : \mathbb{R}[s] \to \mathbb{R}[s]$ by
$$\delta(p(s)) = a_ns^{n-1} + a_{n-1}s^{n-2} + \cdots + a_2s + a_1.$$
Therefore, the algebraic equation can be written as
$$p(s)Y(s) = F(s) + y(0)\delta(p(s)) + y'(0)\delta^2(p(s)) + \cdots + y^{(n-2)}(0)\delta^{n-1}(p(s)) + y^{(n-1)}(0)\delta^n(p(s)).$$
Consider the initial conditions
$$y(0) = y_0,\ y'(0) = y_1,\ \dots,\ y^{(n-1)}(0) = y_{n-1}, \tag{3.38}$$
where $y_i \in \mathbb{R}$, $i = \overline{0,n-1}$. We obtain that the initial value problem (3.37), (3.38) has the following solution in the frequency domain:
$$Y(s) = \frac{1}{p(s)}\left(F(s) + y_0\delta(p(s)) + y_1\delta^2(p(s)) + \cdots + y_{n-1}\delta^n(p(s))\right). \tag{3.39}$$
Hence, the solution in the time domain is $y(t) = \mathcal{L}^{-1}[Y(s)](t)$. If $y_0, y_1, \dots, y_{n-1}$ are arbitrary constants, then formula (3.39) provides the general solution of the DE (3.37). This method can be applied even if $f(t)$ is only piecewise continuous.
Similar methods can be used to solve linear partial differential equations.
Example 3.4.1. Solve the equation
$$y''(t) - 5y'(t) + 6y(t) = e^t$$
with initial conditions $y(0) = -1$, $y'(0) = 1$.

Applying the Laplace transform, one gets
$$\mathcal{L}[y''(t) - 5y'(t) + 6y(t)](s) = \mathcal{L}[e^t](s).$$
Using Theorem 3.2.1 (Linearity), one gets
$$\mathcal{L}[y''(t)](s) - 5\mathcal{L}[y'(t)](s) + 6\mathcal{L}[y(t)](s) = \mathcal{L}[e^t](s).$$
Due to Theorem 3.2.14 (General Differentiation of the Original),
$$\mathcal{L}[y''(t)](s) = s^2Y(s) - sy(0) - y'(0) = s^2Y(s) + s - 1$$
and
$$\mathcal{L}[y'(t)](s) = sY(s) - y(0) = sY(s) + 1.$$
Also formula (3.2) gives us
$$\mathcal{L}[e^t](s) = \frac{1}{s-1}.$$
Combining these equalities, one gets
$$(s^2 - 5s + 6)Y(s) = 6 - s + \frac{1}{s-1}.$$
Therefore,
$$Y(s) = \frac{-s^2 + 7s - 5}{(s-1)(s-2)(s-3)}.$$
Decomposing this into partial fractions, one gets
$$Y(s) = \frac{1}{2(s-1)} - \frac{5}{s-2} + \frac{7}{2(s-3)}.$$
Hence, using again formula (3.2), one obtains
$$y(t) = \frac12 e^t - 5e^{2t} + \frac72 e^{3t}.$$
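A candidate solution of an initial value problem is easy to double-check by direct substitution. The sketch below (Python, standard library) evaluates the solution found above and its derivatives in closed form and tests both the equation and the initial conditions at a few sample points:

```python
import math

# solution found above and its derivatives
y   = lambda t: 0.5 * math.exp(t) - 5 * math.exp(2 * t) + 3.5 * math.exp(3 * t)
yp  = lambda t: 0.5 * math.exp(t) - 10 * math.exp(2 * t) + 10.5 * math.exp(3 * t)
ypp = lambda t: 0.5 * math.exp(t) - 20 * math.exp(2 * t) + 31.5 * math.exp(3 * t)

assert abs(y(0) + 1) < 1e-12 and abs(yp(0) - 1) < 1e-12  # initial conditions
for t in (0.0, 0.5, 1.0, 2.0):
    residual = ypp(t) - 5 * yp(t) + 6 * y(t) - math.exp(t)
    assert abs(residual) < 1e-7
print("ODE and initial conditions verified")
```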

Example 3.4.2. Solve the equation
$$y'''(t) - y'(t) = \sinh t$$
with initial conditions $y(0) = 0$, $y'(0) = 0$, $y''(0) = 1$.
We proceed in the same way as in Example 3.4.1. The resulting algebraic equation is
$$s^3Y(s) - 1 - sY(s) = \frac{1}{s^2-1},$$
which implies that $Y(s) = \dfrac{s}{(s^2-1)^2}$.
To determine the original, we will use residues and formula (3.30). Consider $G(s) = Y(s)e^{st}$, $t \ge 0$. The isolated singular points of $G$ are $\pm 1$, both poles of order $2$. Since
$$\operatorname{res}(G, 1) = \lim_{s\to 1}\left(\frac{se^{st}}{(s+1)^2}\right)' = \lim_{s\to 1}\frac{e^{st}[(st+1)(s+1)-2s]}{(s+1)^3} = \frac{te^t}{4},$$
$$\operatorname{res}(G, -1) = \lim_{s\to -1}\left(\frac{se^{st}}{(s-1)^2}\right)' = \lim_{s\to -1}\frac{e^{st}[(st+1)(s-1)-2s]}{(s-1)^3} = -\frac{te^{-t}}{4},$$
it follows that
$$y(t) = \operatorname{res}(G, 1) + \operatorname{res}(G, -1) = \frac{t(e^t - e^{-t})}{4} = \frac{t\sinh t}{2}.$$
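As before, the answer can be confirmed by substitution; the derivatives of $y(t) = t\sinh(t)/2$ have short closed forms. A sketch (Python, standard library):

```python
import math

# y(t) = t*sinh(t)/2 and its first three derivatives
y    = lambda t: t * math.sinh(t) / 2
yp   = lambda t: (math.sinh(t) + t * math.cosh(t)) / 2
ypp  = lambda t: (2 * math.cosh(t) + t * math.sinh(t)) / 2
yppp = lambda t: (3 * math.sinh(t) + t * math.cosh(t)) / 2

assert y(0) == 0 and yp(0) == 0 and abs(ypp(0) - 1) < 1e-12  # initial conditions
for t in (0.3, 1.0, 2.5):
    assert abs(yppp(t) - yp(t) - math.sinh(t)) < 1e-9        # the equation itself
print("y(t) = t*sinh(t)/2 verified")
```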
The Laplace transform can be used in a similar way to solve some differential equations whose coefficients are polynomial functions of $t$.

Example 3.4.3. Solve the equation
$$ty'' - 2y' - ty = -2\cosh t$$
with initial conditions $y(0) = 0$, $y'(0) = 1$.
Applying the Laplace transform, one gets
$$\mathcal{L}[ty''](s) - 2\mathcal{L}[y'](s) - \mathcal{L}[ty](s) = -2\mathcal{L}[\cosh t](s).$$
Using Theorem 3.2.16 (Differentiation of the Image), and then Theorem 3.2.14 (General Differentiation of the Original),
$$\mathcal{L}[ty''](s) \overset{(3.15)}{=} -\left(\mathcal{L}[y''](s)\right)' \overset{(3.12)}{=} -(s^2Y(s) - 1)' = -(2sY(s) + s^2Y'(s)) = -2sY(s) - s^2Y'(s).$$
From the same formulas as before we get that
$$\mathcal{L}[y'](s) = sY(s) \quad\text{and}\quad \mathcal{L}[ty](s) = -Y'(s).$$
Also, the formula for the Laplace transform of the hyperbolic cosine (Example 3.2.4) is
$$\mathcal{L}[\cosh t](s) = \frac{s}{s^2-1}.$$
Hence, $(-s^2+1)Y'(s) - 4sY(s) = \dfrac{-2s}{s^2-1}$, which is equivalent to
$$(s^2-1)^2\cdot Y'(s) + 4s(s^2-1)Y(s) = 2s.$$
This is the same as
$$\left[(s^2-1)^2\cdot Y(s)\right]' = 2s.$$
Integrating, we find that $(s^2-1)^2Y(s) = s^2 + C$, where $C \in \mathbb{R}$. Therefore,
$$Y(s) = \frac{s^2+C}{(s^2-1)^2}.$$
To determine the original, we will again use residues and formula (3.30). Consider $G(s) = \dfrac{(s^2+C)e^{st}}{(s^2-1)^2}$. The isolated singular points of $G$ are $\pm 1$, both poles of order $2$. Since
$$\operatorname{res}(G, 1) = \lim_{s\to 1}\left(\frac{(s^2+C)e^{st}}{(s+1)^2}\right)' = \frac{(1-C)e^t + (1+C)te^t}{4},$$
$$\operatorname{res}(G, -1) = \lim_{s\to -1}\left(\frac{(s^2+C)e^{st}}{(s-1)^2}\right)' = \frac{-(1-C)e^{-t} + (1+C)te^{-t}}{4},$$
it follows that
$$y(t) = \operatorname{res}(G, 1) + \operatorname{res}(G, -1) = \frac{1+C}{2}t\cosh t + \frac{1-C}{2}\sinh t.$$

3.4.2 Systems of Differential Equations

A first order linear system of differential equations (SDE) with constant coefficients has the form
$$Ay'(t) = f(t), \tag{3.40}$$
where $A$ is a real nonsingular $n \times n$ matrix, $f(t) = [f_1(t), f_2(t), \dots, f_n(t)]^T$ is an $n$-dimensional vector, whose components are continuous original functions, and $y(t) = [y_1(t), y_2(t), \dots, y_n(t)]^T$ is the unknown vector function. For the study of systems of differential equations, see [12] and [13].
The Laplace transform of the vector function $f(t)$ is the vector function
$$F(s) = [\mathcal{L}[f_1(t)](s), \mathcal{L}[f_2(t)](s), \dots, \mathcal{L}[f_n(t)](s)]^T.$$
One applies the Laplace transform to the SDE (3.40) and by the extension of linearity to vectors and matrices, one obtains the algebraic system
$$A[sY(s) - y(0)] = F(s),$$
which has the solution
$$Y(s) = \frac{1}{s}\left(A^{-1}F(s) + y(0)\right).$$
Notice that the SDE (3.40) can be written in the form $y' = A^{-1}f(t)$, since the matrix $A$ is nonsingular.
If the equations of the SDE have different orders, one can apply the Laplace transform to each equation as in Subsection 3.4.1 (Differential Equations). One obtains an algebraic system which can be solved by suitable methods (substitution, reduction, Cramer's rule etc.) in the frequency domain. Then, using the inverse Laplace transform, we determine the solution $y(t)$.

Example 3.4.4. Solve the system
$$\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} y_1' \\ y_2' \end{pmatrix} = \begin{pmatrix} e^t \\ e^{-t} \end{pmatrix}$$
with initial conditions $y_1(0) = y_2(0) = 0$.
It follows that
$$F(s) = \left[\mathcal{L}[e^t](s), \mathcal{L}[e^{-t}](s)\right]^T = \begin{pmatrix} \dfrac{1}{s-1} \\[2mm] \dfrac{1}{s+1} \end{pmatrix} \quad\text{and}\quad A^{-1} = \frac13\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix};$$
hence,
$$Y(s) = \frac1s\left(\frac13\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} \dfrac{1}{s-1} \\[2mm] \dfrac{1}{s+1} \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \end{pmatrix}\right) = \frac{1}{3s}\begin{pmatrix} \dfrac{2}{s-1} - \dfrac{1}{s+1} \\[2mm] -\dfrac{1}{s-1} + \dfrac{2}{s+1} \end{pmatrix}.$$
It follows that
$$Y_1(s) = \frac{1}{3s}\left(\frac{2}{s-1} - \frac{1}{s+1}\right) \quad\text{and}\quad Y_2(s) = \frac{1}{3s}\left(-\frac{1}{s-1} + \frac{2}{s+1}\right).$$
Using the partial fraction decompositions
$$\frac{1}{s(s-1)} = \frac{1}{s-1} - \frac1s \quad\text{and}\quad \frac{1}{s(s+1)} = \frac1s - \frac{1}{s+1},$$
one gets
$$Y_1(s) = \frac13\left(\frac{2}{s-1} + \frac{1}{s+1} - \frac3s\right) \Rightarrow y_1(t) = \frac13\left(2e^t + e^{-t} - 3\right)$$
and
$$Y_2(s) = \frac13\left(-\frac{1}{s-1} - \frac{2}{s+1} + \frac3s\right) \Rightarrow y_2(t) = \frac13\left(-e^t - 2e^{-t} + 3\right).$$
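The pair $(y_1, y_2)$ can be checked against the original system, whose rows read $2y_1' + y_2' = e^t$ and $y_1' + 2y_2' = e^{-t}$. A sketch (Python, standard library):

```python
import math

# solution pair found above and its derivatives
y1  = lambda t: (2 * math.exp(t) + math.exp(-t) - 3) / 3
y2  = lambda t: (-math.exp(t) - 2 * math.exp(-t) + 3) / 3
y1p = lambda t: (2 * math.exp(t) - math.exp(-t)) / 3
y2p = lambda t: (-math.exp(t) + 2 * math.exp(-t)) / 3

assert y1(0) == 0 and y2(0) == 0                             # initial conditions
for t in (0.0, 0.7, 1.5):
    assert abs(2 * y1p(t) + y2p(t) - math.exp(t)) < 1e-12    # first row
    assert abs(y1p(t) + 2 * y2p(t) - math.exp(-t)) < 1e-12   # second row
print("system solution verified")
```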
Example 3.4.5. Solve the system
$$\begin{cases} y'(t) - y(t) + 2x(t) = 0 \\ y''(t) + 2x'(t) = 2t - \cos 2t \end{cases}$$
with initial conditions $y(0) = 0$, $y'(0) = 2$, $x(0) = -1$.

Applying the Laplace transform to each equation and also Theorem 3.2.1 (Linearity), one gets
$$\begin{cases} \mathcal{L}[y'(t)](s) - \mathcal{L}[y(t)](s) + 2\mathcal{L}[x(t)](s) = 0 \\ \mathcal{L}[y''(t)](s) + 2\mathcal{L}[x'(t)](s) = \mathcal{L}[2t - \cos 2t](s). \end{cases}$$
Using Theorem 3.2.14 (General Differentiation of the Original), we obtain that
$$\begin{cases} sY(s) - y(0) - Y(s) + 2X(s) = 0 \\ s^2Y(s) - sy(0) - y'(0) + 2sX(s) - 2x(0) = \dfrac{2}{s^2} - \dfrac{s}{s^2+4}, \end{cases}$$
which is equivalent to
$$\begin{cases} (s-1)Y(s) + 2X(s) = 0 \\ s^2Y(s) + 2sX(s) = \dfrac{2}{s^2} - \dfrac{s}{s^2+4}. \end{cases}$$
Solving the algebraic system with unknowns $Y(s)$ and $X(s)$, one gets
$$Y(s) = \frac{2}{s^3} - \frac{1}{s^2+4} \Rightarrow y(t) = t^2 - \frac12\sin(2t)$$
and
$$X(s) = -\frac{(s-1)Y(s)}{2} = -\frac{1}{s^2} + \frac{1}{s^3} + \frac12\cdot\frac{s}{s^2+4} - \frac12\cdot\frac{1}{s^2+4} \Rightarrow x(t) = -t + \frac{t^2}{2} + \frac12\cos(2t) - \frac14\sin(2t).$$
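A quick consistency check of the pair obtained above against the two equations of the system (Python, standard library; derivatives written out in closed form):

```python
import math

y   = lambda t: t * t - math.sin(2 * t) / 2
yp  = lambda t: 2 * t - math.cos(2 * t)
ypp = lambda t: 2 + 2 * math.sin(2 * t)
x   = lambda t: -t + t * t / 2 + math.cos(2 * t) / 2 - math.sin(2 * t) / 4
xp  = lambda t: -1 + t - math.sin(2 * t) - math.cos(2 * t) / 2

assert y(0) == 0
for t in (0.0, 0.4, 1.3, 2.0):
    assert abs(yp(t) - y(t) + 2 * x(t)) < 1e-12                        # first equation
    assert abs(ypp(t) + 2 * xp(t) - (2 * t - math.cos(2 * t))) < 1e-12  # second equation
print("both equations verified")
```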

3.4.3 Integral Equations

An integral equation (IE) has the form
$$Ay(t) + \int_I k(t,r)y(r)\,dr = f(t),$$
where $f(t)$ and $k(t,r)$ are given original functions with respect to $t$, $I \subset \mathbb{R}$ is an interval, $r \in I$, $A \in \mathbb{R}$ and $y(t)$ is the unknown original function. The function $k(t,r)$ is called the kernel of the IE.
An IE can be solved using the Laplace transform if it is of convolution type, i.e. of the form
$$Ay(t) + \int_0^t k(t-r)y(r)\,dr = f(t). \tag{3.41}$$
One denotes as usual $\mathcal{L}[f(t)](s) = F(s)$, $\mathcal{L}[y(t)](s) = Y(s)$ and $\mathcal{L}[k(t)](s) = K(s)$. Using one of the equivalent definitions of the convolution (3.19c) and Theorem 3.2.26 (Convolution), we obtain
$$\mathcal{L}\left[\int_0^t k(t-r)y(r)\,dr\right](s) = \mathcal{L}[(k * y)(t)](s) = K(s)\cdot Y(s).$$
By applying the Laplace transform to the IE (3.41), one obtains the algebraic equation
$$(A + K(s))Y(s) = F(s), \tag{3.42}$$
which has the solution $Y(s) = \dfrac{F(s)}{A + K(s)}$.

Example 3.4.6. Solve the equation
$$2y(t) - \int_0^t \sin(t-x)y(x)\,dx = \cos t.$$
In the frequency domain,
$$2\mathcal{L}[y(t)](s) - \mathcal{L}\left[\int_0^t \sin(t-x)y(x)\,dx\right](s) = \mathcal{L}[\cos t](s).$$
But $\displaystyle\int_0^t \sin(t-x)y(x)\,dx = \sin t * y(t)$ (see (3.19c)); hence, using Theorem 3.2.26 (Convolution), one gets
$$\mathcal{L}\left[\int_0^t \sin(t-x)y(x)\,dx\right](s) = \mathcal{L}[\sin t * y(t)](s) = \mathcal{L}[\sin t](s)\cdot\mathcal{L}[y(t)](s) = \frac{1}{s^2+1}Y(s).$$
It follows that
$$2Y(s) - \frac{1}{s^2+1}Y(s) = \frac{s}{s^2+1}.$$
This is equivalent to
$$Y(s) = \frac{s}{2s^2+1} = \frac12\cdot\frac{s}{s^2+\frac12} = \frac12\cdot\mathcal{L}\left[\cos\left(\frac{t}{\sqrt2}\right)\right](s),$$
which implies that
$$y(t) = \frac12\cos\left(\frac{t}{\sqrt2}\right).$$
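Substituting the answer back into the integral equation gives an independent check: the convolution integral is evaluated numerically and compared with $\cos t - 2y(t)$. A sketch (Python, standard library):

```python
import math

def simpson(f, a, b, n=400):
    # composite Simpson's rule on [a, b]
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(a + k * h)
    return acc * h / 3

y = lambda t: math.cos(t / math.sqrt(2)) / 2
for t in (0.5, 1.0, 3.0):
    integral = simpson(lambda x: math.sin(t - x) * y(x), 0.0, t)
    assert abs(2 * y(t) - integral - math.cos(t)) < 1e-8
print("integral equation verified")
```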

By combining the DEs (3.37) with the IEs (3.41), one obtains integro-differential equations (IDEs) of the following form:
$$a_ny^{(n)} + \cdots + a_1y' + a_0y + \int_0^t k(t-r)y(r)\,dr = f(t). \tag{3.43}$$
By applying the Laplace transform, the left hand side transforms into $(p(s) + K(s))Y(s)$ minus the initial-condition terms, so one gets an algebraic equation with the following solution (see (3.39) and (3.42)):
$$Y(s) = \frac{1}{p(s) + K(s)}\left[F(s) + y_0\delta(p(s)) + \cdots + y_{n-1}\delta^n(p(s))\right]. \tag{3.44}$$

Example 3.4.7. Solve the equation
$$ y'(t) + y(t) + \int_0^t e^{t-x}\,y(x)\,dx = t $$
with initial condition y(0) = 1.

We apply the Laplace transform, use Theorem 3.2.1 (Linearity) and we get that
$$ L[y'(t)](s) + L[y(t)](s) + L\Big[\int_0^t e^{t-x}\,y(x)\,dx\Big](s) = L[t](s). $$
Since $\int_0^t e^{t-x}\,y(x)\,dx = e^t * y(t)$ (see (3.19c)), one obtains the algebraic equation in the frequency domain
$$ sY(s) - 1 + Y(s) + \frac{1}{s-1}\cdot Y(s) = \frac{1}{s^2}. $$
This is equivalent to
$$ \Big(s + 1 + \frac{1}{s-1}\Big) Y(s) = 1 + \frac{1}{s^2}; $$
hence,
$$ Y(s) = \Big(\frac{1}{s} - \frac{1}{s^2}\Big)\Big(1 + \frac{1}{s^2}\Big) = \frac{1}{s} - \frac{1}{s^2} + \frac{1}{s^3} - \frac{1}{s^4}, $$
which implies that
$$ y(t) = 1 - t + \frac{1}{2}t^2 - \frac{1}{6}t^3. $$
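This solution can also be confirmed numerically: plug the polynomial and its exact derivative back into the IDE and check that the residual vanishes. A small sketch (trapezoidal quadrature for the convolution term; test points chosen arbitrarily):

```python
import math

def y(t):
    return 1 - t + t**2 / 2 - t**3 / 6

def dy(t):
    # exact derivative of y(t)
    return -1 + t - t**2 / 2

def conv(t, n=2000):
    # trapezoidal rule for the integral of exp(t-x)*y(x) over [0, t]
    h = t / n
    s = 0.0
    for k in range(n + 1):
        x = k * h
        w = 0.5 if k in (0, n) else 1.0
        s += w * math.exp(t - x) * y(x)
    return h * s

assert y(0) == 1  # initial condition
for t in (0.5, 1.0, 2.0):
    assert abs(dy(t) + y(t) + conv(t) - t) < 1e-4
```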

3.4.4 Linear Time-Invariant Control Systems
A linear time-invariant control system (LTI system) has the state space representation (see [2] and [25])
$$ \Sigma: \quad \dot{x}(t) = Ax(t) + Bu(t), \quad (3.45) $$
$$ \phantom{\Sigma:} \quad y(t) = Cx(t) + Du(t), \quad (3.46) $$
where
- $x(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T \in \mathbb{R}^n$ is the state,
- $u(t) = [u_1(t), u_2(t), \ldots, u_m(t)]^T \in \mathbb{R}^m$ is the input or control,
- $y(t) = [y_1(t), y_2(t), \ldots, y_p(t)]^T \in \mathbb{R}^p$ is the output,
- A, B, C, D are real constant matrices of dimensions n × n, n × m, p × n, p × m, respectively.

The number n is called the dimension of the system Σ and it is denoted by dim Σ.
As in the case of differential equations, by adapting to vectors the proof
of [28, Theorem A.6], one can show that if the components of the vector
function u(t) are continuous original functions, then this is also true for the
components of the state x(t) and for the components of the output y(t).
We apply Laplace transform to the equation (3.45) using the notations

X(s) = L[x(t)](s) = [L[x1 (t)](s), L[x2 (t)](s), . . . , L[xn (t)](s)]T

and

U (s) = L[u(t)](s) = [L[u1 (t)](s), L[u2 (t)](s), . . . , L[um (t)](s)]T .

If we use Theorem 3.2.1 (Linearity) and Theorem 3.2.13 (Differentiation of


the Original), then (3.45) is transformed into the algebraic equation

$$ sX(s) - x(0) = AX(s) + BU(s) \;\Leftrightarrow\; (sI_n - A)X(s) = BU(s) + x(0). \quad (\bullet) $$
Since det(sI_n − A) = 0 implies that s ∈ σ(A) (s is an eigenvalue of the matrix A), for s ∉ σ(A) we have det(sI_n − A) ≠ 0. Hence, the inverse (sI_n − A)^{-1} exists.

We multiply equation (•) by (sIn − A)−1 for s ∈ C \ σ(A) and we get the
formula of the state of the system Σ in the frequency domain
X(s) = (sIn − A)−1 BU (s) + (sIn − A)−1 x(0). (3.47)
By Theorem 3.2.1 (Linearity), the Laplace transform of (3.46) is the equa-
tion
Y (s) = CX(s) + DU (s),
where Y (s) = L[y(t)](s) = [L[y1 (t)](s), L[y2 (t)](s), . . . , L[yp (t)](s)]T .
By replacing X(s) from (3.47) into the previous equation, one obtains
the general response of the system Σ in the frequency domain
Y (s) = C(sIn − A)−1 BU (s) + C(sIn − A)−1 x(0) + DU (s). (3.48)
For the initial state x(0) = 0, it follows that the forced response of the
system Σ has the formula
$$ Y(s) = \big[C(sI_n - A)^{-1}B + D\big]\,U(s). \quad (3.49) $$
Definition 3.4.8. The matrix T (s) = C(sIn − A)−1 B + D is called the
transfer matrix of the system Σ.
If m = p = 1, i.e. Σ is a single input-single output system (SISO), then
T (s) is the transfer function of Σ.
By (3.49), one obtains the following characterization of the LTI systems
with null initial state.
Proposition 3.4.9. If x(0) = 0, then the input-output map of the system Σ
is
Y (s) = T (s)U (s). (3.50)
Example 3.4.10. Consider the LTI system (3.45), (3.46) given by the following matrices:
$$ A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 3 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 1 \\ 0 & -1 \\ 1 & 0 \end{bmatrix}, \quad C = [\,1 \ 0 \ 2\,], \quad D = [\,0 \ 1\,]. $$
Determine the transfer matrix T(s) and the output of the system produced by the control $u(t) = \begin{bmatrix} h(t) \\ 0 \end{bmatrix}$, where h(t) is Heaviside's step function.

It follows that $T(s) = C(sI_3 - A)^{-1}B + D$. One method to find $(sI_3 - A)^{-1}$ is
$$ (sI_3 - A)^{-1} = \frac{1}{\det(sI_3 - A)}\,(sI_3 - A)^{*}. $$
Since $sI_3 - A = \begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ -2 & -3 & s \end{bmatrix}$, the characteristic polynomial of A is
$$ p(s) = \det(sI_3 - A) = s^3 - 3s - 2. $$
Then the adjoint matrix is $(sI_3 - A)^{*} = \begin{bmatrix} s^2-3 & s & 1 \\ 2 & s^2 & s \\ 2s & 3s+2 & s^2 \end{bmatrix}$. Therefore,
$$ C(sI_3 - A)^{*} = [\,s^2+4s-3 \quad 7s+4 \quad 2s^2+1\,], \qquad C(sI_3 - A)^{*}B = [\,2s^2+1 \quad s^2-3s-7\,] $$
and
$$ C(sI_3 - A)^{-1}B = \Big[\,\frac{2s^2+1}{s^3-3s-2} \quad \frac{s^2-3s-7}{s^3-3s-2}\,\Big]. $$
The transfer matrix is
$$ T(s) = C(sI_3 - A)^{-1}B + D = \Big[\,\frac{2s^2+1}{s^3-3s-2} \quad \frac{s^3+s^2-6s-9}{s^3-3s-2}\,\Big]. $$
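Symbolic inversion of $sI_3 - A$ is error-prone, so it helps to cross-check T(s) numerically at a sample point: solve $(sI_3 - A)z = b$ for each column b of B and compare $Cz + D$ with the closed-form entries. A minimal pure-Python sketch (the evaluation point s = 1 is an arbitrary choice):

```python
def solve3(M, b):
    # Gaussian elimination with partial pivoting for a 3x3 system M z = b
    M = [row[:] for row in M]
    b = b[:]
    n = 3
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
            b[r] -= f * b[i]
    z = [0.0] * n
    for i in range(n - 1, -1, -1):
        z[i] = (b[i] - sum(M[i][c] * z[c] for c in range(i + 1, n))) / M[i][i]
    return z

A = [[0, 1, 0], [0, 0, 1], [2, 3, 0]]
B = [[0, 1], [0, -1], [1, 0]]
C = [1, 0, 2]
D = [0, 1]
s = 1.0
sIA = [[s * (i == j) - A[i][j] for j in range(3)] for i in range(3)]
T = []
for j in range(2):                     # one column of B at a time
    col = solve3(sIA, [B[i][j] for i in range(3)])
    T.append(sum(C[i] * col[i] for i in range(3)) + D[j])

p = s**3 - 3 * s - 2                   # denominator, equal to -4 at s = 1
assert abs(T[0] - (2 * s**2 + 1) / p) < 1e-9
assert abs(T[1] - (s**3 + s**2 - 6 * s - 9) / p) < 1e-9
```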

The Laplace transform of the control is
$$ U(s) = L[u(t)](s) = \begin{bmatrix} L[h(t)](s) \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{s} \\ 0 \end{bmatrix}. $$
Then the output in the frequency domain is
$$ Y(s) = T(s)U(s) = \frac{2s^2+1}{s(s^3-3s-2)}. $$

In order to determine the output in the time domain, we determine the original $y(t) = L^{-1}[Y(s)](t)$. We decompose Y(s) using partial fraction decomposition and we get that
$$ Y(s) = \frac{2s^2+1}{s(s^3-3s-2)} = \frac{2s^2+1}{s(s+1)^2(s-2)} = \frac{A}{s} + \frac{B}{s+1} + \frac{C}{(s+1)^2} + \frac{D}{s-2}, \quad A, B, C, D \in \mathbb{R}. $$
We obtain that
- $A = \lim_{s\to 0} sY(s) = \lim_{s\to 0} \frac{2s^2+1}{s^3-3s-2} = -\frac{1}{2}$;
- $C = \lim_{s\to -1} (s+1)^2 Y(s) = \lim_{s\to -1} \frac{2s^2+1}{s(s-2)} = 1$;
- $D = \lim_{s\to 2} (s-2)Y(s) = \lim_{s\to 2} \frac{2s^2+1}{s(s+1)^2} = \frac{1}{2}$.

Also, if one adds the simple fractions in the decomposition of Y(s), one can see that the coefficient of s³ in the resulting numerator is A + B + D, and this must be equal to the coefficient of s³ in the numerator of Y(s). Hence, A + B + D = 0, so B = 0. Therefore,
$$ Y(s) = -\frac{1}{2}\cdot\frac{1}{s} + \frac{1}{(s+1)^2} + \frac{1}{2}\cdot\frac{1}{s-2}. $$
It follows that the original is
$$ y(t) = -\frac{1}{2} + te^{-t} + \frac{1}{2}e^{2t}, \quad t \ge 0. $$
In conclusion, this is the response of the system to Heaviside’s step function.
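Partial-fraction coefficients are easy to get wrong, so a quick numerical check is worthwhile: the decomposed form must agree with Y(s) at any sample points away from the poles. A minimal sketch:

```python
def Y(s):
    return (2 * s**2 + 1) / (s * (s + 1) ** 2 * (s - 2))

def Y_pf(s):
    # partial-fraction form with A = -1/2, B = 0, C = 1, D = 1/2
    return -0.5 / s + 1.0 / (s + 1) ** 2 + 0.5 / (s - 2)

for s in (1.0, 3.0, -2.0, 0.5):
    assert abs(Y(s) - Y_pf(s)) < 1e-12
```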
A rational function $\frac{P(s)}{Q(s)}$, where P and Q are polynomials, is said to be proper if deg P ≤ deg Q. A matrix M(s) is said to be proper rational if all its entries are proper rational functions. The above example suggests the following result (the proof is an exercise for the reader).
Proposition 3.4.11. The transfer matrix T(s) of the LTI system (3.45), (3.46) is a p × m proper rational matrix.
We solved the problem of determining the transfer matrix of a given LTI system (3.45), (3.46) when the matrices A, B, C and D are known. An important problem is the converse.
We solved the problem of determining the transfer matrix of a given
LTI system (3.45), (3.46) when the matrices A, B, C and D are known. An
important problem is the converse.

Realization Problem
Given a p × m proper rational matrix T(s), determine a realization of T(s), i.e. a quadruplet of matrices Σ = (A, B, C, D) such that
$$ T(s) = C(sI_n - A)^{-1}B + D. $$
A realization is called minimal if $\dim\Sigma \le \dim\widetilde\Sigma$ for any realization $\widetilde\Sigma$ of T(s). For a MATLAB solution of finding a minimal realization, one should consult Subsection 3.6.3.

3.4.5 RLC Circuits


Electrical circuits which contain resistors (R), inductors (L) and capaci-
tors (C) are called RLC circuits.
In the study of RLC circuits, one applies Kirchhoff’s laws.

1st Law (Kirchhoff’s Current Law (KCL))

The electrical current flowing into a node is equal to the current out of it.

2nd Law (Kirchhoff’s Voltage Law (KVL))

The sum of all voltages around any closed loop in a circuit is equal to 0.

Figure 3.4: The RLC Circuit

Now we analyze the RLC circuit presented in Figure 3.4. Consider R, L


and C to be the resistance, the inductance and the capacitance, respectively.
By VR , VL and VC we denote the voltage across the resistor, inductor and
capacitor, respectively. The function V (t) is the time varying voltage from
the source and I(t) is the current through the circuit. One considers V (t) as
the input of the system and I(t) as the output. By (KVL), one obtains

$$ V_R + V_L + V_C = V(t), $$
where $V_R = RI(t)$, $V_L = L\cdot\frac{dI(t)}{dt}$ and $V_C = V(0) + \frac{1}{C}\int_0^t I(x)\,dx$. Hence, the following IDE holds:
$$ RI(t) + L\cdot\frac{dI(t)}{dt} + V(0) + \frac{1}{C}\int_0^t I(x)\,dx = V(t). $$

We apply the Laplace transform and make the notations $\tilde{I}(s) := L[I(t)](s)$ and $\tilde{V}(s) := L[V(t)](s)$. Using Theorem 3.2.1 (Linearity), Theorem 3.2.13 (Differentiation of the Original) and Theorem 3.2.20 (Integration of the Original), one obtains the equation
$$ R\tilde{I}(s) + L\cdot\big(s\tilde{I}(s) - I(0)\big) + \frac{V(0)}{s} + \frac{1}{C}\cdot\frac{\tilde{I}(s)}{s} = \tilde{V}(s). $$
For I(0) = V(0) = 0, we get that the input-output map in the frequency domain becomes
$$ \tilde{I}(s) = \frac{s}{Ls^2 + Rs + \frac{1}{C}}\cdot\tilde{V}(s), $$

hence the transfer function (Laplace admittance) is
$$ T(s) = \frac{\tilde{I}(s)}{\tilde{V}(s)} = \frac{s}{Ls^2 + Rs + \frac{1}{C}}. $$
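A well-known consequence of this transfer function is resonance: at $\omega_0 = \frac{1}{\sqrt{LC}}$ the reactive terms cancel ($Ls^2 + \frac1C = 0$ at $s = i\omega_0$), so $|T(i\omega_0)| = \frac1R$, the maximum of the frequency response. A quick numerical sketch (the component values R, L, C below are arbitrary illustrative choices):

```python
import math

R, L, C = 10.0, 1e-3, 1e-6          # illustrative values: 10 ohm, 1 mH, 1 uF

def T(s):
    # series-RLC admittance transfer function T(s) = s / (L s^2 + R s + 1/C)
    return s / (L * s**2 + R * s + 1 / C)

w0 = 1 / math.sqrt(L * C)           # resonant angular frequency
assert abs(abs(T(1j * w0)) - 1 / R) < 1e-12
# off resonance the magnitude is strictly smaller
for w in (0.5 * w0, 2 * w0):
    assert abs(T(1j * w)) < 1 / R
```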

3.4.6 Encryption-Decryption of a Message


The current subsection, which contains applications of the Laplace trans-
form in cryptography, was inspired by the works of Hiwarekar [14] and Jayan-
thi and Srinivas [16]. The Laplace transform is used for encrypting the plain
text and the corresponding inverse Laplace transform is used for decryption.
Before we explain how this works, a few definitions are in order.

Definition 3.4.12. A plain text signifies a message that can be understood


by the sender, the recipient and also by anyone else who gets access to it.
When a plain text message is codified using any suitable scheme, the resulting
message is called a cipher text.

Definition 3.4.13. Encryption transforms a plain text message into a cipher


text, whereas decryption transforms a cipher text message back into plain
text. A key is a piece of information used in combination with an algorithm
(a cipher ) for encryption and vice versa (for decryption).

Before encryption, the plain text is coded and for this we can use, for
example, the code that allocates to each letter its position in the English
alphabet. Let us denote this function by Φ. Thus, character codes are
positive integers starting with 1 and ending with 26. For example, A is 1, B
is 2 and so on.
Now consider a message

m1 m2 . . . mi ,

where i ∈ N∗ represents its length. For encryption we are going to use the
following algorithm:

1) Assign to each letter its code and construct the following infinite se-
quence {Gn }n of positive integers, where

Gn = Φ(mn+1 ), n = 0, i − 1 and Gn = 0, ∀n ≥ i;


2) Consider the function $t^p f(t)$, where $f(t) = \sum_{n=0}^{\infty} a_n t^n$, $a_n \ge 0$, and $p \in \mathbb{N}^*$. Define the function
$$ g(t) = t^p \sum_{n=0}^{i-1} G_n a_n t^n = \sum_{n=0}^{i-1} G_n a_n t^{n+p}; $$

3) Use the Laplace transform and Example 3.1.11 (Power Function) to compute
$$ L[g(t)](s) = L\Big[\sum_{n=0}^{i-1} G_n a_n t^{n+p}\Big](s) = \sum_{n=0}^{i-1} G_n a_n \frac{(n+p)!}{s^{n+p+1}} = \sum_{n=0}^{i-1} \frac{F_n}{s^{n+p+1}}, $$
where $F_n = G_n a_n (n+p)!$;

4) Determine the sequence {Hn }n of positive integers, where

Hn ≡ Fn mod 26, n = 0, i − 1;

5) Encrypt the initial message into b1 b2 . . . bi , where

bn+1 = Φ−1 (Hn + 1), n = 0, i − 1;

6) Determine the encryption key k0 , k1 , . . . , ki−1 , where

kn = (Fn − Hn )/26, n = 0, i − 1.

Remark 3.4.14. Given an integer n > 1, two integers a and b are said to be congruent modulo n if n is a divisor of their difference. Congruence modulo n is denoted by a ≡ b mod n. For example, 38 ≡ 2 mod 12 because 38 − 2 = 36, which is a multiple of 12. Another way to express this is to say that the remainder obtained by dividing 38 by 12 is 2.

Remark 3.4.15. By choosing different values of p and different functions


f (t), one can encrypt a plain text into a different cipher text. In [15] the
author has chosen p = 2 and f (t) = sinh 2t, but other possibilities would be
f (t) = cosh at or f (t) = eat , a ∈ N∗ .

At the end of the encryption algorithm we obtain the cipher text

b1 b2 . . . bi

and the encryption key


k0 , k1 , . . . , ki−1 .
For decryption we are going to use the following algorithm:

1) Assign to each letter its code and construct a finite sequence {G0n }n of
positive integers, where

G0n = Φ(bn+1 ) − 1, n = 0, i − 1;

2) Use the encryption key and determine a finite sequence {Fn }n of posi-
tive integers, where

Fn = 26 · kn + G0n , n = 0, i − 1;

3) Construct the function F(s), where
$$ F(s) = \sum_{n=0}^{i-1} \frac{F_n}{s^{n+p+1}}; $$

4) Determine the function f (t), where

f (t) = L−1 [F (s)];

5) Denote by $G_n$ the coefficients of the polynomial function f(t) and decrypt the cipher text into $m_1 m_2 \ldots m_i$, where
$$ m_{n+1} = \Phi^{-1}(G_n), \quad n = 0, i-1. $$

Now we show how to use the algorithms described above on the plain text
LAPLACE using p = 1 and f (t) = e2t .

Example 3.4.16 (Encryption). We have the plain text LAPLACE and i =


7.

1) We obtain that

G0 = G3 = 12, G1 = G4 = 1, G2 = 16, G5 = 3, G6 = 5

and Gn = 0, ∀n ≥ 7;

2) Since p = 1 and $f(t) = e^{2t} = \sum_{n=0}^{\infty} \frac{2^n}{n!} t^n$, we get the function
$$ g(t) = t \sum_{n=0}^{6} G_n \frac{2^n}{n!} t^n = \sum_{n=0}^{6} G_n \frac{2^n}{n!} t^{n+1} $$
$$ = G_0 t + G_1 \frac{2}{1!} t^2 + G_2 \frac{2^2}{2!} t^3 + G_3 \frac{2^3}{3!} t^4 + G_4 \frac{2^4}{4!} t^5 + G_5 \frac{2^5}{5!} t^6 + G_6 \frac{2^6}{6!} t^7 $$
$$ = 12t + \frac{2}{1!} t^2 + 16\cdot\frac{2^2}{2!} t^3 + 12\cdot\frac{2^3}{3!} t^4 + \frac{2^4}{4!} t^5 + 3\cdot\frac{2^5}{5!} t^6 + 5\cdot\frac{2^6}{6!} t^7; $$
1! 2! 3! 4! 5! 6!

3) We compute
$$ L[g(t)](s) = L\Big[\sum_{n=0}^{6} G_n \frac{2^n}{n!} t^{n+1}\Big](s) = \sum_{n=0}^{6} G_n \frac{2^n}{n!}\cdot\frac{(n+1)!}{s^{n+2}} $$
$$ = \frac{12}{s^2} + \frac{4}{s^3} + \frac{192}{s^4} + \frac{384}{s^5} + \frac{80}{s^6} + \frac{576}{s^7} + \frac{2240}{s^8}; $$
s s s s s s s

4) One gets
H0 ≡ 12 mod 26, H1 ≡ 4 mod 26,
H2 ≡ 10 mod 26, H3 ≡ 20 mod 26,
H4 ≡ 2 mod 26, H5 ≡ 4 mod 26, H6 ≡ 4 mod 26;

5) We obtain that

- $b_1 = \Phi^{-1}(H_0 + 1) = \Phi^{-1}(13)$ = M;
- $b_2 = \Phi^{-1}(H_1 + 1) = \Phi^{-1}(5)$ = E;
- $b_3 = \Phi^{-1}(H_2 + 1) = \Phi^{-1}(11)$ = K;
- $b_4 = \Phi^{-1}(H_3 + 1) = \Phi^{-1}(21)$ = U;
- $b_5 = \Phi^{-1}(H_4 + 1) = \Phi^{-1}(3)$ = C;

- $b_6 = \Phi^{-1}(H_5 + 1) = \Phi^{-1}(5)$ = E;
- $b_7 = \Phi^{-1}(H_6 + 1) = \Phi^{-1}(5)$ = E.

Hence, the encryption of the plain text LAPLACE is MEKUCEE;

6) We calculate

- $k_0 = (F_0 - H_0)/26 = (12 - 12)/26 = 0$;
- $k_1 = (F_1 - H_1)/26 = (4 - 4)/26 = 0$;
- $k_2 = (F_2 - H_2)/26 = (192 - 10)/26 = 7$;
- $k_3 = (F_3 - H_3)/26 = (384 - 20)/26 = 14$;
- $k_4 = (F_4 - H_4)/26 = (80 - 2)/26 = 3$;
- $k_5 = (F_5 - H_5)/26 = (576 - 4)/26 = 22$;
- $k_6 = (F_6 - H_6)/26 = (2240 - 4)/26 = 86$.

Hence, the encryption key is 0, 0, 7, 14, 3, 22, 86.
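The whole encryption pass above can be scripted. A minimal sketch, assuming the alphabet map Φ(A) = 1, …, Φ(Z) = 26 and the book's choice p = 1, $f(t) = e^{2t}$; in that case $F_n = G_n a_n (n+1)! = G_n\,2^n(n+1)$, since $a_n = \frac{2^n}{n!}$:

```python
from math import factorial

def encrypt(plain, p=1, a=lambda n: 2**n / factorial(n)):
    # letter codes: A -> 1, ..., Z -> 26
    G = [ord(ch) - ord('A') + 1 for ch in plain]
    # F_n = G_n * a_n * (n+p)!  (here: G_n * 2^n * (n+1) for p = 1, f = e^{2t})
    F = [round(G[n] * a(n) * factorial(n + p)) for n in range(len(G))]
    H = [f % 26 for f in F]
    cipher = ''.join(chr(ord('A') + h) for h in H)   # letter with code H_n + 1
    key = [(f - h) // 26 for f, h in zip(F, H)]
    return cipher, key

cipher, key = encrypt("LAPLACE")
assert cipher == "MEKUCEE"
assert key == [0, 0, 7, 14, 3, 22, 86]
```

Running it on LAPLACE reproduces the cipher text MEKUCEE and the key 0, 0, 7, 14, 3, 22, 86 obtained by hand above.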

Example 3.4.17 (Decryption). We have the cipher text MEKUCEE, thus


i = 7, and the encryption key 0, 0, 7, 14, 3, 22, 86.

1) We obtain that

G00 = 12, G01 = G05 = G06 = 4, G02 = 10, G03 = 20 and G04 = 2;

2) We calculate

• F0 = 26 · k0 + G00 = 26 · 0 + 12 = 12;
• F1 = 26 · k1 + G01 = 26 · 0 + 4 = 4;
• F2 = 26 · k2 + G02 = 26 · 7 + 10 = 192;
• F3 = 26 · k3 + G03 = 26 · 14 + 20 = 384;
• F4 = 26 · k4 + G04 = 26 · 3 + 2 = 80;
• F5 = 26 · k5 + G05 = 26 · 22 + 4 = 576;
• F6 = 26 · k6 + G06 = 26 · 86 + 4 = 2240;

3) We construct the function
$$ F(s) = \sum_{n=0}^{6} \frac{F_n}{s^{n+2}} = \frac{12}{s^2} + \frac{4}{s^3} + \frac{192}{s^4} + \frac{384}{s^5} + \frac{80}{s^6} + \frac{576}{s^7} + \frac{2240}{s^8}; $$

4) We determine the function
$$ f(t) = L^{-1}[F(s)] = 12t + 2t^2 + 32t^3 + 16t^4 + \frac{2}{3}t^5 + \frac{4}{5}t^6 + \frac{4}{9}t^7 $$
$$ = 12t + 1\cdot\frac{2}{1!}t^2 + 16\cdot\frac{2^2}{2!}t^3 + 12\cdot\frac{2^3}{3!}t^4 + 1\cdot\frac{2^4}{4!}t^5 + 3\cdot\frac{2^5}{5!}t^6 + 5\cdot\frac{2^6}{6!}t^7; $$
5! 6!

5) The coefficients of the polynomial function f (t) are

G0 = G3 = 12, G1 = G4 = 1, G2 = 16, G5 = 3 and G6 = 5.

One obtains

- $m_1 = \Phi^{-1}(G_0) = \Phi^{-1}(12)$ = L;
- $m_2 = \Phi^{-1}(G_1) = \Phi^{-1}(1)$ = A;
- $m_3 = \Phi^{-1}(G_2) = \Phi^{-1}(16)$ = P;
- $m_4 = \Phi^{-1}(G_3) = \Phi^{-1}(12)$ = L;
- $m_5 = \Phi^{-1}(G_4) = \Phi^{-1}(1)$ = A;
- $m_6 = \Phi^{-1}(G_5) = \Phi^{-1}(3)$ = C;
- $m_7 = \Phi^{-1}(G_6) = \Phi^{-1}(5)$ = E.

Thus, the original message was indeed LAPLACE.
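The decryption pass admits an equally short sketch. Again assuming Φ(A) = 1, …, Φ(Z) = 26 and the p = 1, $f(t) = e^{2t}$ setup, the coefficient relation $F_n = G_n\,2^n(n+1)$ is simply inverted:

```python
def decrypt(cipher, key, p=1):
    # cipher letter code minus 1 gives H_n; then F_n = 26*k_n + H_n
    H = [ord(ch) - ord('A') for ch in cipher]
    F = [26 * k + h for k, h in zip(key, H)]
    # invert F_n = G_n * 2^n * (n+1)  (the p = 1, f = e^{2t} case)
    G = [F[n] // (2**n * (n + 1)) for n in range(len(F))]
    return ''.join(chr(ord('A') + g - 1) for g in G)

assert decrypt("MEKUCEE", [0, 0, 7, 14, 3, 22, 86]) == "LAPLACE"
```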

3.5 Exercises
E 16. Calculate the Laplace transform of the following original functions f(t):
a) $f(t) = t\sinh(2t) + 2\cos^2 t$, t ≥ 0;
b) $f(t) = e^{-t}\int_0^t xe^x \cos x\,dx$, t ≥ 0;
c) $f(t) = \frac{e^{-2t}+1}{t}\sin(3t)$, t > 0;
d) $f(t) = \frac{1}{\sqrt{t}} - \cosh^2 t$, t > 0;
e) $f(t) = \begin{cases} e^t, & t \in [0,1) \\ 1, & t \in [1,2) \\ 0, & t \ge 2. \end{cases}$

Solution. a) We use Theorem 3.2.1 (Linearity), Theorem 3.2.16 (Differentiation of the Image) and Examples 3.2.3 (Cosine Function) and 3.2.4 (Hyperbolic Sine Function). It follows that
$$ L[t\sinh(2t) + 2\cos^2 t](s) = L[t\sinh(2t)](s) + 2L[\cos^2 t](s) $$
$$ = (-1)\big(L[\sinh(2t)](s)\big)' + 2L\Big[\frac{1+\cos 2t}{2}\Big](s) = (-1)\Big(\frac{2}{s^2-4}\Big)' + L[1](s) + L[\cos 2t](s) $$
$$ = \frac{4s}{(s^2-4)^2} + \frac{1}{s} + \frac{s}{s^2+4}. $$
Finally,
$$ L[t\sinh(2t) + 2\cos^2 t](s) = \frac{4s}{(s^2-4)^2} + \frac{2(s^2+2)}{s(s^2+4)}; $$

b) We use Theorem 3.2.11 (Translation or Frequency Shifting), Theorem 3.2.20 (Integration of the Original), Theorem 3.2.16 (Differentiation of the Image) and Example 3.2.3 (Cosine Function). It follows that
$$ L\Big[e^{-t}\int_0^t xe^x\cos x\,dx\Big](s) = L\Big[\int_0^t xe^x\cos x\,dx\Big](s+1) = \frac{1}{s+1}\,L[te^t\cos t](s+1) $$
$$ = \frac{1}{s+1}\cdot(-1)\big(L[e^t\cos t](s+1)\big)' = \frac{-1}{s+1}\big(L[\cos t](s)\big)' = \frac{-1}{s+1}\cdot\Big(\frac{s}{s^2+1}\Big)'. $$
Finally,
$$ L\Big[e^{-t}\int_0^t xe^x\cos x\,dx\Big](s) = \frac{-1}{s+1}\cdot\frac{1-s^2}{(s^2+1)^2} = \frac{s-1}{(s^2+1)^2}; $$

c) We use Theorem 3.2.22 (Integration of the Image), Theorem 3.2.1 (Linearity), Theorem 3.2.11 (Translation or Frequency Shifting) and Example 3.2.2 (Sine Function). It follows that
$$ L\Big[\frac{e^{-2t}+1}{t}\sin(3t)\Big](s) = L\Big[\frac{e^{-2t}\sin 3t + \sin 3t}{t}\Big](s) = \int_s^\infty L[e^{-2t}\sin 3t + \sin 3t](x)\,dx $$
$$ = \int_s^\infty L[\sin 3t](x+2)\,dx + \int_s^\infty \frac{3}{x^2+9}\,dx = \int_s^\infty \frac{3}{(x+2)^2+9}\,dx + \arctan\frac{x}{3}\Big|_s^\infty $$
$$ = \arctan\frac{x+2}{3}\Big|_s^\infty + \frac{\pi}{2} - \arctan\frac{s}{3} = \Big(\frac{\pi}{2} - \arctan\frac{s+2}{3}\Big) + \Big(\frac{\pi}{2} - \arctan\frac{s}{3}\Big). $$
Finally,
$$ L\Big[\frac{e^{-2t}+1}{t}\sin(3t)\Big](s) = \pi - \arctan\frac{s+2}{3} - \arctan\frac{s}{3}; $$

d) We use Theorem 3.2.1 (Linearity) and Examples 3.1.11 (Power Function) and 3.1.10 (Exponential Function). It follows that
$$ L\Big[\frac{1}{\sqrt t} - \cosh^2 t\Big](s) = L\big[t^{-\frac12}\big](s) - L\Big[\Big(\frac{e^t+e^{-t}}{2}\Big)^2\Big](s) = \frac{\Gamma\big(\frac12\big)}{s^{\frac12}} - L\Big[\frac{e^{2t}+2+e^{-2t}}{4}\Big](s) $$
$$ = \frac{\sqrt\pi}{\sqrt s} - \frac14\Big(\frac{1}{s-2} + \frac{2}{s} + \frac{1}{s+2}\Big). $$
Finally,
$$ L\Big[\frac{1}{\sqrt t} - \cosh^2 t\Big](s) = \frac{\sqrt\pi}{\sqrt s} - \frac{s^2-2}{s(s^2-4)}; $$

e) Since f(t) is defined by different formulas on successive subintervals of [0, ∞), we cannot use the properties of the Laplace transform, but only its definition (Definition 3.1.1). It follows that
$$ L[f(t)](s) = \int_0^\infty f(t)e^{-st}\,dt = \int_0^1 e^t e^{-st}\,dt + \int_1^2 e^{-st}\,dt + \int_2^\infty 0\cdot e^{-st}\,dt $$
$$ = \int_0^1 e^{t(1-s)}\,dt + \int_1^2 e^{-st}\,dt = \frac{e^{t(1-s)}}{1-s}\Big|_0^1 - \frac{e^{-st}}{s}\Big|_1^2. $$
Finally,
$$ L[f(t)](s) = \frac{e^{1-s}-1}{1-s} - \frac{e^{-2s}-e^{-s}}{s}. $$

W 15. Calculate the Laplace transform of the following original functions f(t):
a) $f(t) = t^2 + 2\cos t - e^t\sin 2t$, t ≥ 0;
b) $f(t) = \sin(t+\pi) + 4\sin^2 3t$, t ≥ 0;
c) $f(t) = te^{2t} - \int_0^t x^2 e^{-ax}\,dx$, t > 0, a ∈ R*;
d) $f(t) = \int_0^t e^x (t-x)^{2021}\,dx$, t > 0;
e) $f(t) = \frac{\cos t - \cos 3t}{t}$, t > 0;
f) $f(t) = \begin{cases} 1, & t\in[0,1) \\ t^2, & t\in[1,\pi) \\ 0, & t\ge\pi; \end{cases}$
g) $f(t) = \begin{cases} t, & t\in[0,\pi) \\ \pi - t, & t\in[\pi,2\pi), \end{cases}$  with f(t + 2π) = f(t) for t > 0.

Answer. a) One gets
$$ L[f(t)](s) = \frac{2}{s^3} + \frac{2s}{s^2+1} - \frac{2}{s^2-2s+5}, $$
where $s^2-2s+5 = (s-1)^2+4$ comes from the shift $L[e^t\sin 2t](s) = L[\sin 2t](s-1)$;
b) Since $\sin(t+\pi) = \sin t\cos\pi + \cos t\sin\pi = -\sin t$ and $\sin^2 3t = \frac{1-\cos 6t}{2}$, we use Theorem 3.2.1 (Linearity) and Examples 3.2.2 (Sine Function) and 3.2.3 (Cosine Function) and one obtains
$$ L[f(t)](s) = -\frac{1}{s^2+1} + \frac{72}{s(s^2+36)}; $$
c) One gets
$$ L[f(t)](s) = \frac{1}{(s-2)^2} - \frac{2}{s(s+a)^3}; $$
d) First we notice that the integral is a convolution product (see Theorem 3.2.26 (Convolution)), namely $e^t * t^{2021}$. Then
$$ L[f(t)](s) = \frac{2021!}{s^{2022}(s-1)}; $$
e) One obtains
$$ L[f(t)](s) = \int_s^\infty L[\cos t - \cos 3t](x)\,dx = \int_s^\infty\Big(\frac{x}{x^2+1} - \frac{x}{x^2+9}\Big)dx = \frac12\ln\frac{x^2+1}{x^2+9}\Big|_s^\infty = -\frac12\ln\frac{s^2+1}{s^2+9}; $$
f) One gets
$$ L[f(t)](s) = \frac{1-\pi^2 e^{-\pi s}}{s} + \frac{2}{s^2}\Big[e^{-\pi s}\Big(-\pi-\frac1s\Big) + e^{-s}\Big(1+\frac1s\Big)\Big]; $$
g) Since f is a periodic function with T = 2π, one obtains
$$ L[f(t)](s) = \frac{1}{1-e^{-2\pi s}}\int_0^{2\pi} f(t)e^{-st}\,dt = \frac{1}{1-e^{-2\pi s}}\Big(\int_0^{\pi} te^{-st}\,dt + \int_{\pi}^{2\pi}(\pi-t)e^{-st}\,dt\Big) $$
$$ = \frac{1}{1-e^{-2\pi s}}\Big[\frac{\pi e^{-\pi s}}{s}\big(e^{-\pi s}-1\big) + \frac{1}{s^2}\big(e^{-\pi s}-1\big)^2\Big]. $$

E 17. Calculate, using the Laplace transform, the following improper integrals:
a) $\int_0^\infty \frac{e^{-at}-e^{-bt}}{t}\,dt$, a, b > 0, a ≠ b;
b) $\int_0^\infty \frac{\sin t + t\cos t}{t}\,e^{-at}\,dt$, a > 0.

Solution. The idea is to use Corollary 3.2.25 and formula (3.18).
a) One gets
$$ \int_0^\infty \frac{e^{-at}-e^{-bt}}{t}\,dt = \int_0^\infty L[e^{-at}-e^{-bt}](x)\,dx = \int_0^\infty\Big(\frac{1}{x+a}-\frac{1}{x+b}\Big)dx $$
$$ = \ln\frac{x+a}{x+b}\Big|_0^\infty = \ln 1 - \ln\frac{a}{b} = \ln\frac{b}{a}; $$
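The value ln(b/a) can be confirmed by direct numerical integration; the integrand extends continuously to t = 0 with value b − a, so ordinary quadrature applies. A sketch (Simpson's rule; a = 1, b = 2 and the truncation point T = 50 are arbitrary choices):

```python
import math

def integrand(t, a=1.0, b=2.0):
    if t == 0.0:
        return b - a                      # limit of (e^{-at}-e^{-bt})/t at 0
    return (math.exp(-a * t) - math.exp(-b * t)) / t

def simpson(f, lo, hi, n=20000):          # n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

val = simpson(integrand, 0.0, 50.0)
assert abs(val - math.log(2.0)) < 1e-4    # exact value: ln(b/a) = ln 2
```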

b) One obtains
$$ \int_0^\infty \frac{\sin t + t\cos t}{t}\,e^{-at}\,dt = \int_0^\infty L[(\sin t + t\cos t)e^{-at}](x)\,dx \stackrel{(3.10)}{=} \int_0^\infty L[\sin t + t\cos t](x+a)\,dx = \int_0^\infty L[(t\sin t)'](x+a)\,dx. $$
Denoting t sin t by g(t) and using Theorem 3.2.13 (Differentiation of the Original),
$$ L[g'(t)](x) = xL[g(t)](x) - g(0) = xL[g(t)](x), $$
which implies that $L[(t\sin t)'](x+a) = (x+a)L[t\sin t](x+a)$. Now from Theorem 3.2.16 (Differentiation of the Image) and formula (3.14), one obtains
$$ L[t\sin t](x+a) = (-1)\big(L[\sin t](x+a)\big)' = (-1)\Big(\frac{1}{(x+a)^2+1}\Big)'. $$
Hence,
$$ \int_0^\infty L[(t\sin t)'](x+a)\,dx = -\int_0^\infty (x+a)\Big(\frac{1}{(x+a)^2+1}\Big)'\,dx $$
$$ = -\frac{x+a}{(x+a)^2+1}\Big|_0^\infty + \int_0^\infty \frac{1}{(x+a)^2+1}\,dx = \frac{a}{a^2+1} + \arctan(x+a)\Big|_0^\infty = \frac{a}{a^2+1} + \frac{\pi}{2} - \arctan a. $$
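A numerical confirmation for the sample value a = 1, where the result equals $\frac12 + \frac\pi2 - \frac\pi4 = \frac12 + \frac\pi4$; the integrand $(\frac{\sin t}{t} + \cos t)e^{-at}$ is continuous with value 2 at t = 0, and the truncation point T = 40 is an arbitrary choice:

```python
import math

def integrand(t, a=1.0):
    core = 2.0 if t == 0.0 else math.sin(t) / t + math.cos(t)
    return core * math.exp(-a * t)

def simpson(f, lo, hi, n=40000):          # n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

val = simpson(integrand, 0.0, 40.0)
assert abs(val - (0.5 + math.pi / 4)) < 1e-4
```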

W 16. Calculate, using the Laplace transform, the following improper integrals:
a) $\int_0^\infty \frac{\sin(at)+\sin(bt)}{t}\,dt$, a, b > 0;
b) $\int_0^\infty \frac{\sin(at)\cos(bt)}{t}\,dt$, a, b > 0, a ≠ b.

Answer. We employ again Corollary 3.2.25, formula (3.18) and various properties of the Laplace transform.
a) One gets
$$ \int_0^\infty \frac{\sin(at)+\sin(bt)}{t}\,dt = \int_0^\infty\Big(\frac{a}{x^2+a^2} + \frac{b}{x^2+b^2}\Big)dx = \arctan\frac{x}{a}\Big|_0^\infty + \arctan\frac{x}{b}\Big|_0^\infty = \pi; $$
b) We use the trigonometric identity
$$ \sin(at)\cos(bt) = \frac{\sin(a+b)t + \sin(a-b)t}{2} $$
and then the result from a). For a > b the value of the integral is $\frac{\pi}{2}$ (for a < b the term sin(a − b)t contributes $-\frac{\pi}{2}$, so the integral equals 0).

E 18. Using partial fraction decomposition, determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{1}{s^2+5s+6}$;
b) $F(s) = \frac{s+3}{s^3+9s}$;
c) $F(s) = \frac{s^3+s^2+1}{s^4+5s^2+4}$.

Solution. a) Since $s^2+5s+6 = (s+2)(s+3)$, one gets the decomposition
$$ \frac{1}{s^2+5s+6} = \frac{A}{s+2} + \frac{B}{s+3}, \quad A, B \in \mathbb{R}. $$
One way to determine the constants A and B is the following:
- $A = \lim_{s\to-2}(s+2)F(s) = \lim_{s\to-2}\frac{1}{s+3} = 1$;
- $B = \lim_{s\to-3}(s+3)F(s) = \lim_{s\to-3}\frac{1}{s+2} = -1$.

Therefore,
$$ F(s) = \frac{1}{s+2} - \frac{1}{s+3} = L[e^{-2t}](s) - L[e^{-3t}](s), $$
which implies that
$$ f(t) = e^{-2t} - e^{-3t}; $$
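An inverse transform obtained this way can be spot-checked by transforming it forward numerically: the truncated integral $\int_0^T f(t)e^{-st}\,dt$ should reproduce F(s) at sample points. A sketch (Simpson's rule; T and the sample points are arbitrary):

```python
import math

def f(t):
    return math.exp(-2 * t) - math.exp(-3 * t)

def laplace(f, s, T=30.0, n=30000):
    # Simpson's rule for the truncated Laplace integral of f
    h = T / n
    total = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return total * h / 3

for s in (1.0, 2.0, 5.0):
    assert abs(laplace(f, s) - 1 / (s**2 + 5 * s + 6)) < 1e-8
```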

b) One obtains
$$ F(s) = \frac{s+3}{s^3+9s} = \frac{s+3}{s(s^2+9)} = \frac{A}{s} + \frac{Bs+C}{s^2+9}, \quad A, B, C \in \mathbb{R}. $$
We determine the constants A, B and C by equating the corresponding numerators. Hence,
$$ s+3 = A(s^2+9) + s(Bs+C) = s^2(A+B) + Cs + 9A. $$
We obtain the system A + B = 0, C = 1, 9A = 3, which has the solution $A = \frac13$, $B = -\frac13$ and C = 1. Therefore,
$$ F(s) = \frac13\cdot\frac1s - \frac13\cdot\frac{s}{s^2+9} + \frac13\cdot\frac{3}{s^2+9} = \frac13 L[1](s) - \frac13 L[\cos 3t](s) + \frac13 L[\sin 3t](s), $$
which implies that
$$ f(t) = \frac{1 - \cos 3t + \sin 3t}{3}; $$
c) One gets
$$ F(s) = \frac{s^3+s^2+1}{s^4+5s^2+4} = \frac{s^3+s^2+1}{(s^2+1)(s^2+4)} = \frac{As+B}{s^2+1} + \frac{Cs+D}{s^2+4}, $$
where A, B, C, D ∈ R. Equating again the corresponding numerators, we obtain that
$$ s^3+s^2+1 = s^3(A+C) + s^2(B+D) + s(4A+C) + 4B+D. $$
Now we identify the coefficients and we get the system A + C = 1, B + D = 1, 4A + C = 0, 4B + D = 1, which has the solution $A = -\frac13$, $C = \frac43$, B = 0 and D = 1. Therefore,
$$ F(s) = -\frac13\cdot\frac{s}{s^2+1} + \frac43\cdot\frac{s}{s^2+4} + \frac{1}{s^2+4} = -\frac13 L[\cos t](s) + \frac43 L[\cos 2t](s) + \frac12 L[\sin 2t](s), $$
which implies that
$$ f(t) = -\frac13\cos t + \frac43\cos 2t + \frac12\sin 2t. $$

W 17. Using partial fraction decomposition, determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{s+1}{s^2-4s+3}$;
b) $F(s) = \frac{1}{s^4-13s^2+36}$;
c) $F(s) = \frac{s}{s^3-3s+2}$.

Answer. a) One obtains
$$ F(s) = \frac{s+1}{(s-1)(s-3)} = -\frac{1}{s-1} + 2\cdot\frac{1}{s-3}; $$
hence,
$$ f(t) = -e^t + 2e^{3t}; $$
b) One gets
$$ F(s) = \frac{1}{(s^2-4)(s^2-9)} = \frac{1}{(s-2)(s+2)(s-3)(s+3)} $$
$$ = -\frac{1}{20}\cdot\frac{1}{s-2} + \frac{1}{20}\cdot\frac{1}{s+2} + \frac{1}{30}\cdot\frac{1}{s-3} - \frac{1}{30}\cdot\frac{1}{s+3}; $$
hence,
$$ f(t) = \frac{-e^{2t}+e^{-2t}}{20} + \frac{e^{3t}-e^{-3t}}{30} = -\frac{\sinh(2t)}{10} + \frac{\sinh(3t)}{15}; $$
c) One gets
$$ F(s) = \frac{s}{(s-1)^2(s+2)} = \frac29\cdot\frac{1}{s-1} - \frac29\cdot\frac{1}{s+2} + \frac13\cdot\frac{1}{(s-1)^2}; $$
hence,
$$ f(t) = \frac29 e^t - \frac29 e^{-2t} + \frac13 te^t. $$

E 19. Using residues, determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{s^2+s+2}{(s-1)^4}$;
b) $F(s) = \frac{s^2-1}{s^3+9s}$;
c) $F(s) = \frac{s-1}{(s^2+1)^2}$.

Solution. We use the following algorithm:
1. Construct the function $G(s) = F(s)\,e^{st}$ and determine the isolated singular points of G;
2. Compute the residues of G in all isolated singular points;
3. Conclude that f(t) is the sum of the residues calculated at Step 2.

a) 1. One gets $G(s) = \frac{(s^2+s+2)e^{st}}{(s-1)^4}$ and s = 1 is the only isolated singular point of G (pole of order 4);
2. We compute the residue of G in s = 1 and we obtain that
$$ \operatorname{res}(G,1) = \frac{1}{3!}\lim_{s\to1}\big[(s-1)^4 G(s)\big]''' = \frac16\lim_{s\to1}\big[(s^2+s+2)e^{st}\big]''' $$
$$ = \frac16\lim_{s\to1} e^{st}\big[t^3(s^2+s+2) + 3t^2(2s+1) + 6t\big] = \frac{e^t(4t^3+9t^2+6t)}{6}; $$
3. We conclude that
$$ f(t) = \frac{e^t(4t^3+9t^2+6t)}{6}; $$

b) 1. One gets $G(s) = \frac{(s^2-1)e^{st}}{s^3+9s} = \frac{(s^2-1)e^{st}}{s(s^2+9)}$ and the isolated singular points are $s_1 = 0$, $s_{2,3} = \pm 3i$, all of them being simple poles;
2. We compute the residues of G in all isolated singular points and we get that
$$ \operatorname{res}(G,0) = \frac{(s^2-1)e^{st}}{3s^2+9}\Big|_{s=0} = -\frac19, \quad \operatorname{res}(G,3i) = \frac{(s^2-1)e^{st}}{3s^2+9}\Big|_{s=3i} = \frac{5e^{3it}}{9}, \quad \operatorname{res}(G,-3i) = \frac{5e^{-3it}}{9}; $$
3. We conclude that
$$ f(t) = -\frac19 + \frac{5e^{3it}}{9} + \frac{5e^{-3it}}{9} = -\frac19 + \frac{5(e^{3it}+e^{-3it})}{9} = -\frac19 + \frac{5\cdot 2\cos 3t}{9} = \frac{10\cos 3t - 1}{9}; $$

c) 1. One gets $G(s) = \frac{(s-1)e^{st}}{(s^2+1)^2}$ and the isolated singular points are ±i, both being poles of order 2;
2. We compute the residues of G in s = ±i and we obtain that
$$ \operatorname{res}(G,i) = \lim_{s\to i}\big[(s-i)^2 G(s)\big]' = \lim_{s\to i}\Big[\frac{(s-1)e^{st}}{(s+i)^2}\Big]' = \lim_{s\to i}\frac{\big[e^{st} + (s-1)te^{st}\big](s+i) - 2(s-1)e^{st}}{(s+i)^3} = \frac{e^{it}(t-1+it)}{4i}, $$
$$ \operatorname{res}(G,-i) = \overline{\operatorname{res}(G,i)} = \frac{e^{-it}(t-1-it)}{-4i}; $$
3. We conclude that
$$ f(t) = \frac{e^{it}(t-1+it)}{4i} + \frac{e^{-it}(t-1-it)}{-4i} = \frac{(t-1)(e^{it}-e^{-it}) + it(e^{it}+e^{-it})}{4i} $$
$$ = \frac{(t-1)(2i\sin t) + it(2\cos t)}{4i} = \frac{(t-1)\sin t + t\cos t}{2}. $$
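The residue results can be spot-checked by transforming the recovered original back numerically: $\int_0^T f(t)e^{-st}\,dt$ should reproduce F(s). A sketch for part c) (the sample point s = 2 and the truncation T = 60 are arbitrary choices):

```python
import math

def f(t):
    # original recovered for F(s) = (s-1)/(s^2+1)^2
    return ((t - 1) * math.sin(t) + t * math.cos(t)) / 2

def laplace(f, s, T=60.0, n=60000):
    # Simpson's rule for the truncated Laplace integral
    h = T / n
    total = f(0.0) + f(T) * math.exp(-s * T)
    for k in range(1, n):
        t = k * h
        total += (4 if k % 2 else 2) * f(t) * math.exp(-s * t)
    return total * h / 3

s = 2.0
F = (s - 1) / (s**2 + 1) ** 2      # = 1/25
assert abs(laplace(f, s) - F) < 1e-6
```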

W 18. Using residues, determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{1}{s^3-3s^2+3s-1}$;
b) $F(s) = \frac{1}{(s^2+4)^2}$;
c) $F(s) = \frac{se^{-s}}{s^2-1}$.

Answer. We will follow the algorithm described in Exercise 19.
a) $G(s) = \frac{e^{st}}{(s-1)^3}$, s = 1 is the only isolated singular point of G (pole of order 3) and $f(t) = \frac{t^2e^t}{2}$;
b) $G(s) = \frac{e^{st}}{(s^2+4)^2}$, $s_{1,2} = \pm 2i$ are the isolated singular points of G (both poles of order 2),
$$ \operatorname{res}(G,2i) = \frac{e^{2it}(1-2it)}{32i}, \quad \operatorname{res}(G,-2i) = \frac{e^{-2it}(1+2it)}{-32i} $$
and $f(t) = \frac{\sin 2t - 2t\cos 2t}{16}$;
c) $G(s) = \frac{se^{st-s}}{s^2-1}$, $s_{1,2} = \pm 1$ are the isolated singular points of G (both simple poles) and $f(t) = \frac{e^{t-1}+e^{-t+1}}{2}$.

E 20. Determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{s^2-s+2}{(s^2+1)(s-1)}$;
b) $F(s) = \frac{1}{(s-1)^3(s^2+4)}$;
c) $F(s) = \frac{s+4}{s^2-3s+2} + \frac{s^2+s+1}{(s-1)^3}$.

Solution. a) Based on how the denominator looks, the easiest method is to use partial fraction decomposition. One obtains
$$ F(s) = \frac{s^2-s+2}{(s^2+1)(s-1)} = \frac{As+B}{s^2+1} + \frac{C}{s-1}, \quad A, B, C \in \mathbb{R}. $$
Hence, we get the equality
$$ s^2-s+2 = (As+B)(s-1) + C(s^2+1), \quad \forall s \in \mathbb{C}. $$
For s = 1 we get that 2 = 2C, so C = 1, and for s = i the equality becomes $1-i = (Ai+B)(i-1) = -A-B+i(B-A)$. Hence −A − B = 1 and B − A = −1, so A = 0 and B = −1. It follows that
$$ F(s) = -\frac{1}{s^2+1} + \frac{1}{s-1}, $$
which implies that
$$ f(t) = -\sin t + e^t; $$

b) Since partial fraction decomposition is difficult to perform, we follow the steps from the residues method (see Exercise 19).
1. One gets $G(s) = \frac{e^{st}}{(s-1)^3(s^2+4)}$ and the isolated singular points are $s_1 = 1$ (pole of order 3) and $s_{2,3} = \pm 2i$ (both simple poles);
2. We compute the residues of G in all isolated singular points and we obtain that
$$ \operatorname{res}(G,1) = \frac{1}{2!}\lim_{s\to1}\big[(s-1)^3G(s)\big]'' = \frac12\lim_{s\to1}\Big[\frac{e^{st}}{s^2+4}\Big]'' = \frac12\lim_{s\to1}\Big[\frac{e^{st}(s^2t-2s+4t)}{(s^2+4)^2}\Big]' $$
$$ = \frac12\lim_{s\to1}\frac{e^{st}\big[(s^2t^2+4t^2-2)(s^2+4) - 4s(s^2t-2s+4t)\big]}{(s^2+4)^3} = \frac{e^t(25t^2-20t-2)}{250}, $$
$$ \operatorname{res}(G,2i) = \frac{e^{st}}{(s-1)^3\cdot 2s}\Big|_{s=2i} = -\frac{e^{2it}(2i+1)^3}{500i}, \quad \operatorname{res}(G,-2i) = \frac{e^{-2it}(-2i+1)^3}{500i}. $$
3. We conclude that
$$ f(t) = \frac{e^t(25t^2-20t-2)}{250} - \frac{e^{2it}(2i+1)^3}{500i} + \frac{e^{-2it}(-2i+1)^3}{500i} $$
$$ = \frac{e^t(25t^2-20t-2)}{250} + \frac{11(e^{2it}-e^{-2it}) + 2i(e^{2it}+e^{-2it})}{500i} = \frac{e^t(25t^2-20t-2)}{250} + \frac{11\sin 2t + 2\cos 2t}{250}; $$

c) We write $F(s) = F_1(s) + F_2(s)$, where $F_1(s) = \frac{s+4}{s^2-3s+2}$ and $F_2(s) = \frac{s^2+s+1}{(s-1)^3}$. We determine the original $f_1(t)$ of $F_1(s)$ using partial fraction decomposition. Hence,
$$ F_1(s) = \frac{s+4}{(s-1)(s-2)} = \frac{A}{s-1} + \frac{B}{s-2}, \quad A, B \in \mathbb{R}. $$
It follows that $s+4 = A(s-2) + B(s-1) = s(A+B) - 2A - B$ and, identifying the coefficients, we get A + B = 1 and −2A − B = 4, so A = −5 and B = 6. Therefore,
$$ F_1(s) = -5\cdot\frac{1}{s-1} + 6\cdot\frac{1}{s-2}; $$
hence,
$$ f_1(t) = -5e^t + 6e^{2t}. $$
For the second part of the exercise, which is determining the original $f_2(t)$ of $F_2(s)$, we will use residues.
1. One gets $G(s) = \frac{e^{st}(s^2+s+1)}{(s-1)^3}$ and s = 1 is the only isolated singular point of G (pole of order 3);
2. We compute the residue of G in s = 1 and we get that
$$ \operatorname{res}(G,1) = \frac{1}{2!}\lim_{s\to1}\big[(s-1)^3G(s)\big]'' = \frac12\lim_{s\to1}\big[e^{st}(s^2+s+1)\big]'' = \frac12\lim_{s\to1}\big[e^{st}(ts^2+ts+t+2s+1)\big]' $$
$$ = \frac12\lim_{s\to1}\big[te^{st}(ts^2+ts+t+2s+1) + e^{st}(2st+t+2)\big] = \frac12 e^t(3t^2+6t+2); $$
3. We conclude that
$$ f_2(t) = \frac12 e^t(3t^2+6t+2). $$
Hence,
$$ f(t) = f_1(t) + f_2(t) = -5e^t + 6e^{2t} + \frac12 e^t(3t^2+6t+2) = 6e^{2t} + \frac12 e^t(3t^2+6t-8). $$

W 19. Determine the originals f(t), t > 0 of the following Laplace transforms:
a) $F(s) = \frac{s^2+s+4}{s^3+s^2+s+1}$;
b) $F(s) = \frac{1}{(s^2-4)^2}$;
c) $F(s) = \frac{s}{s^2+3s+2} + \frac{1}{(s^2+s+1)^2}$.

Answer. a) $f(t) = -\cos t + 2\sin t + 2e^{-t}$;
b) $f(t) = \frac{e^{2t}(2t-1)}{32} + \frac{e^{-2t}(2t+1)}{32} = \frac{2t\cosh(2t) - \sinh(2t)}{16}$;
c) $f(t) = -e^{-t} + 2e^{-2t} + \frac{e^{-\frac t2}\Big(4\sin\frac{\sqrt3\,t}{2} - 2\sqrt3\,t\cos\frac{\sqrt3\,t}{2}\Big)}{3\sqrt3}$.

E 21. Determine the solution y(t), t ≥ 0 of the following initial value problems:
a) $y'' + y' = \sin t$, y(0) = 1, y′(0) = 2;
b) $y'' - 2y' + y = te^t$, y(0) = 0, y′(0) = 1;
c) $y''' - 3y'' = \sinh(3t)$, y(0) = 1, y′(0) = 0, y″(0) = 0.

Solution. We are determining the solution using the Laplace transform


and the Laplace inverse (see Subsection 3.4.1 (Differential Equations)). We
denote as usual by Y (s) the Laplace transform of the function y(t), namely
L[y(t)](s) = Y (s).
a) Applying the Laplace transform to the differential equation and using Theorem 3.2.1 (Linearity), one gets
$$ L[y''](s) + L[y'](s) = L[\sin t](s). $$
From Theorem 3.2.14 (General Differentiation of the Original) and formula (3.12),
$$ L[y''](s) = s^2Y(s) - sy(0) - y'(0) = s^2Y(s) - s - 2, \quad L[y'](s) = sY(s) - y(0) = sY(s) - 1. $$
Hence, $s^2Y(s) - s - 2 + sY(s) - 1 = \frac{1}{s^2+1}$, which implies that
$$ Y(s) = \frac{s+3}{s^2+s} + \frac{1}{(s^2+1)(s^2+s)} = \frac{s^3+3s^2+s+4}{s(s+1)(s^2+1)}. $$
For determining the original y(t), we will use partial fraction decomposition. Therefore,
$$ Y(s) = \frac{A}{s} + \frac{B}{s+1} + \frac{Cs+D}{s^2+1}, \quad A, B, C, D \in \mathbb{R}. $$
We get the equality $s^3+3s^2+s+4 = A(s+1)(s^2+1) + Bs(s^2+1) + (Cs+D)(s^2+s)$, ∀s ∈ C. Hence,
- for s = 0, we get that 4 = A;
- for s = −1, we get that 5 = −2B, so $B = -\frac52$;
- for s = i, we get that $1 = (Ci+D)(-1+i) = -(C+D) + i(D-C)$, so D − C = 0 and −(C + D) = 1, i.e. $C = D = -\frac12$.

It follows that
$$ Y(s) = 4\cdot\frac1s - \frac52\cdot\frac{1}{s+1} - \frac12\cdot\frac{s}{s^2+1} - \frac12\cdot\frac{1}{s^2+1} = 4L[1](s) - \frac52 L[e^{-t}](s) - \frac12 L[\cos t](s) - \frac12 L[\sin t](s), $$
so the original is
$$ y(t) = 4 - \frac52 e^{-t} - \frac12\cos t - \frac12\sin t; $$
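The closed-form solution can be checked against the ODE and the initial data; the sketch below hard-codes the exact first and second derivatives of the stated y(t), so the residual $y'' + y' - \sin t$ should vanish to machine precision (test points chosen arbitrarily):

```python
import math

def y(t):
    return 4 - 2.5 * math.exp(-t) - 0.5 * math.cos(t) - 0.5 * math.sin(t)

def dy(t):
    # exact first derivative of y(t)
    return 2.5 * math.exp(-t) + 0.5 * math.sin(t) - 0.5 * math.cos(t)

def d2y(t):
    # exact second derivative of y(t)
    return -2.5 * math.exp(-t) + 0.5 * math.cos(t) + 0.5 * math.sin(t)

assert abs(y(0) - 1) < 1e-12 and abs(dy(0) - 2) < 1e-12   # initial conditions
for t in (0.3, 1.0, 2.5, 7.0):
    assert abs(d2y(t) + dy(t) - math.sin(t)) < 1e-12       # y'' + y' = sin t
```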

b) Applying the Laplace transform to the differential equation and using Theorem 3.2.1 (Linearity), one gets
$$ L[y''](s) - 2L[y'](s) + L[y](s) = L[te^t](s). $$
From Theorem 3.2.14 (General Differentiation of the Original) and formula (3.12),
$$ L[y''](s) = s^2Y(s) - sy(0) - y'(0) = s^2Y(s) - 1, \quad L[y'](s) = sY(s) - y(0) = sY(s). $$
For computing $L[te^t](s)$, one can use Theorem 3.2.16 and formula (3.14) and we obtain that
$$ L[te^t](s) = (-1)\big(L[e^t](s)\big)' = (-1)\Big(\frac{1}{s-1}\Big)' = \frac{1}{(s-1)^2}. $$
It follows that $s^2Y(s) - 1 - 2sY(s) + Y(s) = \frac{1}{(s-1)^2}$, which implies that
$$ Y(s) = \frac{1}{(s-1)^2} + \frac{1}{(s-1)^4}. $$
As $L[te^t](s) = \frac{1}{(s-1)^2}$, we just need to determine the original of $\frac{1}{(s-1)^4}$. For this we will use the residues method and the algorithm described in Exercise 19.
1. One gets $G(s) = \frac{e^{st}}{(s-1)^4}$ and s = 1 is the only isolated singular point of G (pole of order 4);
2. We compute the residue of G in s = 1 and we get that
$$ \operatorname{res}(G,1) = \frac{1}{3!}\lim_{s\to1}\big[(s-1)^4G(s)\big]''' = \frac16\lim_{s\to1}\big[e^{st}\big]''' = \frac16\lim_{s\to1} t^3e^{st} = \frac{t^3e^t}{6}. $$
3. We conclude that the original of $\frac{1}{(s-1)^4}$ is $\frac{t^3e^t}{6}$.
It follows that the solution of the initial value problem is
$$ y(t) = te^t + \frac{t^3e^t}{6}; $$
c) Applying Laplace transform to the differential equation and using The-
orem 3.2.1 (Linearity), one gets
L[y 000 ](s) − 3L[y 00 ](s) = L[sinh(3t)](s).
From Theorem 3.2.14 (General Differentiation of the Original) and for-
mula (3.12),
L[y'''](s) = s^3 Y(s) − s^2 y(0) − sy'(0) − y''(0) = s^3 Y(s) − s^2,

L[y''](s) = s^2 Y(s) − sy(0) − y'(0) = s^2 Y(s) − s.
Using this and the fact that L[sinh(3t)](s) = 3/(s^2 − 9) (see Example 3.2.4 (Hyperbolic Sine Function)), we obtain that s^3 Y(s) − s^2 − 3(s^2 Y(s) − s) = 3/(s^2 − 9), which implies that
Y(s) = 1/s + 3/(s^2 (s − 3)^2 (s + 3)).

We already know from Example 3.1.9 that 1/s = L[1](s), so we just need to determine the original of F(s) = 3/(s^2 (s − 3)^2 (s + 3)). For this we will use the residues method and the algorithm described in Exercise 19.
1. One gets G(s) = 3e^{st}/(s^2 (s − 3)^2 (s + 3)) and the isolated singular points are s1 = 0 (pole of order 2), s2 = 3 (pole of order 2) and s3 = −3 (simple pole);
2. We compute the residues of G at all isolated singular points and we get that
res(G, 0) = 3 lim_{s→0} (e^{st}/((s − 3)^2 (s + 3)))' = (3t + 1)/27,
res(G, 3) = 3 lim_{s→3} (e^{st}/(s^2 (s + 3)))' = e^{3t}(6t − 5)/108,
res(G, −3) = 3 lim_{s→−3} e^{st}/(s^2 (s − 3)^2) = e^{−3t}/108;
3. We conclude that f(t) = (3t + 1)/27 + e^{3t}(6t − 5)/108 + e^{−3t}/108.
It follows that the solution of the initial value problem is
y(t) = 1 + f(t) = 1 + (3t + 1)/27 + e^{3t}(6t − 5)/108 + e^{−3t}/108.
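This solution can also be spot-checked numerically (a Python sketch added for illustration): finite-difference approximations of y'' and y''' should make y''' − 3y'' − sinh(3t) vanish up to discretization error.

```python
import math

def y(t):
    return (1 + (3*t + 1)/27 + math.exp(3*t)*(6*t - 5)/108
            + math.exp(-3*t)/108)

h = 1e-3

def d2(t):  # central difference for y''(t)
    return (y(t + h) - 2*y(t) + y(t - h)) / h**2

def d3(t):  # central difference for y'''(t)
    return (y(t + 2*h) - 2*y(t + h) + 2*y(t - h) - y(t - 2*h)) / (2*h**3)

max_res = max(abs(d3(t) - 3*d2(t) - math.sinh(3*t)) for t in (0.4, 1.0, 1.5))
```

The third-derivative stencil is noisier, so the tolerance is looser than for second-order checks.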

W 20. Determine the solution y(t), t ≥ 0 of the following initial value problems:
a) y'' − 3y' + 2y = e^t + e^{2t}, y(0) = 1, y'(0) = 1;
b) y'' − y = t cos t, y(0) = 1, y'(0) = 0;
c) y'' − 4y' + 4y = ∫_0^t xe^x dx, y(0) = 0, y'(0) = 1;
d) y'' + y = 1/cos t, y(0) = 0, y'(0) = −1;
e) y''' − 3y'' + 3y' − y = cos t − sin t, y(0) = 0, y'(0) = 1, y''(0) = −1;
f) y''' + y'' = t · sinh t, y(0) = 1, y'(0) = 2, y''(0) = 0.

Answer. a) One gets
Y(s) = 1/(s − 1) + (2s − 3)/((s − 1)^2 (s − 2)^2);
hence,
y(t) = e^t − te^t + te^{2t};

b) One obtains
Y(s) = s/(s^2 − 1) + 1/(s^2 + 1)^2;
hence,
y(t) = cosh t + (sin t − t cos t)/2;
c) One gets
Y(s) = 1/(s − 2)^2 + 1/(s(s − 1)^2 (s − 2)^2);
hence,
y(t) = te^{2t} + e^t(t + 1) + (e^{2t}(2t − 5) + 1)/4;
d) One obtains
Y(s) = −1/(s^2 + 1) + 1/(s^2 + 1) · L[1/cos t](s)
= −L[sin t](s) + L[sin t](s) · L[1/cos t](s)
= −L[sin t](s) + L[sin t ∗ (1/cos t)](s);

hence,
y(t) = −sin t + sin t ∗ (1/cos t) = −sin t + ∫_0^t sin(t − x)/cos x dx
= −sin t + ∫_0^t (sin t cos x − sin x cos t)/cos x dx
= −sin t + sin t ∫_0^t dx − cos t ∫_0^t tan x dx
= −sin t + t sin t + cos t · ln(|cos t|);

e) One gets
Y(s) = (s − 4)/(s − 1)^3 + 1/((s − 1)^2 (s^2 + 1));
hence,
y(t) = e^t(−3t^2 + 2t)/2 + e^t(t − 1)/2 + (cos t)/2
= (e^t(−3t^2 + 3t − 1) + cos t)/2;

f) One obtains
Y(s) = (s^2 + 3s + 2)/(s^3 + s^2) + 2s/((s^2 − 1)^2 (s^3 + s^2))
= (s + 1)(s + 2)/(s^2 (s + 1)) + 2s/(s^2 (s + 1)(s^2 − 1)^2)
= (s + 2)/s^2 + 2/(s(s + 1)^3 (s − 1)^2)
= 1/s + 2/s^2 + 2/(s(s + 1)^3 (s − 1)^2);
hence,
y(t) = 1 + 2t + 2(1 + e^t(2t − 5)/16 − e^{−t}(t^2/8 + t/2 + 11/16))
= 3 + 2t + e^t(2t − 5)/8 − e^{−t}(t^2/4 + t + 11/8).

E 22. Determine the solution of the following systems of differential equations with the specified initial conditions:
a) {x' + x + y' = 0, x − y'' = e^{−t}}, x(0) = 1, y(0) = 2, y'(0) = 0;
b) {x − y' = sin t + cos t, x'' + y' = 0}, x(0) = 1, x'(0) = 1, y(0) = 1, y'(0) = 0;
c) {x' + x + y' + y = e^{−t}, y' + z'' = 0, x + z' = e^t}, x(0) = 0, y(0) = 0, z(0) = 0, z'(0) = 1.

Solution. a) Applying Laplace transform together with Theorem 3.2.1 (Linearity) and Theorem 3.2.18 (General Differentiation of the Original), one gets the following equivalent algebraic systems:
{L[x'](s) + L[x](s) + L[y'](s) = 0, L[x](s) − L[y''](s) = L[e^{−t}](s)} ⇔
{sL[x](s) − x(0) + L[x](s) + sL[y](s) − y(0) = 0, L[x](s) − (s^2 L[y](s) − sy(0) − y'(0)) = 1/(s + 1)} ⇔
{(s + 1)L[x](s) + sL[y](s) = 3, L[x](s) − s^2 L[y](s) = −2s + 1/(s + 1)}.
Solving this last system, one finds
(s^2 + s + 1)L[x](s) = s + 1/(s + 1),
so L[x](s) = 1/(s + 1) and x(t) = e^{−t}. From the first equation we obtain that
1 + sL[y](s) = 3,
so L[y](s) = 2/s and y(t) = 2;
s
b) Applying Laplace transform together with Theorem 3.2.1 (Linearity) and Theorem 3.2.18 (General Differentiation of the Original), one gets the following equivalent algebraic systems:
{L[x](s) − L[y'](s) = L[cos t + sin t](s), L[x''](s) + L[y'](s) = 0} ⇔
{L[x](s) − sL[y](s) + y(0) = (s + 1)/(s^2 + 1), s^2 L[x](s) − sx(0) − x'(0) + sL[y](s) − y(0) = 0} ⇔
{L[x](s) − sL[y](s) = (s + 1)/(s^2 + 1) − 1, s^2 L[x](s) + sL[y](s) = 2 + s}.
Adding the two algebraic equations, we get that (s^2 + 1)L[x](s) = (s + 1)/(s^2 + 1) + 1 + s, so
L[x](s) = (s + 1)/(s^2 + 1)^2 + (s + 1)/(s^2 + 1).
From the first equation it then follows that sL[y](s) = L[x](s) − (s + 1)/(s^2 + 1) + 1 = (s + 1)/(s^2 + 1)^2 + 1, so
L[y](s) = 1/s + (s + 1)/(s(s^2 + 1)^2).

For determining the original x, one notices that
(s + 1)/(s^2 + 1) = s/(s^2 + 1) + 1/(s^2 + 1) = L[cos t](s) + L[sin t](s),
so we just need to find the original of (s + 1)/(s^2 + 1)^2. For this we use residues.
1. One gets G(s) = (s + 1)e^{st}/(s^2 + 1)^2 and the isolated singular points are s_{1,2} = ±i (both poles of order 2);
2. We compute the residues of G at both isolated singular points and we obtain that
res(G, i) = lim_{s→i} ((s + 1)e^{st}/(s + i)^2)'
= lim_{s→i} ([e^{st}(1 + t(s + 1))](s + i) − 2(s + 1)e^{st})/(s + i)^3
= −e^{it}(t(1 + i) + i)/4,
res(G, −i) = conj(res(G, i)) = −e^{−it}(t(1 − i) − i)/4.
3. We conclude that
L^{−1}[(s + 1)/(s^2 + 1)^2](t) = res(G, i) + res(G, −i) = ((t + 1)sin t − t cos t)/2.

Hence,
x(t) = ((t + 1)sin t − t cos t)/2 + cos t + sin t.
For determining the original y, one can use partial fraction decomposition for (s + 1)/(s(s^2 + 1)^2). One gets
(s + 1)/(s(s^2 + 1)^2) = 1/s − s/(s^2 + 1) + (1 − s)/(s^2 + 1)^2,
so
Y(s) = 2/s − s/(s^2 + 1) + 1/(s^2 + 1)^2 − s/(s^2 + 1)^2.
Since L^{−1}[1/(s^2 + 1)^2](t) = (sin t − t cos t)/2 and L^{−1}[s/(s^2 + 1)^2](t) = (t sin t)/2, it follows that
y(t) = 2 − cos t + (sin t − t cos t)/2 − (t sin t)/2 = 2 − cos t + (sin t − t(sin t + cos t))/2;
2

c) Applying Laplace transform together with Theorem 3.2.1 (Linearity) and Theorem 3.2.18 (General Differentiation of the Original), one gets the following algebraic system:
{L[x](s) + L[y](s) = 1/(s + 1)^2, L[y](s) + sL[z](s) = 1/s, L[x](s) + sL[z](s) = 1/(s − 1)}.
It is advisable to use Cramer's rule for obtaining the solutions X(s) = L[x](s), Y(s) = L[y](s) and Z(s) = L[z](s). Hence, ∆ = |1 1 0; 0 1 s; 1 0 s| = 2s and
• ∆X = |1/(s + 1)^2 1 0; 1/s 1 s; 1/(s − 1) 0 s| = s/(s + 1)^2 + s/(s − 1) − 1,
• ∆Y = |1 1/(s + 1)^2 0; 0 1/s s; 1 1/(s − 1) s| = 1 + s/(s + 1)^2 − s/(s − 1),
• ∆Z = |1 1 1/(s + 1)^2; 0 1 1/s; 1 0 1/(s − 1)| = 1/(s − 1) + 1/s − 1/(s + 1)^2.

We get that
• X(s) = ∆X/∆ = (1/2)(1/(s + 1)^2 + 1/(s − 1) − 1/s),
• Y(s) = ∆Y/∆ = (1/2)(1/(s + 1)^2 − 1/(s − 1) + 1/s),
• Z(s) = ∆Z/∆ = (1/2)(1/(s(s − 1)) + 1/s^2 − 1/(s(s + 1)^2)).
As X(s) and Y(s) are already decomposed in partial fractions, it follows that the originals are
x(t) = (1/2)(te^{−t} + e^t − 1)
and
y(t) = (1/2)(te^{−t} − e^t + 1).
Since 1/(s(s − 1)) = 1/(s − 1) − 1/s and 1/(s(s + 1)^2) = 1/s − 1/(s + 1) − 1/(s + 1)^2, it follows that
Z(s) = (1/2)(1/(s − 1) − 2/s + 1/s^2 + 1/(s + 1) + 1/(s + 1)^2)
and the original is
z(t) = (1/2)(te^{−t} + e^t + e^{−t} − 2 + t).
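The triple (x, y, z) found above can be verified against the original system numerically (an illustrative Python sketch using central finite differences; not part of the text):

```python
import math

# x, y, z should satisfy
#   x' + x + y' + y = e^{-t},  y' + z'' = 0,  x + z' = e^t.

def x(t): return (t * math.exp(-t) + math.exp(t) - 1) / 2
def y(t): return (t * math.exp(-t) - math.exp(t) + 1) / 2
def z(t): return (t * math.exp(-t) + math.exp(t) + math.exp(-t) - 2 + t) / 2

h = 1e-5
def d1(f, t): return (f(t + h) - f(t - h)) / (2 * h)
def d2(f, t): return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

def residuals(t):
    return (abs(d1(x, t) + x(t) + d1(y, t) + y(t) - math.exp(-t)),
            abs(d1(y, t) + d2(z, t)),
            abs(x(t) + d1(z, t) - math.exp(t)))

max_res = max(max(residuals(t)) for t in (0.3, 1.0, 2.0))
```

All three residuals vanish up to discretization error, and the initial conditions x(0) = y(0) = z(0) = 0 hold exactly.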

W 21. Determine the solution of the following systems of differential equations with the specified initial conditions:
a) {x'' + x' + y = (1/2)sin 2t, 2x + y' = cos 2t}, x(0) = 0, x'(0) = 1, y(0) = 1;
b) {x' − 2x + y' + 2y = 0, x'' + y'' = cos t}, x(0) = 1, x'(0) = 1, y(0) = −1, y'(0) = −1;
c) {2x' + 3y' = 0, x'' + z'' = 1, x + y + z = t}, x(0) = 1, y(0) = 0, z(0) = −1, x'(0) = 0, z'(0) = 0.

Solution. a) One obtains
L[x](s) = X(s) = (s − 1)/(s^3 + s^2 − 2) = (s − 1)/((s − 1)(s^2 + 2s + 2)) = 1/(s^2 + 2s + 2) = 1/((s + 1)^2 + 1),
so x(t) = e^{−t} sin t.
On the other hand,
L[y](s) = Y(s) = 1/s + 1/(s^2 + 4) − 2/(s(s^2 + 2s + 2)).
Hence, using partial fraction decomposition for the last ratio, we obtain that
Y(s) = 1/s + 1/(s^2 + 4) − 1/s + (s + 2)/(s^2 + 2s + 2),
so y(t) = (1/2)sin 2t + e^{−t} cos t + e^{−t} sin t;
b) One gets
L[x](s) = X(s) = (s + 2)/(4s(s^2 + 1)) = (1/2) · 1/s − (1/2) · s/(s^2 + 1) + (1/4) · 1/(s^2 + 1),
so x(t) = 1/2 − (1/2)cos t + (1/4)sin t.
On the other hand,
L[y](s) = Y(s) = −(s − 2)/(4s(s^2 + 1)) = (1/2) · 1/s − (1/2) · s/(s^2 + 1) − (1/4) · 1/(s^2 + 1),
so y(t) = 1/2 − (1/2)cos t − (1/4)sin t;
c) One obtains
L[x](s) = X(s) = 1/s − (3/2) · 1/s^2 + (3/2) · 1/s^3,
so x(t) = 1 − (3/2)t + (3/4)t^2.
On the other hand,
L[y](s) = Y(s) = −1/s^3 + 1/s^2,
so y(t) = −(1/2)t^2 + t.
Finally,
L[z](s) = Z(s) = (3/2) · 1/s^2 − 1/s − (1/2) · 1/s^3,
so z(t) = (3/2)t − 1 − (1/4)t^2.

E 23. Determine the solution y(t), t ≥ 0 of the following integral equations:
a) y(t) − ∫_0^t cosh(2(t − x))y(x) dx = 1 − t − 2t^2;
b) y(t) − ∫_0^t e^{t−x}(t − x)y(x) dx = sin t;
c) ∫_0^t y(t − x)e^x dx − ∫_0^t y(x) dx = t sinh t.

Solution. As usual we make the following notation: L[y(t)](s) = Y (s).


a) We notice that we have a convolution product (see (3.41)), namely
∫_0^t cosh(2(t − x))y(x) dx = cosh 2t ∗ y(t).

Hence, the integral equation can be written as

y(t) − cosh 2t ∗ y(t) = 1 − t − 2t^2.

Applying Laplace transform together with Theorem 3.2.1 (Linearity), one gets
L[y(t)](s) − L[cosh 2t ∗ y(t)](s) = L[1](s) − L[t](s) − 2L[t^2](s).

Using Theorem 3.2.26 (Convolution) and formula (3.20) alongside with Ex-
amples 3.2.4 (Hyperbolic Cosine Function) and 3.1.11 (Power Function), it
follows that
Y(s) − Y(s) · s/(s^2 − 4) = 1/s − 1/s^2 − 2 · 2!/s^3,
which implies that
Y(s) = (s^2 − 4)/s^3 = 1/s − 4/s^3,
so y(t) = 1 − 2t^2;
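The solution of a) can be verified directly on the integral equation (an illustrative Python sketch, not part of the text), evaluating the convolution integral with the composite Simpson rule:

```python
import math

# With y(t) = 1 - 2t^2, the quantity y(t) - ∫_0^t cosh(2(t-x)) y(x) dx
# should equal 1 - t - 2t^2.

def y(t):
    return 1 - 2 * t**2

def simpson(f, a, b, n=2000):   # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n//2))
    return s * h / 3

def lhs(t):
    return y(t) - simpson(lambda u: math.cosh(2 * (t - u)) * y(u), 0, t)

max_err = max(abs(lhs(t) - (1 - t - 2 * t**2)) for t in (0.5, 1.0, 2.0))
```

The quadrature error of Simpson's rule is far below the tolerance used here.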
b) We notice again that we have a convolution product (see (3.41)), namely
∫_0^t e^{t−x}(t − x)y(x) dx = te^t ∗ y(t).
Hence, the integral equation can be written as

y(t) − te^t ∗ y(t) = sin t.

Applying Laplace transform together with Theorem 3.2.1 (Linearity), one gets
Y(s) − L[te^t](s) · Y(s) = 1/(s^2 + 1).
Using Theorem 3.2.26 (Convolution) and formula (3.20) alongside with Ex-
ample 3.2.12, it follows that
(1 − 1/(s − 1)^2) Y(s) = 1/(s^2 + 1) ⇒ Y(s) = (s − 1)^2/(s(s − 2)(s^2 + 1)).
Now we use partial fraction decomposition. Therefore,
(s − 1)^2/(s(s − 2)(s^2 + 1)) = A/s + B/(s − 2) + (Cs + D)/(s^2 + 1),
which implies that A = −1/2, B = 1/10, C = 2/5 and D = 4/5. Hence,
Y(s) = −(1/2) · 1/s + (1/10) · 1/(s − 2) + (2/5) · s/(s^2 + 1) + (4/5) · 1/(s^2 + 1),
so y(t) = −1/2 + (1/10)e^{2t} + (2/5)cos t + (4/5)sin t;
c) Here we have two integrals. The first is the convolution product e^t ∗ y(t) and the second one is just a primitive. Hence, applying Laplace transform and Theorem 3.2.1 (Linearity), one gets
L[e^t ∗ y(t)](s) − L[∫_0^t y(x) dx](s) = L[t sinh t](s).

But now we have the following:
• L[e^t ∗ y(t)](s) = L[e^t](s) · Y(s) = 1/(s − 1) · Y(s) (using Theorem 3.2.26 (Convolution) and formula (3.20));
• L[∫_0^t y(x) dx](s) = (1/s) · Y(s) (using Theorem 3.2.20 (Integration of the Original) and formula (3.16));
• L[t sinh t](s) = (−1) · (L[sinh t](s))' = (−1) · (1/(s^2 − 1))' = 2s/(s^2 − 1)^2 (using Theorem 3.2.16 (Differentiation of the Image), formula (3.14) and Example 3.2.4 (Hyperbolic Sine Function)).
We get that
(1/(s − 1) − 1/s) Y(s) = 2s/(s^2 − 1)^2 ⇔ (1/(s(s − 1))) Y(s) = 2s/((s − 1)^2 (s + 1)^2),
which implies that
Y(s) = 2s^2/((s − 1)(s + 1)^2).
Now we use residues.
1. One gets G(s) = 2s^2 e^{st}/((s − 1)(s + 1)^2) and the isolated singular points are s1 = 1 (simple pole) and s2 = −1 (pole of order 2);
2. We compute the residues of G at all isolated singular points and we obtain that
res(G, 1) = lim_{s→1} 2s^2 e^{st}/(s + 1)^2 = e^t/2,
res(G, −1) = 2 lim_{s→−1} (s^2 e^{st}/(s − 1))' = e^{−t}(3 − 2t)/2;
3. We conclude that
y(t) = (e^t + e^{−t}(3 − 2t))/2.
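This solution can be checked directly on the integral equation of c) (an illustrative Python sketch, not part of the text), evaluating both integrals with the composite Simpson rule:

```python
import math

# y(t) = (e^t + e^{-t}(3 - 2t))/2 should satisfy
# ∫_0^t y(t-x) e^x dx - ∫_0^t y(x) dx = t sinh t.

def y(t):
    return (math.exp(t) + math.exp(-t) * (3 - 2 * t)) / 2

def simpson(f, a, b, n=2000):   # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n//2))
    return s * h / 3

def lhs(t):
    conv = simpson(lambda x: y(t - x) * math.exp(x), 0, t)
    prim = simpson(y, 0, t)
    return conv - prim

max_err = max(abs(lhs(t) - t * math.sinh(t)) for t in (0.5, 1.5, 3.0))
```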

W 22. Determine the solution y(t), t ≥ 0 of the following integral equations:
a) y(t) − (1/2)∫_0^t sin(2(t − x))y(x) dx = t cos 2t;
b) 2y(t) + ∫_0^t sinh(t − x)y(x) dx = t;
c) ∫_0^t y(t − x)cos x dx + 2∫_0^t y(x)sin(t − x) dx = te^{−t}.

Answer. a) One obtains
L[y](s) = Y(s) = (s^2 − 4)/((s^2 + 3)(s^2 + 4)) = −7/(s^2 + 3) + 8/(s^2 + 4),
so y(t) = −(7/√3)sin(√3 t) + 4 sin 2t;
b) One gets
L[y](s) = Y(s) = (s^2 − 1)/(s^2 (2s^2 − 1)),
so y(t) = t − (1/√2)sinh(t/√2);
c) Here we have two convolution products. The first one is y(t) ∗ cos t and the second one is y(t) ∗ sin t. We obtain that
L[y](s) = Y(s) = (s^2 + 1)/((s + 2)(s + 1)^2),
so y(t) = 5e^{−2t} − 4e^{−t} + 2te^{−t}.

E 24. Determine the solution y(t), t ≥ 0 of the following integro-differential equations:
a) y''(t) + 2y'(t) − 2∫_0^t e^{t−x} y(x) dx = e^t, y(0) = 0, y'(0) = 1;
b) y(t) + 3∫_0^t cos(2t − 2x)y'(x) dx = ∫_0^t (t − x)sin 2x dx, y(0) = 0.

Solution. As usual we make the following notation: L[y(t)](s) = Y (s).


a) We notice that we have a convolution product (see (3.41)), namely
∫_0^t e^{t−x} y(x) dx = e^t ∗ y(t).

Applying Laplace transform together with Theorem 3.2.1 (Linearity), Theorem 3.2.26 (Convolution) and Theorem 3.2.14 (General Differentiation of the Original), one gets
s^2 Y(s) − 1 + 2sY(s) − 2 · 1/(s − 1) · Y(s) = 1/(s − 1),
s−1 s−1

which is equivalent to
((s^3 + s^2 − 2s − 2)/(s − 1)) Y(s) = s/(s − 1),
so
Y(s) = s/((s^2 − 2)(s + 1)).
We use partial fraction decomposition to determine the original. Hence,
s/((s^2 − 2)(s + 1)) = (As + B)/(s^2 − 2) + C/(s + 1)
= (−s + 2)/(s^2 − 2) + 1/(s + 1)
= −s/(s^2 − 2) + 2 · 1/(s^2 − 2) + 1/(s + 1),
so
y(t) = −cosh(√2 t) + √2 sinh(√2 t) + e^{−t};

b) Here we have two convolution products. The first one is
∫_0^t cos(2t − 2x)y'(x) dx = cos 2t ∗ y'(t)
and the second one is
∫_0^t (t − x)sin 2x dx = t ∗ sin 2t.

Now the initial equation can be rewritten as
y(t) + 3 cos 2t ∗ y'(t) = t ∗ sin 2t.
Applying Laplace transform together with Theorem 3.2.1 (Linearity), one gets
L[y(t)](s) + 3L[cos 2t ∗ y'(t)](s) = L[t ∗ sin 2t](s).
But now we have the following:
• L[cos 2t ∗ y'(t)](s) = L[cos 2t](s) · L[y'(t)](s) = s/(s^2 + 4) · (sL[y(t)](s) − y(0)) = (s^2/(s^2 + 4)) Y(s) (using Theorems 3.2.26 (Convolution) and 3.2.13 (Differentiation of the Original) and Example 3.2.3 (Cosine Function));
• L[t ∗ sin 2t](s) = L[t](s) · L[sin 2t](s) = 2/(s^2 (s^2 + 4)) (using Theorem 3.2.26 (Convolution) and Examples 3.2.2 (Sine Function) and 3.1.11 (Power Function)).

We get that
(1 + 3s^2/(s^2 + 4)) Y(s) = 2/(s^2 (s^2 + 4)),
which implies that
Y(s) = 1/(2s^2 (s^2 + 1)) = (1/2)(1/s^2 − 1/(s^2 + 1)),
so
y(t) = (t − sin t)/2.
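The solution of b) can be checked directly on the integro-differential equation (an illustrative Python sketch, not part of the text); here y'(t) = (1 − cos t)/2 is known in closed form, and the two integrals are evaluated with the composite Simpson rule:

```python
import math

# y(t) = (t - sin t)/2 should satisfy
# y(t) + 3 ∫_0^t cos(2t - 2x) y'(x) dx = ∫_0^t (t - x) sin 2x dx.

def y(t):  return (t - math.sin(t)) / 2
def yp(t): return (1 - math.cos(t)) / 2

def simpson(f, a, b, n=2000):   # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n//2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n//2))
    return s * h / 3

def residual(t):
    lhs = y(t) + 3 * simpson(lambda x: math.cos(2*t - 2*x) * yp(x), 0, t)
    rhs = simpson(lambda x: (t - x) * math.sin(2 * x), 0, t)
    return abs(lhs - rhs)

max_res = max(residual(t) for t in (0.7, 1.5, 3.0))
```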

W 23. Determine the solution y(t), t ≥ 0 of the following integro-differential equations:
a) y''(t) − y'(t) − 2∫_0^t e^{t−x} y(x) dx = sin t, y(0) = 0, y'(0) = 0;
b) y'(t) + y(t) + ∫_0^t (t − x)y(x) dx = −1 + t + e^{−t}, y(0) = 1.

Answer. a) We get that
Y(s) = (s − 1)/((s^2 + 1)(s^3 − 2s^2 + s − 2)) = (s − 1)/((s − 2)(s^2 + 1)^2).
Using partial fraction decomposition, one obtains
Y(s) = (1/25) · 1/(s − 2) + (1/25) · (−s − 2)/(s^2 + 1) + (1/25) · (−5s + 15)/(s^2 + 1)^2,
so
y(t) = e^{2t}/25 − (cos t + 2 sin t)/25 − (t sin t)/10 + (3/10)(sin t − t cos t)
= e^{2t}/25 − t(sin t + 3 cos t)/10 + (11 sin t − 2 cos t)/50;

b) We get that
(s + 1 + 1/s^2) Y(s) = 1 + 1/s^2 + 1/(s + 1) − 1/s = 1 + 1/(s^3 + s^2),
which implies that
Y(s) = s^2/(s^3 + s^2) = 1/(s + 1),
so
y(t) = e^{−t}.
y(t) = e−t .

3.6 MATLAB Applications


3.6.1 Laplace Transform
The syntax is one of the following:

1. F = laplace(f ). This computes the Laplace transform F of the sym-


bolic expression f . By default the variable of f is t and the variable of
the computed transform F is s;

2. F = laplace(f, p). This computes the Laplace transform F as a func-
tion of the parameter p instead of the default variable s;

3. F = laplace(f, p, x). This computes the Laplace transform F as a


function of the variable p instead of the default variable s and considers
that f is a function of the variable x instead of the default variable t.

Example 3.6.1.
>> f = exp(t) − sin(t);
F = laplace(f );
The answer is F = 1/(s − 1) − 1/(sˆ2 + 1).

Example 3.6.2.
>> syms p;
f = exp(t) − sin(t);
F = laplace(f, p);
The answer is F = 1/(p − 1) − 1/(pˆ2 + 1).

Example 3.6.3.
>> syms p x;
f = exp(x) − sin(x);
F = laplace(f, p);
The answer is F = 1/(p − 1) − 1/(pˆ2 + 1).

Transforms of Dirac’s Delta Function (Distribution) and Heavi-


side’s Step Function
Example 3.6.4.
>> syms t;
f = dirac(t);
F = laplace(f );
The answer is F = 1.

Example 3.6.5.
>> syms t;
f = heaviside(t);
F = laplace(f );
The answer is F = 1/s.

Dirac and Delay
Example 3.6.6.
>> syms t s;
F = laplace(dirac(t − 5), t, s);
The answer is F = exp(−5*s).

Heaviside and Delay


Example 3.6.7.
>> F = laplace(heaviside(t − 2*pi), t, s);
The answer is F = exp(−2*pi*s)/s.

Example 3.6.8.
>> syms t;
f = 1 − heaviside(t − 3);
F = laplace(f );
The answer is F = 1/s − exp(−3*s)/s.

Example 3.6.9.
>> syms t a;
f = heaviside(t) − 2*heaviside(t − 4) + heaviside(t − 2*4);
F = laplace(f );
The answer is F = exp(−8*s)/s − (2* exp(−4*s))/s + 1/s.

Example 3.6.10.
>> f = t*(heaviside(t) − heaviside(t − 3)) + heaviside(t − 3);
F = laplace(f );
The answer is F = 1/sˆ2 − exp(−3*s)/sˆ2 − (2* exp(−3*s))/s.

Exponentials
Example 3.6.11.
>> syms t a;
f = exp(a*t);
F = laplace(f );
The answer is F = −1/(a − s).

Example 3.6.12.
>> syms t a;
f = exp(i*a*t);
F = laplace(f );
The answer is F = −1/(a*i − s).

Example 3.6.13.
>> syms a b;
F = laplace(exp(a*t) − exp(b*t));
The answer is F = 1/(b − s) − 1/(a − s).

Translation
Example 3.6.14.
>> F = laplace(tˆ3* exp(5*t));
The answer is F = 6/(s − 5)ˆ4.

Example 3.6.15.
>> syms a n;
F = laplace(tˆn* exp(a*t));
The answer is F = piecewise(−1 < real(n) | 1 <= n & in(n, ’integer’),
gamma(n + 1)/(−a + s)ˆ(n + 1)).

Sine and Cosine


Example 3.6.16.
>> syms t a;
f = sin(a*t);
F = laplace(f );
The answer is F = a/(aˆ2 + sˆ2).

Example 3.6.17.
>> syms a t;
f = cos(a*t);
F = laplace(f );
The answer is F = s/(aˆ2 + sˆ2).

Example 3.6.18.
>> f = 1 − cos(2*t);
F = laplace(f );
The answer is F = 1/s − s/(sˆ2 + 4).
Example 3.6.19.
>> f = 1 − cos(2*t);
F = simplify(laplace(f ));
The answer is F = 4/(s*(sˆ2 + 4)).
Example 3.6.20.
>> syms a;
F = laplace((cos(a*t))ˆ2);
The answer is F = (2*aˆ2 + sˆ2)/(s*(4*aˆ2 + sˆ2)).
Example 3.6.21.
>> syms a;
F = laplace(sinh(a*t));
The answer is F = −a/(aˆ2 − sˆ2).
Example 3.6.22.
>> syms a;
F = laplace(cosh(a*t));
The answer is F = −s/(aˆ2 − sˆ2).
Example 3.6.23.
>> syms a;
F = simplify(laplace(cos(a*t) + cosh(a*t)));
The answer is F = −(2*sˆ3)/(aˆ4 − sˆ4).
Example 3.6.24.
>> F = laplace(sin(2*t + pi));
The answer is F = −2/(sˆ2 + 4).
Example 3.6.25.
>> F = laplace(sin(2*t + 0.7*pi));
The answer is F = −((2ˆ(1/2)*(5 − 5ˆ(1/2))ˆ(1/2))/2 − s*(5ˆ(1/2)/4 + 1/4))/(sˆ2 + 4).

Example 3.6.26.
>> syms t a;
f = sin(a*t)/t;
F = laplace(f );
The answer is F = atan(a/s).
Example 3.6.27.
>> syms t n;
f = tˆn;
F = laplace(f );
The answer is F = piecewise([−1 < Re(n), gamma(n + 1)/sˆ(n + 1)]).
Example 3.6.28.
>> syms t a;
f = cos(a*t)/t;
F = laplace(f );
The answer is F = −eulergamma − log(s − a*i)/2 − log(s + a*i)/2.
Example 3.6.29.
>> syms t;
f = 1/sqrt(t);
F = laplace(f );
The answer is F = piˆ(1/2)/sˆ(1/2).
Example 3.6.30.
>> syms x y;
f = 1/sqrt(x);
laplace(f, x, y);
ans = piˆ(1/2)/yˆ(1/2)
Example 3.6.31.
>> syms a;
f = (1 − exp(a*t))/t;
F = laplace(f );
The answer is F = log(s − a) − log(s).
Example 3.6.32.
>> syms a b;

f = (exp(a*t) − exp(b*t))/t;
F = laplace(f );
The answer is F = log((b − s)/(a − s)).

Laplace Transform of Periodic Functions


Example 3.6.33.
>> syms s t;
F = simplify((1/(1−exp(−4*s)))*laplace(heaviside(t−2)−heaviside(t−4)));
The answer is F = 1/(s*(exp(2*s) + 1)).

Example 3.6.34.
>> F = simplify((1/(1 − exp(−pi*s/2)))*laplace((t* (pi/2 − t)*(heaviside(t)
− heaviside(t − pi/2)))));
The answer is F = (pi*s − 4* exp((pi*s)/2) + s*pi* exp((pi*s)/2) + 4)/
(2*sˆ3*(exp((pi*s)/2) − 1)).
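The closed form obtained in Example 3.6.33 can be cross-checked numerically (an illustrative Python sketch, not part of the MATLAB session): the 4-periodic extension of heaviside(t − 2) − heaviside(t − 4) equals 1 on [2, 4) within each period, and a direct numerical evaluation of the Laplace integral should match 1/(s(e^{2s} + 1)).

```python
import math

def f(t):
    # 4-periodic square wave: 1 on [2, 4), 0 on [0, 2)
    return 1.0 if (t % 4) >= 2 else 0.0

def laplace_numeric(s, T=40.0, n=200000):
    # composite midpoint rule on [0, T]; the tail beyond T is negligible
    # because e^{-sT} is tiny for s around 1
    h = T / n
    return h * sum(math.exp(-s * (k + 0.5) * h) * f((k + 0.5) * h)
                   for k in range(n))

s = 1.0
F_exact = 1 / (s * (math.exp(2 * s) + 1))
err = abs(laplace_numeric(s) - F_exact)
```

The step size is chosen so that the jump points of the square wave fall on cell boundaries, keeping the midpoint rule accurate.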

Some properties of the Laplace transform can be obtained using MAT-


LAB.

Differentiation of the Original and Generalizations


Example 3.6.35.
>> syms f (t) s;
laplace(diff(f (t), t), t, s);
ans = s*laplace(f (t), t, s) − f (0)

Example 3.6.36.
>> syms f (t) s;
laplace(diff(diff(f (t)), t), t, s);
ans = sˆ2*laplace(f (t), t, s) − s*f (0) − D(f )(0)

Example 3.6.37.
>> syms f (t) s;
F = laplace(diff(diff(diff(f (t))), t), t, s);
The answer is F = sˆ3*laplace(f (t), t, s) − D(D(f ))(0) − s*D(f )(0) −
sˆ2*f (0).

Time Delay
Example 3.6.38.
>> syms t a f (t);
F = laplace(exp(a*t)*f (t));
The answer is F = laplace(f (t), t, s − a).
Example 3.6.39.
>> syms t a f (t);
F = laplace(f (t)* sin(a*t));
The answer is F = −(laplace(f (t), t, s − a*1i)*1i)/2 + (laplace(f (t), t,
s + a*1i)*1i)/2.
If laplace cannot find an explicit representation of the transform, it re-
turns an unevaluated call. As an example we have Theorem 3.2.5 (Similarity
or Change of Time Scale) and formula (3.7).
Example 3.6.40.
>> F = laplace(f (a*t));
The answer is F = laplace(f (a*t), t, s).
One can determine the Laplace transform of matrices. One uses matrices
of the same size to specify the transformation variables and evaluation points.

Matrices
Example 3.6.41.
>> syms t a b c s x y z;
M = laplace([sin(t), exp(2*a); dirac(b − 3), 1], [t, a; b, c], [s, x; y, z]);
The answer is
M=
[1/(sˆ2 + 1), 1/(x − 2)]
[ exp(−3*y), 1/z].
Example 3.6.42.
>> syms t s w x y z;
laplace([exp(t), t; sin(t), i*t], [t, x; t, t], [s, s; s, s]);
ans =
[1/(s − 1), t/s]
[1/(sˆ2 + 1), 1i/sˆ2].

When the input arguments are matrices, then laplace applies element-
wise. If the arguments are both scalar and matrix, then laplace expands the
scalar arguments into arrays of the same size as the matrix arguments with
all elements of the array equal to the scalar.

Example 3.6.43.
>> syms t a b c s x y z;
M = laplace(tˆ2, [t, a; b, c], [s, x; y, z]);
The answer is
M=
[2/sˆ3, tˆ2/x]
[tˆ2/y, tˆ2/z].

Example 3.6.44.
>> syms t s x y z;
M = laplace(tˆ2, t, [s, x; y, z]);
The answer is
M=
[2/sˆ3, 2/xˆ3]
[2/yˆ3, 2/zˆ3].

Example 3.6.45.
>> syms t s;
M = laplace(sinh(3*t), t, [s, s; s, s]);
The answer is
M=
[3/(sˆ2 − 9), 3/(sˆ2 − 9)];
[3/(sˆ2 − 9), 3/(sˆ2 − 9)].

If the first argument is a symbolic function, then the second argument


must be a scalar.

Example 3.6.46.
>> syms f 1(t) f 2(t) s u;
f 1(t) = cosh(t);
f 2(t) = t* exp(4*t);
F = laplace([f 1(t) f 2(t)], t, [s u]);
The answer is F = [s/(sˆ2 − 1), 1/(u − 4)ˆ2].

3.6.2 Inverse Laplace Transform
The syntax is one of the following:

1. f = ilaplace(F). This computes the inverse Laplace transform (the


original) f of the symbolic expression F . By default the variable of F
is s and the variable of the computed inverse transform f is t;

2. f = ilaplace(F, x). This computes the inverse Laplace transform f


as a function of the parameter x instead of the default variable t;

3. f = ilaplace(F, y, x). This computes the inverse Laplace transform


f as a function of the variable x instead of the default variable t and
considers that F is a function of the variable y instead of the default
variable s.

Example 3.6.47.
>> syms s;
f = ilaplace(5/(sˆ2 − 25));
The answer is f = exp(5*t)/2 − exp(−5*t)/2.
Another way that this can be done is the following:
>> syms s;
f = simplify(ilaplace(5/(sˆ2 − 25)));
The answer is f = sinh(5*t).

Example 3.6.48.
>> syms x s;
F = (sˆ2 − s + 2)/(sˆ3 − sˆ2 + s − 1); f = ilaplace(F, x);
The answer is f = exp(x) − sin(x).

Example 3.6.49.
>> syms a x y;
F = 1/sqrt(y + a); f = ilaplace(F, y, x);
F = 1/(a + y)ˆ(1/2);
The answer is f = exp(−a*x)/(xˆ(1/2)*piˆ(1/2)).

Example 3.6.50.
>> syms s;

F = exp(−pi*s)/s;
f = ilaplace(F );
The answer is f = heaviside(t − pi).

Example 3.6.51. >> f = dirac(t);


F = laplace(f );
The answer is F = 1.
>> g = ilaplace(F );
The answer is g = dirac(t).

Example 3.6.52.
>> f = ilaplace(exp(−5*s));
The answer is f = dirac(t − 5).

Example 3.6.53.
>> syms a s;
F = piecewise(a < 0, 0, 0 <= a, exp(−a*s)); % i.e. F = 0 for a < 0 and
F = e−as for a ≥ 0
f = ilaplace(F );
The answer is f = piecewise(a < 0, 0, 0 <= a, dirac(a − t)).

3.6.3 Transfer Matrices


Using the tf command, one computes the transfer matrix/function of a
MIMO/SISO system or creates a transfer function model. The syntax is one
of the following:

1. T M sys = tf(sys), which computes the transfer matrix/function of a


MIMO/SISO system sys;

2. sys = tf(num, den), which creates a continuous-time transfer func-


tion with polynomial numerator(s) and denominator(s) specified by
num and den and deg(num) ≤ deg(den). In the SISO case, num and
den are the real or complex-valued row vectors of numerator and de-
nominator coefficients ordered in descending powers of s. To create
MIMO transfer matrices, one specifies the numerator and denomina-
tor of each SISO entry. In this case num and den are cell arrays of

row vectors with as many rows as outputs and as many columns as in-
puts. One can set den to the row vector representation of the common
denominator;
3. sys = tf(num, den, Ts) creates a discrete-time transfer function with
sample time Ts (in seconds). If T s = −1 or T s = [ ], then the sample
time is unspecified.
The default variable of the transfer matrix of a discrete-time system is z.
This is because the Z transform is used (instead of the Laplace transform).
Example 3.6.54.
>> A = [1 3 − 4 3; 0 0 1 0; 0 0 0 1; −2 − 1 − 3 0];
B = [2 1; 0 0; 0 0; 0 1];
C = [4 − 4 1 0; 0 9 0 0]; D = [1 2; 0 0];
sys = ss(A, B, C, D); % generates the continuous-time system sys
T M = tf(sys); % computes the transfer matrix TM of sys
The answer is
sys =
A=
x1 x2 x3 x4
x1 1 3 −4 3
x2 0 0 1 0
x3 0 0 0 1
x4 −2 −1 −3 0
B=
u1 u2
x1 2 1
x2 0 0
x3 0 0
x4 0 1
C=
x1 x2 x3 x4
y1 4 −4 1 0
y2 0 9 0 0
D=
u1 u2
y1 1 2
y2 0 0

Continuous-time state-space model.

TM =

From input 1 to output...


1: (sˆ4 + 7sˆ3 + 9sˆ2 + 10s + 29)/(sˆ4 − sˆ3 + 9sˆ2 − 10s + 5)
2: −36/(sˆ4 − sˆ3 + 9sˆ2 − 10s + 5)
From input 2 to output...
1: (2sˆ4 + 2sˆ3 + 31sˆ2 − 31s + 38)/(sˆ4 − sˆ3 + 9sˆ2 − 10s + 5)
2: (9s − 27)/(sˆ4 − sˆ3 + 9sˆ2 − 10s + 5)
Continuous-time transfer function.

Example 3.6.55.
>> h = 0.1;
sysd = ss(A, B, C, D, h); % generates the discrete-time system sysd with the
sample time h
M T = tf(sysd); % computes the transfer matrix TM of sysd
The answer is

sysd =

A=
x1 x2 x3 x4
x1 1 3 −4 3
x2 0 0 1 0
x3 0 0 0 1
x4 −2 −1 −3 0
B=
u1 u2
x1 2 1
x2 0 0
x3 0 0
x4 0 1

C=
x1 x2 x3 x4
y1 4 −4 1 0
y2 0 9 0 0
D=
u1 u2
y1 1 2
y2 0 0

Sample time: 0.1 seconds


Discrete-time state-space model.

TM =

From input 1 to output...


1: (zˆ4 + 7zˆ3 + 9zˆ2 + 10z + 29)/(zˆ4 − zˆ3 + 9zˆ2 − 10z + 5)
2: −36/(zˆ4 − zˆ3 + 9zˆ2 − 10z + 5)
From input 2 to output...
1: (2zˆ4 + 2zˆ3 + 31zˆ2 − 31z + 38)/(zˆ4 − zˆ3 + 9zˆ2 − 10z + 5)
2: (9z − 27)/(zˆ4 − zˆ3 + 9zˆ2 − 10z + 5)

Sample time: 0.1 seconds


Discrete-time transfer function.

Example 3.6.56.
>> num = {[1 − 2]; 3};
den = {[1 0 4]; [1 7]};
T M = tf(num, den);
The answer is

TM =

From input 1 to output...
1: (s − 2)/(sˆ2 + 4)
2: 3/(s + 7)
Continuous-time transfer function.
Example 3.6.57.
>> num = {[1 0 3][1 − 1]; 0 [1 − 4]};
den = [1 − 5 6];
T s = 0.2;
T M d = tf(num, den, T s); % computes the discrete transfer matrix TMd with common denominator d(z) = z^2 − 5z + 6 and sample time T s = 0.2 seconds
The answer is
TS =
0.2000
T MD =
From input 1 to output...
1: (zˆ2 + 3)/(zˆ2 − 5z + 6)
2: 0
From input 2 to output...
1: (z − 1)/(zˆ2 − 5z + 6)
2: (z − 4)/(zˆ2 − 5z + 6)
Sample time: 0.2 seconds
Discrete-time transfer function.

3.6.4 Partial Fraction Decomposition. Residue
The syntax is one of the following:

1. [r, p, k] = residue(P, Q). This computes the coefficients, poles and quo-
P
tient of the partial fraction decomposition of the ratio , where P and
Q
Q are polynomials;

2. [P, Q] = residue(r, p, k). This returns the coefficients of the two poly-
nomials P and Q corresponding to the partial fraction expansion.
If the ratio P/Q has simple poles p1, p2, ..., pn, then the partial fraction decomposition is
P/Q = k + r1/(s − p1) + r2/(s − p2) + ··· + ri/(s − pi) + ··· + rn/(s − pn),
where r1, r2, ..., rn are the residues of P/Q at these poles and k is a polynomial (see Example 3.6.58). The output is r = [r1 r2 ... ri ... rn], p = [p1 p2 ... pi ... pn] and k.
If a pole pi has multiplicity m, then the partial fraction decomposition contains the terms
ri1/(s − pi) + ri2/(s − pi)^2 + ··· + rij/(s − pi)^j + ··· + rim/(s − pi)^m
and in the vector r, ri is replaced by the sequence ri1 ri2 ... rij ... rim (see Example 3.6.59).

Example 3.6.58.
>> P = [1 0 1 − 4]; Q = [1 − 3 2];
[r, p, k] = residue(P, Q);
r = [6; 2]
p = [2; 1]
k = [1 3]

Therefore, (s^3 + s − 4)/(s^2 − 3s + 2) = s + 3 + 6/(s − 2) + 2/(s − 1).
Example 3.6.59.
>> P = [3 3 3]; Q = [1 0 − 3 2];
[r, p, k] = residue(P, Q);
r=
1.0000
2.0000
3.0000

p=
− 2.0000
1.0000
1.0000

k=
[]

Therefore, (3s^2 + 3s + 3)/(s^3 − 3s + 2) = 1/(s + 2) + 2/(s − 1) + 3/(s − 1)^2 and
L^{−1}[(3s^2 + 3s + 3)/(s^3 − 3s + 2)] = e^{−2t} + 2e^t + 3te^t.
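The decomposition returned by residue in Example 3.6.58 can be verified numerically (an illustrative Python sketch, independent of MATLAB):

```python
# Check that (s^3 + s - 4)/(s^2 - 3s + 2) = s + 3 + 6/(s - 2) + 2/(s - 1)
# at a few sample points away from the poles s = 1 and s = 2.

def ratio(s):
    return (s**3 + s - 4) / (s**2 - 3*s + 2)

def expansion(s):
    return s + 3 + 6/(s - 2) + 2/(s - 1)

max_err = max(abs(ratio(s) - expansion(s)) for s in (0.5, 3.0, -2.0, 10.0))
```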

3.7 Maple Applications


3.7.1 Laplace Transform
The commands with(inttrans, laplace) or with(inttrans) allow the
use of the command laplace, which is the standard one. The syntax is one
of the following:

1. F = laplace(f(t), t, s). This computes the Laplace transform F of


the symbolic expression f . The variable of the function f is t and the
computed transform F is a function of the variable s;

2. F = laplace(f(t), t, p). This computes the Laplace transform F as a


function of the parameter p instead of the default variable s;

3. F = laplace(f(t), x, p). This computes the Laplace transform F as a
function of the variable p instead of the default variable s and considers
that f is a function of the variable x instead of the default variable t.

Example 3.7.1.
> with(inttrans):
laplace(exp(t) − sin(t), t, s)
1/(s − 1) − 1/(s^2 + 1)
Example 3.7.2.
> with(inttrans):
laplace(exp(t) − sin(t), t, p)
1/(p − 1) − 1/(p^2 + 1)
Example 3.7.3.
> with(inttrans):
laplace(exp(t) − sin(t), x, p)
1/(p − 1) − 1/(p^2 + 1)

Transforms of Dirac Impulse Function (Distribution) and Heavi-


side’s Step Function
Example 3.7.4.
> with(inttrans):
F = laplace(Dirac(t) − sin(t), t, s)

F =1

Example 3.7.5.
> f := t → 1:
assume(t, positive):
with(inttrans):
F = laplace(f (t), t, s)
F = 1/s

Dirac and Delay
Example 3.7.6.
> assume(a, positive):
F = laplace(Dirac(t − a), t, s)
F = e^{−sa~}

Heaviside and Delay


Example 3.7.7.
> assume(a, positive):
F = laplace(Heaviside(t − a), t, s)
F = e^{−sa~}/s
Example 3.7.8.
> assume(a, positive):
laplace(Heaviside(t − a)myfunc(t), t, s)
e^{−sa~} laplace(myfunc(t~ + a~), t~, s)
Example 3.7.9.
> with(inttrans):
laplace(1 − Heaviside(t − 3), t, s)
(1 − e^{−3s})/s
Example 3.7.10.
> with(inttrans):
laplace(Heaviside(t) − 2 · Heaviside(t − 4) + Heaviside(t − 2 · 4), t, s)
(1 − 2e^{−4s} + e^{−8s})/s
Remark 3.7.11. In Maple the product is indicated by an asterisk, namely
*. Hence, one writes 2*Heaviside for 2 · Heaviside.
Example 3.7.12.
> with(inttrans):
laplace(t · (Heaviside(t) − Heaviside(t − 3)) + Heaviside(t − 4), t, s)
e^{−4s}/s + (1 − (3s + 1)e^{−3s})/s^2

Exponentials
Example 3.7.13.
> with(inttrans):
F = laplace(exp(a · t), t, s)
F = 1/(s − a)
Example 3.7.14.
> with(inttrans):
F = laplace(exp(I · a · t), t, s)
F = 1/(s − Ia)
Example 3.7.15.
> with(inttrans):
F = laplace(exp(a · t) − exp(b · t), t, s)

F = (−b + a)/((s − a)(s − b))

Translation
Example 3.7.16.
> with(inttrans):
F = laplace(t^3 · exp(5 · t), t, s)
F = 6/(s − 5)^4

Remark 3.7.17. In Maple one writes nˆ2 for the power n^2.

Example 3.7.18.
> with(inttrans):
F = laplace((t^3) · exp(a · t), t, s)
F = 6/(s − a)^4

Example 3.7.19.
> with(inttrans):

a := ’a’: b := ’b’:
f := t → exp(a · t) · cosh(b · t):
assume(t, positive):
laplace(f (t), t, s)
(s − a)/((s − a)^2 − b^2)

Sine and Cosine


Example 3.7.20.
> with(inttrans):
F = laplace(sin(a · t), t, s)
F = a/(s^2 + a^2)
Example 3.7.21.
> with(inttrans):
F = laplace(cos(a · t), t, s)
F = s/(s^2 + a^2)
Example 3.7.22.
> with(inttrans):
F = laplace((cos(a · t))^2, t, s)
F = (2a^2 + s^2)/((s^2 + 4a^2)s)
Example 3.7.23.
> with(inttrans):
F = laplace(sinh(a · t), t, s)
F = a/(s² − a²)
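The sine and cosine transforms above can also be verified by direct quadrature. The sketch below (an illustration in Python, not the book's Maple code; `laplace_num` and the parameter values s = 2, a = 3 are arbitrary choices) compares numerical Laplace integrals against a/(s² + a²) and s/(s² + a²).

```python
import math

def laplace_num(f, s, T=20.0, n=40000):
    # Composite Simpson rule for int_0^T f(t) exp(-s t) dt.
    h = T / n
    acc = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * f(t) * math.exp(-s * t)
    return acc * h / 3

s, a = 2.0, 3.0
sin_num = laplace_num(lambda t: math.sin(a*t), s)
cos_num = laplace_num(lambda t: math.cos(a*t), s)
sin_exact = a / (s**2 + a**2)   # F = a/(s^2 + a^2)
cos_exact = s / (s**2 + a**2)   # F = s/(s^2 + a^2)
```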

Power Functions
Example 3.7.24.
> with(inttrans):
f := t → t⁵ − 1:
F = laplace(f (t), t, s)
F = 120/s⁶ − 1/s

Example 3.7.25.
> with(inttrans):
assume(n, natural):
F = laplace(t^n, t, s)
F = n! s^(−n−1)
Example 3.7.26.
> with(inttrans):
 
F = laplace(sin(a · t)/t, t, s)
F = arctan(a/s)
Example 3.7.27.
> with(inttrans):
 
F = laplace(1/sqrt(t), t, s)
F = sqrt(π/s)
Example 3.7.28.
> with(inttrans):
F = laplace((1 − exp(a · t))/t, t, s)

F = ln(s − a) − ln(s)

Some properties of the Laplace transform can be obtained using Maple.

Time Delay
Example 3.7.29.
> with(inttrans):
laplace(Heaviside(t − a)myfunc(t − a), t, s)

e^(−as) laplace(myfunc(t), t, s)

Differentiation of the Image


Example 3.7.30.
> with(inttrans):

addtable(laplace, myfunc(t), Myfunc(s), t, s):
laplace(t · myfunc(t), t, s)
 

−(∂/∂s) laplace(myfunc(t), t, s)

laplace(t² · myfunc(t), t, s)

(∂²/∂s²) laplace(myfunc(t), t, s)

3.7.2 Inverse Laplace Transform


The commands with(inttrans, invlaplace) or with(inttrans) allow
the use of the command invlaplace, which is the standard one. The syntax
is one of the following:
1. f = invlaplace(F(s), s, t). This computes the inverse Laplace trans-
form (the original) f of the symbolic expression F . The variable of the
function F is s and the computed inverse transform f is a function of
the variable t;
2. f = invlaplace(F(s), s, x). This computes the inverse Laplace trans-
form f as a function of the parameter x;
3. f = invlaplace(F(s), p, x). This computes the inverse Laplace trans-
form f as a function of the variable x and considers that F is a function
of the variable p.
Example 3.7.31.
> with(inttrans):
 
f = invlaplace(1/s³ − s/(s² + 25), s, t)
f = t²/2 − cos(5t)
Example 3.7.32.
> with(inttrans):
 
f = invlaplace(1/s³ − s/(s² + 25), s, x)
f = x²/2 − cos(5x)

Example 3.7.33.
> with(inttrans):
 
f = invlaplace(1/p³ − s/(p² + 25), p, x)
f = x²/2 − (s/5) sin(5x)

Translation
Example 3.7.34.
> with(inttrans):
assume(a, positive):
 
f = invlaplace(1/sqrt(s + a), s, t)
f = e^(−at)/sqrt(πt)

Dirac and Heaviside


Example 3.7.35.
> with(inttrans):
f = invlaplace(1, s, t)
f = Dirac(t)

Example 3.7.36.
> with(inttrans):
 
f = invlaplace(1/s, s, t)
f =1

Example 3.7.37.
> with(inttrans):
f = invlaplace(exp(t), s, t)

f = et Dirac(t)

Example 3.7.38.
> with(inttrans):

assume(a, positive):
 
f = invlaplace(1/(s − a), s, t)

f = eat

Example 3.7.39.
> with(inttrans):
F := s → 1/s⁵:
invlaplace(F (s), s, t)
t⁴/24
Example 3.7.40.
> with(inttrans):
F := s → −7/(s² + 16):
invlaplace(F (s), s, t)
−(7/4) sin(4t)
Example 3.7.41.
> with(inttrans):
F := s → 1/(s · (s − 2)):
invlaplace(F (s), s, t)
−1/2 + e^(2t)/2
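A simple consistency check for inverse transforms is to transform the recovered original forward again. The Python sketch below (illustrative only; `laplace_num` is a hypothetical helper, and the evaluation points s = 3 and s = 1 are arbitrary) does this for Examples 3.7.41 and 3.7.40.

```python
import math

def laplace_num(f, s, T=40.0, n=40000):
    # Composite Simpson rule for int_0^T f(t) exp(-s t) dt.
    h = T / n
    acc = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        acc += w * f(t) * math.exp(-s * t)
    return acc * h / 3

# Example 3.7.41: F(s) = 1/(s(s-2)) has original -1/2 + e^(2t)/2 (take s > 2).
s = 3.0
back = laplace_num(lambda t: -0.5 + math.exp(2*t)/2, s)
F = 1 / (s * (s - 2))

# Example 3.7.40: F(s) = -7/(s^2 + 16) has original -(7/4) sin(4t).
s2 = 1.0
back2 = laplace_num(lambda t: -7/4 * math.sin(4*t), s2)
F2 = -7 / (s2**2 + 16)
```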

3.7.3 Differential Equations


Example 3.7.42.
> with(inttrans):
ode1 := diff(y(t), t, t) + 2 · diff(y(t), t) + 5 · y(t) = 0:
y(0) := 3: D(y)(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)

3e^(−t) cos(2t) + (3/2) e^(−t) sin(2t)

Example 3.7.43.
> with(inttrans):
ode1 := diff(y(t), t) + 2 · y(t) = exp(2 · t):
y(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
(1/2) sinh(2t)
Example 3.7.44.
> with(inttrans):
ode1 := diff(y(t), t, t) + diff(y(t), t) = cos(t) + sin(t):
y(0) := 0: D(y)(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
1 − cos(t)

Example 3.7.45.
> with(inttrans):
ode1 := diff(y(t), t, t) + 4 · y(t) = 0:
y(0) := 5: D(y)(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
5 cos(2t)
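The solutions returned by Maple in Examples 3.7.42, 3.7.43 and 3.7.45 can be validated by plugging them back into their differential equations. The Python sketch below (not the book's code; the finite-difference helpers `d1`, `d2` and the test point t = 0.7 are illustrative choices) checks that each residual is numerically zero.

```python
import math

def d1(y, t, h=1e-4):
    # Central finite-difference approximation of y'(t).
    return (y(t + h) - y(t - h)) / (2 * h)

def d2(y, t, h=1e-4):
    # Central finite-difference approximation of y''(t).
    return (y(t + h) - 2*y(t) + y(t - h)) / h**2

def y1(t):  # Example 3.7.42: y'' + 2y' + 5y = 0, y(0) = 3, y'(0) = 0
    return 3*math.exp(-t)*math.cos(2*t) + 1.5*math.exp(-t)*math.sin(2*t)

def y2(t):  # Example 3.7.43: y' + 2y = e^(2t), y(0) = 0
    return math.sinh(2*t) / 2

def y3(t):  # Example 3.7.45: y'' + 4y = 0, y(0) = 5, y'(0) = 0
    return 5*math.cos(2*t)

t = 0.7
r1 = d2(y1, t) + 2*d1(y1, t) + 5*y1(t)            # should be ~0
r2 = d1(y2, t) + 2*y2(t) - math.exp(2*t)          # should be ~0
r3 = d2(y3, t) + 4*y3(t)                          # should be ~0
```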

3.7.4 Transfer Matrices/Functions


The command with(DynamicSystems) allows the use of the command
TransferFunction, which is the standard one. The syntax is one of the
following:

1. A := Matrix([[a11, a12, . . . , a1n], . . . , [an1, an2, . . . , ann]])
   B := Matrix(. . .)
   C := Matrix(. . .)
   D := Matrix(. . .)
   sys := TransferFunction(A, B, C, D)
   PrintSystem(sys)

2. sys := TransferFunction(num, den)
   PrintSystem(sys)

The parameters num and den are the coefficients of the numerator and
denominator, respectively. For single-input/single-output systems, num and
den are vectors formed with the coefficients of polynomials starting with the
coefficient of the highest order term. For multi-input/multi-output systems,
num and den are matrices of vectors.

Example 3.7.46.
> with(DynamicSystems):
sys := TransferFunction([3, −1], [1, 5, 6]):
PrintSystem(sys)
Transfer Function
  continuous
  1 output(s); 1 input(s)
  inputvariable = [u1(s)]
  outputvariable = [y1(s)]
  tf1,1 = (3s − 1)/(s² + 5s + 6)

Chapter 4

Z Transform

The Z transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex frequency-domain representation. This makes it widely used in the analysis and design of digital control and signal processing systems.
This transform method may be traced back to De Moivre around the year
1730 when he introduced the concept of ’generating functions’ in probability
theory. Closely related to generating functions is the Z transform, which
may be considered as the discrete analogue of the Laplace transform.
The basic idea, known now as the Z transform, was familiar to Laplace,
and it was re-introduced in 1947 by Hurewicz and others as a way to treat
sampled-data control systems used with radar. It gives a tractable way to
solve linear, constant-coefficient difference equations. It was later dubbed
’the Z transform’ by Ragazzini and Zadeh in the sampled-data control group
at Columbia University in 1952.
From a mathematical point of view, the Z transform can also be regarded as a Laurent series, where the sequence of numbers under consideration is viewed as the (Laurent) expansion of an analytic function.
A complete study of the Z Transform is included in [8].

4.1 Definition
The Laplace transform of an original function f : R → C is defined by the improper integral

F(p) = ∫_0^∞ f(t) e^(−pt) dt.

By analogy, for a function f : Z → C one defines the discrete Laplace transform or the Dirichlet transform by the series

F(p) = Σ_{t=0}^∞ f(t) e^(−pt).

One obtains a simpler formula by using the change of variable z = e^p; if one denotes the sum of the new series by F*(z), one produces a transform

F*(z) = Σ_{t=0}^∞ f(t) z^(−t)

which is called the Z transform of the discrete function f . This justifies the
following definitions.

Definition 4.1.1. A function f : Z → C is called an original function if it


has the following properties:

i) f (t) = 0 for t < 0;

ii) there exist M > 0 and R > 0 so that |f (t)| ≤ M Rt , ∀t ∈ {0, 1, . . .}.

The number R (denoted sometimes by Rf ) represents the radius of a disk.


The function f is also called the signal in the time domain.

Definition 4.1.2. The function F ∗ : C \ {0} → C, defined by



F*(z) = Σ_{t=0}^∞ f(t) z^(−t)    (4.1)

is called the Z transform of the original function f .

The Z transform F ∗ (z) denoted by Z[f (t)] is also called the image of the
function f or the signal in the frequency domain.

Proposition 4.1.3. The series (4.1) is convergent on |z| > R (the exterior
of the disk of radius R centered in 0, where R = Rf is the radius of the
original function f ). In any closed region |z| ≥ R0 > R the series (4.1) is
uniformly convergent.

Proof. One uses the geometric series Σ_{n=0}^∞ zⁿ = 1/(1 − z), with the convergence condition |z| < 1. Since R/|z| < 1 is equivalent to |z| > R, it follows that

|F*(z)| = |Σ_{t=0}^∞ f(t) z^(−t)| ≤ Σ_{t=0}^∞ |f(t)| |z|^(−t) ≤ M Σ_{t=0}^∞ (R/|z|)^t = M |z|/(|z| − R) < ∞.

Similarly, for |z| ≥ R₀ > R (see Figure 4.1) one obtains |f(t) z^(−t)| ≤ M (R/R₀)^t. Since the geometric series Σ_{t=0}^∞ (R/R₀)^t with R/R₀ < 1 is convergent, according to the Weierstrass Criterion (see [10, pp. 265-266] and [20, pp. 108-109]), the series (4.1) is uniformly convergent.

Figure 4.1: The Domain |z| ≥ R₀ > R

It follows from Proposition 4.1.3 that the function F ∗ (z) is analytic on


the domain |z| > R.

Example 4.1.4. Let us consider the Kronecker function (see Figure 4.2)

δ₀(t) = {0, t ≠ 0; 1, t = 0}.

Figure 4.2: The Kronecker Function

This function plays the role of the Dirac distribution δ for the case of
the discrete-time control systems and signals. It is also called the discrete δ
function or the discrete impulse function.

Solution. Obviously,

Z[δ₀(t)] = δ₀(0) + δ₀(1)(1/z) + · · · + δ₀(t)(1/z^t) + · · · = 1.
Example 4.1.5. Consider Heaviside’s discrete step function (see Figure 4.3)

u(t) = {0, t < 0; 1, t ≥ 0}.

Figure 4.3: Heaviside’s Discrete Step Function

Solution. Since |u(t)| = 1, ∀t ≥ 0, it follows that R = 1. By applying Proposition 4.1.3 and the geometric series, one obtains for |z| > 1 the transform

Z[u(t)] = Σ_{t=0}^∞ u(t) z^(−t) = Σ_{t=0}^∞ (1/z)^t = z/(z − 1).

Example 4.1.6. Consider the exponential function

p(t) = u(t) a^t = {0, t < 0; a^t, t ∈ Z₊}.
Solution. Since p(t) verifies the inequality |p(t)| = |a|^t, its radius is R = |a|. Then, for |z| > |a| it follows that |a/z| < 1 and we can apply again the geometric series. We get that

Z[p(t)] = Σ_{t=0}^∞ a^t z^(−t) = Σ_{t=0}^∞ (a/z)^t = z/(z − a).

In particular, for a = e^λ, λ ∈ C, one gets R = |a| = |e^λ| = e^(Re λ); hence, for |z| > e^(Re λ) one obtains the Z transform of the exponential, namely

Z[e^(λt)] = z/(z − e^λ).
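The series (4.1) can be summed numerically to check such closed forms. The Python sketch below (illustrative only; the helper `Ztrans`, the values z = 3, a = 2, λ = 0.5 and the truncation order N are arbitrary choices) compares a truncated partial sum against z/(z − a) and z/(z − e^λ).

```python
import math

def Ztrans(f, z, N=400):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t); accurate when |z| > R.
    return sum(f(t) * z**(-t) for t in range(N + 1))

z, a = 3.0, 2.0
lhs = Ztrans(lambda t: a**t, z)       # partial sum of Z[a^t]
rhs = z / (z - a)                     # closed form z/(z - a)

lam = 0.5
lhs_exp = Ztrans(lambda t: math.exp(lam*t), z)
rhs_exp = z / (z - math.exp(lam))     # Z[e^(lambda t)] = z/(z - e^lambda)
```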

4.2 Properties of the Z Transform


The following results can be used to extend the list of Z transforms to
the most important functions.
Theorem 4.2.1 (Linearity). If f and g are original functions with radii Rf
and Rg , respectively, then for |z| > max(Rf , Rg ),
Z[αf (t) + βg(t)] = αZ[f (t)] + βZ[g(t)], ∀α, β ∈ C. (4.2)
Proof. If |f (t)| ≤ M1 Rft and |g(t)| ≤ M2 Rgt , then by denoting max(Rf , Rg )
with R, one obtains the following:
|αf (t) + βg(t)| ≤ |α| |f (t)| + |β| |g(t)| ≤ (|α| M1 + |β| M2 ) Rt .
Hence, the linear combination αf (t) + βg(t) verifies condition ii) from Defi-
nition 4.1.1 and it is an original. For |z| > R the three series from formula
(4.2) are convergent and

Z[αf(t) + βg(t)] = Σ_{t=0}^∞ [αf(t) + βg(t)] z^(−t) = α Σ_{t=0}^∞ f(t) z^(−t) + β Σ_{t=0}^∞ g(t) z^(−t) = αZ[f(t)] + βZ[g(t)].

Example 4.2.2. Consider the function f (t) = cos(ωt), ω > 0.

Solution. According to Definition 4.1.1,



f(t) = u(t) cos(ωt) = {0, t < 0; cos(ωt), t ∈ Z₊}.

Using Theorem 4.2.1 (Linearity), the expression of the cosine from Euler’s
formula and the transform of the exponential (see Example 4.1.6), one obtains

Z[cos(ωt)] = Z[(e^(iωt) + e^(−iωt))/2] = (1/2)(Z[e^(iωt)] + Z[e^(−iωt)])
= (1/2)(z/(z − e^(iω)) + z/(z − e^(−iω)))
= (1/2) · z[2z − (e^(iω) + e^(−iω))]/(z² − (e^(iω) + e^(−iω))z + e^(iω)e^(−iω))
= z(z − cos ω)/(z² − 2z cos ω + 1).
Analogously one gets the following Z transforms:
Z[sin(ωt)] = z sin ω/(z² − 2z cos ω + 1);
Z[cosh(ωt)] = z(z − cosh ω)/(z² − 2z cosh ω + 1);
Z[sinh(ωt)] = z sinh ω/(z² − 2z cosh ω + 1);
Z[sin(ωt + ϕ)] = cos ϕ · Z[sin ωt] + sin ϕ · Z[cos ωt] = z(z sin ϕ + sin(ω − ϕ))/(z² − 2z cos ω + 1).
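These trigonometric transforms can be confirmed by partial summation as well. The sketch below (Python illustration; `Ztrans`, z = 2 and ω = 1 are assumptions made for the check) compares the series against the closed forms for sine and cosine.

```python
import math

def Ztrans(f, z, N=400):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

z, w = 2.0, 1.0
D = z*z - 2*z*math.cos(w) + 1                 # common denominator
cos_exact = z*(z - math.cos(w)) / D           # Z[cos(wt)]
sin_exact = z*math.sin(w) / D                 # Z[sin(wt)]
cos_num = Ztrans(lambda t: math.cos(w*t), z)
sin_num = Ztrans(lambda t: math.sin(w*t), z)
```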
Theorem 4.2.3 (Similarity). If R is the radius of the original function f
and a ∈ C \ {0}, then for |z| > |a|R, it follows that
Z[a^t f(t)] = F*(z/a). (4.3)

Proof. It follows that |a^t f(t)| ≤ |a|^t M R^t = M(|a|R)^t. Hence, the radius of the original function a^t f(t) is |a|R. For |z| > |a|R, one obtains

Z[a^t f(t)] = Σ_{t=0}^∞ [a^t f(t)] z^(−t) = Σ_{t=0}^∞ f(t) (z/a)^(−t) = F*(z/a).

Example 4.2.4. Determine the Z transform of the function f(t) = a^t cos ωt, a ≠ 0, ω > 0.
Solution. According to Theorem 4.2.3 (Similarity) and Example 4.2.2, it follows that

Z[a^t cos ωt] = ((z/a)((z/a) − cos ω)) / ((z/a)² − 2(z/a) cos ω + 1) = z(z − a cos ω)/(z² − 2az cos ω + a²).

In particular, for a = e^λ, one gets

Z[e^(λt) cos ωt] = z(z − e^λ cos ω)/(z² − 2z e^λ cos ω + e^(2λ)).
Theorem 4.2.5 (Time Delay).
Z[f(t − n)] = z^(−n) F*(z), ∀n ∈ N*. (4.4)
Proof. According to Definition 4.1.2, it follows that

Z[f(t − n)] = Σ_{t=0}^∞ f(t − n) z^(−t).

We restore the signal f(t) by changing the summation index: t − n = k. Hence, t = k + n and the lower limit t = 0 is transformed into k = −n. Since f(k) = 0 for k < 0, one gets

Z[f(t − n)] = Σ_{k=−n}^∞ f(k) z^(−(k+n)) = z^(−n) (Σ_{k=−n}^{−1} f(k) z^(−k) + Σ_{k=0}^∞ f(k) z^(−k))
= z^(−n) Σ_{k=0}^∞ f(k) z^(−k) = z^(−n) F*(z).

Example 4.2.6. Compute Z[u(t − n)], ∀n ∈ N∗ .

Solution. One gets

Z[u(t − n)] = z^(−n) Z[u(t)] = z^(−n) · z/(z − 1) = 1/(z^(n−1)(z − 1)), ∀n ∈ N* (see Figure 4.4).

Figure 4.4: The Discrete Signal u(t − n)

Example 4.2.7. Compute Z[u(t − 5) e^(t−5)].

Solution. One obtains

Z[u(t − 5) e^(t−5)] = z^(−5) Z[e^t] = z^(−5) · z/(z − e) = 1/(z⁴(z − e)).

Example 4.2.8. Compute Z[u(t − 1)(−1)^(t−1)].

Solution. According to Example 4.1.6 we get that Z[(−1)^t] = z/(z + 1). It follows that Z[u(t − 1)(−1)^(t−1)] = z^(−1) · z/(z + 1) = 1/(z + 1).
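The delayed-signal transforms of Examples 4.2.6 and 4.2.8 lend themselves to the same kind of numerical check. In the Python sketch below (an illustration; `Ztrans` and the values n = 4, z = 2 and z = 3 are arbitrary), partial sums are compared with 1/(z^(n−1)(z − 1)) and 1/(z + 1).

```python
def Ztrans(f, z, N=400):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

# Example 4.2.6 with n = 4: Z[u(t-4)] = 1/(z^3 (z-1)).
z, n = 2.0, 4
delayed_step = Ztrans(lambda t: 1.0 if t >= n else 0.0, z)
step_exact = 1 / (z**(n-1) * (z - 1))

# Example 4.2.8: Z[u(t-1)(-1)^(t-1)] = 1/(z+1).
z2 = 3.0
alt = Ztrans(lambda t: (-1.0)**(t-1) if t >= 1 else 0.0, z2)
alt_exact = 1 / (z2 + 1)
```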

Theorem 4.2.9 (Second Time Delay).

Z[f(t + n)] = zⁿ (F*(z) − Σ_{t=0}^{n−1} f(t) z^(−t)), ∀n ∈ N*. (4.5)

Proof. As in the proof of Theorem 4.2.5, one makes the following change in the index of summation: t + n = k; hence, t = 0 becomes k = n. Then one adds and subtracts the sum which is missing from the series of F*(z). One obtains

Z[f(t + n)] = Σ_{t=0}^∞ f(t + n) z^(−t) = Σ_{k=n}^∞ f(k) z^(−(k−n))
= zⁿ (Σ_{k=0}^∞ f(k) z^(−k) − Σ_{k=0}^{n−1} f(k) z^(−k))
= zⁿ (F*(z) − Σ_{t=0}^{n−1} f(t) z^(−t)).

Bear in mind that in the last sum we have replaced k by t.

In particular, for n = 1, we get that Z[f (t + 1)] = z(F ∗ (z) − f (0)).


Example 4.2.10. Compute Z[e^(t+3)].
Solution. One gets

Z[e^(t+3)] = z³ (F*(z) − e⁰ − e z^(−1) − e² z^(−2)) = z⁴/(z − e) − z³ − e z² − e² z.
Example 4.2.11 (Right Periodic Functions). Consider the function f with the property f(t + T) = f(t), ∀t ∈ N (hence, the period is T ∈ N*). We denote by F_T*(z) the transform of the first period, i.e. F_T*(z) = Σ_{t=0}^{T−1} f(t) z^(−t). By using the periodicity of the function f and Theorem 4.2.9 (Second Time Delay) one obtains

F*(z) = Σ_{t=0}^∞ f(t) z^(−t) = Σ_{t=0}^∞ f(t + T) z^(−t) = z^T (F*(z) − F_T*(z));

hence,

F*(z) = z^T F_T*(z)/(z^T − 1).

This implies that

F*(z) = (1/(z^T − 1)) Σ_{t=0}^{T−1} f(t) z^(T−t).

For example, the function f(t) = {0, t = 3k; 1, t = 3k + 1; 2, t = 3k + 2} (see Figure 4.5) has the period T = 3 and the transform F*(z) = (z² + 2z)/(z³ − 1).
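The transform of this periodic signal can be verified directly from the series. The Python sketch below (illustrative; `Ztrans` and the evaluation point z = 2 are assumptions) sums the periodic sequence 0, 1, 2, 0, 1, 2, … and compares against (z² + 2z)/(z³ − 1).

```python
def Ztrans(f, z, N=600):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

# Periodic signal with values 0, 1, 2 repeating (period T = 3).
f = lambda t: [0.0, 1.0, 2.0][t % 3]
z = 2.0
series = Ztrans(f, z)
closed = (z**2 + 2*z) / (z**3 - 1)   # F*(z) = (z^2 + 2z)/(z^3 - 1)
```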

Figure 4.5: The Graphic Representation of f (t)

Conversely, if a function F*(z) has the form

F*(z) = (a_0 z^T + a_1 z^(T−1) + · · · + a_{T−1} z)/(z^T − 1),

then it is the transform of the function f which is periodic of period T and has the values f(kT + i) = a_i, i = 0, 1, . . . , T − 1, k ∈ N.

For discrete functions, the role of the derivative is played by the difference
function.

Definition 4.2.12. The function ∆f (t) defined by ∆f (t) = 0, for t < 0


and ∆f (t) = f (t + 1) − f (t), for t ∈ {0, 1, . . .} is called the difference of the
function f (t). The difference of order n is defined by the following recurrence:
∆n f (t) = ∆(∆n−1 f (t)).

Obviously, the operator ∆ is linear and one can prove by recurrence that

∆ⁿf(t) = Σ_{k=0}^{n} (−1)^(n−k) C(n, k) f(t + k),

where C(n, k) denotes the binomial coefficient.

Theorem 4.2.13 (Difference).

Z[∆f (t)] = (z − 1)F ∗ (z) − zf (0). (4.6)

Proof. By using Theorem 4.2.1 (Linearity) and Theorem 4.2.9 (Second Time Delay) for n = 1, one obtains

Z[∆f(t)] = Z[f(t + 1) − f(t)] = Z[f(t + 1)] − Z[f(t)] = z(F*(z) − f(0)) − F*(z) = (z − 1)F*(z) − zf(0).

One can prove by induction the generalization of the previous theorem. We obtain

Z[∆ⁿf(t)] = (z − 1)ⁿ F*(z) − z Σ_{k=0}^{n−1} (z − 1)^(n−k−1) ∆^k f(0),

where ∆⁰f(t) = f(t).

Theorem 4.2.14 (Differentiation of the Image).

Z[−tf(t)] = z (F*(z))′. (4.7)

Proof. The proof is given by the following straightforward computation:

z(F*(z))′ = z (Σ_{t=0}^∞ f(t) z^(−t))′ = z Σ_{t=1}^∞ f(t)(−t) z^(−t−1) = Σ_{t=0}^∞ (−t f(t)) z^(−t) = Z[−tf(t)].

In the first sum, the index of summation starts from 0 by the definition of the Z transform. Also notice that the first term is f(0), which does not depend on t. In the second sum, because of the derivative, this f(0) vanishes, so the lower index of summation is now 1. Then we want to use again the definition of the Z transform. Thus, the series needs to start from 0. This is not an issue because the corresponding term for t = 0 is 0, so adding it does not change anything.

Example 4.2.15. Compute Z[t], Z[t²] and Z[t³].

Solution. One gets

Z[t] = −Z[−t · u(t)] = −z (z/(z − 1))′ = z/(z − 1)²;
Z[t²] = −Z[−t · t] = −z (z/(z − 1)²)′ = z(z + 1)/(z − 1)³;
Z[t³] = −Z[−t · t²] = −z (z(z + 1)/(z − 1)³)′ = z(z² + 4z + 1)/(z − 1)⁴.
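The three polynomial transforms can be checked with partial sums. In the Python sketch below (not the book's code; `Ztrans` and the point z = 2 are illustrative), all three closed forms are compared against the truncated series.

```python
def Ztrans(f, z, N=500):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

z = 2.0
t1 = Ztrans(lambda t: float(t), z)
t2 = Ztrans(lambda t: float(t**2), z)
t3 = Ztrans(lambda t: float(t**3), z)
e1 = z / (z - 1)**2                      # Z[t]
e2 = z*(z + 1) / (z - 1)**3              # Z[t^2]
e3 = z*(z**2 + 4*z + 1) / (z - 1)**4     # Z[t^3]
```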
The inverse function of the difference, which plays the role of the integral
in the case of discrete functions, is the sum.
Definition 4.2.16. One calls the sum of the function f, and one denotes it by Sf(t), the function

Sf(t) = { 0, t ≤ 0; Σ_{k=0}^{t−1} f(k), t ∈ {1, 2, . . .} }. (4.8)

Obviously,

∆Sf(t) = Sf(t + 1) − Sf(t) = Σ_{k=0}^{t} f(k) − Σ_{k=0}^{t−1} f(k) = f(t)

and

S(∆f(t)) = Σ_{k=0}^{t−1} ∆f(k) = Σ_{k=0}^{t−1} (f(k + 1) − f(k)) = f(t) − f(0).

Hence, S(∆f(t)) = f(t) if f(0) = 0.


Theorem 4.2.17 (Sum). For |z| > max(R, 1),

Z[Sf(t)] = F*(z)/(z − 1). (4.9)
Proof. We denote Sf (t) by g(t). We had noticed that ∆g(t) = f (t); more-
over, g(0) = 0. According to Theorem 4.2.13 (Difference) applied to the
function g(t), we obtain that
F ∗ (z) = Z[f (t)] = Z[∆g(t)] = (z − 1)G∗ (z) − zg(0) = (z − 1)G∗ (z).

Hence, G*(z) = F*(z)/(z − 1), an equality which is equivalent to formula (4.9).
z−1

Example 4.2.18. Compute the Z transform of the sum of the function f


which has the values f (0) = 3, f (1) = 5, f (2) = −2, f (3) = 0, f (4) = −1
and f (t) = 0, for t ≥ 5.
Solution. One gets

Z[Sf(t)] = G*(z) = (3 + 5z^(−1) − 2z^(−2) − z^(−4))/(z − 1) = (3z⁴ + 5z³ − 2z² − 1)/(z⁴(z − 1)).
Example 4.2.19. Compute the Z transform of the function f defined as
the sum of the squares of the first t natural numbers.
Solution. According to Example 4.2.15, the Z transform is

F*(z) = Z[St²] = (1/(z − 1)) · z(z + 1)/(z − 1)³ = z(z + 1)/(z − 1)⁴.
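The Sum theorem of Example 4.2.19 can be verified by building Sf explicitly. The Python sketch below (illustrative; the helpers `Ztrans`, `S` and the point z = 2 are assumptions made for the check) compares Z[S t²] against z(z + 1)/(z − 1)⁴.

```python
def Ztrans(f, z, N=500):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

def S(f):
    # Sum function of Definition 4.2.16: Sf(t) = f(0) + ... + f(t-1).
    return lambda t: float(sum(f(k) for k in range(t)))

z = 2.0
lhs = Ztrans(S(lambda k: k**2), z)          # Z[S t^2], partial sum
rhs = z*(z + 1) / (z - 1)**4                # z(z+1)/(z-1)^4
```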
Theorem 4.2.20 (Integration of the Image). If f(0) = 0, then

Z[f(t)/t] = ∫_z^∞ F*(u)/u du. (4.10)
Proof. According to Proposition 4.1.3, the Laurent series whose sum is the analytic function F*(z) is uniformly convergent on any closed set |z| ≥ R₀ with R₀ > R; hence, one can integrate this series term by term. One obtains (using f(0) = 0)

∫_z^b F*(u)/u du = ∫_z^b (1/u) Σ_{t=0}^∞ f(t) u^(−t) du = Σ_{t=0}^∞ f(t) ∫_z^b u^(−t−1) du
= Σ_{t=1}^∞ f(t) (−(1/t) u^(−t)) |_z^b = Σ_{t=1}^∞ (f(t)/t) (z^(−t) − b^(−t)).

Taking the limit as b → ∞, we get that b^(−t) → 0, ∀t ≥ 1. Hence,

∫_z^∞ F*(u)/u du = Σ_{t=1}^∞ (f(t)/t) z^(−t) = Z[f(t)/t].

Example 4.2.21. Compute Z[(−1)^(t−1)/t].

Solution. Using Example 4.2.8 and applying Theorem 4.2.20 (Integration of the Image) to the function f(t) = {0, t ≤ 0; (−1)^(t−1), t = 1, 2, . . .}, one obtains the transform

Z[(−1)^(t−1)/t] = ∫_z^∞ du/(u(u + 1)) = ln(u/(u + 1)) |_z^∞ = −ln(z/(z + 1)) = ln(1 + 1/z).
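The logarithmic image above is easy to confirm numerically, since the series converges quickly for |z| > 1. The Python sketch below (illustrative; `Ztrans` and z = 2 are assumptions) compares the partial sum with ln(1 + 1/z).

```python
import math

def Ztrans(f, z, N=200):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

z = 2.0
lhs = Ztrans(lambda t: (-1.0)**(t-1)/t if t >= 1 else 0.0, z)
rhs = math.log(1 + 1/z)                     # ln(1 + 1/z)
```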

In the previous chapter, the convolution of two continuous functions f and g was given by (f ∗ g)(t) = ∫_0^t f(x) g(t − x) dx (see (3.15iii)). This justifies the following notion.
Definition 4.2.22. The discrete convolution of two original functions f and g is the function denoted by f ∗ g and defined by

(f ∗ g)(t) = { 0, t < 0; Σ_{k=0}^{t} f(k) g(t − k), t = 0, 1, . . . }. (4.11)

Theorem 4.2.23 (Convolution). For |z| > max(Rf , Rg ),

Z[(f ∗ g)(t)] = F ∗ (z) · G∗ (z). (4.12)

Proof. First of all,

F*(z) · G*(z) = (Σ_{k=0}^∞ f(k) z^(−k)) (Σ_{l=0}^∞ g(l) z^(−l)).

Then, grouping the terms with k + l = t ⇔ l = t − k, one gets

Σ_{t=0}^∞ (Σ_{k=0}^{t} f(k) g(t − k)) z^(−t) = Σ_{t=0}^∞ (f ∗ g)(t) z^(−t) = Z[(f ∗ g)(t)].
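The Convolution theorem can be checked on two concrete exponential originals. The Python sketch below (illustrative; `Ztrans`, the sequences a^t and b^t, and the values a = 0.5, b = 0.3, z = 2 are arbitrary choices) compares Z[f ∗ g] with the product F*(z)·G*(z) and with the product of the closed forms.

```python
def Ztrans(f, z, N=400):
    # Truncated series sum_{t=0}^{N} f(t) z^(-t).
    return sum(f(t) * z**(-t) for t in range(N + 1))

a, b, z = 0.5, 0.3, 2.0
f = lambda t: a**t
g = lambda t: b**t
conv = lambda t: sum(f(k) * g(t - k) for k in range(t + 1))

lhs = Ztrans(conv, z)                       # Z[(f*g)(t)]
rhs = Ztrans(f, z) * Ztrans(g, z)           # F*(z) G*(z)
exact = z**2 / ((z - a) * (z - b))          # product of the closed forms
```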

Theorem 4.2.24 (Product).
Z[f(t) · g(t)] = (1/(2πi)) ∮_{|ζ|=r} F*(ζ) · G*(z/ζ) dζ/ζ, (4.13)

where R_f < r < |z|/R_g.
Proof. We get that (1/(2πi)) ∮_{|ζ|=r} F*(ζ) G*(z/ζ) dζ/ζ is the same as

(1/(2πi)) ∮_{|ζ|=r} (Σ_{t=0}^∞ f(t) ζ^(−t)) (Σ_{k=0}^∞ g(k) (z/ζ)^(−k)) dζ/ζ,

which is equal to Σ_{t=0}^∞ Σ_{k=0}^∞ f(t) g(k) z^(−k) · (1/(2πi)) ∮_{|ζ|=r} ζ^(k−t−1) dζ.

The last equality was obtained by integrating term by term the uniformly convergent series. By Cauchy's Fundamental Theorem ([26, Theorem 4.2.1]) and using the substitution ζ = re^(iθ), θ ∈ [0, 2π),

∮_{|ζ|=r} ζ^(k−t−1) dζ = { 0, if k − t − 1 ≠ −1 ⇔ k ≠ t; 2πi, if k − t − 1 = −1 ⇔ k = t }.

Hence, the last double series reduces to Σ_{t=0}^∞ f(t) g(t) z^(−t) = Z[f(t)g(t)].

By applying Residue Theorems ([26, Chapter 5]), if the function F*(ζ)/ζ has poles a_1, . . . , a_n, one obtains the following result.

Corollary 4.2.25.

Z[f(t)g(t)] = Σ_{j=1}^{n} res((1/ζ) F*(ζ) · G*(z/ζ), a_j).
 
Example 4.2.26. Compute Z[t e^(λt)].

Solution. From Example 4.1.6 and Example 4.2.15 one obtains, for 1 < r < |z|/e^(Re λ), the following:

Z[t e^(λt)] = (1/(2πi)) ∮_{|ζ|=r} (ζ/(ζ − 1)²) · ((z/ζ)/((z/ζ) − e^λ)) · dζ/ζ
= (1/(2πi)) ∮_{|ζ|=r} z/((ζ − 1)²(z − ζe^λ)) dζ
= res(z/((ζ − 1)²(z − ζe^λ)), 1)
= z lim_{ζ→1} (1/(z − ζe^λ))′ = z lim_{ζ→1} e^λ/(z − ζe^λ)²
= z e^λ/(z − e^λ)².

Notice that the inequalities that r verifies imply that 1 belongs to the disk |ζ| < r, while ζ = z/e^λ lies outside of it. This is why the residue at this second point is not taken into consideration.

The same result can be obtained using Theorem 4.2.3 (Similarity). We get

Z[t e^(λt)] = Z[t (e^λ)^t] = (z/e^λ)/((z/e^λ) − 1)² = z e^λ/(z − e^λ)².

One can also use Theorem 4.2.14 (Differentiation of the Image) and obtain the following identical result:

Z[t e^(λt)] = −z (z/(z − e^λ))′ = −z · ((z − e^λ) − z)/(z − e^λ)² = z e^λ/(z − e^λ)².
Theorem 4.2.27 (Initial Value).
f(0) = lim_{z→∞} F*(z). (4.14)

Proof. The function F*(z) = f(0) + f(1)/z + f(2)/z² + · · · + f(t)/z^t + · · · can be written as F*(z) = f(0) + G(z)/z, where G(z) = f(1) + f(2)/z + · · · . The function G(z) is analytic on |z| > R, as is F*(z). Hence, lim_{z→∞} G(z) exists, is finite and lim_{z→∞} G(z)/z = 0. In conclusion,

lim_{z→∞} F*(z) = f(0) + lim_{z→∞} G(z)/z = f(0).

Remark 4.2.28. In the same manner one can prove the following formulas, which together with (4.14) can be used to determine the original function f(t) when its transform F*(z) is known:

f(1) = lim_{z→∞} z(F*(z) − f(0)),
f(2) = lim_{z→∞} z²(F*(z) − f(0) − f(1)z^(−1)),
. . .
f(t) = lim_{z→∞} z^t (F*(z) − Σ_{k=0}^{t−1} f(k) z^(−k)), t = 1, 2, . . . . (4.15)

Theorem 4.2.29 (Final Value). If lim_{t→∞} f(t) exists, then

lim_{t→∞} f(t) = lim_{z→1, |z|>1} (z − 1)F*(z). (4.16)

Proof. By Theorem 4.2.13 (Difference), Z[∆f(t)] = (z − 1)F*(z) − zf(0). It follows that

lim_{z→1, |z|>1} (z − 1)F*(z) = lim_{z→1, |z|>1} (zf(0) + Z[∆f(t)])
= lim_{z→1, |z|>1} (zf(0) + Σ_{t=0}^∞ (f(t + 1) − f(t)) z^(−t))
= f(0) + Σ_{t=0}^∞ (f(t + 1) − f(t))
= lim_{t→∞} [f(0) + (f(1) − f(0)) + · · · + (f(t) − f(t − 1))]
= lim_{t→∞} f(t).

Bear in mind that, by definition, the sum of the series is equal to the limit of the sequence (S_t)_{t∈N} of its partial sums.

Example 4.2.30. For the transform F*(z) = z/(z − 1), it is obvious that lim_{z→1, |z|>1} ((z − 1)F*(z)) = lim_{z→1, |z|>1} z = 1. Actually, this is the Z transform of the unit function u(t) (see Example 4.1.5), for which lim_{t→∞} u(t) = 1.
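The Initial and Final Value theorems, together with formulas (4.15), can be checked numerically on the unit-step transform. The Python sketch below (illustrative; the limit points 10⁹, 10⁶ and 1 + 10⁻⁹ are arbitrary approximations of the limits) recovers f(0), f(1) and the final value of u(t) from F*(z) = z/(z − 1).

```python
F = lambda z: z / (z - 1)            # Z transform of the unit step u(t)

f0 = F(1e9)                          # f(0) = lim_{z->inf} F*(z)   (Theorem 4.2.27)
f1 = 1e6 * (F(1e6) - 1.0)            # f(1) via the first formula in (4.15)
z = 1.0 + 1e-9
final = (z - 1) * F(z)               # (z-1)F*(z) as z -> 1, |z| > 1 (Theorem 4.2.29)
```

All three quantities are numerically close to 1, in agreement with u(0) = u(1) = lim u(t) = 1.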

4.3 Determination of the Original


The theorems from the previous section allowed the computation of the
transforms of some original functions. Now we consider the inverse problem,
namely the determination of the original function f (t) for a given transform
F ∗ (z).

Method I
Theorem 4.3.1. If F*(z) is an analytic function on the domain |z| > R, then its original function exists, it is unique and it is given by the formula

f(t) = { 0, t < 0; (1/(2πi)) ∮_{|z|=r, r>R} F*(z) z^(t−1) dz, t = 0, 1, . . . }. (4.17)

Proof. The coefficients of the Laurent series of a function F(z) = Σ_{n=−∞}^∞ c_n (z − a)ⁿ are given by the formula c_n = (1/(2πi)) ∮_Γ F(z)/(z − a)^(n+1) dz.

Since F*(z) = Σ_{t=0}^∞ f(t) z^(−t), it follows that f(t) is the coefficient of the Laurent series with n = −t, a = 0 and f(t) = c_{−t}. Hence, f(t) is given by formula (4.17), in which the closed contour Γ is the circle of radius r centered at 0 and F(z) is replaced by F*(z).

Let a_1, . . . , a_j, . . . , a_n be the singular points of the function F*(z). Since F*(z) is analytic in the domain |z| > R (see Figure 4.6), these points belong to the disc |z| < R.
We use Residue Theorems ([26, Chapter 5]) and we obtain

∮_{|z|=r} F*(z) z^(t−1) dz = 2πi Σ_{j=1}^{n} res(F*(z) z^(t−1), a_j).

Figure 4.6: The Singular Points of F ∗ (z) Inside the Disk |z| < R

One gets by (4.17) the formula

f(t) = Σ_{j=1}^{n} res(F*(z) z^(t−1), a_j), t = 1, 2, . . . . (4.18)

In the case t = 0 this formula remains true if the numerator of F*(z) has a factor z; otherwise one adds in formula (4.18) the residue at z = 0.
Example 4.3.2. Determine the original of the function F*(z) = z/(z² − 1).

Solution. The function F*(z) = z/(z² − 1) is analytic on the domain |z| > 1. According to formulas (4.17) and (4.18), its original is

f(t) = (1/(2πi)) ∮_{|z|=r, r>1} (z/(z² − 1)) z^(t−1) dz = res(z^t/(z² − 1), 1) + res(z^t/(z² − 1), −1)
= z^t/(2z) |_{z=1} + z^t/(2z) |_{z=−1} = (1/2)(1 + (−1)^(t−1)).

Method II
Theorem 4.3.3. If F*(z) is an analytic function on the domain |z| > R, then its original function is given by the formula

f(t) = { 0, t < 0; (1/t!) (F*(1/z))^((t)) |_{z=0}, t = 0, 1, . . . }. (4.19)
Proof. If one replaces z with 1/z in the Laurent series expansion F*(z) = Σ_{t=0}^∞ f(t) z^(−t), then one obtains the Taylor series F*(1/z) = Σ_{t=0}^∞ f(t) z^t. Hence, f(t), t = 0, 1, . . . are the coefficients of the Taylor series of the function F*(1/z) (which is analytic on the disc |z| < 1/R).

If a function g(z) is analytic on a disc centered at a, it has the Taylor series expansion g(z) = Σ_{n=0}^∞ c_n (z − a)ⁿ, where c_n = g^((n))(a)/n!. By taking a = 0, g(z) = F*(1/z), n = t and f(t) = c_t, one gets formula (4.19).
 
Example 4.3.4. Compute the original of the function F*(z) = Ln(z/(z − 1)).

Solution. One gets F*(1/z) = Ln(1/(1 − z)) = −Ln(1 − z) and

F*(1/z) |_{z=0} = 0, (F*(1/z))′ = 1/(1 − z), . . . , (F*(1/z))^((t)) = (t − 1)!/(1 − z)^t;

hence, the original f is the function f(t) = 0, for t ≤ 0, and

f(t) = (1/t!) · (t − 1)!/(1 − z)^t |_{z=0} = 1/t, for t = 1, 2, . . . .

Method III
The original f (t) can be determined by formulas (4.14) and (4.15) (see
Remark 4.2.28).

Method IV
Theorem 4.3.5 (Recurrence Computation of the Original). If F*(z) is a rational function, i.e.

F*(z) = P(z)/Q(z) = (a_n zⁿ + a_{n−1} z^(n−1) + · · · + a_1 z + a_0)/(zⁿ + b_{n−1} z^(n−1) + · · · + b_1 z + b_0),

then the original is given by

f(t) = { 0, t < 0;
  a_n, t = 0;
  a_{n−t} − Σ_{j=0}^{t−1} b_{n−t+j} f(j), t = 1, . . . , n;
  −Σ_{j=t−n}^{t−1} b_{n+j−t} f(j), t ≥ n + 1 }. (4.20)

Proof. We denote f(t) by c_t and we obtain the Laurent series F*(z) = Σ_{t=0}^∞ c_t z^(−t). Then P(z) = Q(z) Σ_{t=0}^∞ c_t z^(−t), i.e.

a_n zⁿ + a_{n−1} z^(n−1) + · · · + a_1 z + a_0 = (zⁿ + b_{n−1} z^(n−1) + · · · + b_1 z + b_0)(c_0 + c_1 (1/z) + c_2 (1/z²) + · · · + c_t (1/z^t) + · · ·).

One obtains two equal Laurent series; hence, the coefficients of the same powers of z are equal. One gets

zⁿ: a_n = c_0,
z^(n−1): a_{n−1} = c_1 + b_{n−1} c_0,
. . .
z^(n−t): a_{n−t} = c_t + b_{n−1} c_{t−1} + · · · + b_{n−t} c_0, if 1 < t < n,
. . .
z⁰: a_0 = c_n + b_{n−1} c_{n−1} + · · · + b_0 c_0,
. . .
z^(−t): 0 = c_{t+n} + b_{n−1} c_{t+n−1} + · · · + b_0 c_t, if t > 0.

Hence, the sequence c_t can be determined by recurrence using the following formulas:

c_0 = a_n,
c_t = a_{n−t} − Σ_{j=0}^{t−1} b_{n−t+j} c_j, 1 ≤ t ≤ n,
c_{t+n} = −Σ_{j=t}^{t+n−1} b_{j−t} c_j, t > 0.

In the last equality we perform a change of the index of summation by replacing t + n with t and one obtains

c_t = −Σ_{j=t−n}^{t−1} b_{n+j−t} c_j, t > n.

Thus, we get (4.20) by replacing ct with f (t).
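The recurrence (4.20) translates directly into a short routine. The following Python sketch (an illustration, not the book's code; the function name `original_from_rational` and the coefficient layout are assumptions) implements it and is checked against Example 4.3.6 below.

```python
def original_from_rational(A, B, T):
    # A[k] = a_k: numerator coefficient of z^k (pad with zeros up to degree n).
    # B[k] = b_k: denominator coefficient of z^k for k = 0..n-1; the
    # denominator is monic of degree n, as in Theorem 4.3.5.
    n = len(A) - 1
    c = [A[n]]                                   # f(0) = a_n
    for t in range(1, T + 1):
        if t <= n:
            # f(t) = a_{n-t} - sum_{j=0}^{t-1} b_{n-t+j} f(j)
            c.append(A[n - t] - sum(B[n - t + j] * c[j] for j in range(t)))
        else:
            # f(t) = -sum_{j=t-n}^{t-1} b_{n+j-t} f(j)
            c.append(-sum(B[n + j - t] * c[j] for j in range(t - n, t)))
    return c

# F*(z) = (z^2 + 1)/(z^2 (z^2 - 3z + 2)) from Example 4.3.6:
A = [1, 0, 1, 0, 0]          # numerator z^2 + 1, padded to degree 4
B = [0, 0, 2, -3]            # denominator z^4 - 3z^3 + 2z^2
f = original_from_rational(A, B, 8)
```

The computed sequence matches the closed form f(t) = 5·2^(t−3) − 2 for t ≥ 3 obtained by the partial-fraction method.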

Method V
If F ∗ (z) is a rational function, one decomposes it in partial fractions and
expands them in Laurent series around the point at infinity (i.e. for |z| > R,
for a suitable R) using the geometric series. For other types of functions
F ∗ (z), the exponential series, binomial series etc. can be used.
Example 4.3.6. Determine the original of the function given by F*(z) = (z² + 1)/(z²(z² − 3z + 2)).
Solution. The function F*(z) is analytic on the domain |z| > 2 and has the following decomposition in partial fractions:

F*(z) = (3/4)(1/z) + (1/2)(1/z²) − 2 · 1/(z − 1) + (5/4) · 1/(z − 2).

For |z| > 2,

1/(z − 2) = (1/z) · 1/(1 − 2/z) = (1/z) Σ_{t=0}^∞ (2/z)^t = Σ_{t=0}^∞ 2^t z^(−t−1) = Σ_{t=1}^∞ 2^(t−1) z^(−t).

Analogously, 1/(z − 1) = Σ_{t=1}^∞ z^(−t). One obtains the expansion in Laurent series

F*(z) = 1/z² + Σ_{t=3}^∞ (5 · 2^(t−3) − 2) z^(−t);

hence, the original function is the following:

f(t) = 0, t ≤ 1; f(2) = 1; f(t) = 5 · 2^(t−3) − 2, t ≥ 3.

4.4 Applications of the Z Transform


4.4.1 Difference Equations
The discrete analogue of the differential equations is represented by the
difference equations. Consider the linear difference equation with constant

coefficients

a_n ∆ⁿy(t) + a_{n−1} ∆^(n−1)y(t) + · · · + a_1 ∆y(t) + a_0 y(t) = f(t), (4.21)

where a_i ∈ R, i ∈ {0, 1, . . . , n}, a_n ≠ 0, and the right-hand side f(t) and the unknown function y(t) are original functions.
One imposes the following initial conditions:

y(0) = y_0, ∆y(0) = y_1, . . . , ∆^(n−1)y(0) = y_{n−1}. (4.22)
According to Theorem 4.2.13 (Difference),

Z[∆²y(t)] = (z − 1)²Y*(z) − z[(z − 1)y_0 + ∆y(0)] = (z − 1)²Y*(z) − z[(z − 1)y_0 + y_1],
. . .
Z[∆ⁿy(t)] = (z − 1)ⁿY*(z) − z Σ_{i=0}^{n−1} (z − 1)^(n−i−1) ∆ⁱy(0) = (z − 1)ⁿY*(z) − z Σ_{i=0}^{n−1} (z − 1)^(n−i−1) y_i. (4.23)

One applies the Z transform to equation (4.21) and one replaces the corresponding images with those given by the formulas (4.23). The difference equation (4.21) becomes the algebraic equation

a_n[(z − 1)ⁿY*(z) − z Σ_{i=0}^{n−1} (z − 1)^(n−i−1) y_i] + · · · + a_1[(z − 1)Y*(z) − zy_0] + a_0 Y*(z) = F*(z).


This equation can be written as

C(z)Y*(z) − G(z) = F*(z), (4.24)

where we have denoted by C(z) the polynomial

C(z) = a_n(z − 1)ⁿ + a_{n−1}(z − 1)^(n−1) + · · · + a_2(z − 1)² + a_1(z − 1) + a_0,

which can be considered the characteristic polynomial of equation (4.21), and by G(z) the polynomial

G(z) = z[a_n Σ_{i=0}^{n−1} (z − 1)^(n−i−1) y_i + · · · + a_2((z − 1)y_0 + y_1) + a_1 y_0].

The solution of equation (4.24) is

Y*(z) = (F*(z) + G(z))/C(z),

and the original of this solution, y(t) = Z^(−1)[Y*(z)], is the solution of the initial value (Cauchy) problem (4.21), (4.22).
A second method of solving the initial value problem represented by the difference equation (4.21) and the initial conditions (4.22) is based upon Theorem 4.2.9 (Second Time Delay). Using the definition of the differences of order k ∈ {1, 2, . . . , n}, we have

∆y(t) = y(t + 1) − y(t),
∆²y(t) = y(t + 2) − 2y(t + 1) + y(t),
∆³y(t) = y(t + 3) − 3y(t + 2) + 3y(t + 1) − y(t),
. . .
∆ⁿy(t) = Σ_{k=0}^{n} (−1)^k C(n, k) y(t + n − k). (4.25)

By replacing these differences in equation (4.21), one obtains the equation

b_n y(t + n) + b_{n−1} y(t + n − 1) + · · · + b_1 y(t + 1) + b_0 y(t) = f(t), (4.26)

where

b_n = a_n,
b_{n−1} = a_{n−1} − C(n, 1) a_n,
. . .
b_2 = a_2 − C(3, 1) a_3 + · · · + (−1)^(n−2) C(n, n−2) a_n,
b_1 = a_1 − C(2, 1) a_2 + · · · + (−1)^(n−1) C(n, n−1) a_n,
b_0 = a_0 − C(1, 1) a_1 + · · · + (−1)ⁿ C(n, n) a_n.

According to Theorem 4.2.9 (Second Time Delay),

Z[y(t + 1)] = z[Y ∗ (z) − y(0)],


Z[y(t + 2)] = z 2 [Y ∗ (z) − y(0) − y(1)z −1 ],
..........................................,
Z[y(t + n)] = z n [Y ∗ (z) − y(0) − y(1)z −1 − · · · − y(n − 1)z −n+1 ].

By applying the Z transform to equation (4.26) and taking into account the initial conditions (4.22), equation (4.26) is transformed into the algebraic equation

b_n zⁿ[Y*(z) − y(0) − · · · − y(n − 1) z^(−n+1)] + · · · + b_1 z[Y*(z) − y(0)] + b_0 Y*(z) = F*(z). (4.27)

We denote by C*(z) the characteristic polynomial of equation (4.26). Thus,

C*(z) = b_n zⁿ + b_{n−1} z^(n−1) + · · · + b_2 z² + b_1 z + b_0,

and by H(z) the polynomial

H(z) = y_0(b_n zⁿ + b_{n−1} z^(n−1) + · · · + b_2 z² + b_1 z) + y_1(b_n z^(n−1) + b_{n−1} z^(n−2) + · · · + b_2 z) + · · · + y_{n−1} b_n z.

Equation (4.27) becomes

C*(z)Y*(z) − H(z) = F*(z),

with the solution given by

Y*(z) = (F*(z) + H(z))/C*(z)

and the solution of the initial value problem (4.26), (4.22) is

y(t) = Z^{−1}[(F*(z) + H(z))/C*(z)].

Example 4.4.1. Solve the initial value problem Δ²y(t) − 5Δy(t) + 6y(t) = 0,
y(0) = 1, y(1) = 3.
Solution. Using formulas (4.25), one writes the equation in the form

y(t + 2) − 2y(t + 1) + y(t) − 5y(t + 1) + 5y(t) + 6y(t) = 0.

Hence, y(t + 2) − 7y(t + 1) + 12y(t) = 0. One applies the Z transform (and
Theorem 4.2.9 (Second Time Delay)) and one obtains the algebraic equation

z^2[Y*(z) − y(0) − y(1)z^{−1}] − 7z(Y*(z) − y(0)) + 12Y*(z) = 0.

This is equivalent to

z^2[Y*(z) − 1 − 3z^{−1}] − 7z(Y*(z) − 1) + 12Y*(z) = 0,

which can be written as

Y*(z)(z^2 − 7z + 12) = z^2 − 4z ⇔ Y*(z)(z − 3)(z − 4) = z(z − 4),

with the solution Y*(z) = z/(z − 3). By applying the geometric series, one
obtains, for |z| > 3 (i.e. for |3/z| < 1), the expansion

Y*(z) = 1/(1 − 3/z) = Σ_{t=0}^{∞} 3^t z^{−t}.

Hence, the solution of the given initial value problem is the following:

y(t) = 0 for t < 0 and y(t) = 3^t for t = 0, 1, . . . .
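The closed form above is easy to sanity-check numerically; the following short Python sketch (illustrative, not part of the original text) verifies that y(t) = 3^t satisfies the equivalent recurrence y(t + 2) − 7y(t + 1) + 12y(t) = 0 together with the initial conditions of Example 4.4.1.

```python
# Numerical check of Example 4.4.1: y(t) = 3**t should satisfy
# y(t+2) - 7*y(t+1) + 12*y(t) = 0 with y(0) = 1 and y(1) = 3.
def y(t):
    return 3**t

assert y(0) == 1 and y(1) == 3
for t in range(20):
    assert y(t + 2) - 7 * y(t + 1) + 12 * y(t) == 0
print("Example 4.4.1 closed form verified")
```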

4.4.2 Discrete-time Control Systems


A discrete-time control system Σ has a finite number of input terminals,
a finite number of output terminals and a finite number of primitive compo-
nents (see [23]). It has the black box representation (see Figure 4.7).
The time set is T = Z, hence t ∈ Z. The quantities uj (t), j = 1, m, and
yi (t), i = 1, p, are called input variables and output variables, respectively.
They belong to a field K. Usually, K is one of the fields R, C or the Galois
field of characteristic p (with p ∈ N a prime number), GF(p) = {0, 1, ..., p − 1},
with the standard addition and multiplication modulo p.

Figure 4.7: Black Box Representation of a Discrete-Time Control System

The system Σ is said to be linear if it has linear primitive components.


These components are:

1. summators
A summator (or adder ), represented in Figure 4.8, has m inputs and
one output. These variables verify the input-output map y(t) = u1 (t)+
u2 (t) + · · · + um (t);

Figure 4.8: A Summator

2. amplifiers, multipliers or scalars


An amplifier, represented in Figure 4.9, has one input, one output and
the input-output map y(t) = a(t)u(t). The quantity a(t) ∈ K is called
gain. The system Σ is said to be time-invariant if all its gains are
constant, in other words if a(t) = a ∈ K, ∀t ∈ Z;

Figure 4.9: An Amplifier

3. delayers
A delayer , represented in Figure 4.10, has one input, one output and
the input-output map y(t + 1) = u(t). If a system Σ has n delayers,
one associates to the system n state variables xi (t), where xi (t) is the
output variable of the ith delayer at the moment t.

Figure 4.10: A Delayer

We denote by aij (t), bij (t), cij (t) and dij (t) the gains on the following
connections:

• aij (t) - connection between the delayer j and the delayer i, i, j = 1, n;

• bij (t) - between the input j and the delayer i, j = 1, m, i = 1, n;

• cij (t) - between the delayer j and the output i, j = 1, n, i = 1, p;

• dij (t) - between the input j and the output i, j = 1, m, i = 1, p,

respectively.
The scheme of a linear system Σ is represented in Figure 4.11.

Figure 4.11: The Scheme of a Linear System

At moment t, the signal from the output of the summator connected with
the ith delayer is xi (t + 1) (the signal which will be at the output of the delayer
i at moment t + 1). It is equal to the sum of the signals which come from
the inputs j, j = 1, m, and the delayers j, j = 1, n. One obtains the state
equations of the system Σ, namely
x_i(t + 1) = Σ_{j=1}^{n} a_{ij}(t)x_j(t) + Σ_{j=1}^{m} b_{ij}(t)u_j(t),  i = 1, n.    (4.28)

Analogously, by the examination of the input and output signals at the sum-
mator of the output terminal i, one obtains the output equations of the system
Σ, namely
y_i(t) = Σ_{j=1}^{n} c_{ij}(t)x_j(t) + Σ_{j=1}^{m} d_{ij}(t)u_j(t),  i = 1, p.    (4.29)

One calls the vectors x(t) = (x_1(t), ..., x_n(t))^T, u(t) = (u_1(t), ..., u_m(t))^T,
y(t) = (y_1(t), ..., y_p(t))^T the state, the input (or the control ) and the output
of the system Σ at the moment t, respectively.
One denotes by A(t), B(t), C(t) and D(t) the n × n, n × m, p × n, p × m
matrices which have the elements a_{ij}(t), b_{ij}(t), c_{ij}(t) and d_{ij}(t), respectively.
Then the equations (4.28) and (4.29) can be written as

Σ:  x(t + 1) = A(t)x(t) + B(t)u(t),    (4.30)
    y(t) = C(t)x(t) + D(t)u(t).    (4.31)

Equations (4.30) and (4.31) form the so-called state-space representation of
the discrete-time system Σ. Such discrete-time systems can appear by the
discretization of continuous-time systems, which have the state equation
ẋ(t) = A(t)x(t) + B(t)u(t), and have numerous applications in engineer-
ing (signal analysis and processing, coding theory etc.), economics, ecology,
medicine etc.
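The state and output equations can be stepped forward directly. The following Python sketch (an illustration with made-up matrices, not a system from the book) simulates a time-invariant 2-state, 1-input, 1-output system under a constant input:

```python
# Simulating x(t+1) = A x(t) + B u(t), y(t) = C x(t) + D u(t)
# for a hypothetical 2-state system over K = R, with x(0) = 0
# and a constant (step) input u(t) = 1.
A = [[0.5, 1.0], [0.0, 0.5]]
B = [[1.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]

def matvec(M, v):
    # matrix-vector product with plain lists
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

x = [0.0, 0.0]          # initial state x(0)
u = [1.0]               # constant input
outputs = []
for t in range(5):
    y = vadd(matvec(C, x), matvec(D, u))   # output equation (4.31)
    outputs.append(y[0])
    x = vadd(matvec(A, x), matvec(B, u))   # state update (4.30)
print(outputs)
```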
Consider now time-invariant systems. In this case, A, B, C and D are
constant matrices (with elements from the field K) and the initial moment is
t = 0. The Z transform of a vector function x(t) is defined to be the vector
whose components are the Z transforms of the components of x(t). So,

Z[x(t)] = (Z[x_1(t)], . . . , Z[x_n(t)])^T.

We make the following notations:

X*(z) = Z[x(t)], U*(z) = Z[u(t)], Y*(z) = Z[y(t)].
Using Theorem 4.2.1 (Linearity) and Theorem 4.2.9 (Second Time Delay)
and applying the Z transform to equations (4.30) and (4.31), one gets

z(X*(z) − x(0)) = AX*(z) + BU*(z),    (4.32)
Y*(z) = CX*(z) + DU*(z).    (4.33)

Equation (4.32) can be written as (zI − A)X*(z) = BU*(z) + zx(0). By left
multiplication with (zI − A)^{−1}, for z ∈ C \ σ(A), one obtains

X*(z) = (zI − A)^{−1}BU*(z) + z(zI − A)^{−1}x(0).

Replacing X*(z) in (4.33), one gets the so called input-output map of the
system Σ in the frequency domain

Y*(z) = [C(zI − A)^{−1}B + D] U*(z) + zC(zI − A)^{−1}x(0).
For the initial state x(0) = 0, this map has the form

Y ∗ (z) = T (z)U ∗ (z),

where T(z) = C(zI − A)^{−1}B + D. The matrix T (z) is called the transfer
matrix of the system Σ and plays an important role in systems and control
theory.
Notice that the transfer matrix formulas of continuous-time and discrete-time
systems coincide, the only difference being that the complex variables are s and z,
respectively.
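As a small numerical sketch (the 2-state matrices below are hypothetical, not taken from the book), the transfer matrix T(z) = C(zI − A)^{−1}B + D can be evaluated at a chosen point z:

```python
# Evaluating T(z) = C (zI - A)^{-1} B + D at z = 2 for an assumed
# 2-state, 1-input, 1-output example; T(z) is then a 1x1 matrix.
A = [[0.5, 1.0], [0.0, 0.5]]
B = [[1.0], [1.0]]
C = [[1.0, 0.0]]
D = [[0.0]]
z = 2.0

# zI - A for the 2x2 case, inverted via the adjugate formula
M = [[z - A[0][0], -A[0][1]], [-A[1][0], z - A[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
Minv = [[M[1][1] / det, -M[0][1] / det],
        [-M[1][0] / det, M[0][0] / det]]

CMinv = [sum(C[0][k] * Minv[k][j] for k in range(2)) for j in range(2)]
T = sum(CMinv[k] * B[k][0] for k in range(2)) + D[0][0]
print(T)
```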

4.5 Exercises
E 25. Determine Z[f (t)](z) for the following original functions:
a) f(t) = 2^t − t, t ∈ N;
b) f(t) = sinh t + 2 sin(πt/4), t ∈ N;
c) f(t) = t^2 − cosh ωt, ω > 0, t ∈ N;
d) f(t) = te^t − 2 · 4^t cos(πt), t ∈ N.

Solution. a) Using Theorem 4.2.1 (Linearity), Theorem 4.2.14 (Differen-
tiation of the Image) and Examples 4.1.6 and 4.1.5, one obtains

Z[f(t)](z) = Z[2^t − t](z) = Z[2^t](z) − Z[t](z) = z/(z − 2) − Z[t · u(t)](z)
= z/(z − 2) − (−z)(Z[1](z))′ = z/(z − 2) + z(z/(z − 1))′
= z/(z − 2) + z · (−1)/(z − 1)^2 = z/(z − 2) − z/(z − 1)^2;

b) Using Theorem 4.2.1 (Linearity) and Example 4.1.6, one gets

Z[f(t)](z) = Z[sinh t + 2 sin(πt/4)](z) = Z[sinh t](z) + 2Z[sin(πt/4)](z)
= Z[(e^t − e^{−t})/2](z) + 2Z[(e^{πit/4} − e^{−πit/4})/(2i)](z)
= (1/2)Z[e^t](z) − (1/2)Z[e^{−t}](z) + (1/i)Z[e^{πit/4}](z) − (1/i)Z[e^{−πit/4}](z)
= (1/2) · z/(z − e) − (1/2) · z/(z − e^{−1}) + (1/i) · z/(z − e^{πi/4}) − (1/i) · z/(z − e^{−πi/4})
= (z/2) · (e − e^{−1})/(z^2 − z(e + e^{−1}) + 1) + (z/i) · (e^{πi/4} − e^{−πi/4})/(z^2 − z(e^{πi/4} + e^{−πi/4}) + 1).

Since e − e^{−1} = 2 sinh 1, e + e^{−1} = 2 cosh 1, e^{πi/4} = cos(π/4) + i sin(π/4) = √2/2 + i√2/2
and e^{−πi/4} = cos(π/4) − i sin(π/4) = √2/2 − i√2/2, it follows that

Z[f(t)](z) = (z sinh 1)/(z^2 − 2z cosh 1 + 1) + (√2 z)/(z^2 − √2 z + 1).

We can also use Example 4.2.2 directly;
c) Using Theorem 4.2.1 (Linearity), Theorem 4.2.14 (Differentiation of
the Image) and Examples 4.1.6 and 4.1.5, one obtains

Z[f(t)](z) = Z[t^2 − cosh ωt](z) = Z[t^2](z) − Z[cosh ωt](z)
= (−z) · (Z[t](z))′ − Z[(e^{ωt} + e^{−ωt})/2](z)
= (−z) · (z/(z − 1)^2)′ − (1/2)(Z[e^{ωt}](z) + Z[e^{−ωt}](z)).

Thus,

Z[f(t)](z) = z(z + 1)/(z − 1)^3 − (1/2)(z/(z − e^ω) + z/(z − e^{−ω}))
= z(z + 1)/(z − 1)^3 − (1/2) · z(2z − e^ω − e^{−ω})/((z − e^ω)(z − e^{−ω}))
= z(z + 1)/(z − 1)^3 − z(z − cosh ω)/(z^2 − 2z cosh ω + 1);

d) Using Theorem 4.2.1 (Linearity), Theorem 4.2.14 (Differentiation of
the Image), Theorem 4.2.3 (Similarity) and Examples 4.1.6 and 4.2.8, one
obtains

Z[f(t)](z) = Z[te^t − 2 · 4^t cos(πt)](z) = Z[te^t](z) − 2Z[4^t cos(πt)](z)
= (−z) · (Z[e^t](z))′ − 2Z[cos(πt)](z/4)
= (−z) · (z/(z − e))′ − 2Z[(−1)^t](z/4)
= ze/(z − e)^2 − 2 · (z/4)/(z/4 + 1) = ze/(z − e)^2 − 2z/(z + 4).
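The transform found in part a) can be checked against the defining series Σ_{t≥0} f(t)z^{−t}; the Python sketch below (illustrative, not from the book) compares a long partial sum with the closed form at a point of convergence |z| > 2.

```python
# For |z| > 2 the series sum of (2**t - t) * z**-t should approach
# z/(z-2) - z/(z-1)**2, the transform found in E 25 a).
z = 5.0
partial = sum((2**t - t) * z**(-t) for t in range(200))
closed = z / (z - 2) - z / (z - 1) ** 2
print(partial, closed)
assert abs(partial - closed) < 1e-9
```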

W 24. Determine Z[f (t)](z) for the following original functions:
a) f(t) = t^2 e^{αt}, t ∈ N;
b) f(t) = (t + 2) cosh ωt, t ∈ N;
c) f(t) = ta^{t−1} − 4t sin(πt/6), t ∈ N*;
d) f(t) = 1^2 + 2^2 + · · · + t^2, t ∈ N*;
e) f is a periodic function of period T , T ∈ N.
Answer. Throughout these exercises, one employs again Theorem 4.2.1
(Linearity), Theorem 4.2.14 (Differentiation of the Image), and the examples
used in E. 25.
a) One gets

Z[f(t)](z) = Z[t · (te^{αt})](z) = (−z) · (Z[te^{αt}](z))′
= (−z) · ((−z)(Z[e^{αt}](z))′)′ = z(z + e^α)e^α/(z − e^α)^3;

b) One obtains

Z[f(t)](z) = Z[t · cosh ωt](z) + 2Z[cosh ωt](z)
= −z(z(z − cosh ω)/(z^2 − 2z cosh ω + 1))′ + 2 · z(z − cosh ω)/(z^2 − 2z cosh ω + 1)
= (z^3 cosh ω − 2z^2 + z cosh ω)/(z^2 − 2z cosh ω + 1)^2 + 2 · z(z − cosh ω)/(z^2 − 2z cosh ω + 1);

c) One gets

Z[f(t)](z) = z/(z − a)^2 − 4 · z(z^2 − 1) sin(π/6)/(z^2 − 2z cos(π/6) + 1)^2
= z/(z − a)^2 − 2z(z^2 − 1)/(z^2 − √3 z + 1)^2;

d) One obtains

Z[f(t)](z) = Z[t(t + 1)(2t + 1)/6](z) = (1/6) · (2Z[t^3](z) + 3Z[t^2](z) + Z[t](z)).

Since (see Example 4.2.15)

• Z[t](z) = Z[t · u(t)](z) = −z(z/(z − 1))′ = z/(z − 1)^2,

• Z[t^2](z) = Z[t · t](z) = −z(z/(z − 1)^2)′ = z(z + 1)/(z − 1)^3,

• Z[t^3](z) = Z[t · t^2](z) = −z(z(z + 1)/(z − 1)^3)′ = z(z^2 + 4z + 1)/(z − 1)^4,

it follows that

Z[f(t)](z) = (1/3) · z(z^2 + 4z + 1)/(z − 1)^4 + (1/2) · z(z + 1)/(z − 1)^3 + (1/6) · z/(z − 1)^2;

e) One gets

Z[f(t)](z) = Σ_{t=0}^{∞} f(t)z^{−t}
= f(0) + f(1)/z + · · · + f(T − 1)/z^{T−1} + f(0)/z^T + · · · + f(T − 1)/z^{2T−1} + · · · + f(0)/z^{kT} + · · · + f(T − 1)/z^{kT+T−1} + · · ·
= (f(0) + · · · + f(T − 1)/z^{T−1}) · (1 + 1/z^T + · · · + 1/z^{kT} + · · ·)
= (f(0) + f(1)/z + · · · + f(T − 1)/z^{T−1}) · z^T/(z^T − 1).
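The periodic-sequence formula from part e) can be sanity-checked numerically; in the Python sketch below (illustrative; the sample values are made up) a period-3 sequence is summed directly and compared with the closed form.

```python
# For a period-3 sequence, the formula from W 24 e) gives
# Z[f](z) = (f(0) + f(1)/z + f(2)/z**2) * z**3/(z**3 - 1) for |z| > 1.
vals = [2.0, -1.0, 4.0]          # assumed sample values f(0), f(1), f(2)
z = 2.0
series = sum(vals[t % 3] * z**(-t) for t in range(400))
closed = (vals[0] + vals[1] / z + vals[2] / z**2) * z**3 / (z**3 - 1)
assert abs(series - closed) < 1e-9
print("periodic-sequence transform verified")
```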

E 26. Determine the original function of the following Z transforms:
a) F*(z) = z/(z^2 − 1);
b) F*(z) = z/(z − 1)^3;
c) F*(z) = (z − 1)/(z^2 + 4).
Solution. In all exercises for which the Z transform F*(z) is given, we
determine the original function f (t) following the steps:

1. Take G*(z) = F*(z)z^{t−1}, t ∈ N, z ∈ C;

2. Find the isolated singular points of G*(z) and label them z_1, z_2, . . . , z_n;

3. Compute the residues of G*(z) at the points z_1, z_2, . . . , z_n;

4. Conclude that f(t) = Σ_{j=1}^{n} res(G*(z), z_j), t = 0, 1, . . ..

a)

1. G*(z) = z^t/(z^2 − 1), t ∈ N, z ∈ C;

2. z^2 − 1 = 0 ⇒ z = ±1 are isolated singular points, both poles of order 1;

3. One obtains res(G*(z), 1) = lim_{z→1} z^t/(z + 1) = 1/2 and
res(G*(z), −1) = lim_{z→−1} z^t/(z − 1) = (−1)^t/(−2);

4. f(t) = 1/2 − (−1)^t/2, t ∈ N;

b)

1. G*(z) = z^t/(z − 1)^3, t ∈ N, z ∈ C;

2. (z − 1)^3 = 0 ⇒ z = 1 is the only isolated singular point, a pole of order 3;

3. One gets res(G*(z), 1) = (1/2!) · lim_{z→1} ((z − 1)^3 · z^t/(z − 1)^3)′′ = t(t − 1)/2;

4. f(t) = t(t − 1)/2, t ∈ N;

c)

1. G*(z) = (z − 1)z^{t−1}/(z^2 + 4), t ∈ N, z ∈ C.
When we try to find the isolated singular points, since t = 0, 1, . . ., we
identify two cases, namely t = 0 and t = 1, 2, . . ..

2. (a) For t = 0, G*(z) = (z − 1)/(z(z^2 + 4)). Thus, 0, ±2i are the isolated singular
points of G*(z), all of them being poles of order 1;

3. (a) One obtains res(G*(z), 0) = lim_{z→0} (z − 1)/(z^2 + 4) = −1/4,
res(G*(z), 2i) = lim_{z→2i} (z − 1)/(z(z + 2i)) = (2i − 1)/(−8) and
res(G*(z), −2i) = (−2i − 1)/(−8);

4. (a) f(0) = −1/4 + (2i − 1)/(−8) + (2i + 1)/8 = 0;

2. (b) For t ≥ 1, G*(z) = (z − 1)z^{t−1}/(z^2 + 4). Thus, ±2i are the isolated singular
points of G*(z), both poles of order 1;

3. (b) One obtains res(G*(z), 2i) = lim_{z→2i} (z − 1)z^{t−1}/(z + 2i) = (2i − 1)(2i)^{t−1}/(4i)
= (2i − 1)(2i)^{t−2}/2 and res(G*(z), −2i) = (−2i − 1)(−2i)^{t−2}/2;

4. (b) f(t) = (2i − 1)(2i)^{t−2}/2 + (−2i − 1)(−2i)^{t−2}/2. Since i^{4n} = 1, n ∈ Z,
we have 4 cases:

• t = 4k + 1. We get that
f(t) = (2i − 1)(2i)^{4k−1}/2 + (−2i − 1)(−2i)^{4k−1}/2
= (2i − 1)2^{4k}/(4i) − (−2i − 1)2^{4k}/(4i) = 2^{4k}(2i − 1 + 2i + 1)/(4i) = 2^{4k};

• t = 4k + 2. We get that
f(t) = (2i − 1)(2i)^{4k}/2 + (−2i − 1)(−2i)^{4k}/2 = −2^{4k};

• t = 4k + 3. We get that
f(t) = (2i − 1)(2i)^{4k+1}/2 + (−2i − 1)(−2i)^{4k+1}/2 = −2^{4k+2};

• t = 4k + 4. We get that
f(t) = (2i − 1)(2i)^{4k+2}/2 + (−2i − 1)(−2i)^{4k+2}/2 = 2^{4k+2}.

From all cases above (for k ∈ N), it follows that

f(t) = 0 for t = 0, f(t) = 2^{4k} for t = 4k + 1, f(t) = −2^{4k} for t = 4k + 2,
f(t) = −2^{4k+2} for t = 4k + 3, f(t) = 2^{4k+2} for t = 4k + 4.
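The inversion in part c) can be confirmed by summing the recovered sequence back into a series; this Python sketch (illustrative, not from the book) checks it against F*(z) = (z − 1)/(z² + 4) at a point with |z| > 2.

```python
# The sequence recovered in E 26 c), summed as f(t) * z**-t,
# should reproduce F*(z) = (z - 1)/(z**2 + 4) for |z| > 2.
def f(t):
    if t == 0:
        return 0
    k, r = divmod(t - 1, 4)          # t = 4k + r + 1, r in {0, 1, 2, 3}
    return [16**k, -(16**k), -4 * 16**k, 4 * 16**k][r]

z = 3.0
series = sum(f(t) * z**(-t) for t in range(120))
closed = (z - 1) / (z**2 + 4)
assert abs(series - closed) < 1e-9
print("E 26 c) inversion verified")
```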

W 25. Determine the original function of the following Z transforms:
a) F*(z) = z/((z − 1)^2(z − e));
b) F*(z) = z^2/(z^4 − 1);
c) F*(z) = z(z + 1)/(z − 1)^4;
d) F*(z) = (z + 5)/(z^2 + 3z + 2);
e) F*(z) = z^2/(z^2 + 1)^2;
f) F*(z) = e^{a/z}, a ∈ R*.

Answer. a) We get G*(z) = z^t/((z − 1)^2(z − e)) and

f(t) = res(G*(z), 1) + res(G*(z), e) = (e^t − t(e − 1) − 1)/(e − 1)^2, t ∈ N;

b) We get G*(z) = z^{t+1}/(z^4 − 1) and

f(t) = res(G*(z), 1) + res(G*(z), −1) + res(G*(z), i) + res(G*(z), −i)
= (1/4)(1 − (−1)^{t+1} − i^t − (−i)^t)
= 0 for t = 4k, 0 for t = 4k + 1, 1 for t = 4k + 2, 0 for t = 4k + 3, k ∈ N;

c) f(t) = (1/6) · t(t − 1)(2t − 1), t ∈ N*;

d) For t = 0 we obtain G*(z) = (z + 5)/(z(z + 1)(z + 2)). Thus,

f(0) = res(G*(z), 0) + res(G*(z), −1) + res(G*(z), −2) = 0.

For t ≥ 1 we obtain G*(z) = z^{t−1}(z + 5)/((z + 1)(z + 2)). Thus,

f(t) = res(G*(z), −1) + res(G*(z), −2) = 4 · (−1)^{t−1} − 3 · (−2)^{t−1}.

In conclusion, f(t) = 0 for t = 0 and f(t) = 4 · (−1)^{t−1} − 3 · (−2)^{t−1} for t ∈ N*;

e) f(t) = −(t/4)(i^t + (−i)^t)
= −2k for t = 4k, 0 for t = 4k + 1, 2k + 1 for t = 4k + 2, 0 for t = 4k + 3, k ∈ N;

f) We get G*(z) = z^{t−1} · e^{a/z}, t ∈ N. Since z = 0 is an essential
singular point and the Laurent expansion of G*(z) is

z^{t−1} + (a/1!)z^{t−2} + (a^2/2!)z^{t−3} + · · · + (a^t/t!)z^{−1} + · · · ,

it follows that res(G*(z), 0) = a^t/t! and f(t) = a^t/t!, t ∈ N.
t! t! t!
E 27. Find the solution y(t) of the following homogeneous difference equa-
tions:
a) 3Δ²y(t) − 4Δy(t) − 4y(t) = 0, y(0) = 0, y(1) = 2;
b) Δ²y(t) − 4y(t) = 0, y(0) = 1, y(1) = 3;
c) Δ³y(t) + Δ²y(t) + Δy(t) − 3y(t) = 0, y(0) = 0, y(1) = 1, y(2) = 0.
Solution. We find the solution using the Z transform in the following
manner:

1. Replace into the equation the explicit form of the differences Δy(t),
Δ²y(t), Δ³y(t) (see formulas (4.25)). Obtain the equivalent form,
namely an equation of the type

A_n y(t + n) + A_{n−1} y(t + n − 1) + · · · + A_0 y(t) = f(t),  A_i ∈ R;

2. Apply the Z transform to the above equation, use Theorem 4.2.9 (Sec-
ond Time Delay) and determine Z[y(t)](z);

3. Find the original y(t), t = n, n + 1, n + 2, . . ..
a)

1. Since Δy(t) = y(t + 1) − y(t) and Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes

3(y(t + 2) − 2y(t + 1) + y(t)) − 4(y(t + 1) − y(t)) − 4y(t) = 0.

This is equivalent to 3y(t + 2) − 10y(t + 1) + 3y(t) = 0;

2. Now we apply the Z transform and we obtain

Z[3y(t + 2) − 10y(t + 1) + 3y(t)](z) = Z[0](z).

Our goal is to determine Z[y(t)](z). Using Theorem 4.2.1 (Linearity)
and Theorem 4.2.9 (Second Time Delay), we get

3z^2(Z[y(t)](z) − y(0) − y(1)/z) − 10z(Z[y(t)](z) − y(0)) + 3Z[y(t)](z)

is equal to 0. Employing the initial conditions, one obtains

3z^2(Z[y(t)](z) − 0 − 2/z) − 10z(Z[y(t)](z) − 0) + 3Z[y(t)](z) = 0 ⇔

3z^2 Z[y(t)](z) − 6z − 10zZ[y(t)](z) + 3Z[y(t)](z) = 0 ⇔

(3z^2 − 10z + 3)Z[y(t)](z) = 6z,

so Z[y(t)](z) = 6z/(3z^2 − 10z + 3).

3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:

3.1 Take G*(z) = 6z/(3z^2 − 10z + 3) · z^{t−1}, t = 2, 3, . . . and z ∈ C; so,
G*(z) = 6z^t/(3z^2 − 10z + 3), t = 2, 3, . . . and z ∈ C;

3.2 Find the isolated singular points of G*(z). The roots of the equa-
tion 3z^2 − 10z + 3 = 0 are z_1 = 3 and z_2 = 1/3, both poles of order 1;

3.3 We compute the residues of G*(z) at z_1 and z_2. We obtain

res(G*(z), 3) = lim_{z→3} 6z^t/(3(z − 1/3)) = (3/4) · 3^t

and

res(G*(z), 1/3) = lim_{z→1/3} 6z^t/(3(z − 3)) = (−3/4) · (1/3)^t;

3.4 We conclude that y(t) = Σ_{j=1}^{2} res(G*(z), z_j), t = 2, 3, . . ., so

y(t) = (3/4) · 3^t + (−3/4) · (1/3)^t = (3/4)(3^t − (1/3)^t), t = 2, 3, . . . ;
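A quick numerical check of part a) with exact rational arithmetic (a Python sketch, not from the book):

```python
# The closed form found in E 27 a), y(t) = (3/4)*(3**t - 3**-t),
# should satisfy 3y(t+2) - 10y(t+1) + 3y(t) = 0 with y(0) = 0, y(1) = 2.
from fractions import Fraction

def y(t):
    return Fraction(3, 4) * (Fraction(3)**t - Fraction(3)**(-t))

assert y(0) == 0 and y(1) == 2
for t in range(15):
    assert 3 * y(t + 2) - 10 * y(t + 1) + 3 * y(t) == 0
print("E 27 a) verified")
```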

b)

1. Since Δy(t) = y(t + 1) − y(t) and Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes y(t + 2) − 2y(t + 1) − 3y(t) = 0;

2. Now we apply the Z transform and using Theorem 4.2.1 (Linearity),
we get

Z[y(t + 2)](z) − 2Z[y(t + 1)](z) − 3Z[y(t)](z) = 0.

From Theorem 4.2.9 (Second Time Delay),

z^2(Z[y(t)](z) − y(0) − y(1)/z) − 2z(Z[y(t)](z) − y(0)) − 3Z[y(t)](z) = 0

and making use of the initial conditions, one gets Z[y(t)](z) = z/(z − 3).

3. In this case it is not necessary to use residues for finding the original
since it is obvious that y(t) = 3^t, t = 2, 3, . . .;

c)

1. Since Δy(t) = y(t + 1) − y(t), Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t) and

Δ³y(t) = Σ_{k=0}^{3} (−1)^k \binom{3}{k} y(t + 3 − k) = y(t + 3) − 3y(t + 2) + 3y(t + 1) − y(t),

the equation becomes y(t + 3) − 2y(t + 2) + 2y(t + 1) − 4y(t) = 0;

2. As we did in the previous exercise, applying the Z transform and then
using Theorem 4.2.1 (Linearity) and afterward Theorem 4.2.9 (Second
Time Delay), we get

Z[y(t + 3)](z) − 2Z[y(t + 2)](z) + 2Z[y(t + 1)](z) − 4Z[y(t)](z) = 0 ⇒

z^3(Z[y(t)](z) − y(0) − y(1)/z − y(2)/z^2) − 2z^2(Z[y(t)](z) − y(0) − y(1)/z)
+ 2z(Z[y(t)](z) − y(0)) − 4Z[y(t)](z) = 0.

Employing the initial conditions, one obtains

(z^3 − 2z^2 + 2z − 4)Z[y(t)](z) = z^2 − 2z ⇔

Z[y(t)](z) = z(z − 2)/(z^3 − 2z^2 + 2z − 4) = z(z − 2)/(z^2(z − 2) + 2(z − 2)) = z/(z^2 + 2);

3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:

3.1 Take G*(z) = z^t/(z^2 + 2), t = 3, 4, . . ., z ∈ C;

3.2 Find the isolated singular points of G*(z). The roots of the equa-
tion z^2 + 2 = 0 are z_1 = i√2 and z_2 = −i√2, both poles of order 1;

3.3 We compute the residues of G*(z) at z_1 and z_2. We obtain

res(G*(z), i√2) = lim_{z→i√2} z^t/(z + i√2) = (i√2)^{t−1}/2

and

res(G*(z), −i√2) = lim_{z→−i√2} z^t/(z − i√2) = (−i√2)^{t−1}/2;

3.4 We conclude that y(t) = (i√2)^{t−1}/2 + (−i√2)^{t−1}/2, t = 3, 4, . . ., so

y(t) = 2^{2k} for t = 4k + 1, y(t) = 0 for t = 4k + 2,
y(t) = −2^{2k+1} for t = 4k + 3, y(t) = 0 for t = 4k + 4,  k ∈ N.
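The case formula from part c) can be verified against the equivalent recurrence; the following Python sketch is illustrative and not part of the original text.

```python
# The case formula from E 27 c) should satisfy
# y(t+3) - 2y(t+2) + 2y(t+1) - 4y(t) = 0.
def y(t):
    k, r = divmod(t - 1, 4)      # t = 4k + r + 1, r in {0, 1, 2, 3}
    return [4**k, 0, -2 * 4**k, 0][r]

for t in range(3, 30):
    assert y(t + 3) - 2 * y(t + 2) + 2 * y(t + 1) - 4 * y(t) == 0
print("E 27 c) verified")
```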

W 26. Find the solution y(t) of the following homogeneous difference equa-
tions:
a) Δ²y(t) + Δy(t) − 2y(t) = 0, y(0) = 1, y(1) = 2;
b) Δ²y(t) + 2Δy(t) + 2y(t) = 0, y(0) = 3, y(1) = 3;
c) Δ³y(t) + y(t) = 0, y(0) = 0, y(1) = √3, y(2) = √3.

Answer. a) y(t) = 2^t;

b) y(t) = 3i^{t−1}(i + 1)/2 + 3(−i)^{t−1}(1 − i)/2
= 3 for t = 4k + 1, −3 for t = 4k + 2, −3 for t = 4k + 3, 3 for t = 4k + 4, k ∈ N;

c) One gets Z[y(t)](z) = √3(z − 2)/(z^2 − 3z + 3) and G*(z) = √3 z^{t−1}(z − 2)/(z^2 − 3z + 3). Fi-
nally, one obtains

y(t) = ((√3 + i)/2) · ((3 + i√3)/2)^{t−1} + ((√3 − i)/2) · ((3 − i√3)/2)^{t−1}.

Since z_1 = (3 + i√3)/2 is a root of z^2 − 3z + 3 = 0, it follows that

• z_1^2 = 3z_1 − 3;
• z_1^3 = 3z_1^2 − 3z_1 = 3(3z_1 − 3) − 3z_1 = 6z_1 − 9;
• z_1^4 = 6z_1^2 − 9z_1 = 6(3z_1 − 3) − 9z_1 = 9z_1 − 18;
• z_1^5 = 9z_1^2 − 18z_1 = 9(3z_1 − 3) − 18z_1 = 9z_1 − 27;
• z_1^6 = 9z_1^2 − 27z_1 = 9(3z_1 − 3) − 27z_1 = −27.

Therefore, we have to divide the problem into 6 cases by considering t =
6k + p, k ∈ N, p ∈ {1, 2, . . . , 6}. One gets

y(t) = √3 · (−1)^k · 27^k for t = 6k + 1,
y(t) = √3 · (−1)^k · 27^k for t = 6k + 2,
y(t) = 0 for t = 6k + 3,
y(t) = 3√3 · (−1)^{k+1} · 27^k for t = 6k + 4,
y(t) = 9√3 · (−1)^{k+1} · 27^k for t = 6k + 5,
y(t) = 18√3 · (−1)^{k+1} · 27^k for t = 6k + 6.

E 28. Find the solution y(t) of the following nonhomogeneous difference
equations:
a) Δ²y(t) − Δy(t) = t · 4^t, y(0) = 0, y(1) = 4;
b) Δ²y(t) − 5Δy(t) + 6y(t) = sin(3πt/2), y(0) = 1, y(1) = 3;
c) Δ³y(t) + 3Δ²y(t) = (−2)^t, y(0) = 1, y(1) = 0, y(2) = 0.

Solution. We are going to find the solution using the Z transform follow-
ing the same algorithm as the one presented in E. 27.
a)

1. Since Δy(t) = y(t + 1) − y(t) and Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes

(y(t + 2) − 2y(t + 1) + y(t)) − (y(t + 1) − y(t)) = t · 4^t.

This is equivalent to y(t + 2) − 3y(t + 1) + 2y(t) = t · 4^t;

2. Now we apply the Z transform and we obtain

Z[y(t + 2) − 3y(t + 1) + 2y(t)](z) = Z[t · 4^t](z).

Our goal is to determine Z[y(t)](z). Using Theorem 4.2.1 (Linearity)
and Theorem 4.2.14 (Differentiation of the Image), we get that

Z[y(t + 2)](z) − 3Z[y(t + 1)](z) + 2Z[y(t)](z) = (−z)(Z[4^t](z))′.

It follows from Theorem 4.2.9 (Second Time Delay) and from the initial
conditions that

z^2(Z[y(t)](z) − 4/z) − 3zZ[y(t)](z) + 2Z[y(t)](z) = (−z) · (z/(z − 4))′ ⇔

z^2 Z[y(t)](z) − 4z − 3zZ[y(t)](z) + 2Z[y(t)](z) = 4z/(z − 4)^2 ⇔

(z^2 − 3z + 2)Z[y(t)](z) − 4z = 4z/(z − 4)^2,

so Z[y(t)](z) = 4z((z − 4)^2 + 1)/((z − 4)^2(z^2 − 3z + 2));

3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:

3.1 Take G*(z) = 4z^t((z − 4)^2 + 1)/((z − 4)^2(z^2 − 3z + 2)), t = 2, 3, . . ., z ∈ C;

3.2 Find the isolated singular points of G*(z). The roots of the equa-
tion (z − 4)^2(z^2 − 3z + 2) = 0 are z_{1,2} = 4, z_3 = 1 and z_4 = 2,
where z_1 is a pole of order 2 and z_3, z_4 are both poles of order 1;

3.3 We compute the residues of G*(z) at z_1, z_3 and z_4. We obtain

res(G*(z), 4) = lim_{z→4} (4z^t((z − 4)^2 + 1)/(z^2 − 3z + 2))′
= 4 · (t4^{t−1}(16 − 12 + 2) − 4^t(8 − 3))/36
= (6t − 20)/9 · 4^{t−1},

res(G*(z), 1) = lim_{z→1} 4z^t((z − 4)^2 + 1)/((z − 4)^2(z − 2)) = −40/9

and

res(G*(z), 2) = lim_{z→2} 4z^t((z − 4)^2 + 1)/((z − 4)^2(z − 1)) = 5 · 2^t;

3.4 We conclude that y(t) = (6t − 20)/9 · 4^{t−1} − 40/9 + 5 · 2^t, t = 2, 3, . . .;
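A short exact-arithmetic check of part a) (a Python sketch, not part of the original text):

```python
# The solution of E 28 a), y(t) = (6t-20)/9 * 4**(t-1) - 40/9 + 5*2**t,
# should satisfy y(t+2) - 3y(t+1) + 2y(t) = t*4**t with y(0)=0, y(1)=4.
from fractions import Fraction

def y(t):
    return (Fraction(6 * t - 20, 9) * Fraction(4)**(t - 1)
            - Fraction(40, 9) + 5 * Fraction(2)**t)

assert y(0) == 0 and y(1) == 4
for t in range(15):
    assert y(t + 2) - 3 * y(t + 1) + 2 * y(t) == t * 4**t
print("E 28 a) verified")
```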

b)

1. Since Δy(t) = y(t + 1) − y(t) and Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes

(y(t + 2) − 2y(t + 1) + y(t)) − 5(y(t + 1) − y(t)) + 6y(t) = sin(3πt/2).

This is now equivalent to y(t + 2) − 7y(t + 1) + 12y(t) = sin(3πt/2);

2. Now we apply the Z transform and we obtain

Z[y(t + 2) − 7y(t + 1) + 12y(t)](z) = Z[sin(3πt/2)](z).

Our goal is to determine Z[y(t)](z). Using Theorem 4.2.1 (Linearity)
and the sine expression from Euler’s formula, we get

Z[y(t + 2)](z) − 7Z[y(t + 1)](z) + 12Z[y(t)](z) = Z[(e^{3πit/2} − e^{−3πit/2})/(2i)](z).

It follows from Theorem 4.2.9 (Second Time Delay) and from the initial
conditions that the left-hand side of the above equation is

z^2(Z[y(t)](z) − 1 − 3/z) − 7z(Z[y(t)](z) − 1) + 12Z[y(t)](z)

or

z^2 Z[y(t)](z) − z^2 − 3z − 7zZ[y(t)](z) + 7z + 12Z[y(t)](z),

which is equivalent to

(z^2 − 7z + 12)Z[y(t)](z) − z^2 + 4z.

The right-hand side can be reduced to

(1/2i) · (Z[(e^{3πi/2})^t](z) − Z[(e^{−3πi/2})^t](z)) = (1/2i)(Z[(−i)^t](z) − Z[i^t](z))
= (1/2i)(z/(z + i) − z/(z − i))
= −z/(z^2 + 1).

Putting all this together yields

(z^2 − 7z + 12)Z[y(t)](z) − z^2 + 4z = −z/(z^2 + 1) ⇔

(z^2 − 7z + 12)Z[y(t)](z) = z^2 − 4z − z/(z^2 + 1) ⇔

(z^2 − 7z + 12)Z[y(t)](z) = (z^4 − 4z^3 + z^2 − 5z)/(z^2 + 1),

so Z[y(t)](z) = (z^4 − 4z^3 + z^2 − 5z)/((z^2 + 1)(z^2 − 7z + 12));

3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:

3.1 Take G*(z) = z^t(z^3 − 4z^2 + z − 5)/((z^2 + 1)(z^2 − 7z + 12)), t = 2, 3, . . ., z ∈ C;

3.2 Find the isolated singular points of G*(z). The roots of the equa-
tion (z^2 + 1)(z^2 − 7z + 12) = 0 are z_{1,2} = ±i, z_3 = 3 and z_4 = 4,
all being poles of order 1;

3.3 We compute the residues of G*(z) at z_n, n ∈ {1, 2, 3, 4}. We obtain

res(G*(z), i) = lim_{z→i} z^t(z^3 − 4z^2 + z − 5)/((z + i)(z^2 − 7z + 12))
= i^t(−i + 4 + i − 5)/(2i(−1 − 7i + 12))
= −i^t/(2i(11 − 7i)) = −i^t(11 + 7i)/(2i(121 + 49)) = −i^{t−1}(11 + 7i)/340,

res(G*(z), −i) = −(−i)^{t−1}(11 − 7i)/340,

res(G*(z), 3) = lim_{z→3} z^t(z^3 − 4z^2 + z − 5)/((z^2 + 1)(z − 4)) = (11/10) · 3^t

and

res(G*(z), 4) = lim_{z→4} z^t(z^3 − 4z^2 + z − 5)/((z^2 + 1)(z − 3)) = −(1/17) · 4^t.

3.4 We conclude that

y(t) = −i^{t−1}(11 + 7i)/340 − (−i)^{t−1}(11 − 7i)/340 + (11/10) · 3^t − (1/17) · 4^t, t = 2, 3, . . . ,

so, for k ∈ N,

y(t) = 14/340 + (11/10) · 3^{4k+2} − (1/17) · 4^{4k+2} for t = 4k + 2,
y(t) = 22/340 + (11/10) · 3^{4k+3} − (1/17) · 4^{4k+3} for t = 4k + 3,
y(t) = −14/340 + (11/10) · 3^{4k+4} − (1/17) · 4^{4k+4} for t = 4k + 4,
y(t) = −22/340 + (11/10) · 3^{4k+5} − (1/17) · 4^{4k+5} for t = 4k + 5;

c)

1. Since Δy(t) = y(t + 1) − y(t), Δ²y(t) = y(t + 2) − 2y(t + 1) + y(t) and

Δ³y(t) = Σ_{k=0}^{3} (−1)^k \binom{3}{k} y(t + 3 − k) = y(t + 3) − 3y(t + 2) + 3y(t + 1) − y(t),

the equation becomes y(t + 3) − 3y(t + 1) + 2y(t) = (−2)^t;

2. Using the same idea as before, we obtain that

Z[y(t + 3)](z) − 3Z[y(t + 1)](z) + 2Z[y(t)](z) = Z[(−2)^t](z) ⇒

(z^3 − 3z + 2)Z[y(t)](z) = z^3 − 3z + z/(z + 2) ⇔

Z[y(t)](z) = (z^4 + 2z^3 − 3z^2 − 5z)/((z + 2)(z^3 − 3z + 2));

3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:

3.1 Take G*(z) = z^t(z^3 + 2z^2 − 3z − 5)/((z + 2)(z^3 − 3z + 2)), t = 3, 4, . . ., z ∈ C;

3.2 Find the isolated singular points of G*(z). The equation (z +
2)(z^3 − 3z + 2) = 0 can be rewritten as (z + 2)^2(z − 1)^2 = 0, so its
roots are z_1 = −2 and z_2 = 1, both poles of order 2;

3.3 We compute the residues of G*(z) at z_1 and z_2. We obtain

res(G*(z), −2) = lim_{z→−2} ((z^{t+3} + 2z^{t+2} − 3z^{t+1} − 5z^t)/(z − 1)^2)′
= (3t − 10)/27 · (−2)^{t−1}

and

res(G*(z), 1) = lim_{z→1} ((z^{t+3} + 2z^{t+2} − 3z^{t+1} − 5z^t)/(z + 2)^2)′
= (−15t + 22)/27;

3.4 We conclude that y(t) = (3t − 10)/27 · (−2)^{t−1} + (−15t + 22)/27, t =
3, 4, . . ..
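Part c) can also be verified with exact rational arithmetic (an illustrative Python sketch, not from the book):

```python
# The solution of E 28 c), y(t) = (3t-10)/27 * (-2)**(t-1) + (22-15t)/27,
# should satisfy y(t+3) - 3y(t+1) + 2y(t) = (-2)**t.
from fractions import Fraction

def y(t):
    return (Fraction(3 * t - 10, 27) * Fraction(-2)**(t - 1)
            + Fraction(22 - 15 * t, 27))

for t in range(3, 20):
    assert y(t + 3) - 3 * y(t + 1) + 2 * y(t) == (-2)**t
print("E 28 c) verified")
```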

W 27. Find the solution y(t) of the following nonhomogeneous difference
equations:
a) Δ²y(t) + 2Δy(t) − 3y(t) = 5^t, y(0) = −1, y(1) = 5;
b) 2Δ²y(t) + 4Δy(t) + y(t) = t, y(0) = 1, y(1) = 0;
c) Δ³y(t) + Δ²y(t) + Δy(t) + y(t) = 2^t − 3^t, y(0) = 0, y(1) = 0, y(2) = 0.

Answer. a) We get the equation y(t + 2) − 4y(t) = 5^t. Applying the
Z transform, we get Z[y(t)](z) = z(1 − (z − 5)^2)/((z − 5)(z^2 − 4)). The solution is

y(t) = 5^t/21 + 2^{t+1}/3 + 6(−2)^{t+1}/7, t = 2, 3, . . .;

b) We get the equation 2y(t + 2) − y(t) = t. Applying the Z transform,
we get Z[y(t)](z) = z(2z^3 − 4z^2 + 2z + 1)/((z − 1)^2(2z^2 − 1)). The solution is

y(t) = t − 4 + ((5 + 3√2)/2) · (1/√2)^t + ((5 − 3√2)/2) · (−1/√2)^t, t = 2, 3, . . .;

c) We get the equation y(t + 3) − 2y(t + 2) + 2y(t + 1) = 2^t − 3^t. Applying the
Z transform, we get Z[y(t)](z) = −1/((z − 2)(z − 3)(z^2 − 2z + 2)). The solution

is y(t) = 2^{t−2} − 3^{t−1}/5 − (1 + i)^{t−1} · (3 − i)/20 − (1 − i)^{t−1} · (3 + i)/20, t = 3, 4, . . .. Since
(1 + i)^4 = (2i)^2 = −4 and (1 − i)^4 = (−2i)^2 = −4, it follows that

y(t) = 2^{4k−1} − 3^{4k}/5 − 3(−4)^k/10 for t = 4k + 1,
y(t) = 2^{4k} − 3^{4k+1}/5 − 2(−4)^k/5 for t = 4k + 2,
y(t) = 2^{4k+1} − 3^{4k+2}/5 − (−4)^k/5 for t = 4k + 3,
y(t) = 2^{4k+2} − 3^{4k+3}/5 + 2(−4)^k/5 for t = 4k + 4,  k ∈ N.

E 29. Find the general term y_n of the following recurrent sequences:

a) y_{n+1} = 2y_n + 3 · 2^n, y_0 = 1, n ∈ N;

b) y_{n+2} = y_{n+1} + y_n, y_0 = 0, y_1 = 1, n ∈ N (the well-known Fibonacci
sequence);

c) y_{n+2} = 3y_{n+1} − 2y_n + n, y_0 = 1, y_1 = 1, n ∈ N.

Solution. a) We interpret the sequence y_n as a discrete function y_n = y(n),
n ∈ N. For convenience, we consider n = t, t ∈ N. The recurrent relation
becomes y(t + 1) = 2y(t) + 3 · 2^t, y(0) = 1, t ∈ N. Applying the Z transform
and Theorem 4.2.1 (Linearity), we get

Z[y(t + 1)](z) = 2Z[y(t)](z) + 3Z[2^t](z).

It follows from Theorem 4.2.9 (Second Time Delay) and from the initial
conditions that

z · (Z[y(t)](z) − y(0)) = 2Z[y(t)](z) + 3 · z/(z − 2) ⇔

z · Z[y(t)](z) − z = 2Z[y(t)](z) + 3z/(z − 2),

so Z[y(t)](z) = z(z + 1)/(z − 2)^2. Now we know the Z transform of y(t). Let us
follow the steps of the algorithm presented and used in E. 26:

1. G*(z) = z^t(z + 1)/(z − 2)^2, t ∈ N, z ∈ C;

2. (z − 2)^2 = 0 ⇒ z = 2 is the only isolated singular point, a pole of order
2 of G*(z);

3. One gets res(G*(z), 2) = lim_{z→2} (z^t(z + 1))′ = 2^{t−1}(3t + 2);

4. y(t) = 2^{t−1}(3t + 2), t ∈ N; as a consequence, y_n = 2^{n−1}(3n + 2), n ∈ N;

b) We interpret the sequence y_n as a discrete function y_n = y(n), n ∈ N.
For convenience, we consider n = t, t ∈ N. The recurrent relation becomes
y(t + 2) = y(t + 1) + y(t), y(0) = 0, y(1) = 1, t ∈ N. Applying the Z
transform and Theorem 4.2.1 (Linearity), we get

Z[y(t + 2)](z) = Z[y(t + 1)](z) + Z[y(t)](z).

It follows from Theorem 4.2.9 (Second Time Delay) and from the initial
conditions that

z^2(Z[y(t)](z) − y(0) − y(1)/z) = z · (Z[y(t)](z) − y(0)) + Z[y(t)](z) ⇔

z^2(Z[y(t)](z) − 0 − 1/z) = z · (Z[y(t)](z) − 0) + Z[y(t)](z),

so Z[y(t)](z) = z/(z^2 − z − 1). Now we know the Z transform of y(t). Let us
follow the steps of the algorithm presented and used in E. 26:

1. G*(z) = z^t/(z^2 − z − 1), t ∈ N, z ∈ C;

2. z^2 − z − 1 = 0 ⇒ z_1 = (1 + √5)/2 and z_2 = (1 − √5)/2 are the isolated
singular points, both first order poles of G*(z);

3. One gets

res(G*(z), z_1) = z^t/(2z − 1)|_{z=z_1} = (1/√5) · ((1 + √5)/2)^t

and

res(G*(z), z_2) = z^t/(2z − 1)|_{z=z_2} = −(1/√5) · ((1 − √5)/2)^t.

4. y(t) = (1/√5)[((1 + √5)/2)^t − ((1 − √5)/2)^t], t ∈ N; as a consequence,

y_n = (1/√5)[((1 + √5)/2)^n − ((1 − √5)/2)^n], n ∈ N;
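Binet's closed form obtained above can be checked against the recurrence itself; the Python sketch below is illustrative and not part of the original text.

```python
# Binet's formula from E 29 b) should reproduce the Fibonacci numbers
# generated by y(t+2) = y(t+1) + y(t), y(0) = 0, y(1) = 1.
from math import sqrt

def fib_closed(n):
    s5 = sqrt(5)
    return ((1 + s5) / 2) ** n / s5 - ((1 - s5) / 2) ** n / s5

fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])
for n, f in enumerate(fib):
    assert round(fib_closed(n)) == f
print("Binet formula verified for n < 20")
```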

c) We interpret the sequence y_n as a discrete function y_n = y(n), n ∈ N.
For convenience, we consider n = t, t ∈ N. The recurrent relation becomes
y(t + 2) = 3y(t + 1) − 2y(t) + t, y(0) = 1, y(1) = 1, t ∈ N. Applying the Z
transform and Theorem 4.2.1 (Linearity), we get

Z[y(t + 2)](z) = 3Z[y(t + 1)](z) − 2Z[y(t)](z) + Z[t](z).

It follows from Theorem 4.2.9 (Second Time Delay) and from the initial
conditions that

z^2(Z[y(t)](z) − 1 − 1/z) = 3z(Z[y(t)](z) − 1) − 2Z[y(t)](z) + z/(z − 1)^2 ⇒

(z^2 − 3z + 2)Z[y(t)](z) = z^2 + z − 3z + z/(z − 1)^2 ⇔

Z[y(t)](z) = z(z − 2)/(z^2 − 3z + 2) + z/((z − 1)^2(z^2 − 3z + 2)),

so Z[y(t)](z) = z/(z − 1) + z/((z − 1)^3(z − 2)). We know that the original of
z/(z − 1) is 1 (or the function u(t)). For the function z/((z − 1)^3(z − 2)) we will follow
the steps of the algorithm presented and used in E. 26:

1. G*(z) = z^t/((z − 1)^3(z − 2)), t = 0, 1, . . ., z ∈ C;

2. (z − 1)^3(z − 2) = 0 ⇒ z_1 = 1 and z_2 = 2 are the isolated singular points,
the first one a pole of order 3 and the second one a pole of order 1;

3. One gets

res(G*(z), 1) = (1/2!) · lim_{z→1} (z^t/(z − 2))′′ = −(t^2 + t + 2)/2

and

res(G*(z), 2) = lim_{z→2} z^t/(z − 1)^3 = 2^t;

4. The original of the function z/((z − 1)^3(z − 2)) is 2^t − (t^2 + t + 2)/2, t ∈ N.

It follows that y(t) = 1 + 2^t − (t^2 + t + 2)/2, t ∈ N. Hence,

y_n = 1 + 2^n − (n^2 + n + 2)/2, n ∈ N.
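The general term of part c) is easy to confirm numerically (an illustrative Python sketch, not from the book):

```python
# The general term from E 29 c), y(n) = 1 + 2**n - (n*n + n + 2)//2,
# should satisfy y(n+2) = 3y(n+1) - 2y(n) + n with y(0) = y(1) = 1.
def y(n):
    return 1 + 2**n - (n * n + n + 2) // 2   # n*n + n + 2 is always even

assert y(0) == 1 and y(1) == 1
for n in range(20):
    assert y(n + 2) == 3 * y(n + 1) - 2 * y(n) + n
print("E 29 c) verified")
```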

W 28. Find the general term y_n of the following recurrent sequences:
a) y_{n+1} = y_n + n · 3^n, y_0 = 3, n ∈ N;
b) y_{n+2} + y_n = 3^n, y_0 = 0, y_1 = 3, n ∈ N;
c) y_{n+3} + 3y_{n+2} + 3y_{n+1} + y_n = 0, y_0 = 0, y_1 = 0, y_2 = 1, n ∈ N.

Answer. a) We get that Z[y(t)](z) = 3z/(z − 1) + 3z/((z − 1)(z − 3)^2) and

y_n = 3 + 3/4 + 3^n(2n − 3)/4 = (15 + 3^n(2n − 3))/4;

b) We get Z[y(t)](z) = z(3z − 8)/((z − 3)(z^2 + 1)) and

y(t) = 3^t/10 + i^t(−27i − 1)/20 + (−i)^t(27i − 1)/20.

This implies that

y_n = 3^{4k}/10 − 1/10 for n = 4k,
y_n = 3^{4k+1}/10 + 27/10 for n = 4k + 1,
y_n = 3^{4k+2}/10 + 1/10 for n = 4k + 2,
y_n = 3^{4k+3}/10 − 27/10 for n = 4k + 3,  k ∈ N;

c) We get Z[y(t)](z) = z/(z + 1)^3 and y_n = (−1)^{n−2} · n(n − 1)/2.

E 30. Determine the transfer matrix of the system Σ in Figure 4.12.

Solution. The state space representation of the system is given by the
equations (4.30) and (4.31):

Σ:  x(t + 1) = Ax(t) + Bu(t),
    y(t) = Cx(t) + Du(t),

where

A = [4 −1 −2; 2 1 −2; 1 −1 1],  B = [1 −1; −1 2; 0 3],
C = [1 1 1; 2 0 −1],  D = [12 −30; −6 13].

Also, x(t) = (x_1(t), x_2(t), x_3(t))^T, u(t) = (u_1(t), u_2(t))^T and y(t) =
(y_1(t), y_2(t))^T. By a straightforward computation, det(sI − A) = s^3 − 6s^2 + 11s − 6
and

(sI − A)* = [s^2 − 2s − 1, −s + 3, −2s + 4; 2s − 4, s^2 − 5s + 6, −2s + 4; s − 3, −s + 3, s^2 − 5s + 6].

One obtains the following transfer matrix:

T(s) = C · (sI − A)^{−1} · B + D = [8s − 2, 4s^2 − 42s + 32; 3s^2 − 4s + 4, −5s^2 + 6s].

Figure 4.12: The Scheme of the Linear System Σ

4.6 MATLAB Applications


4.6.1 Z Transform
The syntax is one of the following:

1. F = ztrans(f ). This computes the Z transform F of the symbolic
expression f . By default the variable of f is n (which must be declared)
and the variable of the computed transform F is z. Any other variable
can be used instead of n, if it is declared, with one exception: if z is
used as time variable, then ztrans returns F as a function of w, namely
F (w);

2. F = ztrans(f, v). This computes the Z transform F as a function of
the parameter v instead of the default variable z;

3. F = ztrans(f, m, v). This computes the Z transform F as a function
of the variable v instead of the default variable z and considers that f
is a function of the variable m instead of the default variable n.

Example 4.6.1.
>> syms m;
f = exp(2*m) − sin(3*m);
F = ztrans(f );
The answer is F = z/(z − exp(2)) − (z* sin(3))/(zˆ2 − 2* cos(3)*z + 1)
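The symbolic answer can be validated from the defining series Σ f(n) z^(−n); a Python sketch (not MATLAB), evaluated at a real point inside the region of convergence |z| > e^2:

```python
import math

# The Z transform is the series sum of f(n) z^{-n}; a truncated sum at
# z = 12 (> e^2) must agree with the closed form returned by ztrans.
def f(n):
    return math.exp(2 * n) - math.sin(3 * n)

z = 12.0
series = sum(f(n) * z**(-n) for n in range(200))
closed = z / (z - math.exp(2)) - z * math.sin(3) / (z**2 - 2 * math.cos(3) * z + 1)
assert abs(series - closed) < 1e-9
print("series matches the closed form")
```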

Example 4.6.2.
>> syms z v;
f = exp(2*z) − sin(3*z);
F = ztrans(f, v);
The answer is F = v/(v − exp(2)) − (v* sin(3))/(vˆ2 − 2* cos(3)*v + 1)

Example 4.6.3.
>> syms m v;
f = exp(2*m) − sin(3*m);
F = ztrans(f, m, v);
The answer is F = v/(v − exp(2)) − (v* sin(3))/(vˆ2 − 2* cos(3)*v + 1)

Power Functions
Example 4.6.4.
>> syms n; % discrete ramp function
F = ztrans(n);
The answer is F = z/(z − 1)ˆ2.
Example 4.6.5.
>> syms n;
F = ztrans(nˆ2);
The answer is F = (z*(z + 1))/(z − 1)ˆ3.
Example 4.6.6.
>> syms n;
F = ztrans(nˆ7);
The answer is F = (z*(zˆ6+120*zˆ5+1191*zˆ4+2416*zˆ3+1191*zˆ2+
120*z + 1))/(z − 1)ˆ8.
Example 4.6.7.
>> syms n;
f = nˆ3;
F = ztrans(f );
The answer is F = (z*(zˆ2 + 4*z + 1))/(z − 1)ˆ4.
Example 4.6.8.
>> syms n;
F = ztrans(7ˆn);
The answer is F = z/(z − 7).
Example 4.6.9.
>> syms n;
F = ztrans((1 + 2i)ˆn);
The answer is F = z/(z − 1 − 2i).
Example 4.6.10.
>> syms n a;
F = ztrans(aˆn);
The answer is F = −z/(a − z).
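The same series check works for the power-function transforms above; a Python sketch for Z{7^n} and Z{n^3} (truncation points are ad-hoc choices making the tails negligible):

```python
# Series checks: Z{7^n} = z/(z - 7) and Z{n^3} = z(z^2 + 4z + 1)/(z - 1)^4.
z = 9.0
s1 = sum(7**n * z**(-n) for n in range(300))
assert abs(s1 - z / (z - 7)) < 1e-6

z = 3.0
s2 = sum(n**3 * z**(-n) for n in range(300))
assert abs(s2 - z * (z**2 + 4 * z + 1) / (z - 1)**4) < 1e-9
print("ok")
```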

Exponentials
Example 4.6.11.
>> syms n a;
f = exp(a*n);
F = ztrans(f );
The answer is F = z/(z − exp(a)).

Example 4.6.12.
>> syms n a;
f = exp(i*a*n);
F = ztrans(f );
MATLAB rewrites f = exp(a*n*1i).
The answer is F = z/(z − exp(a*1i)).

Example 4.6.13.
>> syms a b t;
F = ztrans(exp(a*t) − exp(b*t));
The answer is F = z/(z − exp(a)) − z/(z − exp(b)).

Heaviside and Delay


Example 4.6.14.
>> syms n;
F = ztrans(heaviside(n − 5));
The answer is F = (1/(z − 1) + 1/2)/zˆ5.
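The extra 1/2 in this answer comes from MATLAB's convention heaviside(0) = 1/2; under that convention the series Σ h(n − 5) z^(−n) reproduces the formula (a small Python sketch):

```python
# heaviside(0) = 1/2 convention, as used by MATLAB's symbolic toolbox
def h(x):
    return 0.5 if x == 0 else (1.0 if x > 0 else 0.0)

z = 2.0
series = sum(h(n - 5) * z**(-n) for n in range(200))
closed = (1 / (z - 1) + 0.5) / z**5
assert abs(series - closed) < 1e-12
print("ok")
```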

Example 4.6.15.
>> syms n;
F = ztrans(heaviside(n + 3));
The answer is F = zˆ3*(1/(z − 1) + 1/2) − z − zˆ2 − zˆ3/2.

Example 4.6.16.
>> syms t a;
f = heaviside(t) − 2*heaviside(t − 4) + heaviside(t − 2*4);
F = ztrans(f );
The answer is equivalent to F = (1/(z − 1) + 1/2)*(1 − 2/zˆ4 + 1/zˆ8).

Example 4.6.17.
>> syms t;
f = t*(heaviside(t));
F = ztrans(f );
The answer is F = z/(z − 1)ˆ2.
Example 4.6.18.
>> syms t;
f = tˆ3*(heaviside(t));
F = ztrans(f );
The answer is F = (z*(zˆ2 + 4*z + 1))/(z − 1)ˆ4.
Example 4.6.19.
>> syms n;
F = ztrans(nˆ3* exp(5*n));
The answer is F = (z* exp(5)*(zˆ2 + 4* exp(5)*z + exp(10)))/(z−
exp(5))ˆ4.

Sine and Cosine


Example 4.6.20.
>> syms n;
F = ztrans(sin(n));
The answer is F = (z* sin(1))/(zˆ2 − 2* cos(1)*z + 1).
Example 4.6.21.
>> syms n a;
f = sin(a*n);
F = ztrans(f );
The answer is F = (z* sin(a))/(zˆ2 − 2* cos(a)*z + 1).
Example 4.6.22.
>> syms n;
F = ztrans(cos(n));
The answer is F = (z*(z − cos(1)))/(zˆ2 − 2* cos(1)*z + 1).
Example 4.6.23.
>> syms n a;
f = cos(a*n);
F = ztrans(f );
The answer is F = (z*(z − cos(a)))/(zˆ2 − 2* cos(a)*z + 1).
Example 4.6.24.
>> syms n a;
f = sinh(a*n);
F = ztrans(f );
The answer is F = (z* sinh(a))/(zˆ2 − 2* cosh(a)*z + 1).

Example 4.6.25.
>> syms n a;
f = cosh(a*n);
F = ztrans(f );
The answer is F = (z*(z − cosh(a)))/(zˆ2 − 2* cosh(a)*z + 1).

Matrices
We can also determine the Z transform of matrices. One uses matrices of
the same size to specify the transformation variables and evaluation points.

4.6.2 Inverse Z Transform


The syntax is one of the following:

1. f = iztrans(F). This computes the inverse Z transform (the original)
f of the symbolic expression F. By default the variable of F is z and
the variable of the computed inverse transform f is n;

2. f = iztrans(F, t). This computes the inverse Z transform f as a
function of the parameter t instead of the default variable n;

3. f = iztrans(F, y, x). This computes the inverse Z transform f as a
function of the variable x instead of the default variable n and considers
that F is a function of the variable y instead of the default variable z.

Example 4.6.26.
>> syms z;
f = iztrans(z/(z − 1));
The answer is f = 1.
% f stands for Heaviside’s discrete step function
Now we do it the other way around.
>> F = ztrans(f );
The answer is F = z/(z − 1).
Example 4.6.27.
>> syms z;
F = z/(z + 1);
f = iztrans(F );
The answer is f = (−1)ˆn.

Example 4.6.28.
>> syms z;
F = z/(z − 1)ˆ2;
f = iztrans(F );
The answer is f = n.

Example 4.6.29.
>> syms z;
F = z/(z + 1);
f = iztrans(F );
The answer is f = (−1)ˆn.

Example 4.6.30.
>> syms z a;
F = z/(z − a)ˆ2;
f = iztrans(F );
The answer is f = piecewise(a == 0, kroneckerDelta(n − 1, 0),
a ∼= 0, a*(kroneckerDelta(n, 0)/aˆ2 + (aˆn*(n − 1))/aˆ2) +
aˆn/a − kroneckerDelta(n, 0)/a).
% kroneckerDelta(i, j) equals 1 if i = j and 0 if i ≠ j; hence, for a = 0 the
result is f = kroneckerDelta(n − 1, 0), i.e., f(n) = 1 if n = 1 and f(n) = 0
if n ≠ 1. Indeed, the Z transform of this function is F = 1/z = z/zˆ2. For
a ≠ 0 the result is f = n*aˆ(n − 1). See the simplified version below.

Example 4.6.31.
>> syms z a;
F = z/(z − a)ˆ2;
f = simplify(iztrans(F ));
The answer is f = piecewise(a == 0, kroneckerDelta(n − 1, 0), a ∼= 0,
aˆ(n − 1)*n).
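A quick series check of the a ≠ 0 branch (Python sketch): Σ n·a^(n−1) z^(−n) should equal z/(z − a)^2.

```python
# Series check of iztrans(z/(z - a)^2) = n*a^(n-1) for a != 0.
a, z = 0.7, 2.0
series = sum(n * a**(n - 1) * z**(-n) for n in range(300))
assert abs(series - z / (z - a)**2) < 1e-12
print("ok")
```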
Example 4.6.32.
>> syms z x;
F = (z* sin(x))/(zˆ2 − 2*z* cos(x) + 1);
f = iztrans(F );
The answer is f = sin(n*x).
Example 4.6.33.
>> syms z x;
F = (z*(z − cosh(x)))/(zˆ2 − 2*z* cosh(x) + 1);
f = iztrans(F );
The answer is f = cosh(n*x).
Example 4.6.34.
>> syms t z;
F = exp(3/z);
f = iztrans(F, t);
The answer is f = 3ˆt/factorial(t).
Example 4.6.35.
>> syms x y a;
F = (1 + a/y)ˆ3;
f = iztrans(F, y, x);
The answer is f = kroneckerDelta(x − 3, 0)*aˆ3 + 3*kroneckerDelta(x −
2, 0)*aˆ2 + 3*kroneckerDelta(x − 1, 0)*a + kroneckerDelta(x, 0).
% F = 1 + 3a/y + 3aˆ2/yˆ2 + aˆ3/yˆ3; hence, f (0) = 1, f (1) = 3a,
f (2) = 3aˆ2, f (3) = aˆ3.

4.7 Maple Applications


4.7.1 Z Transform
The syntax is one of the following:
1. F = ztrans(f(n), n, z). This computes the Z transform F of the
symbolic expression f . The variable of the function f is n and the
computed transform F is a function of the variable z;
2. F = ztrans(f(n), n, p). This computes the Z transform F as a func-
tion of the variable p;

3. F = ztrans(f(t), t, p). This computes the Z transform F as a function
of the variable p and considers that f is a function of the variable t.

Example 4.7.1.
> ztrans(n^2 − 2 · exp(n), n, z)
z(z + 1)/(z − 1)^3 − 2z/(e(z/e − 1))
Example 4.7.2.
> ztrans(n^2 − 2 · exp(n), n, p)
p(p + 1)/(p − 1)^3 − 2p/(e(p/e − 1))
Example 4.7.3.
> ztrans(t^2 − 2 · exp(t), t, p)
p(p + 1)/(p − 1)^3 − 2p/(e(p/e − 1))

Remark 4.7.4. In Maple the power n^2 is entered as nˆ2.

Transforms of Dirac’s Delta Function (Distribution) and Heavi-


side’s Discrete Step Function
The charfcn function is the characteristic function of a set A: charfcn[A](x)
evaluates to 1 if x ∈ A, to 0 if x ∉ A, and stays unevaluated as charfcn[A](x)
when the membership of x in A cannot be decided.
The discrete time impulse function is charfcn[0](n).
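For a finite set A of nonnegative integers, the Z transform of charfcn[A](n) is simply Σ over n ∈ A of z^(−n), which the examples below illustrate; a small Python sketch of this sum:

```python
# Z transform of a characteristic function of a finite set A of
# nonnegative integers: a finite sum of z^{-n} over n in A.
def ztrans_charfcn(A, z):
    return sum(z**(-n) for n in A)

z = 2.0
assert ztrans_charfcn({0}, z) == 1.0
assert ztrans_charfcn({0, 1, 2, 3, 5}, z) == 1 + 1/z + 1/z**2 + 1/z**3 + 1/z**5
print("ok")
```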
Example 4.7.5.
> ztrans (charfcn[0](n), n, z)
1

Example 4.7.6.
> ztrans (charfcn[0, 1](n), n, z)
1 + 1/z
Example 4.7.7.
> ztrans (charfcn[0, 1, 2, 3, 5](n), n, z)
1 + 1/z + 1/z^2 + 1/z^3 + 1/z^5
Example 4.7.8.
> ztrans(charfcn[0, 1](n) − 3 · charfcn[3, 7](n) + 5 ·
charfcn[4](n) − 11 · charfcn[5](n), n, z)
1 + 1/z − 3/z^3 − 3/z^7 + 5/z^4 − 11/z^5
Example 4.7.9.
> ztrans(Heaviside(n), n, z)
z/(z − 1)

Heaviside and Delay


Example 4.7.10.
> ztrans(Heaviside(n − 1), n, z)
z/(z − 1) − 1

Example 4.7.11.
> ztrans(Heaviside(n − 3), n, z)
z/(z − 1) − 1 − 1/z − 1/z^2
Example 4.7.12.
> ztrans(Heaviside(t) − 2 · Heaviside(t − 4)+
+ Heaviside(t − 8), t, z)
1 + 1/z + 1/z^2 + 1/z^3 − 1/z^4 − 1/z^5 − 1/z^6 − 1/z^7
Example 4.7.13.
> ztrans(t · (Heaviside(t) − 2 · Heaviside(t − 3)+
+ Heaviside(t − 6)), t, z)
(z^4 + 2z^3 − 3z^2 − 4z − 5)/z^5
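Since f(t) = t·(Heaviside(t) − 2 Heaviside(t − 3) + Heaviside(t − 6)) vanishes for t ≥ 6, its Z transform is a polynomial in 1/z; a numeric check (Python sketch, with Heaviside(0) taken as 1, matching the Maple outputs above):

```python
# f(t) = t*(H(t) - 2H(t-3) + H(t-6)) is nonzero only for t = 1..5, so
# the Z transform reduces to a finite sum.
def H(x):
    return 1.0 if x >= 0 else 0.0

def f(t):
    return t * (H(t) - 2 * H(t - 3) + H(t - 6))

z = 3.0
series = sum(f(t) * z**(-t) for t in range(10))
closed = (z**4 + 2 * z**3 - 3 * z**2 - 4 * z - 5) / z**5
assert abs(series - closed) < 1e-12
print("ok")
```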
Remark 4.7.14. In Maple the product is indicated by an asterisk, namely
*. Hence, one writes 2*Heaviside for 2 · Heaviside.

Power Functions and Exponentials


Example 4.7.15.
> ztrans(t, t, z)
z/(z − 1)^2
Example 4.7.16.
> ztrans(t^2, t, z)
z(z + 1)/(z − 1)^3
Example 4.7.17.
> ztrans(t^3, t, z)
z(z^2 + 4z + 1)/(z − 1)^4
Example 4.7.18.
> ztrans(t^4, t, z)
z(z^3 + 11z^2 + 11z + 1)/(z − 1)^5
Example 4.7.19.
> ztrans(a^t, t, z)
z/(a(z/a − 1))
Example 4.7.20.
> ztrans(exp(a · t), t, z)
z/(e^a(z/e^a − 1))
Example 4.7.21.
> assume(n, positive):
ztrans(n · 2^n, n, z)
2z/(z − 2)^2
Example 4.7.22.
> assume(n, positive):
ztrans(n^3 · 2^n, n, z)
2z(z^2 + 8z + 4)/(z − 2)^4
Example 4.7.23.
> ztrans(exp(I · a · t), t, z)
z/(e^(Ia)(z/e^(Ia) − 1))
Example 4.7.24.
> ztrans(exp(t − 2), t, z)
e^(−2) z/(e(z/e − 1))
Example 4.7.25.
> ztrans(exp(a · t) − exp(b · t), t, z)
z/(e^a(z/e^a − 1)) − z/(e^b(z/e^b − 1))

Sine and Cosine


Example 4.7.26.
> ztrans(sin(n), n, z)
−z sin(1)/(−z^2 + 2z cos(1) − 1)
Example 4.7.27.
> ztrans(cos(n), n, z)
(−z + cos(1))z/(−z^2 + 2z cos(1) − 1)
Example 4.7.28.
> ztrans(sin(a · n), n, z)
−z sin(a)/(−z^2 + 2z cos(a) − 1)
Example 4.7.29.
> ztrans(cos(a · n), n, z)
(−z + cos(a))z/(−z^2 + 2z cos(a) − 1)
Example 4.7.30.
> ztrans(sinh(a · n), n, z)
(z − z(e^a)^2)/(−2z^2 e^a + 2z + 2z(e^a)^2 − 2e^a)
Example 4.7.31.
> ztrans(cosh(a · n), n, z)
(−2z^2 e^a + z + z(e^a)^2)/(−2z^2 e^a + 2z + 2z(e^a)^2 − 2e^a)
Some properties of the Z transform can be obtained using Maple.

Time Delay
Example 4.7.32.
> ztrans(f (n − 1), n, z)
ztrans(f(n), n, z)/z
Example 4.7.33.
> ztrans(f (n − 5), n, z)
ztrans(f(n), n, z)/z^5
Example 4.7.34.
> ztrans(f (n + 1), n, z)

z ztrans(f (n), n, z) − f (0)z

Example 4.7.35.
> ztrans(f (n + 3), n, z)

z^3 ztrans(f(n), n, z) − f(0)z^3 − f(1)z^2 − f(2)z
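The advance formula above can be verified numerically for a concrete sequence; a Python sketch with f(n) = 0.5^n, so that F(z) = z/(z − 0.5):

```python
# Check: Z{f(n+3)} = z^3 F(z) - f(0) z^3 - f(1) z^2 - f(2) z for f(n) = 0.5^n.
def f(n):
    return 0.5**n

def F(z):
    return z / (z - 0.5)

z = 2.0
lhs = sum(f(n + 3) * z**(-n) for n in range(200))
rhs = z**3 * F(z) - f(0) * z**3 - f(1) * z**2 - f(2) * z
assert abs(lhs - rhs) < 1e-12
print("ok")
```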

Differentiation of the Image


Example 4.7.36.
> ztrans(−n · f (n), n, z)
z (∂/∂z) ztrans(f(n), n, z)
Example 4.7.37.
> ztrans(n^2 · f (n), n, z)
z (∂/∂z) ztrans(f(n), n, z) + z^2 (∂^2/∂z^2) ztrans(f(n), n, z)
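For f(n) = a^n, where F(z) = z/(z − a) has explicit derivatives F′(z) = −a/(z − a)^2 and F″(z) = 2a/(z − a)^3, the identity Z{n^2 f(n)} = z F′(z) + z^2 F″(z) can be checked numerically (Python sketch):

```python
# Series check of Z{n^2 a^n} = z F'(z) + z^2 F''(z) with F(z) = z/(z - a).
a, z = 0.5, 2.0
series = sum(n**2 * a**n * z**(-n) for n in range(300))
F1 = -a / (z - a)**2
F2 = 2 * a / (z - a)**3
assert abs(series - (z * F1 + z**2 * F2)) < 1e-12
print("ok")
```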

Difference
Example 4.7.38.
> ztrans(f (t + 1) − f (t), t, z)
z ztrans(f(t), t, z) − f(0)z − ztrans(f(t), t, z)

4.7.2 Inverse Z Transform


The syntax is one of the following:
1. f = invztrans(F(z), z, n). This computes the inverse Z transform
(the original) f of the symbolic expression F . The variable of the
function F is z and the computed inverse transform f is a function of
the variable n;
2. f = invztrans(F(z), z, t). This computes the inverse Z transform f
as a function of the variable t;
3. f = invztrans(F(p), p, x). This computes the inverse Z transform
f as a function of the variable x and considers that F is a function of
the variable p.
Example 4.7.39.
> invztrans((z^4 + 11 · z^3 + 11 · z^2 + z)/(z − 1)^5, z, n)
n^4
Example 4.7.40.
> invztrans((z^4 + 11 · z^3 + 11 · z^2 + z)/(z − 1)^5, z, t)
t^4
Example 4.7.41.
> invztrans((p^4 + 11 · p^3 + 11 · p^2 + p)/(p − 1)^5, p, x)
x^4
Example 4.7.42.
> invztrans(z/(z − 1), z, n)
1
In fact, the answer stands in for Heaviside’s discrete step function, namely
h(n) = 1, ∀n ∈ {0, 1, 2, . . .}.
Example 4.7.43.
> invztrans(z/(z − 5), z, n)
5^n
Example 4.7.44.
> invztrans(z/(z − a), z, n)
a^n
Example 4.7.45.
> invztrans(exp(a/z), z, n)
a^n/n!
Example 4.7.46.
> invztrans((z^2 − cos(a) · z)/(z^2 − 2z cos(a) + 1), z, n)
cos(a n)

Example 4.7.47.
> invztrans((3 · z^2 − 7 · z)/(z^2 − 6 · z + 5), z, n)
2 · 5^n + 1

Example 4.7.48.
> invztrans((3 · z^2 − 13 · z)/(z^2 − 9 · z + 20), z, n)
2 · 5^n + 4^n
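Both inversions follow from partial fractions, e.g. (3z − 7)/((z − 1)(z − 5)) = 1/(z − 1) + 2/(z − 5); a numeric confirmation of the two transform pairs (Python sketch):

```python
# Series checks of the two inversions:
# (3z^2 - 7z)/(z^2 - 6z + 5)  <->  1 + 2*5^n
# (3z^2 - 13z)/(z^2 - 9z + 20) <-> 4^n + 2*5^n
z = 8.0
lhs1 = sum((1 + 2 * 5**n) * z**(-n) for n in range(250))
assert abs(lhs1 - (3 * z**2 - 7 * z) / (z**2 - 6 * z + 5)) < 1e-9
lhs2 = sum((4**n + 2 * 5**n) * z**(-n) for n in range(250))
assert abs(lhs2 - (3 * z**2 - 13 * z) / (z**2 - 9 * z + 20)) < 1e-9
print("ok")
```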

Example 4.7.49.
> invztrans(2 · z/(z − 2)^2, z, n)
2^n n

Example 4.7.50.
> invztrans(2 · z · (z^2 + 8 · z + 4)/(z − 2)^4, z, n)
2^n n^3
Appendix A

Tables of Integral Transforms

A.1 Fourier Transform


No.  f(x)  |  f̂(ω) = ∫_{−∞}^{∞} f(x) e^{iωx} dx

1.  f(x/a + b), a > 0, b ∈ R  |  a e^{−iabω} f̂(aω)
2.  f(−x/a + b), a > 0, b ∈ R  |  a e^{iabω} f̂(−aω)
3.  f(ax) e^{ibx}, a > 0, b ∈ R  |  (1/a) f̂((ω + b)/a)
4.  f(ax) cos(bx), a > 0, b ∈ R  |  (1/(2a)) [f̂((ω + b)/a) + f̂((ω − b)/a)]
5.  f(ax) sin(bx), a > 0, b ∈ R  |  (1/(2ai)) [f̂((ω + b)/a) − f̂((ω − b)/a)]
6.  x^n f(x), n ∈ N  |  (−i)^n f̂^(n)(ω)
7.  f^(n)(x), n ∈ N  |  (−i)^n ω^n f̂(ω)
8.  1/(x^2 + a^2), a > 0  |  (π/a) e^{−a|ω|}
9.  e^{−a^2 x^2}, a > 0  |  (√π/a) e^{−ω^2/(4a^2)}
10.  e^{−ia^2 x^2}, a > 0  |  (√π/a) e^{i(ω^2/(4a^2) − π/4)}
11.  e^{−a|x|}, a > 0  |  2a/(a^2 + ω^2)
12.  e^{−ax} h(x), a > 0  |  1/(a − iω)
13.  (sin x)/x for x ≠ 0, 1 for x = 0  |  π for ω ∈ (−1, 1); 0 for |ω| > 1; π/2 for ω = ±1
14.  (a − ix)^{−ν}, a, ν > 0  |  (2π/Γ(ν)) ω^{ν−1} e^{−aω} for ω > 0; 0 for ω < 0

Table A.1: Fourier Transforms
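Entries of the table can be spot-checked by numeric quadrature of the defining integral; a crude Python sketch for entry 8 (Riemann sum on a truncated interval — the truncation length and step size are ad-hoc choices):

```python
import math

# Real part of the transform integral of f with kernel e^{i w x},
# approximated on [-X, X]; for entry 8 the exact value is (pi/a) e^{-a|w|}.
def fourier_re(f, w, X=200.0, N=400000):
    h = 2 * X / N
    return h * sum(f(-X + k * h) * math.cos(w * (-X + k * h)) for k in range(N + 1))

a, w = 1.0, 2.0
val = fourier_re(lambda x: 1 / (x**2 + a**2), w)
assert abs(val - (math.pi / a) * math.exp(-a * abs(w))) < 1e-3
print("ok")
```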
A.2 Cosine Fourier Transform

No.  f(x)  |  Fc(f)(ω)

1.  f(ax), a > 0  |  (1/a) Fc(f)(ω/a)
2.  f(ax) cos(bx), a, b > 0  |  (1/(2a)) [Fc(f)((ω + b)/a) + Fc(f)((ω − b)/a)]
3.  f(ax) sin(bx), a, b > 0  |  (1/(2a)) [Fs(f)((ω + b)/a) − Fs(f)((ω − b)/a)]
4.  x^{2n} f(x), n ∈ N*  |  (−1)^n (Fc(f))^(2n)(ω)
5.  x^{2n+1} f(x), n ∈ N*  |  (−1)^n (Fs(f))^(2n+1)(ω)
6.  1 for 0 < x < a, 0 for x > a  |  (1/ω) sin(aω)
7.  1/√x, x > 0  |  √(π/(2ω))
8.  1/√x for 0 < x < 1, 0 for x > 1  |  (1/√ω) ∫_0^ω (cos t/√t) dt
9.  0 for x ∈ (0, a), 1/√(x − a) for x > a  |  √(π/(2ω)) [cos(aω) − sin(aω)]
10.  1/(x^2 + a^2), a > 0  |  (π/(2a)) e^{−aω}
11.  x^{−ν}, 0 < ν < 1  |  πω^{ν−1}/(2Γ(ν) cos(πν/2))
12.  e^{−ax}, a > 0  |  a/(ω^2 + a^2)
13.  e^{−ax^2}, a > 0  |  (1/2)√(π/a) e^{−ω^2/(4a)}
14.  (1/x) sin(ax), a > 0  |  π/2 for ω < a; π/4 for ω = a; 0 for ω > a
15.  ((sin x)/x) e^{−x}  |  (1/2) arctan(2/ω^2)
16.  (1 − cos(ax))/x^2, a > 0  |  (π/2)(a − ω) for 0 < ω < a; 0 for ω > a

Table A.2: Cosine Fourier Transforms
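A similar quadrature spot check for entries 10 and 12 of the cosine table (Python sketch, midpoint rule on a truncated half-line; the truncation and step are ad-hoc choices):

```python
import math

# Fc(f)(w) = integral over (0, inf) of f(x) cos(wx) dx, midpoint rule.
def fc(f, w, X=200.0, N=400000):
    h = X / N
    return h * sum(f((k + 0.5) * h) * math.cos(w * (k + 0.5) * h) for k in range(N))

a, w = 1.0, 2.0
assert abs(fc(lambda x: 1 / (x**2 + a**2), w) - (math.pi / (2 * a)) * math.exp(-a * w)) < 1e-3
assert abs(fc(lambda x: math.exp(-a * x), w) - a / (w**2 + a**2)) < 1e-6
print("ok")
```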
A.3 Sine Fourier Transform

No.  f(x)  |  Fs(f)(ω)

1.  f(ax), a > 0  |  (1/a) Fs(f)(ω/a)
2.  f(ax) cos(bx), a, b > 0  |  (1/(2a)) [Fs(f)((ω + b)/a) + Fs(f)((ω − b)/a)]
3.  f(ax) sin(bx), a, b > 0  |  (1/(2a)) [Fc(f)((ω − b)/a) − Fc(f)((ω + b)/a)]
4.  x^{2n} f(x), n ∈ N*  |  (−1)^n (Fs(f))^(2n)(ω)
5.  x^{2n+1} f(x), n ∈ N*  |  (−1)^{n+1} (Fc(f))^(2n+1)(ω)
6.  1 for 0 < x < a, 0 for x > a  |  (1/ω) [1 − cos(aω)]
7.  1/√x, x > 0  |  √(π/(2ω))
8.  1/√x for 0 < x < 1, 0 for x > 1  |  (1/√ω) ∫_0^ω (sin t/√t) dt
9.  0 for x ∈ (0, a), 1/√(x − a) for x > a  |  √(π/(2ω)) [cos(aω) + sin(aω)]
10.  x/(x^2 + a^2), a > 0  |  (π/2) e^{−aω}
11.  x^{−ν}, 0 < ν < 1  |  ω^{ν−1} Γ(1 − ν) cos(πν/2)
12.  e^{−ax}, a > 0  |  ω/(ω^2 + a^2)
13.  (1/x) sin(ax), a > 0  |  (1/2) ln((ω + a)/(ω − a))
14.  ((sin x)/x) e^{−x}  |  (1/4) ln(((ω + 1)^2 + 1)/((ω − 1)^2 + 1))
15.  (1 − cos(ax))/x^2, a > 0  |  (ω/2) ln|(ω^2 − a^2)/ω^2| + (a/2) ln|(ω + a)/(ω − a)|

Table A.3: Sine Fourier Transforms
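And likewise for entries 10 and 12 of the sine table (Python sketch; the slowly decaying integrand of entry 10 needs a longer truncation interval):

```python
import math

# Fs(f)(w) = integral over (0, inf) of f(x) sin(wx) dx, midpoint rule.
def fs(f, w, X=1000.0, N=500000):
    h = X / N
    return h * sum(f((k + 0.5) * h) * math.sin(w * (k + 0.5) * h) for k in range(N))

a, w = 1.0, 2.0
assert abs(fs(lambda x: x / (x**2 + a**2), w) - (math.pi / 2) * math.exp(-a * w)) < 2e-3
assert abs(fs(lambda x: math.exp(-a * x), w) - w / (w**2 + a**2)) < 1e-5
print("ok")
```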
A.4 Laplace Transform

No.  f(t)  |  F(s)

1.  h(t)  |  1/s
2.  h(t − a), a > 0  |  e^{−as}/s
3.  e^{at}, a ∈ C  |  1/(s − a)
4.  sin(ωt), ω ∈ C  |  ω/(s^2 + ω^2)
5.  cos(ωt), ω ∈ C  |  s/(s^2 + ω^2)
6.  sinh(ωt), ω ∈ C  |  ω/(s^2 − ω^2)
7.  cosh(ωt), ω ∈ C  |  s/(s^2 − ω^2)
8.  t^α, α > −1  |  Γ(α + 1)/s^{α+1}
9.  t^n, n ∈ N  |  n!/s^{n+1}
10.  t^n e^{at}, n ∈ N, a ∈ C  |  n!/(s − a)^{n+1}
11.  t sin(ωt), ω ∈ C  |  2ωs/(s^2 + ω^2)^2
12.  t cos(ωt), ω ∈ C  |  (s^2 − ω^2)/(s^2 + ω^2)^2
13.  e^{at} sin(ωt), a, ω ∈ C  |  ω/((s − a)^2 + ω^2)
14.  e^{at} cos(ωt), a, ω ∈ C  |  (s − a)/((s − a)^2 + ω^2)
15.  δ(t)  |  1
16.  δ^(n)(t), n ∈ N  |  s^n
17.  δ(t − t_0), t_0 > 0  |  e^{−t_0 s}
18.  1/√(πt)  |  1/√s
19.  sin(ωt)/t, ω ∈ C  |  π/2 − arctan(s/ω)
20.  (e^{at} − e^{bt})/t, a, b ∈ C, a ≠ b  |  ln((s − b)/(s − a))
21.  (2/t)(cos(at) − cos(bt)), a, b ∈ C, a ≠ b  |  ln((s^2 + b^2)/(s^2 + a^2))

Table A.4: Laplace Transforms
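Laplace entries can be spot-checked the same way; a Python sketch for entries 11 and 19 (midpoint rule with ad-hoc truncation of the integration interval):

```python
import math

# F(s) = integral over (0, inf) of f(t) e^{-st} dt, midpoint rule.
def laplace(f, s, T=60.0, N=600000):
    h = T / N
    return h * sum(f((k + 0.5) * h) * math.exp(-s * (k + 0.5) * h) for k in range(N))

s, w = 1.0, 2.0
assert abs(laplace(lambda t: t * math.sin(w * t), s) - 2 * w * s / (s**2 + w**2)**2) < 1e-5
assert abs(laplace(lambda t: math.sin(w * t) / t, s) - (math.pi / 2 - math.atan(s / w))) < 1e-5
print("ok")
```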
A.5 Z Transform

No.  f(t)  |  F*(z)

1.  δ(t)  |  1
2.  δ(t − n)  |  z^{−n}
3.  u(t)  |  z/(z − 1)
4.  a^t, a ∈ C  |  z/(z − a)
5.  e^{λt}, λ ∈ C  |  z/(z − e^λ)
6.  sin(ωt), ω ∈ R  |  z sin ω/(z^2 − 2z cos ω + 1)
7.  cos(ωt), ω ∈ R  |  z(z − cos ω)/(z^2 − 2z cos ω + 1)
8.  sinh(ωt), ω ∈ R  |  z sinh ω/(z^2 − 2z cosh ω + 1)
9.  cosh(ωt), ω ∈ R  |  z(z − cosh ω)/(z^2 − 2z cosh ω + 1)
10.  a^t sin(ωt), a ∈ C*, ω ∈ R  |  za sin ω/(z^2 − 2az cos ω + a^2)
11.  a^t cos(ωt), a ∈ C*, ω ∈ R  |  z(z − a cos ω)/(z^2 − 2az cos ω + a^2)
12.  t  |  z/(z − 1)^2
13.  t^2  |  z(z + 1)/(z − 1)^3
14.  t^3  |  z(z^2 + 4z + 1)/(z − 1)^4
15.  t a^t  |  az/(z − a)^2
16.  t sin(ωt), ω ∈ R  |  z(z^2 − 1) sin ω/(z^2 − 2z cos ω + 1)^2
17.  t cos(ωt), ω ∈ R  |  z((z^2 + 1) cos ω − 2z)/(z^2 − 2z cos ω + 1)^2
18.  1/t, t ∈ {1, 2, . . .}  |  ln(z/(z − 1))
19.  (−1)^{t−1}/t, t ∈ {1, 2, . . .}  |  ln(1 + 1/z)
20.  a^{t−1}/t, t ∈ {1, 2, . . .}, a ∈ C*  |  (1/a) ln(z/(z − a))
21.  sin(ωt)/t, t ∈ {1, 2, . . .}, ω ∈ C  |  arctan(sin ω/(z − cos ω))
22.  a^t/t!, a ∈ C*  |  e^{a/z}

Table A.5: Z Transforms
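Finally, the series entries of the Z table can be checked directly from the definition; a Python sketch for entries 18 and 22:

```python
import math

# Direct series checks: sum of f(t) z^{-t} against the tabulated images.
z = 3.0
s18 = sum((1 / t) * z**(-t) for t in range(1, 400))
assert abs(s18 - math.log(z / (z - 1))) < 1e-12

a = 2.0
s22 = sum(a**t / math.factorial(t) * z**(-t) for t in range(100))
assert abs(s22 - math.exp(a / z)) < 1e-12
print("ok")
```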
Bibliography

[1] Abell, M., Braselton, J. P.: Differential Equations with MAPLE


V, Academic Press Professional, London, 1994.

[2] Ahmed, N. U.: Elements of Finite-Dimensional Systems and Control


Theory, Longman Scientific & Technical, London - New York, 1988.

[3] Balan, V., Pı̂rvan, M.: Matematici avansate pentru ingineri - Pro-
bleme date la Concursul Ştiinţific Studenţesc ”Traian Lalescu”, Mate-
matică, anii 2002-2014, Ed. Politehnica Press, Bucureşti, 2014.

[4] Breaz, N., Crăciun, M., Gaşpar, P., Miroiu, M., Paraschiv-
Munteanu, I.: Modelarea matematică prin MATLAB, Ed. StudIS,
Iaşi, 2013.

[5] Breaz, D., Suciu, N., Gaşpar, P., Barbu, G., Pı̂rvan, M.,
Prepeliţă, V., Breaz, N.: Transformări integrale şi funcţii com-
plexe cu aplicaţii ı̂n tehnică, Vol. 1 - Funcţii complexe cu aplicaţii ı̂n
tehnică, Ed. StudIS, Iaşi, 2013.

[6] Câşlaru, C., Prepeliţă, V., Drăguşin, C: Matematici Avansate.


Teorie şi aplicaţii, Ed. Fair Partners, Bucureşti, 2007.

[7] Davies, B.: Integral Transforms and Their Applications, Springer-


Verlag, New York, 1978.

[8] Doetsch, G.: Anleitung zum Praktischen Gebrauch der Laplace


Transformation und der Z-Transformation, Oldenbourg, München,
Wien, 1967.

[9] Drăguşin, L., Drăguşin, C., Radu, C.: Calcul diferenţial şi
ecuaţii diferenţiale, Ed. Du Style, Bucureşti, 1996.
[10] Drăguşin, C., Gavrilă, M.: Analiză matematică. Calcul diferen-
ţial, Ed. Matrix Rom, Bucureşti, 2007.

[11] Glaeske, H.-J., Prudnikov, A. P., Skòrnik, K. A.: Opera-


tional Calculus and Related Topics, Chapman & Hall/CRC, Boca Ra-
ton, 2006.

[12] Halanay, Aristide: Teoria calitativă a ecuaţiilor diferenţiale, Ed.


Academiei R.S.R., Bucureşti, 1963.

[13] Halanay, Aristide: Ecuaţii diferenţiale, Ed. Didactică şi Peda-


gogică, Bucureşti, 1972.

[14] Hiwarekar, A. P.: A New Method of Cryptography using Laplace


Transform, IJMA 3 (3) (2012), 1193-1197.

[15] Hiwarekar, A. P.: A New Method of Cryptography using Laplace


Transform of Hyperbolic Functions, IJMA 4 (2) (2013), 208-213.

[16] Jayanthi, Ch., Srinivas, V.: Mathematical Modelling for Cryptog-


raphy using Laplace Transform, IJMTT 65 (2) (2019), 10-15.

[17] Lyapunov, A. M.: The general problem of the stability of motion,


Int. J. Control 55 (1992), 531-773.

[18] Niţă, C., Năstăsescu, C., Brandiburu, M., Joiţa, D.: Culegere
de probleme pentru liceu - algebră - clasele IX-XII, Ed. Rotech Pro,
Bucureşti, 2004.

[19] Olariu, V., Prepeliţă, V.: Matematici speciale, Ed. Didactică şi
Pedagogică, Bucureşti, 1985.

[20] Olariu, V., Prepeliţă, V.: Teoria distribuţiilor, funcţii complexe


şi aplicaţii, Ed. Ştiinţifică şi Enciclopedică, Bucureşti, 1986.

[21] Olariu, V., Olteanu, O.: Analiză matematică, Ed. Semne, Bu-
cureşti, 1998.

[22] Petroşanu, D. M.: Matematici speciale: elemente teoretice şi


aplicaţii, Ed. Matrix Rom, Bucureşti, 2016.
[23] Pı̂rvan, M., Savu, I.: Matematici avansate pentru ingineri, Ed. Ma-
trix Rom, Bucureşti, 2021.

[24] Prepeliţă, V.: Calcul operaţional şi ecuaţii cu derivate parţiale,


Chapter 9 in Enciclopedie Matematică, eds. Iosifescu, M., Stănăşilă,
O., Ştefănoiu, D., Ed. AGIR, Bucureşti, 2010.

[25] Prepeliţă, V., Vasilache, T., Doroftei M.: Control Theory,


UPB Dep. Eng. Sci., Bucureşti, 1997.

[26] Prepeliţă, V., Pı̂rvan, M., Ioniţă, G. I.: Differential Systems


and Complex Analysis for Engineers, Ed. Matrix Rom, Bucureşti, 2018.

[27] Rudner, V., Nicolescu, C.: Probleme de matematici speciale, Ed.


Didactică şi Pedagogică, Bucureşti, 1982.

[28] Schiff, J. L.: The Laplace Transform. Theory and Applications,


Springer-Verlag, New York, 1999.

[29] Schoenstadt, A. L.: An Introduction to Fourier Analysis. Fourier


Series, Partial Differential Equations and Fourier Transforms, Notes
prepared for MA3139, Monterey, 2005. https://www.math.bgu.ac.
il/~leonid/ode_9171_files/Schoenstadt_Fourier_PDE.pdf

[30] Stănăşilă, N. O., Pı̂rvan, M., Olteanu, M. şi alţii: Teme


şi probleme pentru concursurile studenţeşti de matematică, Vol. 3 -
Concursuri Naţionale, Ed. StudIS, Iaşi, 2013.

[31] Stein, E., Shakarchi, R.: Princeton Lectures in Analysis I, Fourier


Analysis: An Introduction, Princeton University Press, Princeton and
Oxford, 2003.

[32] Storey, B. D.: Computing Fourier Series and Power Spectrum with
MATLAB. http://faculty.olin.edu/bstorey/Notes/Fourier.pdf

[33] Stroud, K. A., Booth, D. J.: Advanced Engineering Mathematics,


Palgrave Macmillan, Hampshire & New York, 2003.

[34] ***: Fourier Analysis and Synthesis. https://studylib.net/doc/


18585874/fourier-analysis-and-synthesis
Index

adder, 283 Laplace transform, 155, 156, 251


amplified, 283 Z transform, 267, 322
analysis equation, 22 Differentiation of the Original
angle between two vectors, 3 Fourier transform, 127
angular frequency, 26 Laplace transform, 154, 236
Differentiation of the Preimage
basis functions, 22 Fourier transform, 76, 134
black box representation, 282 Dirac’s delta function
Fourier transform, 127, 134, 232
cardinal sine, 72 Laplace transform, 240, 247, 248,
cipher, 191 253, 318
convergence of a Fourier series discrete convolution, 270
mean-square, 11 distance between two vectors, 3
pointwise, 12
uniform, 12 encryption, 191
Convolution exponential function
Fourier transform, 78, 134 Fourier transform, 124, 125, 133
Laplace transform, 160, 182 Laplace transform, 146, 148, 149,
Z transform, 270 232, 249
convolution, 78, 159 Z transform, 261, 311, 319
cosine function exponential order, 142
Laplace transform, 150, 233, 250
Z transform, 313, 321 Final Value
Laplace transform, 161
decryption, 191 Z transform, 273
delayer, 284 Fourier series
difference complex, 19, 20
equation, 278 cosine, 17
of a function, 266, 322 generalized, 6, 7
Differentiation of the Image sine, 18
Fourier transform, 77, 129 trigonometric, 8, 10, 11
Fourier series coefficients Z transform, 272
complex, 20 integers congruent modulo n, 192
generalized, 6, 7 integral
trigonometric, 11 Complex Fourier, 83
function Dirichlet, 72
absolutely integrable, 65, 142 Euler, 70, 82
even, 13, 17 Fourier, 66
Gamma, 148 Fourier for even functions, 84
odd, 14, 17 Fourier for odd functions, 85
original, 143, 258 Laplace, 142
periodic, 12, 13, 15, 16 Real Fourier, 84
piecewise continuous, 12, 142 Integration of the Image
standardized, 13, 83 Laplace transform, 158
transfer, 255 Z transform, 269
Integration of the Original
harmonically related exponentials, 22 Laplace transform, 157
Heaviside’s discrete step function Inversion Formula
Z transform, 260, 312, 313, 318, Fourier transform, 79
319
Heaviside’s step function jump discontinuity, 142
Fourier transform, 126, 127, 132,
kernel
133
Fourier, 67
Laplace transform, 145, 231, 232,
key, 191
239, 248, 251, 253
Kronecker’s function, 259
Hilbert space, 3
hyperbolic functions Linear control system
Laplace transform, 234, 250 discrete-time, 282
Z transform, 313, 321 time-invariant, 185
hyperbolic sine and cosine functions linear space, 2
Laplace transform, 150 Linearity
discrete Fourier transform, 93
image of a function Fourier transform, 74
Fourier transform, 67 Laplace transform, 149
Laplace transform, 142 Z transform, 261
Z transform, 258
index of growth, 143 matrix
Initial Value Fourier transform, 128
Laplace transform, 161 inverse Fourier transform, 130
Laplace transform, 237 Laplace transform, 152
transfer, 240, 255, 286 Z transform, 264
Z transform, 314 sequence
multiplier, 283 convergent, 3
fundamental, 3
norm of a vector, 3 signal
in the frequency domain, 67, 142,
orthogonal
258
system, 3, 8, 20
in the time domain, 67, 142, 258
vectors, 3
Similarity
Parseval’s Formula, 50, 81, 93 Fourier transform, 75
periodic functions Laplace transform, 151
Laplace transform, 162, 236 Z transform, 262
Plancherel’s Theorem, 81, 93 sine function
power function Laplace transform, 150, 233, 234,
Laplace transform, 147, 149, 250 250
Z transform, 310, 319 Z transform, 313, 320
power series spectrum
Laplace transform, 164 amplitude, 23
pre-Hilbert space, 2 frequency, 23
preimage signal, 23
of a Fourier transform, 67 state-space representation, 285
of a Laplace transform, 142 sum of a function, 268
Product summator, 283
Fourier transform, 80 synthesis equation, 22
Z transform, 271
text
proper rational function, 188
cipher, 191
pulse
plain, 191
duration, 25
Time Delay
period, 26
Fourier transform, 75
train, 25
Laplace transform, 151, 232, 237,
Realization problem, 189 248, 251
right periodic function Z transform, 263, 312, 318, 321
Z transform, 265 transform
RLC circuit, 189 Dirichlet, 258
discrete Fourier, 90, 136
Second Time Delay discrete Laplace, 258
fast Fourier, 94, 131
Fourier, 66, 67, 123, 132
Fourier cosine, 85, 135
Fourier sine, 85, 136
inverse Fourier, 128, 134
inverse Laplace, 168, 239, 252
inverse Z, 314, 322
Laplace, 142
Z, 309, 316
Z of a discrete function, 258
Z of an original function, 258
Translation
Fourier transform, 75, 93, 128, 130
Laplace transform, 153, 233, 249,
253

variable
input, 282
output, 282
state, 284
vector, 2
vector space, 2