Fourier Analysis and Integral Transformations With Applications in Engineering
The present book, entitled 'Fourier Analysis and Integral Transforms with Applications in Engineering', intends to be a useful tool mainly for students enrolled in a technical university, in both bachelor's and master's degree programs, but also for engineers involved in research.
The content is structured in four chapters and annexes, each chapter presenting definitions, properties and examples, and focusing on fully or partially solved exercises. If an exercise is not explicitly solved, then it contains hints and the final answer. An important part of the book consists of various applications in engineering, together with numerous MATLAB and Maple examples. All of this is meant to facilitate a good understanding of the theoretical notions.
Fourier analysis is based on the decomposition of periodic functions into a discrete sum of trigonometric or exponential functions with specific frequencies. It has multiple applications in electrical engineering, vibration analysis, acoustics, optics, signal processing, image processing, quantum mechanics, econometrics, etc.
The integral transforms covered in detail in this book are the Fourier, the Laplace and the Z transforms. Their definition is motivated by a plethora of problems that are difficult to solve in their original form in the time domain. These complicated problems become much easier in the frequency domain, where they are reduced to simple algebraic equations. The inverse transform then has the role of producing the solution in the initial time domain.
More information about the topics presented in this book can be found in [6], [22], [23] and [33]. For exercises and applications one can use [3], [30] and [27]; the first two references contain problems given at the 'Traian Lalescu' Mathematical Contest for Students.
For the use of MATLAB and Maple in mathematical modeling, see [4]
and [1], respectively.
The authors would like to express warm thanks to the referees who read the material and contributed to its improvement, and to all the colleagues for their precious suggestions and valuable observations. A very special thanks goes to Mihaela Pitiş for allowing us to use an example from her graduation thesis related to cryptography, and to Laurenţiu Toader for designing the figures. Both are former students of the Politehnica University of Bucharest.
Contents
Foreword i
1 Fourier Analysis 1
1.1 The pre-Hilbert space R . . . . . . . . . . . . . . . . . . . . . 2
1.2 Generalized Fourier Series . . . . . . . . . . . . . . . . . . . . 4
1.3 Trigonometric Fourier Series . . . . . . . . . . . . . . . . . . . 8
1.4 Convergence of the Fourier Series . . . . . . . . . . . . . . . . 11
1.5 Fourier Series of Even and Odd Functions . . . . . . . . . . . 13
1.6 Complex Fourier Series . . . . . . . . . . . . . . . . . . . . . . 19
1.7 Signal Fourier Series Representation . . . . . . . . . . . . . . . 22
1.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
1.9 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 52
1.10 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 62
2 Fourier Transform 65
2.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
2.2 Properties of the Fourier Transform . . . . . . . . . . . . . . . 74
2.3 The Inversion Formula . . . . . . . . . . . . . . . . . . . . . . 79
2.4 Fourier Integral . . . . . . . . . . . . . . . . . . . . . . . . . . 83
2.5 Discrete Fourier Transform (DFT) . . . . . . . . . . . . . . . . 89
2.6 Fast Fourier Transform (FFT) . . . . . . . . . . . . . . . . . . 94
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
2.8 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 123
2.8.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . 123
2.8.2 Inverse Fourier Transform . . . . . . . . . . . . . . . . 128
2.8.3 Fast Fourier Transform . . . . . . . . . . . . . . . . . . 131
2.9 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 132
2.9.1 Fourier Transform . . . . . . . . . . . . . . . . . . . . . 132
2.9.2 Inverse Fourier Transform . . . . . . . . . . . . . . . . 134
2.9.3 Fourier Cosine Transform . . . . . . . . . . . . . . . . 135
2.9.4 Fourier Sine Transform . . . . . . . . . . . . . . . . . . 136
2.9.5 Discrete Transforms . . . . . . . . . . . . . . . . . . . 136
4 Z Transform 257
4.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
4.2 Properties of the Z Transform . . . . . . . . . . . . . . . . . . 261
4.3 Determination of the Original . . . . . . . . . . . . . . . . . . 274
4.4 Applications of the Z Transform . . . . . . . . . . . . . . . . . 278
4.4.1 Difference Equations . . . . . . . . . . . . . . . . . . . 278
4.4.2 Discrete-time Control Systems . . . . . . . . . . . . . . 282
4.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
4.6 MATLAB Applications . . . . . . . . . . . . . . . . . . . . . . 309
4.6.1 Z Transform . . . . . . . . . . . . . . . . . . . . . . . 309
4.6.2 Inverse Z Transform . . . . . . . . . . . . . . . . . . . 314
4.7 Maple Applications . . . . . . . . . . . . . . . . . . . . . . . . 316
4.7.1 Z Transform . . . . . . . . . . . . . . . . . . . . . . . 316
4.7.2 Inverse Z Transform . . . . . . . . . . . . . . . . . . . 322
Bibliography 335
Index 338
Chapter 1
Fourier Analysis
Fourier analysis is the study of the way general functions may be repre-
sented or approximated by sums of simpler trigonometric functions. Fourier
analysis grew from the study of Fourier series, and is named after Jean-
Baptiste Joseph Fourier (1768–1830), who showed that representing a func-
tion as a sum of trigonometric functions greatly simplifies the study of heat
transfer. Fourier introduced the series for the purpose of solving the heat
equation in a metal plate. Although the original motivation was to solve
the heat equation, it later became obvious that the same techniques could
be applied to a wide array of mathematical and physical problems, espe-
cially those involving linear differential equations with constant coefficients,
for which the eigensolutions are sinusoids. Fourier series have many such
applications as in electrical engineering, vibration analysis, acoustics, optics,
signal processing, image processing, quantum mechanics, econometrics.
1.1 The pre-Hilbert space R
In order to introduce and study Fourier series, one needs a suitable structure, namely that of a pre-Hilbert space.
We recall that a vector space (or linear space) is a nonempty set V (whose elements are called vectors) on which one has defined two operations, addition and multiplication by scalars belonging to a field F, subject to eight axioms. These axioms must hold for any u, v, w ∈ V and α, β ∈ F, and are the following:
1. u + v = v + u (commutativity);
2. (u + v) + w = u + (v + w) (associativity);
3. there exists a zero vector 0 ∈ V such that v + 0 = v, for every v ∈ V;
4. for every v ∈ V there exists −v ∈ V such that v + (−v) = 0;
5. 1 · v = v, where 1 is the unit of F;
6. α(βv) = (αβ)v;
7. α(u + v) = αu + αv;
8. (α + β)v = αv + βv.
An inner product on V is a map (·, ·) : V × V → F which satisfies, for any u, v, w ∈ V and α, β ∈ F, the following axioms:
i. (v, v) ≥ 0; (v, v) = 0 ⇔ v = 0;
ii. (u, v) = (v, u)*, where z* denotes the complex conjugate of z (if F = R, then this becomes (u, v) = (v, u));
iii. (αu + βv, w) = α(u, w) + β(v, w).
The advantage of a pre-Hilbert structure is the possibility of defining various notions:

1. The norm of a vector: ‖v‖ = √(v, v) (see [21, pp. 54-55]);
2. The distance between two vectors: d(u, v) = ‖u − v‖;
3. The angle between two vectors, ∠(u, v), defined by

cos ∠(u, v) = (u, v) / (‖u‖ · ‖v‖).
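These notions are easy to experiment with numerically. The following Python sketch (the book itself uses MATLAB and Maple; the helper names below are our own) approximates the inner product by a trapezoidal sum and derives the norm, the distance and the angle from it.

```python
import numpy as np

def inner(f, g, a, b, n=20000):
    # trapezoidal approximation of (f, g) = integral of f(x) g(x) over [a, b]
    x = np.linspace(a, b, n + 1)
    y = f(x) * g(x)
    return float(np.sum(y[:-1] + y[1:]) * (b - a) / (2 * n))

def norm(f, a, b):
    # ||f|| = sqrt((f, f))
    return np.sqrt(inner(f, f, a, b))

def dist(f, g, a, b):
    # d(f, g) = ||f - g||
    return norm(lambda x: f(x) - g(x), a, b)

def cos_angle(f, g, a, b):
    # cosine of the angle between f and g
    return inner(f, g, a, b) / (norm(f, a, b) * norm(g, a, b))

# sin and cos are orthogonal on [0, 2*pi]; a function makes a zero angle with itself
print(cos_angle(np.sin, np.cos, 0, 2 * np.pi))  # close to 0
print(cos_angle(np.sin, np.sin, 0, 2 * np.pi))  # close to 1
```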
1.2 Generalized Fourier Series
A. Case F = R
In what follows, R(a, b) denotes the set of Riemann integrable functions f : [a, b] → R.
Proposition 1.2.1. R(a, b) is a real vector space with the following opera-
tions:
1. addition (+) : for f, g ∈ R(a, b), f + g ∈ R(a, b), where (f + g)(x) =
f (x) + g(x), ∀x ∈ [a, b];
2. multiplication by a real scalar (·) : for f ∈ R(a, b) and α ∈ R, α · f ∈
R(a, b), where (α · f )(x) = αf (x), ∀x ∈ [a, b].
Proof. One can easily verify the axioms of a vector space for R(a, b).
Remark 1.2.2. In the space R(a, b), f = g if f(x) = g(x) at all common continuity points x of f and g; in this case, ∫_a^b f(x) dx = ∫_a^b g(x) dx. Therefore, f = 0 if f(x) = 0 at all continuity points x of f, and in this case ∫_a^b f(x) dx = 0. Conversely, if f(x) ≥ 0 at all continuity points x of f and ∫_a^b f(x) dx = 0, then f = 0.
We also make the observation that f ∈ R(a, b) if and only if f is bounded and continuous almost everywhere (which means that the Lebesgue measure of the set of discontinuity points of f is 0, i.e. for every ε > 0 there exists a sequence of intervals whose union includes the set of discontinuity points of f and such that the sum of the series of the lengths of these intervals is smaller than ε). For further details one can consult [21].
Proposition 1.2.3. R(a, b) is a real pre-Hilbert space with the inner product given by

(f, g) = ∫_a^b f(x) g(x) dx.    (1.1)
Proof. Let us verify the axioms of the inner product (see Definition 1.1.1) for any f, g, h ∈ R(a, b) and α, β ∈ R.

i. (f, f) = ∫_a^b f²(x) dx ≥ 0, since f²(x) ≥ 0 for every x ∈ [a, b]. If f = 0, it follows that f²(x) = 0 at all its continuity points x; hence (f, f) = ∫_a^b f²(x) dx = 0. Conversely, if ∫_a^b f²(x) dx = 0, it follows that f²(x) = 0 at all continuity points x of f; hence f = 0 (see Remark 1.2.2);

ii. (f, g) = ∫_a^b f(x)g(x) dx = ∫_a^b g(x)f(x) dx = (g, f), since the multiplication of real numbers is commutative.

In particular, the angle between two functions is given by

cos ∠(f, g) = (f, g)/(‖f‖ · ‖g‖) = ∫_a^b f(x)g(x) dx / ( √(∫_a^b f²(x) dx) · √(∫_a^b g²(x) dx) ).
Main Problem of Fourier Analysis
The series (1.3) is called the generalized Fourier series of f, and the c_n are its generalized Fourier coefficients.
Due to orthogonality, the formal solution of the problem is very simple. Calculate the inner product (f, f_n) for an arbitrary index n, using the expression of f written as a series. We obtain that

(f, f_n) = ( Σ_{m=1}^∞ c_m f_m , f_n ) = Σ_{m=1}^∞ c_m (f_m, f_n) = c_n (f_n, f_n).
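The projection formula c_n = (f, f_n)/(f_n, f_n) can be illustrated numerically. A small Python sketch (our own illustration, not the book's code), using the orthogonal system {1, cos x, sin x} on [0, 2π], recovers the coefficients of f(x) = 3 + 4 sin x:

```python
import numpy as np

def inner(f, g, a, b, n=20000):
    # trapezoidal approximation of the inner product (1.1)
    x = np.linspace(a, b, n + 1)
    y = f(x) * g(x)
    return float(np.sum(y[:-1] + y[1:]) * (b - a) / (2 * n))

# a (finite piece of an) orthogonal system on [0, 2*pi]
system = [lambda x: np.ones_like(x), np.cos, np.sin]
f = lambda x: 3.0 + 4.0 * np.sin(x)

# generalized Fourier coefficients c_n = (f, f_n) / (f_n, f_n)
coeffs = [inner(f, fn, 0, 2 * np.pi) / inner(fn, fn, 0, 2 * np.pi) for fn in system]
print(coeffs)  # approximately [3, 0, 4]
```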
B. Case F = C
One denotes by R_C(a, b) the set of complex integrable functions on the interval [a, b] ⊂ R (see [5] or [20]). Thus,

R_C(a, b) = { f : [a, b] → C : ∫_a^b f(x) dx ∈ C }.
Similarly to the real case (F = R), one obtains the following results.
Proposition 1.2.4. R_C(a, b) is a complex vector space with the addition and the multiplication by scalars α ∈ C defined in a similar manner with those in Proposition 1.2.1. Moreover, R_C(a, b) is a complex pre-Hilbert space with the inner product

(f, g) = ∫_a^b f(x) g*(x) dx,

where z* denotes the complex conjugate of z.
Proof. We only prove the second axiom of the inner product, which in the complex case becomes (f, g) = (g, f)*. It is obviously true, since

(g, f)* = ( ∫_a^b g(x) f*(x) dx )* = ∫_a^b g*(x) f(x) dx = ∫_a^b f(x) g*(x) dx = (f, g).
It follows that the generalized Fourier series (1.3) has coefficients (see (1.4))

c_n = (f, f_n)/(f_n, f_n) = ∫_a^b f(x) f_n*(x) dx / ∫_a^b |f_n(x)|² dx.    (1.6)
1.3 Trigonometric Fourier Series
Consider the interval [a, b] = [0, T], T > 0. The existence of an orthogonal system is proven by the following result: the sequence of functions

1, cos ωx, sin ωx, cos 2ωx, sin 2ωx, . . . , cos nωx, sin nωx, . . . ,    (1.7)

is an orthogonal system in R(0, T), where ω = 2π/T.
Proof. Notice that 1 denotes the function f (x) = 1 = cos(0x), x ∈ [0, T ].
Let us calculate the inner products for (1.7). We obtain that

(1, 1) = ∫_0^T dx = x |_0^T = T,

i.e. the magnitude of the pulse signal f(x) = 1 is ‖1‖ = √T.
For n ≥ 0 and m ≥ 1 we have

(cos nωx, cos mωx) = ∫_0^T cos nωx cos mωx dx.

Using cos α cos β = (cos(α − β) + cos(α + β))/2, for n ≠ m this integral vanishes, since ω = 2π/T ⇒ ωT = 2π and sin(m ± n)2π = 0.
If n = m, then cos(m − n)ωx = cos 0 = 1. Hence,

(cos nωx, cos nωx) = (1/2) ( sin 2nωx/(2nω) |_0^T + x |_0^T ) = T/2.
Therefore,

(cos nωx, cos mωx) = { 0, n ≠ m;  T/2, n = m }

and ‖cos nωx‖ = √(T/2).
Similarly, since sin αx sin βx = (cos(α − β)x − cos(α + β)x)/2, one obtains, for n, m ≥ 1,

(sin nωx, sin mωx) = { 0, n ≠ m;  T/2, n = m }

and ‖sin nωx‖ = √(T/2).
Also, since sin αx cos βx = (sin(α + β)x + sin(α − β)x)/2, it follows that, for n ≥ 1 and m ≥ 0,

(sin nωx, cos mωx) = (1/2) ( ∫_0^T sin(n + m)ωx dx + ∫_0^T sin(n − m)ωx dx ) = 0.
Therefore, the inner product of any distinct functions in (1.7) is equal to
0. Hence, (1.7) is an orthogonal system of R(0, T ).
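This orthogonality can be confirmed numerically by computing the Gram matrix of the first few functions of (1.7); the sketch below in Python (our own, not from the book) shows that all off-diagonal inner products vanish, while the diagonal contains T, T/2, T/2, …:

```python
import numpy as np

T = 2.0
w = 2 * np.pi / T
x = np.linspace(0, T, 100001)
dx = T / (len(x) - 1)

def ip(u, v):
    # trapezoidal approximation of the inner product on [0, T]
    y = u * v
    return float(np.sum(y[:-1] + y[1:]) * dx / 2)

# 1, cos(wx), sin(wx), cos(2wx), sin(2wx), cos(3wx), sin(3wx)
funcs = [np.ones_like(x)] + [g(n * w * x) for n in (1, 2, 3) for g in (np.cos, np.sin)]
G = np.array([[ip(u, v) for v in funcs] for u in funcs])  # Gram matrix
print(np.round(G, 6))  # diagonal: T, T/2, ..., T/2; zeros elsewhere
```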
Since the orthogonal system (1.7) includes functions of the type cos and sin, the corresponding Fourier coefficients c_n will be denoted by two symbols, a_n and b_n, while the coefficient of the pulse function 1 will be denoted by a_0/2.
One associates to any function f ∈ R(0, T) the following trigonometric Fourier series, which corresponds to the generalized Fourier series (1.3):

f(x) ∼ a_0/2 + Σ_{n=1}^∞ (a_n cos nωx + b_n sin nωx).    (1.8)
and

b_n = (f(x), sin nωx)/(sin nωx, sin nωx) = ∫_0^T f(x) sin nωx dx / (T/2),

respectively.
In conclusion, the coefficients of the trigonometric Fourier series (1.8) are the following:

a_0 = (2/T) ∫_0^T f(x) dx,
a_n = (2/T) ∫_0^T f(x) cos nωx dx, ∀n ≥ 1,    (1.9)
b_n = (2/T) ∫_0^T f(x) sin nωx dx, ∀n ≥ 1.
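As a quick check of formulas (1.9), the following Python sketch (our own illustration) computes the coefficients of the sawtooth f(x) = x on [0, 2π); the classical result is a_0 = 2π, a_n = 0 and b_n = −2/n:

```python
import numpy as np

T = 2 * np.pi
w = 2 * np.pi / T  # = 1
x = np.linspace(0, T, 400001)
f = x.copy()       # the sawtooth f(x) = x on [0, T), extended periodically

def coef(g):
    # 2/T times the integral of f * g over one period (trapezoidal rule)
    y = f * g
    return 2 / T * float(np.sum(y[:-1] + y[1:]) * (x[1] - x[0]) / 2)

a0 = coef(np.ones_like(x))                        # classical value: 2*pi
b = [coef(np.sin(n * w * x)) for n in (1, 2, 3)]  # classical values: -2/n
print(a0, b)
```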
Remark 1.3.2. A usual period in many problems is T = 2π. In this case we have ω = 2π/T = 1 and the formula of the Fourier series (1.8) becomes

f(x) ∼ a_0/2 + Σ_{n=1}^∞ (a_n cos nx + b_n sin nx),    (1.10)

with coefficients

a_0 = (1/π) ∫_0^{2π} f(x) dx,
a_n = (1/π) ∫_0^{2π} f(x) cos nx dx, ∀n ≥ 1,    (1.11)
b_n = (1/π) ∫_0^{2π} f(x) sin nx dx, ∀n ≥ 1.
Using the definition of the norm in R(0, T), this means that

√( ∫_0^T (f(x) − S_N(x))² dx ) < ε.
It is a standard fact that a function f is continuous at a point x if and
only if f (x+) and f (x−) exist and f (x) = f (x−) = f (x+). In this case the
sum of the series becomes f (x). One obtains the following result.
Corollary 1.4.6. If the function f is periodic and continuous, then, for every x ∈ R, we have

f(x) = a_0/2 + Σ_{n=1}^∞ (a_n cos nωx + b_n sin nωx).    (1.13)
1.1) and the area bounded by the graph, the x-axis and the lines x = −a and x = 0 is equal to that corresponding to x = 0 and x = a (see Figure 1.2); hence,

∫_{−a}^{a} f(x) dx = 2 ∫_0^a f(x) dx.
and x = 0 is −A, where A is the area corresponding to x = 0 and x = a (see Figure 1.4); hence,

∫_{−a}^{a} f(x) dx = 0.
Proof. Consider a to be arbitrary, fixed for the entire proof. Then there exists k ∈ Z such that kT ∈ [a, a + T]. We distinguish between two cases:

1. if a = kT, by the change of variable y = x − kT, one obtains

∫_a^{a+T} f(x) dx = ∫_0^T f(y + kT) dy = ∫_0^T f(y) dy = ∫_0^T f(x) dx;
2. if a < kT (see Figure 1.5 and notice the equality of the two hatched areas, which correspond to the definite integrals below), then one writes

∫_a^{a+T} f(x) dx = ∫_a^{kT} f(x) dx + ∫_{kT}^{a+T} f(x) dx;

hence,

∫_a^{a+T} f(x) dx = ∫_{a+T}^{(k+1)T} f(x) dx + ∫_{kT}^{a+T} f(x) dx
                 = ∫_{kT}^{(k+1)T} f(x) dx
                 = ∫_0^T f(x) dx.
a ∈ R. For a = −T/2, the coefficients (1.9) become

a_0 = (2/T) ∫_{−T/2}^{T/2} f(x) dx,
a_n = (2/T) ∫_{−T/2}^{T/2} f(x) cos nωx dx, ∀n ≥ 1,    (1.14)
b_n = (2/T) ∫_{−T/2}^{T/2} f(x) sin nωx dx, ∀n ≥ 1.

If f is even, then f(x) cos nωx is even and f(x) sin nωx is odd, and one gets

a_0 = (4/T) ∫_0^{T/2} f(x) dx,
a_n = (4/T) ∫_0^{T/2} f(x) cos nωx dx, ∀n ≥ 1,    (1.15)
b_n = 0, ∀n ≥ 1.

If f is odd, then f(x) cos nωx is odd and f(x) sin nωx is even, and one gets

a_0 = a_n = 0, ∀n ≥ 1,
b_n = (4/T) ∫_0^{T/2} f(x) sin nωx dx, ∀n ≥ 1.    (1.17)
Finally one obtains the Fourier sine series (from (1.13))

f(x) = Σ_{n=1}^∞ b_n sin nωx.    (1.18)
Remark 1.5.2. In some situations, for instance if one wants to solve the boundary value problem for the wave equation or the heat equation, one needs to expand in a Fourier sine series a function f which represents the initial position of the wave or the initial temperature on a bar of length l.
In this case, the period is the length of the interval [−l, l], i.e. T = 2l, and ω = π/l. Therefore, the Fourier sine series expansion of the function f(x), x ∈ (0, l], will be the Fourier sine series (1.18) of f̃(x) restricted to the interval (0, l], and the Fourier coefficients b_n (1.17) will contain f(x), since f̃(x) = f(x) for x ∈ (0, T/2]. Since 4/T = 2/l and T/2 = l, we have

b_n = (2/l) ∫_0^l f(x) sin(nπx/l) dx    (1.19)

and

f(x) = Σ_{n=1}^∞ b_n sin(nπx/l).    (1.20)
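Formula (1.19) can be tried out on a concrete initial shape. The Python sketch below (our own; the shape f(x) = x(l − x), vanishing at both ends of the bar, is chosen purely for illustration) computes the coefficients b_n numerically and reconstructs f at the midpoint:

```python
import numpy as np

l = 1.0
xs = np.linspace(0, l, 100001)
f = xs * (l - xs)  # an illustrative initial shape, zero at both ends of the bar
dx = xs[1] - xs[0]

def bn(n):
    # b_n = (2/l) * integral of f(x) sin(n*pi*x/l) over [0, l], as in (1.19)
    y = f * np.sin(n * np.pi * xs / l)
    return 2 / l * float(np.sum(y[:-1] + y[1:]) * dx / 2)

# reconstruct f at the midpoint from the sine series (1.20)
N = 50
s = sum(bn(n) * np.sin(n * np.pi * 0.5 / l) for n in range(1, N + 1))
print(s, 0.5 * (l - 0.5))  # both close to 0.25
```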
Since e^{iθ} = cos θ + i sin θ, and cosine is even while sine is odd, the complex conjugate is (e^{iθ})* = cos θ − i sin θ = cos(−θ) + i sin(−θ) = e^{−iθ}. Then the inner product of functions from the sequence (1.21) is the following:

(e^{inωx}, e^{imωx}) = ∫_{−T/2}^{T/2} e^{inωx} e^{−imωx} dx = ∫_{−T/2}^{T/2} e^{i(n−m)ωx} dx.
If n ≠ m, then we get

(e^{inωx}, e^{imωx}) = e^{i(n−m)ωx}/(i(n−m)ω) |_{−T/2}^{T/2}
                    = (2/((n−m)ω)) · (e^{i(n−m)ωT/2} − e^{−i(n−m)ωT/2})/(2i)
                    = (2/((n−m)ω)) sin(n−m)π = 0,

since sin z = (e^{iz} − e^{−iz})/(2i) (Euler's formula) and ωT/2 = π.
If n = m, then we have

(e^{inωx}, e^{inωx}) = ∫_{−T/2}^{T/2} e^{i(n−n)ωx} dx = ∫_{−T/2}^{T/2} dx = x |_{−T/2}^{T/2} = T.
Connections between Real and Complex Fourier Series

It is obvious that R(−T/2, T/2) ⊂ R_C(−T/2, T/2), since R ⊂ C and a function f : [−T/2, T/2] → R can be considered as having complex values. Therefore, such a function can be written using both real and complex Fourier series expansions.
For n = 0 one obtains, from (1.25) and (1.9),

c_0 = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−i0ωx} dx = (1/T) ∫_{−T/2}^{T/2} f(x) dx = a_0/2.

For n ≥ 1, again by (1.25) and (1.9), and using the fact that e^{−iθ} = cos θ − i sin θ, we have

c_n = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−inωx} dx
   = (1/T) ∫_{−T/2}^{T/2} f(x) cos nωx dx − (i/T) ∫_{−T/2}^{T/2} f(x) sin nωx dx
   = a_n/2 − i b_n/2.

Similarly, for n ≥ 1, and since e^{iθ} = cos θ + i sin θ, one gets

c_{−n} = (1/T) ∫_{−T/2}^{T/2} f(x) e^{inωx} dx = a_n/2 + i b_n/2.

Hence, the real and the complex Fourier coefficients are related by the formulas

c_0 = a_0/2,
c_n = a_n/2 − i b_n/2,    (1.26)
c_{−n} = a_n/2 + i b_n/2.
Note that for a real function f we get c_{−n} = c_n*, the complex conjugate of c_n.
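Relations (1.26) are easy to verify numerically for any real signal. In the Python sketch below (our own; the test signal e^{cos x} is an arbitrary choice), a_n, b_n and c_n are all computed with the same trapezoidal rule:

```python
import numpy as np

T = 2 * np.pi
x = np.linspace(-T / 2, T / 2, 200001)
dx = x[1] - x[0]
f = np.exp(np.cos(x))  # an arbitrary real periodic test signal

def integ(y):
    # trapezoidal rule over one period (works for real and complex y)
    return np.sum(y[:-1] + y[1:]) * dx / 2

n = 3
an = 2 / T * integ(f * np.cos(n * x))
bn = 2 / T * integ(f * np.sin(n * x))
cn = 1 / T * integ(f * np.exp(-1j * n * x))
c_minus_n = 1 / T * integ(f * np.exp(1j * n * x))

print(cn, (an - 1j * bn) / 2)  # equal, as in (1.26)
print(c_minus_n, np.conj(cn))  # equal, since f is real
```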
In many problems it is more practical to compute the coefficients c_{−n} using the substitution z = e^{iωx} and the Residue Theorem ([26, Chapter 5]). This is due to the fact that in c_n this change of variable introduces a pole of order n; alternatively, one can use the substitution z = e^{−iωx}, both methods being rather complicated.
The Analysis and the Synthesis Equations Using the Fourier Series Representations of the Periodic Signal f

Writing each coefficient in polar form, c_n = A_n e^{iφ_n}, the nth term of the complex Fourier series becomes

c_n e^{inωx} = A_n e^{i(nωx + φ_n)}.
Hence, A_n = |c_n| represents the amplitude (a measure of the strength) and φ_n represents the phase of the frequency content of the signal at the frequency nω. By (AE), it follows that the Fourier coefficient c_n may be considered as a measure of the correlation of the signal f(x) and the signal e^{−inωx}. The set of coefficients {c_n : n ∈ Z} is called the signal spectrum. It can be represented as a frequency spectrum, with vertical spectral lines drawn between the points nω and c_n in the complex plane (see Figure 1.7).
Since 1/i = −i, one gets

f(x) = 2i e^{−i200πx} − 2i e^{i200πx}.

One identifies the coefficients of the exponentials in these two formulas and obtains

c_{−1} = 2i,  c_1 = −2i,  c_n = 0, n ∈ Z \ {−1, 1},
with amplitudes |c−1 | = |c1 | = 2 (see Figure 1.9a).
Method 2 (general). Using formula (AE) of the complex Fourier series, one gets

c_n = (1/T) ∫_{−T/2}^{T/2} f(x) e^{−inωx} dx
   = (1/0.01) ∫_{−0.005}^{0.005} (2/i) (e^{i200πx} − e^{−i200πx}) e^{−in200πx} dx
   = (200/i) ( ∫_{−0.005}^{0.005} e^{−i(n−1)200πx} dx − ∫_{−0.005}^{0.005} e^{−i(n+1)200πx} dx ).

Hence,

c_n = −200i ( −e^{−i(n−1)200πx}/(i(n−1)200π) |_{−0.005}^{0.005} + e^{−i(n+1)200πx}/(i(n+1)200π) |_{−0.005}^{0.005} ).

For n ≠ ±1, e^{−i(n−1)200πx} |_{−0.005}^{0.005} = e^{−i(n−1)π} − e^{i(n−1)π} = −2i sin(n−1)π = 0. Similarly, e^{−i(n+1)200πx} |_{−0.005}^{0.005} = 0. Hence, c_n = 0 for n ∈ Z \ {−1, 1}.
For n = −1, one has

c_{−1} = (200/i) ( ∫_{−0.005}^{0.005} e^{i400πx} dx − ∫_{−0.005}^{0.005} dx )
      = −200i ( 0 − x |_{−0.005}^{0.005} ) = 200i · 0.01 = 2i,
Solution. The pulse period is T = 1/F = 0.01 s and the angular frequency is ω = 2π/T = 200π rad/s.
On the interval [−T/2, T/2] the signal is

f(x) = { A, x ∈ [−d/2, d/2];  0, x ∈ [−T/2, −d/2) ∪ (d/2, T/2] }    (see Figure 1.10a).
Due to the fact that the function f and the coefficients c_n are real, for n ≥ 1, one gets

c_{−n} = c_n* = c_n = sin(0.4nπ)/(2nπ).

For n = 0, by (AE), one obtains

c_0 = (1/T) ∫_{−T/2}^{T/2} f(x) dx = (1/0.01) ∫_{−0.002}^{0.002} 0.5 dx = 50x |_{−0.002}^{0.002} = 0.2.
Remark 1.7.3. This example (Example 1.7.2) and the one that follows (Example 1.7.4) are also solved at the end of this chapter using MATLAB (see Examples 1.9.9 and 1.9.8, respectively). Note that Figures 1.10a and 1.10b below are the same as Figures 1.15 and 1.16, respectively.
Figure 1.10a: The Pulse Train
Example 1.7.4. Determine the synthesis of the pulse train from Example
1.7.2, up to the N th harmonic.
Hence,

f_N(x) = 0.2 + Σ_{n=1}^N (sin(0.4nπ)/(2nπ)) (e^{in200πx} + e^{−in200πx}),
so

f_N(x) = 0.2 + (1/π) Σ_{n=1}^N (1/n) sin(0.4nπ) cos(200nπx).
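The partial sums f_N can be evaluated directly. The Python sketch below (our own illustration of the synthesis above) shows the values f_N approaching the pulse amplitude A = 0.5 at the center of the pulse and 0 at the middle of the gap:

```python
import numpy as np

def fN(x, N):
    # partial synthesis of the pulse train: A = 0.5, d/T = 0.4, T = 0.01 s
    n = np.arange(1, N + 1)
    return 0.2 + (1 / np.pi) * np.sum(np.sin(0.4 * np.pi * n) / n
                                      * np.cos(200 * np.pi * n * x))

# center of the pulse (x = 0) and middle of the gap (x = T/2 = 0.005)
for N in (10, 100, 1000):
    print(N, fN(0.0, N), fN(0.005, N))
```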
1.8 Exercises
In what follows, the function f : R → R is a periodic one, with period T and ω = 2π/T.
E 1. Determine the trigonometric Fourier expansion of the following even functions:

a) f : [−1, 1) → R, f(x) = x²;

b) f : [−π, π) → R, f(x) = cos(x/2);

c) f : [−a, a) → R, f(x) = { x + a, x ∈ [−a, 0];  −x + a, x ∈ (0, a) }, a > 0.
Solution. Since all the functions are even, we are using formulas (1.15),

a_0 = (4/T) ∫_0^{T/2} f(x) dx,
a_n = (4/T) ∫_0^{T/2} f(x) cos nωx dx, ∀n ≥ 1,
b_n = 0, ∀n ≥ 1,

and the Fourier cosine series (1.16),

f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos nωx.
a) As T/2 = 1, it follows that T = 2 and ω = 2π/T = π. For n ≠ 0, one gets

a_n = (4/T) ∫_0^{T/2} f(x) cos(nωx) dx = 2 ∫_0^1 x² cos(nπx) dx.
Integrating by parts and using that sin(kπ) = 0 for every k ∈ Z, one has x² sin(nπx)/(nπ) |_0^1 = 0 and

a_n = −(4/(nπ)) ∫_0^1 x sin(nπx) dx.

A second integration by parts gives

a_n = (−4/(n²π²)) ( −(−1)^n + sin(nπx)/(nπ) |_0^1 ) = 4(−1)^n/(n²π²).
Also,

a_0 = (4/T) ∫_0^{T/2} f(x) dx = 2 ∫_0^1 x² dx = 2 · x³/3 |_0^1 = 2/3.

Therefore, the Fourier cosine series (1.16) is the following:

f(x) = 1/3 + Σ_{n=1}^∞ (4(−1)^n/(n²π²)) cos(nπx);
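The expansion of x² just obtained can be checked numerically; the Python sketch below (our own) evaluates partial sums of the series at a few points of [−1, 1]:

```python
import numpy as np

def S(x, N):
    # partial sum of the cosine series of f(x) = x**2 on [-1, 1)
    n = np.arange(1, N + 1)
    return 1 / 3 + np.sum(4 * (-1.0) ** n / (n ** 2 * np.pi ** 2)
                          * np.cos(n * np.pi * x))

for x in (0.0, 0.5, 1.0):
    print(x, S(x, 20000), x ** 2)  # the two columns agree
```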
b) In this case, T = 2π and ω = 2π/T = 1. For n ∈ N, one gets

a_n = (2/π) ∫_0^π cos(x/2) cos(nx) dx.
One method to solve this integral is to use integration by parts two times, until we reach the initial integral. An easier way to solve it is using the trigonometric identity cos A cos B = (cos(A + B) + cos(A − B))/2. Our solution
follows the latter method, so

a_n = (2/π) ∫_0^π ( cos(x/2 + nx) + cos(x/2 − nx) )/2 dx
   = (1/π) ( sin(x/2 + nx)/(1/2 + n) |_0^π + sin(x/2 − nx)/(1/2 − n) |_0^π )
   = (1/π) ( sin(π/2 + nπ)/(1/2 + n) + sin(π/2 − nπ)/(1/2 − n) )
   = (2/π) ( (−1)^n/(1 + 2n) + (−1)^n/(1 − 2n) ),

where we have used the fact that sin(kπ) = 0 and cos(kπ) = (−1)^k, for every k ∈ Z, and sin(π/2) = 1, cos(π/2) = 0. Finally, we obtain

a_n = (2/π) ( (−1)^n/(1 + 2n) + (−1)^n/(1 − 2n) ) = (2/π) · (−1)^n · (1 − 2n + 1 + 2n)/(1 − 4n²) = 4(−1)^n/(π(1 − 4n²)).
Replacing n with 0 in the expression of a_n, we get a_0 = 4/π. Therefore, the Fourier cosine series (1.16) is the following:

f(x) = 2/π + Σ_{n=1}^∞ (4(−1)^n/(π(1 − 4n²))) cos(nx);
c) The fact that the function f is even is not obvious, but if one makes the computations, one gets

f(−x) = { −x + a, x ∈ [0, a];  x + a, x ∈ (−a, 0) } = f(x),

so f is even. In this exercise, T = 2a and ω = 2π/T = π/a. For n ≠ 0, one obtains

a_n = (4/T) ∫_0^{T/2} f(x) cos(nπx/a) dx = (2/a) ∫_0^a (−x + a) cos(nπx/a) dx.
Integrating by parts, one gets

a_n = (2/a) ( (−x + a) · (a/(nπ)) sin(nπx/a) |_0^a + (a/(nπ)) ∫_0^a sin(nπx/a) dx )
   = (2/a) · (a/(nπ)) · ( −(a/(nπ)) cos(nπx/a) |_0^a )
   = (2a/(n²π²)) ( (−1)^{n+1} + 1 ).

For n = 0, one finds a_0 = (2/a) ∫_0^a f(x) dx = (2/a) ∫_0^a (−x + a) dx = a. It follows that the Fourier cosine series (1.16) is the following:

f(x) = a/2 + Σ_{n=1}^∞ (2a((−1)^{n+1} + 1)/(n²π²)) cos(nπx/a).
b) One can notice that, since sin²A = (1 − cos(2A))/2, we immediately find the finite expansion f(x) = 1/2 − (1/2) cos(2πx). By computing the coefficients with formulas (1.15), one gets

a_n = ∫_0^2 sin²(πx) cos(nπx/2) dx = ∫_0^2 ((1 − cos(2πx))/2) cos(nπx/2) dx
   = (1/2) ( ∫_0^2 cos(nπx/2) dx − ∫_0^2 cos(2πx) cos(nπx/2) dx ).

Using the formula of f(x), one gets a_n = 0 for n ≠ 0, 4, and

a_4 = ∫_0^2 sin²(πx) cos(2πx) dx = −1/2

and

a_0 = ∫_0^2 sin²(πx) dx = 1;
Solution. Since all the functions are odd, we are using formulas (1.17),

a_0 = a_n = 0, ∀n ≥ 1,
b_n = (4/T) ∫_0^{T/2} f(x) sin nωx dx, ∀n ≥ 1,

and the Fourier sine series (1.18),

f(x) = Σ_{n=1}^∞ b_n sin nωx.
a) Since T/2 = 2, it follows that T = 4 and ω = 2π/T = π/2. Applying formula (1.17), we get

b_n = ∫_0^2 a x sin(nπx/2) dx = a ∫_0^2 x sin(nπx/2) dx.
Integrating by parts and using that sin(kπ) = 0 and cos(kπ) = (−1)^k, for every k ∈ Z, one gets

b_n = a ( −x cos(nπx/2)/(nπ/2) |_0^2 + (2/(nπ)) ∫_0^2 cos(nπx/2) dx )
   = a ( −4(−1)^n/(nπ) + (2/(nπ)) · (2/(nπ)) sin(nπx/2) |_0^2 )
   = a · 4(−1)^{n+1}/(nπ).

It follows that the Fourier sine series is the following:

f(x) = a Σ_{n=1}^∞ (4(−1)^{n+1}/(nπ)) sin(nπx/2);
b) Recall that

sinh x = (e^x − e^{−x})/2 and cosh x = (e^x + e^{−x})/2.

It follows that (sinh x)' = cosh x and (cosh x)' = sinh x. The function sinh is an odd function, since sinh(−x) = (e^{−x} − e^x)/2 = −(e^x − e^{−x})/2 = −sinh x, for every x ∈ R.
Since T/2 = π, it follows that T = 2π and ω = 2π/T = 1. Therefore, for n ≥ 1, one gets b_n = (2/π) ∫_0^π sinh x sin(nx) dx. We will use integration by parts twice.
First one obtains

b_n = (2/π) ∫_0^π (cosh x)' sin(nx) dx
   = (2/π) ( cosh x sin(nx) |_0^π − n ∫_0^π cosh x cos(nx) dx )
   = −(2n/π) ∫_0^π cosh x cos(nx) dx.
Then

b_n = −(2n/π) ∫_0^π (sinh x)' cos(nx) dx
   = −(2n/π) ( sinh x cos(nx) |_0^π + n ∫_0^π sinh x sin(nx) dx )
   = −(2n/π) ( sinh π · (−1)^n + n (π/2) b_n )
   = −(2n/π) sinh π · (−1)^n − n² b_n.

In conclusion, (n² + 1) b_n = −(2n/π) sinh π · (−1)^n and

b_n = 2n(−1)^{n+1} sinh π / (π(n² + 1)).

The Fourier sine series (1.18) is the following:

f(x) = Σ_{n=1}^∞ (2n(−1)^{n+1} sinh π/(π(n² + 1))) sin(nx);
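The closed form of b_n can be compared against numerical integration; the following Python sketch (our own) does this for the first few coefficients:

```python
import numpy as np

xs = np.linspace(0, np.pi, 200001)
dx = xs[1] - xs[0]

results = []
for n in (1, 2, 3):
    # (2/pi) * integral of sinh(x) sin(nx) over [0, pi], trapezoidal rule
    y = np.sinh(xs) * np.sin(n * xs)
    numeric = 2 / np.pi * float(np.sum(y[:-1] + y[1:]) * dx / 2)
    closed = 2 * n * (-1.0) ** (n + 1) * np.sinh(np.pi) / (np.pi * (n ** 2 + 1))
    results.append((numeric, closed))
    print(n, numeric, closed)  # the two values agree
```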
c) Let us verify first that the function f is indeed odd. One obtains

f(−x) = { −1 + x, x ∈ (1, 3];  0, x ∈ [−1, 1];  1 + x, x ∈ [−3, −1) }
      = { −(1 − x), x ∈ (1, 3];  0, x ∈ [−1, 1];  −(−1 − x), x ∈ [−3, −1) } = −f(x),

so f is odd. Here T = 6 and ω = π/3, and

b_n = (2/3) ∫_0^3 f(x) sin(nπx/3) dx = (2/3) ( ∫_0^1 0 dx + ∫_1^3 (1 − x) sin(nπx/3) dx ).
Integrating by parts, one finds

b_n = (2/3) ( −(1 − x) (3/(nπ)) cos(nπx/3) |_1^3 − (3/(nπ)) ∫_1^3 cos(nπx/3) dx )
   = (2/(nπ)) ( 2(−1)^n − (3/(nπ)) sin(nπx/3) |_1^3 )
   = (2/(nπ)) ( 2(−1)^n + (3/(nπ)) sin(nπ/3) ).

The Fourier sine series (1.18) is the following:

f(x) = Σ_{n=1}^∞ (2/(nπ)) ( 2(−1)^n + (3/(nπ)) sin(nπ/3) ) sin(nπx/3).
Hence,

b_n = 16n(−1)^{n+1} / (π(4n² − 1)²),

so f(x) = Σ_{n=1}^∞ (16n(−1)^{n+1}/(π(4n² − 1)²)) sin(2nx);

b) One gets

b_n = 2(−1)^n ( 6/n³ − π²/n )

and f(x) = Σ_{n=1}^∞ 2(−1)^n ( 6/n³ − π²/n ) sin(nx);

c) One obtains

b_n = (2/π) ∫_0^π f(x) sin(nx) dx = −(2/π) ∫_{π/2}^π sin(nx) dx = (2/(nπ)) ( (−1)^n − cos(nπ/2) ),

and f(x) = Σ_{n=1}^∞ (2/(nπ)) ( (−1)^n − cos(nπ/2) ) sin(nx).
It follows that T = 4 and ω = π/2.
One may try to compute one by one the coefficients a_0, a_n and b_n, but the integrals are quite complicated. A simpler way is to compute a_n + i b_n. So, for n ≥ 0,

a_n + i b_n = (1/2) ∫_{−2}^2 x e^x ( cos(nπx/2) + i sin(nπx/2) ) dx
           = (1/2) ∫_{−2}^2 x e^x e^{inπx/2} dx = (1/2) ∫_{−2}^2 x e^{x + inπx/2} dx.

Integrating by parts and replacing e² + e^{−2} with 2 cosh 2 and e² − e^{−2} with 2 sinh 2, it follows that

a_n + i b_n = 4(−1)^n cosh 2/(2 + inπ) − 4(−1)^n sinh 2/(2 + inπ)²
           = 4(−1)^n ( (2 − inπ) cosh 2/(4 + n²π²) − (2 − inπ)² sinh 2/(4 + n²π²)² )
           = (4(−1)^n/(4 + n²π²)) ( (2 − inπ) cosh 2 − (4 − 4inπ − n²π²) sinh 2/(4 + n²π²) ),

so

a_n = (4(−1)^n/(4 + n²π²)) ( 2 cosh 2 − (4 − n²π²) sinh 2/(4 + n²π²) )

and

b_n = (4(−1)^n/(4 + n²π²)) ( −nπ cosh 2 + 4nπ sinh 2/(4 + n²π²) ).

For n = 0, we obtain a_0 = 2 cosh 2 − sinh 2;
b) When the function f is of the form R(sin x, cos x), it is convenient to compute a_n + i b_n, since the integral will be transformed into a complex contour integral that will be solved using residues (see [26, Chapter 5, Section 5.4]).
For n ≥ 0, one gets

a_n + i b_n = (1/π) ∫_{−π}^π ((sin x + 1)/(5 − 4 cos x)) (cos(nx) + i sin(nx)) dx
           = (1/π) ∫_{−π}^π ((sin x + 1)/(5 − 4 cos x)) e^{inx} dx.
With the substitution z = e^{ix}, for which sin x = (z² − 1)/(2iz), cos x = (z² + 1)/(2z) and dx = dz/(iz),

a_n + i b_n = (1/π) ∮_{|z|=1} ( (z² − 1)/(2iz) + 1 ) / ( 5 − 4(z² + 1)/(2z) ) · dz/(iz)
           = (1/(2π)) ∮_{|z|=1} (z + i)² z^{n−1} / (2z² − 5z + 2) dz.
For n ≥ 1, let us denote by g(z) the fraction (z + i)² z^{n−1}/(2z² − 5z + 2). The singular points of g are z₁ = 2 and z₂ = 1/2, both first order poles, but only z₂ is inside the circle |z| = 1. Therefore,

a_n + i b_n = (1/(2π)) · 2πi · res(g, 1/2)
           = i lim_{z→1/2} (z − 1/2) · (z + i)² z^{n−1}/(2(z − 2)(z − 1/2))
           = i (1/2)^{n−1} (1/4 − i/3) = (1/2)^{n−1} (1/3 + i/4).

It follows that a_n = (1/3) · (1/2)^{n−1} and b_n = (1/2)^{n+1}.
For n = 0, g(z) = (z + i)²/(z(2z² − 5z + 2)). Now the singular points of g are z₁ = 2, z₂ = 1/2 and z₃ = 0, all first order poles, but only z₂ and z₃ are inside the circle |z| = 1. Therefore, we get that

a_0 + i b_0 = (1/(2π)) · 2πi · ( res(g, 1/2) + res(g, 0) )
           = i ( −1/2 + (1/2 + i)²/(−3/2) ) = 2/3.
Answer. a) f(x) = 1/2 + Σ_{n≥1} ( ((−1)^n − 1)/(nπ) ) sin(nπx);

b) f(x) = 1/2 + Σ_{n≥1} ( ((−1)^n/(1 + n²)) cos(nx) + (n(−1)^n/(1 + n²)) sin(nx) );
c) One gets

a_n + i b_n = (1/π) ∫_{−π}^π e^{inx}/(2 + sin x) dx = (2/π) ∮_{|z|=1} z^n/(z² + 4iz − 1) dz
           = (2/π) · 2πi · res( z^n/(z² + 4iz − 1), i(−2 + √3) )
           = (2/√3) · i^n (−2 + √3)^n.

But i^n = (cos(π/2) + i sin(π/2))^n = cos(nπ/2) + i sin(nπ/2), so

f(x) = 1/√3 + (2/√3) Σ_{n≥1} (−2 + √3)^n ( cos(nπ/2) cos nx + sin(nπ/2) sin nx );
d) f(x) = 13/125 + Σ_{n≥1} (2/125) (2/3)^n (5n + 13) cos nx.
a) One gets

c_n = (1/2) ∫_{−1}^1 (x + e^x) e^{−inπx} dx
   = (1/2) ( ∫_{−1}^1 x e^{−inπx} dx + ∫_{−1}^1 e^{x−inπx} dx )
   = (1/2) ( x e^{−inπx}/(−inπ) |_{−1}^1 + ∫_{−1}^1 e^{−inπx}/(inπ) dx + e^{x−inπx}/(1 − inπ) |_{−1}^1 ).
Hence,

c_n = (1/2) ( (e^{−inπ} + e^{inπ})/(−inπ) − e^{−inπx}/(inπ)² |_{−1}^1 + (e^{1−inπ} − e^{−1+inπ})/(1 − inπ) ).

Since e^{inπ} = cos nπ + i sin nπ = (−1)^n = e^{−inπ}, it follows that, for n ≠ 0,

c_n = (1/2) ( −2(−1)^n/(inπ) + 2(−1)^n sinh 1/(1 − inπ) )
   = (−1)^n ( sinh 1 (1 + inπ)/(1 + n²π²) + i/(nπ) ),

and c_0 = (1/2) ∫_{−1}^1 (x + e^x) dx = sinh 1; so, using (1.24), we get

f(x) = sinh 1 + Σ_{n=−∞, n≠0}^{∞} (−1)^n ( sinh 1 (1 + inπ)/(1 + n²π²) + i/(nπ) ) e^{inπx};
b) One obtains

c_n = (1/2π) ∫_{−π}^π cosh x e^{−inx} dx = (1/2π) ∫_{−π}^π ((e^x + e^{−x})/2) e^{−inx} dx
   = (1/4π) ( ∫_{−π}^π e^{x−inx} dx + ∫_{−π}^π e^{−x−inx} dx )
   = (1/4π) ( e^{x−inx}/(1 − in) |_{−π}^π − e^{−x−inx}/(1 + in) |_{−π}^π )
   = (1/4π) ( (e^π(−1)^n − e^{−π}(−1)^n)/(1 − in) − (e^{−π}(−1)^n − e^π(−1)^n)/(1 + in) )
   = ((−1)^n/(4π)) ( 2 sinh π/(1 − in) + 2 sinh π/(1 + in) ) = (−1)^n sinh π/(π(1 + n²)).

In conclusion, the complex Fourier expansion of f is the following:

f(x) = Σ_{n=−∞}^{∞} ((−1)^n sinh π/(π(1 + n²))) e^{inx}.
W 4. Expand in complex Fourier series the following functions:

a) f : [−2, 2) → R, f(x) = { x, x ∈ [−2, 0);  0, x ∈ [0, 2) };

b) f : [−π/2, π/2) → R, f(x) = e^{ax}, a ∈ R.

Answer. a) For n ≠ 0, c_n = (−1)^n i/(nπ) + (1 − (−1)^n)/(n²π²), and c_0 = −1/2;

b) f(x) = Σ_{n=−∞}^{∞} ( 2(−1)^n (a + 2in) sinh(aπ/2)/(π(a² + 4n²)) ) e^{2inx}.
Now we use the formula sin a cos b = (sin(a + b) + sin(a − b))/2 (for the second integral) and we get

b_n = (1/π) ( ∫_0^π sin(nx) dx − (1/2) ∫_0^π sin(nx + 2x) dx − (1/2) ∫_0^π sin(nx − 2x) dx )
   = (1/π) ( −((−1)^n − 1)/n + cos((n + 2)x)/(2(n + 2)) |_0^π + cos((n − 2)x)/(2(n − 2)) |_0^π ).

We now need to impose the condition n ≠ 2, and the coefficient b_2 will be determined separately. So, for n ≠ 2,

b_n = (1/π) ( −((−1)^n − 1)/n + ((−1)^n − 1)/(2(n + 2)) + ((−1)^n − 1)/(2(n − 2)) ) = 4((−1)^n − 1)/(nπ(n² − 4)).
On the other hand,

b_2 = (2/π) ∫_0^π sin²x sin(2x) dx = (1/π) ( ∫_0^π sin(2x) dx − ∫_0^π cos(2x) sin(2x) dx )
   = (1/π) ( −cos(2x)/2 |_0^π − ∫_0^π sin(4x)/2 dx ) = (1/π) ( 0 + cos(4x)/8 |_0^π ) = 0.

It follows that the trigonometric Fourier sine series for f is the following:

f(x) = b_1 sin x + b_2 sin(2x) + Σ_{n≥3} b_n sin(nx)
    = (8/(3π)) sin x + Σ_{n≥3} ( 4((−1)^n − 1)/(nπ(n² − 4)) ) sin(nx);
b) In this exercise,

b_n = ∫_0^2 f(x) sin(nπx/2) dx = ∫_0^1 sin(nπx/2) dx − ∫_1^2 sin(nπx/2) dx
   = (2/(nπ)) ( 1 + (−1)^n − 2 cos(nπ/2) ),

so f(x) = Σ_{n=1}^∞ (2/(nπ)) ( 1 + (−1)^n − 2 cos(nπ/2) ) sin(nπx/2).
b) For n ∈ N* \ {2},

b_n = (1/(π(4 − n²))) ( 4 sin(nπ/2) + 2n ( (−1)^n + cos(nπ/2) ) ),

and b_2 = 1/2. Therefore,

f(x) = b_1 sin(x/2) + b_2 sin x + Σ_{n≥3} b_n sin(nx/2)
    = (2/(3π)) sin(x/2) + (1/2) sin x + Σ_{n≥3} (1/(π(4 − n²))) ( 4 sin(nπ/2) + 2n((−1)^n + cos(nπ/2)) ) sin(nx/2).
In this case, the period is the length of the interval [−l, l], i.e. T = 2l, and ω = π/l. Therefore, the Fourier cosine series expansion of the function f(x), x ∈ (0, l], will be the Fourier cosine series (1.16) of f̃(x) restricted to the interval (0, l], and the Fourier coefficients a_0 and a_n (1.15) will contain f(x), since f̃(x) = f(x) for x ∈ (0, T/2]. Since 4/T = 2/l and T/2 = l, one obtains

a_0 = (2/l) ∫_0^l f(x) dx,   a_n = (2/l) ∫_0^l f(x) cos(nπx/l) dx    (1.27)

and

f(x) = a_0/2 + Σ_{n=1}^∞ a_n cos(nπx/l).    (1.28)
a) One gets

a_n = (2/π) ∫_0^π (x + cos(2x)) cos(nx) dx
   = (2/π) ( ∫_0^π x cos(nx) dx + ∫_0^π cos(2x) cos(nx) dx )
   = (2/π) ( x sin(nx)/n |_0^π − ∫_0^π sin(nx)/n dx + ∫_0^π (cos(nx + 2x) + cos(nx − 2x))/2 dx )
   = (2/π) ( cos(nx)/n² |_0^π + sin(nx + 2x)/(2(n + 2)) |_0^π + sin(nx − 2x)/(2(n − 2)) |_0^π ).

Now one must impose the conditions for the denominators to be different from 0, thus n ≠ ±2 and n ≠ 0. Since n is a positive integer, one is left only with n ≠ 0 and n ≠ 2. It follows that

a_n = (2/π) ((−1)^n − 1)/n².

Also,

a_0 = (2/π) ∫_0^π (x + cos(2x)) dx = π,

and

a_2 = (2/π) ∫_0^π (x + cos(2x)) cos(2x) dx
   = (2/π) ( ∫_0^π x cos(2x) dx + ∫_0^π cos²(2x) dx )
   = (2/π) ( x sin(2x)/2 |_0^π − ∫_0^π sin(2x)/2 dx + ∫_0^π (1 + cos(4x))/2 dx ) = 1.
In conclusion,
$$f(x) = \frac{a_0}{2} + a_1\cos x + a_2\cos(2x) + \sum_{n\geq 3}a_n\cos(nx) = \frac{\pi}{2} - \frac{4}{\pi}\cos x + \cos(2x) + \sum_{n\geq 3}\frac{2}{\pi}\cdot\frac{(-1)^n-1}{n^2}\cos(nx);$$
b) One gets
$$a_n = \frac{1}{2}\int_0^4 f(x)\cos\frac{n\pi x}{4}\,dx = \frac{1}{2}\left[\int_0^1 x\cos\frac{n\pi x}{4}\,dx + \int_1^4 |x-3|\cos\frac{n\pi x}{4}\,dx\right]$$
$$= \frac{1}{2}\int_0^1 x\cos\frac{n\pi x}{4}\,dx + \frac{1}{2}\left[\int_1^3 (3-x)\cos\frac{n\pi x}{4}\,dx + \int_3^4 (x-3)\cos\frac{n\pi x}{4}\,dx\right].$$
$$f(x) = \frac{a_0}{2} + \sum_{n\geq 1}a_n\cos\frac{n\pi x}{4} = \frac{3}{4} + \sum_{n\geq 1}a_n\cos\frac{n\pi x}{4},$$
Answer. For both functions one uses formulas (1.27) and (1.28) from E 6.
a) $\displaystyle f(x) = \frac{1}{3} + \sum_{n\geq 1}\frac{4(-1)^{n+1}}{n^2\pi^2}\cos(n\pi x)$;
b) $\displaystyle a_n = \frac{1}{\pi}\int_{\pi}^{2\pi}\cos x\cos\frac{nx}{2}\,dx = \frac{2n}{\pi(n^2-4)}\sin\frac{n\pi}{2}$, for $n \neq 0, 2$, $a_2 = \dfrac{1}{2}$ and $a_0 = 0$. In conclusion,
$$f(x) = \frac{a_0}{2} + a_1\cos\frac{x}{2} + a_2\cos x + \sum_{n\geq 3}a_n\cos\frac{nx}{2} = -\frac{2}{3\pi}\cos\frac{x}{2} + \frac{1}{2}\cos x + \sum_{n\geq 3}\frac{2n}{\pi(n^2-4)}\sin\frac{n\pi}{2}\cos\frac{nx}{2}.$$
$$b_n = \frac{2}{\pi}\int_0^{\pi}x\sin(nx)\,dx = \frac{2}{\pi}\left[\frac{-x\cos(nx)}{n}\Big|_0^{\pi} + \int_0^{\pi}\frac{\cos(nx)}{n}\,dx\right] = \frac{2}{\pi}\left[\frac{-\pi(-1)^n}{n} + \frac{\sin(nx)}{n^2}\Big|_0^{\pi}\right] = \frac{2(-1)^{n+1}}{n}.$$
When one computes $f\left(\frac{\pi}{2}\right)$, one obtains
$$\frac{\pi}{2} = \sum_{n\geq 1}\frac{2(-1)^{n+1}}{n}\sin\frac{n\pi}{2} = \sum_{\substack{n=2k,\\ k\geq 1}}\frac{-2}{2k}\sin(k\pi) + \sum_{\substack{n=2k+1,\\ k\geq 0}}\frac{2}{2k+1}\sin\left(k\pi+\frac{\pi}{2}\right) = 0 + \sum_{k\geq 0}\frac{2}{2k+1}(-1)^k,$$
so
$$\sum_{n\geq 0}\frac{(-1)^n}{2n+1} = \frac{\pi}{4};$$
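The Leibniz series above converges slowly, with error of order 1/N after N terms; a short Python check (illustrative, not from the book):

```python
import math

# Partial sum of the Leibniz series sum_{n>=0} (-1)^n / (2n+1)
N = 100000
s = sum((-1)**n / (2*n + 1) for n in range(N))
# For an alternating series the error is below the first omitted term.
assert abs(s - math.pi/4) < 1/(2*N + 1)
```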
b) In this exercise
$$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}e^{ax}\,dx = \frac{1}{\pi}\cdot\frac{e^{ax}}{a}\Big|_{-\pi}^{\pi} = \frac{1}{\pi}\cdot\frac{e^{a\pi}-e^{-a\pi}}{a} = \frac{2\sinh(a\pi)}{\pi a},$$
and for $n \geq 1$, one obtains
$$a_n + ib_n = \frac{1}{\pi}\int_{-\pi}^{\pi}e^{ax}e^{inx}\,dx = \frac{1}{\pi}\cdot\frac{e^{ax+inx}}{a+in}\Big|_{-\pi}^{\pi} = \frac{1}{\pi}\cdot\frac{e^{a\pi+in\pi}-e^{-a\pi-in\pi}}{a+in}$$
$$= \frac{a-in}{\pi(a^2+n^2)}\left(e^{a\pi}(-1)^n - e^{-a\pi}(-1)^n\right) = \frac{(-1)^n(a-in)\,2\sinh(a\pi)}{\pi(a^2+n^2)}.$$
Hence,
$$a_n = \frac{(-1)^n\,2a\sinh(a\pi)}{\pi(a^2+n^2)} \quad\text{and}\quad b_n = \frac{(-1)^{n+1}\,2n\sinh(a\pi)}{\pi(a^2+n^2)}.$$
The Fourier trigonometric expansion of $f$ is
$$f(x) = \frac{\sinh(a\pi)}{\pi a} + \sum_{n\geq 1}\frac{(-1)^n\,2\sinh(a\pi)}{\pi(a^2+n^2)}\left(a\cos(nx) - n\sin(nx)\right),$$
for every $x \in (-\pi, \pi)$.
At $x = -\pi$ the periodic extension of $f$ has a jump: the one-sided limits are $e^{a\pi}$ (from the left) and $e^{-a\pi}$ (from the right), so the series converges there to their mean, $\cosh(a\pi)$. Evaluating the series at $x = -\pi$ gives
$$\cosh(a\pi) = \frac{\sinh(a\pi)}{\pi a} + \frac{2a\sinh(a\pi)}{\pi}\sum_{n\geq 1}\frac{1}{a^2+n^2}$$
and, in conclusion,
$$\sum_{n\geq 1}\frac{1}{a^2+n^2} = \frac{\pi a\coth(a\pi)-1}{2a^2}.$$
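This sum can be cross-checked against the classical identity $\sum_{n\geq 1} 1/(a^2+n^2) = (\pi a\coth(\pi a)-1)/(2a^2)$; a small Python sketch (the value a = 1.5 and the truncation are illustrative):

```python
import math

a = 1.5
N = 1_000_000
s = sum(1.0/(a*a + n*n) for n in range(1, N + 1))
# Classical closed form: (pi*a*coth(pi*a) - 1) / (2*a^2)
exact = (math.pi*a/math.tanh(math.pi*a) - 1)/(2*a*a)
# The tail behaves like 1/N, so the partial sum is within about 1e-6.
assert abs(s - exact) < 1e-5
```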
is equal to $\dfrac{1}{2} = -\dfrac{2}{\pi^2}\displaystyle\sum_{n\geq 1}\frac{(-1)^n-1}{n^2}$, so
$$\sum_{n\geq 1}\frac{(-1)^n-1}{n^2} = -\frac{\pi^2}{4};$$
b) $a_0 = 2$, $a_n = \dfrac{4((-1)^n-1)}{n^2\pi^2}$ and $b_n = \dfrac{4(-1)^n}{n^2}$, for $n \geq 1$. The trigonometric Fourier expansion of $f$ is
$$f(x) = 1 + \sum_{n\geq 1}\left[\frac{4((-1)^n-1)}{n^2\pi^2}\cos\frac{nx}{2} + \frac{4(-1)^n}{n^2}\sin\frac{nx}{2}\right].$$
When $x = 0$ we get that $1 + \sum_{n\geq 1}\dfrac{4((-1)^n-1)}{n^2\pi^2} = 0$, which implies that
$$1 + \sum_{\substack{n=2k,\\ k\geq 1}} 0 + \sum_{\substack{n=2k+1,\\ k\geq 0}}\frac{-8}{(2k+1)^2\pi^2} = 0.$$
In conclusion,
$$\sum_{n\geq 0}\frac{1}{(2n+1)^2} = \frac{\pi^2}{8}.$$
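A one-line numerical confirmation of this classical sum (a Python sketch, not part of the book):

```python
import math

# sum over odd squares: 1/1^2 + 1/3^2 + 1/5^2 + ... = pi^2 / 8
s = sum(1.0/(2*n + 1)**2 for n in range(200000))
assert abs(s - math.pi**2/8) < 1e-5   # the tail is about 1/(4N)
```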
$$\frac{a_0^2}{2} + \sum_{n\geq 1}\left(a_n^2 + b_n^2\right) = \frac{1}{l}\int_{-l}^{l}f(x)^2\,dx.$$
E 8. Find the sum of the following series, using the trigonometric Fourier expansion of the specified functions and Theorem 1.8.1 (Parseval's Formula):
a) $\displaystyle\sum_{n\geq 1}\frac{1}{n^2}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x$;
b) $\displaystyle\sum_{n\geq 0}\frac{1}{(2n+1)^4}$, $f(x) = \begin{cases} 0, & x \in [-\pi, 0) \\ x, & x \in [0, \pi). \end{cases}$
Solution. a) The function $f$ is odd, so $a_n = 0$, for $n \in \mathbb{N}$. One obtains
$$b_n = \frac{2}{\pi}\int_0^{\pi}x\sin(nx)\,dx = \frac{2}{\pi}\left[\frac{-x\cos(nx)}{n}\Big|_0^{\pi} + \int_0^{\pi}\frac{\cos(nx)}{n}\,dx\right] = \frac{2}{\pi}\left[\frac{-\pi(-1)^n}{n} + \frac{\sin(nx)}{n^2}\Big|_0^{\pi}\right] = \frac{2(-1)^{n+1}}{n}.$$
From Theorem 1.8.1 (Parseval's Formula), one gets
$$\frac{a_0^2}{2} + \sum_{n\geq 1}\left(a_n^2 + b_n^2\right) = \frac{1}{\pi}\int_{-\pi}^{\pi}x^2\,dx.$$
This implies that $\displaystyle\sum_{n\geq 1}\left[\frac{2(-1)^{n+1}}{n}\right]^2 = \frac{2\pi^2}{3}$, so
$$\sum_{n\geq 1}\frac{4}{n^2} = \frac{2\pi^2}{3} \quad\text{and}\quad \sum_{n\geq 1}\frac{1}{n^2} = \frac{\pi^2}{6};$$
b) In this exercise
$$a_0 = \frac{1}{\pi}\int_0^{\pi}x\,dx = \frac{\pi}{2}, \qquad a_n = \frac{1}{\pi}\int_0^{\pi}x\cos(nx)\,dx = \frac{1}{\pi}\cdot\frac{(-1)^n-1}{n^2}$$
and
$$b_n = \frac{1}{\pi}\int_0^{\pi}x\sin(nx)\,dx = \frac{(-1)^{n+1}}{n}, \quad\text{for } n \geq 1.$$
From Theorem 1.8.1 (Parseval's Formula), one obtains
$$\frac{\pi^2}{8} + \sum_{n\geq 1}\left[\frac{2(1-(-1)^n)}{\pi^2 n^4} + \frac{1}{n^2}\right] = \frac{1}{\pi}\int_0^{\pi}x^2\,dx,$$
which is equivalent to
$$\frac{\pi^2}{8} + \frac{2}{\pi^2}\sum_{n\geq 1}\frac{1-(-1)^n}{n^4} + \sum_{n\geq 1}\frac{1}{n^2} = \frac{\pi^2}{3}.$$
Using the fact that $\sum_{n\geq 1}\frac{1}{n^2} = \frac{\pi^2}{6}$ (from a)), we get that
$$\frac{2}{\pi^2}\sum_{n\geq 1}\frac{1-(-1)^n}{n^4} = \frac{\pi^2}{24} \iff \frac{2}{\pi^2}\left[\sum_{\substack{n=2k,\\ k\geq 1}} 0 + \sum_{\substack{n=2k+1,\\ k\geq 0}}\frac{2}{(2k+1)^4}\right] = \frac{\pi^2}{24}$$
and
$$\sum_{n\geq 0}\frac{1}{(2n+1)^4} = \frac{\pi^4}{96}.$$
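This converges fast, so it is easy to confirm numerically (a Python sketch, not part of the book):

```python
import math

# sum over odd fourth powers: 1/1^4 + 1/3^4 + ... = pi^4 / 96
s = sum(1.0/(2*n + 1)**4 for n in range(10000))
assert abs(s - math.pi**4/96) < 1e-10   # the tail is of order 1/N^3
```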
W 8. Find the sum of the following series, using the trigonometric Fourier expansion of the specified functions and Theorem 1.8.1 (Parseval's Formula):
a) $\displaystyle\sum_{n\geq 1}\frac{1}{n^4}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = x^2$;
b) $\displaystyle\sum_{n\geq 1}\frac{1}{(n^2+a^2)^2}$, $f : [-\pi, \pi) \to \mathbb{R}$, $f(x) = \cosh(ax)$, $a \neq 0$.
Answer. a) $\displaystyle\sum_{n\geq 1}\frac{1}{n^4} = \frac{\pi^4}{90}$;
b) Since $f$ is even, then $b_n = 0$, for $n \geq 1$. We obtain that $a_0 = \dfrac{2\sinh(a\pi)}{a\pi}$, $a_n = \dfrac{2a\sinh(a\pi)(-1)^n}{\pi(n^2+a^2)}$ and
$$\sum_{n\geq 1}\frac{1}{(n^2+a^2)^2} = \frac{a\pi\sinh(2a\pi) - 4\sinh^2(a\pi) + 2a^2\pi^2}{8a^4\sinh^2(a\pi)}.$$
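The closed form agrees with the classical identity $\sum_{n\geq 1} 1/(n^2+a^2)^2 = \pi\coth(\pi a)/(4a^3) + \pi^2/(4a^2\sinh^2(\pi a)) - 1/(2a^4)$, which a Python sketch can verify numerically (the value a = 1 is illustrative):

```python
import math

a = 1.0
s = sum(1.0/(n*n + a*a)**2 for n in range(1, 100000))
# Equivalent closed form via coth and csch^2:
exact = (math.pi/(4*a**3))/math.tanh(math.pi*a) \
        + math.pi**2/(4*a*a*math.sinh(math.pi*a)**2) - 1/(2*a**4)
assert abs(s - exact) < 1e-9   # the tail is of order 1/N^3
```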
Solution. The period is $T = 2\pi$; hence, $\omega = \dfrac{2\pi}{T} = 1$.
>> syms n positive;
syms x;
f = 1; % defines the signal f
g = f * cos(n*x);
h = f * sin(n*x);
a0 = int(f, x, 0, pi)/pi;
an = (int(g, x, 0, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, 0, pi)/pi;
The answers are the following:
a0 = 1;
bn = (2*sin((pi*n)/2)^2)/(n*pi).
We obtain
$$f(x) = \frac{1}{2} + \sum_{n=1}^{\infty}\frac{2\sin^2\frac{\pi n}{2}}{\pi n}\sin(nx).$$
a0 = int(f, x, -pi, pi)/pi;
an = (int(g, x, -pi, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, -pi, pi)/pi;
The answers are the following:
a0 = 2;
bn = 0.
We obtain
$$f(x) = \frac{a_0}{2} = 1.$$
Example 1.9.4. Determine the Fourier coefficients of the periodic function
f (x) = ex , x ∈ (−π, π) and explain the result.
Solution.
>> syms n positive;
syms x;
f = exp(x); % defines the signal f
g = f * cos(n*x);
h = f * sin(n*x);
a0 = int(f, x, -pi, pi)/pi;
an = (int(g, x, -pi, pi)/pi); % computes the Fourier coefficients
bn = int(h, x, -pi, pi)/pi;
The answers are the following:
a0 = (2* sinh(pi))/pi;
We can further simplify the results, using the equalities sin(nπ) = 0 and
cos(nπ) = (−1)n .
Example 1.9.5. Expand in Fourier series the periodic function $f(x) = \dfrac{\pi}{2} - x$, $x \in (0, \pi)$. Plot the $N$th truncated Fourier series.
Solution. The period is $T = \pi$; hence, $\omega = \dfrac{2\pi}{T} = 2$. One uses formulas of the coefficients (1.9).
>> syms n positive integer;
syms x;
f = pi/2 - x; % defines the signal f
g = f * cos(2*n*x);
h = f * sin(2*n*x);
a0 = 2*int(f, x, 0, pi)/pi;
an = 2*(int(g, x, 0, pi)/pi); % computes the Fourier coefficients
bn = 2*int(h, x, 0, pi)/pi;
The answers are the following:
a0 = 0;
Hence,
$$a_n = 0 \quad\text{and}\quad b_n = \frac{1}{n};$$
thus, one obtains
$$f(x) = \sum_{n=1}^{\infty}\frac{1}{n}\sin(2nx).$$
>> X = 10;
N = 10;
x = [0 : 4/X : X];
fs = zeros(1, numel(x));
for n = 1 : 1 : N
fs = fs + sin(2*n*x)/n;
end;
figure;
plot(x, fs);
grid on;
Figure 1.11a: The 10th Truncated Series
Above we have the plots of the 10th and the 1000th truncated Fourier
series (see Figures 1.11a and 1.11b, respectively).
f = x; % defines the signal f
g = f * sin(n*x);
bn = int(g, x, -pi, pi)/pi; % computes the Fourier coefficients
The answer is the following:
$$b_n = \frac{2(-1)^{n+1}}{n}.$$
Thus, we obtain
$$f(x) = \sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{n}\sin(nx).$$
Figure 1.12: The Signal and the 2nd Truncated Fourier Series
x = linspace(-pi, pi);
plot(x, fs); % plots the Nth truncated series
In Figure 1.12 we have the signal f (in blue) and the Nth truncated Fourier series for N = 2 (in red).
Example 1.9.7. Plot the periodic function $f(t) = \begin{cases} -1, & t \in (-1, 0) \\ 1, & t \in (0, 1) \end{cases}$ and its truncated Fourier series.
Solution. The period is $T = 2$; hence, $\omega = \dfrac{2\pi}{T} = \pi$. Since the function $f$ is odd (see Figure 1.13), we get that $a_0 = a_n = 0$, so one computes only $b_n$. One obtains $b_n = \dfrac{2}{n\pi}(1-(-1)^n)$. Thus, the Fourier series expansion is the following:
$$f(x) = \sum_{n=1}^{\infty}\frac{2}{n\pi}(1-(-1)^n)\sin(n\pi x).$$
Figure 1.13: The Graph of the Function and the 13th Truncated Fourier
Series
>> clf;
N = 13;
c = 0;
x = -4 : 0.01 : 4;
fs = c*ones(size(x));
for n = 1 : 2 : N
bn = 2*(1 - (-1)^n)/(n*pi);
fs = fs + bn*sin(n*pi*x);
end;
plot([-4 -3 -3 -2 -2 -1 -1 0 0 1 1 2 2 3 3 4], ...
[1 1 -1 -1 1 1 -1 -1 1 1 -1 -1 1 1 -1 -1]);
hold;
plot(x, fs);
xlabel('t (seconds)'); ylabel('y(t)');
Example 1.9.8. Write a MATLAB program to generate the synthesized pulse train from Example 1.7.4 composed of Nh harmonics, if there are 5 cycles in an array of 500 samples (see [34]).
Solution.
>> Nh = 10; Nc = 5; Ns = 500;
f(1 : Ns) = 0.2; j = 1 : Ns;
for n = 1 : Nh
x(j) = (sin(0.4*pi*n)/(pi*n))*cos(n*2*pi*Nc*j/Ns);
f = f + x;
plot(f); pause;
end;
Figure 1.14a: n = 1
Figure 1.14b: n = 2
Figure 1.14c: n = 5
Figure 1.14d: n = Nh = 10
Example 1.9.9. Write MATLAB programs to plot the pulse train and the
frequency spectrum analyzed in Example 1.7.2.
Solution.
The Pulse Train
x = [−18, −16, −16, −14, −14, −6, −6, −4, −4, 4, 4, 6, 6, 14, 14, 16, 16, 18];
y = [0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5, 0, 0, 0.5, 0.5];
line(x, y);
The Frequency Spectrum
>> figure;
n = linspace(0, 10, 11);
f = sin(0.4*pi*n)./(2*pi*n);
stem(n, f );
$$b_n = -\frac{2\left(-\sin(\pi n)\cos(\pi n) + 2\pi n\cos(\pi n)^2 - \pi n\right)}{\pi n^2}.$$
Using the equalities $\sin(n\pi) = 0$ and $\cos(n\pi) = (-1)^n$, it follows that the coefficients are $a_n = 0$ and $b_n = -\dfrac{2}{n}$.
Chapter 2
Fourier Transform
2.1 Definition
Definition 2.1.1. A function f : R → C is called absolutely integrable on R
if the integral of the absolute value of f on R is bounded, i.e.
$$\int_{-\infty}^{\infty}|f(x)|\,dx < \infty. \qquad (2.1)$$
One denotes by L1 the set of absolutely integrable (on R) functions. It
follows that L1 is a normed linear space with the following operations:
Proof. Since $|e^{i\omega x}| = 1$, one obtains
$$\left|\int_{-\infty}^{\infty}f(x)e^{i\omega x}\,dx\right| \leq \int_{-\infty}^{\infty}|f(x)|\left|e^{i\omega x}\right|dx = \int_{-\infty}^{\infty}|f(x)|\,dx < \infty,$$
for every $\omega \in \mathbb{R}$.
It follows that the function fb(ω) is defined and continuous for every ω ∈ R.
The function $\dfrac{e^{i\omega z}}{z^2+a^2}$ has a simple pole $z = ia$ in the domain bounded by $\Gamma$. Hence, one obtains from the Residue Theorem ([26, Theorem 5.1.1]) the value of $I$. We get that
$$I = 2\pi i\cdot\operatorname{res}\left(\frac{e^{i\omega z}}{z^2+a^2}, ia\right) = 2\pi i\,\frac{e^{i\omega z}}{(z^2+a^2)'}\Big|_{z=ia} = 2\pi i\,\frac{e^{i\omega\cdot ia}}{2ia} = \frac{\pi}{a}e^{-\omega a}.$$
Figure 2.2: The Closed Contour Γ = AB ∪ BCA
Figure 2.3: The Closed Contour Γ = BA ∪ ACB
Figure 2.4: The Roots of Q(x) in the Upper and Lower Half Plane
One obtains
$$\hat{f}(\omega) = \mathcal{F}\left[\frac{P}{Q}\right](\omega) = \begin{cases} \displaystyle 2\pi i\sum_{j=1}^{n}\operatorname{res}\left(\frac{P(z)}{Q(z)}e^{i\omega z}, a_j\right), & \omega \geq 0 \\[2mm] \displaystyle -2\pi i\sum_{j=1}^{n}\operatorname{res}\left(\frac{P(z)}{Q(z)}e^{i\omega z}, \bar{a}_j\right), & \omega < 0. \end{cases} \qquad (2.4)$$
Example 2.1.5. Compute $\mathcal{F}[e^{-a^2x^2}](\omega) = \displaystyle\int_{-\infty}^{\infty}e^{-a^2x^2}e^{i\omega x}\,dx$, for $a > 0$.
Solution. We will make use of Euler's integral $\displaystyle\int_{-\infty}^{\infty}e^{-t^2}\,dt = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$. Using the substitution $t = ax$, one obtains
$$\int_{-\infty}^{\infty}e^{-a^2x^2}\,dx = \frac{1}{a}\int_{-\infty}^{\infty}e^{-t^2}\,dt = \frac{\sqrt{\pi}}{a} < \infty.$$
Hence, $f(x) = e^{-a^2x^2}$ is absolutely integrable on $\mathbb{R}$.
It follows that $\mathcal{F}[e^{-a^2x^2}](\omega) = \displaystyle\int_{-\infty}^{\infty}e^{-(a^2x^2-i\omega x)}\,dx$. Using the fact that
$$a^2x^2 - i\omega x = (ax)^2 - 2ax\cdot\frac{i\omega}{2a} + \left(\frac{i\omega}{2a}\right)^2 - \left(\frac{i\omega}{2a}\right)^2 = \left(ax - \frac{i\omega}{2a}\right)^2 + \frac{\omega^2}{4a^2}$$
we get
$$\mathcal{F}[e^{-a^2x^2}](\omega) = \int_{-\infty}^{\infty}e^{-\left[\left(ax-\frac{i\omega}{2a}\right)^2 + \frac{\omega^2}{4a^2}\right]}\,dx = e^{-\frac{\omega^2}{4a^2}}\int_{-\infty}^{\infty}e^{-\left(ax-\frac{i\omega}{2a}\right)^2}\,dx.$$
Substitute now $ax - \dfrac{i\omega}{2a}$ by $z$. Then (see Figure 2.5)
$$I := \int_{-\infty}^{\infty}e^{-\left(ax-\frac{i\omega}{2a}\right)^2}\,dx = \frac{1}{a}\int_{AB}e^{-z^2}\,dz;$$
hence,
$$\int_{AB}e^{-z^2}\,dz = \int_{CD}e^{-z^2}\,dz = \int_{-\infty}^{\infty}e^{-z^2}\,dz = \sqrt{\pi}.$$
One obtains $\hat{f}(\omega) = e^{-\frac{\omega^2}{4a^2}}\,I = \dfrac{\sqrt{\pi}}{a}\,e^{-\frac{\omega^2}{4a^2}}$.
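This closed form is easy to corroborate numerically; the following Python sketch (not from the book; the values a = 1.3, ω = 0.7 and the truncation at |x| ≤ 12 are illustrative) approximates the defining integral with the trapezoid rule:

```python
import math

a, w = 1.3, 0.7
n, L = 200000, 12.0        # grid size and truncation; tails are negligible
h = 2*L/n
s = 0.0
for k in range(n + 1):
    x = -L + k*h
    wgt = 0.5 if k in (0, n) else 1.0
    # the sine (odd) part of e^{i w x} integrates to 0 against the even Gaussian
    s += wgt * math.exp(-a*a*x*x) * math.cos(w*x)
val = s*h
expected = math.sqrt(math.pi)/a * math.exp(-w*w/(4*a*a))
assert abs(val - expected) < 1e-10
```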
a
Remark 2.1.6. The absolute integrability is a sufficient condition for the
existence of the Fourier transform, but it is not necessary. Now we give a
more general definition. The Fourier transform of a function f : R → C is
the integral below, defined in the sense of the principal value, if it exists:
$$\hat{f}(\omega) = \mathcal{F}[f](\omega) = \int_{-\infty}^{\infty}f(x)e^{i\omega x}\,dx = \lim_{l\to\infty}\int_{-l}^{l}f(x)e^{i\omega x}\,dx.$$
Example 2.1.7. Consider the function sinc (the cardinal sine)
$$f(x) = \operatorname{sinc}(x) = \begin{cases} \dfrac{\sin x}{x}, & x \neq 0 \\ 1, & x = 0, \end{cases}$$
and compute its Fourier transform.
Solution. We will use the Dirichlet integral $\displaystyle\int_0^{\infty}\frac{\sin t}{t}\,dt = \frac{\pi}{2}$. Using the substitution $u = -t$, one obtains
$$\int_{-\infty}^{0}\frac{\sin t}{t}\,dt = -\int_0^{\infty}\frac{\sin(-u)}{-u}\,(-\,du) = \int_0^{\infty}\frac{\sin u}{u}\,du = \frac{\pi}{2}.$$
Therefore, $\displaystyle\int_{-\infty}^{\infty}\frac{\sin t}{t}\,dt = \pi$, i.e. $f$ is integrable on $\mathbb{R}$ in the sense of the principal value (it is, however, not absolutely integrable).
Similarly, by the substitution $u = at$,
$$\int_0^{\infty}\frac{\sin(at)}{t}\,dt = \int_0^{\infty}\frac{\sin u}{\frac{u}{a}}\cdot\frac{du}{a} = \int_0^{\infty}\frac{\sin u}{u}\,du = \frac{\pi}{2}, \quad\text{if } a > 0,$$
and
$$\int_0^{\infty}\frac{\sin(at)}{t}\,dt = \int_0^{-\infty}\frac{\sin u}{u}\,du = -\int_0^{\infty}\frac{\sin u}{u}\,du = -\frac{\pi}{2}, \quad\text{if } a < 0.$$
Now we start computing the Fourier transform of $f$. We obtain the following:
$$\hat{f}(\omega) = \mathcal{F}[f](\omega) = \int_{-\infty}^{\infty}\frac{\sin x}{x}e^{i\omega x}\,dx = \lim_{l\to\infty}\int_{-l}^{l}\frac{\sin x}{x}e^{i\omega x}\,dx = \lim_{l\to\infty}\int_{-l}^{l}\frac{\sin x}{x}(\cos(\omega x)+i\sin(\omega x))\,dx$$
$$= \lim_{l\to\infty}\int_{-l}^{l}\frac{\sin x}{x}\cos(\omega x)\,dx + i\lim_{l\to\infty}\int_{-l}^{l}\frac{\sin x}{x}\sin(\omega x)\,dx.$$
As the function $\frac{\sin x}{x}\cos(\omega x)$ is even and the function $\frac{\sin x}{x}\sin(\omega x)$ is odd, it follows that
$$\hat{f}(\omega) = \lim_{l\to\infty}2\int_0^{l}\frac{\sin x}{x}\cos(\omega x)\,dx = 2\int_0^{\infty}\frac{\sin x\cos(\omega x)}{x}\,dx = \int_0^{\infty}\frac{\sin(1+\omega)x}{x}\,dx + \int_0^{\infty}\frac{\sin(1-\omega)x}{x}\,dx.$$
Let us denote $\int_0^{\infty}\frac{\sin(1+\omega)x}{x}\,dx$ by $I_1$ and $\int_0^{\infty}\frac{\sin(1-\omega)x}{x}\,dx$ by $I_2$. By the previous computations, we have the following situations:
- if $\omega < -1$, then $1+\omega < 0$ and $1-\omega > 0$, hence $I_1 = -\frac{\pi}{2}$, $I_2 = \frac{\pi}{2}$ and $\hat{f}(\omega) = 0$;
- if $\omega = -1$, then $I_1 = 0$ and $I_2 = \frac{\pi}{2}$, hence $\hat{f}(\omega) = \frac{\pi}{2}$;
- if $-1 < \omega < 1$, then $1+\omega > 0$ and $1-\omega > 0$, hence $I_1 = I_2 = \frac{\pi}{2}$ and $\hat{f}(\omega) = \pi$;
- if $\omega = 1$, then $I_1 = \frac{\pi}{2}$ and $I_2 = 0$, hence $\hat{f}(\omega) = \frac{\pi}{2}$;
- if $\omega > 1$, then $1+\omega > 0$ and $1-\omega < 0$, hence $I_1 = \frac{\pi}{2}$, $I_2 = -\frac{\pi}{2}$ and $\hat{f}(\omega) = 0$.
Therefore,
$$\hat{f}(\omega) = \begin{cases} \pi, & \omega \in (-1, 1) \\ 0, & |\omega| > 1 \\ \dfrac{\pi}{2}, & \omega = \pm 1. \end{cases}$$
The function f is not absolutely integrable and its Fourier transform has
discontinuities at ±1 (see Figure 2.7).
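The piecewise values can be checked numerically despite the slow, principal-value convergence; a Python sketch (illustrative; the truncation length and grid are loose, hence the coarse tolerances):

```python
import numpy as np

# Approximate 2 * int_0^L sinc(x) cos(w x) dx; the oscillatory tail beyond L
# contributes only O(1/L).
x = np.linspace(1e-8, 10000.0, 2_000_001)
h = x[1] - x[0]

def fhat(w):
    g = np.sin(x)/x * np.cos(w*x)
    return 2*h*(g.sum() - 0.5*(g[0] + g[-1]))

assert abs(fhat(0.5) - np.pi) < 1e-2   # inside (-1, 1) the transform is pi
assert abs(fhat(2.0)) < 1e-2           # outside [-1, 1] it vanishes
```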
Theorem 2.2.2 (Similarity or Change of the Time Scale). For every $f \in L^1$ and $a > 0$, one gets
$$\mathcal{F}[f(ax)](\omega) = \frac{1}{a}\hat{f}\left(\frac{\omega}{a}\right). \qquad (2.6)$$
Proof. By definition, one obtains
$$\mathcal{F}[f(ax)](\omega) = \int_{-\infty}^{\infty}f(ax)e^{i\omega x}\,dx.$$
Now we restore the signal using the substitution $t = ax$. It follows that
$$\mathcal{F}[f(ax)](\omega) = \int_{-\infty}^{\infty}f(t)e^{i\omega\frac{t}{a}}\,\frac{1}{a}\,dt = \frac{1}{a}\int_{-\infty}^{\infty}f(t)e^{i\frac{\omega}{a}t}\,dt = \frac{1}{a}\hat{f}\left(\frac{\omega}{a}\right).$$
Example 2.2.4. Determine $\mathcal{F}[f(x-3)](\omega)$ if $f(x) = \dfrac{1}{x^2+4}$.
Solution. By Example 2.1.4, the image of $f(x) = \dfrac{1}{x^2+4}$ is $\dfrac{\pi}{2}e^{-2|\omega|}$. Then
$$\mathcal{F}[f(x-3)](\omega) = \mathcal{F}\left[\frac{1}{(x-3)^2+4}\right](\omega) = \frac{\pi}{2}e^{-2|\omega|}e^{3i\omega}.$$
Proof. Using the definition of the Fourier transform, we get that
$$\mathcal{F}[e^{iax}f(x)](\omega) = \int_{-\infty}^{\infty}e^{iax}f(x)e^{i\omega x}\,dx = \int_{-\infty}^{\infty}e^{ix(\omega+a)}f(x)\,dx = \hat{f}(\omega+a).$$
Example 2.2.6. Compute $\mathcal{F}\left[\dfrac{e^{5ix}}{x^2+16}\right](\omega)$.
Solution. Again by Example 2.1.4, one obtains
$$\mathcal{F}\left[\frac{e^{i5x}}{x^2+16}\right](\omega) = \frac{\pi}{4}e^{-4|\omega+5|}.$$
= −iω fb(ω),
Proof. The statement will be proved by induction. Step n = 1 is true from
formula (2.9). Assume now that (2.10) is correct for n ∈ N∗ . We have to
show that formula (2.10) is true for n + 1. Using (2.9), one gets
Proof. By the Leibniz rule of differentiation under the integral sign, one obtains
$$[\hat{f}(\omega)]' = \left(\int_{-\infty}^{\infty}f(x)e^{i\omega x}\,dx\right)'_{\omega} = \int_{-\infty}^{\infty}f(x)\left(e^{i\omega x}\right)'_{\omega}\,dx$$
Definition 2.2.9. The convolution of two functions $f, g \in L^1$ is the function $f * g$ defined by
$$(f*g)(x) = \int_{-\infty}^{\infty}f(t)g(x-t)\,dt. \qquad (2.13)$$
It follows that
$$\int_{-\infty}^{\infty}|(f*g)(x)|\,dx \leq \int_{-\infty}^{\infty}|f(t)|\left(\int_{-\infty}^{\infty}|g(x-t)|\,dx\right)dt = \int_{-\infty}^{\infty}|f(t)|\,\|g\|_1\,dt = \|g\|_1\int_{-\infty}^{\infty}|f(t)|\,dt = \|g\|_1\|f\|_1 < \infty.$$
Hence, $\|f*g\|_1 = \displaystyle\int_{-\infty}^{\infty}|(f*g)(x)|\,dx < \infty$, i.e. $f*g \in L^1$.
Using the definition of the convolution,
$$\mathcal{F}[f*g](\omega) = \int_{-\infty}^{\infty}(f*g)(x)e^{i\omega x}\,dx = \int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty}f(t)g(x-t)\,dt\right)e^{i\omega x}\,dx,$$
From (2.7), one gets
$$\mathcal{F}[f*g](\omega) = \int_{-\infty}^{\infty}f(t)e^{i\omega t}\,\hat{g}(\omega)\,dt = \hat{g}(\omega)\int_{-\infty}^{\infty}f(t)e^{i\omega t}\,dt = \hat{g}(\omega)\hat{f}(\omega).$$
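The continuous convolution theorem has an exact discrete analogue that is easy to verify: the DFT of a circular convolution equals the pointwise product of the DFTs. A NumPy sketch (illustrative; `numpy.fft` uses the kernel $e^{-2\pi ikn/N}$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution computed directly ...
direct = np.array([sum(f[m]*g[(n - m) % N] for m in range(N)) for n in range(N)])
# ... and via the convolution theorem: DFT(f * g) = DFT(f) . DFT(g)
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real
assert np.allclose(direct, via_fft)
```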
After we introduce the inversion formula, we can prove two more proper-
ties of the Fourier transform.
$$= \int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}e^{-a^2\omega^2}e^{i\omega(y-x)}\,d\omega\right)dy = \int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}e^{-a^2\omega^2+i\omega(y-x)}\,d\omega\right)dy.$$
In Example 2.1.5, we obtained $\mathcal{F}[e^{-a^2x^2}](\omega) = \displaystyle\int_{-\infty}^{\infty}e^{-a^2x^2}e^{i\omega x}\,dx = \frac{\sqrt{\pi}}{a}e^{-\frac{\omega^2}{4a^2}}$, for $a > 0$.
Substituting $x$ by $\omega$ and $\omega$ by $y-x$, one obtains that the second integral is $\displaystyle\int_{-\infty}^{\infty}e^{-a^2\omega^2}e^{i\omega(y-x)}\,d\omega = \frac{\sqrt{\pi}}{a}e^{-\frac{(y-x)^2}{4a^2}}$. Therefore,
$$I(a) = \frac{\sqrt{\pi}}{a}\int_{-\infty}^{\infty}f(y)e^{-\frac{(y-x)^2}{4a^2}}\,dy.$$
Using the substitution $\dfrac{y-x}{2a} = t \iff y = 2at+x$, which implies that $dy = 2a\,dt$, we obtain that
$$I(a) = \frac{\sqrt{\pi}}{a}\int_{-\infty}^{\infty}f(2at+x)e^{-t^2}\,2a\,dt = 2\sqrt{\pi}\int_{-\infty}^{\infty}f(2at+x)e^{-t^2}\,dt.$$
Theorem 2.3.3 (Plancherel's Theorem). For every $f, g \in L^1$,
$$\int_{-\infty}^{\infty}f(x)\overline{g(x)}\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(u)\overline{\hat{g}(u)}\,du. \qquad (2.17)$$
Proof. We first notice that $\overline{\hat{g}(-\omega)} = \mathcal{F}[\overline{g(x)}](\omega)$. Indeed, since $\overline{e^{i\theta}} = \cos\theta - i\sin\theta = \cos(-\theta) + i\sin(-\theta) = e^{-i\theta}$, one obtains
$$\overline{\hat{g}(-\omega)} = \overline{\int_{-\infty}^{\infty}g(x)e^{-i\omega x}\,dx} = \int_{-\infty}^{\infty}\overline{g(x)}\,e^{i\omega x}\,dx = \mathcal{F}[\overline{g(x)}](\omega).$$
hence,
$$\int_{-\infty}^{\infty}f(x)\overline{g(x)}\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(u)\overline{\hat{g}(u)}\,du.$$
Proof. Since for $z = a+ib$ one gets $z\bar{z} = a^2+b^2 = |z|^2$, it follows that $f(x)\overline{f(x)} = |f(x)|^2$ and (2.18) is obtained from (2.17) for $g = f$.
Spectral Analysis
The formula (2.15) (Inversion Formula) can be written (by $e^{-i\theta} = \cos\theta - i\sin\theta$) as
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)\left(\cos(\omega x) - i\sin(\omega x)\right)d\omega.$$
The formula can be interpreted as the expression of the signal $f(x)$ as a linear combination of simple harmonic functions $\cos(\omega x)$ and $\sin(\omega x)$. It follows that the frequency content of the signal $f(x)$ is spread over a continuous range of frequencies with the amplitude $\hat{f}(\omega)$ of any frequency $\omega \in \mathbb{R}$. One considers $|f(x)|^2$ as a measure of the intensity of the signal $f$ at a moment $x$ and $|\hat{f}(\omega)|^2$ as the measure of the intensity at the frequency $\omega$.
Formula (2.18) from Corollary 2.3.4 (Parseval's Formula) has the following interpretation: if $|f(x)|^2\,dx$ is the power content of $f(x)$ in the interval $[x, x+dx]$, then $\frac{1}{2\pi}|\hat{f}(\omega)|^2\,d\omega$ is the power content in the frequency range $\omega$ to $\omega+d\omega$. Hence, $\displaystyle\int_{-\infty}^{\infty}|f(x)|^2\,dx = \frac{1}{2\pi}\int_{-\infty}^{\infty}|\hat{f}(\omega)|^2\,d\omega$ represents the total power content of the signal $f$.
Theorem 2.3.5. If $x$ is a discontinuity of the first kind of the function $f \in L^1$ (i.e. the one sided limits $f(x-)$ and $f(x+)$ are finite), then
$$\frac{f(x-)+f(x+)}{2} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{-i\omega x}\,d\omega. \qquad (2.19)$$
Proof. We modify the proof of Theorem 2.3.1 (Inversion Formula) by writing
$$I(a) = 2\sqrt{\pi}\int_{-\infty}^{\infty}f(2at+x)e^{-t^2}\,dt = 2\sqrt{\pi}\left[\int_{-\infty}^{0}f(2at+x)e^{-t^2}\,dt + \int_{0}^{\infty}f(2at+x)e^{-t^2}\,dt\right].$$
Let us denote $\int_{-\infty}^{0}f(2at+x)e^{-t^2}\,dt$ by $I_1$ and $\int_{0}^{\infty}f(2at+x)e^{-t^2}\,dt$ by $I_2$. Since in $I_1$ we have $t \in (-\infty, 0)$, it follows that $2at+x < x$, hence $\lim_{a\to 0}f(2at+x) = f(x-)$. Similarly, in $I_2$, $t \in (0, \infty)$, which implies that $2at+x > x$ and $\lim_{a\to 0}f(2at+x) = f(x+)$. As $a \to 0$, we get that
$$\int_{-\infty}^{\infty}\hat{f}(\omega)e^{-i\omega x}\,d\omega = \lim_{a\to 0}I(a) = 2\sqrt{\pi}\left[f(x-)\int_{-\infty}^{0}e^{-t^2}\,dt + f(x+)\int_{0}^{\infty}e^{-t^2}\,dt\right].$$
From Euler's integral $\int_{-\infty}^{\infty}e^{-t^2}\,dt = \sqrt{\pi}$, since $e^{-t^2}$ is an even function, it follows that $\int_{-\infty}^{0}e^{-t^2}\,dt = \int_{0}^{\infty}e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}$. So,
$$\lim_{a\to 0}I(a) = 2\sqrt{\pi}\cdot\frac{\sqrt{\pi}}{2}\left(f(x-)+f(x+)\right) = \pi\left(f(x-)+f(x+)\right);$$
hence,
$$\pi\left(f(x-)+f(x+)\right) = \int_{-\infty}^{\infty}\hat{f}(\omega)e^{-i\omega x}\,d\omega.$$
Since $e^{i\theta} = \cos\theta + i\sin\theta$, the right hand side splits into the following sum:
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}\cos\omega(y-x)\,d\omega\right)dy + \frac{i}{2\pi}\int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}\sin\omega(y-x)\,d\omega\right)dy. \qquad (2.21)$$
Denoting $\int_{-\infty}^{\infty}\cos\omega(y-x)\,d\omega$ by $I_1$ and $\int_{-\infty}^{\infty}\sin\omega(y-x)\,d\omega$ by $I_2$, since $\cos\omega(y-x)$ and $\sin\omega(y-x)$ are even and odd functions with respect to $\omega$, respectively, one gets
$$I_1 = 2\int_0^{\infty}\cos\omega(y-x)\,d\omega \quad\text{and}\quad I_2 = 0.$$
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\left(\int_{-\infty}^{\infty}f(y)\cos\omega(y-x)\,dy\right)d\omega. \qquad (2.22)$$
But $\cos\omega(y-x) = \cos\omega y\cos\omega x + \sin\omega y\sin\omega x$ and, taking into account that $\cos\omega x$ and $\sin\omega x$ are constants with respect to the variable of integration $y$, the integral in (2.22) splits as
$$f(x) = \frac{1}{\pi}\int_0^{\infty}\cos\omega x\left(\int_{-\infty}^{\infty}f(y)\cos\omega y\,dy\right)d\omega + \frac{1}{\pi}\int_0^{\infty}\sin\omega x\left(\int_{-\infty}^{\infty}f(y)\sin\omega y\,dy\right)d\omega. \qquad (2.23)$$
The Fourier Integral for Odd Functions
If $f$ is odd, i.e. $f(-x) = -f(x)$, for every $x \in \mathbb{R}$, then, since $\cos\omega y$ is even,
$$\int_{-\infty}^{\infty}f(y)\cos\omega y\,dy = 0;$$
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\sin\omega x\left(\int_0^{\infty}f(y)\sin\omega y\,dy\right)d\omega. \qquad (2.25)$$
The integrals
$$\mathcal{F}_c[f](\omega) = \hat{f}_c(\omega) = \int_0^{\infty}f(y)\cos\omega y\,dy \qquad (2.26)$$
and
$$\mathcal{F}_s[f](\omega) = \hat{f}_s(\omega) = \int_0^{\infty}f(y)\sin\omega y\,dy \qquad (2.27)$$
are called the Fourier Cosine Transform and Fourier Sine Transform of $f$, respectively.
By replacing the inner integrals from formulas (2.24) and (2.25) by $\hat{f}_c(\omega)$ and $\hat{f}_s(\omega)$, respectively, one obtains the Inverse Formulas of the Fourier Cosine and Sine Transforms:
$$\mathcal{F}_c^{-1}[\hat{f}_c](x) = f(x) = \frac{2}{\pi}\int_0^{\infty}\hat{f}_c(\omega)\cos\omega x\,d\omega \qquad (2.28)$$
and
$$\mathcal{F}_s^{-1}[\hat{f}_s](x) = f(x) = \frac{2}{\pi}\int_0^{\infty}\hat{f}_s(\omega)\sin\omega x\,d\omega. \qquad (2.29)$$
Next we are going to provide some examples and solve a couple of integral equations with kernels $\cos\omega x$ and $\sin\omega x$.
Example 2.4.2. Solve the following equation:
$$\int_0^{\infty}f(x)\cos\omega x\,dx = \begin{cases} e^{\omega}, & \omega \in (0, 1) \\ 0, & \omega \geq 1. \end{cases}$$
Solution. One gets $\hat{f}_c(\omega) = \begin{cases} e^{\omega}, & \omega \in (0, 1) \\ 0, & \omega \geq 1. \end{cases}$ Using the Inversion Formula (2.28), one obtains
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\hat{f}_c(\omega)\cos\omega x\,d\omega = \frac{2}{\pi}\int_0^1 e^{\omega}\cos\omega x\,d\omega = \frac{2}{\pi}\int_0^1 e^{\omega}\,\frac{e^{i\omega x}+e^{-i\omega x}}{2}\,d\omega$$
$$= \frac{1}{\pi}\left[\int_0^1 e^{\omega(1+ix)}\,d\omega + \int_0^1 e^{\omega(1-ix)}\,d\omega\right] = \frac{1}{\pi}\left[\frac{e^{\omega(1+ix)}}{1+ix}\Big|_0^1 + \frac{e^{\omega(1-ix)}}{1-ix}\Big|_0^1\right] = \frac{1}{\pi}\left[\frac{e^{1+ix}-1}{1+ix} + \frac{e^{1-ix}-1}{1-ix}\right]$$
$$= \frac{1}{\pi}\cdot\frac{(1-ix)(e^{1+ix}-1) + (1+ix)(e^{1-ix}-1)}{1+x^2}.$$
In conclusion,
$$f(x) = \frac{2}{\pi}\cdot\frac{e(\cos x + x\sin x) - 1}{1+x^2}.$$
Example 2.4.3. Solve the following equation:
$$\int_0^{\infty}f(x)\sin\omega x\,dx = \frac{\omega}{\omega^2+1}.$$
Solution. As $\hat{f}_s(\omega) = \dfrac{\omega}{\omega^2+1}$, using the Inversion Formula (2.29), one gets (for $x > 0$)
$$f(x) = \frac{2}{\pi}\int_0^{\infty}\hat{f}_s(\omega)\sin\omega x\,d\omega = \frac{2}{\pi}\int_0^{\infty}\frac{\omega}{\omega^2+1}\sin\omega x\,d\omega.$$
We will use residues for solving the integral. Due to the fact that the function $\dfrac{\omega}{\omega^2+1}\sin\omega x$ is even with respect to the variable $\omega$, it follows that
$$f(x) = \frac{1}{\pi}\int_{-\infty}^{\infty}\frac{\omega}{\omega^2+1}\sin\omega x\,d\omega = \frac{1}{\pi}\operatorname{Im}\left(\int_{-\infty}^{\infty}\frac{\omega}{\omega^2+1}e^{i\omega x}\,d\omega\right)$$
$$= \frac{1}{\pi}\operatorname{Im}\left(2\pi i\operatorname{res}\left(\frac{ze^{ixz}}{z^2+1}, i\right)\right) = \frac{1}{\pi}\operatorname{Im}\left(\pi i e^{-x}\right) = e^{-x}.$$
Remark 2.4.4. We have shown that $I_2$ and the second integral in (2.21) are equal to 0. Then formula (2.21) remains true with $-I_2 = 0$, i.e.
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}\cos\omega(y-x)\,d\omega\right)dy - \frac{i}{2\pi}\int_{-\infty}^{\infty}f(y)\left(\int_{-\infty}^{\infty}\sin\omega(y-x)\,d\omega\right)dy.$$
Let us just use the variable $x$ instead of $y$. We get the following formula:
$$\hat{f}(\omega) = \int_{-\infty}^{\infty}f(x)e^{-i\omega x}\,dx. \qquad (2.31)$$
Then (2.30) can be written as the inverse formula for this Fourier transform
$$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\hat{f}(\omega)e^{i\omega x}\,d\omega.$$
Therefore, if one takes $e^{-i\omega x}$ as the kernel of the direct formula of the Fourier transform, then the inversion formula kernel is $e^{i\omega x}$.
The difference between the two approaches is $\pm I_2 = 0$, hence both definitions (2.3) and (2.31) are valid. Their properties are similar, with small differences. For instance, the differentiation of the original becomes $\mathcal{F}[f'(x)](\omega) = i\omega\hat{f}(\omega)$ (for the definition given by (2.31)).
Similarly, one can modify the kernel $e^{i\omega x}$ and one obtains other forms of the Fourier transform. A frequently used transform is the following:
$$\mathcal{F}[f](\omega) = \hat{f}(\omega) = \int_{-\infty}^{\infty}f(x)e^{-2\pi i\omega x}\,dx. \qquad (2.32)$$
By a slight modification in the proof of Theorem 2.3.1, one can show that the inverse formula has the form
$$\mathcal{F}^{-1}[\hat{f}](x) = f(x) = \int_{-\infty}^{\infty}\hat{f}(\omega)e^{2\pi i\omega x}\,d\omega. \qquad (2.33)$$
1. time-delay becomes
2. translation becomes
2.5 Discrete Fourier Transform (DFT)
Consider the form (2.31) of the Fourier transform of a continuous signal
f ∈ L1 . Let us restrict ω to the interval [0, 2π] and consider N samples
$$\omega:\ 0,\ \frac{2\pi}{N},\ \frac{4\pi}{N},\ \ldots,\ \frac{2k\pi}{N},\ \ldots,\ \frac{(N-1)2\pi}{N}.$$
Then
$$\hat{f}\left(\frac{2k\pi}{N}\right) = \int_{-\infty}^{\infty}f(x)e^{-i\frac{2\pi}{N}kx}\,dx, \quad k = \overline{0, N-1}. \qquad (2.34)$$
then the vector
$$\mathcal{F}_d[f] = \hat{f} = \left(\hat{f}_0, \hat{f}_1, \ldots, \hat{f}_{N-1}\right)^T$$
is called the Discrete Fourier Transform of $f$.
Denoting $e^{-i\frac{2\pi}{N}}$ by $w$, formula (2.35) becomes
$$\mathcal{F}_d[f]_k = \hat{f}_k = \sum_{n=0}^{N-1}f_n w^{kn}, \quad k = \overline{0, N-1}. \qquad (2.36)$$
Notice that $w = \cos\dfrac{2\pi}{N} - i\sin\dfrac{2\pi}{N}$ and that $\hat{f}_k = f_0 + f_1w^k + \cdots + f_nw^{kn} + \cdots + f_{N-1}w^{k(N-1)}$.
Consider the matrix
$$W = \begin{pmatrix} 1 & 1 & 1 & \cdots & 1 & \cdots & 1 \\ 1 & w & w^2 & \cdots & w^n & \cdots & w^{N-1} \\ 1 & w^2 & w^4 & \cdots & w^{2n} & \cdots & w^{2(N-1)} \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 1 & w^k & w^{2k} & \cdots & w^{kn} & \cdots & w^{k(N-1)} \\ \cdots & \cdots & \cdots & \cdots & \cdots & \cdots & \cdots \\ 1 & w^{N-1} & w^{(N-1)2} & \cdots & w^{(N-1)n} & \cdots & w^{(N-1)(N-1)} \end{pmatrix}.$$
Then
$$\mathcal{F}_d[f] = \hat{f} = Wf. \qquad (2.37)$$
$$W\cdot\overline{W} = NI_N. \qquad (2.38)$$
Proof. Since $\overline{e^{i\theta}} = e^{-i\theta}$, it follows that $\overline{w} = \overline{e^{-i\frac{2\pi}{N}}} = e^{i\frac{2\pi}{N}} = w^{-1}$.
Let us multiply the $k$th row of $W$ by the $n$th column of $\overline{W}$, $1 \leq k, n \leq N$. One obtains
$$\sum_{l=0}^{N-1}w^{kl}\,\overline{w}^{\,ln} = \sum_{l=0}^{N-1}w^{kl}w^{-ln} = \sum_{l=0}^{N-1}w^{(k-n)l}.$$
If $k = n$, then this value is $\sum_{l=0}^{N-1}w^0 = 1+1+\cdots+1 = N$. If $k \neq n$, then we have a geometric progression with ratio $q = w^{k-n}$, which has the sum $\sum_{l=0}^{N-1}q^l = \dfrac{1-q^N}{1-q}$. Hence, the value is
$$\sum_{l=0}^{N-1}w^{(k-n)l} = \frac{1-w^{(k-n)N}}{1-w^{k-n}} = \frac{1-1}{1-w^{k-n}} = 0,$$
since $w^{(k-n)N} = e^{-i\frac{2\pi}{N}(k-n)N} = e^{-2\pi i(k-n)} = \cos(2\pi(k-n)) - i\sin(2\pi(k-n)) = 1$. Therefore, the product $W\cdot\overline{W}$ is a matrix with $N$ on the main diagonal and 0 elsewhere, i.e. $W\cdot\overline{W} = NI_N$.
For $N = 4$ one has $w = -i$ and
$$W = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & w & w^2 & w^3 \\ 1 & w^2 & w^4 & w^6 \\ 1 & w^3 & w^6 & w^9 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix}.$$
Its inverse is
$$W^{-1} = \frac{1}{4}\overline{W} = \frac{1}{4}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix}.$$
Consider the signal $f = (1, 1, 0, -1)^T$ (see Figure 2.9).
One obtains
$$\mathcal{F}_d[f] = \hat{f} = Wf = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ 0 \\ -1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1-2i \\ 1 \\ 1+2i \end{pmatrix}.$$
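Since `numpy.fft.fft` uses exactly this kernel $w^{kn} = e^{-2\pi ikn/N}$, the example can be reproduced in a line of Python (a sketch, not part of the book):

```python
import numpy as np

f = np.array([1, 1, 0, -1], dtype=complex)
F = np.fft.fft(f)   # same convention: w = exp(-2*pi*1j/N)
assert np.allclose(F, [1, 1 - 2j, 1, 1 + 2j])
```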
1. Linearity: $\mathcal{F}_d[\alpha f + \beta g] = \alpha\mathcal{F}_d[f] + \beta\mathcal{F}_d[g]$, for every $f, g \in \mathbb{C}^N$ and $\alpha, \beta \in \mathbb{C}$.
Proof. By (2.37), ... $= \hat{f}_k$.
3. Translation (shift): $\mathcal{F}_d\left[e^{i\frac{2\pi}{N}nm}f_n\right]_k = \hat{f}_{k-m}$.
4. Plancherel's Theorem: $\displaystyle\sum_{n=0}^{N-1}f_n\overline{g_n} = \frac{1}{N}\sum_{k=0}^{N-1}\hat{f}_k\overline{\hat{g}_k}$.
5. Parseval's Formula: $\displaystyle\sum_{n=0}^{N-1}|f_n|^2 = \frac{1}{N}\sum_{k=0}^{N-1}|\hat{f}_k|^2$.
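The discrete Parseval formula can be confirmed on the signal of Example 2.5.4 (a Python sketch, not from the book):

```python
import numpy as np

f = np.array([1.0, 1.0, 0.0, -1.0])
F = np.fft.fft(f)
# sum |f_n|^2  =  (1/N) sum |F_k|^2   (here 3 = 12/4)
assert np.isclose(np.sum(np.abs(f)**2), np.sum(np.abs(F)**2)/len(f))
```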
2.6 Fast Fourier Transform (FFT)
The FFT represents algorithms which apply iterations of DFTs in order to significantly reduce the number of operations. For instance, the Cooley–Tukey FFT algorithm recursively breaks down a DFT of size $N = N_1N_2$ into $N_1$ DFTs of size $N_2$.
We will denote the number $w$ of a DFT of size $N$ by $w_N$, hence $w_N = e^{-i\frac{2\pi}{N}}$.
Consider the case of $N$ an even number and split the DFT of a signal $f = (f_n)_{n=\overline{0,N-1}}$ given by
$$\hat{f}_k = \sum_{n=0}^{N-1}f_n e^{-i\frac{2\pi}{N}kn}$$
into $\frac{N}{2}$ samples, one for $n$ even ($n = 2m$), and the other for $n$ odd ($n = 2m+1$):
$$\hat{f}_k = \sum_{m=0}^{\frac{N}{2}-1}f_{2m}w_N^{2mk} + \sum_{m=0}^{\frac{N}{2}-1}f_{2m+1}w_N^{(2m+1)k}.$$
Since $w_N^{2l} = e^{-i\frac{2\pi}{N}2l} = e^{-i\frac{2\pi}{N/2}l} = w_{\frac{N}{2}}^{l}$, one can write
$$\hat{f}_k = \sum_{m=0}^{\frac{N}{2}-1}f_{2m}w_{\frac{N}{2}}^{mk} + w_N^k\sum_{m=0}^{\frac{N}{2}-1}f_{2m+1}w_{\frac{N}{2}}^{mk}. \qquad (2.42)$$
The two sums represent DFTs of the even and odd entries of the signal $f$, respectively. Let us denote them by $(\hat{e}_k)_k$ and $(\hat{o}_k)_k$, respectively. Then (2.42) can be written as
$$\hat{f}_k = \hat{e}_k + w_N^k\hat{o}_k, \quad k = \overline{0, N-1}. \qquad (2.43)$$
Since $\hat{e}_k$ and $\hat{o}_k$ are periodic with period $\frac{N}{2}$, notice that only the values $\hat{e}_k$ and $\hat{o}_k$ with $k = \overline{0, \frac{N}{2}-1}$ are necessary.
Example 2.6.1. Consider $N = 2$. Then $w_2 = e^{-i\frac{2\pi}{2}} = e^{-i\pi} = -1$. The equalities (2.43) are given by
$$\begin{cases} \hat{f}_0 = \hat{e}_0 + w_2^0\hat{o}_0 \\ \hat{f}_1 = \hat{e}_1 + w_2^1\hat{o}_1; \end{cases}$$
hence,
$$\begin{cases} \hat{f}_0 = \hat{e}_0 + \hat{o}_0 \\ \hat{f}_1 = \hat{e}_1 - \hat{o}_1. \end{cases} \qquad (2.44)$$
Relations (2.44) are represented by the following butterfly diagram (see Figure 2.10).
Example 2.6.2. Consider $N = 4$. Then $w_4 = e^{-i\frac{2\pi}{4}} = e^{-i\frac{\pi}{2}} = -i$. One obtains that $\hat{e}_k$ and $\hat{o}_k$ are periodic with period $\frac{N}{2} = 2$. Hence, $\hat{e}_2 = \hat{e}_0$, $\hat{e}_3 = \hat{e}_1$, $\hat{o}_2 = \hat{o}_0$ and $\hat{o}_3 = \hat{o}_1$. The equalities (2.43) are given by
$$\begin{cases} \hat{f}_0 = \hat{e}_0 + w_4^0\hat{o}_0 \\ \hat{f}_1 = \hat{e}_1 + w_4^1\hat{o}_1 \\ \hat{f}_2 = \hat{e}_2 + w_4^2\hat{o}_2 \\ \hat{f}_3 = \hat{e}_3 + w_4^3\hat{o}_3; \end{cases}$$
hence,
$$\begin{cases} \hat{f}_0 = \hat{e}_0 + \hat{o}_0 \\ \hat{f}_1 = \hat{e}_1 - i\hat{o}_1 \\ \hat{f}_2 = \hat{e}_0 - \hat{o}_0 \\ \hat{f}_3 = \hat{e}_1 + i\hat{o}_1. \end{cases} \qquad (2.45)$$
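The butterfly relations generalize to a compact recursive implementation of the Cooley–Tukey algorithm; a Python sketch (illustrative, for N a power of 2):

```python
import cmath

def fft(f):
    """Radix-2 decimation-in-time FFT; len(f) must be a power of 2."""
    N = len(f)
    if N == 1:
        return list(f)
    e = fft(f[0::2])                 # DFT of the even-indexed entries
    o = fft(f[1::2])                 # DFT of the odd-indexed entries
    out = [0] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * o[k]   # twiddle w_N^k * o_k
        out[k] = e[k] + t            # the butterfly (2.43)
        out[k + N // 2] = e[k] - t   # uses the N/2-periodicity of e_k, o_k
    return out

# The signal from Example 2.5.4 gives the same spectrum as the matrix W f.
print(fft([1, 1, 0, -1]))
```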
Figure 2.11: The Combination of 2 Butterfly Operations
until we get to 2-points transform (as in (2.44)). These p iterations form the
so called decimation-in-time (DIT) algorithm. An equivalent decimation-in-
frequency algorithm also exists.
The advantages of the FFT with respect to the simple DFT result from the following evaluation. The DFT requires $N^2$ multiplications by complex numbers. At each iteration of the FFT, the number of complex multiplications is $\frac{N}{2}$ (multiplications by $w_N^0 = 1$ and $w_N^{N/2} = e^{-i\pi} = -1$ being simple additions or subtractions). The number of iterations is $p = \log_2 N$. Therefore, the FFT requires $\frac{N}{2}\log_2 N$ multiplications. To compare, we consider the following cases:

N              DFT: N^2       FFT: (N/2) log2 N
2^5  = 32      1024           16 * 5 = 80
2^10 = 1024    1048576        512 * 10 = 5120
2^15 = 32768   2^30           16384 * 15 = 245760
Example 2.6.3. If $N = 2^3 = 8$, then the 3 iterations are indicated in Figure 2.12.
Example 2.6.4. Consider the Example 2.5.4. We have $N = 4$, $w = e^{-i\frac{2\pi}{4}} = -i$ and the signal $f = (1, 1, 0, -1)^T$.
Figure 2.12: The Combination of 4 Butterfly Operations
Iteration 1 (see Figure 2.11): $\frac{N}{2} = 2$ points DFT of the inputs $f_0 = 1$ and $f_2 = 0$. One obtains (see Figure 2.10, i.e. (2.44))
$$\hat{e}_0 = f_0 + f_2 = 1, \qquad \hat{e}_1 = f_0 - f_2 = 1,$$
$$\hat{o}_0 = f_1 + f_3 = 0, \qquad \hat{o}_1 = f_1 - f_3 = 2.$$
2.7 Exercises
E 9. Determine the Fourier transform $\hat{f}(\omega)$ (Definition 2.1.2) of the following functions:
a) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \begin{cases} 1, & x \in (a, b) \\ 0, & x \in \mathbb{R}\setminus(a, b), \end{cases}$ $a, b \in \mathbb{R}$, $a < b$;
b) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = e^{-4|x+2|}$, where $|\cdot|$ is the absolute value;
c) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \dfrac{x^2-a^2}{(x^2+a^2)^2}$, $a > 0$.
Solution. a) We will use formula (2.3). One gets
$$\hat{f}(\omega) = \int_{-\infty}^{a}0\cdot e^{i\omega x}\,dx + \int_a^b 1\cdot e^{i\omega x}\,dx + \int_b^{\infty}0\cdot e^{i\omega x}\,dx = \int_a^b e^{i\omega x}\,dx = \frac{e^{i\omega x}}{i\omega}\Big|_a^b = \frac{e^{i\omega b}-e^{i\omega a}}{i\omega}.$$
Since $\omega$ appears above in a denominator, we need to impose the condition $\omega \neq 0$. But $\hat{f}(0) = \int_a^b e^{i\cdot 0\cdot x}\,dx = \int_a^b dx = b-a$. It follows that
$$\hat{f}(\omega) = \begin{cases} \dfrac{e^{i\omega b}-e^{i\omega a}}{i\omega}, & \omega \in \mathbb{R}^* \\ b-a, & \omega = 0; \end{cases}$$
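As a numerical cross-check of a), here is a Python sketch with illustrative values a = −1, b = 2, ω = 1.5 (not part of the book):

```python
import cmath

a, b, w = -1.0, 2.0, 1.5
# Closed form: (e^{i w b} - e^{i w a}) / (i w)
exact = (cmath.exp(1j*w*b) - cmath.exp(1j*w*a)) / (1j*w)

# Trapezoid rule for int_a^b e^{i w x} dx
n = 100000
h = (b - a) / n
s = 0.5*(cmath.exp(1j*w*a) + cmath.exp(1j*w*b))
for k in range(1, n):
    s += cmath.exp(1j*w*(a + k*h))
approx = h*s
assert abs(approx - exact) < 1e-8
```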
b) It is obvious that $|x+2| = \begin{cases} x+2, & x \geq -2 \\ -x-2, & x < -2. \end{cases}$ We will use again formula (2.3). One obtains
$$\hat{f}(\omega) = \int_{-\infty}^{\infty}f(x)e^{i\omega x}\,dx = \int_{-\infty}^{-2}e^{4(x+2)}e^{i\omega x}\,dx + \int_{-2}^{\infty}e^{-4(x+2)}e^{i\omega x}\,dx$$
$$= \int_{-\infty}^{-2}e^{8+4x+i\omega x}\,dx + \int_{-2}^{\infty}e^{-8-4x+i\omega x}\,dx = \frac{e^{8+4x+i\omega x}}{4+i\omega}\Big|_{-\infty}^{-2} + \frac{e^{-8-4x+i\omega x}}{-4+i\omega}\Big|_{-2}^{\infty}$$
$$= \frac{e^{-2i\omega}}{4+i\omega} - \frac{1}{4+i\omega}\lim_{x\to-\infty}e^{8+4x+i\omega x} + \frac{1}{-4+i\omega}\lim_{x\to\infty}e^{-8-4x+i\omega x} - \frac{e^{-2i\omega}}{-4+i\omega}.$$
The first limit is equal to $\lim_{x\to-\infty}e^{8+4x}\cos\omega x + i\lim_{x\to-\infty}e^{8+4x}\sin\omega x$. As $\lim_{x\to-\infty}e^{8+4x} = 0$ and $\cos\omega x, \sin\omega x \in [-1, 1]$, it follows that $\lim_{x\to-\infty}e^{8+4x+i\omega x} = 0$. Similarly, $\lim_{x\to+\infty}e^{-8-4x+i\omega x} = 0$. Hence,
$$\hat{f}(\omega) = \frac{e^{-2i\omega}}{4+i\omega} - \frac{e^{-2i\omega}}{-4+i\omega} = e^{-2i\omega}\left[\frac{1}{4+i\omega} - \frac{1}{-4+i\omega}\right] = \frac{8}{16+\omega^2}\,e^{-2i\omega};$$
c) Since $f = \dfrac{P}{Q}$, where $P, Q \in \mathbb{R}[X]$, $Q(x) \neq 0$ for every $x \in \mathbb{R}$ and $\deg P \leq \deg Q - 2$, we use formula (2.4). Let us denote $\dfrac{(z^2-a^2)e^{i\omega z}}{(z^2+a^2)^2}$ by $g(z)$, $z \in \mathbb{C}$.
Case 1: $\omega \geq 0$. Then $z = ai$ is the only complex root located in the upper half plane of the polynomial $Q(x) = (x^2+a^2)^2$ (since $\operatorname{Im}(ai) = a > 0$) and it is a pole of order 2 of $g$. Hence,
$$\operatorname{res}(g, ai) = \lim_{z\to ai}\left[\frac{(z^2-a^2)e^{i\omega z}}{(z+ai)^2}\right]'$$
$$= \lim_{z\to ai}\frac{[2ze^{i\omega z}+(z^2-a^2)i\omega e^{i\omega z}](z+ai)^2 - (z^2-a^2)e^{i\omega z}\,2(z+ai)}{(z+ai)^4}$$
$$= \lim_{z\to ai}\frac{[2ze^{i\omega z}+(z^2-a^2)i\omega e^{i\omega z}](z+ai) - 2(z^2-a^2)e^{i\omega z}}{(z+ai)^3} = \frac{e^{-a\omega}(-4a^2+4a^3\omega+4a^2)}{-8a^3i} = \frac{\omega}{-2i}e^{-a\omega}.$$
W 9. Determine the Fourier transform $\hat{f}(\omega)$ (Definition 2.1.2) of the following functions:
a) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \begin{cases} e^{-x}, & x \in (0, 1) \\ 0, & x \in \mathbb{R}\setminus(0, 1); \end{cases}$
b) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \begin{cases} \sin x, & x \in (0, \pi) \\ 0, & x \in \mathbb{R}\setminus(0, \pi); \end{cases}$
c) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = e^{-|3x+2|}$, where $|\cdot|$ is the absolute value;
d) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = e^{-16x^2-5}$;
e) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \dfrac{x+3}{x^4+5x^2+4}$;
f) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \dfrac{1}{(x^2+1)^3}$;
g) $f : \mathbb{R} \to \mathbb{R}$, $f(x) = \dfrac{1}{x^4+1}$.
Answer. a) One obtains
$$\hat{f}(\omega) = \int_0^1 e^{-x}e^{i\omega x}\,dx = \int_0^1 e^{x(-1+i\omega)}\,dx = \frac{e^{x(-1+i\omega)}}{-1+i\omega}\Big|_0^1 = \frac{(1+i\omega)(1-e^{-1+i\omega})}{1+\omega^2};$$
b) One gets
$$\hat{f}(\omega) = \int_0^{\pi}\sin x\,e^{i\omega x}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{i\omega x}\,dx.$$
Thus,
$$\hat{f}(\omega) = \frac{1}{2i}\left[\int_0^{\pi}e^{ix+i\omega x}\,dx - \int_0^{\pi}e^{-ix+i\omega x}\,dx\right] = \frac{1}{2i}\left[\frac{e^{ix(1+\omega)}}{i(1+\omega)}\Big|_0^{\pi} - \frac{e^{ix(-1+\omega)}}{i(-1+\omega)}\Big|_0^{\pi}\right]$$
$$= -\frac{1}{2}\left[\frac{e^{i\pi(1+\omega)}-1}{1+\omega} - \frac{e^{i\pi(-1+\omega)}-1}{-1+\omega}\right].$$
Since $e^{i\pi} = \cos\pi + i\sin\pi = -1$, it follows that
$$\hat{f}(\omega) = \frac{e^{i\pi\omega}+1}{1-\omega^2}, \quad\text{for } \omega \in \mathbb{R}\setminus\{\pm 1\}.$$
But
$$\hat{f}(1) = \int_0^{\pi}\sin x\,e^{ix}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{ix}\,dx = \frac{1}{2i}\left[\int_0^{\pi}e^{2ix}\,dx - \int_0^{\pi}dx\right] = -\frac{\pi}{2i}$$
and
$$\hat{f}(-1) = \int_0^{\pi}\sin x\,e^{-ix}\,dx = \int_0^{\pi}\frac{e^{ix}-e^{-ix}}{2i}\,e^{-ix}\,dx = \frac{1}{2i}\left[\int_0^{\pi}dx - \int_0^{\pi}e^{-2ix}\,dx\right] = \frac{\pi}{2i}.$$
Hence,
$$\hat{f}(\omega) = \begin{cases} \dfrac{e^{i\pi\omega}+1}{1-\omega^2}, & \omega \in \mathbb{R}\setminus\{\pm 1\} \\ -\dfrac{\pi}{2i}, & \omega = 1 \\ \dfrac{\pi}{2i}, & \omega = -1; \end{cases}$$
c) As in Exercise 9 b),
$$\hat{f}(\omega) = \int_{-\infty}^{-\frac{2}{3}}e^{3x+2}e^{i\omega x}\,dx + \int_{-\frac{2}{3}}^{\infty}e^{-3x-2}e^{i\omega x}\,dx = \frac{6}{9+\omega^2}\,e^{-\frac{2i\omega}{3}};$$
d) One obtains $\hat{f}(\omega) = \dfrac{\sqrt{\pi}}{4}\,e^{-5-\frac{\omega^2}{64}}$ (see Example 2.1.5);
e) Denote $\dfrac{(z+3)e^{i\omega z}}{(z^2+1)(z^2+4)}$ by $g(z)$. Hence (see Exercise 9 c)),
$$\hat{f}(\omega) = \begin{cases} 2\pi i\left(\operatorname{res}(g, i) + \operatorname{res}(g, 2i)\right), & \omega \geq 0 \\ -2\pi i\left(\operatorname{res}(g, -i) + \operatorname{res}(g, -2i)\right), & \omega < 0 \end{cases}$$
$$= \begin{cases} 2\pi i\left[\displaystyle\lim_{z\to i}\frac{(z+3)e^{i\omega z}}{(z+i)(z^2+4)} + \lim_{z\to 2i}\frac{(z+3)e^{i\omega z}}{(z^2+1)(z+2i)}\right], & \omega \geq 0 \\ -2\pi i\left[\displaystyle\lim_{z\to -i}\frac{(z+3)e^{i\omega z}}{(z-i)(z^2+4)} + \lim_{z\to -2i}\frac{(z+3)e^{i\omega z}}{(z^2+1)(z-2i)}\right], & \omega < 0 \end{cases}$$
$$= \begin{cases} \dfrac{\pi}{3}\left[(i+3)e^{-\omega} - \dfrac{(2i+3)e^{-2\omega}}{2}\right], & \omega \geq 0 \\ \dfrac{\pi}{3}\left[(-i+3)e^{\omega} - \dfrac{(-2i+3)e^{2\omega}}{2}\right], & \omega < 0; \end{cases}$$
f) Denote $\dfrac{e^{i\omega z}}{(z^2+1)^3}$ by $g(z)$. Hence (see Exercise 9 c)),
$$\hat{f}(\omega) = \begin{cases} 2\pi i\cdot\operatorname{res}(g, i), & \omega \geq 0 \\ -2\pi i\cdot\operatorname{res}(g, -i), & \omega < 0. \end{cases}$$
In conclusion, the complex roots are $\frac{1}{\sqrt{2}}(1+i)$, $\frac{1}{\sqrt{2}}(1-i)$, $\frac{1}{\sqrt{2}}(-1+i)$ and $\frac{1}{\sqrt{2}}(-1-i)$.
Let us denote $\dfrac{e^{i\omega z}}{z^4+1}$ by $g(z)$. Hence,
$$\hat{f}(\omega) = \begin{cases} 2\pi i\left[\operatorname{res}\left(g, \tfrac{1}{\sqrt{2}}(1+i)\right) + \operatorname{res}\left(g, \tfrac{1}{\sqrt{2}}(-1+i)\right)\right], & \omega \geq 0 \\ -2\pi i\left[\operatorname{res}\left(g, \tfrac{1}{\sqrt{2}}(1-i)\right) + \operatorname{res}\left(g, \tfrac{1}{\sqrt{2}}(-1-i)\right)\right], & \omega < 0 \end{cases}$$
$$= \begin{cases} \dfrac{\pi}{\sqrt{2}}\left[\dfrac{e^{\frac{\omega}{\sqrt{2}}(i-1)}}{1+i} + \dfrac{e^{\frac{\omega}{\sqrt{2}}(-i-1)}}{1-i}\right], & \omega \geq 0 \\ \dfrac{\pi}{\sqrt{2}}\left[\dfrac{e^{\frac{\omega}{\sqrt{2}}(1+i)}}{1-i} + \dfrac{e^{\frac{\omega}{\sqrt{2}}(1-i)}}{1+i}\right], & \omega < 0. \end{cases}$$
E 10. Determine the cosine Fourier transform $\hat{f}_c(\omega)$ (formula (2.26)) of the following functions:
a) $f : (0, \infty) \to \mathbb{R}$, $f(x) = \begin{cases} e^{ax}, & x \in (0, a) \\ 0, & x \geq a, \end{cases}$ $a \in \mathbb{R}$;
b) $f : (0, \infty) \to \mathbb{R}$, $f(x) = \begin{cases} \cos x, & x \in (0, \pi) \\ x, & x \in [\pi, 2\pi] \\ 0, & x > 2\pi; \end{cases}$
c) $f : (0, \infty) \to \mathbb{R}$, $f(x) = \dfrac{1}{(x^2+1)(x^4+5x^2+4)}$.
Solution. We use formula (2.26) throughout the whole exercise.
a) One gets
$$\hat{f}_c(\omega) = \int_0^a e^{ax}\cos(\omega x)\,dx + \int_a^{\infty}0\cdot\cos(\omega x)\,dx = \int_0^a e^{ax}\cos(\omega x)\,dx.$$
One way to solve this integral is to integrate by parts two times until the initial integral is reached. A better method is to replace into the integral
Euler's formula for cosine, $\cos x = \dfrac{e^{ix}+e^{-ix}}{2}$, $x \in \mathbb{R}$. It follows that
$$\hat{f}_c(\omega) = \int_0^a e^{ax}\,\frac{e^{i\omega x}+e^{-i\omega x}}{2}\,dx = \frac{1}{2}\left[\int_0^a e^{ax+i\omega x}\,dx + \int_0^a e^{ax-i\omega x}\,dx\right]$$
$$= \frac{1}{2}\left[\frac{e^{ax+i\omega x}}{a+i\omega}\Big|_0^a + \frac{e^{ax-i\omega x}}{a-i\omega}\Big|_0^a\right] = \frac{1}{2}\left[\frac{e^{a^2+i\omega a}-1}{a+i\omega} + \frac{e^{a^2-i\omega a}-1}{a-i\omega}\right]$$
$$= \frac{1}{2}\cdot\frac{(a-i\omega)(e^{a^2+i\omega a}-1) + (a+i\omega)(e^{a^2-i\omega a}-1)}{a^2+\omega^2}.$$
Since $e^{a^2+i\omega a} = e^{a^2}(\cos\omega a + i\sin\omega a)$ and $e^{a^2-i\omega a} = e^{a^2}(\cos\omega a - i\sin\omega a)$, it follows that
$$\hat{f}_c(\omega) = \frac{1}{2}\cdot\frac{2ae^{a^2}\cos\omega a + 2\omega e^{a^2}\sin\omega a - 2a}{a^2+\omega^2} = \frac{e^{a^2}(a\cos\omega a + \omega\sin\omega a) - a}{a^2+\omega^2};$$
b) One obtains

    f̂_c(ω) = ∫_0^π cos x cos(ωx) dx + ∫_π^{2π} x cos(ωx) dx + 0.

Let us denote I1(ω) = ∫_0^π cos x cos(ωx) dx and I2(ω) = ∫_π^{2π} x cos(ωx) dx, ω > 0.

For the computation of I1(ω) one can integrate by parts two times until the initial integral is reached. An easier way to solve it is to use the trigonometric formula cos α cos β = (cos(α + β) + cos(α − β))/2. Following this method, one obtains

    I1(ω) = ∫_0^π (cos(x + ωx) + cos(x − ωx))/2 dx
          = (1/2) ( ∫_0^π cos((1 + ω)x) dx + ∫_0^π cos((1 − ω)x) dx )
          = (1/2) ( sin((1 + ω)x)/(1 + ω) |_0^π + sin((1 − ω)x)/(1 − ω) |_0^π )
          = (1/2) ( sin(π + ωπ)/(1 + ω) + sin(π − ωπ)/(1 − ω) ).

Since sin(π + α) = −sin α and sin(π − α) = sin α, it follows that for ω ≠ ±1, we have

    I1(ω) = (1/2) ( −sin ωπ/(1 + ω) + sin ωπ/(1 − ω) ) = (sin ωπ/2) ( −1/(1 + ω) + 1/(1 − ω) ) = ω sin ωπ/(1 − ω²).

But ω > 0, so the previous expression makes sense for ω ≠ 1. On the other hand,

    I1(1) = ∫_0^π cos x cos(1·x) dx = ∫_0^π cos²x dx = ∫_0^π (1 + cos 2x)/2 dx = (1/2) ( x |_0^π + (sin 2x)/2 |_0^π ) = π/2.

For the computation of I2(ω) we proceed by integrating by parts. Therefore,

    I2(ω) = x sin ωx/ω |_π^{2π} − ∫_π^{2π} sin ωx/ω dx
          = (2π sin 2ωπ − π sin πω)/ω + cos ωx/ω² |_π^{2π}
          = (2π sin 2ωπ − π sin πω)/ω + (cos 2πω − cos πω)/ω².

We obtained

    f̂_c(ω) = I1(ω) + I2(ω) = ω sin ωπ/(1 − ω²) + (2π sin 2ωπ − π sin πω)/ω + (cos 2πω − cos πω)/ω²,

for ω ∈ (0, ∞) \ {1}, and f̂_c(1) = I1(1) + I2(1) = π/2 + (0 − 0) + (1 − (−1)) = π/2 + 2;
c) One gets

    f̂_c(ω) = ∫_0^∞ cos ωx/((x² + 1)(x⁴ + 5x² + 4)) dx = ∫_0^∞ cos ωx/((x² + 1)²(x² + 4)) dx.

Since we are dealing with the integral of an even function, one can write

    f̂_c(ω) = (1/2) ∫_{−∞}^∞ cos ωx/((x² + 1)²(x² + 4)) dx.

The integral is now the real part of the integral I = ∫_{−∞}^∞ e^{iωx}/((x² + 1)²(x² + 4)) dx.

In order to compute I we use residues. Consider the complex function g(z) = e^{iωz}/((z² + 1)²(z² + 4)), z ∈ C. So I = 2πi(res(g, i) + res(g, 2i)) (the method used here is included in [26, p. 150]). Since i is a pole of order 2 and 2i is a pole of order 1 of g, it follows that

    res(g, i) = lim_{z→i} ( e^{iωz}/((z + i)²(z² + 4)) )′ = e^{−ω}(3ω + 1)/(36i)

and

    res(g, 2i) = lim_{z→2i} e^{iωz}/((z² + 1)²(z + 2i)) = e^{−2ω}/(36i).

Hence,

    f̂_c(ω) = (π/36) ( e^{−2ω} + e^{−ω}(3ω + 1) ).
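The closed form above can be checked numerically. The sketch below (plain Python with a hand-rolled Simpson rule, outside the book's MATLAB/Maple material) compares a quadrature value of ∫_0^∞ cos ωx/((x² + 1)²(x² + 4)) dx with (π/36)(e^{−2ω} + e^{−ω}(3ω + 1)) at a few sample frequencies:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def f(x):
    # integrand of the cosine Fourier transform in Exercise 10 c)
    return 1.0 / ((x * x + 1) ** 2 * (x * x + 4))

for w in (0.5, 1.0, 2.0):
    num = simpson(lambda x: f(x) * math.cos(w * x), 0.0, 60.0, 60000)
    closed = math.pi / 36 * (math.exp(-2 * w) + math.exp(-w) * (3 * w + 1))
    # the integrand decays like x**(-6), so truncating at 60 is safe
    assert abs(num - closed) < 1e-6, (w, num, closed)
print("ok")
```

The truncation point and step count are chosen loosely; the rapid x⁻⁶ decay makes the check insensitive to them.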
W 10. Determine the cosine Fourier transform f̂_c(ω) (formula (2.26)) of the following functions:

a) f : (0, ∞) → R, f(x) = 1 for x ∈ (0, a) and f(x) = 0 for x ≥ a, a ∈ R;

b) f : (0, ∞) → R, f(x) = sin 2x for x ∈ (0, π) and f(x) = 0 for x ≥ π;

c) f : (0, ∞) → R, f(x) = 1 − x² for x ∈ (0, 1) and f(x) = 0 for x ≥ 1;

d) f : (0, ∞) → R, f(x) = (x² − 1)/(x² + 4)²;

e) f : (0, ∞) → R, f(x) = 1/(x⁴ + 7x² + 10).

Answer. a) One gets f̂_c(ω) = ∫_0^a cos ωx dx = sin ωa/ω;

b) If ω ≠ 2, then

    f̂_c(ω) = ∫_0^π sin 2x cos ωx dx = ∫_0^π (sin(2x + ωx) + sin(2x − ωx))/2 dx = 2(1 − cos ωπ)/(4 − ω²),

and f̂_c(2) = ∫_0^π sin 2x cos 2x dx = (1/2) ∫_0^π sin 4x dx = 0;

c) One obtains f̂_c(ω) = ∫_0^1 (1 − x²) cos ωx dx. Performing two times integration by parts, one finds

    f̂_c(ω) = (2/ω²) ( sin ω/ω − cos ω );

d) One gets f̂_c(ω) = (π/32) e^{−2ω}(3 − 10ω) (see Exercise 10 c));

e) One obtains f̂_c(ω) = (π/6) ( e^{−√2 ω}/√2 − e^{−√5 ω}/√5 ) (see Exercise 10 c)).
E 11. Determine the sine Fourier transform f̂_s(ω) (formula (2.27)) of the following functions:

a) f : (0, ∞) → R, f(x) = 2 cos²x for x ∈ (0, π) and f(x) = 0 for x ≥ π;

b) f : (0, ∞) → R, f(x) = 2x/(x² + a²), a ∈ R*;

c) f : (0, ∞) → R, f(x) = [x] for x ∈ (0, n) and f(x) = 0 for x ≥ n, where n ∈ N* and [·] denotes the greatest integer function.

Solution. We use formula (2.27) throughout the whole exercise.

a) One obtains f̂_s(ω) = ∫_0^∞ f(x) sin ωx dx = ∫_0^π 2 cos²x sin ωx dx. Since 2 cos²x = 1 + cos(2x), one has

    f̂_s(ω) = ∫_0^π sin ωx dx + ∫_0^π cos(2x) sin ωx dx.

Using the trigonometric formula sin a cos b = (sin(a + b) + sin(a − b))/2, one gets

    f̂_s(ω) = −cos ωx/ω |_0^π + (1/2) ∫_0^π (sin(ωx + 2x) + sin(ωx − 2x)) dx
            = (1 − cos ωπ)/ω + (1/2) ( −cos((ω + 2)x)/(ω + 2) |_0^π − cos((ω − 2)x)/(ω − 2) |_0^π ).

Thus,

    f̂_s(ω) = (1 − cos ωπ)/ω + (1/2) ( (1 − cos(ωπ + 2π))/(ω + 2) + (1 − cos(ωπ − 2π))/(ω − 2) ).

As 2π is one of the periods of the cosine function (in fact, it is its fundamental period), one has

    f̂_s(ω) = (1 − cos ωπ)/ω + (1/2) ( (1 − cos ωπ)/(ω + 2) + (1 − cos ωπ)/(ω − 2) )
            = (1 − cos ωπ)/ω + ((1 − cos ωπ)/2) ( 1/(ω + 2) + 1/(ω − 2) )
            = (1 − cos ωπ)/ω + ω(1 − cos ωπ)/(ω² − 4)
            = (1 − cos ωπ) · (2ω² − 4)/(ω(ω² − 4)).

But the denominator has to be different from 0. Since ω > 0, it follows that the previous expression is valid for ω ≠ 2. So we need to calculate separately f̂_s(2). We obtain

    f̂_s(2) = ∫_0^π 2 cos²x sin 2x dx = ∫_0^π 2 cos²x · 2 sin x cos x dx = 4 ∫_0^π cos³x sin x dx = −cos⁴x |_0^π = 0.

Hence,

    f̂_s(ω) = (1 − cos ωπ)(2ω² − 4)/(ω(ω² − 4)) for ω ∈ (0, ∞) \ {2},   and   f̂_s(2) = 0;
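The piecewise answer can be verified by direct quadrature of the defining integral over (0, π); the following plain-Python sketch (hand-rolled Simpson rule, not part of the book's toolchain) checks both the generic formula and the separately computed value at ω = 2:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def fs_hat(w):
    # direct quadrature of Int_0^pi 2 cos(x)^2 sin(w x) dx
    return simpson(lambda x: 2 * math.cos(x) ** 2 * math.sin(w * x), 0.0, math.pi, 2000)

def closed(w):
    return (1 - math.cos(w * math.pi)) * (2 * w * w - 4) / (w * (w * w - 4))

assert abs(fs_hat(1.0) - 4.0 / 3.0) < 1e-8   # the closed form gives 4/3 at w = 1
assert abs(fs_hat(1.0) - closed(1.0)) < 1e-8
assert abs(fs_hat(0.7) - closed(0.7)) < 1e-8
assert abs(fs_hat(2.0)) < 1e-8               # the separately computed value at w = 2
```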
b) One gets f̂_s(ω) = ∫_0^∞ 2x/(x² + a²) sin ωx dx. But 2x sin ωx/(x² + a²) is an even function, so

    f̂_s(ω) = (1/2) ∫_{−∞}^∞ 2x/(x² + a²) sin ωx dx = Im( ∫_{−∞}^∞ x e^{iωx}/(x² + a²) dx ).

In order to compute ∫_{−∞}^∞ x e^{iωx}/(x² + a²) dx we need residues. Consider the function g(z) = z e^{iωz}/(z² + a²), z ∈ C. The singular points of g are ±ai. Since we need only the points in the upper half plane, we need to distinguish between the following two cases depending on a.

If a > 0, then

    res(g, ai) = z e^{iωz}/(2z) |_{z=ai} = e^{−aω}/2.

Therefore,

    ∫_{−∞}^∞ x e^{iωx}/(x² + a²) dx = 2πi · res(g, ai) = πi e^{−aω}.

If a < 0, then −ai lies in the upper half plane and

    res(g, −ai) = z e^{iωz}/(2z) |_{z=−ai} = e^{aω}/2.

Therefore,

    ∫_{−∞}^∞ x e^{iωx}/(x² + a²) dx = 2πi · res(g, −ai) = πi e^{aω}.

In both cases, f̂_s(ω) = Im(πi e^{−|a|ω}) = π e^{−|a|ω};
c) Since f(x) = 0 for x ∈ (0, 1), f(x) = 1 for x ∈ [1, 2), …, f(x) = k − 1 for x ∈ [k − 1, k), …, f(x) = n − 1 for x ∈ [n − 1, n), and f(x) = 0 for x ≥ n, it follows that

    f̂_s(ω) = ∫_0^∞ f(x) sin ωx dx = Σ_{k=0}^{n−1} ∫_k^{k+1} k sin ωx dx + ∫_n^∞ 0 · sin ωx dx
            = Σ_{k=0}^{n−1} k ∫_k^{k+1} sin ωx dx = Σ_{k=0}^{n−1} k · ( −cos ωx/ω ) |_k^{k+1}.

Thus,

    f̂_s(ω) = (1/ω) Σ_{k=0}^{n−1} k ( cos(ωk) − cos(ω(k + 1)) )
            = (1/ω) ( cos ω − cos(2ω) + 2 cos(2ω) − 2 cos(3ω) + 3 cos(3ω) − 3 cos(4ω) + ··· + (n − 1) cos((n − 1)ω) − (n − 1) cos(nω) )
            = (1/ω) ( S − (n − 1) cos(nω) ),

where S = cos ω + cos(2ω) + ··· + cos((n − 1)ω). Hence,

    f̂_s(ω) = (1/ω) ( sin((2n − 1)ω/2)/(2 sin(ω/2)) − 1/2 − (n − 1) cos(nω) ),

where the computation of S can be found in [18, p. 77].
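The telescoping and the closed form of S can be verified together by comparing the piecewise integration of [x] sin ωx against the final formula; a minimal plain-Python sketch (Simpson rule written by hand, not from the book):

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def fs_direct(w, n):
    # integrate floor(x)*sin(w x): the value on [k, k+1) is the constant k
    return sum(k * simpson(lambda x: math.sin(w * x), float(k), k + 1.0, 200)
               for k in range(n))

def fs_closed(w, n):
    S = math.sin((2 * n - 1) * w / 2) / (2 * math.sin(w / 2)) - 0.5
    return (S - (n - 1) * math.cos(n * w)) / w

for n in (3, 5):
    for w in (0.9, 1.0, 2.3):
        assert abs(fs_direct(w, n) - fs_closed(w, n)) < 1e-9, (n, w)
print("ok")
```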
W 11. Determine the sine Fourier transform f̂_s(ω) (formula (2.27)) of the following functions:

a) f : (0, ∞) → R, f(x) = 1 for x ∈ (0, a) and f(x) = 0 for x ≥ a, a ∈ R;

b) f : (0, ∞) → R, f(x) = sinh 2x for x ∈ (0, π) and f(x) = 0 for x ≥ π;

c) f : (0, ∞) → R, f(x) = 1 − x for x ∈ (0, 1) and f(x) = 0 for x ≥ 1;

d) f : (0, ∞) → R, f(x) = x³/(x² + 4)²;

e) f : (0, ∞) → R, f(x) = ax/(x⁴ + 7x² + 10), a ∈ R*.

Answer. a) f̂_s(ω) = ∫_0^a sin ωx dx = (1 − cos ωa)/ω;

b) One gets

    f̂_s(ω) = ∫_0^π sinh 2x sin ωx dx = ∫_0^π (e^{2x} − e^{−2x})/2 · (e^{iωx} − e^{−iωx})/(2i) dx.

Thus,

    f̂_s(ω) = (1/(4i)) ( ∫_0^π e^{(2+iω)x} dx − ∫_0^π e^{(2−iω)x} dx − ∫_0^π e^{(−2+iω)x} dx + ∫_0^π e^{(−2−iω)x} dx )
            = (1/(4i)) ( (e^{2π+iωπ} − 1)/(2 + iω) − (e^{2π−iωπ} − 1)/(2 − iω) − (e^{−2π+iωπ} − 1)/(−2 + iω) + (e^{−2π−iωπ} − 1)/(−2 − iω) );

c) f̂_s(ω) = ∫_0^1 (1 − x) sin ωx dx = (ω − sin ω)/ω²;

d) Denoting g(z) = z³e^{iωz}/(z² + 4)², one gets

    ∫_{−∞}^∞ x³e^{iωx}/(x² + 4)² dx = 2πi · res(g, 2i) = πi e^{−2ω}(1 − ω).

Therefore, f̂_s(ω) = (π/2) e^{−2ω}(1 − ω);

e) f̂_s(ω) = (πa/6) ( e^{−√2 ω} − e^{−√5 ω} ) (see Exercise 11 b)).
F_s^{−1}[f̂_s](x) = f(x) = (2/π) ∫_0^∞ f̂_s(ω) sin ωx dω. It follows that the unknown of the integral equation is

    f(x) = (2/π) ∫_0^π cos nω sin ωx dω.

By transforming the trigonometric product into a sum, one obtains

    f(x) = (2/π) ∫_0^π (sin(nω + ωx) + sin(ωx − nω))/2 dω
         = (1/π) ( −cos((n + x)ω)/(n + x) |_0^π − cos((x − n)ω)/(x − n) |_0^π )
         = (1/π) ( (1 − cos((n + x)π))/(n + x) + (1 − cos((x − n)π))/(x − n) ).

Thus,

    f(x) = (1/π) ( ((−1)^{n+1} cos πx + 1)/(n + x) + ((−1)^{n+1} cos πx + 1)/(x − n) )
         = 2x ( (−1)^{n+1} cos πx + 1 )/(π(x² − n²)).

Since x > 0, it follows that the previous expression makes sense for x ≠ n. For x = n we have to do a separate computation. We get that

    f(n) = (2/π) ∫_0^π cos nω sin nω dω = (1/π) ∫_0^π sin 2nω dω = −(1/π) · cos 2nω/(2n) |_0^π = 0;
c) One obtains

    f(x) = (2/π) ∫_0^∞ f̂_c(ω) cos ωx dω = (2/π) ∫_0^∞ cos ωx/(ω⁴ + 13ω² + 36) dω.

As cos ωx/(ω⁴ + 13ω² + 36) is an even function with respect to the ω variable, we get

    ∫_0^∞ cos ωx/(ω⁴ + 13ω² + 36) dω = (1/2) ∫_{−∞}^∞ cos ωx/(ω⁴ + 13ω² + 36) dω.

Also sin ωx/(ω⁴ + 13ω² + 36) is an odd function with respect to the ω variable. Therefore,

    ∫_{−∞}^∞ sin ωx/(ω⁴ + 13ω² + 36) dω = 0.
Putting together the previous facts, one gets

    f(x) = (2/π) · (1/2) ∫_{−∞}^∞ cos ωx/(ω⁴ + 13ω² + 36) dω + i · 0
         = (1/π) ∫_{−∞}^∞ cos ωx/(ω⁴ + 13ω² + 36) dω + (i/π) ∫_{−∞}^∞ sin ωx/(ω⁴ + 13ω² + 36) dω
         = (1/π) ∫_{−∞}^∞ (cos ωx + i sin ωx)/(ω⁴ + 13ω² + 36) dω
         = (1/π) ∫_{−∞}^∞ e^{iωx}/(ω⁴ + 13ω² + 36) dω.

Consider now the complex function g(z) = e^{izx}/(z⁴ + 13z² + 36), z ∈ C. The singular points of g are ±2i and ±3i, all being poles of order 1. We have the following computation (using residues):

    ∫_{−∞}^∞ e^{iωx}/(ω⁴ + 13ω² + 36) dω = 2πi ( res(g, 2i) + res(g, 3i) )
        = 2πi ( e^{izx}/(4z³ + 26z) |_{z=2i} + e^{izx}/(4z³ + 26z) |_{z=3i} )
        = 2πi ( e^{−2x}/(20i) + e^{−3x}/(−30i) )
        = (π/5) ( e^{−2x}/2 − e^{−3x}/3 ).

It follows that

    f(x) = (1/π) · (π/5) ( e^{−2x}/2 − e^{−3x}/3 ) = (1/5) ( e^{−2x}/2 − e^{−3x}/3 ).
d) One gets

    f(x) = (2/π) ∫_0^∞ f̂_s(ω) sin ωx dω = (2/π) ∫_0^∞ 2ω sin ωx/(ω² + 1)² dω.

As 2ω sin ωx/(ω² + 1)² is an even function with respect to the ω variable, we get

    f(x) = (1/π) ∫_{−∞}^∞ 2ω sin ωx/(ω² + 1)² dω = (1/π) · Im( ∫_{−∞}^∞ 2ω e^{iωx}/(ω² + 1)² dω ).

Consider the function g(z) = 2z e^{izx}/(z² + 1)², z ∈ C. It is obvious that i is the only singular point of g situated in the upper half plane and it is a pole of order 2. Since

    res(g, i) = lim_{z→i} ( 2z e^{izx}/(z + i)² )′ = x e^{−x}/2,

it follows that

    ∫_{−∞}^∞ 2ω e^{iωx}/(ω² + 1)² dω = 2πi · res(g, i) = πi x e^{−x}

and

    f(x) = (1/π) · Im( πi x e^{−x} ) = x e^{−x}.
We denote I = ∫_0^b e^{aω−b} sin ωx dω, and one can solve it integrating by parts twice. We get

    I = (1/a) sin bx − (x/a²)(cos bx − 1) − (x²/a²) I,

so

    I = ( a sin bx − x(cos bx − 1) )/(x² + a²)

and

    f(x) = 2 ( a sin bx − x(cos bx − 1) )/(π(x² + a²));
b) One obtains

    f(x) = (2/π) ∫_0^∞ f̂_c(ω) cos ωx dω = (2/π) ∫_0^1 sin 2πω cos ωx dω
         = (1/π) ∫_0^1 ( sin(2πω + ωx) + sin(2πω − ωx) ) dω.

Thus,

    f(x) = 4(1 − cos x)/(4π² − x²)   (if x ≠ 2π)

and

    f(2π) = (2/π) ∫_0^1 sin 2πω cos 2πω dω = (1/π) ∫_0^1 sin 4πω dω = 0;
c) One gets

    f(x) = (2/π) ∫_0^∞ f̂_c(ω) cos ωx dω = (2/π) ∫_0^∞ cos ωx/(ω² + 4)² dω
         = (1/π) ∫_{−∞}^∞ cos ωx/(ω² + 4)² dω = (1/π) ∫_{−∞}^∞ e^{iωx}/(ω² + 4)² dω
         = (1/π) · 2πi · res( e^{izx}/(z² + 4)², 2i ) = e^{−2x}(2x + 1)/16;

d) Hint: z⁴ + 1 = 0 ⇔ z⁴ + 2z² + 1 − 2z² = 0 ⇔ (z² + 1)² − (√2 z)² = 0. Hence, the roots are (√2/2)(1 ± i) and (√2/2)(−1 ± i). We get

    f(x) = e^{−(√2/2)x} cos( (√2/2)x ).
E 13. Represent the following functions as a Fourier integral:

a) f : R → R, f(x) = a for |x| < b and f(x) = 0 for |x| ≥ b, a, b ∈ R, b > 0;

b) f : R → R, f(x) = e^x for |x| < 1 and f(x) = 0 for |x| ≥ 1;

c) f : R → R, f(x) = 1/(x² + x + 1).

Solution. We use the real form of the Fourier integral,

    f(x) = (1/π) ∫_0^∞ cos ωx ( ∫_{−∞}^∞ f(y) cos ωy dy ) dω + (1/π) ∫_0^∞ sin ωx ( ∫_{−∞}^∞ f(y) sin ωy dy ) dω,

and denote

    A(ω) := ∫_{−∞}^∞ f(y) cos ωy dy   and   B(ω) := ∫_{−∞}^∞ f(y) sin ωy dy.

a) One obtains A(ω) = ∫_{−b}^b a cos ωy dy = a · sin ωy/ω |_{−b}^b = 2a sin ωb/ω and B(ω) = ∫_{−b}^b a sin ωy dy = 0 (since the function inside the integral is odd with respect to the y variable). It follows that

    f(x) = (2a/π) ∫_0^∞ (sin ωb/ω) cos ωx dω;
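The coefficients A(ω), B(ω) and the reconstruction can be checked numerically. The sketch below (plain Python, hand-rolled Simpson rule; the pulse height `amp` and half-width `b` are arbitrary sample values) verifies A(ω) = 2a sin ωb/ω, B(ω) = 0, and that the truncated frequency integral reproduces f(0) = a:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

amp, b = 1.5, 2.0  # sample pulse height and half-width

def A(w):
    return simpson(lambda y: amp * math.cos(w * y), -b, b, 4000)

def B(w):
    return simpson(lambda y: amp * math.sin(w * y), -b, b, 4000)

for w in (0.3, 1.0, 2.5):
    assert abs(A(w) - 2 * amp * math.sin(w * b) / w) < 1e-9
    assert abs(B(w)) < 1e-9  # odd integrand

# reconstruction at x = 0: (2a/pi) * Int_0^W sin(w b)/w dw -> a as W -> infinity
def kernel(w):
    return b if w == 0.0 else math.sin(b * w) / w

f0 = 2 * amp / math.pi * simpson(kernel, 0.0, 400.0, 80000)
assert abs(f0 - amp) < 0.01  # slow (1/W) convergence, hence the loose tolerance
```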
b) One gets A(ω) = ∫_{−1}^1 e^y cos ωy dy and B(ω) = ∫_{−1}^1 e^y sin ωy dy. The easiest way to compute the integrals is to organize the computations in the following way:

    A(ω) + iB(ω) = ∫_{−1}^1 e^y e^{iωy} dy = ∫_{−1}^1 e^{(1+iω)y} dy = e^{(1+iω)y}/(1 + iω) |_{−1}^1
                 = (e^{1+iω} − e^{−1−iω})/(1 + iω) = (1 − iω)(e^{1+iω} − e^{−1−iω})/(1 + ω²).
Since, for z = a + ib ∈ C,
c) One obtains

    A(ω) = ∫_{−∞}^∞ cos ωy/(y² + y + 1) dy   and   B(ω) = ∫_{−∞}^∞ sin ωy/(y² + y + 1) dy.

Using the same idea as before,

    A(ω) + iB(ω) = ∫_{−∞}^∞ e^{iωy}/(y² + y + 1) dy = 2πi · res( e^{iωz}/(z² + z + 1), ε ),

where ε = (−1 + i√3)/2 is a pole of order 1 of the function e^{iωz}/(z² + z + 1), z ∈ C. So

    A(ω) + iB(ω) = 2πi · e^{iωz}/(2z + 1) |_{z=ε} = (2π/√3) e^{−ω√3/2 − iω/2}
                 = (2π/√3) e^{−ω√3/2} ( cos(ω/2) − i sin(ω/2) )

and A(ω) = (2π/√3) e^{−ω√3/2} cos(ω/2), B(ω) = −(2π/√3) e^{−ω√3/2} sin(ω/2). It follows that

    f(x) = (2/√3) ∫_0^∞ e^{−ω√3/2} ( cos(ω/2) cos ωx − sin(ω/2) sin ωx ) dω
         = (2/√3) ∫_0^∞ e^{−ω√3/2} cos( ω/2 + ωx ) dω.
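The residue computation for A(ω) and B(ω) can be cross-checked by truncated quadrature of ∫ e^{iωy}/(y² + y + 1) dy; the tail beyond |y| = 200 is O(1/200²), so a loose tolerance suffices. A plain-Python sketch (not from the book):

```python
import cmath
import math

def simpson_c(g, a, b, n):
    # complex-valued composite Simpson rule
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def AB(w):
    # A(w) + i B(w) as a truncated integral
    return simpson_c(lambda y: cmath.exp(1j * w * y) / (y * y + y + 1), -200.0, 200.0, 40000)

for w in (0.5, 1.0, 2.0):
    val = AB(w)
    pref = 2 * math.pi / math.sqrt(3) * math.exp(-w * math.sqrt(3) / 2)
    assert abs(val.real - pref * math.cos(w / 2)) < 1e-3
    assert abs(val.imag + pref * math.sin(w / 2)) < 1e-3
print("ok")
```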
Also B(1) = 2 ∫_0^π sin²y dy = ∫_0^π (1 − cos 2y) dy = π. Therefore,

    f(x) = (2/π) ∫_0^∞ ( cos ωπ/(1 − ω²) ) sin ωx dω;

c) B(ω) = 0 and

    A(ω) = ∫_{−∞}^∞ cos ωy/(2y² + 1)² dy = Re( ∫_{−∞}^∞ e^{iωy}/(2y² + 1)² dy )
         = Re( 2πi · res( e^{iωz}/(2z² + 1)², i/√2 ) ) = (π/(4√2)) e^{−ω/√2} ( ω√2 + 2 ).

Therefore,

    f(x) = (1/(4√2)) ∫_0^∞ e^{−ω/√2} ( ω√2 + 2 ) cos ωx dω;

d) One gets

    A(ω) + iB(ω) = ∫_{−∞}^∞ e^{iωy}/(2y² + 2y + 1) dy = 2πi · res( e^{iωz}/(2z² + 2z + 1), (−1 + i)/2 )
                 = π e^{−ω/2 − iω/2} = π e^{−ω/2} ( cos(ω/2) − i sin(ω/2) ),

so A(ω) = π e^{−ω/2} cos(ω/2) and B(ω) = −π e^{−ω/2} sin(ω/2). Therefore,

    f(x) = ∫_0^∞ e^{−ω/2} cos( ω/2 + ωx ) dω.
E 14. Consider the function f : (0, ∞) → (0, ∞), f(x) = 1/√x.

a) Prove that 1/√x = (2/√π) ∫_0^∞ e^{−xu²} du.

b) Compute f̂_c(ω) (using a)).
Solution. a) Employing the substitution √x · u = t, one gets

    ∫_0^∞ e^{−xu²} du = ∫_0^∞ e^{−t²} dt/√x = (1/√x) ∫_0^∞ e^{−t²} dt = √π/(2√x),

so that (2/√π) ∫_0^∞ e^{−xu²} du = 1/√x;

b) One obtains

    f̂_c(ω) = ∫_0^∞ (1/√x) cos ωx dx = ∫_0^∞ ( (2/√π) ∫_0^∞ e^{−xu²} du ) cos ωx dx
            = (2/√π) ∫_0^∞ ( ∫_0^∞ e^{−xu²} cos ωx dx ) du.

Let us denote I(u) = ∫_0^∞ e^{−xu²} cos ωx dx. We are going to solve it integrating by parts twice. We obtain

    I(u) = e^{−xu²} sin ωx/ω |_0^∞ + (u²/ω) ∫_0^∞ e^{−xu²} sin ωx dx = 0 + (u²/ω) ∫_0^∞ e^{−xu²} sin ωx dx
         = (u²/ω) ( −e^{−xu²} cos ωx/ω |_0^∞ − (u²/ω) ∫_0^∞ e^{−xu²} cos ωx dx )
         = u²/ω² − (u⁴/ω²) I(u),

so I(u) = u²/(u⁴ + ω²). Therefore, f̂_c(ω) = (2/√π) ∫_0^∞ u²/(u⁴ + ω²) du. One way to calculate this last integral is to decompose it in simple fractions. We suggest a different approach, namely the following:

    ∫_0^∞ u²/(u⁴ + ω²) du = (1/2) ( ∫_0^∞ (u² + ω)/(u⁴ + ω²) du + ∫_0^∞ (u² − ω)/(u⁴ + ω²) du ).

Let us denote I1(ω) = ∫_0^∞ (u² + ω)/(u⁴ + ω²) du and I2(ω) = ∫_0^∞ (u² − ω)/(u⁴ + ω²) du. We get

    I1(ω) = ∫_0^∞ (1 + ω/u²)/(u² + ω²/u²) du = ∫_0^∞ (u − ω/u)′ / ( (u − ω/u)² + 2ω ) du.

Now we use the substitution u − ω/u = t. One gets

    I1(ω) = ∫_{−∞}^∞ dt/(t² + 2ω) = (1/√(2ω)) arctan( t/√(2ω) ) |_{−∞}^∞ = (1/√(2ω)) (π/2 + π/2) = π/√(2ω).
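The substitution argument for I1 can be sanity-checked by direct quadrature of the original integral in u; the integrand decays like u⁻², so truncating at 2000 leaves a tail of roughly 1/2000, which sets the tolerance in this plain-Python sketch (not from the book):

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

for w in (0.5, 1.0, 2.0):
    num = simpson(lambda u: (u * u + w) / (u ** 4 + w * w), 0.0, 2000.0, 40000)
    # I1(w) = pi / sqrt(2 w); the truncated tail contributes about 1/2000
    assert abs(num - math.pi / math.sqrt(2 * w)) < 5e-3, (w, num)
print("ok")
```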
W 14. Calculate f̂_s(ω) for f : (0, ∞) → (0, ∞), f(x) = 1/√x.
From Example 2.1.7 (the Cardinal Sine), we know that

    f̂(u) = π for u ∈ (−1, 1),   f̂(u) = 0 for |u| > 1,   f̂(u) = π/2 for u = ±1.

It follows that

    ĝ(ω − u) = π for ω − u ∈ (−1, 1),   ĝ(ω − u) = 0 for |ω − u| > 1,   ĝ(ω − u) = π/2 for ω − u = ±1,

which is equivalent to

    ĝ(ω − u) = π for u ∈ (−1 + ω, 1 + ω),   ĝ(ω − u) = 0 for u ∈ R \ [−1 + ω, 1 + ω],   ĝ(ω − u) = π/2 for u = ω ± 1.

Therefore, we have the following cases:

    if ω ∈ [−2, 0], then H[h](ω) = (1/2π) ∫_{−1}^{ω+1} π² du = (π/2)(ω + 2);

    if ω ∈ [0, 2], then H[h](ω) = (1/2π) ∫_{ω−1}^{1} π² du = (π/2)(2 − ω);
1. F = fourier(f ). This returns the Fourier transform F of the symbolic preimage f . The default time variable is x and the default frequency variable is w. The definition that is used is similar to formula (2.3) in Definition 2.1.2, but with the kernel e^{−iwx} (instead of e^{iwx}, see Remark 2.4.4):

    F(w) = ∫_{−∞}^∞ f(x) e^{−iwx} dx.
Exponentials
Example 2.8.1.
>> syms x;
f = exp(−xˆ2);
F = fourier(f );
The answer is F = piˆ(1/2)/ exp(wˆ2/4).
Example 2.8.2.
>> syms u v;
f = exp(−9*uˆ2);
F = fourier(f, u, v);
The answer is F = (piˆ(1/2)* exp(−vˆ2/36))/3.
Example 2.8.3.
>> syms x;
f = 1/(1 + xˆ2);
F = fourier(f );
The answer is F = pi* exp(−abs(w)).
Example 2.8.4. The fourier command can also be applied to an expression.
>> F = fourier(1/(xˆ2 + 4), x, w);
The answer is F = (pi* exp(−2*abs(w)))/2.
Example 2.8.5.
>> syms x;
f = 1/(aˆ2 + xˆ2);
F = fourier(f );
The answer is F = (pi* exp(−abs(w)*(aˆ2)ˆ(1/2)))/(aˆ2)ˆ(1/2).
Example 2.8.6.
>> syms x;
f = (2 − i*x)ˆ(−3);
F = fourier(f );
The answer is F = (wˆ2*pi* exp(−2*w))/2 + (wˆ2*pi* exp(−2*w)*
sign(w))/2.
Example 2.8.7.
>> syms x;
f = (5 − i*x)ˆ(−4);
F = fourier(f );
The answer is F = (wˆ3*pi* exp(−5*w))/6 + (wˆ3*pi* exp(−5*w)*
sign(w))/6.
Example 2.8.8.
>> syms x;
f = exp(−3*abs(x));
F = fourier(f );
The answer is F = 6/(wˆ2 + 9).
Example 2.8.9.
>> syms u w a;
f = exp(−a*abs(u));
F = fourier(f, u, w);
The answer is F = (2*a)/(aˆ2 + wˆ2).
Heaviside’s Step Function
Example 2.8.10.
>> syms x;
f = heaviside(x) − heaviside(x − 1);
F = fourier(f );
The answer is F = (sin(w) + cos(w)*1i)/w − 1i/w.
Example 2.8.11.
>> syms x;
f = heaviside(x) − heaviside(x − 1) + heaviside(−x) − heaviside(−x − 1);
F = fourier(f );
The answer is F = −(cos(w)*1i − sin(w))/w + (sin(w) + cos(w)*1i)/w.
Another way that this can be done is the following:
>> syms x;
f = heaviside(x) − heaviside(x − 1) + heaviside(−x) − heaviside(−x − 1);
F = simplify(fourier(f ));
The answer is F = (2* sin(w))/w.
Example 2.8.12. If the transformation variable is not specified, then the
fourier command uses the variable x. Compare the following two examples:
i. >> syms x y z;
f = exp(−xˆ2)* exp(−yˆ2);
F = fourier(f, z);
Example 2.8.14. The fourier command returns F even if the function f
is not absolutely integrable.
>> syms x;
f = x/(xˆ2 + 1);
F = fourier(f );
The answer is F = −pi* exp(−abs(w))*sign(w)*1i.
Translation
Theorem 2.2.5 (Translation) and formula (2.8) (with ω − a instead of
ω + a) can be used together with the fourier command.
Example 2.8.20.
>> syms f (x) w;
fourier(exp(i*x*3)*f (x));
ans = fourier(f (x), x, w − 3).
The fourier command can be applied to matrices.
Matrices
Example 2.8.21.
>> syms x y v w z;
m = [exp(−2*xˆ2) 1/(yˆ2 + 16); exp(−3*x)*heaviside(x) 5];
M = fourier(m, [x y; x y], [w z; v w]);
The answer is
M = [(2ˆ(1/2)*piˆ(1/2)* exp(−wˆ2/8))/2, (pi* exp(−4*abs(z)))/4]
[ 1/(3 + v*1i), 10*pi*dirac(w)].
Example 2.8.22.
>> F = exp(−wˆ2/4);
f = ifourier(F );
The answer is f = exp(−xˆ2)/piˆ(1/2).
Example 2.8.23. If F = F (x), then the ifourier command returns f as a
function of t, namely f = f (t).
>> F = −x* exp(−xˆ2/4)*1i;
f = ifourier(F );
The answer is f = (2*t* exp(−tˆ2))/piˆ(1/2).
Example 2.8.24.
>> syms x real;
g = pi* exp(−abs(x));
ifourier(g, z);
ans = 1/(zˆ2 + 1).
Example 2.8.25.
>> syms w u;
syms a positive;
F = exp(−wˆ2/(4*aˆ2));
f = ifourier(F, w, u);
The answer is f = (a* exp(−aˆ2*uˆ2))/piˆ(1/2).
As before, some properties of the Fourier transform can be obtained using
the ifourier command.
i. g = ifourier(F );
Translation
Theorem 2.2.3 (Time Delay) and formula (2.7) (with x + a instead of
x − a) can be used together with the ifourier command.
Example 2.8.28.
>> syms F (w) x;
ifourier(exp(i*w*5)*F (w));
ans = fourier(F (w), w, −x − 5)/(2*pi).
If the ifourier command cannot find an explicit representation of the
transform, then it returns results in terms of the direct Fourier transform.
Example 2.8.29.
>> syms F (w) t;
f = ifourier(F, w, t);
The answer is f = fourier(F (w), w, −t)/(2*pi).
The ifourier command can also be applied to matrices.
Matrices
Example 2.8.30.
>> syms x y v w z;
M = [exp(−wˆ2/8), (pi* exp(−4*abs(z)))/4; 1/(3 + v*1i), 10*pi*dirac(w)];
m = ifourier(M, [w z; v w], [x y; x y]);
The answer is
m = [(2ˆ(1/2)* exp(−2*xˆ2))/piˆ(1/2), 1/(yˆ2 + 16)]
[ (exp(−3*x)*(sign(x) + 1))/2, 5].
2.8.3 Fast Fourier Transform
The syntax is the following: F = fft(f ). This computes the discrete Fourier transform (DFT) of f using a fast Fourier transform (FFT) algorithm.
Example 2.8.31.
>> f = [1 1 0 − 1];
F = fft(f );
f 1 = ifft(F );
f1 = 1 1 0 − 1.
Example 2.8.32.
>> F s = 10; % Sampling frequency
t = −0.5 : 1/F s : 0.5;
f = 1/(4*sqrt(2*pi*0.1))* exp(−t.ˆ2/(2*0.1));
F = fft(f ); % Convert a Gaussian pulse from the time domain to the frequency domain
f = ifft(F ); % computes the inverse Fast Fourier Transform
The - sign in the exponent is just a convention, but we will see that it will
also impact the formula for the inverse Fourier transform.
The fourier command recognizes derivatives (diff or Diff ), integrals
(int or Int), the Dirac delta (or unit-impulse) function as Dirac(t) and
Heaviside’s step function as Heaviside(t).
Example 2.9.2.
> with(inttrans):
  fourier(5/(x^2 + 3), x, w)

        (5/3)·√3·π·( e^{√3 w} Heaviside(−w) + e^{−√3 w} Heaviside(w) )
Exponentials
Example 2.9.3.
> with(inttrans):
  fourier(exp(−x^2), x, w)

        e^{−w²/4} √π

Example 2.9.4.
> with(inttrans):
  assume(0 < a):
  fourier(exp(−a^2·x^2), x, w)

        e^{−(1/4)w²/a∼²} √(π/a∼²)

Example 2.9.5.
> with(inttrans):
  assume(0 < a):
  fourier(x·exp(−a^2·x^2), x, w)

        −(1/2) I w e^{−(1/4)w²/a∼²} √π / a∼³

Example 2.9.6.
> with(inttrans):
  assume(0 < a):
  fourier(x^3·exp(−a^2·x^2), x, w)

        (1/8) I w e^{−(1/4)w²/a∼²} √π (−6a∼² + w²) / a∼⁷
Example 2.9.7.
> with(inttrans):
  F := fourier(x·exp(−5·x)·Heaviside(x), x, w)

        F = 1/(5 + Iw)²

Convolution
Example 2.9.10.
> with(inttrans):
  F := fourier(int(g(t)·h(x − t), t = −∞..∞), x, w)

        F = fourier(g(x), x, w)·fourier(h(x), x, w)

Example 2.9.11.
> with(inttrans):
  fourier(diff(y(t), t, t) + y(t) = sin(t), t, s)

        −(s − 1)(s + 1)·fourier(y(t), t, s) = I·π·(Dirac(s + 1) − Dirac(s − 1))
Example 2.9.12.
> with(inttrans):
  f(t) = invfourier(1/(3 + Iw)^2, w, t)

        f(t) = t e^{−3t} Heaviside(t)

Example 2.9.13.
> with(inttrans):
  assume(0 < a):
  f(x) = invfourier( e^{−(1/4)w²/a∼²} √(π/a∼²), w, x )

        f(x) = e^{−a∼² x²}

Example 2.9.14.
> with(inttrans):
  f(x) = invfourier( w e^{−3w} Heaviside(w), w, x )

        f(x) = 1/( 2(−3 + Ix)² π )
2.9.4 Fourier Sine Transform
The syntax is F = fouriersin(f, t, s) and it returns the Fourier sine transform F (s) of the symbolic preimage f (t). The following normalized form of formula (2.27) is used as its definition:

    F(s) = (√2/√π) ∫_0^∞ f(t) sin(st) dt.

The Fourier sine transform is self-inverting (since the normalized forms of formulas (2.27) and (2.29) are similar).

Example 2.9.16.
> with(inttrans):
  fouriersin(t/(t^2 + 1), t, s)

        (1/2)·√2·√π·e^{−s}
padding = num. Here num specifies the maximum amount of zero padding that can be used to compute the Fourier transform more efficiently;
inplace = bool. Here bool can have the value true or false and specifies whether the output overwrites the input. By default this is false.
Efficiency
The most efficient transform is obtained when the data length(s) are highly composite, the input data has datatype = complex8 and the computation is performed inplace.
Example 2.9.17.
> z := Vector(4, k → evalf15(3 + exp(I·2·π·k/4)), datatype = complex8)

        z = [3. + 1. I, 2. + 0. I, 3. − 1. I, 4. + 0. I]

> with(DiscreteTransforms):
  FourierTransform(z, inplace = true)

        [6. + 0. I, 0. + 2. I, 0. + 0. I, 0. + 0. I]

  InverseFourierTransform(z, inplace = true)

        [3. + 1. I, 2. + 0. I, 3. − 1. I, 4. + 0. I]
Example 2.9.18.
> X, Y := Vector(5, k → evalf(cos(k/4)), datatype = float8), Vector(5, k → evalf(sin(k/3)), datatype = float8)

        X = [0.968912421700000, 0.877582561900000, 0.731688868900000, 0.540302305900000, 0.315322362400000]
        Y = [0.327194696800000, 0.618369803100000, 0.841470984800000, 0.971937901300000, 0.995407957700000]

> X1, Y1 := FourierTransform(X, Y)

        X1 = [1.53564585484536, −0.0567036805932297, 0.133879158532305, 0.221118058877586, 0.332614647503121]
        Y1 = [1.67901037959404, −0.576205462152703, −0.253334690322848, −0.120540144863056, 0.00269950166679971]

> InverseFourierTransform(X1, Y1, inplace = true)

        X = [0.968912421700000, 0.877582561900000, 0.731688868900000, 0.540302305900000, 0.315322362400000]
        Y = [0.327194696800000, 0.618369803100000, 0.841470984800000, 0.971937901300000, 0.995407957700000]
Chapter 3
Laplace Transform
simplifies the process of analyzing the behavior of the system, or allows the
synthesis of a new system of a lesser dimension.
Various facets of the Laplace transform are presented in [8], [24, Chap-
ter 9], [28] and [33].
3.1 Definition
Definition 3.1.1. Consider a function f : [0, ∞) → R(C). The function F : C → C defined by

    F(s) = L[f(t)](s) = ∫_0^∞ f(t)e^{−st} dt    (3.1)

is called the Laplace transform (or the image) of the function f(t), or the signal in the frequency domain, provided the Riemann improper integral exists. At the same time f is called the preimage of F, or the signal in the time domain.
The operator L which associates with the function f(t) its Laplace transform F(s) is called the Laplace transform. The improper integral ∫_0^∞ f(t)e^{−st} dt is called the Laplace integral.
Let us consider the functions f(t) which are absolutely integrable on [0, ∞), i.e. ∫_0^∞ |f(t)| dt < ∞, for which F(a) exists for some a ∈ [0, ∞). Then one can show that F(s) is an analytic function of s ∈ C in the half plane Re(s) > a (see [7]). In the sequel we will use a large class of functions for which the Laplace transform exists. For this we need to introduce the following definitions.
Definition 3.1.2. A function f has a jump discontinuity at a point a if both of the one-sided limits lim_{t→a, t<a} f(t) = f(a−) and lim_{t→a, t>a} f(t) = f(a+) exist, are finite numbers and f(a−) ≠ f(a+).
A function f is piecewise continuous on [0, ∞) if it is continuous on any finite subinterval of [0, ∞) except at finitely many jump discontinuities.
Definition 3.1.3. One says that a function f has exponential order α if there exist constants M > 0 and α ∈ R such that for some t0 ≥ 0

    |f(t)| ≤ M e^{αt},   ∀t ≥ t0.

The smallest exponential order α is called the index of growth of f and it is denoted by σ or σ_f.
Definition 3.1.4. A function f : R → R(C) is called an original function if
f satisfies the following conditions:
i) f (t) = 0, ∀t < 0;
If σ ≥ 0, then |f(t)| ≤ M2 ≤ M2 e^{σt} for t ∈ [0, t0]. If σ < 0, then σ(t − t0) ≥ 0 for t ∈ [0, t0]. Hence, |f(t)| ≤ M2 ≤ M2 e^{σ(t−t0)} = M2 e^{−σt0} e^{σt}. Choosing M3 = M2 e^{−σt0}, it follows that |f(t)| ≤ M3 e^{σt} for t ∈ [0, t0]. If one denotes M = max(M1, M2) in the case σ ≥ 0 and M = max(M1, M3) in the case σ < 0, one gets

    |f(t)| ≤ M e^{σt},   for t ∈ [0, ∞).

Now, for any b > 0, | ∫_0^b f(t)e^{−st} dt | ≤ ∫_0^b |f(t)e^{−st}| dt, and since |e^z| = e^{Re(z)}, z ∈ C, one obtains

    ∫_0^b |f(t)e^{−st}| dt ≤ M ∫_0^b e^{σt} · e^{−Re(s)t} dt = M ∫_0^b e^{−(Re(s)−σ)t} dt
        = M/(−(Re(s) − σ)) · e^{−(Re(s)−σ)t} |_0^b
        = M/(Re(s) − σ) · ( 1 − e^{−(Re(s)−σ)b} ).
Remark 3.1.8. The converse is not true. There are functions which have no index of growth, but whose Laplace transform exists, for example the function f(t) = 2t e^{t²} · cos(e^{t²}) (see [28, Example 1.14]).
On the other hand, the function f(t) = 1/√t has a non-finite right-hand limit at 0, since lim_{t→0, t>0} 1/√t = ∞; hence 0 is not a jump discontinuity of f and f is not piecewise continuous, but it still has the Laplace transform F(s) = √π/√s (see Example 3.2.33 b)).
Now we will compute the Laplace transform of some common functions using the definition.

Example 3.1.9 (Heaviside's Step Function). Consider the function h(t) = 0 for t < 0, h(0) = 1/2, h(t) = 1 for t > 0. Then

    H(s) = L[h(t)](s) = ∫_0^∞ h(t)e^{−st} dt = ∫_0^∞ e^{−st} dt = e^{−st}/(−s) |_0^∞ = lim_{t→∞} e^{−st}/(−s) + 1/s.

From Theorem 3.1.6 (Existence), s = s1 + is2 ∈ C with Re(s) = s1 > σ = 0. Therefore,

    lim_{t→∞} |e^{−st}| = lim_{t→∞} e^{−s1 t} = 0,

so lim_{t→∞} e^{−st}/(−s) = 0. Hence, H(s) = L[h(t)](s) = 1/s for every s ∈ C with Re(s) > 0.
so lim_{t→∞} e^{at−st}/(−s) = 0.
It follows that F(s) = L[f(t)](s) = 1/(s − a). We have now a first important Laplace transform, namely

    L[e^{at}](s) = 1/(s − a),   a ∈ C, t ≥ 0.    (3.2)
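Formula (3.2) can be sanity-checked numerically, including for complex s, by truncating the Laplace integral; a plain-Python sketch (hand-rolled complex Simpson rule, not part of the book's MATLAB/Maple material):

```python
import cmath

def simpson_c(g, a, b, n):
    # complex-valued composite Simpson rule
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def laplace_num(f, s, T=60.0, n=30000):
    # truncated Laplace integral; valid when Re(s) exceeds the growth index of f
    return simpson_c(lambda t: f(t) * cmath.exp(-s * t), 0.0, T, n)

a = 0.5
for s in (2.0 + 0j, 2.0 + 3.0j, 1.0 - 1.0j):
    num = laplace_num(lambda t: cmath.exp(a * t), s)
    assert abs(num - 1 / (s - a)) < 1e-6, (s, num)
print("ok")
```

Each tested s satisfies Re(s) > a = 0.5, so the truncated tail decays like e^{−(Re(s)−a)T}.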
Example 3.1.11 (Power Function). Consider the function f(t) = t^n for t ≥ 0 and f(t) = 0 for t < 0, n ∈ N.
The power function is continuous for n ∈ N* and has a jump discontinuity at 0 if n = 0. The Taylor expansion of the function e^t around 0 is e^t = Σ_{n=0}^∞ t^n/n! for every t ∈ R. It follows that for t ≥ 0, e^t > t^n/n! or, equivalently, t^n < n!·e^t. So, for t ≥ 0, one has |t^n| = t^n < n!·e^t and the power function is an original function with M = n! and σ = 1.
Let us now compute the Laplace transform of this function using integration by parts. We obtain that

    F_n(s) = ∫_0^∞ f(t)e^{−st} dt = ∫_0^∞ t^n e^{−st} dt = ∫_0^∞ t^n · ( e^{−st}/(−s) )′ dt
           = t^n · e^{−st}/(−s) |_0^∞ − ∫_0^∞ n t^{n−1} · e^{−st}/(−s) dt
           = lim_{t→∞} t^n · e^{−st}/(−s) − 0 + (n/s) ∫_0^∞ t^{n−1} e^{−st} dt
           = lim_{t→∞} t^n · e^{−st}/(−s) + (n/s) · F_{n−1}(s).

From Theorem 3.1.6 (Existence), s = s1 + is2 ∈ C with Re(s) = s1 > σ = 1. On the other hand, for t ≥ 0, we get that

    lim_{t→∞} |t^n e^{−st}| = lim_{t→∞} t^n · e^{−s1 t} ≤ lim_{t→∞} n! e^t · e^{−s1 t} = lim_{t→∞} n! · e^{(1−s1)t} = 0.

Hence, one obtains the recurrence F_n(s) = (n/s) · F_{n−1}(s), n ∈ N*. It follows that F_n(s) = (n/s) · F_{n−1}(s) = (n(n − 1)/s²) · F_{n−2}(s) = ··· = (n!/s^n) F_0(s). But F_0(s) = ∫_0^∞ t^0 e^{−st} dt = ∫_0^∞ e^{−st} dt = 1/s (see Example 3.1.9 (Heaviside's Step Function)). Therefore,

    F_n(s) = (n!/s^n) · (1/s) = n!/s^{n+1},

for every n ∈ N and s ∈ C, with Re(s) > 1. We have now a second important Laplace transform, namely

    L[t^n](s) = n!/s^{n+1},   n ∈ N, t ≥ 0.    (3.3)

More generally,

    L[t^α](s) = Γ(α + 1)/s^{α+1},   α > −1, t ≥ 0,

where Γ is Euler's Gamma function, i.e. Γ(α) = ∫_0^∞ x^{α−1} e^{−x} dx, α > 0.
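Formula (3.3) is easy to verify numerically by truncating the Laplace integral (the factor e^{−st} makes the tail negligible for moderate truncation); a plain-Python sketch, not from the book:

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

s = 2.0
for n in range(5):
    num = simpson(lambda t: t ** n * math.exp(-s * t), 0.0, 40.0, 20000)
    closed = math.factorial(n) / s ** (n + 1)  # n!/s^(n+1), formula (3.3)
    assert abs(num - closed) < 1e-7, (n, num, closed)
print("ok")
```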
Example 3.1.13. The function f1(t) = 2e^{3t} for t ≥ 0 and f1(t) = 0 for t < 0 is an original function, being obtained from the multiplication of the constant c = 2 with the exponential function for a = 3 (see Example 3.1.10 (Exponential Function)). Hence, for f1, one has M = 2 and σ = 3.
Example 3.1.14. The function f2(t) = e^{−3t} + t² for t ≥ 0 and f2(t) = 0 for t < 0 is an original function since it is the sum of an exponential function with a = −3 (M1 = 1 and σ1 = 0; see Example 3.1.10 (Exponential Function)) and a power function for n = 2 (M2 = 2! = 2 and σ2 = 1; see Example 3.1.11 (Power Function)). Hence, for the function f2, one has σ = max(σ1, σ2) = max(0, 1) = 1 and M = M1 + M2 = 1 + 2 = 3.

Example 3.1.15. The function f3(t) = t⁷e^{(1+i)t} for t ≥ 0 and f3(t) = 0 for t < 0 is obtained by the multiplication of two original functions: an exponential function for a = 1 + i (M1 = 1 and σ1 = Re(1 + i) = 1; see Example 3.1.10 (Exponential Function)) and a power function for n = 7 (M2 = 7! and σ2 = 1; see Example 3.1.11 (Power Function)). Therefore, the original function f3 has the growth index σ = σ1 + σ2 = 2 and M = M1 · M2 = 7!.
Theorem 3.2.1 (Linearity). If f(t) and g(t) are original functions and a, b ∈ C are arbitrary constants, then the following equality holds for s ∈ C and Re(s) > max(σ_f, σ_g):

    L[af(t) + bg(t)](s) = a·L[f(t)](s) + b·L[g(t)](s).    (3.4)
Proof. Since the integral is linear,

    L[af(t) + bg(t)](s) = ∫_0^∞ (af(t) + bg(t)) e^{−st} dt
                        = a ∫_0^∞ f(t)e^{−st} dt + b ∫_0^∞ g(t)e^{−st} dt
                        = a·L[f(t)](s) + b·L[g(t)](s).
Example 3.2.2 (Sine Function). For ω ∈ R and t ≥ 0,

    L[sin(ωt)](s) = L[ (e^{iωt} − e^{−iωt})/(2i) ](s) = (1/(2i)) ( L[e^{iωt}](s) − L[e^{−iωt}](s) )   (by (3.4))
                  = (1/(2i)) ( 1/(s − iω) − 1/(s + iω) ) = (1/(2i)) · 2iω/(s² + ω²) = ω/(s² + ω²)   (by (3.2)).

In conclusion,

    L[sin(ωt)](s) = ω/(s² + ω²).    (3.5)
Example 3.2.3 (Cosine Function). For ω ∈ R and t ≥ 0,

    L[cos(ωt)](s) = L[ (e^{iωt} + e^{−iωt})/2 ](s) = (1/2) ( L[e^{iωt}](s) + L[e^{−iωt}](s) )   (by (3.4))
                  = (1/2) ( 1/(s − iω) + 1/(s + iω) ) = (1/2) · 2s/(s² + ω²) = s/(s² + ω²)   (by (3.2)).

In conclusion,

    L[cos(ωt)](s) = s/(s² + ω²).    (3.6)
Example 3.2.4 (Hyperbolic Cosine and Sine Functions). For ω ∈ R and t ≥ 0, the hyperbolic cosine function cosh(ωt) = (e^{ωt} + e^{−ωt})/2 describes the curve of a hanging cable between two supports. It follows that

    L[cosh(ωt)](s) = L[ (e^{ωt} + e^{−ωt})/2 ](s) = (1/2) ( L[e^{ωt}](s) + L[e^{−ωt}](s) )   (by (3.4))
                   = (1/2) ( 1/(s − ω) + 1/(s + ω) ) = (1/2) · 2s/(s² − ω²) = s/(s² − ω²)   (by (3.2)).

In conclusion,

    L[cosh(ωt)](s) = s/(s² − ω²).

Similarly,

    L[sinh(ωt)](s) = L[ (e^{ωt} − e^{−ωt})/2 ](s) = ω/(s² − ω²).
Theorem 3.2.5 (Similarity or Change of Time Scale). If a > 0, then the following equality holds for Re(s) > a · σ_f:

    L[f(at)](s) = (1/a) · L[f(t)](s/a).    (3.7)

Proof. By the change of variable u = at ⇒ du = a dt we obtain the following:

    L[f(at)](s) = ∫_0^∞ f(at)e^{−st} dt = ∫_0^∞ f(u)e^{−s·u/a} · (1/a) du
                = (1/a) ∫_0^∞ f(u)e^{−(s/a)u} du = (1/a) ∫_0^∞ f(t)e^{−(s/a)t} dt
                = (1/a) · L[f(t)](s/a).
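The similarity formula can be illustrated with f(t) = sin t, where F(s) = 1/(s² + 1) by (3.5): the transform of sin(at) computed directly by quadrature should agree with (1/a)F(s/a) = a/(s² + a²). A plain-Python sketch (hand-rolled Simpson rule, not from the book):

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

def F(s):
    # L[sin t](s) = 1/(s^2 + 1), formula (3.5)
    return 1.0 / (s * s + 1)

a, s = 3.0, 2.0
num = simpson(lambda t: math.sin(a * t) * math.exp(-s * t), 0.0, 30.0, 30000)
assert abs(num - F(s / a) / a) < 1e-9         # (1/a) F(s/a), formula (3.7)
assert abs(num - a / (s * s + a * a)) < 1e-9  # equals 3/13 here
print("ok")
```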
Proof. Knowing that f(t − a) = 0 for t < a, by the substitution u = t − a, it follows that

    L[f(t − a)](s) = ∫_0^∞ f(t − a)e^{−st} dt = ∫_a^∞ f(t − a)e^{−st} dt
                   = ∫_0^∞ f(u)e^{−s(u+a)} du = e^{−sa} · ∫_0^∞ f(u)e^{−su} du
                   = e^{−sa} · L[f(t)](s).
Example 3.2.8. Now we show how we can explicitly use Theorem 3.2.7 (Time Delay) and formula (3.8).

1. L[sin(t − 2)](s) = e^{−2s} · L[sin t](s) = e^{−2s} · 1/(s² + 1)   (by (3.8) and (3.5));

2. L[cos(2t − 2)](s) = L[cos(2(t − 1))](s). Using the time-delay formula (3.8) for a = 1 and f(t) = cos 2t, one obtains

    L[cos(2t − 2)](s) = e^{−s} · L[cos 2t](s) = e^{−s} · s/(s² + 4),

since, by the similarity formula (3.7) and (3.6), L[cos 2t](s) = (1/2) · L[cos t](s/2) = (1/2) · (s/2)/((s/2)² + 1) = s/(s² + 4).
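A quick numerical check of the time-delay formula for the delayed cosine cos(2(t − 1))·h(t − 1), whose transform is e^{−s}·s/(s² + 4); the original vanishes on [0, 1), so the Laplace integral starts at t = 1 (plain-Python sketch, hand-rolled Simpson rule, not from the book):

```python
import math

def simpson(g, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = g(a) + g(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * g(a + k * h)
    return s * h / 3

for s in (1.0, 2.0):
    num = simpson(lambda t: math.cos(2 * (t - 1)) * math.exp(-s * t), 1.0, 40.0, 40000)
    closed = math.exp(-s) * s / (s * s + 4)
    assert abs(num - closed) < 1e-7, (s, num, closed)
print("ok")
```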
Theorem 3.2.9 (Second Time Delay). If a > 0, then ∀s ∈ C with Re(s) > σ_f the following holds:

    L[f(t + a)](s) = e^{sa} ( L[f(t)](s) − ∫_0^a f(t)e^{−st} dt ).    (3.9)

Proof. By Definition 3.1.1 (Laplace Transform) and using the substitution u = t + a, one obtains

    L[f(t + a)](s) = ∫_0^∞ f(t + a)e^{−st} dt = ∫_a^∞ f(u)e^{−s(u−a)} du
                   = e^{sa} · ∫_a^∞ f(u)e^{−su} du
                   = e^{sa} ( ∫_0^∞ f(u)e^{−su} du − ∫_0^a f(u)e^{−su} du )
                   = e^{sa} ( L[f(t)](s) − ∫_0^a f(t)e^{−st} dt ).
Example 3.2.10. Now we use Theorem 3.2.9 (Second Time Delay) and formula (3.9).

    L[e^{t+3}](s) = e^{3s} ( L[e^t](s) − ∫_0^3 e^t · e^{−st} dt )   (by (3.9))
                  = e^{3s} ( 1/(s − 1) − ∫_0^3 e^{(1−s)t} dt )   (by (3.2))
                  = e^{3s} ( 1/(s − 1) − (e^{3−3s} − 1)/(1 − s) ) = e^{3s} · e^{3−3s}/(s − 1) = e³/(s − 1).

Notice that this agrees with L[e^{t+3}](s) = e³ · L[e^t](s), where the function f(t) = e^{t+3} is understood to be extended by 0 for t < 0.
Theorem 3.2.11 (Translation or Frequency Shifting). If a ∈ C, then ∀s ∈ C with Re(s) > Re(a) the following holds:
$$\mathcal{L}[e^{at}\cdot f(t)](s) = \mathcal{L}[f(t)](s-a). \quad (3.10)$$
Proof. We make the following computations:
$$\mathcal{L}[e^{at}\cdot f(t)](s) = \int_0^\infty e^{at}\cdot f(t)\cdot e^{-st}\,dt = \int_0^\infty f(t)\cdot e^{-(s-a)t}\,dt = \mathcal{L}[f(t)](s-a).$$
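As a hedged numerical illustration (Python, our own helper), formula (3.10) with a = −1 and f(t) = cos t predicts $\mathcal{L}[e^{-t}\cos t](s) = \mathcal{L}[\cos t](s+1) = \frac{s+1}{(s+1)^2+1}$:

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    # trapezoidal rule for integral_0^T f(t) e^(-s t) dt (tail negligible)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
approx = laplace_numeric(lambda t: math.exp(-t) * math.cos(t), s)
exact = (s + 1.0) / ((s + 1.0) ** 2 + 1.0)  # frequency shift of s/(s^2+1)
print(abs(approx - exact) < 1e-6)  # True
```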
Theorem 3.2.13 (Differentiation of the Original). Let f(t) be a differentiable original function with derivative f′(t). Then
$$\mathcal{L}[f'(t)](s) = s\cdot\mathcal{L}[f(t)](s) - f(0^+), \quad (3.11)$$
where $f(0^+) = \lim_{\substack{t\to 0\\ t>0}} f(t)$ (the right-hand limit of f at t = 0).
$$= s^n\,\mathcal{L}[f(t)](s) - s^{n-1}f(0^+) - \cdots - sf^{(n-2)}(0^+) - f^{(n-1)}(0^+);$$
hence, formula (3.12) is true for any n ≥ 1.
Example 3.2.15. Now we provide examples on how to use Theorem 3.2.13 (Differentiation of the Original) and formula (3.11), or their generalizations, namely Theorem 3.2.14 (General Differentiation of the Original) and formula (3.12).

1. $\mathcal{L}[e^t(\cos t - \sin t)](s) = \mathcal{L}[(e^t\cos t)'](s) \overset{(3.11)}{=} s\,\mathcal{L}[e^t\cos t](s) - \lim_{\substack{t\to 0\\ t>0}} e^t\cos t$
$$= s\,\mathcal{L}[e^t\cos t](s) - 1 \overset{(3.10)}{=} s\,\mathcal{L}[\cos t](s-1) - 1 = \frac{s(s-1)}{(s-1)^2+1} - 1 \quad \text{(thanks to (3.6))};$$

2. $\mathcal{L}[(\sinh(2t))''](s) \overset{(3.12)}{=} s^2\,\mathcal{L}[\sinh(2t)](s) - s\lim_{\substack{t\to 0\\ t>0}}\sinh(2t) - \lim_{\substack{t\to 0\\ t>0}}(\sinh(2t))'$
We obtain
$$F'(s) = \left(\int_0^\infty f(t)e^{-st}\,dt\right)' = \int_0^\infty \frac{\partial\big(f(t)e^{-st}\big)}{\partial s}\,dt = \int_0^\infty f(t)e^{-st}\cdot(-t)\,dt = \int_0^\infty (-tf(t))e^{-st}\,dt = \mathcal{L}[-tf(t)](s).$$
Remark 3.2.17. Formulas (3.11) and (3.14) indicate that the operation corresponding to differentiation in the time domain and in the frequency domain, respectively, is multiplication by the variable in the other domain (with some corrections: the term −f(0⁺) in (3.11) and the multiplication by −t in (3.14)). This property allows us to transform linear differential equations into polynomial algebraic equations and to solve them more easily.
Example 3.2.19. This example shows how Theorem 3.2.18 (General Differentiation of the Image) and formula (3.15) are explicitly used.

1. $\mathcal{L}[te^{-t}](s) \overset{(3.15)}{=} (-1)\cdot\big(\mathcal{L}[e^{-t}](s)\big)' \overset{(3.2)}{=} -\left(\dfrac{1}{s+1}\right)' = \dfrac{1}{(s+1)^2};$

2. $\mathcal{L}[t^2e^{-t}](s) \overset{(3.15)}{=} (-1)^2\cdot\big(\mathcal{L}[e^{-t}](s)\big)'' \overset{(3.2)}{=} \left(\dfrac{1}{s+1}\right)'' = \dfrac{2}{(s+1)^3};$

3. $\mathcal{L}[t^3e^{-t}](s) \overset{(3.15)}{=} (-1)^3\cdot\big(\mathcal{L}[e^{-t}](s)\big)''' \overset{(3.2)}{=} -\left(\dfrac{1}{s+1}\right)''' = \dfrac{6}{(s+1)^4};$

4. In general, for n ∈ N*,
$$\mathcal{L}[t^ne^{-t}](s) \overset{(3.15)}{=} (-1)^n\cdot\big(\mathcal{L}[e^{-t}](s)\big)^{(n)} \overset{(3.2)}{=} (-1)^n\cdot\left(\frac{1}{s+1}\right)^{(n)} = \frac{n!}{(s+1)^{n+1}}.$$
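Item 4 can be spot-checked numerically, for instance at n = 3; the Python helper below is our own sketch, not part of the text:

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    # trapezoidal rule for integral_0^T f(t) e^(-s t) dt (tail negligible)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 1.0
approx = laplace_numeric(lambda t: t ** 3 * math.exp(-t), s)
exact = math.factorial(3) / (s + 1.0) ** 4  # n!/(s+1)^(n+1) with n = 3
print(abs(approx - exact) < 1e-6)  # True
```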
$$|g(t)| = \left|\int_0^t f(x)\,dx\right| \le \int_0^t |f(x)|\,dx \le M\int_0^t e^{\alpha x}\,dx = M\cdot\frac{e^{\alpha t}-1}{\alpha} < \frac{M}{\alpha}e^{\alpha t}, \quad \forall t \ge 0.$$
We differentiate g(t) by applying formula (3.13) reduced to the B term (since f(x) is constant with respect to t and a = 0, hence their derivatives are equal to 0) and we obtain that g′(t) = f(t) at the continuity points of the function f.

Obviously, $g(0) = \int_0^0 f(x)\,dx = 0$ and by Theorem 3.2.13 (Differentiation of the Original),
$$\mathcal{L}[f(t)](s) = \mathcal{L}[g'(t)](s) = s\,\mathcal{L}[g(t)](s) - g(0),$$
which implies
$$\mathcal{L}[g(t)](s) = \frac{1}{s}\,\mathcal{L}[f(t)](s).$$
Example 3.2.24. This example shows how Theorem 3.2.22 (Integration of the Image) and formula (3.17) can fail.
$$\mathcal{L}\left[\frac{\cos t}{t}\right](s) \overset{(3.17)}{=} \int_s^\infty \mathcal{L}[\cos t](x)\,dx = \int_s^\infty \frac{x}{x^2+1}\,dx.$$
The last integral is a divergent improper one: the integrand is of the form P(x)/Q(x) with deg P = 1 and deg Q = 2, so it does not satisfy the convergence condition deg P ≤ deg Q − 2 as in the previous example.
Corollary 3.2.25. If the integral in Theorem 3.2.22 (Integration of the Image, formula (3.17)) converges as s → 0, then
$$\int_0^\infty F(x)\,dx = \int_0^\infty \frac{f(t)}{t}\,dt. \quad (3.18)$$
Proof. Indeed, from Theorem 3.2.22 (Integration of the Image) and the hypothesis, we get
$$\int_0^\infty F(x)\,dx = \lim_{s\to 0}\int_s^\infty F(x)\,dx = \lim_{s\to 0}\mathcal{L}\left[\frac{f(t)}{t}\right](s) = \lim_{s\to 0}\int_0^\infty \frac{f(t)}{t}\cdot e^{-st}\,dt = \int_0^\infty \lim_{s\to 0}\frac{f(t)}{t}\cdot e^{-st}\,dt = \int_0^\infty \frac{f(t)}{t}\,dt.$$
Convolution

Consider two original functions f and g. If f, g ∈ L¹ (see Definition 2.1.1), then the general definition of their convolution is given by the integral
$$(f * g)(t) = \int_{-\infty}^{\infty} f(x)g(t-x)\,dx. \quad (3.19a)$$
Since f(x) = 0 for x < 0, the convolution can be written as
$$(f * g)(t) = \int_0^\infty f(x)g(t-x)\,dx. \quad (3.19b)$$
But g(t − x) = 0 for t − x < 0, i.e. for x > t, hence formula (3.19b) becomes
$$(f * g)(t) = \int_0^t f(x)g(t-x)\,dx. \quad (3.19c)$$
Proof. We will use formula (3.19a) for the convolution. By changing the order of integration, we have the following:
$$\mathcal{L}[(f*g)(t)](s) = \int_0^\infty (f*g)(t)\cdot e^{-st}\,dt = \int_0^\infty\left(\int_0^\infty f(x)g(t-x)\,dx\right)e^{-st}\,dt = \int_0^\infty f(x)\left(\int_0^\infty g(t-x)e^{-st}\,dt\right)dx.$$
With the substitution y = t − x in the inner integral: since g(y) = 0 for y < 0, the integral on (−x, 0) is equal to 0 and the double integral splits into the product of the following two integrals:
$$\int_0^\infty f(x)\left(\int_0^\infty g(y)e^{-s(y+x)}\,dy\right)dx = \int_0^\infty f(x)e^{-sx}\,dx \cdot \int_0^\infty g(y)e^{-sy}\,dy = F(s)\cdot G(s).$$
Due to the hypothesis that f and g are original functions, the Laplace integrals of f and g are absolutely convergent and hence, in view of the preceding calculation, the integral $\int_0^\infty f(x)\left(\int_0^\infty g(t-x)e^{-st}\,dt\right)dx$ is absolutely convergent. This fact allowed us to change the integration order.
and formula (3.21) follows by Theorem 3.1.6 (Existence) and Corollary 3.1.7,
since L[f 0 (t)](s) → 0 as Re(s) → ∞.
Proof. Since f(t) is piecewise continuous and $\lim_{t\to\infty} f(t)$ exists, it follows that f(t) is bounded on [0, ∞). Hence, there exists M > 0 such that |f(t)| ≤ M = M·e^{0·t}, ∀t ≥ 0. This implies that the growth index of f is σ_f ≤ 0 and its Laplace transform exists for Re(s) > 0.

In virtue of Theorem 3.2.13 (Differentiation of the Original) we obtain that L[f′(t)](s) = sF(s) − f(0⁺), hence
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{s\to 0}sF(s) - f(0^+). \quad (*)$$
But, by Definition 3.1.1 (Laplace Transform),
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{s\to 0}\int_0^\infty f'(t)e^{-st}\,dt = \int_0^\infty f'(t)\lim_{s\to 0}e^{-st}\,dt = \int_0^\infty f'(t)\,dt = f(t)\Big|_0^\infty,$$
hence,
$$\lim_{s\to 0}\mathcal{L}[f'(t)](s) = \lim_{t\to\infty}f(t) - f(0^+). \quad (**)$$
Formula (3.22) follows from (*) and (**).
Example 3.2.31. Now we give two examples in which Theorem 3.2.30 and formula (3.23) are used.

1. The function sin t is periodic with period T = 2π. Then
$$\mathcal{L}[\sin t](s) \overset{(3.23)}{=} \frac{1}{1-e^{-2\pi s}}\int_0^{2\pi}\sin t\cdot e^{-st}\,dt.$$
We get that
$$I := \int_0^{2\pi}\sin t\,e^{-st}\,dt = \int_0^{2\pi}\frac{e^{it}-e^{-it}}{2i}\cdot e^{-st}\,dt = \frac{1}{2i}\left(\int_0^{2\pi}e^{t(i-s)}\,dt - \int_0^{2\pi}e^{-t(i+s)}\,dt\right)$$
$$= \frac{1}{2i}\left(\frac{e^{t(i-s)}}{i-s}\Big|_0^{2\pi} + \frac{e^{-t(i+s)}}{i+s}\Big|_0^{2\pi}\right) = \frac{1}{2i}\left(\frac{e^{2\pi(i-s)}-1}{i-s} + \frac{e^{-2\pi(i+s)}-1}{i+s}\right).$$
Since $e^{2\pi i} = e^{-2\pi i} = 1$, it follows that
$$I = \frac{1}{2i}\left(\frac{e^{-2\pi s}-1}{i-s} + \frac{e^{-2\pi s}-1}{i+s}\right) = \frac{e^{-2\pi s}-1}{2i}\left(\frac{1}{i-s} + \frac{1}{i+s}\right) = \frac{e^{-2\pi s}-1}{2i}\cdot\frac{2i}{-s^2-1} = \frac{1-e^{-2\pi s}}{s^2+1}.$$
In conclusion, $\mathcal{L}[\sin t](s) = \dfrac{1}{1-e^{-2\pi s}}\cdot I = \dfrac{1}{s^2+1}$;

2. Let us consider the function $f(t) = \begin{cases} 1, & t \in [0,1) \\ 0, & t \in [1,2), \end{cases}$ which is periodic with period T = 2. Then
$$\mathcal{L}[f(t)](s) \overset{(3.23)}{=} \frac{1}{1-e^{-2s}}\int_0^2 f(t)e^{-st}\,dt = \frac{1}{1-e^{-2s}}\int_0^1 e^{-st}\,dt + 0$$
$$= \frac{1}{1-e^{-2s}}\cdot\frac{e^{-st}}{-s}\Big|_0^1 = \frac{e^{-s}-1}{s(e^{-2s}-1)} = \frac{e^{-s}-1}{s(e^{-s}-1)(e^{-s}+1)} = \frac{1}{s(e^{-s}+1)}.$$
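The second result can be verified numerically as well. The Python sketch below (our own helper; the grid size is chosen so that the jump points of the square wave fall on grid nodes) integrates the 2-periodic function directly:

```python
import math

def laplace_numeric(f, s, T=60.0, n=240_000):
    # trapezoidal rule; n is chosen so that the integer jump points are grid nodes
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

square = lambda t: 1.0 if t % 2.0 < 1.0 else 0.0  # 1 on [0,1), 0 on [1,2), period 2
s = 1.0
exact = 1.0 / (s * (math.exp(-s) + 1.0))
print(abs(laplace_numeric(square, s) - exact) < 1e-3)  # True
```

The looser tolerance accounts for the trapezoidal rule's reduced accuracy at the discontinuities of the square wave.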
Theorem 3.2.32 (Laplace Transform of Power Series). Assume that the power series $\sum_{n=0}^\infty a_n t^n$, $a_n \in \mathbb{R}$, is convergent and has the sum f(t). If there exist M > 0 and b > 0 such that
$$|a_n| \le M\cdot\frac{b^n}{n!}, \quad n \in \mathbb{N}, \quad (3.24)$$
then, for Re(s) > b,
$$\mathcal{L}[f(t)](s) = \sum_{n=0}^\infty a_n\,\mathcal{L}[t^n](s) = \sum_{n=0}^\infty a_n\cdot\frac{n!}{s^{n+1}}. \quad (3.25)$$
Proof. Consider the restriction of the Laplace transform to the real axis, namely
$$\mathcal{L}[f(t)](x) = \int_0^\infty f(t)e^{-xt}\,dt, \quad x \in \mathbb{R}.$$
1. if f(t) ≥ 0 for every t ∈ [0, ∞), then f(t)e^{−xt} ≥ 0. Hence, L[f(t)](x) ≥ 0. Therefore, if g(t) ≤ h(t) for every t ∈ [0, ∞), we obtain that L[g(t)](x) ≤ L[h(t)](x) (take f = h − g);

2. hence,
$$|\mathcal{L}[f(t)](s)| \le \mathcal{L}[|f(t)|](x); \quad (3.26)$$
3. using the exponential series $e^x = \sum_{n=0}^\infty \dfrac{x^n}{n!}$ and then applying formula (3.24), one gets that
$$\left|f(t) - \sum_{k=0}^n a_k t^k\right| = \left|\sum_{k=0}^\infty a_k t^k - \sum_{k=0}^n a_k t^k\right| = \left|\sum_{k=n+1}^\infty a_k t^k\right| \le M\sum_{k=n+1}^\infty \frac{(bt)^k}{k!} = M\left(\sum_{k=0}^\infty \frac{(bt)^k}{k!} - \sum_{k=0}^n \frac{(bt)^k}{k!}\right) = M\left(e^{bt} - \sum_{k=0}^n \frac{(bt)^k}{k!}\right).$$
Hence,
$$\left|f(t) - \sum_{k=0}^n a_k t^k\right| \le M\left(e^{bt} - \sum_{k=0}^n \frac{(bt)^k}{k!}\right). \quad (3.27)$$
Using formulas (3.26) and (3.27) and also property 1 from above, we have
$$\left|\mathcal{L}[f(t)](s) - \sum_{k=0}^n a_k\,\mathcal{L}[t^k](s)\right| \le \mathcal{L}\left[\left|f(t) - \sum_{k=0}^n a_k t^k\right|\right](x) \le M\left(\mathcal{L}[e^{bt}](x) - \sum_{k=0}^n \frac{b^k}{k!}\,\mathcal{L}[t^k](x)\right)$$
$$= M\left(\frac{1}{x-b} - \sum_{k=0}^n \frac{b^k}{k!}\cdot\frac{k!}{x^{k+1}}\right) = M\left(\frac{1}{x-b} - \frac{1}{x}\sum_{k=0}^n\left(\frac{b}{x}\right)^k\right) = M\left(\frac{1}{x-b} - \frac{1}{x}\cdot\frac{1-\left(\frac{b}{x}\right)^{n+1}}{1-\frac{b}{x}}\right).$$
Bear in mind that above we have used the sum of the geometric progression, namely $\sum_{k=0}^n q^k = \dfrac{1-q^{n+1}}{1-q}$. Finally,
$$\left|\mathcal{L}[f(t)](s) - \sum_{k=0}^n a_k\,\mathcal{L}[t^k](s)\right| \le M\left(\frac{1}{x-b} - \frac{1}{x-b} + \frac{1}{x-b}\cdot\left(\frac{b}{x}\right)^{n+1}\right) = \frac{M}{x-b}\cdot\left(\frac{b}{x}\right)^{n+1}.$$
Since Re(s) > b, it follows that $x > b \iff \frac{b}{x} < 1$, which implies that $\lim_{n\to\infty}\left(\frac{b}{x}\right)^{n+1} = 0$. Hence, $\lim_{n\to\infty}\left|\mathcal{L}[f(t)](s) - \sum_{k=0}^n a_k\,\mathcal{L}[t^k](s)\right| = 0$, so
$$\mathcal{L}[f(t)](s) = \sum_{n=0}^\infty a_n\,\mathcal{L}[t^n](s) = \sum_{n=0}^\infty a_n\cdot\frac{n!}{s^{n+1}}.$$
But because $\left|(-1)^n\cdot\dfrac{(2n)!}{n!}\right| = (2n)(2n-1)\cdots(n+1) \to \infty$, the transformed series is divergent for any s ∈ C, and one can see that condition (3.24) is not satisfied. Therefore, the method of the Laplace transform of power series fails in this case.
b) Consider the function $f(t) = \dfrac{e^{-at}}{\sqrt{\pi t}}$, a ∈ C, a ≠ 0. It follows that
$$e^{-at} = \sum_{n=0}^\infty \frac{(-a)^n}{n!}t^n = \sum_{n=0}^\infty \frac{(-1)^na^n}{n!}t^n,$$
so $f(t) = \dfrac{e^{-at}}{\sqrt{\pi t}} = \dfrac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \dfrac{(-1)^na^n}{n!}t^{n-\frac{1}{2}}$. Since $\left|\dfrac{(-1)^na^n}{n!}\right| \le \dfrac{|a|^n}{n!}$, condition (3.24) is satisfied. Hence,
$$\mathcal{L}[f(t)](s) \overset{(3.25)}{=} \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^na^n}{n!}\,\mathcal{L}\left[t^{n-\frac{1}{2}}\right](s) = \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^na^n}{n!}\cdot\frac{\Gamma\left(n+\frac{1}{2}\right)}{s^{n+\frac{1}{2}}}.$$
But
$$\Gamma\left(n+\frac{1}{2}\right) = \left(n-\frac{1}{2}\right)\cdots\frac{1}{2}\,\Gamma\left(\frac{1}{2}\right) = \frac{(2n-1)\cdots 1}{2^n}\sqrt{\pi},$$
so
$$\mathcal{L}[f(t)](s) = \frac{1}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^na^n}{n!}\cdot\frac{(2n-1)\cdots 1}{2^n}\cdot\frac{\sqrt{\pi}}{s^{n+\frac{1}{2}}} = \frac{1}{\sqrt{s}}\sum_{n=0}^\infty \frac{(-1)^n(2n-1)\cdots 1}{n!\cdot 2^n}\left(\frac{a}{s}\right)^n.$$
Since we know that the expansion of $(1+z)^{-\frac{1}{2}}$ is
$$\sum_{n=0}^\infty \frac{-\frac{1}{2}\cdots\left(-\frac{2n-1}{2}\right)}{n!}z^n = \sum_{n=0}^\infty \frac{(-1)^n(2n-1)\cdots 1}{n!\cdot 2^n}z^n,$$
it follows that $\mathcal{L}[f(t)](s) = \dfrac{1}{\sqrt{s}}\cdot\left(1+\dfrac{a}{s}\right)^{-\frac{1}{2}} = \dfrac{1}{\sqrt{s+a}}$.
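This pair can be checked numerically too; since the integrand has a $1/\sqrt{t}$ singularity at the origin, a convenient sketch (ours, not from the text) first substitutes t = u², which turns the Laplace integral into a Gaussian one:

```python
import math

# After t = u^2: L[e^(-a t)/sqrt(pi t)](s) = (2/sqrt(pi)) * integral_0^inf e^(-(s+a) u^2) du,
# which removes the 1/sqrt(t) singularity at the origin.
def gauss_integral(c, U=12.0, n=120_000):
    # trapezoidal rule for integral_0^U e^(-c u^2) du (tail negligible)
    h = U / n
    total = 0.5 * (1.0 + math.exp(-c * U * U))
    for k in range(1, n):
        u = k * h
        total += math.exp(-c * u * u)
    return total * h

a, s = 2.0, 1.0
approx = (2.0 / math.sqrt(math.pi)) * gauss_integral(s + a)
print(abs(approx - 1.0 / math.sqrt(s + a)) < 1e-7)  # True
```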
3.3 Inverse Laplace Transform

The Laplace transform is used to turn some (difficult) problems into simple algebraic ones. We obtain the solution in the frequency domain, but we must return to the time domain and obtain the corresponding original solution.

Theorem 3.3.1. Let f(t) be an original function with growth index σ and Laplace transform F(s). Then
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}F(s)e^{st}\,ds, \quad a > \sigma,\ t \in \mathbb{R}. \quad (3.28)$$
Proof. Consider the function $g(t) = e^{-at}\cdot\dfrac{f(t^-)+f(t^+)}{2}$. Obviously, if f is continuous at the point t, then f(t⁻) = f(t⁺) = f(t), which implies that g(t) = e^{-at}f(t) (for t ≥ 0) and g(t) = 0 (for t < 0). The function g(t) is absolutely integrable, since, for t > 0, |g(t)| ≤ M e^{(σ−a)t} with a > σ.
Writing the Fourier integral for g(t) (see formula (2.20)), one obtains
$$g(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left(\int_{-\infty}^{\infty} g(x)e^{i\omega x}\,dx\right)e^{-i\omega t}\,d\omega.$$
Consider the substitution s = a − iω (formally, ds = −i dω, so ω = ∞ ⇒ s = a − i∞ and ω = −∞ ⇒ s = a + i∞). It follows that
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi}\int_{a+i\infty}^{a-i\infty}\left(\int_0^\infty f(x)e^{-sx}\,dx\right)e^{st}\cdot\left(-\frac{ds}{i}\right) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}\left(\int_0^\infty f(x)e^{-sx}\,dx\right)e^{st}\,ds.$$
Since $F(s) = \int_0^\infty f(x)e^{-sx}\,dx$, it follows that
$$\frac{f(t^-)+f(t^+)}{2} = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}F(s)e^{st}\,ds.$$
If f is standardized (see Remark 2.3.6), i.e. $\dfrac{f(t^-)+f(t^+)}{2} = f(t)$ at the discontinuity points, then formula (3.28) becomes the Mellin-Fourier formula, namely
$$f(t) = \frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}F(s)e^{st}\,ds, \quad a > \sigma. \quad (3.29)$$
Lemma 3.3.3 (Jordan’s Lemma). Let Cn be the half-circles of radius Rn
centered at s0 ∈ C (see Figure 3.2) such that R1 < R2 < · · · < Rn < · · · and
lim Rn = ∞; so each half-circle can be written as follows:
n→∞
The formula for determining the original function f (t) is given by formula
(3.30) below. We denote by la the vertical line from a − i∞ to a + i∞, so
la := {z = a + iy : y ∈ R}.
170
Assume that $F(s) \xrightarrow[s\to\infty]{u} 0$ with respect to s ∈ Cₙ, n ∈ N*, and that $\int_{l_a} F(s)\,ds$ is absolutely convergent for any a > σ. Then
$$f(t) = \sum_{j=1}^k \operatorname{res}\big(F(s)e^{st}, s_j\big). \quad (3.30)$$
$$\frac{1}{2\pi i}\oint_{\Gamma_n} F(s)e^{st}\,ds = \sum_{j=1}^k \operatorname{res}\big(F(s)e^{st}, s_j\big). \quad (3.31)$$
If we let n → ∞ in formula (3.32), then, using Lemma 3.3.3 (Jordan's Lemma), we get that
$$\lim_{n\to\infty}\int_{\gamma_n} F(s)e^{st}\,ds = 0.$$
Bear in mind that the segment AₙBₙ becomes $l_a$ (the vertical line from a − i∞ to a + i∞) and that the right-hand member of (3.32) remains constant, since by the assumption $R_n > \max_{j=\overline{1,k}}|s_j|$, no isolated singular point appears as n → ∞. Therefore, when n → ∞, the equality (3.32) implies formula (3.30).

By applying formula (5.5) in [26] (for the computation of the residues at poles), one obtains from (3.30) formula (3.33) below.
Corollary 3.3.5. If F(s) satisfies the assumptions of Theorem 3.3.4 and its isolated singular points $s_j$ are poles of order $n_j \ge 1$, $j = \overline{1,k}$, then
$$f(t) = \sum_{j=1}^k \frac{1}{(n_j-1)!}\lim_{s\to s_j}\left[(s-s_j)^{n_j}F(s)e^{st}\right]^{(n_j-1)}, \quad t \ge 0. \quad (3.33)$$
In particular, for a simple pole $s_j$ the corresponding term is $\lim_{s\to s_j}(s-s_j)F(s)e^{st}$.
Example 3.3.6. Consider the function $F(s) = \dfrac{e^{-s}}{(s^2+1)^2}$. We take $G(s) = F(s)e^{st} = \dfrac{e^{st-s}}{(s^2+1)^2}$. The isolated singular points of G are ±i, both being poles of order 2. Now we use formula (3.30), so we need to compute the residues at these points. Since
$$\operatorname{res}(G,i) = \lim_{s\to i}\left(\frac{e^{st-s}}{(s+i)^2}\right)' = \lim_{s\to i}\frac{e^{st-s}\left[(t-1)(s+i)-2\right]}{(s+i)^3} = \frac{e^{-i+it}\left[2i(t-1)-2\right]}{-8i},$$
$$\operatorname{res}(G,-i) = \overline{\operatorname{res}(G,i)} = \frac{e^{i-it}\left[-2i(t-1)-2\right]}{8i},$$
it follows that
$$f(t) = \operatorname{res}(G,i)+\operatorname{res}(G,-i) = \frac{e^{-i+it}[2i(t-1)-2]}{-8i} + \frac{e^{i-it}[-2i(t-1)-2]}{8i}$$
$$= -\frac{2i(t-1)}{8i}\left(e^{-i+it}+e^{i-it}\right) + \frac{2}{8i}\left(e^{-i+it}-e^{i-it}\right).$$
But
$$e^{-i+it} = e^{i(t-1)} = \cos(t-1) + i\sin(t-1)$$
and
$$e^{i-it} = e^{-i(t-1)} = \cos(t-1) - i\sin(t-1).$$
Hence,
$$f(t) = -\frac{t-1}{4}\cdot 2\cos(t-1) + \frac{1}{4i}\cdot 2i\sin(t-1) = \frac{\sin(t-1)}{2} - \frac{(t-1)\cos(t-1)}{2}.$$
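This inverse-transform pair can be sanity-checked numerically. The Python sketch below is our own; note that the factor $e^{-s}$ marks a unit time delay (Theorem 3.2.7), so as an original function the result is taken to vanish for t < 1:

```python
import math

def laplace_numeric(f, s, T=80.0, n=200_000):
    # trapezoidal rule for integral_0^T f(t) e^(-s t) dt (tail negligible)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

def f(t):
    # delayed original: zero for t < 1, per the e^(-s) factor (Theorem 3.2.7)
    if t < 1.0:
        return 0.0
    u = t - 1.0
    return (math.sin(u) - u * math.cos(u)) / 2.0

s = 1.0
exact = math.exp(-s) / (s ** 2 + 1.0) ** 2
print(abs(laplace_numeric(f, s) - exact) < 1e-6)  # True
```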
These formulas can be applied in the case of rational functions
$$F(s) = \frac{P(s)}{Q(s)},$$
where P and Q are polynomials such that deg(P) < deg(Q). Hence, the poles of F are the roots of Q.

In the case of simple poles, the conditions of Theorem 3.3.4 hold and, by formula (5.7) in [26], one obtains from (3.30) the following formula:
$$f(t) = \sum_{j=1}^k \frac{P(s_j)}{Q'(s_j)}\,e^{s_jt}, \quad t \ge 0. \quad (3.34)$$
2. $\mathcal{L}[t^k](s) = \dfrac{k!}{s^{k+1}}$, k ∈ N,
3. $\mathcal{L}[e^{\lambda t}f(t)](s) = F(s-\lambda)$, λ ∈ C,

one obtains Proposition 3.3.7 below.

Proposition 3.3.7. If the rational function $F(s) = \dfrac{P(s)}{Q(s)}$ has the partial fraction decomposition
$$F(s) = \frac{A}{s} + \sum_{j=1}^n \frac{A_j}{(s-s_j)^{k_j}}, \quad n, k_j \in \mathbb{N}^*,$$
Proof. One can prove that the series in formula (3.36) is convergent ∀t ≥ 0 (see [11, Theorem 1.4.15]).

Since $\mathcal{L}[t^k](s) = \dfrac{k!}{s^{k+1}}$, k ∈ N, and using Theorem 3.2.1 (Linearity), it follows that
$$\mathcal{L}[f(t)](s) = \mathcal{L}\left[\sum_{n=0}^\infty \frac{a_n}{n!}t^n\right](s) = \sum_{n=0}^\infty \frac{a_n}{n!}\,\mathcal{L}[t^n](s) = \sum_{n=0}^\infty \frac{a_n}{n!}\cdot\frac{n!}{s^{n+1}} = F(s).$$
Example 3.3.9. Now we are going to make some explicit computations using partial fraction decomposition.

1. Let $F(s) = \dfrac{1}{s^2-3s+2}$. Then
$$F(s) = \frac{1}{(s-1)(s-2)} = -\frac{1}{s-1} + \frac{1}{s-2},$$

2. Let $F(s) = \dfrac{3s^2+3s+3}{s^3-3s+2}$. Then
$$F(s) = \frac{3s^2+3s+3}{(s+2)(s-1)^2} = \frac{1}{s+2} + \frac{2}{s-1} + \frac{3}{(s-1)^2},$$
$$\mathcal{L}[y^{(n-1)}(t)](s) = s^{n-1}Y(s) - s^{n-2}y(0) - s^{n-3}y'(0) - \cdots - sy^{(n-3)}(0) - y^{(n-2)}(0);$$
$$\mathcal{L}[y^{(n)}(t)](s) = s^nY(s) - s^{n-1}y(0) - s^{n-2}y'(0) - \cdots - sy^{(n-2)}(0) - y^{(n-1)}(0).$$
One applies the Laplace transform to the DE (3.37) and by Theorem 3.2.1 (Linearity), one transforms it into the following algebraic equation:
$$(a_ns^n + \cdots + a_1s + a_0)Y(s) - y(0)(a_ns^{n-1} + \cdots + a_2s + a_1) - y'(0)(a_ns^{n-2} + \cdots + a_3s + a_2) - \cdots - y^{(n-2)}(0)(a_ns + a_{n-1}) - y^{(n-1)}(0)\,a_n = F(s).$$
Let us denote by p(s) the coefficient of Y(s). Hence,
Example 3.4.1. Solve the equation

and
$$\mathcal{L}[y'(t)](s) = sY(s) - y(0) = sY(s) + 1.$$
Also, formula (3.2) gives us
$$\mathcal{L}[e^t](s) = \frac{1}{s-1}.$$
Combining these equalities, one gets
$$\mathcal{L}[y(t)](s)\,(s^2-5s+6) = s - 6 + \frac{1}{s-1}.$$
Therefore,
$$Y(s) = \frac{s^2-7s+7}{(s-1)(s-2)(s-3)}.$$
Decomposing this into partial fractions, one gets
$$Y(s) = \frac{1}{2(s-1)} + \frac{3}{s-2} - \frac{5}{2(s-3)}.$$
Example 3.4.2. Solve the equation
$$y'''(t) - y'(t) = \sinh t$$
with initial conditions y(0) = 0, y′(0) = 0, y″(0) = 1.

We proceed in the same way as in Example 3.4.1. The resulting algebraic equation is
$$s^3Y(s) - 1 - sY(s) = \frac{1}{s^2-1},$$
which implies that $Y(s) = \dfrac{s}{(s^2-1)^2}$.

To determine the original, we will use residues and formula (3.30). Consider G(s) = Y(s)e^{st}, t ≥ 0. The isolated singular points of G are ±1, both poles of order 2. Since
$$\operatorname{res}(G,1) = \lim_{s\to 1}\left(\frac{se^{st}}{(s+1)^2}\right)' = \lim_{s\to 1}\frac{e^{st}\left[(st+1)(s+1)-2s\right]}{(s+1)^3} = \frac{te^t}{4},$$
$$\operatorname{res}(G,-1) = \lim_{s\to -1}\left(\frac{se^{st}}{(s-1)^2}\right)' = \lim_{s\to -1}\frac{e^{st}\left[(st+1)(s-1)-2s\right]}{(s-1)^3} = -\frac{te^{-t}}{4},$$
it follows that
$$y(t) = \operatorname{res}(G,1) + \operatorname{res}(G,-1) = \frac{t(e^t - e^{-t})}{4} = \frac{t\sinh t}{2}.$$
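The solution can be sanity-checked without any Laplace machinery, by testing the differential equation with finite differences (a Python sketch under our own choice of step size and sample points):

```python
import math

def y(t):
    # candidate solution of y''' - y' = sinh t, y(0) = y'(0) = 0, y''(0) = 1
    return t * math.sinh(t) / 2.0

def d1(g, t, h=1e-3):
    # central difference for the first derivative
    return (g(t + h) - g(t - h)) / (2.0 * h)

def d3(g, t, h=1e-3):
    # central difference for the third derivative
    return (g(t + 2*h) - 2*g(t + h) + 2*g(t - h) - g(t - 2*h)) / (2.0 * h ** 3)

ok = all(abs(d3(y, t) - d1(y, t) - math.sinh(t)) < 1e-4 for t in (0.5, 1.0, 2.0, 3.0))
print(ok)  # True
```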
The Laplace transform can be used in a similar way to solve some differential equations whose coefficients are polynomial functions of t.
Example 3.4.3. Solve the equation
$$ty'' - 2y' - ty = -2\cosh t$$
with initial conditions y(0) = 0, y′(0) = 1.

Applying the Laplace transform, one gets
$$\mathcal{L}[ty''](s) - 2\mathcal{L}[y'](s) - \mathcal{L}[ty](s) = -2\mathcal{L}[\cosh t](s).$$
Using Theorem 3.2.16 (Differentiation of the Image), and then Theorem 3.2.14 (General Differentiation of the Original),
$$\mathcal{L}[ty''](s) \overset{(3.15)}{=} (-1)\big[\mathcal{L}[y''](s)\big]' \overset{(3.12)}{=} (-1)\big(s^2Y(s)-1\big)' = (-1)\big(2sY(s)+s^2Y'(s)\big) = -2sY(s) - s^2Y'(s).$$
From the same formulas as before we get that
$$\mathcal{L}[y'](s) = sY(s), \qquad \mathcal{L}[ty](s) = -Y'(s).$$
Also, the formula for the Laplace transform of the hyperbolic cosine (Exercise 3.2.4) gives:
$$\mathcal{L}[\cosh t](s) = \frac{s}{s^2-1}.$$
Hence, $(-s^2+1)Y'(s) - 4sY(s) = \dfrac{-2s}{s^2-1}$, which is equivalent to
$$(s^2-1)^2\,Y'(s) + 4s(s^2-1)\,Y(s) = 2s.$$
The left-hand side is the derivative of $(s^2-1)^2Y(s)$, hence $(s^2-1)^2Y(s) = s^2 + C$, C ∈ R, i.e.
$$Y(s) = \frac{s^2+C}{(s^2-1)^2}.$$
To determine the original, we will use again residues and formula (3.30). Consider $G(s) = \dfrac{(s^2+C)e^{st}}{(s^2-1)^2}$. The isolated singular points of G are ±1, both poles of order 2. Since
$$\operatorname{res}(G,1) = \lim_{s\to 1}\left(\frac{(s^2+C)e^{st}}{(s+1)^2}\right)' = \frac{(1-C)e^t + (1+C)te^t}{4},$$
$$\operatorname{res}(G,-1) = \lim_{s\to -1}\left(\frac{(s^2+C)e^{st}}{(s-1)^2}\right)' = \frac{-(1-C)e^{-t} + (1+C)te^{-t}}{4},$$
it follows that
$$y(t) = \operatorname{res}(G,1) + \operatorname{res}(G,-1) = \frac{1+C}{2}\,t\cosh t + \frac{1-C}{2}\,\sinh t.$$
where A is a real nonsingular n × n matrix, f(t) = [f₁(t), f₂(t), . . . , fₙ(t)]ᵀ is an n-dimensional vector whose components are continuous original functions, and y(t) = [y₁(t), y₂(t), . . . , yₙ(t)]ᵀ is the unknown vector function. For the study of systems of differential equations, see [12] and [13].

The Laplace transform of the vector function f(t) is the vector function

One applies the Laplace transform to the SDE (3.40) and by the extension of linearity to vectors and matrices, one obtains the algebraic system, which has the solution
$$Y(s) = \frac{1}{s}\left(A^{-1}F(s) + y(0)\right).$$
Notice that the SDE (3.40) can be written in the form y′ = A⁻¹f(t), since the matrix A is nonsingular.
If the equations of the SDE have different orders, one can apply the
Laplace transform to each equation as in Subsection 3.4.1 (Differential Equa-
tions). One obtains an algebraic system which can be solved by suitable
methods (substitution, reduction, Cramer’s rule etc.) in the frequency do-
main. Then using the inverse Laplace transform, we determine the solution
y(t).
hence,
$$Y(s) = \frac{1}{s}\cdot\frac{1}{3}\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}\begin{pmatrix} \dfrac{1}{s-1} \\[4pt] \dfrac{1}{s+1} \end{pmatrix} = \frac{1}{3s}\begin{pmatrix} \dfrac{2}{s-1} - \dfrac{1}{s+1} \\[4pt] -\dfrac{1}{s-1} + \dfrac{2}{s+1} \end{pmatrix}.$$
It follows that
$$Y_1(s) = \frac{1}{3s}\left(\frac{2}{s-1} - \frac{1}{s+1}\right) \quad\text{and}\quad Y_2(s) = \frac{1}{3s}\left(-\frac{1}{s-1} + \frac{2}{s+1}\right).$$
Using the partial fraction decompositions
$$\frac{1}{s(s-1)} = \frac{1}{s-1} - \frac{1}{s} \quad\text{and}\quad \frac{1}{s(s+1)} = \frac{1}{s} - \frac{1}{s+1},$$
one gets
$$Y_1(s) = \frac{1}{3}\left(\frac{2}{s-1} + \frac{1}{s+1} - \frac{3}{s}\right) \ \Rightarrow\ y_1(t) = \frac{2e^t + e^{-t} - 3}{3}$$
and
$$Y_2(s) = \frac{1}{3}\left(-\frac{1}{s-1} - \frac{2}{s+1} + \frac{3}{s}\right) \ \Rightarrow\ y_2(t) = \frac{-e^t - 2e^{-t} + 3}{3}.$$
Example 3.4.5. Solve the system
$$\begin{cases} y'(t) - y(t) + 2x(t) = 0 \\ y''(t) + 2x'(t) = 2t - \cos 2t. \end{cases}$$
Using Theorem 3.2.14 (General Differentiation of the Original), we obtain that
$$\begin{cases} sY(s) - y(0) - Y(s) + 2X(s) = 0 \\ s^2Y(s) - sy(0) - y'(0) + 2sX(s) - 2x(0) = \dfrac{2}{s^2} - \dfrac{s}{s^2+4}, \end{cases}$$
which (the initial conditions being zero) is equivalent to
$$\begin{cases} (s-1)Y(s) + 2X(s) = 0 \\ s^2Y(s) + 2sX(s) = \dfrac{2}{s^2} - \dfrac{s}{s^2+4}. \end{cases}$$
Solving the algebraic system with unknowns Y(s) and X(s), one gets
$$Y(s) = \frac{2}{s^3} - \frac{1}{s^2+4} \ \Rightarrow\ y(t) = t^2 - \frac{1}{2}\sin(2t)$$
and
$$X(s) = -\frac{(s-1)Y(s)}{2} = -\frac{1}{s^2} + \frac{1}{s^3} + \frac{1}{2}\cdot\frac{s}{s^2+4} - \frac{1}{2}\cdot\frac{1}{s^2+4} \ \Rightarrow\ x(t) = -t + \frac{t^2}{2} + \frac{1}{2}\cos(2t) - \frac{1}{4}\sin(2t).$$
where f(t) and k(t, r) are given original functions with respect to t, I ⊂ R is an interval, r ∈ I, A ∈ R and y(t) is the unknown original function. The function k(t, r) is called the kernel of the IE.

An IE can be solved using the Laplace transform if it is of convolution type, i.e. of the form
$$Ay(t) + \int_0^t k(t-r)y(r)\,dr = f(t). \quad (3.41)$$
One denotes as usual L[f(t)](s) = F(s), L[y(t)](s) = Y(s) and L[k(t)](s) = K(s). Using one of the equivalent definitions of the convolution (3.19c) and Theorem 3.2.26 (Convolution), we obtain
$$\mathcal{L}\left[\int_0^t k(t-r)y(r)\,dr\right](s) = \mathcal{L}[(k*y)(t)](s) = K(s)\cdot Y(s).$$
By applying the Laplace transform to the IE (3.41), one obtains the algebraic equation
$$(A + K(s))\,Y(s) = F(s), \quad (3.42)$$
which has the solution $Y(s) = \dfrac{F(s)}{A+K(s)}$.
But $\int_0^t \sin(t-x)y(x)\,dx = (\sin t * y)(t)$ (see (3.19c)), hence, using Theorem 3.2.26 (Convolution), one gets
$$\mathcal{L}\left[\int_0^t \sin(t-x)y(x)\,dx\right](s) = \mathcal{L}[\sin t * y(t)](s) = \mathcal{L}[\sin t](s)\cdot\mathcal{L}[y(t)](s) = \frac{1}{s^2+1}\,Y(s).$$
It follows that
$$2Y(s) - \frac{1}{s^2+1}\,Y(s) = \frac{s}{s^2+1}.$$
This is equivalent to
$$Y(s) = \frac{s}{2s^2+1} = \frac{1}{2}\cdot\frac{s}{s^2+\frac{1}{2}} = \frac{1}{2}\cdot\mathcal{L}\left[\cos\frac{t}{\sqrt{2}}\right](s),$$
By combining the DEs (3.37) with the IEs (3.41), one obtains integro-differential equations (IDEs) of the following form:
$$a_ny^{(n)} + \cdots + a_1y' + a_0y + \int_0^t k(t-r)y(r)\,dr = f(t). \quad (3.43)$$
$$Y(s) = \frac{1}{p(s)}\left[F(s) + y_0\,\delta(p(s)) + \cdots + y_{n-1}\,\delta^n(p(s))\right] + \frac{F(s)}{A+K(s)}. \quad (3.44)$$
hence,
$$Y(s) = \left(\frac{1}{s} - \frac{1}{s^2}\right)\left(1 + \frac{1}{s^2}\right) = \frac{1}{s} - \frac{1}{s^2} + \frac{1}{s^3} - \frac{1}{s^4},$$
which implies that
$$y(t) = 1 - t + \frac{t^2}{2} - \frac{t^3}{6}.$$
3.4.4 Linear Time-Invariant Control Systems

A linear time-invariant control system (LTI system) has the state space representation (see [2] and [25])
$$\Sigma:\ \begin{cases} \dot{x}(t) = Ax(t) + Bu(t), & (3.45) \\ y(t) = Cx(t) + Du(t), & (3.46) \end{cases}$$
where
We multiply equation (•) by (sIn − A)−1 for s ∈ C \ σ(A) and we get the
formula of the state of the system Σ in the frequency domain
X(s) = (sIn − A)−1 BU (s) + (sIn − A)−1 x(0). (3.47)
By Theorem 3.2.1 (Linearity), the Laplace transform of (3.46) is the equa-
tion
Y (s) = CX(s) + DU (s),
where Y (s) = L[y(t)](s) = [L[y1 (t)](s), L[y2 (t)](s), . . . , L[yp (t)](s)]T .
By replacing X(s) from (3.47) into the previous equation, one obtains
the general response of the system Σ in the frequency domain
Y (s) = C(sIn − A)−1 BU (s) + C(sIn − A)−1 x(0) + DU (s). (3.48)
For the initial state x(0) = 0, it follows that the forced response of the
system Σ has the formula
$$Y(s) = \left[C(sI_n - A)^{-1}B + D\right]U(s). \quad (3.49)$$
Definition 3.4.8. The matrix T (s) = C(sIn − A)−1 B + D is called the
transfer matrix of the system Σ.
If m = p = 1, i.e. Σ is a single input-single output system (SISO), then
T (s) is the transfer function of Σ.
By (3.49), one obtains the following characterization of the LTI systems
with null initial state.
Proposition 3.4.9. If x(0) = 0, then the input-output map of the system Σ
is
Y (s) = T (s)U (s). (3.50)
Example 3.4.10. Consider the LTI system (3.45), (3.46) given by the following matrices:
$$A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 2 & 3 & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & 1 \\ 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0 & 2 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 1 \end{pmatrix}.$$
Determine the transfer matrix T(s) and the output of the system produced by the control $u(t) = \begin{pmatrix} h(t) \\ 0 \end{pmatrix}$, where h(t) is Heaviside's step function.
It follows that T(s) = C(sI₃ − A)⁻¹B + D. One method to find (sI₃ − A)⁻¹ is
$$(sI_3 - A)^{-1} = \frac{1}{\det(sI_3 - A)}\,(sI_3 - A)^*.$$
Since $sI_3 - A = \begin{pmatrix} s & -1 & 0 \\ 0 & s & -1 \\ -2 & -3 & s \end{pmatrix}$, the characteristic polynomial of A is
$$p(s) = \det(sI_3 - A) = s^3 - 3s - 2.$$
Then the adjugate matrix is $(sI_3 - A)^* = \begin{pmatrix} s^2-3 & s & 1 \\ 2 & s^2 & s \\ 2s & 3s+2 & s^2 \end{pmatrix}$. Therefore,
$$T(s) = C(sI_3 - A)^{-1}B + D = \begin{pmatrix} \dfrac{2s^2+1}{s^3-3s-2} & \dfrac{s^3+s^2-6s-9}{s^3-3s-2} \end{pmatrix}$$
and, since L[h(t)](s) = 1/s gives U(s) = [1/s, 0]ᵀ,
$$Y(s) = T(s)U(s) = \frac{2s^2+1}{s(s^3-3s-2)}.$$
In order to determine the output in the time domain, we determine the original y(t) = L⁻¹[Y(s)](t). We decompose Y(s) using partial fraction decomposition and we get that
$$Y(s) = \frac{2s^2+1}{s(s^3-3s-2)} = \frac{2s^2+1}{s(s+1)^2(s-2)} = \frac{A}{s} + \frac{B}{s+1} + \frac{C}{(s+1)^2} + \frac{D}{s-2}, \quad A,B,C,D \in \mathbb{R}.$$
We obtain that
$$A = \lim_{s\to 0}sY(s) = \lim_{s\to 0}\frac{2s^2+1}{s^3-3s-2} = -\frac{1}{2};$$
$$C = \lim_{s\to -1}(s+1)^2Y(s) = \lim_{s\to -1}\frac{2s^2+1}{s(s-2)} = 1;$$
$$D = \lim_{s\to 2}(s-2)Y(s) = \lim_{s\to 2}\frac{2s^2+1}{s(s+1)^2} = \frac{1}{2}.$$
Also, if one adds the simple fractions in the decomposition of Y(s), one can see that the coefficient of s³ in the numerator is A + B + D, and this must be equal to the coefficient of s³ in the numerator of Y(s). Hence, A + B + D = 0, so B = 0. Therefore,
$$Y(s) = -\frac{1}{2}\cdot\frac{1}{s} + \frac{1}{(s+1)^2} + \frac{1}{2}\cdot\frac{1}{s-2}.$$
It follows that the original is
$$y(t) = -\frac{1}{2} + te^{-t} + \frac{1}{2}e^{2t}, \quad t \ge 0.$$
In conclusion, this is the response of the system to Heaviside's step function.
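As a quick sanity check (our own Python sketch), one can transform the computed step response numerically and compare it with Y(s) at a sample point s > 2, the growth index of y(t):

```python
import math

def laplace_numeric(f, s, T=80.0, n=200_000):
    # trapezoidal rule for integral_0^T f(t) e^(-s t) dt (tail negligible for s > 2)
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

y = lambda t: -0.5 + t * math.exp(-t) + 0.5 * math.exp(2.0 * t)
s = 3.0
expected = (2 * s ** 2 + 1) / (s * (s ** 3 - 3 * s - 2))  # = 19/48
print(abs(laplace_numeric(y, s) - expected) < 1e-6)  # True
```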
A rational function $\dfrac{P(s)}{Q(s)}$, where P and Q are polynomials, is said to be proper if deg P ≤ deg Q. A matrix M(s) is said to be proper rational if all its entries are proper rational functions. The above example suggests the following result (the proof is an exercise for the reader).
Proposition 3.4.11. The transfer matrix T (s) of the LTI system (3.45),
(3.46) is a p × m proper rational matrix.
We solved the problem of determining the transfer matrix of a given
LTI system (3.45), (3.46) when the matrices A, B, C and D are known. An
important problem is the converse.
Realization Problem
Given a p × m proper rational matrix T (s), determine a realization of
T (s), i.e. a quadruplet of matrices Σ = (A, B, C, D) such that
The electrical current flowing into a node is equal to the current out of it.
The sum of all voltages around any closed loop in a circuit is equal to 0.
Figure 3.4: The RLC Circuit
$$V_R + V_L + V_C = V(t),$$
where $V_R = RI(t)$, $V_L = L\cdot\dfrac{dI(t)}{dt}$ and $V_C = V(0) + \dfrac{1}{C}\int_0^t I(x)\,dx$. Hence, the following IDE holds:
$$RI(t) + L\cdot\frac{dI(t)}{dt} + V(0) + \frac{1}{C}\int_0^t I(x)\,dx = V(t).$$
We apply the Laplace transform and use the notations $\tilde{I}(s) := \mathcal{L}[I(t)](s)$ and $\tilde{V}(s) := \mathcal{L}[V(t)](s)$. Using Theorem 3.2.1 (Linearity), Theorem 3.2.13 (Differentiation of the Original) and Theorem 3.2.20 (Integration of the Original), one obtains the equation
$$\left(R + Ls + \frac{1}{Cs}\right)\tilde{I}(s) = \tilde{V}(s) \quad \text{(for zero initial data)},$$
hence the transfer function (Laplace admittance) is
$$T(s) = \frac{\tilde{I}(s)}{\tilde{V}(s)} = \frac{s}{Ls^2 + Rs + \frac{1}{C}}.$$
Before encryption, the plain text is coded and for this we can use, for
example, the code that allocates to each letter its position in the English
alphabet. Let us denote this function by Φ. Thus, character codes are
positive integers starting with 1 and ending with 26. For example, A is 1, B
is 2 and so on.
Now consider a message
m1 m2 . . . mi ,
where i ∈ N∗ represents its length. For encryption we are going to use the
following algorithm:
1) Assign to each letter its code and construct the following infinite se-
quence {Gn }n of positive integers, where
Gn = Φ(mn+1 ), n = 0, i − 1 and Gn = 0, ∀n ≥ i;
2) Consider the function $t^pf(t)$ such that $f(t) = \sum_{n=0}^\infty a_nt^n$, $a_n \ge 0$, and p ∈ N*. Define the function
$$g(t) = t^p\sum_{n=0}^\infty G_na_nt^n = \sum_{n=0}^{i-1} G_na_nt^{n+p};$$
then, writing $\mathcal{L}[g(t)](s) = \sum_{n=0}^{i-1}\dfrac{F_n}{s^{n+p+1}}$, set
$$H_n \equiv F_n \bmod 26, \quad n = \overline{0, i-1};$$
$$k_n = (F_n - H_n)/26, \quad n = \overline{0, i-1}.$$
Remark 3.4.14. Given an integer n > 1, two integers a and b are said to be congruent modulo n if n is a divisor of their difference. Congruence modulo n is denoted by a ≡ b mod n. For example, 38 ≡ 2 mod 12 because 38 − 2 = 36, which is a multiple of 12. Another way to express this is to say that the remainder obtained by dividing 38 by 12 is 2.
At the end of the encryption algorithm we obtain the cipher text
b1 b2 . . . bi
1) Assign to each letter its code and construct a finite sequence {G0n }n of
positive integers, where
G0n = Φ(bn+1 ) − 1, n = 0, i − 1;
2) Use the encryption key and determine a finite sequence {Fn }n of posi-
tive integers, where
Fn = 26 · kn + G0n , n = 0, i − 1;
Now we show how to use the algorithms described above on the plain text
LAPLACE using p = 1 and f (t) = e2t .
1) We obtain that
G0 = G3 = 12, G1 = G4 = 1, G2 = 16, G5 = 3, G6 = 5
and Gn = 0, ∀n ≥ 7;
2) Since p = 1 and $e^{2t} = \sum_{n=0}^\infty \dfrac{2^n}{n!}t^n$, we get the function
$$g(t) = t\sum_{n=0}^\infty G_n\frac{2^n}{n!}t^n = \sum_{n=0}^6 G_n\frac{2^n}{n!}t^{n+1}$$
$$= G_0t + G_1\frac{2}{1!}t^2 + G_2\frac{2^2}{2!}t^3 + G_3\frac{2^3}{3!}t^4 + G_4\frac{2^4}{4!}t^5 + G_5\frac{2^5}{5!}t^6 + G_6\frac{2^6}{6!}t^7$$
$$= 12t + \frac{2}{1!}t^2 + 16\cdot\frac{2^2}{2!}t^3 + 12\cdot\frac{2^3}{3!}t^4 + \frac{2^4}{4!}t^5 + 3\cdot\frac{2^5}{5!}t^6 + 5\cdot\frac{2^6}{6!}t^7;$$

3) We compute
$$\mathcal{L}[g(t)](s) = \mathcal{L}\left[\sum_{n=0}^6 G_n\frac{2^n}{n!}t^{n+1}\right](s) = \sum_{n=0}^6 G_n\frac{2^n}{n!}\cdot\frac{(n+1)!}{s^{n+2}} = \frac{12}{s^2} + \frac{4}{s^3} + \frac{192}{s^4} + \frac{384}{s^5} + \frac{80}{s^6} + \frac{576}{s^7} + \frac{2240}{s^8};$$
4) One gets
H0 ≡ 12 mod 26, H1 ≡ 4 mod 26,
H2 ≡ 10 mod 26, H3 ≡ 20 mod 26,
H4 ≡ 2 mod 26, H5 ≡ 4 mod 26, H6 ≡ 4 mod 26;
5) We obtain that
b6 = Φ−1 (H5 + 1) = Φ−1 (5) = E;
b7 = Φ−1 (H6 + 1) = Φ−1 (5) = E.
6) We calculate
1) We obtain that
G00 = 12, G01 = G05 = G06 = 4, G02 = 10, G03 = 20 and G04 = 2;
2) We calculate
• F0 = 26 · k0 + G00 = 26 · 0 + 12 = 12;
• F1 = 26 · k1 + G01 = 26 · 0 + 4 = 4;
• F2 = 26 · k2 + G02 = 26 · 7 + 10 = 192;
• F3 = 26 · k3 + G03 = 26 · 14 + 20 = 384;
• F4 = 26 · k4 + G04 = 26 · 3 + 2 = 80;
• F5 = 26 · k5 + G05 = 26 · 22 + 4 = 576;
• F6 = 26 · k6 + G06 = 26 · 86 + 4 = 2240;
3) We construct the function
$$F(s) = \sum_{n=0}^6 \frac{F_n}{s^{n+2}} = \frac{12}{s^2} + \frac{4}{s^3} + \frac{192}{s^4} + \frac{384}{s^5} + \frac{80}{s^6} + \frac{576}{s^7} + \frac{2240}{s^8};$$
One obtains
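The whole round trip can be sketched compactly in Python (the helper names `encrypt`/`decrypt` are ours, and we work with finite sequences where the book pads with zeros):

```python
import math

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
phi = {c: i + 1 for i, c in enumerate(ALPHABET)}  # A -> 1, ..., Z -> 26

def encrypt(plain, p=1, a=2):
    # f(t) = e^(a t) gives a_n = a^n/n!, so the coefficient of 1/s^(n+p+1)
    # in L[t^p * sum G_n a_n t^n] is F_n = G_n * a^n * (n+p)!/n!
    G = [phi[c] for c in plain]
    F = [g * a ** n * math.factorial(n + p) // math.factorial(n)
         for n, g in enumerate(G)]
    H = [fn % 26 for fn in F]
    key = [(fn - hn) // 26 for fn, hn in zip(F, H)]
    cipher = "".join(ALPHABET[h] for h in H)  # b_n = Phi^(-1)(H_n + 1)
    return cipher, key

def decrypt(cipher, key, p=1, a=2):
    Gp = [phi[c] - 1 for c in cipher]            # G'_n = Phi(b_n) - 1
    F = [26 * k + g for k, g in zip(key, Gp)]    # F_n = 26 k_n + G'_n
    G = [fn * math.factorial(n) // (a ** n * math.factorial(n + p))
         for n, fn in enumerate(F)]
    return "".join(ALPHABET[g - 1] for g in G)

cipher, key = encrypt("LAPLACE")
print(cipher)                # MEKUCEE
print(decrypt(cipher, key))  # LAPLACE
```

Running it reproduces the coefficients 12, 4, 192, 384, 80, 576, 2240 and the key values 0, 0, 7, 14, 3, 22, 86 computed above.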
3.5 Exercises

E 16. Calculate the Laplace transform of the following original functions f(t):

a) f(t) = t · sinh(2t) + 2cos²t, t ≥ 0;

b) $f(t) = e^{-t}\displaystyle\int_0^t xe^x\cos x\,dx$, t ≥ 0;

c) $f(t) = \dfrac{e^{-2t}+1}{t}\cdot\sin(3t)$, t > 0;

d) $f(t) = \dfrac{1}{\sqrt{t}} - \cosh^2t$, t > 0;

e) $f(t) = \begin{cases} e^t, & t \in [0,1) \\ 1, & t \in [1,2) \\ 0, & t \ge 2. \end{cases}$
Finally,
$$\mathcal{L}[t\sinh(2t) + 2\cos^2t](s) = \frac{4s}{(s^2-4)^2} + \frac{2(s^2+2)}{s(s^2+4)};$$
Image) and Example 3.2.3 (Cosine Function). It follows that
$$\mathcal{L}\left[e^{-t}\int_0^t xe^x\cos x\,dx\right](s) = \mathcal{L}\left[\int_0^t xe^x\cos x\,dx\right](s+1) = \frac{1}{s+1}\cdot\mathcal{L}[te^t\cos t](s+1)$$
$$= \frac{1}{s+1}\cdot(-1)\cdot\big(\mathcal{L}[e^t\cos t]\big)'(s+1) = \frac{-1}{s+1}\cdot\big(\mathcal{L}[\cos t](s)\big)' = \frac{-1}{s+1}\cdot\left(\frac{s}{s^2+1}\right)'.$$
Finally,
$$\mathcal{L}\left[e^{-t}\int_0^t xe^x\cos x\,dx\right](s) = \frac{-1}{s+1}\cdot\frac{1-s^2}{(s^2+1)^2} = \frac{s-1}{(s^2+1)^2};$$
c) One obtains
$$\mathcal{L}\left[\frac{e^{-2t}+1}{t}\cdot\sin(3t)\right](s) = \mathcal{L}\left[\frac{e^{-2t}\sin 3t + \sin 3t}{t}\right](s) = \int_s^\infty \mathcal{L}[e^{-2t}\sin 3t + \sin 3t](x)\,dx$$
$$= \int_s^\infty \mathcal{L}[e^{-2t}\sin 3t](x)\,dx + \int_s^\infty \mathcal{L}[\sin 3t](x)\,dx = \int_s^\infty \mathcal{L}[\sin 3t](x+2)\,dx + \int_s^\infty \frac{3}{x^2+9}\,dx$$
$$= \int_s^\infty \frac{3}{(x+2)^2+9}\,dx + \arctan\frac{x}{3}\Big|_s^\infty = \arctan\frac{x+2}{3}\Big|_s^\infty + \frac{\pi}{2} - \arctan\frac{s}{3} = \frac{\pi}{2} - \arctan\frac{s+2}{3} + \frac{\pi}{2} - \arctan\frac{s}{3}.$$
Finally,
$$\mathcal{L}\left[\frac{e^{-2t}+1}{t}\cdot\sin(3t)\right](s) = \pi - \arctan\frac{s+2}{3} - \arctan\frac{s}{3};$$
e) Since the formula of f(t) changes on [0, ∞), we cannot use the properties of the Laplace transform directly, but only its definition (Definition 3.1.1). It follows that
$$\mathcal{L}[f(t)](s) = \int_0^\infty f(t)e^{-st}\,dt = \int_0^1 e^t\cdot e^{-st}\,dt + \int_1^2 1\cdot e^{-st}\,dt + \int_2^\infty 0\cdot e^{-st}\,dt$$
$$= \int_0^1 e^{t(1-s)}\,dt + \int_1^2 e^{-st}\,dt + 0 = \frac{e^{t(1-s)}}{1-s}\Big|_0^1 - \frac{e^{-st}}{s}\Big|_1^2.$$
Finally,
$$\mathcal{L}[f(t)](s) = \frac{e^{1-s}-1}{1-s} - \frac{e^{-2s}-e^{-s}}{s}.$$
W 15. Calculate the Laplace transform of the following original functions f(t):

a) f(t) = t² + 2cos t − eᵗ sin 2t, t ≥ 0;

b) f(t) = sin(t + π) + 4sin²3t, t ≥ 0;

c) $f(t) = te^{2t} - \displaystyle\int_0^t x^2e^{-ax}\,dx$, t > 0, a ∈ R*;

d) $f(t) = \displaystyle\int_0^t e^x(t-x)^{2021}\,dx$, t > 0;

e) $f(t) = \dfrac{\cos t - \cos 3t}{t}$, t > 0;

f) $f(t) = \begin{cases} 1, & t \in [0,1) \\ t^2, & t \in [1,\pi) \\ 0, & t \ge \pi; \end{cases}$

g) $f(t) = \begin{cases} t, & t \in [0,\pi) \\ \pi - t, & t \in [\pi,2\pi), \end{cases}$ with f(t + 2π) = f(t) for t > 0.
Answer. a) One gets
$$\mathcal{L}[f(t)](s) = \frac{2}{s^3} + \frac{2s}{s^2+1} - \frac{2}{s^2-2s+2};$$
c) One gets
$$\mathcal{L}[f(t)](s) = \frac{1}{(s-2)^2} - \frac{2}{s(s+a)^3};$$
e) One obtains
$$\mathcal{L}[f(t)](s) = \int_s^\infty \mathcal{L}[\cos t - \cos 3t](x)\,dx = \int_s^\infty\left(\frac{x}{x^2+1} - \frac{x}{x^2+9}\right)dx = \frac{1}{2}\cdot\ln\frac{x^2+1}{x^2+9}\Big|_s^\infty = -\frac{1}{2}\ln\frac{s^2+1}{s^2+9};$$
f) One gets
$$\mathcal{L}[f(t)](s) = \frac{1-\pi^2e^{-\pi s}}{s} + \frac{2}{s^2}\left[e^{-\pi s}\left(-\pi - \frac{1}{s}\right) + e^{-s}\left(1 + \frac{1}{s}\right)\right];$$
E 17. Calculate, using the Laplace transform, the following improper integrals:

a) $\displaystyle\int_0^\infty \frac{e^{-at}-e^{-bt}}{t}\,dt$, a, b > 0, a ≠ b;

b) $\displaystyle\int_0^\infty \frac{\sin t + t\cos t}{t}\cdot e^{-at}\,dt$, a > 0.
Solution. The idea is to use Corollary 3.2.25 and formula (3.18).
a) One gets
$$\int_0^\infty \frac{e^{-at}-e^{-bt}}{t}\,dt = \int_0^\infty \mathcal{L}[e^{-at}-e^{-bt}](x)\,dx = \int_0^\infty\left(\mathcal{L}[e^{-at}](x) - \mathcal{L}[e^{-bt}](x)\right)dx$$
$$= \int_0^\infty\left(\frac{1}{x+a} - \frac{1}{x+b}\right)dx = \ln\frac{x+a}{x+b}\Big|_0^\infty = \ln 1 - \ln\frac{a}{b} = \ln\frac{b}{a};$$
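This Frullani-type value can be confirmed numerically; the Python sketch below (our own helper) extends the integrand continuously by its limit b − a at t = 0:

```python
import math

def frullani(a, b, T=80.0, n=200_000):
    # trapezoidal rule for integral_0^T (e^(-a t) - e^(-b t))/t dt (tail negligible);
    # the integrand extends continuously with value b - a at t = 0
    def g(t):
        if t == 0.0:
            return b - a
        return (math.exp(-a * t) - math.exp(-b * t)) / t
    h = T / n
    total = 0.5 * (g(0.0) + g(T))
    for k in range(1, n):
        total += g(k * h)
    return total * h

print(abs(frullani(1.0, 2.0) - math.log(2.0)) < 1e-6)  # True: ln(b/a) with a=1, b=2
```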
b) One obtains
$$\int_0^\infty \frac{\sin t + t\cos t}{t}\cdot e^{-at}\,dt = \int_0^\infty \mathcal{L}[(\sin t + t\cos t)e^{-at}](x)\,dx \overset{(3.10)}{=} \int_0^\infty \mathcal{L}[\sin t + t\cos t](x+a)\,dx = \int_0^\infty \mathcal{L}[(t\sin t)'](x+a)\,dx.$$
W 16. Calculate, using the Laplace transform, the following improper integrals:

a) $\displaystyle\int_0^\infty \frac{\sin(at)+\sin(bt)}{t}\,dt$, a, b > 0;

b) $\displaystyle\int_0^\infty \frac{\sin(at)\cdot\cos(bt)}{t}\,dt$, a, b > 0, a ≠ b.

Answer. We employ again Corollary 3.2.25 and formula (3.18) and various properties of the Laplace transform.
a) One gets
$$\int_0^\infty \frac{\sin(at)+\sin(bt)}{t}\,dt = \int_0^\infty\left(\frac{a}{x^2+a^2} + \frac{b}{x^2+b^2}\right)dx = \arctan\frac{x}{a}\Big|_0^\infty + \arctan\frac{x}{b}\Big|_0^\infty = \pi;$$
1/(s^2 + 5s + 6) = A/(s + 2) + B/(s + 3), A, B ∈ R.
One way to determine the constants A and B is the following:
A = lim_(s→−2) (s + 2)·F(s) = lim_(s→−2) 1/(s + 3) = 1;
B = lim_(s→−3) (s + 3)·F(s) = lim_(s→−3) 1/(s + 2) = −1.
Therefore,
F(s) = 1/(s + 2) − 1/(s + 3) = L[e^(−2t)](s) − L[e^(−3t)](s),
which implies that
f(t) = e^(−2t) − e^(−3t);
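As a quick numerical cross-check (a Python sketch we add here, not part of the book), the Laplace integral of f(t) = e^(−2t) − e^(−3t) can be compared with F(s) = 1/(s^2 + 5s + 6) at sample points:

```python
import math

def laplace_num(f, s, upper=40.0, n=400000):
    """Trapezoidal approximation of the Laplace integral of f at s."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(-2 * t) - math.exp(-3 * t)
F = lambda s: 1.0 / (s * s + 5 * s + 6)

for s in (0.5, 1.0, 2.0):
    print(s, laplace_num(f, s), F(s))  # the last two columns agree
```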
b) One obtains
F(s) = (s + 3)/(s^3 + 9s) = (s + 3)/(s(s^2 + 9)) = A/s + (Bs + C)/(s^2 + 9), A, B, C ∈ R.
We determine the constants A, B and C by equating the corresponding numerators. Hence,
s + 3 = A(s^2 + 9) + s(Bs + C) = s^2(A + B) + Cs + 9A.
We obtain the system A + B = 0, C = 1, 9A = 3, which has the solution A = 1/3, B = −1/3 and C = 1. Therefore,
F(s) = (1/3)·(1/s) − (1/3)·s/(s^2 + 9) + 1/(s^2 + 9)
= (1/3)·L[1](s) − (1/3)·L[cos 3t](s) + (1/3)·3/(s^2 + 9)
= (1/3)·L[1](s) − (1/3)·L[cos 3t](s) + (1/3)·L[sin 3t](s),
which implies that
f(t) = (1 − cos 3t + sin 3t)/3;
c) One gets
F(s) = (s^3 + s^2 + 1)/(s^4 + 5s^2 + 4) = (s^3 + s^2 + 1)/((s^2 + 1)(s^2 + 4)) = (As + B)/(s^2 + 1) + (Cs + D)/(s^2 + 4),
where A, B, C, D ∈ R. Equating again the corresponding numerators, we obtain that
s^3 + s^2 + 1 = s^3(A + C) + s^2(B + D) + s(4A + C) + 4B + D.
Now we identify the coefficients and we get the system A + C = 1, B + D = 1, 4A + C = 0, 4B + D = 1, which has the solution A = −1/3, C = 4/3, B = 0 and D = 1. Therefore,
F(s) = −(1/3)·s/(s^2 + 1) + (4/3)·s/(s^2 + 4) + 1/(s^2 + 4)
= −(1/3)·L[cos t](s) + (4/3)·L[cos 2t](s) + (1/2)·2/(s^2 + 4)
= −(1/3)·L[cos t](s) + (4/3)·L[cos 2t](s) + (1/2)·L[sin 2t](s),
which implies that
f(t) = −(1/3)·cos t + (4/3)·cos 2t + (1/2)·sin 2t.
hence,
c) One gets
F(s) = s/(s^3 − 3s + 2) = s/((s − 1)^2·(s + 2))
= (2/9)·1/(s − 1) − (2/9)·1/(s + 2) + (1/3)·1/(s − 1)^2;
hence,
f(t) = (2/9)·e^t − (2/9)·e^(−2t) + (1/3)·te^t.
E 19. Using residues, determine the originals f(t), t > 0 of the following Laplace transforms:
a) F(s) = (s^2 + s + 2)/(s − 1)^4;
b) F(s) = (s^2 − 1)/(s^3 + 9s);
c) F(s) = (s − 1)/(s^2 + 1)^2.
1. Construct the function G(s) = F(s)·e^(st) and determine the isolated singular points of G;
a) 1. One gets G(s) = (s^2 + s + 2)·e^(st)/(s − 1)^4 and s = 1 is the only isolated singular point of G (pole of order 4);
2. We compute the residue of G in s = 1 and we obtain that
res(G, 1) = (1/3!)·lim_(s→1) ((s − 1)^4·G(s))''' = (1/6)·lim_(s→1) ((s^2 + s + 2)·e^(st))'''
= (1/6)·lim_(s→1) e^(st)·[t^3(s^2 + s + 2) + 3t^2(2s + 1) + 6t]
= e^t·(4t^3 + 9t^2 + 6t)/6;
3. We conclude that f(t) = e^t·(4t^3 + 9t^2 + 6t)/6;
c) 1. One gets G(s) = (s − 1)·e^(st)/(s^2 + 1)^2 and the isolated singular points are ±i, both being poles of order 2;
2. We compute the residues of G in s = ±i and we obtain that
res(G, i) = lim_(s→i) ((s − i)^2·G(s))' = lim_(s→i) ((s − 1)·e^(st)/(s + i)^2)'
= lim_(s→i) ([e^(st) + (s − 1)·te^(st)]·(s + i) − 2(s − 1)·e^(st))/(s + i)^3
= e^(it)·(t − 1 + it)/(4i),
res(G, −i) = conj(res(G, i)) = e^(−it)·(t − 1 − it)/(−4i);
3. We conclude that f(t) = res(G, i) + res(G, −i) = ((t − 1)·sin t + t·cos t)/2.
W 18. Using residues, determine the originals f(t), t > 0 of the following Laplace transforms:
a) F(s) = 1/(s^3 − 3s^2 + 3s − 1);
b) F(s) = 1/(s^2 + 4)^2;
c) F(s) = s·e^(−s)/(s^2 − 1).
Answer. We will follow the algorithm described in Exercise 19.
a) G(s) = e^(st)/(s − 1)^3, s = 1 is the only isolated singular point of G (pole of order 3) and f(t) = t^2·e^t/2;
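The answer for a) can be checked numerically (a Python sketch we add, assuming a value s > 1 so that the Laplace integral converges):

```python
import math

def laplace_num(f, s, upper=60.0, n=600000):
    """Trapezoidal approximation of the Laplace integral of f at s;
    s must exceed the growth rate of f (here f ~ e^t, so s > 1)."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: t * t * math.exp(t) / 2   # claimed original
F = lambda s: 1.0 / (s - 1.0) ** 3      # F(s) = 1/(s - 1)^3

print(laplace_num(f, 3.0), F(3.0))  # both approx 0.125
```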
b) G(s) = e^(st)/(s^2 + 4)^2, s_1,2 = ±2i are the isolated singular points of G (both poles of order 2),
E 20. Determine the originals f(t), t > 0 of the following Laplace transforms:
a) F(s) = (s^2 − s + 2)/((s^2 + 1)(s − 1));
b) F(s) = 1/((s − 1)^3·(s^2 + 4));
c) F(s) = (s + 4)/(s^2 − 3s + 2) + (s^2 + s + 1)/(s − 1)^3.
Solution. a) Based on the form of the denominator, the easiest method is to use partial fraction decomposition. One obtains
F(s) = (s^2 − s + 2)/((s^2 + 1)(s − 1)) = (As + B)/(s^2 + 1) + C/(s − 1), A, B, C ∈ R.
which implies that
f(t) = −sin t + e^t;
b) One computes
res(G, 2i) = e^(st)/((s − 1)^3·2s) |_(s=2i) = −e^(2it)·(2i + 1)^3/(500i),
res(G, −2i) = e^(st)/((s − 1)^3·2s) |_(s=−2i) = e^(−2it)·(−2i + 1)^3/(500i).
3. We conclude that
c) We write F(s) = F_1(s) + F_2(s), where F_1(s) = (s + 4)/(s^2 − 3s + 2) and F_2(s) = (s^2 + s + 1)/(s − 1)^3. We determine the original f_1(t) of F_1(s) using partial fraction decomposition. Hence,
F_1(s) = (s + 4)/(s^2 − 3s + 2) = (s + 4)/((s − 1)(s − 2)) = A/(s − 1) + B/(s − 2), A, B ∈ R.
It follows that s + 4 = A(s − 2) + B(s − 1) = s(A + B) − 2A − B and, identifying the coefficients, we get the system A + B = 1, −2A − B = 4, so A = −5 and B = 6. Therefore,
F_1(s) = −5·1/(s − 1) + 6·1/(s − 2);
hence,
f_1(t) = −5e^t + 6e^(2t).
For the second part of the exercise, which is determining the original f_2(t) of F_2(s), we will use residues.
1. One gets G(s) = e^(st)·(s^2 + s + 1)/(s − 1)^3 and s = 1 is the only isolated singular point of G (pole of order 3);
2. We compute the residue of G in s = 1 and we get that
res(G, 1) = (1/2!)·lim_(s→1) ((s − 1)^3·G(s))'' = (1/2)·lim_(s→1) (e^(st)·(s^2 + s + 1))''
= (1/2)·lim_(s→1) (e^(st)·(ts^2 + ts + t + 2s + 1))'
= (1/2)·lim_(s→1) [te^(st)·(ts^2 + ts + t + 2s + 1) + e^(st)·(2st + t + 2)]
= (1/2)·e^t·(3t^2 + 6t + 2);
3. We conclude that
f_2(t) = (1/2)·e^t·(3t^2 + 6t + 2).
Hence,
f(t) = f_1(t) + f_2(t) = −5e^t + 6e^(2t) + (1/2)·e^t·(3t^2 + 6t + 2)
= 6e^(2t) + (1/2)·e^t·(3t^2 + 6t − 8).
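The combined answer can be cross-checked numerically for s > 2 (a Python sketch we add; F below takes F_2(s) = (s^2 + s + 1)/(s − 1)^3, the transform used in the residue computation):

```python
import math

def laplace_num(f, s, upper=60.0, n=600000):
    """Trapezoidal approximation of the Laplace integral of f at s."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

def f(t):  # f(t) = 6 e^(2t) + e^t (3t^2 + 6t - 8)/2
    return 6 * math.exp(2 * t) + math.exp(t) * (3 * t * t + 6 * t - 8) / 2

def F(s):  # F(s) = (s + 4)/(s^2 - 3s + 2) + (s^2 + s + 1)/(s - 1)^3
    return (s + 4) / (s * s - 3 * s + 2) + (s * s + s + 1) / (s - 1) ** 3

print(laplace_num(f, 3.0), F(3.0))  # both approx 5.125
```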
W 19. Determine the originals f(t), t > 0 of the following Laplace transforms:
a) F(s) = (s^2 + s + 4)/(s^3 + s^2 + s + 1);
b) F(s) = 1/(s^2 − 4)^2;
c) F(s) = s/(s^2 + 3s + 2) + 1/(s^2 + s + 1)^2.
E 21. Determine the solution y(t), t ≥ 0 of the following initial value problems:
a) y'' + y' = sin t, y(0) = 1, y'(0) = 2;
b) y'' − 2y' + y = te^t, y(0) = 0, y'(0) = 1;
c) y''' − 3y'' = sinh(3t), y(0) = 1, y'(0) = 0, y''(0) = 0.
Hence, s^2·Y(s) − s − 2 + s·Y(s) − 1 = 1/(s^2 + 1), which implies that
Y(s) = (s + 3)/(s^2 + s) + 1/((s^2 + 1)(s^2 + s)) = (s^3 + 3s^2 + s + 4)/(s(s + 1)(s^2 + 1)).
For determining the original y(t), we will use partial fraction decomposition. Therefore,
Y(s) = A/s + B/(s + 1) + (Cs + D)/(s^2 + 1), A, B, C, D ∈ R.
We get the equality s^3 + 3s^2 + s + 4 = A(s + 1)(s^2 + 1) + Bs(s^2 + 1) + (Cs + D)(s^2 + s), ∀s ∈ C. Hence,
- for s = 0, we get that 4 = A;
- for s = −1, we get that 5 = −2B ⇒ B = −5/2;
- for s = i, we get that 1 = (Ci + D)(−1 + i) = −C − D + i(D − C) ⇒ D − C = 0, −C − D = 1 ⇒ C = −1/2, D = −1/2.
It follows that
Y(s) = 4·1/s − (5/2)·1/(s + 1) − (1/2)·s/(s^2 + 1) − (1/2)·1/(s^2 + 1)
= 4·L[1](s) − (5/2)·L[e^(−t)](s) − (1/2)·L[cos t](s) − (1/2)·L[sin t](s),
so the original is
y(t) = 4 − (5/2)·e^(−t) − (1/2)·cos t − (1/2)·sin t;
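The solution can be verified by direct substitution. A small Python check (ours, not from the book) uses the analytic derivatives of y and tests both the equation and the initial conditions:

```python
import math

def y(t):
    return 4 - 2.5 * math.exp(-t) - 0.5 * math.cos(t) - 0.5 * math.sin(t)

def yp(t):   # y'(t)
    return 2.5 * math.exp(-t) + 0.5 * math.sin(t) - 0.5 * math.cos(t)

def ypp(t):  # y''(t)
    return -2.5 * math.exp(-t) + 0.5 * math.cos(t) + 0.5 * math.sin(t)

assert abs(y(0.0) - 1.0) < 1e-12    # y(0) = 1
assert abs(yp(0.0) - 2.0) < 1e-12   # y'(0) = 2
for t in (0.3, 1.0, 2.7):
    assert abs(ypp(t) + yp(t) - math.sin(t)) < 1e-12  # y'' + y' = sin t
print("initial value problem satisfied")
```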
b) Here L[y'](s) = s·Y(s) − y(0) = s·Y(s).
For computing L[te^t](s), one can use Theorem 3.2.16 and formula (3.14) and we obtain that
L[te^t](s) = (−1)·(L[e^t](s))' = (−1)·(1/(s − 1))' = 1/(s − 1)^2.
It follows that s^2·Y(s) − 1 − 2s·Y(s) + Y(s) = 1/(s − 1)^2, which implies that
Y(s) = 1/(s − 1)^2 + 1/(s − 1)^4.
As L[te^t](s) = 1/(s − 1)^2, we just need to determine the original of 1/(s − 1)^4. For this we will use the residues method and the algorithm described in Exercise 19.
1. One gets G(s) = e^(st)/(s − 1)^4 and s = 1 is the only isolated singular point of G (pole of order 4);
2. We compute the residue of G in s = 1 and we get that
res(G, 1) = (1/3!)·lim_(s→1) ((s − 1)^4·G(s))''' = (1/6)·lim_(s→1) (e^(st))''' = (1/6)·lim_(s→1) (t^3·e^(st)) = t^3·e^t/6.
3. We conclude that the original of 1/(s − 1)^4 is t^3·e^t/6.
It follows that the solution of the initial value problem is
y(t) = te^t + t^3·e^t/6;
c) Applying the Laplace transform to the differential equation and using Theorem 3.2.1 (Linearity), one gets
L[y'''](s) − 3L[y''](s) = L[sinh(3t)](s).
From Theorem 3.2.14 (General Differentiation of the Original) and formula (3.12),
L[y'''](s) = s^3·Y(s) − s^2·y(0) − s·y'(0) − y''(0) = s^3·Y(s) − s^2,
L[y''](s) = s^2·Y(s) − s·y(0) − y'(0) = s^2·Y(s) − s.
Using this and the fact that L[sinh(3t)](s) = 3/(s^2 − 9) (see Example 3.2.4 (Hyperbolic Sine Function)), we obtain that s^3·Y(s) − s^2 − 3(s^2·Y(s) − s) = 3/(s^2 − 9), which implies that
Y(s) = 1/s + 3/(s^2·(s − 3)^2·(s + 3)).
We already know from Example 3.1.9 that 1/s = L[1](s), so we just need to determine the original of F(s) = 3/(s^2·(s − 3)^2·(s + 3)). For this we will use the residues method and the algorithm described in Exercise 19.
1. One gets G(s) = 3e^(st)/(s^2·(s − 3)^2·(s + 3)) and the isolated singular points are s_1 = 0 (pole of order 2), s_2 = 3 (pole of order 2) and s_3 = −3 (simple pole);
2. We compute the residues of G in all isolated singular points and we get that
res(G, 0) = 3·lim_(s→0) (e^(st)/((s − 3)^2·(s + 3)))' = (3t + 1)/27,
res(G, 3) = 3·lim_(s→3) (e^(st)/(s^2·(s + 3)))' = e^(3t)·(6t − 5)/108,
res(G, −3) = 3·lim_(s→−3) e^(st)/(s^2·(s − 3)^2) = e^(−3t)/108.
3. We conclude that f(t) = (3t + 1)/27 + e^(3t)·(6t − 5)/108 + e^(−3t)/108.
It follows that the solution of the initial value problem is
y(t) = 1 + (3t + 1)/27 + e^(3t)·(6t − 5)/108 + e^(−3t)/108.
b) y'' − y = t·cos t, y(0) = 1, y'(0) = 0;
c) y'' − 4y' + 4y = ∫_0^t xe^x dx, y(0) = 0, y'(0) = 1;
d) y'' + y = 1/cos t, y(0) = 0, y'(0) = −1;
e) y''' − 3y'' + 3y' − y = cos t − sin t, y(0) = 0, y'(0) = 1, y''(0) = −1;
f) y''' + y'' = t·sinh t, y(0) = 1, y'(0) = 2, y''(0) = 0.
hence,
y(t) = e^t − te^t + te^(2t);
b) One obtains
Y(s) = s/(s^2 − 1) + 1/(s^2 + 1)^2;
hence,
y(t) = cosh t + (sin t − t·cos t)/2;
c) One gets
Y(s) = 1/(s − 2)^2 + 1/(s(s − 1)^2·(s − 2)^2);
hence,
y(t) = te^(2t) + e^t·(t + 1) + (e^(2t)·(2t − 5) + 1)/4;
d) One obtains
Y(s) = −1/(s^2 + 1) + (1/(s^2 + 1))·L[1/cos t](s)
= −L[sin t](s) + L[sin t](s)·L[1/cos t](s)
= −L[sin t](s) + L[sin t ∗ (1/cos t)](s);
hence,
y(t) = −sin t + sin t ∗ (1/cos t) = −sin t + ∫_0^t sin(t − x)/cos x dx
= −sin t + ∫_0^t (sin t·cos x − sin x·cos t)/cos x dx
= −sin t + sin t·∫_0^t dx − cos t·∫_0^t tan x dx
= −sin t + t·sin t + cos t·ln(|cos t|);
e) One gets
Y(s) = (s − 4)/(s − 1)^3 + 1/((s − 1)^2·(s^2 + 1));
hence,
y(t) = e^t·(−3t^2 + 2t)/2 + (e^t·(t − 1) + cos t)/2
= (e^t·(−3t^2 + 3t − 1) + cos t)/2;
f) One obtains
Y(s) = (s^2 + 3s + 2)/(s^3 + s^2) + 2s/((s^2 − 1)^2·(s^3 + s^2))
= (s + 1)(s + 2)/(s^2·(s + 1)) + 2s/(s^2·(s + 1)(s^2 − 1)^2)
= (s + 2)/s^2 + 2/(s(s + 1)^3·(s − 1)^2)
= 1/s + 2/s^2 + 2/(s(s + 1)^3·(s − 1)^2);
hence,
y(t) = 1 + 2t + 2·[1 + e^t·(2t − 5)/16 − e^(−t)·(t^2/8 + t/2 + 11/16)]
= 3 + 2t + e^t·(2t − 5)/8 − e^(−t)·(t^2/4 + t + 11/8).
E 22. Determine the solution of the following systems of differential equations with the specified initial conditions:
a) x' + x + y' = 0, x − y'' = e^(−t), x(0) = 1, y(0) = 2, y'(0) = 0;
b) x − y' = sin t + cos t, x'' + y' = 0, x(0) = 1, x'(0) = 1, y(0) = 1, y'(0) = 0;
c) x' + x + y' + y = e^(−t), y' + z'' = 0, x + z' = e^t, x(0) = 0, y(0) = 0, z(0) = 0, z'(0) = 1.
L[x](s) − s·L[y](s) + y(0) = (s + 1)/(s^2 + 1),
s^2·L[x](s) − s·x(0) − x'(0) + s·L[y](s) − y(0) = 0,
which is equivalent to
L[x](s) − s·L[y](s) = (s + 1)/(s^2 + 1) − 1,
s^2·L[x](s) + s·L[y](s) = 2 + s.
Adding the two algebraic equations, we get that (s^2 + 1)·L[x](s) = (s + 1)/(s^2 + 1) + 1 + s, so
L[x](s) = (s + 1)/(s^2 + 1)^2 + (s + 1)/(s^2 + 1).
It follows that
L[y](s) = (3 + s)/(s(s^2 + 1)) − (s + 1)/(s(s^2 + 1)^2).
Hence,
x(t) = ((t − 2)·cos t + t·sin t)/4 + cos t + sin t.
For determining the original y, one can use partial fraction decomposition for (3 + s)/(s(s^2 + 1)) and residues for (s + 1)/(s(s^2 + 1)^2).
For the first fraction we get that (3 + s)/(s(s^2 + 1)) = 3/s − 3s/(s^2 + 1) + 1/(s^2 + 1), so
L^(−1)[(3 + s)/(s(s^2 + 1))](t) = 3 − 3cos t + sin t.
Hence,
y(t) = 3 − 3cos t + sin t − 1 + ((2 + t)·cos t − (1 − t)·sin t)/2
= 2 − 3cos t + sin t + ((2 + t)·cos t − (1 − t)·sin t)/2;
following algebraic system:
L[x](s) + L[y](s) = 1/(s + 1)^2,
L[y](s) + s·L[z](s) = 1/s,
L[x](s) + s·L[z](s) = 1/(s − 1).
It is advisable to use Cramer's rule for obtaining the solutions X(s) = L[x](s), Y(s) = L[y](s) and Z(s) = L[z](s). Hence, the determinant of the system is
∆ = det[1, 1, 0; 0, 1, s; 1, 0, s] = 2s
and
∆_X = det[1/(s + 1)^2, 1, 0; 1/s, 1, s; 1/(s − 1), 0, s] = s/(s + 1)^2 + s/(s − 1) − 1,
∆_Y = det[1, 1/(s + 1)^2, 0; 0, 1/s, s; 1, 1/(s − 1), s] = 1 + s/(s + 1)^2 − s/(s − 1),
∆_Z = det[1, 1, 1/(s + 1)^2; 0, 1, 1/s; 1, 0, 1/(s − 1)] = 1/(s − 1) + 1/s − 1/(s + 1)^2.
We get that
X(s) = ∆_X/∆ = (1/2)·(1/(s + 1)^2 + 1/(s − 1) − 1/s),
Y(s) = ∆_Y/∆ = (1/2)·(1/(s + 1)^2 − 1/(s − 1) + 1/s),
Z(s) = ∆_Z/∆ = (1/2)·(1/(s(s − 1)) + 1/s^2 − 1/(s(s + 1)^2)).
As X(s) and Y(s) are already decomposed in partial fractions, it follows that the originals are
x(t) = (1/2)·(te^(−t) + e^t − 1)
and
y(t) = (1/2)·(te^(−t) − e^t + 1).
Since 1/(s(s − 1)) = 1/(s − 1) − 1/s and 1/(s(s + 1)^2) = 1/s − 1/(s + 1) − 1/(s + 1)^2, it follows that
Z(s) = (1/2)·(1/(s − 1) − 2/s + 1/s^2 + 1/(s + 1) + 1/(s + 1)^2)
and the original is
z(t) = (1/2)·(te^(−t) + e^t + e^(−t) − 2 + t).
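Beyond checking the transforms, one can verify that the three originals satisfy the differential system itself. The Python sketch below (ours, not part of the book) approximates the derivatives with central finite differences:

```python
import math

def x(t): return 0.5 * (t * math.exp(-t) + math.exp(t) - 1)
def y(t): return 0.5 * (t * math.exp(-t) - math.exp(t) + 1)
def z(t): return 0.5 * (t * math.exp(-t) + math.exp(t) + math.exp(-t) - 2 + t)

def d1(f, t, h=1e-5):   # central first derivative
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):   # central second derivative
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

for t in (0.5, 1.5):
    assert abs(d1(x, t) + x(t) + d1(y, t) + y(t) - math.exp(-t)) < 1e-6
    assert abs(d1(y, t) + d2(z, t)) < 1e-4
    assert abs(x(t) + d1(z, t) - math.exp(t)) < 1e-6
print("system and originals consistent")
```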
Hence, using partial fraction decomposition for the last ratio, we obtain that
Y(s) = 1/s + 1/(s^2 + 4) − 1/s + (s + 2)/(s^2 + 2s + 2),
so y(t) = (1/2)·sin 2t + e^(−t)·cos t + e^(−t)·sin t;
b) One gets
L[x](s) = X(s) = (s + 2)/(4s(s^2 + 1)) = (1/2)·1/s − (1/2)·s/(s^2 + 1) + (1/4)·1/(s^2 + 1),
so x(t) = 1/2 − (1/2)·cos t + (1/4)·sin t.
On the other hand,
L[y](s) = Y(s) = −(s − 2)/(4s(s^2 + 1)) = (1/2)·1/s − (1/2)·s/(s^2 + 1) − (1/4)·1/(s^2 + 1),
so y(t) = 1/2 − (1/2)·cos t − (1/4)·sin t;
c) One obtains
L[x](s) = X(s) = −(3/2)·1/s^2 + 1/s − (3/2)·1/s^3,
so x(t) = −(3/2)·t + 1 − (3/4)·t^2.
On the other hand,
L[y](s) = Y(s) = −1/s^3 + 1/s^2,
so y(t) = −(1/2)·t^2 + t.
Finally,
L[z](s) = Z(s) = (3/2)·1/s^2 − 1/s − (1/2)·1/s^3,
so z(t) = (3/2)·t − 1 − (1/4)·t^2.
E 23. Determine the solution y(t), t ≥ 0 of the following integral equations:
a) y(t) − ∫_0^t cosh(2(t − x))·y(x) dx = 1 − t − 2t^2;
b) y(t) − ∫_0^t e^(t−x)·(t − x)·y(x) dx = sin t;
c) ∫_0^t y(t − x)·e^x dx − ∫_0^t y(x) dx = t·sinh t.
Using Theorem 3.2.26 (Convolution) and formula (3.20) alongside with Examples 3.2.4 (Hyperbolic Cosine Function) and 3.1.11 (Power Function), it follows that
Y(s) − Y(s)·s/(s^2 − 4) = 1/s − 1/s^2 − 2·2!/s^3,
which implies that
Y(s) = (s^2 − 4)/s^3 = 1/s − 4/s^3,
so y(t) = 1 − 2t^2;
b) We notice again that we have a convolution product (see (3.41)), namely
∫_0^t e^(t−x)·(t − x)·y(x) dx = te^t ∗ y(t).
Hence, the integral equation can be written as
Applying the Laplace transform together with Theorem 3.2.1 (Linearity), one gets
Y(s) − L[te^t](s)·Y(s) = 1/(s^2 + 1).
Using Theorem 3.2.26 (Convolution) and formula (3.20) alongside with Example 3.2.12, it follows that
(1 − 1/(s − 1)^2)·Y(s) = 1/(s^2 + 1) ⇒ Y(s) = (s − 1)^2/(s(s − 2)(s^2 + 1)).
Now we use partial fraction decomposition. Therefore,
(s − 1)^2/(s(s − 2)(s^2 + 1)) = A/s + B/(s − 2) + (Cs + D)/(s^2 + 1),
which implies that A = −1/2, B = 1/10, C = 2/5 and D = 4/5. Hence,
Y(s) = −(1/2)·1/s + (1/10)·1/(s − 2) + (2/5)·s/(s^2 + 1) + (4/5)·1/(s^2 + 1),
so y(t) = −1/2 + (1/10)·e^(2t) + (2/5)·cos t + (4/5)·sin t;
c) Here we have two integrals. The first is the convolution product e^t ∗ y(t) and the second one is just a primitive. Hence, applying the Laplace transform and Theorem 3.2.1 (Linearity), one gets
L[e^t ∗ y(t)](s) − L[∫_0^t y(x) dx](s) = L[t·sinh t](s).
Here,
L[t·sinh t](s) = (−1)·(L[sinh t](s))' = (−1)·(1/(s^2 − 1))' = 2s/(s^2 − 1)^2
(using Theorem 3.2.16 (Differentiation of the Image), formula (3.14) and Example 3.2.4 (Hyperbolic Sine Function)).
We get that
(1/(s − 1))·Y(s) − (1/s)·Y(s) = 2s/(s^2 − 1)^2 ⇔ Y(s)·1/(s(s − 1)) = 2s/((s − 1)^2·(s + 1)^2),
which implies that
Y(s) = 2s^2/((s − 1)(s + 1)^2).
Now we use residues.
1. One gets G(s) = 2s^2·e^(st)/((s − 1)(s + 1)^2) and the isolated singular points are s_1 = 1 (simple pole) and s_2 = −1 (pole of order 2);
2. We compute the residues of G in all isolated singular points and we obtain that
res(G, 1) = lim_(s→1) 2s^2·e^(st)/(s + 1)^2 = e^t/2,
res(G, −1) = lim_(s→−1) (2s^2·e^(st)/(s − 1))' = e^(−t)·(3 − 2t)/2;
3. We conclude that
y(t) = (e^t + e^(−t)·(3 − 2t))/2.
so y(t) = −(7/√3)·sin(√3·t) + 4·sin 2t;
b) One gets
L[y](s) = Y(s) = (s^2 − 1)/(s^2·(2s^2 − 1)),
so y(t) = t − (1/√2)·sinh(t/√2);
c) Here we have two convolution products. The first one is
y(t) ∗ cos t
which is equivalent to
((s^3 + s^2 − 2s − 2)/(s − 1))·Y(s) = s/(s − 1),
so
Y(s) = s/((s^2 − 2)(s + 1)).
We use partial fraction decomposition to determine the original. Hence,
s/((s^2 − 2)(s + 1)) = (As + B)/(s^2 − 2) + C/(s + 1)
= (−s + 2)/(s^2 − 2) + 1/(s + 1)
= −s/(s^2 − 2) + 2·1/(s^2 − 2) + 1/(s + 1),
so
y(t) = −cosh(√2·t) + √2·sinh(√2·t) + e^(−t);
L[t ∗ sin 2t](s) = L[t](s)·L[sin 2t](s) = 2/(s^2·(s^2 + 4)) (using Theorem 3.2.26 (Convolution) and Examples 3.2.2 (Sine Function) and 3.1.11 (Power Function)).
We get that
(1 + 3s^2/(s^2 + 4))·Y(s) = 2/(s^2·(s^2 + 4)),
which implies that
Y(s) = 1/(2s^2·(s^2 + 1)) = (1/2)·(1/s^2 − 1/(s^2 + 1)),
so
y(t) = (1/2)·(t − sin t).
Y(s) = (s − 1)/((s^2 + 1)(s^3 − 2s^2 + s − 2)) = (s − 1)/((s − 2)(s^2 + 1)^2).
res(G, i) = lim_(s→i) (e^(st)·(s − 1)/((s − 2)(s + i)^2))'
= lim_(s→i) te^(st)·(s − 1)/((s − 2)(s + i)^2)
− lim_(s→i) e^(st)·(1/((s − 2)^2·(s + i)^2) + 2(s − 1)/((s − 2)(s + i)^3))
= (e^(it)/4)·(t·(i − 3)/5 − (2 + 11i)/25),
b) We get that
(s + 1 + 1/s^2)·Y(s) = 1 + 1/s^2 + 1/(s + 1) − 1/s = 1 + 1/(s^3 + s^2),
2. F = laplace(f, p). This computes the Laplace transform F as a func-
tion of the parameter p instead of the default variable s;
Example 3.6.1.
>> f = exp(t) − sin(t);
F = laplace(f );
The answer is F = 1/(s − 1) − 1/(sˆ2 + 1).
Example 3.6.2.
>> syms p;
f = exp(t) − sin(t);
F = laplace(f, p);
The answer is F = 1/(p − 1) − 1/(pˆ2 + 1).
Example 3.6.3.
>> syms p x;
f = exp(x) − sin(x);
F = laplace(f, p);
The answer is F = 1/(p − 1) − 1/(pˆ2 + 1).
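Readers without access to MATLAB's Symbolic Math Toolbox can still check such results numerically. This Python sketch (an illustration we add, not a symbolic analogue) compares the Laplace integral of f(t) = e^t − sin t with F(s) = 1/(s − 1) − 1/(s^2 + 1) for a sample s > 1:

```python
import math

def laplace_num(f, s, upper=50.0, n=500000):
    """Trapezoidal approximation of the Laplace integral of f at s."""
    h = upper / n
    total = 0.5 * (f(0.0) + f(upper) * math.exp(-s * upper))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

f = lambda t: math.exp(t) - math.sin(t)
F = lambda s: 1.0 / (s - 1.0) - 1.0 / (s * s + 1.0)

print(laplace_num(f, 2.0), F(2.0))  # both approx 0.8
```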
Example 3.6.5.
>> syms t;
f = heaviside(t);
F = laplace(f );
The answer is F = 1/s.
Dirac and Delay
Example 3.6.6.
>> syms t s;
F = laplace(dirac(t − 5), t, s);
The answer is F = exp(−5*s).
Example 3.6.8.
>> syms t;
f = 1 − heaviside(t − 3);
F = laplace(f );
The answer is F = 1/s − exp(−3*s)/s.
Example 3.6.9.
>> syms t a;
f = heaviside(t) − 2*heaviside(t − 4) + heaviside(t − 2*4);
F = laplace(f );
The answer is F = exp(−8*s)/s − (2* exp(−4*s))/s + 1/s.
Example 3.6.10.
>> f = t*(heaviside(t) − heaviside(t − 3)) + heaviside(t − 3);
F = laplace(f );
The answer is F = 1/sˆ2 − exp(−3*s)/sˆ2 − (2* exp(−3*s))/s.
Exponentials
Example 3.6.11.
>> syms t a;
f = exp(a*t);
F = laplace(f );
The answer is F = −1/(a − s).
Example 3.6.12.
>> syms t a;
f = exp(i*a*t);
F = laplace(f );
The answer is F = −1/(a*i − s).
Example 3.6.13.
>> syms a b;
F = laplace(exp(a*t) − exp(b*t));
The answer is F = 1/(b − s) − 1/(a − s).
Translation
Example 3.6.14.
>> F = laplace(tˆ3* exp(5*t));
The answer is F = 6/(s − 5)ˆ4.
Example 3.6.15.
>> syms a n;
F = laplace(tˆn* exp(a*t));
The answer is F = piecewise(−1 < real(n) | 1 <= n & in(n, ’integer’),
gamma(n + 1)/(−a + s)ˆ(n + 1)).
Example 3.6.17.
>> syms a t;
f = cos(a*t);
F = laplace(f );
The answer is F = s/(aˆ2 + sˆ2).
Example 3.6.18.
>> f = 1 − cos(2*t);
F = laplace(f );
The answer is F = 1/s − s/(sˆ2 + 4).
Example 3.6.19.
>> f = 1 − cos(2*t);
F = simplify(laplace(f ));
The answer is F = 4/(s*(sˆ2 + 4)).
Example 3.6.20.
>> syms a;
F = laplace((cos(a*t))ˆ2);
The answer is F = (2*aˆ2 + sˆ2)/(s*(4*aˆ2 + sˆ2)).
Example 3.6.21.
>> syms a;
F = laplace(sinh(a*t));
The answer is F = −a/(aˆ2 − sˆ2).
Example 3.6.22.
>> syms a;
F = laplace(cosh(a*t));
The answer is F = −s/(aˆ2 − sˆ2).
Example 3.6.23.
>> syms a;
F = simplify(laplace(cos(a*t) + cosh(a*t)));
The answer is F = −(2*sˆ3)/(aˆ4 − sˆ4).
Example 3.6.24.
>> F = laplace(sin(2*t + pi));
The answer is F = −2/(sˆ2 + 4).
Example 3.6.25.
>> F = laplace(sin(2*t + 0.7*pi));
The answer is F = −((2ˆ(1/2)*(5 − 5ˆ(1/2))ˆ(1/2))/2 − s*(5ˆ(1/2)/4 + 1/4))/(sˆ2 + 4).
Example 3.6.26.
>> syms t a;
f = sin(a*t)/t;
F = laplace(f );
The answer is F = atan(a/s).
Example 3.6.27.
>> syms t n;
f = tˆn;
F = laplace(f );
The answer is F = piecewise([−1 < Re(n), gamma(n + 1)/sˆ(n + 1)]).
Example 3.6.28.
>> syms t a;
f = cos(a*t)/t;
F = laplace(f );
The answer is F = −eulergamma − log(s − a*i)/2 − log(s + a*i)/2.
Example 3.6.29.
>> syms t;
f = 1/sqrt(t);
F = laplace(f );
The answer is F = piˆ(1/2)/sˆ(1/2).
Example 3.6.30.
>> syms x y;
f = 1/sqrt(x);
laplace(f, x, y);
ans = piˆ(1/2)/yˆ(1/2)
Example 3.6.31.
>> syms a;
f = (1 − exp(a*t))/t;
F = laplace(f );
The answer is F = log(s − a) − log(s).
Example 3.6.32.
>> syms a b;
f = (exp(a*t) − exp(b*t))/t;
F = laplace(f );
The answer is F = log((b − s)/(a − s)).
Example 3.6.34.
>> F = simplify((1/(1 − exp(−pi*s/2)))*laplace((t* (pi/2 − t)*(heaviside(t)
− heaviside(t − pi/2)))));
The answer is F = (pi*s − 4* exp((pi*s)/2) + s*pi* exp((pi*s)/2) + 4)/
(2*sˆ3*(exp((pi*s)/2) − 1)).
Example 3.6.36.
>> syms f (t) s;
laplace(diff(diff(f (t)), t), t, s);
ans = sˆ2*laplace(f (t), t, s) − s*f (0) − D(f )(0)
Example 3.6.37.
>> syms f (t) s;
F = laplace(diff(diff(diff(f (t))), t), t, s);
The answer is F = sˆ3*laplace(f (t), t, s) − D(D(f ))(0) − s*D(f )(0) −
sˆ2*f (0).
Time Delay
Example 3.6.38.
>> syms t a f (t);
F = laplace(exp(a*t)*f (t));
The answer is F = laplace(f (t), t, s − a).
Example 3.6.39.
>> syms t a f (t);
F = laplace(f (t)* sin(a*t));
The answer is F = −(laplace(f (t), t, s − a*1i)*1i)/2 + (laplace(f (t), t,
s + a*1i)*1i)/2.
If laplace cannot find an explicit representation of the transform, it re-
turns an unevaluated call. As an example we have Theorem 3.2.5 (Similarity
or Change of Time Scale) and formula (3.7).
Example 3.6.40.
>> F = laplace(f (a*t));
The answer is F = laplace(f (a*t), t, s).
One can determine the Laplace transform of matrices. One uses matrices
of the same size to specify the transformation variables and evaluation points.
Matrices
Example 3.6.41.
>> syms t a b c s x y z;
M = laplace([sin(t), exp(2*a); dirac(b − 3), 1], [t, a; b, c], [s, x; y, z]);
The answer is
M=
[1/(sˆ2 + 1), 1/(x − 2)]
[ exp(−3*y), 1/z].
Example 3.6.42.
>> syms t s w x y z;
laplace([exp(t), t; sin(t), i*t], [t, x; t, t], [s, s; s, s]);
ans =
[1/(s − 1), t/s]
[1/(sˆ2 + 1), 1i/sˆ2].
When the input arguments are matrices, then laplace applies element-
wise. If the arguments are both scalar and matrix, then laplace expands the
scalar arguments into arrays of the same size as the matrix arguments with
all elements of the array equal to the scalar.
Example 3.6.43.
>> syms t a b c s x y z;
M = laplace(tˆ2, [t, a; b, c], [s, x; y, z]);
The answer is
M=
[2/sˆ3, tˆ2/x]
[tˆ2/y, tˆ2/z].
Example 3.6.44.
>> syms t s x y z;
M = laplace(tˆ2, t, [s, x; y, z]);
The answer is
M=
[2/sˆ3, 2/xˆ3]
[2/yˆ3, 2/zˆ3].
Example 3.6.45.
>> syms t s;
M = laplace(sinh(3*t), t, [s, s; s, s]);
The answer is
M=
[3/(sˆ2 − 9), 3/(sˆ2 − 9)];
[3/(sˆ2 − 9), 3/(sˆ2 − 9)].
Example 3.6.46.
>> syms f 1(t) f 2(t) s u;
f 1(t) = cosh(t);
f 2(t) = t* exp(4*t);
F = laplace([f 1(t) f 2(t)], t, [s u]);
The answer is F = [s/(sˆ2 − 1), 1/(u − 4)ˆ2].
3.6.2 Inverse Laplace Transform
The syntax is one of the following:
Example 3.6.47.
>> syms s;
f = ilaplace(5/(sˆ2 − 25));
The answer is f = exp(5*t)/2 − exp(−5*t)/2.
Another way that this can be done is the following:
>> syms s;
f = simplify(ilaplace(5/(sˆ2 − 25)));
The answer is f = sinh(5*t).
Example 3.6.48.
>> syms x s;
F = (sˆ2 − s + 2)/(sˆ3 − sˆ2 + s − 1); f = ilaplace(F, x);
The answer is f = exp(x) − sin(x).
Example 3.6.49.
>> syms a x y;
F = 1/sqrt(y + a); f = ilaplace(F, y, x);
F = 1/(a + y)ˆ(1/2);
The answer is f = exp(−a*x)/(xˆ(1/2)*piˆ(1/2)).
Example 3.6.50.
>> syms s;
F = exp(−pi*s)/s;
f = ilaplace(F );
The answer is f = heaviside(t − pi).
Example 3.6.52.
>> f = ilaplace(exp(−5*s));
The answer is f = dirac(t − 5).
Example 3.6.53.
>> syms a s;
F = piecewise(a < 0, 0, 0 <= a, exp(−a*s)); % i.e. F = 0 for a < 0 and
F = e−as for a ≥ 0
f = ilaplace(F );
The answer is f = piecewise(a < 0, 0, 0 <= a, dirac(a − t)).
row vectors with as many rows as outputs and as many columns as in-
puts. One can set den to the row vector representation of the common
denominator;
3. sys = tf(num, den, Ts) creates a discrete-time transfer function with
sample time Ts (in seconds). If T s = −1 or T s = [ ], then the sample
time is unspecified.
The default variable of the transfer matrix of a discrete-time system is z.
This is because the Z transform is used (instead of the Laplace transform).
Example 3.6.54.
>> A = [1 3 −4 3; 0 0 1 0; 0 0 0 1; −2 −1 −3 0];
B = [2 1; 0 0; 0 0; 0 1];
C = [4 −4 1 0; 0 9 0 0]; D = [1 2; 0 0];
sys = ss(A, B, C, D); % generates the continuous-time system sys
T M = tf(sys); % computes the transfer matrix TM of sys
The answer is
sys =
A=
x1 x2 x3 x4
x1 1 3 −4 3
x2 0 0 1 0
x3 0 0 0 1
x4 −2 −1 −3 0
B=
u1 u2
x1 2 1
x2 0 0
x3 0 0
x4 0 1
C=
x1 x2 x3 x4
y1 4 −4 1 0
y2 0 9 0 0
D=
u1 u2
y1 1 2
y2 0 0
Continuous-time state-space model.
TM =
Example 3.6.55.
>> h = 0.1;
sysd = ss(A, B, C, D, h); % generates the discrete-time system sysd with the
sample time h
T M = tf(sysd); % computes the transfer matrix TM of sysd
The answer is
sysd =
A=
x1 x2 x3 x4
x1 1 3 −4 3
x2 0 0 1 0
x3 0 0 0 1
x4 −2 −1 −3 0
B=
u1 u2
x1 2 1
x2 0 0
x3 0 0
x4 0 1
C=
x1 x2 x3 x4
y1 4 −4 1 0
y2 0 9 0 0
D=
u1 u2
y1 1 2
y2 0 0
TM =
Example 3.6.56.
>> num = {[1 − 2]; 3};
den = {[1 0 4]; [1 7]};
T M = tf(num, den);
The answer is
TM =
From input 1 to output...
s−2
1:
sˆ2 + 4
−36
2:
sˆ4 − sˆ3 + 9sˆ2 − 10s + 5
From input 2 to output...
3
1:
s+7
9s − 27
2:
sˆ4 − sˆ3 + 9sˆ2 − 10s + 5
Continuous-time transfer function.
Example 3.6.57.
>> num = {[1 0 3] [1 −1]; 0 [1 −4]};
den = [1 −5 6];
T s = 0.2;
T M d = tf(num, den, T s); % computes the discrete transfer matrix TMd with common denominator d(z) = z^2 − 5z + 6 and sample time T s = 0.2 seconds
The answer is
TS =
0.2000
T MD =
From input 1 to output...
zˆ2 + 3
1:
zˆ2 − 5z + 6
2:0
From input 2 to output...
z−1
1:
zˆ2 − 5z + 6
z−4
2:
zˆ2 − 5z + 6
Sample time: 0.2 seconds
Discrete-time transfer function.
3.6.4 Partial Fraction Decomposition. Residue
The syntax is one of the following:
1. [r, p, k] = residue(P, Q). This computes the coefficients, poles and quotient of the partial fraction decomposition of the ratio P/Q, where P and Q are polynomials;
2. [P, Q] = residue(r, p, k). This returns the coefficients of the two poly-
nomials P and Q corresponding to the partial fraction expansion.
If the ratio P/Q has simple poles p_1, p_2, ..., p_n, then the partial fraction decomposition is
P/Q = k + r_1/(s − p_1) + r_2/(s − p_2) + ··· + r_i/(s − p_i) + ··· + r_n/(s − p_n),
where r_1, r_2, ..., r_n are the residues of P/Q at these poles and k is a polynomial (see Example 3.6.58). The output is r = [r_1 r_2 ... r_i ... r_n], p = [p_1 p_2 ... p_i ... p_n] and k.
If a pole p_i has multiplicity m, then the partial fraction decomposition contains the terms
r_i1/(s − p_i) + r_i2/(s − p_i)^2 + ··· + r_ij/(s − p_i)^j + ··· + r_im/(s − p_i)^m
and in the vector r, r_i is replaced by the sequence r_i1 r_i2 ... r_ij ... r_im (see Example 3.6.59).
Example 3.6.58.
>> P = [1 0 1 −4]; Q = [1 −3 2];
[r, p, k] = residue(P, Q);
r=6
2
p=2
1
k= 1 3
Therefore,
(s^3 + s − 4)/(s^2 − 3s + 2) = s + 3 + 6/(s − 2) + 2/(s − 1).
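The expansion returned by residue can be confirmed independently by evaluating both sides away from the poles (a small Python check, not from the book):

```python
def lhs(s):
    # (s^3 + s - 4)/(s^2 - 3s + 2)
    return (s ** 3 + s - 4) / (s ** 2 - 3 * s + 2)

def rhs(s):
    # s + 3 + 6/(s - 2) + 2/(s - 1)
    return s + 3 + 6 / (s - 2) + 2 / (s - 1)

for s in (0.0, -1.5, 3.0, 10.0):
    assert abs(lhs(s) - rhs(s)) < 1e-9
print("partial fraction decomposition verified")
```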
Example 3.6.59.
>> P = [3 3 3]; Q = [1 0 −3 2];
[r, p, k] = residue(P, Q);
r=
1.0000
2.0000
3.0000
p=
− 2.0000
1.0000
1.0000
k=
[]
Therefore, (3s^2 + 3s + 3)/(s^3 − 3s + 2) = 1/(s + 2) + 2/(s − 1) + 3/(s − 1)^2 and
L^(−1)[(3s^2 + 3s + 3)/(s^3 − 3s + 2)] = e^(−2t) + 2e^t + 3te^t.
3. F = laplace(f(t), x, p). This computes the Laplace transform F as a
function of the variable p instead of the default variable s and considers
that f is a function of the variable x instead of the default variable t.
Example 3.7.1.
> with(inttrans):
laplace(exp(t) − sin(t), t, s)
1/(s − 1) − 1/(s^2 + 1)
Example 3.7.2.
> with(inttrans):
laplace(exp(t) − sin(t), t, p)
1/(p − 1) − 1/(p^2 + 1)
Example 3.7.3.
> with(inttrans):
laplace(exp(t) − sin(t), x, p)
1/(p − 1) − 1/(p^2 + 1)
F = 1
Example 3.7.5.
> f := t → 1:
assume(t, positive):
with(inttrans):
F = laplace(f (t), t, s)
F = 1/s
Dirac and Delay
Example 3.7.6.
> assume(a, positive):
F = laplace(Dirac(t − a), t, s)
F = e^(−s·a~)
Exponentials
Example 3.7.13.
> with(inttrans):
F = laplace(exp(a · t), t, s)
F = 1/(s − a)
Example 3.7.14.
> with(inttrans):
F = laplace(exp(I · a · t), t, s)
F = 1/(s − I·a)
Example 3.7.15.
> with(inttrans):
F = laplace(exp(a · t) − exp(b · t), t, s)
F = (−b + a)/((s − a)·(s − b))
Translation
Example 3.7.16.
> with(inttrans):
F = laplace(t3 · exp(5 · t), t, s)
F = 6/(s − 5)^4
Example 3.7.18.
> with(inttrans):
F = laplace((t3 ) · exp(a · t), t, s)
F = 6/(s − a)^4
Example 3.7.19.
> with(inttrans):
a := ’a’: b := ’b’:
f := t → exp(a · t) · cosh(b · t):
assume(t, positive):
laplace(f (t), t, s)
(s − a)/((s − a)^2 − b^2)
Power Functions
Example 3.7.24.
> with(inttrans):
f := t → t5 − 1:
F = laplace(f (t), t, s)
F = 120/s^6 − 1/s
Example 3.7.25.
> with(inttrans):
assume(n, natural):
F = laplace(tn , t, s)
F = n~!·s^(−n~−1)
Example 3.7.26.
> with(inttrans):
F = laplace(sin(a·t)/t, t, s)
F = arctan(a/s)
Example 3.7.27.
> with(inttrans):
F = laplace(1/sqrt(t), t, s)
F = sqrt(π/s)
Example 3.7.28.
> with(inttrans):
F = laplace((1 − exp(a · t))/t, t, s)
F = ln(s − a) − ln(s)
Time Delay
Example 3.7.29.
> with(inttrans):
laplace(Heaviside(t − a)·myfunc(t − a), t, s)
addtable(laplace, myfunc(t), Myfunc(s), t, s):
laplace(t·myfunc(t), t, s)
−(∂/∂s)·laplace(myfunc(t), t, s)
laplace(t^2·myfunc(t), t, s)
(∂^2/∂s^2)·laplace(myfunc(t), t, s)
Example 3.7.33.
> with(inttrans):
f = invlaplace(1/p^3 − s/(p^2 + 25), p, x)
f = x^2/2 − (s/5)·sin(5x)
Translation
Example 3.7.34.
> with(inttrans):
assume(a, positive):
f = invlaplace(1/sqrt(s + a), s, t)
f = e^(−at)/sqrt(π·t)
Example 3.7.36.
> with(inttrans):
f = invlaplace(1/s, s, t)
f = 1
Example 3.7.37.
> with(inttrans):
f = invlaplace(exp(t), s, t)
f = et Dirac(t)
Example 3.7.38.
> with(inttrans):
assume(a, positive):
f = invlaplace(1/(s − a), s, t)
f = e^(at)
Example 3.7.39.
> with(inttrans):
F := s → 1/s^5:
invlaplace(F(s), s, t)
t^4/24
Example 3.7.40.
> with(inttrans):
F := s → −7/(s^2 + 16):
invlaplace(F(s), s, t)
−(7/4)·sin(4t)
Example 3.7.41.
> with(inttrans):
F := s → 1/(s·(s − 2)):
invlaplace(F(s), s, t)
−1/2 + e^(2t)/2
3e^(−t)·cos(2t) + (3/2)·e^(−t)·sin(2t)
Example 3.7.43.
> with(inttrans):
ode1 := diff(y(t), t) + 2·y(t) = exp(2·t):
y(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
sinh(2t)/2
Example 3.7.44.
> with(inttrans):
ode1 := diff(y(t), t, t) + diff(y(t), t) = cos(t) + sin(t):
y(0) := 0: D(y)(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
1 − cos(t)
Example 3.7.45.
> with(inttrans):
ode1 := diff(y(t), t, t) + 4·y(t) = 0:
y(0) := 5: D(y)(0) := 0:
Lap1 := laplace(ode1, t, s):
Lp1 := subs(laplace(y(t), t, s) = Y (s), Lap1):
solve(Lp1, Y (s)): Y (s) := %:
invlaplace(Y (s), s, t)
5 cos(2t)
C := Matrix(...)
D := Matrix(...)
sys := TransferFunction(A, B, C, D)
PrintSystem(sys)
The parameters num and den are the coefficients of the numerator and
denominator, respectively. For single-input/single-output systems, num and
den are vectors formed with the coefficients of polynomials starting with the
coefficient of the highest order term. For multi-input/multi-output systems,
num and den are matrices of vectors.
Example 3.7.46.
> with(DynamicSystems):
sys := TransferFunction([3, −1], [1, 5, 6]):
PrintSystem(sys)
Transfer Function
continuous
1 output(s); 1 input(s)
inputvariable = [u1(s)]
outputvariable = [y1(s)]
tf1,1 = (3s − 1)/(s^2 + 5s + 6)
Chapter 4
Z Transform
4.1 Definition
The Laplace transform of an original function f : R → C is defined by the improper integral
F(p) = ∫_0^∞ f(t)·e^(−pt) dt.
By analogy, for a function f : Z → C one defines the discrete Laplace transform or the Dirichlet transform by the series
F(p) = Σ_(t=0)^∞ f(t)·e^(−pt).
Substituting z = e^p, this series becomes
F*(z) = Σ_(t=0)^∞ f(t)·z^(−t),    (4.1)
which is called the Z transform of the discrete function f. This justifies the following definitions.
ii) there exist M > 0 and R > 0 so that |f (t)| ≤ M Rt , ∀t ∈ {0, 1, . . .}.
The Z transform F ∗ (z) denoted by Z[f (t)] is also called the image of the
function f or the signal in the frequency domain.
Proposition 4.1.3. The series (4.1) is convergent on |z| > R (the exterior of the disk of radius R centered at 0, where R = R_f is the radius of the original function f). In any closed region |z| ≥ R_0 > R the series (4.1) is uniformly convergent.
Proof. One uses the geometric series Σ_(t=0)^∞ z^t = 1/(1 − z) with the convergence condition |z| < 1. Since R/|z| < 1 is equivalent to |z| > R, it follows that
|F*(z)| = |Σ_(t=0)^∞ f(t)·z^(−t)| ≤ Σ_(t=0)^∞ |f(t)|·|z|^(−t) ≤ Σ_(t=0)^∞ M·R^t·|z|^(−t)
= M·Σ_(t=0)^∞ (R/|z|)^t = M·|z|/(|z| − R) < ∞.
Similarly, for |z| ≥ R_0 > R (see Figure 4.1) one obtains |f(t)·z^(−t)| ≤ M·(R/R_0)^t. Since the geometric series Σ_(t=0)^∞ (R/R_0)^t with R/R_0 < 1 is convergent, according to the Weierstrass Criterion (see [10, pp. 265-266] and [20, pp. 108-109]), the series (4.1) is uniformly convergent.
Example 4.1.4. Let us consider the Kronecker function (see Figure 4.2)
δ_0(t) = 0 for t ≠ 0 and δ_0(0) = 1.
Figure 4.2: The Kronecker Function
This function plays the role of the Dirac distribution δ for the case of
the discrete-time control systems and signals. It is also called the discrete δ
function or the discrete impulse function.
Solution. Obviously,
    Z[δ₀(t)] = δ₀(0) + δ₀(1)·(1/z) + · · · + δ₀(t)·(1/z^t) + · · · = 1.
Example 4.1.5. Consider Heaviside’s discrete step function (see Figure 4.3)
    u(t) = { 0, t < 0;  1, t ≥ 0 }.
Example 4.1.6. Consider the exponential function
    p(t) = u(t)·a^t = { 0, t < 0;  a^t, t ∈ Z₊ }.
Solution. Since p(t) satisfies |p(t)| = |a|^t, its radius is R = |a|. Then, it follows for |z| > |a| that |a/z| < 1 and we can apply again the geometric series. We get that
    Z[p(t)] = Σ_{t=0}^∞ a^t z^{−t} = Σ_{t=0}^∞ (a/z)^t = z/(z − a).
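Formulas like Z[a^t] = z/(z − a) are easy to sanity-check numerically. The sketch below (Python, not from the book; the sample values a = 0.5, z = 2 are arbitrary) compares a partial sum of the defining series with the closed form.

```python
def z_transform_partial(f, z, terms=200):
    # partial sum of F*(z) = sum_{t>=0} f(t) z^{-t}
    return sum(f(t) * z**(-t) for t in range(terms))

a, z = 0.5, 2.0                       # arbitrary sample values with |z| > |a|
approx = z_transform_partial(lambda t: a**t, z)
assert abs(approx - z/(z - a)) < 1e-12
```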
Example 4.2.2. Consider the function f (t) = cos(ωt), ω > 0.
Using Theorem 4.2.1 (Linearity), the expression of the cosine from Euler’s
formula and the transform of the exponential (see Example 4.1.6), one obtains
    Z[cos(ωt)] = Z[(e^{iωt} + e^{−iωt})/2] = (1/2)(Z[e^{iωt}] + Z[e^{−iωt}])
               = (1/2)(z/(z − e^{iω}) + z/(z − e^{−iω}))
               = (1/2)·z[2z − (e^{iω} + e^{−iω})]/(z² − (e^{iω} + e^{−iω})z + e^{iω}e^{−iω})
               = z(z − cos ω)/(z² − 2z cos ω + 1).
Analogously one gets the following Z transforms:
    Z[sin(ωt)] = z sin ω/(z² − 2z cos ω + 1);
    Z[cosh(ωt)] = z(z − cosh ω)/(z² − 2z cosh ω + 1);
    Z[sinh(ωt)] = z sinh ω/(z² − 2z cosh ω + 1);
    Z[sin(ωt + ϕ)] = cos ϕ·Z[sin ωt] + sin ϕ·Z[cos ωt] = z(z sin ϕ + sin(ω − ϕ))/(z² − 2z cos ω + 1).
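These trigonometric transforms can be checked the same way; the snippet below is an illustrative Python check (the sample values ω = 0.7, z = 1.5 are arbitrary) comparing a partial sum of Σ cos(ωt) z^{−t} with z(z − cos ω)/(z² − 2z cos ω + 1).

```python
import math

def z_transform_partial(f, z, terms=400):
    # partial sum of F*(z) = sum_{t>=0} f(t) z^{-t}
    return sum(f(t) * z**(-t) for t in range(terms))

w, z = 0.7, 1.5                       # arbitrary sample values, |z| > 1
approx = z_transform_partial(lambda t: math.cos(w*t), z)
exact = z*(z - math.cos(w)) / (z**2 - 2*z*math.cos(w) + 1)
assert abs(approx - exact) < 1e-9
```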
Theorem 4.2.3 (Similarity). If R is the radius of the original function f
and a ∈ C \ {0}, then for |z| > |a|R, it follows that
    Z[a^t f(t)] = F*(z/a). (4.3)
262
Proof. It follows that |a^t f(t)| ≤ |a|^t M R^t = M(|a|R)^t. Hence, the radius of the original function a^t f(t) is |a|R. For |z| > |a|R, one obtains
    Z[a^t f(t)] = Σ_{t=0}^∞ [a^t f(t)] z^{−t} = Σ_{t=0}^∞ f(t) (z/a)^{−t} = F*(z/a).
Example 4.2.4. Determine the Z transform of the function f(t) = a^t cos ωt, a ≠ 0, ω > 0.
Solution. According to Theorem 4.2.3 (Similarity) and Example 4.2.2, it follows that
    Z[a^t cos ωt] = ((z/a)·((z/a) − cos ω))/((z/a)² − 2(z/a) cos ω + 1) = z(z − a cos ω)/(z² − 2az cos ω + a²).
In particular, for a = e^λ, one gets
    Z[e^{λt} cos ωt] = z(z − e^λ cos ω)/(z² − 2ze^λ cos ω + e^{2λ}).
Theorem 4.2.5 (Time Delay).
    Z[f(t − n)] = z^{−n} F*(z),  ∀n ∈ N*. (4.4)
Proof. According to Definition 4.1.2, it follows that
    Z[f(t − n)] = Σ_{t=0}^∞ f(t − n) z^{−t}.
263
Example 4.2.6. Compute Z[u(t − n)], ∀n ∈ N∗ .
Theorem 4.2.9 (Second Time Delay).
    Z[f(t + n)] = z^n (F*(z) − Σ_{t=0}^{n−1} f(t) z^{−t}),  ∀n ∈ N*. (4.5)
Proof. As in the proof of Theorem 4.2.5, one makes the following change in
the index of summation: t + n = k; hence, t = 0 becomes k = n. Then one
adds and subtracts the sum which lacks in the series of F ∗ (z). One obtains
    Z[f(t + n)] = Σ_{t=0}^∞ f(t + n) z^{−t} = Σ_{k=n}^∞ f(k) z^{−(k−n)}
                = z^n (Σ_{k=0}^∞ f(k) z^{−k} − Σ_{k=0}^{n−1} f(k) z^{−k})
                = z^n (F*(z) − Σ_{t=0}^{n−1} f(t) z^{−t}).
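The advance formula (4.5) can also be verified numerically; the following Python sketch (not from the book; the sample values a = 0.4, z = 2, n = 2 are arbitrary) checks it for f(t) = a^t.

```python
def zt(f, z, terms=300):
    # partial sum of F*(z) = sum_{t>=0} f(t) z^{-t}
    return sum(f(t) * z**(-t) for t in range(terms))

a, z, n = 0.4, 2.0, 2                 # arbitrary sample values, |z| > |a|
lhs = zt(lambda t: a**(t + n), z)     # Z[f(t+n)] for f(t) = a^t
rhs = z**n * (zt(lambda t: a**t, z) - sum(a**t * z**(-t) for t in range(n)))
assert abs(lhs - rhs) < 1e-12
```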
hence,
    F*(z) = z^T F*_T(z)/(z^T − 1).
This implies that
    F*(z) = (1/(z^T − 1))·Σ_{t=0}^{T−1} f(t) z^{T−t}.
For example, the function f(t) = { 0, t = 3k;  1, t = 3k + 1;  2, t = 3k + 2 } (see Figure 4.5) has the period T = 3 and the transform F*(z) = (z² + 2z)/(z³ − 1).
If
    F*(z) = (a₀ z^T + a₁ z^{T−1} + · · · + a_{T−1} z)/(z^T − 1),
then it is the transform of the function f which is periodic of period T and has the values f(kT + i) = a_i, i = 0, ..., T − 1, k ∈ N.
For discrete functions, the role of the derivative is played by the difference
function.
Obviously, the operator ∆ is linear and one can prove by recurrence that
    ∆^n f(t) = Σ_{k=0}^{n} (−1)^{n−k} \binom{n}{k} f(t + k).
Proof. By using Theorem 4.2.1 (Linearity) and Theorem 4.2.9 (Second Time
Delay) for n = 1, one obtains
In the first sum, the index of summation starts from 0 by the definition of the
Z transform. Also notice that the first term is f (0), which does not depend
on t. In the second sum, because of the derivative, this f (0) vanishes, so the
lower index of summation is now 1. Then we want to use again the definition
of the Z transform. Thus, the series needs to start from 0. This is not an
issue because the corresponding term for t = 0 is 0, so adding it does not
change anything.
Solution. One gets
    Z[t] = −Z[−t·u(t)] = −z (z/(z − 1))′ = z/(z − 1)²;
    Z[t²] = −Z[−t·t] = −z (z/(z − 1)²)′ = z(z + 1)/(z − 1)³;
    Z[t³] = −Z[−t·t²] = −z (z(z + 1)/(z − 1)³)′ = z(z² + 4z + 1)/(z − 1)⁴.
The inverse function of the difference, which plays the role of the integral
in the case of discrete functions, is the sum.
Definition 4.2.16. One calls the sum of the function f and one denotes it
by Sf (t) the function
    Sf(t) = { 0, t ≤ 0;  Σ_{k=0}^{t−1} f(k), t ∈ {1, 2, ...} }. (4.8)
Obviously,
    ∆Sf(t) = Sf(t + 1) − Sf(t) = Σ_{k=0}^{t} f(k) − Σ_{k=0}^{t−1} f(k) = f(t)
and
    S(∆f(t)) = Σ_{k=0}^{t−1} ∆f(k) = Σ_{k=0}^{t−1} (f(k + 1) − f(k)) = f(t) − f(0).
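Both identities are finite computations and can be tested directly; the following Python sketch (not from the book; the sample signal is arbitrary) checks ∆Sf = f and S(∆f) = f − f(0).

```python
def delta(f):
    # the difference operator: (Δf)(t) = f(t+1) - f(t)
    return lambda t: f(t + 1) - f(t)

def S(f):
    # the "sum" of f: Sf(t) = sum_{k=0}^{t-1} f(k) for t >= 1, and 0 for t <= 0
    return lambda t: sum(f(k) for k in range(t)) if t > 0 else 0

f = lambda t: t**2 + 3*t + 1          # an arbitrary sample signal
for t in range(1, 20):
    assert delta(S(f))(t) == f(t)             # ΔS f = f
    assert S(delta(f))(t) == f(t) - f(0)      # S Δf = f - f(0)
```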
Hence, G*(z) = F*(z)/(z − 1), which is equivalent to formula (4.9).
Example 4.2.21. Compute Z[(−1)^{t−1}/t].
Solution. Using Example 4.2.8 and applying Theorem 4.2.20 (Integration of the Image) to the function f(t) = { 0, t ≤ 0;  (−1)^{t−1}, t = 1, 2, ... }, one obtains the transform
    Z[(−1)^{t−1}/t] = ∫_z^∞ du/(u(u + 1)) = ln(u/(u + 1)) |_z^∞ = −ln(z/(z + 1)) = ln(1 + 1/z).
Theorem 4.2.24 (Product).
    Z[f(t)·g(t)] = (1/2πi) ∮_{|ζ|=r} F*(ζ)·G*(z/ζ)·dζ/ζ, (4.13)
where R_f < r < |z|/R_g.
Proof. We get that (1/2πi) ∮_{|ζ|=r} F*(ζ) G*(z/ζ) dζ/ζ is the same as
    (1/2πi) ∮_{|ζ|=r} (Σ_{t=0}^∞ f(t) ζ^{−t}) (Σ_{k=0}^∞ g(k) (z/ζ)^{−k}) dζ/ζ,
which is equal to Σ_{t=0}^∞ Σ_{k=0}^∞ f(t) g(k) z^{−k}·(1/2πi) ∮_{|ζ|=r} ζ^{k−t−1} dζ.
The last equality was obtained integrating term by term the uniformly convergent series. By Cauchy's Fundamental Theorem ([26, Theorem 4.2.1]) and using the substitution ζ = re^{iθ}, θ ∈ [0, 2π),
    ∮_{|ζ|=r} ζ^{k−t−1} dζ = { 0, k − t − 1 ≠ −1 ⇔ k ≠ t;  2πi, k − t − 1 = −1 ⇔ k = t }.
Hence, the last double series reduces to Σ_{t=0}^∞ f(t) g(t) z^{−t} = Z[f(t)g(t)].
By applying Residue Theorems ([26, Chapter 5]), if the function F*(ζ)/ζ has poles a₁, ..., a_n, one obtains the following result.
Corollary 4.2.25.
    Z[f(t)g(t)] = Σ_{j=1}^{n} res(F*(ζ)·(1/ζ)·G*(z/ζ), a_j).
Example 4.2.26. Compute Z[te^{λt}].
Solution. From Example 4.1.6 and Example 4.2.15 one obtains, for 1 < r < |z|/e^{Re(λ)}, the following:
    Z[te^{λt}] = (1/2πi) ∮_{|ζ|=r} (ζ/(ζ − 1)²)·((z/ζ)/((z/ζ) − e^λ))·dζ/ζ
               = (1/2πi) ∮_{|ζ|=r} z/((ζ − 1)²(z − ζe^λ)) dζ
               = res(z/((ζ − 1)²(z − ζe^λ)), 1)
               = z lim_{ζ→1} (1/(z − ζe^λ))′ = z lim_{ζ→1} e^λ/(z − ζe^λ)²
               = z e^λ/(z − e^λ)².
Notice that the inequalities that r verifies imply that 1 belongs to the disk |ζ| < r, while ζ = z/e^λ lies outside of it. This is why the residue at this second point is not taken into consideration.
The same result can be obtained using Theorem 4.2.3 (Similarity). We get
    Z[e^{λt} t] = Z[(e^λ)^t t] = (z/e^λ)/((z/e^λ) − 1)² = z e^λ/(z − e^λ)².
One can also use Theorem 4.2.14 (Differentiation of the Image) and obtain the following identical result:
    Z[te^{λt}] = −z (z/(z − e^λ))′ = −z·(z − e^λ − z)/(z − e^λ)² = z e^λ/(z − e^λ)².
Theorem 4.2.27 (Initial Value).
    f(0) = lim_{z→∞} F*(z). (4.14)
Proof. The function F*(z) = f(0) + f(1)/z + f(2)/z² + · · · + f(t)/z^t + · · · can be written as F*(z) = f(0) + G(z)/z, where G(z) = f(1) + f(2)/z + · · ·. The function G(z) is analytic on |z| > R, as is F*(z). Hence, lim_{z→∞} G(z) exists, is finite and lim_{z→∞} G(z)/z = 0. In conclusion,
    lim_{z→∞} F*(z) = f(0) + lim_{z→∞} G(z)/z = f(0).
Remark 4.2.28. In the same manner one can prove the following formulas,
which together with (4.14) can be used to determine the original function
f (t) when its transform F ∗ (z) is known:
    f(1) = lim_{z→∞} z(F*(z) − f(0)),
    f(2) = lim_{z→∞} z²(F*(z) − f(0) − f(1)z^{−1}),
    ...................................................
    f(t) = lim_{z→∞} z^t (F*(z) − Σ_{k=0}^{t−1} f(k) z^{−k}),  t = 1, 2, .... (4.15)
Bear in mind that, by definition, the sum of the series is equal to the limit of the sequence (S_t)_{t∈N} of the partial sums.
Example 4.2.30. For the transform F*(z) = z/(z − 1), it is obvious that lim_{z→1, |z|>1} ((z − 1)F*(z)) = lim_{z→1, |z|>1} z = 1. Actually, this is the Z transform of the unit function u(t) (see Example 4.1.5), for which lim_{t→∞} u(t) = 1.
Method I
Theorem 4.3.1. If F*(z) is an analytic function on the domain |z| > R, then its original function exists, is unique and is given by the formula
    f(t) = { 0, t < 0;  (1/2πi) ∮_{|z|=r, r>R} F*(z) z^{t−1} dz, t = 0, 1, ... }. (4.17)
Figure 4.6: The Singular Points of F ∗ (z) Inside the Disk |z| < R
In the case t = 0 this formula remains true if the numerator of F ∗ (z) has a
factor z, otherwise one adds in formula (4.18) the residue at z = 0.
Example 4.3.2. Determine the original of the function F*(z) = z/(z² − 1).
Solution. The function F*(z) = z/(z² − 1) is analytic on the domain |z| > 1. According to formulas (4.17) and (4.18), its original is
    f(t) = (1/2πi) ∮_{|z|=r, r>1} (z/(z² − 1)) z^{t−1} dz = res(z^t/(z² − 1), 1) + res(z^t/(z² − 1), −1)
         = z^t/(2z) |_{z=1} + z^t/(2z) |_{z=−1} = (1/2)(1 + (−1)^{t−1}).
Method II
Theorem 4.3.3. If F*(z) is an analytic function on the domain |z| > R, then its original function is given by the formula
    f(t) = { 0, t < 0;  (1/t!)·(F*(1/z))^{(t)} |_{z=0}, t = 0, 1, ... }. (4.19)
Proof. If one replaces z with 1/z in the Laurent series expansion F*(z) = Σ_{t=0}^∞ f(t) z^{−t}, then one obtains the Taylor series F*(1/z) = Σ_{t=0}^∞ f(t) z^t. Hence, f(t), t = 0, 1, ... are the coefficients of the Taylor series of the function F*(1/z) (which is analytic on the disc |z| < 1/R).
If a function g(z) is analytic on a disc centered at a, it has the Taylor series expansion g(z) = Σ_{n=0}^∞ c_n (z − a)^n, where c_n = g^{(n)}(a)/n!. By replacing a = 0, g(z) = F*(1/z), n = t and f(t) = c_t, one gets formula (4.19).
Example 4.3.4. Compute the original of the function F*(z) = Ln(z/(z − 1)).
Solution. One gets F*(1/z) = Ln(1/(1 − z)) = −Ln(1 − z) and
    F*(1/z) |_{z=0} = 0,  (F*(1/z))′ = 1/(1 − z),  ...,  (F*(1/z))^{(t)} = (t − 1)!/(1 − z)^t;
    f(t) = (1/t!)·(t − 1)!/(1 − z)^t |_{z=0} = 1/t,  for t = 1, 2, ....
Method III
The original f (t) can be determined by formulas (4.14) and (4.15) (see
Remark 4.2.28).
Method IV
Theorem 4.3.5 (Recurrence Computation of the Original). If F*(z) is a rational function,
    F*(z) = (a_n z^n + a_{n−1} z^{n−1} + · · · + a₀)/(z^n + b_{n−1} z^{n−1} + · · · + b₀),
then the original is given by
    f(t) = { 0, t < 0;
             a_n, t = 0;
             a_{n−t} − Σ_{j=0}^{t−1} b_{n−t+j} f(j), t = 1, ..., n;
             −Σ_{j=t−n}^{t−1} b_{n+j−t} f(j), t ≥ n + 1 }. (4.20)
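The recurrence idea behind Theorem 4.3.5 amounts to long division of the numerator by the denominator in powers of z^{−1}. The following Python sketch (the function name and coefficient-list representation are illustrative, not from the book) implements this division with exact rational arithmetic and recovers the original of F*(z) = z/(z² − 1) from Example 4.3.2.

```python
from fractions import Fraction

def invert_rational(num, den, terms):
    """First `terms` values f(0), f(1), ... of the original of
    F*(z) = num(z)/den(z); num and den are coefficient lists in
    descending powers of z, padded to the same length, den monic."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    out = []
    for _ in range(terms):
        q = num[0]                    # next Laurent coefficient f(t)
        out.append(q)
        # subtract q*den, drop the cancelled leading term, shift in a zero
        num = [c - q*d for c, d in zip(num[1:], den[1:])] + [Fraction(0)]
    return out

# F*(z) = z/(z^2 - 1):  num = 0*z^2 + 1*z + 0,  den = z^2 + 0*z - 1
assert invert_rational([0, 1, 0], [1, 0, -1], 8) == [0, 1, 0, 1, 0, 1, 0, 1]
```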
Thus, we get (4.20) by replacing ct with f (t).
Method V
If F ∗ (z) is a rational function, one decomposes it in partial fractions and
expands them in Laurent series around the point at infinity (i.e. for |z| > R,
for a suitable R) using the geometric series. For other types of functions
F ∗ (z), the exponential series, binomial series etc. can be used.
Example 4.3.6. Determine the original of the function given by F*(z) = (z² + 1)/(z²(z² − 3z + 2)).
Solution. The function F*(z) is analytic on the domain |z| > 2 and has the following decomposition in partial fractions:
    F*(z) = (3/4)·(1/z) + (1/2)·(1/z²) − 2·(1/(z − 1)) + (5/4)·(1/(z − 2)).
For |z| > 2,
    1/(z − 2) = (1/z)·1/(1 − 2/z) = (1/z) Σ_{t=0}^∞ (2/z)^t = Σ_{t=0}^∞ 2^t/z^{t+1} = Σ_{t=1}^∞ 2^{t−1} z^{−t}.
Analogously, 1/(z − 1) = Σ_{t=1}^∞ z^{−t}. One obtains the expansion in Laurent series
    F*(z) = 1/z² + Σ_{t=3}^∞ (5·2^{t−3} − 2) z^{−t};
coefficients
    a_n ∆^n y(t) + a_{n−1} ∆^{n−1} y(t) + · · · + a₁ ∆y(t) + a₀ y(t) = f(t), (4.21)
where a_i ∈ R, i ∈ {0, 1, ..., n}, a_n ≠ 0, and the right-hand side f(t) and the
unknown function y(t) are original functions.
One imposes the following initial conditions:
    y(0) = y₀, ∆y(0) = y₁, ..., ∆^{n−1}y(0) = y_{n−1}. (4.22)
According to Theorem 4.2.13 (Difference),
    Z[∆²y(t)] = (z − 1)²Y*(z) − z[(z − 1)y₀ + ∆y(0)]
              = (z − 1)²Y*(z) − z[(z − 1)y₀ + y₁],
    ...................................................,
    Z[∆^n y(t)] = (z − 1)^n Y*(z) − z Σ_{i=0}^{n−1} (z − 1)^{n−i−1} ∆^i y(0)
                = (z − 1)^n Y*(z) − z Σ_{i=0}^{n−1} (z − 1)^{n−i−1} y_i. (4.23)
One applies the Z transform to equation (4.21) and one replaces the
corresponding images with those given by the formulas (4.23). The difference
equation (4.21) becomes the algebraic equation
    a_n[(z − 1)^n Y*(z) − z Σ_{i=0}^{n−1} (z − 1)^{n−i−1} y_i] + · · · + a₁[(z − 1)Y*(z) − z y₀] +
The solution of equation (4.24) is
    Y*(z) = (F*(z) + G(z))/C(z),
and the original of this solution y(t) = Z −1 [Y ∗ (z)] is the solution of the
initial value (Cauchy) problem (4.21), (4.22).
A second method of solving the initial value problem represented by the
difference equation (4.21) and the initial conditions (4.22) is based upon
Theorem 4.2.9 (Second Time Delay). Using the definition of the differences of order k ∈ {1, 2, . . . , n}, we have
where
    b_n = a_n,
    b_{n−1} = a_{n−1} − \binom{n}{1} a_n,
    .................................,
    b₂ = a₂ − \binom{3}{1} a₃ + · · · + (−1)^{n−2} \binom{n}{n−2} a_n,
    b₁ = a₁ − \binom{2}{1} a₂ + · · · + (−1)^{n−1} \binom{n}{n−1} a_n,
    b₀ = a₀ − \binom{1}{1} a₁ + · · · + (−1)^n \binom{n}{n} a_n.
According to Theorem 4.2.9 (Second Time Delay),
By applying the Z transform to equation (4.26) and taking into account the
initial conditions (4.22), equation (4.26) is transformed into the algebraic
equation
    Y*(z) = (F*(z) + H(z))/C*(z)
Solution. Using formulas (4.25), one writes the equation in the form
y(t + 2) − 2y(t + 1) + y(t) − 5y(t + 1) + 5y(t) + 6y(t) = 0.
Hence, y(t + 2) − 7y(t + 1) + 12y(t) = 0. One applies the Z transform (and
Theorem 4.2.9 (Second Time Delay)) and one obtains the algebraic equation
    z²(Y*(z) − y(0) − y(1)z^{−1}) − 7z(Y*(z) − y(0)) + 12Y*(z) = 0.
This is equivalent to
    z²(Y*(z) − 1 − 3z^{−1}) − 7z(Y*(z) − 1) + 12Y*(z) = 0,
Figure 4.7: Black Box Representation of a Discrete-Time Control System
1. summators
A summator (or adder ), represented in Figure 4.8, has m inputs and
one output. These variables verify the input-output map y(t) = u1 (t)+
u2 (t) + · · · + um (t);
3. delayers
A delayer , represented in Figure 4.10, has one input, one output and
the input-output map y(t + 1) = u(t). If a system Σ has n delayers,
one associates to the system n state variables xi (t), where xi (t) is the
output variable of the ith delayer at the moment t.
We denote by aij (t), bij (t), cij (t) and dij (t) the gains on the following
connections:
respectively.
The scheme of a linear system Σ is represented in Figure 4.11.
At moment t, the signal from the output of the summator connected with
the ith delayer is xi (t+1) (the signal which will be at the output of the delayer
i at moment t + 1). It is equal to the sum of the signals which come from
the inputs j, j = 1, ..., m, and the delayers j, j = 1, ..., n. One obtains the state equations of the system Σ, namely
    x_i(t + 1) = Σ_{j=1}^{n} a_{ij}(t) x_j(t) + Σ_{j=1}^{m} b_{ij}(t) u_j(t),  i = 1, ..., n. (4.28)
Analogously, by the examination of the input and output signals at the summator of the output terminal i, one obtains the output equations of the system Σ, namely
    y_i(t) = Σ_{j=1}^{n} c_{ij}(t) x_j(t) + Σ_{j=1}^{m} d_{ij}(t) u_j(t),  i = 1, ..., p. (4.29)
One calls the vectors x(t) = (x1 (t), ..., xn (t))T , u(t) = (u1 (t), ..., um (t))T ,
y(t) = (y1 (t), ..., yp (t))T the state, the input (or the control ) and the output
of the system Σ at the moment t, respectively.
One denotes by A(t), B(t), C(t) and D(t) the n × n, n × m, p × n, p × m
matrices which have the elements aij (t), bij (t), cij (t) and dij (t), respectively.
Then the equations (4.28) and (4.29) can be written as
    Σ:  x(t + 1) = A(t)x(t) + B(t)u(t),  (4.30)
        y(t) = C(t)x(t) + D(t)u(t).  (4.31)
Using Theorem 4.2.1 (Linearity) and Theorem 4.2.9 (Second Time Delay)
and applying the Z transform to equations (4.30) and (4.31), one gets
Equation (4.32) can be written as (zI − A)X ∗ (z) = BU ∗ (z) + zx(0). By left
multiplication with (zI − A)−1 , for z ∈ C \ σ(A), one obtains
Replacing X*(z) in (4.33), one gets the so-called input-output map of the
system Σ in the frequency domain
For the initial state x(0) = 0, this map has the form
where T (z) = C(zI − A)−1 B + D. The matrix T (z) is called the transfer
matrix of the system Σ and plays an important role in systems and control
theory.
Notice that the transfer matrices of continuous-time and discrete-time
systems coincide, the only difference being that the variables are s and z,
respectively.
4.5 Exercises
E 25. Determine Z[f (t)](z) for the following original functions:
a) f(t) = 2^t − t, t ∈ N;
b) f(t) = sinh t + 2 sin(πt/4), t ∈ N;
c) f(t) = t² − cosh(ωt), ω > 0, t ∈ N;
d) f(t) = t e^t − 2·4^t cos(πt), t ∈ N.
Solution. a) Using Theorem 4.2.1 (Linearity), Theorem 4.2.14 (Differen-
tiation of the Image) and Examples 4.1.6 and 4.1.5, one obtains
    Z[f(t)](z) = Z[2^t − t](z) = Z[2^t](z) − Z[t](z) = z/(z − 2) − Z[t·u(t)](z)
               = z/(z − 2) − (−z)(Z[1](z))′ = z/(z − 2) + z(z/(z − 1))′
               = z/(z − 2) + z·(−1)/(z − 1)² = z/(z − 2) − z/(z − 1)²;
Thus,
    Z[f(t)](z) = z(z + 1)/(z − 1)³ − (1/2)(z/(z − e^ω) + z/(z − e^{−ω}))
               = z(z + 1)/(z − 1)³ − (1/2)·z(2z − e^ω − e^{−ω})/((z − e^ω)(z − e^{−ω}))
               = z(z + 1)/(z − 1)³ − z(z − cosh ω)/(z² − 2z cosh ω + 1);
b) One obtains
c) One gets
    Z[f(t)](z) = z/(z − a)² − 4·z(z² − 1) sin(π/6)/(z² − 2z cos(π/6) + 1)²
               = z/(z − a)² − 2z(z² − 1)/(z² − z√3 + 1)²;
d) One obtains
    Z[f(t)](z) = Z[t(t + 1)(2t + 1)/6](z) = (1/6)·(2Z[t³](z) + 3Z[t²](z) + Z[t](z)).
Since (see Example 4.2.15)
    Z[t](z) = −Z[−t·u(t)] = −z(z/(z − 1))′ = z/(z − 1)²,
    Z[t²](z) = −Z[−t·t](z) = −z(z/(z − 1)²)′ = z(z + 1)/(z − 1)³,
    Z[t³](z) = −Z[−t·t²](z) = −z(z(z + 1)/(z − 1)³)′ = z(z² + 4z + 1)/(z − 1)⁴,
it follows that
    Z[f(t)](z) = (1/3)·z(z² + 4z + 1)/(z − 1)⁴ + (1/2)·z(z + 1)/(z − 1)³ + (1/6)·z/(z − 1)²;
e) One gets
    Z[f(t)](z) = Σ_{t=0}^∞ f(t) z^{−t}
               = f(0) + f(1)/z + · · · + f(T − 1)/z^{T−1} + f(0)/z^T + · · · + f(T − 1)/z^{2T−1} + · · · + f(0)/z^{kT} + · · · + f(T − 1)/z^{kT+T−1} + · · ·
               = (f(0) + f(1)/z + · · · + f(T − 1)/z^{T−1})(1 + 1/z^T + · · · + 1/z^{kT} + · · ·)
               = (f(0) + f(1)/z + · · · + f(T − 1)/z^{T−1})·z^T/(z^T − 1).
a)
1. G*(z) = z^t/(z² − 1), t ∈ N, z ∈ C;
2. z² − 1 = 0 ⇒ z = ±1 are isolated singular points, both poles of order 1;
3. One obtains res(G*(z), 1) = lim_{z→1} z^t/(z + 1) = 1/2 and res(G*(z), −1) = lim_{z→−1} z^t/(z − 1) = (−1)^t/(−2);
4. f(t) = 1/2 − (−1)^t/2, t ∈ N;
b)
1. G*(z) = z^t/(z − 1)³, t ∈ N, z ∈ C;
4. f(t) = t(t − 1)/2, t ∈ N;
c)
1. G*(z) = (z − 1)z^{t−1}/(z² + 4), t ∈ N, z ∈ C;
When we try to find the isolated singular points, since t = 0, 1, ..., we identify two cases, namely t = 0 and t = 1, 2, ....
2. (a) For t = 0, G*(z) = (z − 1)/(z(z² + 4)). Thus, 0, ±2i are the isolated singular points of G*(z), all of them being poles of order 1;
3. (a) One obtains res(G*(z), 0) = lim_{z→0} (z − 1)/(z² + 4) = −1/4, res(G*(z), 2i) = lim_{z→2i} (z − 1)/(z(z + 2i)) = (2i − 1)/(−8) and res(G*(z), −2i) = (−2i − 1)/(−8);
4. (a) f(0) = −1/4 + (2i − 1)/(−8) + (−2i − 1)/(−8) = 0;
2. (b) For t ≥ 1, G*(z) = (z − 1)z^{t−1}/(z² + 4). Thus, ±2i are the isolated singular points of G*(z), both poles of order 1;
3. (b) One obtains res(G*(z), 2i) = lim_{z→2i} (z − 1)z^{t−1}/(z + 2i) = (2i − 1)(2i)^{t−1}/(4i) = (2i − 1)(2i)^{t−2}/2 and res(G*(z), −2i) = (−2i − 1)(−2i)^{t−2}/2;
4. (b) f(t) = (2i − 1)(2i)^{t−2}/2 + (−2i − 1)(−2i)^{t−2}/2. Since i^{4n} = 1, n ∈ Z, we have 4 cases:
• t = 4k + 1. We get that
    f(t) = (2i − 1)(2i)^{4k−1}/2 + (−2i − 1)(−2i)^{4k−1}/2 = (2i − 1)·2^{4k}/(4i) − (−2i − 1)·2^{4k}/(4i) = 2^{4k}(2i − 1 + 2i + 1)/(4i) = 2^{4k};
• t = 4k + 2. We get that
    f(t) = (2i − 1)(2i)^{4k}/2 + (−2i − 1)(−2i)^{4k}/2 = −2^{4k};
• t = 4k + 3. We get that
    f(t) = (2i − 1)(2i)^{4k+1}/2 + (−2i − 1)(−2i)^{4k+1}/2 = −2^{4k+2};
• t = 4k + 4. We get that
    f(t) = (2i − 1)(2i)^{4k+2}/2 + (−2i − 1)(−2i)^{4k+2}/2 = 2^{4k+2}.
From all cases above (for k ∈ N), it follows that
    f(t) = { 0, t = 0;  2^{4k}, t = 4k + 1;  −2^{4k}, t = 4k + 2;  −2^{4k+2}, t = 4k + 3;  2^{4k+2}, t = 4k + 4 }.
W 25. Determine the original function of the following Z transforms:
a) F*(z) = z/((z − 1)²(z − e));
b) F*(z) = z²/(z⁴ − 1);
c) F*(z) = z(z + 1)/(z − 1)⁴;
d) F*(z) = (z + 5)/(z² + 3z + 2);
e) F*(z) = z²/(z² + 1)²;
f) F*(z) = e^{a/z}, a ∈ R*.
Answer. a) We get G*(z) = z^t/((z − 1)²(z − e)) and
    f(t) = res(G*(z), 1) + res(G*(z), e) = (e^t − t(e − 1) − 1)/(e − 1)²,  t ∈ N;
b) We get G*(z) = z^{t+1}/(z⁴ − 1) and
    f(t) = res(G*(z), 1) + res(G*(z), −1) + res(G*(z), i) + res(G*(z), −i)
         = (1/4)(1 − (−1)^{t+1} − i^t − (−i)^t) = { 0, t = 4k;  0, t = 4k + 1;  1, t = 4k + 2;  0, t = 4k + 3 },  k ∈ N;
c) f(t) = (1/6)·t(t − 1)(2t − 1), t ∈ N*;
d) For t = 0 we obtain G*(z) = (z + 5)/(z(z + 1)(z + 2)). Thus,
    f(0) = res(G*(z), 0) + res(G*(z), −1) + res(G*(z), −2) = 0.
For t ≥ 1 we obtain G*(z) = z^{t−1}(z + 5)/((z + 1)(z + 2)). Thus,
    f(t) = res(G*(z), −1) + res(G*(z), −2) = 4·(−1)^{t−1} − 3·(−2)^{t−1}.
In conclusion, f(t) = { 0, t = 0;  4·(−1)^{t−1} − 3·(−2)^{t−1}, t ∈ N* };
e) f(t) = −(t/4)(i^t + (−i)^t) = { −2k, t = 4k;  0, t = 4k + 1;  2k + 1, t = 4k + 2;  0, t = 4k + 3 },  k ∈ N;
2. Apply the Z transform to the above equation, use Theorem 4.2.9 (Sec-
ond Time Delay) and determine Z[y(t)](z);
3. Find the original y(t), t = n, n + 1, n + 2, . . ..
a)
1. Since ∆y(t) = y(t + 1) − y(t) and ∆2 y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes
3(y(t + 2) − 2y(t + 1) + y(t)) − 4(y(t + 1) − y(t)) − 4y(t) = 0.
This is equivalent to 3y(t + 2) − 10y(t + 1) + 3y(t) = 0;
2. Now we apply the Z transform and we obtain
    res(G*(z), 3) = lim_{z→3} 6z^t/(3(z − 1/3)) = (3/4)·3^t
and
    res(G*(z), 1/3) = lim_{z→1/3} 6z^t/(3(z − 3)) = −(3/4)·(1/3)^t;
3.4 We conclude that y(t) = Σ_{j=1}^{2} res(G*(z), z_j), t = 2, 3, ..., so
    y(t) = (3/4)·3^t − (3/4)·(1/3)^t = (3/4)(3^t − (1/3)^t),  t = 2, 3, ...;
b)
c)
    z³(Z[y(t)](z) − y(0) − y(1)/z − y(2)/z²) − 2z²(Z[y(t)](z) − y(0) − y(1)/z) + 2z(Z[y(t)](z) − y(0)) − 4Z[y(t)](z) = 0.
Employing the initial conditions, one obtains
    (z³ − 2z² + 2z − 4)Z[y(t)](z) = z² − 2z ⇔
    Z[y(t)](z) = z(z − 2)/(z³ − 2z² + 2z − 4) = z(z − 2)/(z²(z − 2) + 2(z − 2)) = z/(z² + 2);
3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:
3.1 Take G*(z) = z^t/(z² + 2), t = 3, 4, ..., z ∈ C;
3.2 Find the isolated singular points of G*(z). The roots of the equation z² + 2 = 0 are z₁ = i√2 and z₂ = −i√2, both poles of order 1;
3.3 We compute the residues of G*(z) at z₁ and z₂. We obtain
    res(G*(z), i√2) = lim_{z→i√2} z^t/(z + i√2) = (i√2)^{t−1}/2
and
    res(G*(z), −i√2) = lim_{z→−i√2} z^t/(z − i√2) = (−i√2)^{t−1}/2.
3.4 We conclude that y(t) = (i√2)^{t−1}/2 + (−i√2)^{t−1}/2, t = 3, 4, ..., so
    y(t) = { 2^{2k}, t = 4k + 1;  0, t = 4k + 2;  −2^{2k+1}, t = 4k + 3;  0, t = 4k + 4 },  k ∈ N.
W 26. Find the solution y(t) of the following homogeneous difference equations:
a) ∆²y(t) + ∆y(t) − 2y(t) = 0, y(0) = 1, y(1) = 2;
b) ∆²y(t) + 2∆y(t) + 2y(t) = 0, y(0) = 3, y(1) = 3;
c) ∆³y(t) + y(t) = 0, y(0) = 0, y(1) = √3, y(2) = √3.
Answer. a) y(t) = 2^t;
b) y(t) = 3i^{t−1}(i + 1)/2 + 3(−i)^{t−1}(−i + 1)/2 = { 3, t = 4k + 1;  −3, t = 4k + 2;  −3, t = 4k + 3;  3, t = 4k + 4 },  k ∈ N;
c) One gets Z[y(t)](z) = √3(z − 2)/(z² − 3z + 3) and G*(z) = √3 z^{t−1}(z − 2)/(z² − 3z + 3). Finally, one obtains
    y(t) = ((√3 + i)/2)·((3 + i√3)/2)^{t−1} + ((√3 − i)/2)·((3 − i√3)/2)^{t−1}.
Since z₁ = (3 + i√3)/2 is a root of z² − 3z + 3 = 0, it follows that
• z₁² = 3z₁ − 3;
• z₁³ = 3z₁² − 3z₁ = 3(3z₁ − 3) − 3z₁ = 6z₁ − 9;
• z₁⁴ = 6z₁² − 9z₁ = 6(3z₁ − 3) − 9z₁ = 9z₁ − 18;
• z₁⁵ = 9z₁² − 18z₁ = 9(3z₁ − 3) − 18z₁ = 9z₁ − 27;
• z₁⁶ = 9z₁² − 27z₁ = 9(3z₁ − 3) − 27z₁ = −27.
Therefore, we have to divide the problem into 6 cases by considering t = 6k + p, k ∈ N, p ∈ {1, 2, ..., 6}. One gets
    y(t) = { √3·(−1)^k·27^k, t = 6k + 1;
             √3·(−1)^k·27^k, t = 6k + 2;
             0, t = 6k + 3;
             3√3·(−1)^{k+1}·27^k, t = 6k + 4;
             9√3·(−1)^{k+1}·27^k, t = 6k + 5;
             18√3·(−1)^{k+1}·27^k, t = 6k + 6 }.
E 28. Find the solution y(t) of the following nonhomogeneous difference equations:
a) ∆²y(t) − ∆y(t) = t·4^t, y(0) = 0, y(1) = 4;
b) ∆²y(t) − 5∆y(t) + 6y(t) = sin(3πt/2), y(0) = 1, y(1) = 3;
c) ∆³y(t) + 3∆²y(t) = (−2)^t, y(0) = 1, y(1) = 0, y(2) = 0.
Solution. We are going to find the solution using the Z transform follow-
ing the same algorithm as the one presented in E. 27.
a)
It follows from Theorem 4.2.9 (Second Time Delay) and from the initial conditions that
    z²(Z[y(t)](z) − 4/z) − 3z·Z[y(t)](z) + 2Z[y(t)](z) = (−z)·(z/(z − 4))′ ⇔
    z²·Z[y(t)](z) − 4z − 3z·Z[y(t)](z) + 2Z[y(t)](z) = 4z/(z − 4)² ⇔
    (z² − 3z + 2)Z[y(t)](z) − 4z = 4z/(z − 4)²,
so Z[y(t)](z) = 4z((z − 4)² + 1)/((z − 4)²(z² − 3z + 2));
3. Now we know the Z transform of y(t). Let us follow the steps of the algorithm presented and used in E. 26:
3.1 Take G*(z) = 4z^t((z − 4)² + 1)/((z − 4)²(z² − 3z + 2)), t = 2, 3, ..., z ∈ C;
3.2 Find the isolated singular points of G*(z). The roots of the equation (z − 4)²(z² − 3z + 2) = 0 are z_{1,2} = 4, z₃ = 1 and z₄ = 2, where z₁ is a pole of order 2 and z₃, z₄ are both poles of order 1;
3.3 We compute the residues of G*(z) at z₁, z₃ and z₄. We obtain
    res(G*(z), 4) = lim_{z→4} (4z^t((z − 4)² + 1)/(z² − 3z + 2))′ = 4·(t·4^{t−1}(16 − 12 + 2) − 4^t(8 − 3))/36 = ((6t − 20)/9)·4^{t−1},
    res(G*(z), 1) = lim_{z→1} 4z^t((z − 4)² + 1)/((z − 4)²(z − 2)) = −40/9
and
    res(G*(z), 2) = lim_{z→2} 4z^t((z − 4)² + 1)/((z − 4)²(z − 1)) = 5·2^t;
3.4 We conclude that y(t) = ((6t − 20)/9)·4^{t−1} − 40/9 + 5·2^t, t = 2, 3, ...;
b)
1. Since ∆y(t) = y(t + 1) − y(t) and ∆2 y(t) = y(t + 2) − 2y(t + 1) + y(t),
the equation becomes
    (y(t + 2) − 2y(t + 1) + y(t)) − 5(y(t + 1) − y(t)) + 6y(t) = sin(3πt/2).
This is now equivalent to y(t + 2) − 7y(t + 1) + 12y(t) = sin(3πt/2);
2. Now we apply the Z transform and we obtain
    Z[y(t + 2) − 7y(t + 1) + 12y(t)](z) = Z[sin(3πt/2)](z).
Our goal is to determine Z[y(t)](z). Using Theorem 4.2.1 (Linearity) and the sine expression from Euler's formula, we get
    Z[y(t + 2)](z) − 7Z[y(t + 1)](z) + 12Z[y(t)](z) = Z[(e^{3πit/2} − e^{−3πit/2})/(2i)](z).
It follows from Theorem 4.2.9 (Second Time Delay) and from the initial conditions that the left-hand side of the above equation is
    z²(Z[y(t)](z) − 1 − 3/z) − 7z(Z[y(t)](z) − 1) + 12Z[y(t)](z),
which is equivalent to
    (z² − 7z + 12)Z[y(t)](z) − z² + 4z.
3. Now we know the Z transform of y(t). Let us follow the steps of the
algorithm presented and used in E. 26:
3.1 Take G*(z) = z^t(z³ − 4z² + z − 5)/((z² + 1)(z² − 7z + 12)), t = 2, 3, ..., z ∈ C;
3.2 Find the isolated singular points of G*(z). The roots of the equation (z² + 1)(z² − 7z + 12) = 0 are z_{1,2} = ±i, z₃ = 3 and z₄ = 4, all being poles of order 1;
3.3 We compute the residues of G*(z) at z_n, n ∈ {1, 2, 3, 4}. We obtain
    res(G*(z), i) = lim_{z→i} z^t(z³ − 4z² + z − 5)/((z + i)(z² − 7z + 12)) = i^t(−i + 4 + i − 5)/(2i(−1 − 7i + 12)) = −i^t(11 + 7i)/(2i(121 + 49)) = −i^{t−1}(11 + 7i)/340,
    res(G*(z), −i) = −(−i)^{t−1}(11 − 7i)/340,
    res(G*(z), 3) = lim_{z→3} z^t(z³ − 4z² + z − 5)/((z² + 1)(z − 4)) = (11/10)·3^t
and
    res(G*(z), 4) = lim_{z→4} z^t(z³ − 4z² + z − 5)/((z² + 1)(z − 3)) = −(1/17)·4^t.
3.4 We conclude that
    y(t) = −i^{t−1}(11 + 7i)/340 − (−i)^{t−1}(11 − 7i)/340 + (11/10)·3^t − (1/17)·4^t,  t = 2, 3, ...,
so
    y(t) = { 14/340 + (11/10)·3^{4k+2} − (1/17)·4^{4k+2}, t = 4k + 2;
             22/340 + (11/10)·3^{4k+3} − (1/17)·4^{4k+3}, t = 4k + 3;
             −14/340 + (11/10)·3^{4k+4} − (1/17)·4^{4k+4}, t = 4k + 4;
             −22/340 + (11/10)·3^{4k+5} − (1/17)·4^{4k+5}, t = 4k + 5 },  k ∈ N;
c)
1. Since ∆y(t) = y(t + 1) − y(t), ∆²y(t) = y(t + 2) − 2y(t + 1) + y(t) and
    ∆³y(t) = Σ_{k=0}^{3} (−1)^k \binom{3}{k} y(t + 3 − k) = y(t + 3) − 3y(t + 2) + 3y(t + 1) − y(t),
the equation becomes y(t + 3) − 3y(t + 1) + 2y(t) = (−2)^t;
2. Using the same idea as before, we obtain that
3.1 Take G*(z) = z^t(z³ + 2z² − 3z − 5)/((z + 2)(z³ − 3z + 2)), t = 3, 4, ..., z ∈ C;
3.2 Find the isolated singular points of G*(z). The equation (z + 2)(z³ − 3z + 2) = 0 can be rewritten as (z + 2)²(z − 1)² = 0, so its roots are z₁ = −2 and z₂ = 1, both poles of order 2;
3.3 We compute the residues of G*(z) at z₁ and z₂. We obtain
    res(G*(z), −2) = lim_{z→−2} ((z^{t+3} + 2z^{t+2} − 3z^{t+1} − 5z^t)/(z − 1)²)′ = ((3t − 10)/27)·(−2)^{t−1}
and
    res(G*(z), 1) = lim_{z→1} ((z^{t+3} + 2z^{t+2} − 3z^{t+1} − 5z^t)/(z + 2)²)′ = (−15t + 22)/27;
3.4 We conclude that y(t) = ((3t − 10)/27)·(−2)^{t−1} + (−15t + 22)/27, t = 3, 4, ....
W 27. Find the solution y(t) of the following nonhomogeneous difference equations:
a) ∆²y(t) + 2∆y(t) − 3y(t) = 5^t, y(0) = −1, y(1) = 5;
b) 2∆²y(t) + 4∆y(t) + y(t) = t, y(0) = 1, y(1) = 0;
c) ∆³y(t) + ∆²y(t) + ∆y(t) + y(t) = 2^t − 3^t, y(0) = 0, y(1) = 0, y(2) = 0.
Answer. a) We get the equation y(t + 2) − 4y(t) = 5^t. Applying the Z transform, we get Z[y(t)](z) = z(1 − (z − 5)²)/((z − 5)(z² − 4)). The solution is
    y(t) = 5^t/21 + 2^{t+1}/3 + 6(−2)^{t+1}/7,  t = 2, 3, ...;
b) We get the equation 2y(t + 2) − y(t) = t. Applying the Z transform, we get Z[y(t)](z) = z(2z³ − 4z² + 2z + 1)/((z − 1)²(2z² − 1)). The solution is
    y(t) = t − 4 + ((5 + 3√2)/2)·(1/√2)^t + ((5 − 3√2)/2)·(−1/√2)^t,  t = 2, 3, ...;
c) We get the equation y(t + 3) − 2y(t + 2) + 2y(t + 1) = 2^t − 3^t. Applying the Z transform, we get Z[y(t)](z) = −1/((z − 2)(z − 3)(z² − 2z + 2)). The solution is
    y(t) = 2^{t−2} − 3^{t−1}/5 − (1 + i)^{t−1}·(3 − i)/20 − (1 − i)^{t−1}·(3 + i)/20,  t = 3, 4, ....
Since (1 + i)⁴ = (2i)² = −4 and (1 − i)⁴ = (−2i)² = −4, it follows that
    y(t) = { 2^{4k−1} − 3^{4k}/5 − 3(−4)^k/10, t = 4k + 1;
             2^{4k} − 3^{4k+1}/5 − 2(−4)^k/5, t = 4k + 2;
             2^{4k+1} − 3^{4k+2}/5 − (−4)^k/5, t = 4k + 3;
             2^{4k+2} − 3^{4k+3}/5 + 2(−4)^k/5, t = 4k + 4 },  k ∈ N.
b) y_{n+2} = y_{n+1} + y_n, y₀ = 0, y₁ = 1, n ∈ N (the well-known Fibonacci sequence);
c) yn+2 = 3yn+1 − 2yn + n, y0 = 1, y1 = 1, n ∈ N.
It follows from Theorem 4.2.9 (Second Time Delay) and from the initial conditions that
    z·(Z[y(t)](z) − y(0)) = 2Z[y(t)](z) + 3·z/(z − 2) ⇔
    z·Z[y(t)](z) − z = 2Z[y(t)](z) + 3z/(z − 2),
so Z[y(t)](z) = z(z + 1)/(z − 2)². Now we know the Z transform of y(t). Let us follow the steps of the algorithm presented and used in E. 26:
1. G*(z) = z^t(z + 1)/(z − 2)², t ∈ N, z ∈ C;
It follows from Theorem 4.2.9 (Second Time Delay) and from the initial conditions that
    z²(Z[y(t)](z) − y(0) − y(1)/z) = z·(Z[y(t)](z) − y(0)) + Z[y(t)](z) ⇔
    z²(Z[y(t)](z) − 0 − 1/z) = z·(Z[y(t)](z) − 0) + Z[y(t)](z),
so Z[y(t)](z) = z/(z² − z − 1). Now we know the Z transform of y(t). Let us follow the steps of the algorithm presented and used in E. 26:
1. G*(z) = z^t/(z² − z − 1), t ∈ N, z ∈ C;
2. z² − z − 1 = 0 ⇒ z₁ = (1 + √5)/2 and z₂ = (1 − √5)/2 are the isolated singular points, both first order poles of G*(z);
3. One gets
    res(G*(z), z₁) = z^t/(2z − 1) |_{z=z₁} = (1/√5)·((1 + √5)/2)^t
and
    res(G*(z), z₂) = z^t/(2z − 1) |_{z=z₂} = −(1/√5)·((1 − √5)/2)^t.
4. y(t) = (1/√5)[((1 + √5)/2)^t − ((1 − √5)/2)^t], t ∈ N; as a consequence,
    y_n = (1/√5)[((1 + √5)/2)^n − ((1 − √5)/2)^n],  n ∈ N;
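This is Binet's formula for the Fibonacci numbers; a quick Python check (not from the book) compares it with the recurrence itself.

```python
from math import sqrt

def binet(n):
    # y_n = (1/sqrt(5)) * (phi^n - psi^n)
    phi = (1 + sqrt(5)) / 2
    psi = (1 - sqrt(5)) / 2
    return (phi**n - psi**n) / sqrt(5)

fib = [0, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])
for n in range(30):
    assert round(binet(n)) == fib[n]
```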
It follows from Theorem 4.2.9 (Second Time Delay) and from the initial conditions that
    z²(Z[y(t)](z) − 1 − 1/z) = 3z(Z[y(t)](z) − 1) − 2Z[y(t)](z) + z/(z − 1)² ⇒
    (z² − 3z + 2)Z[y(t)](z) = z² + z − 3z + z/(z − 1)² ⇔
    Z[y(t)](z) = z(z − 2)/(z² − 3z + 2) + z/((z − 1)²(z² − 3z + 2)),
so Z[y(t)](z) = z/(z − 1) + z/((z − 1)³(z − 2)). We know that the original of z/(z − 1) is 1 (or the function u(t)). For the function z/((z − 1)³(z − 2)) we will follow the steps of the algorithm presented and used in E. 26:
1. G*(z) = z^t/((z − 1)³(z − 2)), t = 0, 1, ..., z ∈ C;
2. (z − 1)³(z − 2) = 0 ⇒ z₁ = 1 and z₂ = 2 are the isolated singular points, the first one a pole of order 3 and the second one a pole of order 1;
3. One gets
    res(G*(z), 1) = (1/2!)·lim_{z→1} (z^t/(z − 2))″ = (1/2!)·lim_{z→1} (((t − 1)z^t − 2tz^{t−1})/(z − 2)²)′ = −(t² + t + 2)/2
and
    res(G*(z), 2) = lim_{z→2} z^t/(z − 1)³ = 2^t;
4. The original of the function z/((z − 1)³(z − 2)) is 2^t − (t² + t + 2)/2, t ∈ N.
It follows that y(t) = 1 + 2^t − (t² + t + 2)/2, t ∈ N. Hence,
    y_n = 1 + 2^n − (n² + n + 2)/2,  n ∈ N.
W 28. Find the general term y_n of the following recurrent sequences:
a) y_{n+1} = y_n + n·3^n, y₀ = 3, n ∈ N;
b) y_{n+2} + y_n = 3^n, y₀ = 0, y₁ = 3, n ∈ N;
c) y_{n+3} + 3y_{n+2} + 3y_{n+1} + y_n = 0, y₀ = 0, y₁ = 0, y₂ = 1, n ∈ N.
Answer. a) We get that Z[y(t)](z) = 3z/(z − 1) + 3z/((z − 1)(z − 3)²) and
    y_n = 3 + 3/4 + 3^n(2n − 3)/4 = (15 + 3^n(2n − 3))/4;
b) We get Z[y(t)](z) = z(3z − 8)/((z − 3)(z² + 1)) and
    y(t) = 3^t/10 + i^t(−27i − 1)/20 + (−i)^t(27i − 1)/20.
This implies that
    y_n = { 3^{4k}/10 − 1/10, n = 4k;
            3^{4k+1}/10 + 27/10, n = 4k + 1;
            3^{4k+2}/10 + 1/10, n = 4k + 2;
            3^{4k+3}/10 − 27/10, n = 4k + 3 },  k ∈ N;
c) We get Z[y(t)](z) = z/(z + 1)³ and y_n = (−1)^{n−2}·n(n − 1)/2.
where

A = [[4, −1, −2], [2, 1, −2], [1, −1, 1]],  B = [[1, −1], [−1, 2], [0, 3]],
C = [[1, 1, 1], [2, 0, −1]]  and  D = [[12, −30], [−6, 13]].

Also, x(t) = (x1(t), x2(t), x3(t))^T, u(t) = (u1(t), u2(t))^T and y(t) = (y1(t), y2(t))^T.
By a straightforward computation, det(sI − A) = s^3 − 6s^2 + 11s − 6 and

(sI − A)* = [[s^2 − 2s − 1, −s + 3, −2s + 4], [2s − 4, s^2 − 5s + 6, −2s + 4], [s − 3, −s + 3, s^2 − 5s + 6]].

One obtains the following transfer matrix:

T(s) = C·(sI − A)^(−1)·B + D = D + (1/(s^3 − 6s^2 + 11s − 6))·[[8s − 20, 4s^2 − 42s + 74], [2s^2 − 4s − 2, −5s^2 + 6s + 11]].
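A small pure-Python spot check of the characteristic polynomial det(sI − A) = s^3 − 6s^2 + 11s − 6 for the matrix A above (the helper det3 is ours, not from the book):

```python
# Verify det(sI - A) = s^3 - 6s^2 + 11s - 6 at several sample points for
# the state matrix A of the example above.

A = [[4, -1, -2],
     [2,  1, -2],
     [1, -1,  1]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

for s in range(-3, 6):
    sI_A = [[s * (r == c) - A[r][c] for c in range(3)] for r in range(3)]
    assert det3(sI_A) == s**3 - 6 * s**2 + 11 * s - 6
```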
1. F = ztrans(f ). This computes the Z transform F of the symbolic
expression f . By default the variable of f is n (which must be declared)
and the variable of the computed transform F is z. Any other variable
can be used instead of n, if it is declared, with one exception: if z is
used as time variable, then ztrans returns F as a function of w, namely
F (w);
Example 4.6.1.
>> syms m;
f = exp(2*m) − sin(3*m);
F = ztrans(f );
The answer is F = z/(z − exp(2)) − (z* sin(3))/(zˆ2 − 2* cos(3)*z + 1)
Example 4.6.2.
>> syms z v;
f = exp(2*z) − sin(3*z);
F = ztrans(f, v);
The answer is F = v/(v − exp(2)) − (v* sin(3))/(vˆ2 − 2* cos(3)*v + 1)
Example 4.6.3.
>> syms m v;
f = exp(2*m) − sin(3*m);
F = ztrans(f, m, v);
The answer is F = v/(v − exp(2)) − (v* sin(3))/(vˆ2 − 2* cos(3)*v + 1)
Power Functions
Example 4.6.4.
>> syms n; % discrete ramp function
F = ztrans(n);
The answer is F = z/(z − 1)ˆ2.
Example 4.6.5.
>> syms n;
F = ztrans(nˆ2);
The answer is F = (z*(z + 1))/(z − 1)ˆ3.
Example 4.6.6.
>> syms n;
F = ztrans(nˆ7);
The answer is F = (z*(zˆ6+120*zˆ5+1191*zˆ4+2416*zˆ3+1191*zˆ2+
120*z + 1))/(z − 1)ˆ8.
Example 4.6.7.
>> syms n;
f = nˆ3;
F = ztrans(f );
The answer is F = (z*(zˆ2 + 4*z + 1))/(z − 1)ˆ4.
Example 4.6.8.
>> syms n;
F = ztrans(7ˆn);
The answer is F = z/(z − 7).
Example 4.6.9.
>> syms n;
F = ztrans((1 + 2i)ˆn);
The answer is F = z/(z − 1 − 2i).
Example 4.6.10.
>> syms n a;
F = ztrans(aˆn);
The answer is F = −z/(a − z).
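The pairs above can be cross-checked outside MATLAB by summing the defining series sum_{n>=0} f(n)·z^(−n) for |z| large enough; a short Python sketch:

```python
# Partial-sum cross-check of the power-function pairs listed above: for |z|
# large enough, the defining series converges to the closed form.

def zsum(f, z, N=200):
    return sum(f(n) * z ** (-n) for n in range(N))

z = 9.0
assert abs(zsum(lambda n: n, z) - z / (z - 1) ** 2) < 1e-9             # Z[n]
assert abs(zsum(lambda n: n**2, z) - z * (z + 1) / (z - 1) ** 3) < 1e-9  # Z[n^2]
assert abs(zsum(lambda n: n**3, z)
           - z * (z**2 + 4 * z + 1) / (z - 1) ** 4) < 1e-9             # Z[n^3]
assert abs(zsum(lambda n: 7.0**n, z) - z / (z - 7)) < 1e-9             # Z[7^n], |z| > 7
```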
Exponentials
Example 4.6.11.
>> syms n a;
f = exp(a*n);
F = ztrans(f );
The answer is F = z/(z − exp(a)).
Example 4.6.12.
>> syms n a;
f = exp(i*a*n);
F = ztrans(f );
MATLAB rewrites f = exp(a*n*1i).
The answer is F = z/(z − exp(a*1i)).
Example 4.6.13.
>> syms a b t;
F = ztrans(exp(a*t) − exp(b*t));
The answer is F = z/(z − exp(a)) − z/(z − exp(b)).
Example 4.6.15.
>> syms n;
F = ztrans(heaviside(n + 5));
The answer is F = zˆ3*(1/(z − 1) + 1/2) − z − zˆ2 − zˆ3/2, which simplifies to z/(z − 1).
Example 4.6.16.
>> syms t a;
f = heaviside(t) − 2*heaviside(t − 4) + heaviside(t − 2*4);
F = ztrans(f );
The answer is F = 1/2 + 1/z + 1/zˆ2 + 1/zˆ3 − 1/zˆ5 − 1/zˆ6 − 1/zˆ7 − 1/(2*zˆ8) (note that heaviside(0) = 1/2 in MATLAB, so the samples at t = 0, 4 and 8 are 1/2, 0 and −1/2, respectively).
Example 4.6.17.
>> syms t;
f = t*(heaviside(t));
F = ztrans(f );
The answer is F = z/(z − 1)ˆ2.
Example 4.6.18.
>> syms t;
f = tˆ3*(heaviside(t));
F = ztrans(f );
The answer is F = (z*(zˆ2 + 4*z + 1))/(z − 1)ˆ4.
Example 4.6.19.
>> syms n;
F = ztrans(nˆ3* exp(5*n));
The answer is F = (z* exp(5)*(zˆ2 + 4* exp(5)*z + exp(10)))/(z − exp(5))ˆ4.
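As a numerical cross-check of this pair (Python stand-in; the series terms are formed as n^3·(e^5/z)^n to avoid floating-point overflow):

```python
import math

# Check Z[n^3*exp(5n)](z) = z*e^5*(z^2 + 4*e^5*z + e^10)/(z - e^5)^4
# by partial sums of the defining series, valid for |z| > e^5.

a, z = math.exp(5), 300.0
series = sum(n**3 * (a / z) ** n for n in range(200))   # n^3 e^{5n} z^{-n}
closed = z * a * (z**2 + 4 * a * z + a**2) / (z - a) ** 4
assert abs(series - closed) < 1e-9
```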
f = sinh(a*n);
F = ztrans(f );
The answer is F = (z* sinh(a))/(zˆ2 − 2* cosh(a)*z + 1).
Example 4.6.25.
>> syms n a;
f = cosh(a*n);
F = ztrans(f );
The answer is F = (z*(z − cosh(a)))/(zˆ2 − 2* cosh(a)*z + 1).
Matrices
We can also determine the Z transform of matrices. One uses matrices of
the same size to specify the transformation variables and evaluation points.
Example 4.6.26.
>> syms z;
f = iztrans(z/(z − 1));
The answer is f = 1.
% f is stored as Heaviside’s discrete step function
Now we do it the other way around.
>> F = ztrans(f );
The answer is F = z/(z − 1).
Example 4.6.27.
>> syms z;
F = z/(z + 1);
f = iztrans(F );
The answer is f = (−1)ˆn.
Example 4.6.28.
>> syms z;
F = z/(z − 1)ˆ2;
f = iztrans(F );
The answer is f = n.
Example 4.6.30.
>> syms z a;
F = z/(z − a)ˆ2;
f = iztrans(F );
The answer is f = piecewise(a == 0, kroneckerDelta(n − 1, 0),
a ∼= 0, a*(kroneckerDelta(n, 0)/aˆ2 + (aˆn*(n − 1))/aˆ2) +
aˆn/a − kroneckerDelta(n, 0)/a).
% kroneckerDelta(i, j) is δ_i^j = 1 if i = j and 0 if i ≠ j; hence, the result for a = 0
is f = kroneckerDelta(n−1, 0) = δ_(n−1)^0 = 1 if n = 1 and 0 if n ≠ 1. Indeed, the Z
transform of this function is F = 1/z = z/z^2. For a ≠ 0 the result is f =
n·a^(n−1). See the simplified version below.
Example 4.6.31.
>> syms z a;
F = z/(z − a)ˆ2;
f = simplify(iztrans(F ));
The answer is f = piecewise(a == 0, kroneckerDelta(n − 1, 0), a ∼=
0, aˆ(n − 1)*n).
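The a ≠ 0 branch can be verified by summing the forward series; a short Python check of our own (not MATLAB output):

```python
# For a != 0 the original of z/(z - a)^2 is f(n) = n*a^(n-1): the series
# sum_{n>=0} n*a^(n-1)*z^(-n) must equal z/(z - a)^2 for |z| > |a|.
# Terms are formed as n*(a/z)^n / a to keep the intermediate powers small.

for a in (0.5, 2.0, -3.0):
    z = 4.0 * abs(a) + 5.0
    series = sum(n * (a / z) ** n for n in range(300)) / a
    assert abs(series - z / (z - a) ** 2) < 1e-9
```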
Example 4.6.32.
>> syms z x;
F = (z* sin(x))/(zˆ2 − 2*z* cos(x) + 1);
f = iztrans(F );
The answer is f = sin(n*x).
Example 4.6.33.
>> syms z x;
F = (z*(z − cosh(x)))/(zˆ2 − 2*z* cosh(x) + 1);
f = iztrans(F );
The answer is f = cosh(n*x).
Example 4.6.34.
>> syms t z;
F = exp(3/z);
f = iztrans(F, t);
The answer is f = 3ˆt/factorial(t).
Example 4.6.35.
>> syms x y a;
F = (1 + a/y)ˆ3;
f = iztrans(F, y, x);
The answer is f = kroneckerDelta(x − 3, 0)*aˆ3 + 3*kroneckerDelta(x −
2, 0)*aˆ2 + 3*kroneckerDelta(x − 1, 0)*a + kroneckerDelta(x, 0).
% F = 1 + 3·a/y + 3·a^2/y^2 + a^3/y^3; hence, f(0) = 1, f(1) = 3, f(2) = 3, f(3) = 1.
2. F = ztrans(f(n), n, p). This computes the Z transform F as a func-
tion of the variable p;
Example 4.7.1.
> ztrans(n^2 − 2·exp(n), n, z)
z(z + 1)/(z − 1)^3 − (2z/e)/(z/e − 1)
Example 4.7.2.
> ztrans(n^2 − 2·exp(n), n, p)
p(p + 1)/(p − 1)^3 − (2p/e)/(p/e − 1)
Example 4.7.3.
> ztrans(t^2 − 2·exp(t), t, p)
p(p + 1)/(p − 1)^3 − (2p/e)/(p/e − 1)
Example 4.7.5.
> ztrans (charfcn[0](n), n, z)
1
Example 4.7.6.
> ztrans(charfcn[0, 1](n), n, z)
1 + 1/z
Example 4.7.7.
> ztrans(charfcn[0, 1, 2, 3, 5](n), n, z)
1 + 1/z + 1/z^2 + 1/z^3 + 1/z^5
Example 4.7.8.
> ztrans(charfcn[0, 1](n) − 3·charfcn[3, 7](n) + 5·charfcn[4](n) − 11·charfcn[5](n), n, z)
1 + 1/z − 3/z^3 − 3/z^7 + 5/z^4 − 11/z^5
Example 4.7.9.
> ztrans(Heaviside(n), n, z)
z/(z − 1)
Example 4.7.11.
> ztrans(Heaviside(n − 3), n, z)
z/(z − 1) − 1 − 1/z − 1/z^2
Example 4.7.12.
> ztrans(Heaviside(t) − 2·Heaviside(t − 4) + Heaviside(t − 8), t, z)
1 + 1/z + 1/z^2 + 1/z^3 − 1/z^4 − 1/z^5 − 1/z^6 − 1/z^7
Example 4.7.13.
> ztrans(t·(Heaviside(t) − 2·Heaviside(t − 3) + Heaviside(t − 6)), t, z)
(z^4 + 2z^3 − 3z^2 − 4z − 5)/z^5
Remark 4.7.14. In Maple the product is indicated by an asterisk, namely
*. Hence, one writes 2*Heaviside for 2 · Heaviside.
Example 4.7.20.
> ztrans(exp(a · t), t, z)
(z/e^a)/(z/e^a − 1)
Example 4.7.21.
> assume(n, positive):
ztrans(n · 2n , n, z)
2z/(z − 2)^2
Example 4.7.22.
> assume(n, positive):
ztrans(n3 · 2n , n, z)
2z(z^2 + 4 + 8z)/(z − 2)^4
Example 4.7.23.
> ztrans(exp(I · a · t), t, z)
(z/e^(Ia))/(z/e^(Ia) − 1)
Example 4.7.24.
> ztrans(exp(t − 2), t, z)
e^(−2)·(z/e)/(z/e − 1)
Example 4.7.25.
> ztrans(exp(a · t) − exp(b · t), t, z)
(z/e^a)/(z/e^a − 1) − (z/e^b)/(z/e^b − 1)
Example 4.7.27.
> ztrans(cos(n), n, z)
((−z + cos(1))·z)/(−z^2 + 2z·cos(1) − 1)
Example 4.7.28.
> ztrans(sin(a · n), n, z)
−(z·sin(a))/(−z^2 + 2z·cos(a) − 1)
Example 4.7.29.
> ztrans(cos(a · n), n, z)
((−z + cos(a))·z)/(−z^2 + 2z·cos(a) − 1)
Example 4.7.30.
> ztrans(sinh(a · n), n, z)
(z − z·(e^a)^2)/(−2z^2·e^a + 2z + 2z·(e^a)^2 − 2e^a)
Example 4.7.31.
> ztrans(cosh(a · n), n, z)
(−2z^2·e^a + z + z·(e^a)^2)/(−2z^2·e^a + 2z + 2z·(e^a)^2 − 2e^a)
Some properties of the Z transform can be obtained using Maple.
Time Delay
Example 4.7.32.
> ztrans(f (n − 1), n, z)
ztrans(f(n), n, z)/z
Example 4.7.33.
> ztrans(f (n − 5), n, z)
ztrans(f(n), n, z)/z^5
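The Time Delay rule illustrated by these two Maple calls can also be checked numerically on a finite sample signal (Python sketch; the signal f below is an arbitrary choice):

```python
# Z[f(n - k)](z) = Z[f(n)](z) / z^k, where f(n - k) is taken to be 0 for n < k.

def ztrans_num(seq, z):
    """Finite Z-transform sum of a sequence at the point z."""
    return sum(c * z ** (-n) for n, c in enumerate(seq))

z = 3.0
f = [2.0**n / (n + 1) for n in range(60)]      # arbitrary sample signal
F = ztrans_num(f, z)
for k in (1, 5):
    delayed = [0.0] * k + f                    # the sequence f(n - k)
    assert abs(ztrans_num(delayed, z) - F / z**k) < 1e-12
```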
Example 4.7.34.
> ztrans(f (n + 1), n, z)
Example 4.7.35.
> ztrans(f (n + 3), n, z)
Difference
Example 4.7.38.
> ztrans(f (t + 1) − f (t), t, z)
3. f = invztrans(F(p), p, x). This computes the inverse Z transform
f as a function of the variable x and considers that F is a function of
the variable p.
Example 4.7.39.
> invztrans((z^4 + 11·z^3 + 11·z^2 + z)/(z − 1)^5, z, n)
n^4
Example 4.7.40.
> invztrans((z^4 + 11·z^3 + 11·z^2 + z)/(z − 1)^5, z, t)
t^4
Example 4.7.41.
> invztrans((p^4 + 11·p^3 + 11·p^2 + p)/(p − 1)^5, p, x)
x^4
Example 4.7.42.
> invztrans(z/(z − 1), z, n)
1
In fact, the answer stands in for Heaviside’s discrete step function, namely
h(n) = 1, ∀n ∈ {0, 1, 2, . . .}.
Example 4.7.43.
> invztrans(z/(z − 5), z, n)
5^n
Example 4.7.44.
> invztrans(z/(z − a), z, n)
a^n
Example 4.7.45.
> invztrans(exp(a/z), z, n)
a^n/n!
Example 4.7.46.
> invztrans((z^2 − cos(a)·z)/(z^2 − 2z·cos(a) + 1), z, n)
cos(a·n)
Example 4.7.47.
> invztrans((3·z^2 − 7·z)/(z^2 − 6·z + 5), z, n)
2·5^n + 1
Example 4.7.48.
> invztrans((3·z^2 − 13·z)/(z^2 − 9·z + 20), z, n)
2·5^n + 4^n
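Both partial-fraction results can be confirmed by summing the defining series for |z| > 5 (a Python sketch of our own, not Maple output):

```python
# The stated originals reproduce the rational functions of Examples 4.7.47
# and 4.7.48 through the series sum_{n>=0} f(n) z^(-n), valid for |z| > 5.

def zsum(f, z, N=250):
    return sum(f(n) * z ** (-n) for n in range(N))

z = 11.0
assert abs(zsum(lambda n: 2 * 5.0**n + 1, z)
           - (3 * z**2 - 7 * z) / (z**2 - 6 * z + 5)) < 1e-9
assert abs(zsum(lambda n: 2 * 5.0**n + 4.0**n, z)
           - (3 * z**2 - 13 * z) / (z**2 - 9 * z + 20)) < 1e-9
```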
Example 4.7.49.
> invztrans(2·z/(z − 2)^2, z, n)
2^n·n
Example 4.7.50.
> invztrans(2·z·(z^2 + 8·z + 4)/(z − 2)^4, z, n)
2^n·n^3
Appendix A
No.  f(x)  f^(ω) = ∫_{−∞}^{+∞} f(x)·e^(iωx) dx

9.  e^(−a^2·x^2), a > 0  →  (√π/a)·e^(−ω^2/(4a^2))

10.  e^(−i·a^2·x^2), a > 0  →  (√π/a)·e^(i(ω^2/(4a^2) − π/4))

11.  e^(−a|x|), a > 0  →  2a/(a^2 + ω^2)

12.  e^(−ax)·h(x), a > 0  →  1/(a − iω)

13.  f(x) = sin x/x for x ≠ 0, f(0) = 1  →  π for ω ∈ (−1, 1); 0 for |ω| > 1; π/2 for ω = ±1

14.  (a − ix)^(−ν), a, ν > 0  →  2π·ω^(ν−1)·e^(−aω)/Γ(ν) for ω > 0; 0 for ω < 0
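Entry 11 can be spot-checked by numerical integration (a Python midpoint-rule sketch; the parameter values are arbitrary):

```python
import math

# With the kernel e^{iωx}, the transform of e^{-a|x|} is 2a/(a^2 + ω^2);
# by symmetry the integral equals 2*∫_0^∞ e^{-ax} cos(ωx) dx.

a, w = 1.5, 2.0
h, X = 1e-4, 30.0      # step and truncation point of the half-line integral
integral = 2 * h * sum(math.exp(-a * (h * (k + 0.5))) * math.cos(w * h * (k + 0.5))
                       for k in range(int(X / h)))
assert abs(integral - 2 * a / (a**2 + w**2)) < 1e-6
```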
A.2 Cosine Fourier Transform
No. f (x) Fc (f )(ω)
r
2 π − ω2
13. e−ax , a > 0 e 4a
2a
π
2, ω < a
1 π
14. sin (ax), a > 0 , ω=a
x 4
0, ω > ω
sin x −x 1 2
15. e arctan
x 2 ω2
( π
1 − cos (ax) (a − ω), 0 < ω < a
16. ,a>0 2
x2
0, ω > a
328
A.3 Sine Fourier Transform
No.  f(x)  Fs(f)(ω)

13.  (1/x)·sin(ax), a > 0  →  (1/2)·ln|(ω + a)/(ω − a)|

14.  (sin x/x)·e^(−x)  →  (1/4)·ln(((ω + 1)^2 + 1)/((ω − 1)^2 + 1))

15.  (1 − cos(ax))/x^2, a > 0  →  (ω/2)·ln|(ω^2 − a^2)/ω^2| + (a/2)·ln|(ω + a)/(ω − a)|
A.4 Laplace Transform
No.  f(t)  F(s)

12.  t·cos(ωt), ω ∈ C  →  (s^2 − ω^2)/(s^2 + ω^2)^2

13.  e^(at)·sin(ωt), a, ω ∈ C  →  ω/((s − a)^2 + ω^2)

14.  e^(at)·cos(ωt), a, ω ∈ C  →  (s − a)/((s − a)^2 + ω^2)

15.  δ(t)  →  1

16.  δ^(n)(t), n ∈ N  →  s^n

17.  δ(t − t0), t0 > 0  →  e^(−t0·s)

18.  1/√(πt)  →  1/√s

19.  sin(ωt)/t, ω ∈ C  →  π/2 − arctan(s/ω)

20.  (e^(at) − e^(bt))/t, a, b ∈ C, a ≠ b  →  ln((s − b)/(s − a))

21.  (2/t)·(cos(at) − cos(bt)), a, b ∈ C, a ≠ b  →  ln((s^2 + b^2)/(s^2 + a^2))
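Entry 12 of this table can be verified numerically for real s > 0 (a Python midpoint-rule sketch; the sample values are arbitrary):

```python
import math

# For real s > 0: ∫_0^∞ t cos(ωt) e^{-st} dt = (s^2 - ω^2)/(s^2 + ω^2)^2.

s, w = 2.0, 3.0
h, T = 1e-4, 25.0      # step and truncation point of the half-line integral
integral = h * sum((h * (k + 0.5)) * math.cos(w * h * (k + 0.5))
                   * math.exp(-s * h * (k + 0.5))
                   for k in range(int(T / h)))
assert abs(integral - (s**2 - w**2) / (s**2 + w**2) ** 2) < 1e-6
```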
A.5 Z Transform
No.  f(t)  F*(z)

14.  t^3  →  z(z^2 + 4z + 1)/(z − 1)^4

15.  t·a^t  →  az/(z − a)^2

16.  t·sin(ωt), ω ∈ R  →  z(z^2 − 1)·sin ω/(z^2 − 2z·cos ω + 1)^2

17.  t·cos(ωt), ω ∈ R  →  z((z^2 + 1)·cos ω − 2z)/(z^2 − 2z·cos ω + 1)^2

18.  1/t, t ∈ {1, 2, . . .}  →  ln(z/(z − 1))

19.  (−1)^(t−1)/t, t ∈ {1, 2, . . .}  →  ln(1 + 1/z)

20.  a^(t−1)/t, t ∈ {1, 2, . . .}, a ∈ C*  →  (1/a)·ln(z/(z − a))

21.  sin(ωt)/t, t ∈ {1, 2, . . .}, ω ∈ C  →  arctan(sin ω/(z − cos ω))

22.  a^t/t!, a ∈ C*  →  e^(a/z)

Table A.5: Z Transforms
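Entry 22 is immediate from the exponential series; a one-line numerical check (Python):

```python
import math

# sum_{t>=0} (a^t/t!) z^(-t) = sum_{t>=0} (a/z)^t / t! = e^(a/z).

a, z = 2.5, 4.0
series = sum(a**t / math.factorial(t) * z ** (-t) for t in range(60))
assert abs(series - math.exp(a / z)) < 1e-12
```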
Bibliography
[3] Balan, V., Pîrvan, M.: Matematici avansate pentru ingineri - Probleme date la Concursul Ştiinţific Studenţesc ”Traian Lalescu”, Matematică, anii 2002-2014, Ed. Politehnica Press, Bucureşti, 2014.
[4] Breaz, N., Crăciun, M., Gaşpar, P., Miroiu, M., Paraschiv-
Munteanu, I.: Modelarea matematică prin MATLAB, Ed. StudIS,
Iaşi, 2013.
[5] Breaz, D., Suciu, N., Gaşpar, P., Barbu, G., Pîrvan, M., Prepeliţă, V., Breaz, N.: Transformări integrale şi funcţii complexe cu aplicaţii în tehnică, Vol. 1 - Funcţii complexe cu aplicaţii în tehnică, Ed. StudIS, Iaşi, 2013.
[9] Drăguşin, L., Drăguşin, C., Radu, C.: Calcul diferenţial şi ecuaţii diferenţiale, Ed. Du Style, Bucureşti, 1996.
[10] Drăguşin, C., Gavrilă, M.: Analiză matematică. Calcul diferenţial, Ed. Matrix Rom, Bucureşti, 2007.
[18] Niţă, C., Năstăsescu, C., Brandiburu, M., Joiţa, D.: Culegere
de probleme pentru liceu - algebră - clasele IX-XII, Ed. Rotech Pro,
Bucureşti, 2004.
[19] Olariu, V., Prepeliţă, V.: Matematici speciale, Ed. Didactică şi
Pedagogică, Bucureşti, 1985.
[21] Olariu, V., Olteanu, O.: Analiză matematică, Ed. Semne, Bucureşti, 1998.
[23] Pîrvan, M., Savu, I.: Matematici avansate pentru ingineri, Ed. Matrix Rom, Bucureşti, 2021.
[32] Storey, B. D.: Computing Fourier Series and Power Spectrum with
MATLAB. http://faculty.olin.edu/bstorey/Notes/Fourier.pdf
Index
Fourier series coefficients
  complex, 20
  generalized, 6, 7
  trigonometric, 11
function
  absolutely integrable, 65, 142
  even, 13, 17
  Gamma, 148
  odd, 14, 17
  original, 143, 258
  periodic, 12, 13, 15, 16
  piecewise continuous, 12, 142
  standardized, 13, 83
  transfer, 255
harmonically related exponentials, 22
Heaviside’s discrete step function
  Z transform, 260, 312, 313, 318, 319
Heaviside’s step function
  Fourier transform, 126, 127, 132, 133
  Laplace transform, 145, 231, 232, 239, 248, 251, 253
Hilbert space, 3
hyperbolic functions
  Laplace transform, 234, 250
  Z transform, 313, 321
hyperbolic sine and cosine functions
  Laplace transform, 150
image of a function
  Fourier transform, 67
  Laplace transform, 142
  Z transform, 258
index of growth, 143
Initial Value
  Laplace transform, 161
  Z transform, 272
integers congruent modulo n, 192
integral
  Complex Fourier, 83
  Dirichlet, 72
  Euler, 70, 82
  Fourier, 66
  Fourier for even functions, 84
  Fourier for odd functions, 85
  Laplace, 142
  Real Fourier, 84
Integration of the Image
  Laplace transform, 158
  Z transform, 269
Integration of the Original
  Laplace transform, 157
Inversion Formula
  Fourier transform, 79
jump discontinuity, 142
kernel
  Fourier, 67
key, 191
Kronecker’s function, 259
Linear control system
  discrete-time, 282
  time-invariant, 185
linear space, 2
Linearity
  discrete Fourier transform, 93
  Fourier transform, 74
  Laplace transform, 149
  Z transform, 261
matrix
  Fourier transform, 128
  inverse Fourier transform, 130
  Laplace transform, 237
  transfer, 240, 255, 286
  Z transform, 314
multiplier, 283
norm of a vector, 3
orthogonal
  system, 3, 8, 20
  vectors, 3
Parseval’s Formula, 50, 81, 93
periodic functions
  Laplace transform, 162, 236
Plancherel’s Theorem, 81, 93
power function
  Laplace transform, 147, 149, 250
  Z transform, 310, 319
power series
  Laplace transform, 164
pre-Hilbert space, 2
preimage
  of a Fourier transform, 67
  of a Laplace transform, 142
Product
  Fourier transform, 80
  Z transform, 271
proper rational function, 188
pulse
  duration, 25
  period, 26
  train, 25
Realization problem, 189
right periodic function
  Z transform, 265
RLC circuit, 189
Second Time Delay
  Laplace transform, 152
  Z transform, 264
sequence
  convergent, 3
  fundamental, 3
signal
  in the frequency domain, 67, 142, 258
  in the time domain, 67, 142, 258
Similarity
  Fourier transform, 75
  Laplace transform, 151
  Z transform, 262
sine function
  Laplace transform, 150, 233, 234, 250
  Z transform, 313, 320
spectrum
  amplitude, 23
  frequency, 23
  signal, 23
state-space representation, 285
sum of a function, 268
summator, 283
synthesis equation, 22
text
  cipher, 191
  plain, 191
Time Delay
  Fourier transform, 75
  Laplace transform, 151, 232, 237, 248, 251
  Z transform, 263, 312, 318, 321
transform
  Dirichlet, 258
  discrete Fourier, 90, 136
  discrete Laplace, 258
  fast Fourier, 94, 131
  Fourier, 66, 67, 123, 132
  Fourier cosine, 85, 135
  Fourier sine, 85, 136
  inverse Fourier, 128, 134
  inverse Laplace, 168, 239, 252
  inverse Z, 314, 322
  Laplace, 142
  Z, 309, 316
  Z of a discrete function, 258
  Z of an original function, 258
Translation
  Fourier transform, 75, 93, 128, 130
  Laplace transform, 153, 233, 249, 253
variable
  input, 282
  output, 282
  state, 284
vector, 2
vector space, 2