AMCS 202 - Lecture 1 and Lecture 2

Review: vectors, linear dependence

Prof. Sahrawi Chaib, PhD

Dear Students: Please read these notes prior to the class meeting, and please try to solve all the problems that are posed in the text; it will help you understand the class material better. In class we will be doing more problems and applications and minimal lecturing: more discussion, less one-way lecturing. The first lecture will be on February 4th. These notes cover two lectures, but I will most likely spread them over three classes, depending on how prepared most of you are.
1 Vector Space

Let V be a set of elements called vectors and K a set of elements called scalars. We introduce two operations: addition, which we denote $\oplus$, and multiplication by a scalar, which we denote $\odot$. A vector space V over K consists of the set V along with the two operations $\oplus$ and $\odot$ such that

if $v$ and $w \in V$ then $v \oplus w \in V$

$v \oplus w = w \oplus v$ (commutativity)

$(v \oplus w) \oplus u = v \oplus (w \oplus u)$ (associativity)

there is a zero vector $\vec{0}$ such that $v \oplus \vec{0} = v$, $\forall v \in V$

each $v$ has an additive inverse $w \in V$ such that $w \oplus v = \vec{0}$

$\forall \alpha \in K$ and $v \in V$ we have $\alpha \odot v \in V$

$\forall \alpha, \beta \in K$ and $v \in V$ we have $(\alpha + \beta) \odot v = (\alpha \odot v) \oplus (\beta \odot v)$

$\forall \alpha \in K$ and $v, w \in V$ we have $\alpha \odot (v \oplus w) = (\alpha \odot v) \oplus (\alpha \odot w)$

$\forall \alpha, \beta \in K$ and $v \in V$ we have $(\alpha \beta) \odot v = \alpha \odot (\beta \odot v)$

$1 \odot v = v$

From these axioms it also follows that $0 \odot v = \vec{0}$ and that $(-1) \odot v \oplus v = \vec{0}$.
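If you would like to check some of these axioms numerically, here is a minimal Python (NumPy) sketch for $\mathbb{R}^3$, where $\oplus$ is the ordinary componentwise + and $\odot$ is the ordinary scalar *; the random vectors are just examples:

```python
import numpy as np

rng = np.random.default_rng(1)
v, w = rng.standard_normal(3), rng.standard_normal(3)
a, b = 2.0, -3.0

print(np.allclose(v + w, w + v))                # commutativity of addition
print(np.allclose((a + b) * v, a * v + b * v))  # distributivity over scalar addition
print(np.allclose((a * b) * v, a * (b * v)))    # compatibility of the two multiplications
print(np.allclose(1 * v, v))                    # multiplicative identity
```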
Examples of physical quantities that can be specified by a single number are temperature, mass, viscosity, and so on. They are also called scalars. Vectors, however, which were defined in an abstract manner previously, are defined by a magnitude and a direction. Physical examples are force, velocity, magnetic and electric fields, and so on. If the vector space V is our physical 3-dimensional space, then every vector is defined by three numbers that correspond to the strength of the vector in each direction or axis (x, y, or z). We can also define vectors in a 2-dimensional space with the two numbers (x, y). Geometrically a vector can be represented by an arrow: it has a direction (heading up, for instance) and a magnitude, which is the length of the arrow.
1.1 Dot product and the introduction of angle

Since we defined a direction and a magnitude, the concept of angle is automatically introduced. If we draw two vectors, there will generally be an angle between the lines that define their directions; we call this angle $\theta$. We will define the dot product, which is a particular case of $\odot$, denoted $u \cdot v$ and defined as

$$u \cdot v \equiv \begin{cases} \|u\| \, \|v\| \cos\theta & \text{if } u, v \neq 0 \\ 0 & \text{if } u = 0 \text{ or } v = 0 \end{cases}$$

where $\|u\|$, $\|v\|$ and $\cos\theta$ are scalars and will be discussed later.
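As a quick numerical illustration, the following Python sketch computes the dot product and recovers the angle $\theta$; the two vectors are arbitrary examples:

```python
import numpy as np

u = np.array([3.0, 0.0])   # example vector along the x-axis
v = np.array([2.0, 2.0])   # example vector at 45 degrees to the x-axis

dot = np.dot(u, v)                                       # u . v
theta = np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v)))

print(dot)                 # 6.0
print(np.degrees(theta))   # 45.0
```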
1.1.1 Example: Work done by a force

In mechanics, the work W done when a body undergoes a linear displacement from an initial point A to a final point B under the action of a force $\vec{F}$ is defined by $W = \vec{F} \cdot \vec{AB}$. The work is positive if the force is assisting the motion, in which case the angle $\theta$ is smaller than $\pi/2$, and negative if the force is opposing the displacement. The work is then $W = \|\vec{F}\| \, \|\vec{AB}\| \cos\theta$. Here $\|\vec{F}\|$ is a measurable quantity in Newtons and is called the magnitude of the force. In general terms, $\|u\|$ is called the norm of $u$.
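A short numerical sketch of this example (the force and displacement values are invented for illustration): a 10 N force applied at 60 degrees to a 5 m displacement along the x-axis.

```python
import numpy as np

F = np.array([10 * np.cos(np.pi / 3), 10 * np.sin(np.pi / 3)])  # 10 N at 60 degrees
AB = np.array([5.0, 0.0])                                       # 5 m along x

W = np.dot(F, AB)   # work in Joules: ||F|| ||AB|| cos(60 deg) = 10 * 5 * 0.5
print(W)            # 25.0 (up to floating point)
```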
1.2 n-Space

Sometimes vectors are described in a larger space, with more than 3 dimensions. For this reason we shall define a vector by a set of numbers that we call coordinates. In a three-dimensional space a vector can be described by $(x_1, x_2, x_3)$, where $x_i$ is the coordinate along the $i$-th axis. The idea is similar to the definition of points in Cartesian space. For example, the 2-tuple $(a_1, a_2)$ denotes a point we call P, where $a_1$ and $a_2$ are the x and y coordinates respectively.

[Figure: the point $P(a_1, a_2)$ in the plane, with coordinates $a_1$ and $a_2$ measured from the origin O.]

These coordinates can also serve to denote the vector $\vec{OP}$. Here a vector is denoted by a 2-tuple $(a_1, a_2)$ rather than by an arrow. The set of all real 2-tuples is called 2-space and will be denoted by the symbol $\mathbb{R}^2$. That is,

$$\mathbb{R}^2 = \{(a_1, a_2) \mid a_1, a_2 \text{ real numbers}\}$$
Vectors $u = (u_1, u_2)$ and $v = (v_1, v_2)$ in $\mathbb{R}^2$ are said to be equal if $u_1 = v_1$ and $u_2 = v_2$. These ideas can be generalized to n-space, where vectors are defined by n-tuples, that is, by a set of n numbers, similar to what we defined in 2-space:

$$\mathbb{R}^n = \{(a_1, \dots, a_n) \mid a_1, \dots, a_n \text{ real numbers}\}$$

The rules defined in section 1 are still valid in this space. For example, the dot product is

$$u \cdot v \equiv u_1 v_1 + u_2 v_2 + \dots + u_n v_n = \sum_{j=1}^{n} u_j v_j$$
(Problem: The vector $u$ makes an angle $\alpha$ with the horizontal axis and the vector $v$ makes an angle $\beta$ with the horizontal axis. Using the definition of the dot product, where $\theta$ is the angle between $u$ and $v$, show that in 2-space $u \cdot v = u_1 v_1 + u_2 v_2$.)
Let us now define the norm $\|u\|$ such that

$$\|u\| \equiv \sqrt{u \cdot u} = \sqrt{\sum_{j=1}^{n} u_j^2}$$

From the definition of the dot product above, the angle is now given by

$$\theta = \cos^{-1}\left(\frac{u \cdot v}{\|u\| \, \|v\|}\right)$$

(Problem: Prove the Schwarz inequality $|u \cdot v| \leq \|u\| \, \|v\|$, where $|A|$ is the absolute value, defined as $\sqrt{A^2}$.)
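This is not a proof, of course, but you can get a feel for the Schwarz inequality with a quick numerical check on random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    u, v = rng.standard_normal(5), rng.standard_normal(5)
    lhs = abs(np.dot(u, v))                       # |u . v|
    rhs = np.linalg.norm(u) * np.linalg.norm(v)   # ||u|| ||v||
    assert lhs <= rhs + 1e-12                     # Schwarz inequality
print("Schwarz inequality held on all 1000 samples")
```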
1.2.1 Orthogonality and Orthonormality

Vectors are said to be orthogonal if the angle $\theta$ between them is $\pi/2$, so that their dot product is zero. They are said to be orthonormal if they are orthogonal and their norms are equal to one. Any non-zero vector $u$ can be scaled to have a unit norm by multiplying it by $1/\|u\|$: $\hat{u} = \frac{1}{\|u\|}\, u$ is a normalized vector. (Problem: Show that $\|\hat{u}\| = 1$. Write a vector in $\mathbb{R}^3$ whose norm is equal to one. Write two orthonormal vectors.)
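A minimal Python sketch of normalization; the vector is an arbitrary example:

```python
import numpy as np

u = np.array([3.0, 4.0])         # example non-zero vector, ||u|| = 5
u_hat = u / np.linalg.norm(u)    # multiply by 1/||u||

print(u_hat)                     # [0.6 0.8]
print(np.linalg.norm(u_hat))     # 1.0
```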
1.3 Generalized Vector Space

In section 1, we introduced a set of rules for the manipulation of vectors that compose a vector space. We can generalize these ideas as follows: the arithmetic relations we introduced are still valid, so that if $u$ and $v$ belong to the vector space S, so does $u \oplus v$. From now on we will replace $\oplus$ by $+$.

We can also define an addition operation in $\mathbb{R}^n$ such that instead of

$$u + v = (u_1 + v_1, \dots, u_n + v_n)$$

we could have

$$u + v = (u_1 + 2v_1, \dots, u_n + 2v_n)$$

(Show whether the rules we presented in section 1 are still valid, namely commutativity; under which conditions are they valid? If the rules are not satisfied, then S is not a vector space.) Of course $\mathbb{R}^n$ is just one of many vector spaces. Vectors could be functions or matrices, provided that the rules in section 1 are satisfied. A vector space could have an inner product (similar to the dot product we introduced earlier) and a norm, also similar to the one introduced before, but these two properties are not necessary when defining a vector space.
We can introduce an inner product in $\mathbb{R}^n$ such that

$$u \cdot v \equiv u_1 v_1 + \dots + u_n v_n = \sum_{j=1}^{n} u_j v_j$$

We can also introduce a weighted inner product such that

$$u \cdot v \equiv w_1 u_1 v_1 + \dots + w_n u_n v_n = \sum_{j=1}^{n} w_j u_j v_j$$

If we add a norm, these become

$$\|u\| \equiv \sqrt{\sum_{j=1}^{n} u_j^2}$$

which we call the Euclidean norm, or

$$\|u\| \equiv \sqrt{\sum_{j=1}^{n} w_j u_j^2}$$

which we call the modified Euclidean norm; it is useful in optimization theory, as we will see later when we study matrices.

We can also choose a norm such that

$$\|u\| \equiv |u_1| + \dots + |u_n| = \sum_{j=1}^{n} |u_j|$$

the taxicab norm, following Struble.
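Here is a small Python sketch comparing the three norms above; the vector and the (positive) weights are invented for the example:

```python
import numpy as np

u = np.array([1.0, -2.0, 2.0])
w = np.array([1.0, 0.5, 2.0])    # example positive weights

euclidean = np.sqrt(np.sum(u**2))      # sqrt(1 + 4 + 4) = 3.0
weighted  = np.sqrt(np.sum(w * u**2))  # modified Euclidean norm, sqrt(11)
taxicab   = np.sum(np.abs(u))          # |1| + |-2| + |2| = 5.0

print(euclidean, weighted, taxicab)
```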
1.4 Span, Subspace, Linear Dependence, Basis and Dimension

1.4.1 Span

If $u_1, \dots, u_k$ are vectors in a vector space S, then the set of all linear combinations of these vectors, of the form

$$u = \alpha_1 u_1 + \dots + \alpha_k u_k$$

where the $\alpha$'s are scalars, is called the span of $u_1, \dots, u_k$.
1.4.2 Subspace

A subspace of a vector space is a nonempty subset that satisfies the requirements for a vector space: linear combinations stay in the subspace. If we add two vectors in the subspace, their sum stays in the subspace, and if we multiply a vector from the subspace by a scalar, the result stays in the subspace. It follows that a span is itself a subspace.
Example: let us find out whether the span of $u_1 = (5, 1)$ and $u_2 = (1, 3)$ is all of $\mathbb{R}^2$ or only a part of it. Let $v = (v_1, v_2)$ be any given vector in $\mathbb{R}^2$ and try to express $v = \alpha_1 u_1 + \alpha_2 u_2$. That is,

$$(v_1, v_2) = \alpha_1 (5, 1) + \alpha_2 (1, 3) = (5\alpha_1, \alpha_1) + (\alpha_2, 3\alpha_2) = (5\alpha_1 + \alpha_2, \alpha_1 + 3\alpha_2)$$

Equating the components:

$$5\alpha_1 + \alpha_2 = v_1$$
$$\alpha_1 + 3\alpha_2 = v_2$$
After some easy algebra we find that the $\alpha$'s are solvable for every $v_1$ and $v_2$ (the determinant of the system is $5 \cdot 3 - 1 \cdot 1 = 14 \neq 0$). This means that $\{u_1, u_2\}$ spans $\mathbb{R}^2$; in other words, every vector $v$ in $\mathbb{R}^2$ can be expressed as a linear combination of the vectors $u_1, u_2$. (Problem: If $v = (6, 4)$, find the expression of $v$ as a function of $u_1$ and $u_2$.)
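Rather than solving the assigned problem for you, the sketch below shows the general check in NumPy: put $u_1$ and $u_2$ as the columns of a matrix A; the system above is $A\alpha = v$, and it is solvable for every $v$ exactly when $\det A \neq 0$.

```python
import numpy as np

u1 = np.array([5.0, 1.0])
u2 = np.array([1.0, 3.0])
A = np.column_stack([u1, u2])   # columns are the candidate spanning vectors

print(np.linalg.det(A))         # 14.0, nonzero => {u1, u2} spans R^2

v = np.array([1.0, 1.0])        # an arbitrary example v, not the problem's
alpha = np.linalg.solve(A, v)   # coefficients alpha_1, alpha_2
print(np.allclose(alpha[0] * u1 + alpha[1] * u2, v))   # True
```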
1.4.3 Linear dependence

A set of vectors $u_1, \dots, u_k$ from the same vector space is said to be linearly dependent (LD) if at least one of them can be expressed as a linear combination of the others. If there is no such combination, the vectors are said to be linearly independent (LI). (Problem 1: Prove that $u_1 = (1, 0)$, $u_2 = (1, 1)$ and $u_3 = (5, 4)$ are LD. Prove that $u_1 = (1, 0)$ and $u_2 = (1, 1)$ are LI. Problem 2: Prove that every finite set of orthogonal nonzero vectors is LI.) It would make no sense to ask this question of linear dependence between vectors belonging to $\mathbb{R}^2$ and $\mathbb{R}^7$; they must come from the same vector space.
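Numerically, a convenient way to explore linear dependence is to stack the vectors as the rows of a matrix and compare its rank to the number of vectors; the example vectors below are deliberately different from those in the problems:

```python
import numpy as np

vectors = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [4.0, 6.0]])   # row 3 = row 1 + row 2, hence LD

rank = np.linalg.matrix_rank(vectors)
print(rank)                  # 2
print(rank == len(vectors))  # False => the set is linearly dependent
```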
1.4.4 Bases, Expansions and Dimensions

Expansions: Like functions f(x), which can be expanded in a series of powers of x, vectors too can be expanded in a linear combination of a set of vectors we call base vectors $e_1, \dots, e_k$, such that

$$u = \alpha_1 e_1 + \dots + \alpha_k e_k$$

In contrast to the expansion of a function in powers of the variable, the expansion in the vectors $e$ is finite. The vectors $e$ are LI, so that any vector can be expressed as a linear combination of them. In $\mathbb{R}^2$ we would talk about $e_1$ and $e_2$.

Basis: The set of vectors $e_1, \dots, e_k$ of S is a basis if each $u$ can be expressed uniquely in the form

$$u = \alpha_1 e_1 + \dots + \alpha_k e_k = \sum_{j=1}^{k} \alpha_j e_j$$

Prove the following theorem: A finite set $\{e_1, \dots, e_k\}$ in a vector space S is a basis for S if and only if it spans S and is LI. Using this theorem, show that the set $e_1 = (2, 1)$ and $e_2 = (2, 4)$ is a basis for $\mathbb{R}^2$.
Dimension: If the greatest number of LI vectors that can be found in a vector space S is k, then S is k-dimensional and we write $\dim S = k$. Equivalently, if a vector space S admits a basis consisting of k vectors, then S is k-dimensional. For instance, $\dim \mathbb{R}^n = n$.
Orthogonal bases: To expand a vector we need to solve a set of equations whose number depends on the dimension of the space. If the dimension is large, the process can be quite laborious. Now, suppose that $\{e_1, \dots, e_k\}$ is an orthogonal basis for S, so that

$$e_i \cdot e_j = (e_i \cdot e_i)\, \delta_{ij}$$

(where $\delta_{ij}$ is the Kronecker symbol); in other words,

$$e_i \cdot e_j = 0 \quad \text{if } i \neq j.$$
Now we wish to have the following expansion:

$$u = \alpha_1 e_1 + \dots + \alpha_k e_k$$

and to determine the $\alpha$'s. To do so we dot $u$ with each of the $e$'s and obtain a linear system:

$$u \cdot e_1 = (e_1 \cdot e_1)\,\alpha_1 + \dots + 0\,\alpha_k,$$
$$u \cdot e_2 = 0\,\alpha_1 + (e_2 \cdot e_2)\,\alpha_2 + 0\,\alpha_3 + \dots + 0\,\alpha_k,$$
$$\vdots$$
$$u \cdot e_k = 0\,\alpha_1 + \dots + 0\,\alpha_{k-1} + (e_k \cdot e_k)\,\alpha_k.$$

Solving for the $\alpha$'s we find

$$\alpha_1 = \frac{u \cdot e_1}{e_1 \cdot e_1}, \quad \alpha_2 = \frac{u \cdot e_2}{e_2 \cdot e_2}, \quad \dots, \quad \alpha_k = \frac{u \cdot e_k}{e_k \cdot e_k}.$$
Thus, if the basis is orthogonal, the expansion of any given $u$ is simply

$$u = \left(\frac{u \cdot e_1}{e_1 \cdot e_1}\right) e_1 + \dots + \left(\frac{u \cdot e_k}{e_k \cdot e_k}\right) e_k = \sum_{j=1}^{k} \left(\frac{u \cdot e_j}{e_j \cdot e_j}\right) e_j$$
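A small NumPy sketch of this expansion with an invented orthogonal (but not orthonormal) basis of $\mathbb{R}^2$:

```python
import numpy as np

e1, e2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])   # orthogonal, norms sqrt(2)
u = np.array([3.0, 5.0])

a1 = np.dot(u, e1) / np.dot(e1, e1)   # alpha_1 = (u.e1)/(e1.e1) = 8/2 = 4
a2 = np.dot(u, e2) / np.dot(e2, e2)   # alpha_2 = (u.e2)/(e2.e2) = -2/2 = -1

print(a1 * e1 + a2 * e2)              # [3. 5.], which recovers u
```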
If the vectors $e_j$ are normalized, that is $\|e_j\| = 1$, so that they are orthonormal, then the former equation becomes

$$u = (u \cdot e_1)\, e_1 + \dots + (u \cdot e_k)\, e_k = \sum_{j=1}^{k} (u \cdot e_j)\, e_j$$

If the set $\{e_1, \dots, e_k\}$ is an orthonormal basis, then the last equality is called the best approximation. (Prove this statement by first writing a vector $u$ as a linear combination of the $e$'s and showing that it reduces to the last equality.) The best approximation is also called the orthogonal projection of $u$ onto the subspace spanned by the $e$'s.
Problem (the Gram-Schmidt orthogonalization process): Given k LI vectors $v_1, \dots, v_k$, it is possible to obtain from them k orthonormal vectors $e_1, \dots, e_k$ in span$\{v_1, \dots, v_k\}$ by the Gram-Schmidt process: take $e_1$ equal to $v_1$, take $e_2$ equal to a suitable linear combination of $v_1$ and $v_2$, take $e_3$ equal to a suitable linear combination of $v_1$, $v_2$ and $v_3$, and so on, and then normalize the results. The resulting orthonormal set is as follows:

$$e_1 = \frac{v_1}{\|v_1\|},$$

$$e_2 = \frac{v_2 - (v_2 \cdot e_1)\, e_1}{\|v_2 - (v_2 \cdot e_1)\, e_1\|},$$

$$\vdots$$

$$e_j = \frac{v_j - \sum_{i=1}^{j-1} (v_j \cdot e_i)\, e_i}{\|v_j - \sum_{i=1}^{j-1} (v_j \cdot e_i)\, e_i\|}, \quad \text{through } j = k.$$

Verify that each $e_j$ defined above is a linear combination of $v_1, \dots, v_k$, and that the $e_j$'s are orthonormal. [In verifying that $\|e_j\| = 1$, be sure to show that each denominator is nonzero.]
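A minimal Python sketch of the process described above (the classical variant; for large or nearly dependent sets, a modified variant is numerically preferable). The example vectors are arbitrary:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    es = []
    for v in vectors:
        w = v - sum((v @ e) * e for e in es)   # subtract projections onto earlier e's
        norm = np.linalg.norm(w)
        if norm < 1e-12:                       # the denominator must be nonzero
            raise ValueError("vectors are (numerically) linearly dependent")
        es.append(w / norm)
    return es

v1, v2 = np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])
e1, e2 = gram_schmidt([v1, v2])
print(np.dot(e1, e2))                          # ~0.0: orthogonal
print(np.linalg.norm(e1), np.linalg.norm(e2))  # 1.0 1.0
```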
Problem (Bessel inequality): Given a vector $u$ in S and an orthonormal set $e_1, \dots, e_N$ in S, what is the best approximation

$$u \approx \sum_{j=1}^{N} c_j e_j \;?$$

In other words, how do we compute the coefficients $c_j$ so that the error vector $\vec{E} = u - \sum_{j=1}^{N} c_j e_j$ is as small as possible? Or, how do we choose the $c_j$'s so that the norm $\|\vec{E}\|$ is a minimum? To do that we will work with $\|\vec{E}\|^2$ to avoid square roots:

$$\|\vec{E}\|^2 = \vec{E} \cdot \vec{E} = \left(u - \sum_{j=1}^{N} c_j e_j\right) \cdot \left(u - \sum_{j=1}^{N} c_j e_j\right) = u \cdot u - 2\sum_{j=1}^{N} c_j (u \cdot e_j) + \sum_{j=1}^{N} c_j^2,$$
(Prove the last step).
If we define $\gamma_j \equiv (u \cdot e_j)$ and note that $u \cdot u = \|u\|^2$, we may now write

$$\|\vec{E}\|^2 = \sum_{j=1}^{N} c_j^2 - 2\sum_{j=1}^{N} \gamma_j c_j + \|u\|^2,$$
and, completing the square,

$$\|\vec{E}\|^2 = \sum_{j=1}^{N} (c_j - \gamma_j)^2 - \sum_{j=1}^{N} \gamma_j^2 + \|u\|^2.$$
Beginning with this very last equality, derive the Bessel inequality:

$$\sum_{j=1}^{N} (u \cdot e_j)^2 \leq \|u\|^2$$

You can prove, though you don't have to, that if $u$ is in span$\{e_1, \dots, e_N\}$, or if $\dim S = N$, then the Bessel inequality becomes an equality. What is the common name of the Bessel equality in two and three dimensions?
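As a numerical illustration of the inequality (with invented data): project a vector of $\mathbb{R}^3$ onto an orthonormal set of N = 2 vectors.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
E = [np.array([1.0, 0.0, 0.0]),   # an orthonormal set in R^3: here simply
     np.array([0.0, 1.0, 0.0])]   # the first two standard basis vectors

lhs = sum(np.dot(u, e)**2 for e in E)   # sum of (u . e_j)^2 = 1 + 4 = 5
rhs = np.dot(u, u)                      # ||u||^2 = 14
print(lhs <= rhs)                       # True: Bessel's inequality
```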