
MTH501 Short Notes Midterm

Lectures 1 to 18
By vuonlinehelp.blogspot.com
YouTube link:
Website Link: https://vuonlinehelp.blogspot.com/

Important Note

You should never copy-paste the exact answers that we provide. Please make
some changes in the solution file. If you appreciate our work, please share
our website vuonlinehelp.blogspot.com with your friends.

Lecture 1

Introduction and Overview

What is Algebra?

Algebra is named in honor of Mohammed Ibn Musa al-Khwarizmi. Around 825, he
wrote a book entitled Hisab al-jabr w'al-muqabalah, which presented rules for
solving equations.

Algebraic Term

The basic unit of an algebraic expression is a term. In general, a term is either a single
number or a product of a number and one or more variables.

Algebraic Expressions

An expression is a combination of numbers, variables, and operation signs (such as +
and −) that is mathematically and logically meaningful.

For example

8x² + 9x − 1 is an algebraic expression.

What is Linear Algebra?

One of the most important problems in mathematics is that of solving systems of linear
equations. Linear algebra is a branch of mathematics that deals with linear equations and
their representations in the vector space using matrices. In other words, linear algebra is
the study of linear functions and vectors. It is one of the most central topics of
mathematics. Most modern geometrical concepts are based on linear algebra.

Elementary Linear Algebra

Elementary linear algebra introduces students to the basics of linear algebra. This
includes simple matrix operations, various computations that can be done on a system of
linear equations, and certain aspects of vectors. Some important terms associated with
elementary linear algebra are given below:

Scalars
A scalar is a quantity that only has magnitude and not direction. It is an element that is
used to define a vector space. In linear algebra, scalars are usually real numbers.

Vectors
A vector is an element in a vector space. It is a quantity that can describe both the
direction and magnitude of an element.
Vector Space
The vector space consists of vectors that may be added together and multiplied by
scalars.

Matrix
A matrix is a rectangular array wherein the information is organized in the form of rows
and columns. Most linear algebra properties can be expressed in terms of a matrix.

Matrix Operations
These are simple arithmetic operations such as addition, subtraction,
and multiplication that can be conducted on matrices.

Applications of Linear algebra

Linear algebra makes it possible to work with large arrays of data. It has many
applications in many diverse fields, such as

 Computer Graphics,
 Electronics,
 Chemistry,
 Biology,
 Differential Equations,
 Economics,
 Business,
 Psychology,
 Engineering,
 Analytic Geometry,
 Chaos Theory,
 Cryptography,
 Fractal Geometry,
 Game Theory,
 Graph Theory,
 Linear Programming,
 Operations Research

Why use Linear Algebra?

Linear algebra is useful for solving problems in such applications, in topics such as
Physics, Fluid Dynamics, Signal Processing and, more generally, Numerical Analysis.
Principal Aspects of Linear Algebra

There are two principal aspects of linear algebra:

• Theoretical, and
• Computational.

A major part of mastering the subject consists in learning how these two aspects
are related and how to move from one to the other.

Recommended Books and Supported Material

The lectures are based on the material taken from the books mentioned below.

1. Linear Algebra and its Applications (3rd Edition) by David C. Lay.
2. Contemporary Linear Algebra by Howard Anton and Robert C. Busby.
3. Introductory Linear Algebra (8th Edition) by Howard Anton and Chris Rorres.
4. Introduction to Linear Algebra (3rd Edition) by L. W. Johnson, R. D. Riess and J. T. Arnold.
5. Linear Algebra (3rd Edition) by S. H. Friedberg, A. J. Insel and L. E. Spence.
6. Introductory Linear Algebra with Applications (6th Edition) by B. Kolman.

Lecture 2

Introduction to Matrices

Matrix

A matrix is a collection of numbers or functions arranged into rows and columns.

Elements

Matrices are denoted by capital letters A, B, …, Y, Z. The numbers or functions are called
elements of the matrix. The elements of a matrix are denoted by small letters a, b, …, y, z.

Rows and Columns


The horizontal and vertical lines in a matrix are, respectively, called the rows and columns of the matrix.

Order of a Matrix

The size (or dimension) of a matrix is called the order of the matrix. The order is based
on the number of rows and the number of columns, and it is written as r × c, where r is
the number of rows and c is the number of columns.

Square Matrix

A matrix with an equal number of rows and columns is called a square matrix.

Equality of matrices

Two matrices are said to be equal if both matrices are of the same order (i.e., they
have the same number of rows and columns) and their corresponding entries are equal:
A m×n = B m×n.

Multiple of a matrix

The multiplication of a matrix by a scalar is defined as follows: if A = [aij] m×n is a
matrix and k is a scalar, then kA is another matrix obtained by multiplying each element
of A by the scalar k.
Addition of Matrices

Only matrices of the same order may be added, by adding corresponding elements. If A =
[aij] and B = [bij] are two m×n matrices, then A + B = [aij + bij]. Obviously, the order of
the matrix A + B is m×n.

Difference of Matrices

The difference of two matrices A and B of the same order m×n is defined to be the matrix
A − B = A + (−B).

Multiplication of Matrices

We can multiply two matrices if and only if the number of columns in the first matrix
equals the number of rows in the second matrix. Otherwise, the product of the two
matrices is not possible:

A m×n B n×p = C m×p
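
As a quick illustration, here is a minimal Python/numpy sketch of these operations; the
matrices below are hypothetical, chosen only for the example.

    import numpy as np

    A = np.array([[1, 2, 3],
                  [4, 5, 6]])        # order 2 x 3
    B = np.array([[7, 8, 9],
                  [10, 11, 12]])     # order 2 x 3
    C = np.array([[1, 0],
                  [0, 1],
                  [2, 3]])           # order 3 x 2

    print(A + B)   # addition: same order, add corresponding entries
    print(A - B)   # difference: A + (-B)
    print(3 * A)   # scalar multiple: each entry multiplied by 3
    print(A @ C)   # product: (2 x 3)(3 x 2) = 2 x 2; columns of A match rows of C
    # A @ B would raise a ValueError: columns of A (3) != rows of B (2)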


Zero Matrix or Null matrix

A matrix whose all entries are zero is called zero matrix or null matrix and it is denoted

by O.

Associative Law

The matrix multiplication is associative. This means that if A, B and C are m× p , p × r


and r × n matrices, then A(BC) = (AB)C

Distributive Law

If B and C are matrices of order r × n and A is a matrix of order m × r, then the
distributive law states that

 A(B + C) = AB + AC
 Furthermore, if the product (A + B)C is defined,
 then (A + B)C = AC + BC

Determinant of a Matrix

Associated with every square matrix A of constants, there is a number called the
determinant of the matrix, which is denoted by det(A) or |A|.


Transpose of a Matrix

The transpose of an m×n matrix A is denoted by A^tr and it is obtained by interchanging
the rows of A into columns. In other words, the rows of A become the columns of A^tr.
Clearly, A^tr is an n×m matrix.

Properties of the Transpose

The following properties are valid for the transpose:

 (A^tr)^tr = A
 (A + B)^tr = A^tr + B^tr
 (kA)^tr = k A^tr, for any scalar k
 (AB)^tr = B^tr A^tr

Multiplicative Inverse

Suppose that A is a square matrix of order n × n. If there exists an n × n matrix B such
that AB = BA = I, then B is said to be the multiplicative inverse of the matrix A and is
denoted by B = A−1.

Singular and Non-Singular Matrices

Singular matrix
A square matrix whose determinant is 0 is called a singular matrix.

Nonsingular matrix
A square matrix that is not singular, i.e. one that has a matrix inverse, is called
nonsingular. Nonsingular matrices are sometimes also called regular matrices. A square
matrix is nonsingular iff its determinant is nonzero.

Minor of Matrix

The minor of a matrix is defined for each element of the matrix: it is the determinant of
the part of the matrix remaining after excluding the row and the column containing that
particular element. The new matrix formed with the minors of each element of the given
matrix is called the minor of the matrix.

The derivative matrix

The definition of differentiability in multivariable calculus is a bit technical. There
are subtleties to watch out for, as one has to remember that the existence of the
derivative is a more stringent condition than the existence of partial derivatives.

Lecture 3

Systems of Linear Equations

Linear Equations

Linear equations are equations in which the variables are raised to the power of 1 and are
not multiplied or divided by each other. They represent straight lines on a graph and have
the general form:

y = mx + b

Here, y and x are variables, m represents the slope of the line, and b is the y-intercept,
which is the point where the line intersects the y-axis.

For example, let's solve the equation 2x + 3 = 9:

 Start with the equation: 2x + 3 = 9.
 Move the constant term (3) to the right side: 2x = 9 − 3, which becomes 2x = 6.
 Divide both sides of the equation by 2 to isolate x: x = 6/2, which becomes x = 3.
 Check the solution: 2(3) + 3 = 9. It satisfies the equation.

Homogeneous Linear equation


A system of linear equations having matrix form AX = O, where O represents a zero
column matrix, is called a homogeneous system.

System of Linear Equations

A finite set of linear equations is called a system of linear equations or linear system. The
variables in a linear system are called the unknowns.

General System of Linear Equations

A general system of linear equations consists of multiple equations with multiple
variables. It can be represented in matrix form as:

A*X=B

Here, A is the coefficient matrix, X is the variable matrix, and B is the constant matrix.

Solution of a System of Linear Equations

A solution of the system is a list of values for the unknowns that makes each
equation in the system a true statement. The set of all such solutions of a linear
system is called its solution set.

Consistent and inconsistent system

A linear system is said to be consistent if it has at least one solution and it is called
inconsistent if it has no solutions.
Parametric Representation

A very convenient way to describe the solution set in this case is to express it
parametrically. We can do this by letting y = t and solving for x in terms of t, or by
letting x = t and solving for y in terms of t.

Matrix Notation

The essential information of a linear system can be recorded compactly in a rectangular
array called a matrix. The matrix of the coefficients of the unknowns is called the
coefficient matrix (or matrix of coefficients) of the system. With the column of
constants appended as an additional column, it is called the augmented matrix of the
system, denoted by [A b].

Elementary Row Operations

1. (Replacement) Replace one row by the sum of itself and a nonzero multiple of
another row.
2. (Interchange) Interchange two rows.
3. (Scaling) Multiply all entries in a row by a nonzero constant.
Row equivalent matrices

A matrix B is said to be row equivalent to a matrix A of the same order if B can be
obtained from A by performing a finite sequence of elementary row operations on A.

Lecture 4

Row Reduction and Echelon Forms

Echelon form of a matrix

A rectangular matrix is in echelon form (or row echelon form) if it has the following
three properties:

1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the
row above it.
3. All entries in a column below a leading entry are zero.

Reduced Echelon Form of a matrix

If a matrix in echelon form satisfies the following additional conditions, then it is in
reduced echelon form (or reduced row echelon form):

4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.

Pivot Positions

A pivot position in a matrix A is a location in A that corresponds to a leading entry in
an echelon form of A.

Pivot column

A pivot column is a column of A that contains a pivot position.

Pivot element
A pivot is a nonzero number in a pivot position that is used as needed to create zeros via
row operations. The Row Reduction Algorithm consists of four steps, and it produces a
matrix in echelon form. A fifth step produces a matrix in reduced echelon form.

Example 3

Apply elementary row operations to transform the following matrix first into echelon
form and then into reduced echelon form.

Solution

STEP 1

Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at
the top.
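
The example's original matrix is not reproduced in these notes, so here is a minimal
Python sketch with sympy on a hypothetical stand-in matrix, producing both forms:

    from sympy import Matrix

    # Hypothetical stand-in matrix, used only to illustrate the two forms.
    M = Matrix([[0, 3, -6, 6],
                [3, -7, 8, -5],
                [3, -9, 12, -9]])

    echelon = M.echelon_form()   # echelon form: properties 1-3 above
    reduced, pivots = M.rref()   # reduced echelon form: adds properties 4-5
    print(echelon)
    print(reduced)
    print(pivots)                # indices of the pivot columns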
Lecture 5

Vector Equations

Vector Equations

Vector equations are used to represent the equation of a line or a plane with the help of
the variables x, y, z. The vector equation defines the placement of the line or the plane
in the three-dimensional framework. The vector equation of a line is r = a + λb, and the
vector equation of a plane is r · n = d.

Column Vector

“A matrix with only one column is called column vector or simply a vector”.

Vectors in R2

If R is the set of all real numbers, then the set of all vectors with two entries is
denoted by R² = R × R. The entries in vectors are assumed to be elements of a set called
a field, denoted by F.

Equal Vector Formula

The formula to check for equal vectors: if we have two vectors A = (x, y, z) and
B = (p, q, r), then A and B are equal if and only if x = p, y = q, and z = r; that is,
they have equal coordinates.

Equality of vectors in R2

Likewise, two vectors in R² are equal if and only if their corresponding entries are equal.
Addition of Vectors

Given two vectors u and v in R2 , their sum is the vector u + v obtained by adding
corresponding entries of the vectors u and v, which is again a vector in R2.

Multiplication of Vectors and Scalar Quantity

A vector relates two given points. It is a mathematical quantity having both
magnitude and direction.

Multiplication of Vectors
Multiplication of vectors can be of two types:

(i) Scalar Multiplication
(ii) Vector Multiplication

Here, we will discuss only scalar multiplication.

Scalar Multiplication of a vector

Given a vector u and a real number c, the scalar multiple of u by c is the vector cu
obtained by multiplying each entry in u by c.

Geometric Descriptions of R2

Consider a rectangular coordinate system in the plane. Because each point in the plane is
determined by an ordered pair of numbers, we can identify a geometric point (a, b) with
the column vector whose entries are a and b. So we may regard R² as the set of all points
in the plane.

Vectors in R3
Vectors in R3 are 3 × 1 column matrices with three entries. They are represented
geometrically by points in a three-dimensional coordinate space, with arrows from the
origin sometimes included for visual clarity.

Vectors in Rn

If n is a positive integer, Rn (read “r-n”) denotes the collection of all lists (or
ordered n-tuples) of n real numbers, usually written as n × 1 column matrices.

The vector whose all entries are zero is called the zero vector and is denoted by O. (The
number of entries in O will be clear from the context.)

Algebraic Properties of Rn

For all u, v, w in Rn and all scalars c and d:

 u + v = v + u
 (u + v) + w = u + (v + w)
 u + 0 = 0 + u = u
 u + (−u) = (−u) + u = 0
 c(u + v) = cu + cv
 (c + d)u = cu + du
 c(du) = (cd)u
 1u = u

Linear Combinations

Let V be a vector space over the field K. As usual, we call elements of V vectors and call
elements of K scalars. If v1, …, vn are vectors and a1, …, an are scalars, then the linear
combination of those vectors with those scalars as coefficients is a1v1 + a2v2 + ··· + anvn.

Spanning Set
A subset S of a vector space V is called a spanning set for V if Span(S) = V.
A Geometric Description of Span {u, v}

Take u and v in R³, with v not a multiple of u. Then Span {u, v} is the plane containing
u, v, and the origin 0, i.e. the plane in R³ spanned by u and v.
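
To make this concrete, the following Python/numpy sketch (with hypothetical vectors)
tests whether a vector w lies in Span {u, v} by comparing matrix ranks:

    import numpy as np

    # Hypothetical vectors in R^3; v is not a multiple of u, so Span{u, v} is a plane.
    u = np.array([1.0, 2.0, 0.0])
    v = np.array([0.0, 1.0, 1.0])
    w = np.array([2.0, 5.0, 1.0])   # candidate vector to test

    # w lies in Span{u, v} exactly when appending w does not raise the rank.
    rank_uv = np.linalg.matrix_rank(np.column_stack([u, v]))
    rank_uvw = np.linalg.matrix_rank(np.column_stack([u, v, w]))
    print(rank_uv == rank_uvw)   # True here, since w = 2u + v lies in the plane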

Vector Equations of Line

Vector equations of a line can be computed with the help of any two points on the line, or
with the help of a point on the line and a parallel vector. The two methods of forming a
vector form of the equation of a line are as follows.

Let x0 be a point on the line and let v be a vector parallel to the line. By the
definition of parallel vectors, x − x0 = tv for some scalar t, called a parameter, which
varies from −∞ to +∞. The variable point x traces out the line, so the line can be
represented by the equation

x = x0 + tv

This is a vector equation of the line through x0 and parallel to v. In the special case
where x0 = 0, the line passes through the origin, and the equation simplifies to x = tv
(−∞ < t < +∞).

Parametric Equations of a Line in R2

Writing x = (x, y), x0 = (x0, y0) and v = (a, b), the vector equation x = x0 + tv gives
x = x0 + ta and y = y0 + tb. These are called parametric equations of the line in R².
Lecture 6

Matrix Equations

Matrix Equations

If A is an m × n matrix with columns a1, a2, …, an, and if x is in Rn, then the product of
A and x, denoted by Ax, is the linear combination of the columns of A using the
corresponding entries in x as weights; that is,

Ax = x1a1 + x2a2 + ··· + xnan

An equation of the form Ax = b is called a matrix equation, to distinguish it from a
vector equation.

When we say “A is an m×n matrix,” we mean that A has m rows and n columns.
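
The following Python/numpy sketch (with a hypothetical A and x) verifies that Ax really
is this linear combination of the columns:

    import numpy as np

    A = np.array([[1, 2, 0],
                  [0, 1, 3]])
    x = np.array([2, -1, 1])

    direct = A @ x
    by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
    print(direct)                                 # [0 2]
    print(np.array_equal(direct, by_columns))     # True: the same vector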

Lecture 7

Solution Sets of Linear Systems

Solution Set

A solution of a linear system is an assignment of values to the variables x1, x2,... , xn such
that each of the equations in the linear system is satisfied. The set of all possible solutions
is called the Solution Set.

Homogeneous Linear System


A system of linear equations is said to be homogeneous if it can be written in the form Ax
= 0, where A is an m× n matrix and 0 is the zero vector in Rm.

Trivial Solution

A homogeneous system Ax = 0 always has at least one solution, namely, x = 0 (the zero
vector in Rn ). This zero solution is usually called the trivial solution of the homogeneous
system.

Nontrivial solution

A solution of a linear system other than the trivial one is called a nontrivial solution;
i.e., a solution of the homogeneous equation Ax = 0 with x ≠ 0 is called a nontrivial
solution, that is, a nonzero vector x that satisfies Ax = 0.

Existence and Uniqueness Theorem

The homogeneous equation Ax = 0 has a nontrivial solution if and only if the equation
has at least one free variable.

Geometric Interpretation

Since neither u nor v is a scalar multiple of the other, these vectors are not parallel,
and the solution set is a plane through the origin.

Parametric Vector Form of the solution

The equation x = su + tv (s, t in R) is called a parametric vector equation of the plane.

Steps of Writing a Solution Set (of a Consistent System) in a Parametric Vector Form

 Step 1: Row reduce the augmented matrix to reduced echelon form.
 Step 2: Express each basic variable in terms of any free variables appearing in an
equation.
 Step 3: Write a typical solution x as a vector whose entries depend on the free
variables, if any.
 Step 4: Decompose x into a linear combination of vectors (with numeric entries)
using the free variables as parameters, as in the sketch below.
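
As an illustration of these steps, here is a minimal Python sketch with sympy on a
hypothetical consistent system with one free variable:

    from sympy import Matrix, symbols, linsolve

    x1, x2, x3 = symbols('x1 x2 x3')

    # Hypothetical system: x1 + 2*x2 - x3 = 4 and x2 + x3 = 1.
    A = Matrix([[1, 2, -1],
                [0, 1, 1]])
    b = Matrix([4, 1])

    # linsolve row reduces and expresses the solution with x3 as the parameter:
    print(linsolve((A, b), x1, x2, x3))   # {(3*x3 + 2, 1 - x3, x3)}
    # In parametric vector form: x = (2, 1, 0) + x3 * (3, -1, 1).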

Lecture 8

Linear Independence

Linear Independence

An indexed set of vectors {v1, v2, …, vp} in Rn is said to be linearly independent if the
vector equation

x1v1 + x2v2 + ··· + xpvp = 0

has only the trivial solution. An indexed set of vectors {v1, v2, …, vp} is said to be
linearly dependent if there exist weights c1, c2, …, cp, not all zero, such that

c1v1 + c2v2 + ··· + cpvp = 0

Linear Independence of Matrix Columns

Suppose that we begin with a matrix A = [a1 a2 … an] instead of a set of vectors. The
matrix equation Ax = 0 can then be written as x1a1 + x2a2 + ··· + xnan = 0.

Each linear dependence relation among the columns of A corresponds to a nontrivial
solution of Ax = 0.

The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has
only the trivial solution.
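
This criterion is easy to check numerically: the columns are linearly independent exactly
when the rank of A equals the number of columns. A minimal Python/numpy sketch with
hypothetical vectors:

    import numpy as np

    # Columns of A are hypothetical vectors v1, v2, v3 in R^3.
    A = np.array([[1, 4, 2],
                  [2, 5, 1],
                  [3, 6, 0]])

    independent = np.linalg.matrix_rank(A) == A.shape[1]
    print(independent)   # False here: v3 = -2*v1 + v2 is a dependence relation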
Lecture 9

Linear Transformations

Matrix Equation

An equation Ax = b is called a matrix equation, in which a matrix A acts on a vector x by
multiplication to produce a new vector called b.

Solution of Matrix Equation

The solution set of Ax = b consists of those vectors x in the domain that are transformed
into the vector b in the range. The matrix equation Ax = b is thus an important example of
a matrix acting as a transformation on vectors.

Transformation of Function or Mapping

A transformation (or function or mapping) T from Rn to Rm is a rule that assigns to each
vector x in Rn an image vector T(x) in Rm.

The set Rn is called the domain of T, and Rm is called the codomain of T. For x in Rn, the
set of all images T(x) is called the range of T.

Shear transformation

The transformation defined by T(x) = Ax, where A is a matrix of the form [1 k; 0 1], is
called a shear transformation.
Linear transformations

Linear transformations preserve the operations of vector addition and scalar
multiplication.

Properties

If T is a linear transformation, then

T(0) = 0 and T(cu + dv) = cT(u) + dT(v)

for all vectors u, v in the domain of T and all scalars c, d.
Lecture 10

The Matrix of a Linear Transformation

Theorem

Let T: Rn → Rm be a linear transformation. Then there exists a unique matrix A such that
T(x) = Ax for all x in Rn.

In fact, A is the m × n matrix whose jth column is the vector T(ej), where ej is the jth
column of the identity matrix in Rn.

This matrix A is called the standard matrix for the linear transformation T. We
know that every linear transformation from Rn to Rm is a matrix transformation and vice
versa.
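
The theorem suggests a direct recipe: apply T to each standard basis vector and use the
images as columns. A minimal Python/numpy sketch with a hypothetical T (rotation by 90
degrees counterclockwise):

    import numpy as np

    # Hypothetical linear transformation: rotate a vector in R^2 by 90 degrees.
    def T(x):
        return np.array([-x[1], x[0]])

    # The standard matrix A has jth column T(e_j).
    e1, e2 = np.array([1, 0]), np.array([0, 1])
    A = np.column_stack([T(e1), T(e2)])
    print(A)               # [[ 0 -1]
                           #  [ 1  0]]
    x = np.array([3, 4])
    print(T(x), A @ x)     # both give [-4  3]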

Existence and Uniqueness of the solution of T(x)=b

Existence

A mapping T: Rn → Rm is said to be onto Rm if each b in Rm is the image of at least
one x in Rn.

Uniqueness

A mapping T: Rn → Rm is said to be one-to-one if each b in Rm is the image of at most
one x in Rn.

Theorem

Let T: Rn → Rm be a linear transformation. Then T is one-to-one if and only if the
equation T(x) = 0 has only the trivial solution.

Proof:

Since T is linear, T(0) = 0. If T is one-to-one, then the equation T(x) = 0 has at most
one solution and hence only the trivial solution. If T is not one-to-one, then there is a
b that is the image of at least two different vectors in Rn, say u and v. That is,
T(u) = b and T(v) = b.

But then, since T is linear, T(u − v) = T(u) − T(v) = b − b = 0.

The vector u − v is not zero, since u ≠ v. Hence the equation T(x) = 0 has more than one
solution. So either the two conditions in the theorem are both true or they are both false.

Kernel of a Linear Transformation

A linear map (or transformation, or function) f: S → T transforms elements of a vector
space S, called the domain, into elements of another vector space T, called the codomain.

The kernel (or null space) of a linear transformation is the subset of the domain that is
transformed into the zero vector.
One-One Linear Transformation

A transformation T: Rn → Rm is one-to-one if, for every vector b in Rm, the equation
T(x) = b has at most one solution x in Rn.

Lecture 11

Matrix Operations

Matrix Operations

Matrix operations mainly involve three algebraic operations: the addition of matrices,
the subtraction of matrices, and the multiplication of matrices. A matrix is a rectangular
array of numbers or expressions arranged in rows and columns. Matrices have important
applications in mathematics and in many applied fields.

(i, j)th Element of a Matrix

The element aij appearing in the ith row and jth column of the matrix is said to be the
(i, j)th element. The matrix is also denoted by the symbol (aij) m,n.

Horizontal lines of a matrix are called rows. Vertical lines of a matrix are called columns.

Each number or function aij is called its element.

Diagonal Matrix
A square matrix in which every element except the principal diagonal elements is zero is
called a Diagonal Matrix. A square matrix D = [dij]n x n will be called a diagonal matrix if
dij = 0, whenever i is not equal to j.

Null Matrix or Zero Matrix

An m × n matrix whose entries are all zero is a null or zero matrix and is always
written as O. A null matrix may be of any order.

Equal Matrices

Two matrices are said to be equal if they have the same size (i.e., the same number of
rows and columns) and same corresponding entries.

Matrix Multiplication

Matrix multiplication, also known as the matrix product, is a binary operation that
produces a single matrix from two matrices. If A and B are the two matrices, then the
product of A and B is denoted by X = AB.

Example:
Row-Column Rule for Computing AB

If the product AB is defined, then the entry in row i and column j of AB is the sum of
the products of corresponding entries from row i of A and column j of B. The number of
rows of the resulting matrix equals the number of rows of the first matrix, and the
number of columns of the resulting matrix equals the number of columns of the second
matrix.

Example:
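
As an example, here is a minimal Python sketch that computes AB entry by entry with the
row-column rule and checks it against numpy's built-in product; the matrices are
hypothetical:

    import numpy as np

    def matmul_row_column(A, B):
        # Entry (i, j) of AB is the sum of products of corresponding entries
        # from row i of A and column j of B.
        m, n = A.shape
        n2, p = B.shape
        assert n == n2, "columns of A must equal rows of B"
        C = np.zeros((m, p))
        for i in range(m):
            for j in range(p):
                C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
        return C

    A = np.array([[1, 2], [3, 4], [5, 6]])     # 3 x 2
    B = np.array([[7, 8, 9], [10, 11, 12]])    # 2 x 3
    print(matmul_row_column(A, B))             # a 3 x 3 matrix
    print(np.allclose(matmul_row_column(A, B), A @ B))   # True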
Properties of Matrix Multiplication

Let A be an m × n matrix, and let B and C be matrices of sizes for which the indicated
sums and products are defined. Then:

 A(BC) = (AB)C
 A(B + C) = AB + AC
 (B + C)A = BA + CA
 r(AB) = (rA)B = A(rB) for any scalar r
 Im A = A = A In

In general, AB ≠ BA; matrix multiplication is not commutative.

Transpose of a Matrix

The transpose of a matrix is found by interchanging its rows into columns or columns
into rows. The transpose of the matrix is denoted by using the letter “T” in the superscript
of the given matrix. For example, if “A” is the given matrix, then the transpose of the
matrix is represented by A' or AT.

Lecture 12

The Inverse of a Matrix

Inverse of a square Matrix

If A is an n × n matrix, a matrix C of order n × n is called a multiplicative inverse of
A if

AC = CA = I

where I is the n × n identity matrix.

Invertible Matrix

If the inverse of a square matrix exists, it is called an invertible matrix. In this case, we
say that A is invertible and we call C an inverse of A.

Singular matrix and non-singular matrix


A matrix that is not invertible is sometimes called a singular matrix, and an invertible
matrix is called a non-singular matrix.

Elementary Matrices

There are three types of elementary row operations that can be performed on a matrix:

1. Interchanging any two rows
2. Multiplying a row by a nonzero constant
3. Adding a multiple of one row to another

Elementary matrix

An elementary matrix is a square matrix that has been obtained by performing an
elementary row or column operation on an identity matrix.

Each elementary row operation has an inverse operation of the same type: an interchange
of two rows is undone by the same interchange, multiplying a row by a nonzero constant c
is undone by multiplying that row by 1/c, and adding k times one row to another is undone
by subtracting k times that row. The operations in each such pair are called inverse
operations of each other.

Example
Lecture 13

Characterizations of Invertible Matrices

Theorem

Let A be an invertible n × n matrix. Then for any b ∈ Rn, the equation Ax = b ….. (1)
has the unique solution x = A−1 b.
Proof

Let b ∈ Rn be any vector. Since A is invertible, the vector A−1b is a solution of
eq. (1), because A(A−1b) = (AA−1)b = Ib = b.

Uniqueness

For uniqueness, we assume that there is another solution u. Since u is a solution of
eq. (1), we have Au = b; multiplying both sides by A−1 gives u = A−1b, so
x = A−1b = u. This shows that u = x.

Theorem

Let A and B be square matrices such that AB = I. Then A and B are invertible, with
B = A−1 and A = B−1.
Theorem (Invertible Matrix Theorem)
Let A be a square n × n matrix. Then the following statements are equivalent (meaning
that if any one holds, then all are true):

 A is invertible.
 A is row equivalent to the n × n identity matrix.
 A has n pivot positions.
 The equation Ax = 0 has only the trivial solution.
 The columns of A form a linearly independent set.
 The equation Ax = b has at least one solution for each b in Rn.
 The columns of A span Rn.
 There is an n × n matrix C such that CA = I, and an n × n matrix D such that AD = I.
 A^tr is invertible.
Invertible Linear Transformations

Recall that matrix multiplication corresponds to composition of linear transformations.
When a matrix A is invertible, the equation A−1Ax = x can be viewed as a statement
about linear transformations.

Lecture 14

Partitioned Matrices

Partitioned Matrices

A block matrix or a partitioned matrix is a matrix that is partitioned into smaller
rectangular matrices called blocks.

General Partitioning of a Matrix

A matrix can be partitioned (subdivided) into submatrices (also called blocks) in various
ways by inserting lines between selected rows and columns.

Example

Let A be a general matrix of order 5 × 3; we have

Partition (a)
Partition (b)

The ci in the partitioned matrix are sometimes called the column matrices of A.

Partition (c)

The ri in the partitioned matrix are sometimes called the row matrices of A.

Toeplitz matrix

A matrix in which each descending diagonal from left to right is constant is called a
Toeplitz matrix or diagonal-constant matrix.

Block Toeplitz matrix

A block matrix in which blocks are repeated down the diagonals of the matrix is called a
block Toeplitz matrix.

Block Diagonal Matrices


A partitioned matrix A is said to be block diagonal if the matrices on the main diagonal
are square and all other blocks are zero matrices, i.e.

Block Upper Triangular Matrices


A partitioned square matrix A is said to be block upper triangular if the matrices on the
main diagonal are square and all matrices below the main diagonal are zero; that is, the
matrix is partitioned as

Example
Lecture 15

Matrix Factorizations

Matrix Factorization
A factorization of a matrix as a product of two or more matrices is called Matrix
Factorization.

Uses of Matrix Factorization


Matrix factorizations will appear at a number of key points throughout the course. This
lecture focuses on a factorization that lies at the heart of several important computer
programs widely used in applications.

LU Factorization or LU-decomposition
LU factorization is another name for LU decomposition; both titles indicate that a given
matrix can be expressed as the product of two smaller matrices, a lower triangular matrix
and an upper triangular matrix. The product of these two matrices gives back the given
matrix.

If a square matrix A can be reduced to row echelon form with no row interchanges, then
A has an LU-decomposition.
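
For illustration, a minimal Python sketch using scipy on a hypothetical matrix; scipy
also returns a permutation matrix P (with P = I when no row interchanges are needed):

    import numpy as np
    from scipy.linalg import lu

    A = np.array([[4.0, 3.0],
                  [6.0, 3.0]])
    P, L, U = lu(A)                    # A = P @ L @ U
    print(L)                           # lower triangular, unit diagonal
    print(U)                           # upper triangular
    print(np.allclose(P @ L @ U, A))   # True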

Gaussian elimination

Gaussian elimination is a method in which an augmented matrix is subjected to row
operations until the component corresponding to the coefficient matrix is reduced to
triangular form.

After we have obtained our triangular matrix, there are two different approaches we can
use to solve a system of linear equations:

1. Forward substitution
2. Back substitution

Forward substitution
The procedure of solving a system of linear algebraic equations (SLAE) with a lower
triangular coefficient matrix is known as forward substitution. Solving an SLAE with a
triangular coefficient matrix is a variant of the generic substitution approach.

Equation
Lx=y

 L represents the lower triangular factor of the matrix.
 x represents the vector of unknowns.
 y is the result vector.

The matrix form of a lower triangle:


Visualization of forward substitution

The visualization shows how forward substitution works. The method transforms the
matrix into a lower triangular form and then starts solving an equation from top to
bottom.

 The diagram above shows how forward substitution works. In this process, we
make a lower triangle and start from the top.
 As we can see at the top, only x exists, and other values are zero, so it is easy to
find a value of x and use it for the next step.
 In the second step, we find the value of y by using the value of x, which came from
the first step.
 Similarly, in the third step, we use x and y values and find the value of z.

Back substitution

The procedure of solving an SLAE with an upper triangular coefficient matrix is known
as back substitution.

Equation
Ux=y

 U represents the upper triangular factor of the matrix.
 x represents the vector of unknowns.
 y is the result vector.

The matrix form of an upper triangle:


Visualization of backward substitution

It shows how the backward substitution works. The method transforms the matrix into an
upper triangular form and then starts solving an equation from bottom to top.

 The lower diagram shows how back substitution works. In this process, we make
an upper triangle and start from the bottom.
 As we can see at the bottom, only z exists and other values are zero, so it is easy to
find a value of z and use it for the next step.
 In the second step, we find the value of y using the value of z, which came from
the previous step.
 Similarly, in the third step, we use y and z to find the value of x (a code sketch of
both substitutions follows below).
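
Both procedures are short to implement. Here is a minimal Python/numpy sketch of forward
and back substitution on hypothetical triangular systems:

    import numpy as np

    def forward_substitution(L, y):
        # Solve L x = y for lower triangular L, working from top to bottom.
        n = len(y)
        x = np.zeros(n)
        for i in range(n):
            x[i] = (y[i] - L[i, :i] @ x[:i]) / L[i, i]
        return x

    def back_substitution(U, y):
        # Solve U x = y for upper triangular U, working from bottom to top.
        n = len(y)
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
        return x

    L = np.array([[2.0, 0.0], [1.0, 3.0]])
    print(forward_substitution(L, np.array([4.0, 11.0])))   # [2. 3.]
    U = np.array([[2.0, 1.0], [0.0, 3.0]])
    print(back_substitution(U, np.array([7.0, 9.0])))       # [2. 3.]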

Lecture 16

Iterative Solutions of Linear Systems

Iterative Methods for Linear Systems


One of the most important and common applications of numerical linear algebra is the
solution of linear systems that can be expressed in the form A*x = b. When A is a large
sparse matrix, you can solve the linear system using iterative methods, which enable you
to trade off between the run time of the calculation and the precision of the solution. This
topic describes the iterative methods available in MATLAB to solve the equation A*x =
b.
In computational mathematics, an iterative method is a mathematical procedure that
uses an initial value to generate a sequence of improving approximate solutions for a
class of problems, in which the n-th approximation is derived from the previous ones.
A specific implementation with termination criteria for a given iterative method
like gradient descent, hill climbing, Newton's method, or quasi-Newton
methods like BFGS, is an algorithm of the iterative method. An iterative method is
called convergent if the corresponding sequence converges for given initial
approximations. A mathematically rigorous convergence analysis of an iterative method
is usually performed; however, heuristic-based iterative methods are also common.
In contrast, direct methods attempt to solve the problem by a finite sequence of
operations. In the absence of rounding errors, direct methods would deliver an exact
solution (for example, solving a linear system of equations Ax=b by Gaussian
elimination). Iterative methods are often the only choice for nonlinear equations.
However, iterative methods are often useful even for linear problems involving many
variables (sometimes on the order of millions), where direct methods would be
prohibitively expensive (and in some cases impossible) even with the best available
computing power.
If an equation can be put into the form f(x) = x, and a solution x is an attractive fixed
point of the function f, then one may begin with a point x1 in the basin of attraction of x,
and let xn+1 = f(xn) for n ≥ 1, and the sequence {xn}n ≥ 1 will converge to the solution x.
Here xn is the nth approximation or iteration of x and xn+1 is the next or n + 1 iteration
of x. Alternately, superscripts in parentheses are often used in numerical methods, so as
not to interfere with subscripts with other meanings. (For example, x(n+1) = f(x(n)).) If the
function f is continuously differentiable, a sufficient condition for convergence is that
the spectral radius of the derivative is strictly bounded by one in a neighborhood of the
fixed point. If this condition holds at the fixed point, then a sufficiently small
neighborhood (basin of attraction) must exist.

The Gauss-Seidel Method

In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann
method or the method of successive displacement, is an iterative method used to solve
a system of linear equations. It is named after the German mathematicians Carl Friedrich
Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method.

The Gauss–Seidel method is an iterative technique for solving a square system of n linear
equations. Let Ax = b be such a system, where A is a known n × n matrix, x is the unknown
vector, and b is a known vector.


Alternative Approach

Alternative approaches are number oriented rather than digit oriented, are calculated
from the left rather than the right, and are flexible options as opposed to the ‘one
right way’. They have many benefits, including giving students a greater understanding
of numbers, with fewer errors and less reteaching required as a result.

Comparison of Jacobi’s and Gauss-Seidel method

The difference between the Gauss–Seidel and Jacobi methods is that the Jacobi method
uses only the values obtained from the previous step, while the Gauss–Seidel method
always applies the latest updated values during the iterative procedure, as demonstrated
in the sketch below.
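
A minimal Python/numpy sketch of one sweep of each method, applied to a hypothetical
strictly diagonally dominant system so that both iterations converge:

    import numpy as np

    def jacobi_step(A, b, x):
        # Jacobi: every update uses only values from the previous step.
        x_new = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x_new[i] = (b[i] - s) / A[i, i]
        return x_new

    def gauss_seidel_step(A, b, x):
        # Gauss-Seidel: each update immediately uses the latest values.
        x = x.copy()
        for i in range(len(b)):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, 1.0], [2.0, 5.0]])   # strictly diagonally dominant
    b = np.array([9.0, 12.0])
    x = np.zeros(2)
    for _ in range(25):
        x = gauss_seidel_step(A, b, x)
    print(x)   # approaches the exact solution [11/6, 5/3]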

Condition for the Convergence of both Iterative Methods


An n × n matrix A is said to be strictly diagonally dominant if the absolute value of
each diagonal entry exceeds the sum of the absolute values of the other entries in the
same row. If A is strictly diagonally dominant, both the Jacobi and Gauss–Seidel
iterations converge to the solution of Ax = b for any starting vector.

Lecture 17

Introduction to Determinant

In algebra, the determinant is a special number associated with any square matrix. As we
have studied in earlier classes, the determinant of a 2 × 2 matrix is defined as the
product of the entries on the main diagonal minus the product of the entries off the main
diagonal. The determinant of a matrix A is denoted by det(A) or |A|.

Example
Cofactor of an element

The cofactor of an element is obtained by taking the determinant of the submatrix left
after eliminating the row and column of that element (its minor). The cofactor is
preceded by a negative or positive sign based on the element's position:
Cij = (−1)^(i+j) Mij.

Determinants of Triangular Matrices

Determinants of triangular matrices are easy to evaluate regardless of size: the
determinant is simply the product of the entries on the main diagonal.

Example
Lecture 18

Properties of Determinants

Some of these properties have already been discussed, and you will be familiar with them.
They are helpful when computing the values of determinants. The secret of determinants
lies in how they change when row or column operations are performed.

(Row Operations):

Let A be a square matrix.

a. If a multiple of one row of A is added to another row to produce B, then
det B = det A.
b. If two rows of A are interchanged to produce B, then det B = −det A.
c. If one row of A is multiplied by k to produce B, then det B = k · det A.

Example
An Algorithm to evaluate the determinant

Algorithm means a sequence of a finite number of steps to get a desired result. The word
‘algorithm’ comes from the name of the famous Muslim mathematician Al-Khwarizmi, from
whose work the word ‘algebra’ is also derived.

The step-by-step evaluation of det(A) of order n is obtained as follows:

 Step 1: By an interchange of rows of A (and taking the resulting sign into
account), bring a nonzero entry to the (1,1) position (unless all the entries in the
first column are zero, in which case det A = 0).
 Step 2: By adding suitable multiples of the first row to all the other rows, reduce
the (n−1) entries in the first column, except the (1,1) entry, to 0. Expand det(A) by its
first column, or continue with the following steps.
 Step 3: Repeat step 1 and step 2 on the remaining rows, concentrating on the
second column.
 Step 4: Repeat steps 1, 2 and 3 with the remaining (n−2) rows, (n−3) rows,
and so on, until a triangular matrix is obtained.
 Step 5: Multiply all the diagonal entries of the resulting triangular matrix and then
multiply by the accumulated sign to get det(A), as in the sketch below.
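
A minimal Python/numpy sketch of this algorithm on a small hypothetical matrix; it
tracks the sign changes from row interchanges and multiplies the diagonal at the end:

    import numpy as np

    def det_by_row_reduction(A):
        A = A.astype(float)
        n = A.shape[0]
        sign = 1.0
        for j in range(n):
            # Step 1: bring a nonzero entry to the pivot position, if necessary.
            pivot = np.argmax(np.abs(A[j:, j])) + j
            if A[pivot, j] == 0:
                return 0.0                     # whole column is zero: det A = 0
            if pivot != j:
                A[[j, pivot]] = A[[pivot, j]]  # row interchange flips the sign
                sign = -sign
            # Step 2: reduce the entries below the pivot to 0.
            for i in range(j + 1, n):
                A[i, j:] -= (A[i, j] / A[j, j]) * A[j, j:]
        # Step 5: product of the diagonal entries times the accumulated sign.
        return sign * np.prod(np.diag(A))

    A = np.array([[0, 3], [2, 1]])
    print(det_by_row_reduction(A), np.linalg.det(A))   # both give -6.0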

Lecture 19

Cramer’s Rule, Volume, and Linear Transformations

Cramer’s Rule

Cramer’s rule is needed in a variety of theoretical calculations. For instance, it can be
used to study how the solution of Ax = b is affected by changes in the entries of b.
However, the formula is inefficient for hand calculations, except for 2 × 2 or perhaps
3 × 3 matrices.
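
For such small systems, though, the rule is easy to apply in code. A minimal Python/numpy
sketch on a hypothetical 2 × 2 system, checked against the direct solver:

    import numpy as np

    def cramer_solve(A, b):
        # Cramer's rule: x_i = det(A_i(b)) / det(A), where A_i(b) is A with
        # column i replaced by b. Practical only for very small systems.
        det_A = np.linalg.det(A)
        x = np.zeros(len(b))
        for i in range(len(b)):
            Ai = A.copy()
            Ai[:, i] = b
            x[i] = np.linalg.det(Ai) / det_A
        return x

    A = np.array([[3.0, -2.0], [-5.0, 4.0]])
    b = np.array([6.0, 8.0])
    print(cramer_solve(A, b))      # [20. 27.]
    print(np.linalg.solve(A, b))   # same answer from the direct solver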
Inverse of Matrix

The inverse of a matrix is a matrix which, on multiplication with the given matrix, gives
the multiplicative identity. For a square matrix A, its inverse is A−1, and A · A−1 = A−1 · A
= I, where I is the identity matrix. A matrix whose determinant is nonzero, and for
which the inverse matrix can be calculated, is called an invertible matrix.

In the case of real numbers, the inverse of any nonzero real number a is the number a−1,
such that a times a−1 equals 1; the inverse of the number is its reciprocal, as long as
the number isn't zero. Similarly, the inverse of a square matrix A, denoted by A−1, is
the matrix such that the product of A and A−1 is the identity matrix. The identity matrix
that results will be the same size as matrix A.

The formula to find the inverse of a matrix is: A-1 = 1/|A| · Adj A, where

 |A| is the determinant of A and


 Adj A is the adjoint of A
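
A minimal Python sketch with sympy verifying this formula on a hypothetical 2 × 2 matrix
(sympy calls the adjoint here the adjugate):

    from sympy import Matrix

    A = Matrix([[3, -2],
                [-5, 4]])
    inverse_by_formula = A.adjugate() / A.det()   # (1/|A|) * Adj A
    print(inverse_by_formula)                     # Matrix([[2, 1], [5/2, 3/2]])
    print(inverse_by_formula == A.inv())          # True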
Determinants as Area or Volume

Determinants can be interpreted geometrically as areas and volumes. This might make
intuitive sense if we observe that |det [u v]| is the area of the parallelogram determined
by the vectors u and v. We are used to working with column vectors. In this section,
however, when we use vectors to form a matrix we will regard them as row vectors.

Lecture 20

Vector Spaces and Subspaces

Vector Spaces and Subspaces

A subspace is a vector space that is contained within another vector space. So every
subspace is a vector space in its own right, but it is also defined relative to some other
(larger) vector space. We will discover shortly that we are already familiar with a wide
variety of subspaces from previous sections.

S: Subspace

Suppose that V and W are two vector spaces that have identical definitions of vector
addition and scalar multiplication, and that W is a subset of V, W ⊆ V. Then W is
a subspace of V.

Suppose that V is a vector space and W is a subset of V, W ⊆ V. Endow W with the same
operations as V. Then W is a subspace if and only if three conditions are met:

1. W is nonempty, W ≠ ∅.
2. If x ∈ W and y ∈ W, then x + y ∈ W.
3. If α ∈ C and x ∈ W, then αx ∈ W.

So just three conditions, plus being a subset of a known vector space, gets us all ten
properties. Fabulous! This theorem can be paraphrased by saying that a subspace is “a
nonempty subset (of a vector space) that is closed under vector addition and scalar
multiplication.”

Axioms of Vector Space

1. Closure Property For any two vectors u & v ∈V, implies u + v ∈V


2. Commutative Property For any two vectors u & v ∈V, implies u + v = v + u
3. Associative Property For any three vectors u, v, w ∈V, u + (v + w) = (u + v) + w
4. Additive Identity For any vector u ∈V, there exists a zero vector 0 such that 0 + u =
u+0=u
5. Additive Inverse For each vector u ∈V, there exists a vector –u in V such that -u +
u = 0 = u + (-u)
6. Scalar Multiplication For any scalar k and a vector u ∈V implies k u ∈V
7. Distributive Law For any scalar k if u & v ∈V, then k (u + v) = k u + k v
8. For scalars m, n and for any vector u ∈V, (m + n) u = m u + n u
9. For scalars m, n and for any vector u ∈V, m (n u) = (m n) u = n (m u)
10. For any vector u ∈V, 1u = u where 1 is the multiplicative identity of real numbers.

Lecture 21

Null Spaces, Column Spaces, and Linear Transformations

Null Spaces

"Null spaces" refer to a concept in linear algebra. In the context of a matrix, the null
space, also known as the kernel, represents the set of all vectors that, when multiplied by
the matrix, yield the zero vector.

To put it simply, the null space of a matrix consists of all the possible solutions to the
equation Ax = 0, where A is the matrix and x is a vector. These solutions form a subspace
of the vector space in which the matrix operates.

The null space is important in various applications, such as solving systems of linear
equations, understanding the rank of a matrix, and finding a basis for the solution space.
It helps us explore the linear dependencies and determine the solutions to homogeneous
systems of equations.
Column Spaces

To put it simply, if you have a matrix A, the column space is the set of all possible linear
combinations of the columns of A. It represents the entire range of values that can be
obtained by multiplying the matrix with a vector.

The column space is useful in understanding the properties and behavior of a matrix. It
helps determine the dimension of the range of a matrix, which is equivalent to the number
of linearly independent columns. The column space also plays a crucial role in
applications such as solving systems of linear equations, finding bases for vector spaces,
and performing matrix transformations.
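
Both spaces are easy to compute symbolically. A minimal Python sketch with sympy on a
hypothetical matrix whose columns are dependent:

    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6]])

    print(A.nullspace())     # basis for {x : Ax = 0}
    print(A.columnspace())   # basis for all linear combinations of the columns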

Linear Transformations

Linear map or linear function is a function between two vector spaces that preserves
addition and scalar multiplication.

More precisely, a function T: V → W is considered a linear transformation if it satisfies
the following two properties for any vectors u and v in the vector space V and any scalar
c:

1. T(u + v) = T(u) + T(v) (Preservation of addition)
2. T(cu) = cT(u) (Preservation of scalar multiplication)

In simpler terms, a linear transformation takes vectors as inputs and produces vectors as
outputs, while preserving vector addition and scalar multiplication.

Linear transformations have several important properties and applications. They are used
to study the relationship between vector spaces, analyze systems of linear equations, and
solve problems in various fields such as physics, engineering, and computer science.
They can be represented by matrices, and their properties can be analyzed through
concepts like null spaces, column spaces, rank, and eigenvectors.

Lecture 22

Linearly Independent Sets; Bases

Linearly Independent Sets; Bases

Let V be an arbitrary nonempty set of objects on which two operations are defined,
addition and multiplication by scalars. If the following axioms are satisfied by all objects
u, v, w in V and all scalars l and m, then we call V a vector space.

Axioms of Vector Space

Subspace
A subset W of a vector space V is called a subspace of V if W itself is a vector space
under the addition and scalar multiplication defined on V.
Linearly independent, linearly independent set, linearly dependent, linearly dependent set

Given a set of vectors {v1, v2, …, vn}, consider the vector equation
c1v1 + c2v2 + ··· + cnvn = 0. If the trivial solution is the only solution to this
equation, then the vectors in the set are called linearly independent and the set is
called a linearly independent set. If there is another solution, then the vectors in the
set are called linearly dependent and the set is called a linearly dependent set.

Useful results

• A set containing the zero vector is linearly dependent.
• A set of two vectors is linearly dependent if and only if one is a multiple of the
other.
• A set containing one nonzero vector is linearly independent; i.e., the set {v1}
containing one nonzero vector is linearly independent when v1 ≠ 0.
• A set of two vectors is linearly independent if and only if neither of the vectors is a
multiple of the other.

The Spanning Set Theorem

As we will see, a basis is an “efficient” spanning set that contains no unnecessary
vectors. In fact, a basis can be constructed from a spanning set by discarding unneeded
vectors.

The Spanning Set Theorem states that for a vector space V, if a set of vectors S can
generate or "span" the entire vector space V, then any vector in V can be expressed as a
linear combination of the vectors in S.

In other words, if S = {v1, v2, v3, ..., vn} is a set of vectors in V, and every vector in V
can be written as a linear combination of v1, v2, v3, ..., vn, then S is said to span V.

The Spanning Set Theorem is a fundamental concept in linear algebra, as it helps us
understand the relationship between vector spaces and their spanning sets. It allows us
to determine whether a set of vectors can generate the entire vector space or whether it
is insufficient.

Additionally, the Spanning Set Theorem is closely related to the concept of linear
independence. If a set of vectors spans a vector space and is also linearly independent,
it forms a basis for that vector space. A basis is a set of vectors that can generate the
entire vector space and has the property that no vector in it can be expressed as a
linear combination of the other vectors in the set.

Two Views of a Basis

When the Spanning Set Theorem is used, the deletion of vectors from a spanning set must
stop when the set becomes linearly independent.
