Mth501 Midterm Short Notes 1 To 18
Lec 1 to18
BY vuonlinehelp.blogspot.com
https://vuonlinehelp.blogspot.com/
What is Algebra?
Algebraic Term
The basic unit of an algebraic expression is a term. In general, a term is either a single
number or a product of a number and one or more variables.
Algebraic Expressions
An algebraic expression is a combination of terms connected by the operations of
addition and subtraction. For example, 3x + 2y − 5 is an algebraic expression with three
terms.
One of the most important problems in mathematics is that of solving systems of linear
equations. Linear algebra is a branch of mathematics that deals with linear equations and
their representations in the vector space using matrices. In other words, linear algebra is
the study of linear functions and vectors. It is one of the most central topics of
mathematics. Most modern geometrical concepts are based on linear algebra.
Elementary linear algebra introduces students to the basics of linear algebra. This
includes simple matrix operations, various computations that can be done on a system of
linear equations, and certain aspects of vectors. Some important terms associated with
elementary linear algebra are given below:
Scalars
A scalar is a quantity that only has magnitude and not direction. It is an element that is
used to define a vector space. In linear algebra, scalars are usually real numbers.
Vectors
A vector is an element in a vector space. It is a quantity that can describe both the
direction and magnitude of an element.
Vector Space
The vector space consists of vectors that may be added together and multiplied by
scalars.
Matrix
A matrix is a rectangular array wherein the information is organized in the form of rows
and columns. Most linear algebra properties can be expressed in terms of a matrix.
Matrix Operations
These are simple arithmetic operations such as addition, subtraction,
and multiplication that can be conducted on matrices.
Linear algebra makes it possible to work with large arrays of data. It has many
applications in many diverse fields, such as
Computer Graphics,
Electronics,
Chemistry,
Biology,
Differential Equations,
Economics,
Business,
Psychology,
Engineering,
Analytic Geometry,
Chaos Theory,
Cryptography,
Fractal Geometry,
Game Theory,
Graph Theory,
Linear Programming,
Operations Research
Linear Algebra is then useful for solving problems in such applications in topics such as
Physics, Fluid Dynamics, Signal Processing and, more generally, Numerical Analysis.
Principal Linear Algebra
The lectures are based on the material taken from the books mentioned below.
Lecture 2
Introduction to Matrices
Matrix
Elements
Matrices are denoted by capital letters A, B, …, Y, Z. The numbers or functions are called
elements of the matrix. The elements of a matrix are denoted by small letters a, b, …, y, z.
Order of a Matrix
The size (or dimension) of a matrix is called the order of the matrix. The order of a matrix
is based on the number of rows and the number of columns: a matrix with m rows and n
columns has order m × n.
Square Matrix
A matrix with an equal number of rows and columns is called a square matrix.
Equality of matrices
Two matrices A and B are said to be equal if both matrices are of the same order, i.e., they
have the same number of rows and columns (A m × n = B m × n), and their corresponding
elements are equal.
Multiple of matrix
A multiple of a matrix A by a nonzero constant k is defined as multiplication of the matrix
by a scalar, mathematically: if A = [aij] m × n is a matrix and k is a scalar, then kA is
another matrix obtained by multiplying each element of A by the scalar k.
Addition of Matrices
Only matrices of the same order may be added, by adding corresponding elements. If A =
[aij] and B = [bij] are two m × n matrices, then A + B = [aij + bij]. Obviously, the order of
the matrix A + B is m × n.
Difference of Matrices
The difference of two matrices A and B of the same order m × n is defined to be the matrix
A − B = A + (−B).
Multiplication of Matrices
We can multiply two matrices if and only if the number of columns in the first matrix
equals the number of rows in the second matrix. Otherwise, the product of the two
matrices is not defined.
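The row-times-column rule and the dimension requirement can be sketched in plain Python (a minimal illustration, not taken from the lectures; the function name `mat_mul` is our own):

```python
def mat_mul(A, B):
    """Multiply matrices given as lists of rows. The product is defined
    only when the number of columns of A equals the number of rows of B."""
    if len(A[0]) != len(B):
        raise ValueError("columns of A must equal rows of B")
    # Entry (i, j) of AB is the sum of products of row i of A with column j of B.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

print(mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Multiplying a 2 × 3 matrix by anything other than a matrix with 3 rows raises the error, mirroring the rule above.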
A matrix all of whose entries are zero is called a zero matrix or null matrix, and it is
denoted by O.
Associative Law
A(BC) = (AB)C
Distributive Law
A(B + C) = AB + AC
Furthermore, if the product (A + B)C is defined,
then (A + B)C = AC + BC
Determinant of a Matrix
Associated with every square matrix A of constants there is a number, called the
determinant of the matrix, denoted by det(A) or |A|.
Multiplicative Inverse
The multiplicative inverse of a square matrix A is a matrix A-1 such that A A-1 = A-1 A = I,
where I is the identity matrix.
Singular matrix
A square matrix whose determinant is 0 is called a singular matrix.
Nonsingular matrix
A square matrix that is not singular, i.e., one that has a matrix inverse, is called a
nonsingular matrix. Nonsingular matrices are sometimes also called regular matrices. A
square matrix is nonsingular iff its determinant is nonzero.
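For a 2 × 2 matrix the determinant test for singularity is a one-line computation; here is a small Python sketch (our own illustration, not from the lectures):

```python
def det2(M):
    """Determinant of a 2x2 matrix [[a, b], [c, d]]: ad - bc."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def is_singular(M):
    """A square matrix is singular exactly when its determinant is 0."""
    return det2(M) == 0

print(is_singular([[2, 4], [1, 2]]))  # True: the second row is a multiple of the first
print(is_singular([[1, 2], [3, 4]]))  # False: the determinant is -2
```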
Minor of Matrix
The minor of an element of a matrix is the determinant of the submatrix remaining after
excluding the row and the column containing that particular element. The new matrix
formed with the minors of each element of the given matrix is called the minor of the
matrix.
Lecture 3
Linear Equations
Linear equations are equations in which the variables are raised to the power of 1 and are
not multiplied or divided by each other. They represent straight lines on a graph and have
the general form:
y = mx + b
Here, y and x are variables, m represents the slope of the line, and b is the y-intercept,
which is the point where the line intersects the y-axis.
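The slope m and intercept b in the form y = mx + b can be recovered from any two points on the line; a small Python sketch (the function name `line_through` is our own, and it assumes the line is not vertical):

```python
def line_through(p, q):
    """Slope m and y-intercept b of the line y = m*x + b through points p and q.
    Assumes p and q have different x-coordinates (the line is not vertical)."""
    m = (q[1] - p[1]) / (q[0] - p[0])  # rise over run
    b = p[1] - m * p[0]                # solve y = m*x + b at point p
    return m, b

print(line_through((0, 1), (2, 5)))  # (2.0, 1.0): slope 2, y-intercept 1
```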
A finite set of linear equations is called a system of linear equations or linear system. The
variables in a linear system are called the unknowns.
A*X=B
Here, A is the coefficient matrix, X is the variable matrix, and B is the constant matrix.
A solution of a linear system is an assignment of values to the unknowns that makes every
equation in the system a true statement. The set of all such solutions of a linear
system is called its solution set.
A linear system is said to be consistent if it has at least one solution and it is called
inconsistent if it has no solutions.
Parametric Representation
Matrix Notation
The matrix containing only the coefficients of the variables is called the
coefficient matrix (or matrix of coefficients) of the system.
The matrix obtained by appending the column of constants to the coefficient matrix is
called the augmented matrix of the system and is denoted by [A b].
1. (Replacement) Replace one row by the sum of itself and a multiple of
another row.
2. (Interchange) Interchange two rows.
3. (Scaling) Multiply all entries in a row by a nonzero constant.
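The three elementary row operations above can be sketched directly on a matrix stored as a list of rows (a plain-Python illustration, not from the lectures; the function names are our own):

```python
def replace_op(M, i, j, k):
    """Replacement: Ri <- Ri + k * Rj."""
    M[i] = [a + k * b for a, b in zip(M[i], M[j])]

def interchange_op(M, i, j):
    """Interchange: swap rows i and j."""
    M[i], M[j] = M[j], M[i]

def scale_op(M, i, c):
    """Scaling: multiply every entry of row i by a nonzero constant c."""
    M[i] = [c * a for a in M[i]]

M = [[1, 2], [3, 4]]
replace_op(M, 1, 0, -3)  # R2 <- R2 - 3*R1 creates a zero below the pivot
print(M)                 # [[1, 2], [0, -2]]
```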
Row equivalent matrices
Lecture 4
A rectangular matrix is in echelon form (or row echelon form) if it has the following
three properties:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the
row above it.
3. All entries in a column below a leading entry are zeros.
Pivot Positions
Pivot column
Pivot element
A pivot is a nonzero number in a pivot position that is used as needed to create zeros via
row operations. The Row Reduction Algorithm consists of four steps, and it produces a
matrix in echelon form. A fifth step produces a matrix in reduced echelon form.
Example 3
Apply elementary row operations to transform the following matrix first into echelon
form and then into reduced echelon form.
Solution
STEP 1
Begin with the leftmost nonzero column. This is a pivot column. The pivot position is at
the top
Lecture 5
Vector Equations
Vector equations are used to represent the equation of a line or a plane with the help of
the variables x, y, z. The vector equation defines the placement of the line or the plane in
the three-dimensional framework. The vector equation of a line is r = a + λb, and the
vector equation of a plane is r · n = d.
Column Vector
“A matrix with only one column is called column vector or simply a vector”.
Vectors in R2
If R is the set of all real numbers, then the set of all vectors with two entries is denoted by
R2 = R × R. The entries in vectors are assumed to be the elements of a set, called a Field. It
is denoted by F.
Equality of vectors in R2
Two vectors u and v in R2 are equal if and only if their corresponding entries are equal.
Addition of Vectors
Given two vectors u and v in R2 , their sum is the vector u + v obtained by adding
corresponding entries of the vectors u and v, which is again a vector in R2.
A vector relates two given points. It is a mathematical quantity having both the
Magnitude and the direction.
Multiplication of Vectors
Multiplication of vectors can be of two types:
Given a vector u and a real number c, the scalar multiple of u by c is the vector cu obtained
by multiplying each entry in u by c.
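Entrywise addition and scalar multiplication of vectors can be sketched in a few lines of Python (our own illustration, not from the lectures):

```python
def vec_add(u, v):
    """Sum u + v: add corresponding entries of vectors of the same length."""
    return [a + b for a, b in zip(u, v)]

def scalar_mul(c, u):
    """Scalar multiple cu: multiply each entry of u by the scalar c."""
    return [c * a for a in u]

print(vec_add([1, 2], [3, -1]))  # [4, 1]
print(scalar_mul(3, [1, -2]))    # [3, -6]
```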
Geometric Descriptions of R2
Consider a rectangular coordinate system in the plane. Because each point in the plane is
determined by an ordered pair of numbers, we can identify a geometric point (a, b) with
the column vector whose entries are a and b. So we may regard R2 as the set of all points
in the plane.
Vectors in R3
Vectors in R3 are 3 × 1 column matrices with three entries. They are represented
geometrically by points in a three-dimensional coordinate space, with arrows from the
origin sometimes included for visual clarity.
Vectors in Rn
If n is a positive integer, Rn (read “r-n”) denotes the collection of all lists (or ordered n-
tuples) of n real numbers, usually written as n×1 column matrices, such as
The vector all of whose entries are zero is called the zero vector and is denoted by O. (The
number of entries in O will be clear from the context.)
Algebraic Properties of Rn
Linear Combinations
Let V be a vector space over the field K. As usual, we call elements of V vectors and call
elements of K scalars. If v1, ..., vn are vectors and a1, ..., an are scalars, then the linear
combination of those vectors with those scalars as coefficients is a1v1 + a2v2 + ... + anvn.
Spanning Set
A subset S of a vector space V is called a spanning set for V if Span(S) = V.
A Geometric Description of Span {u, v}
Take u and v in R3, with v not a multiple of u. Then Span {u, v} is the plane containing u,
v, and the origin 0, that is, the plane in R3 spanned by u and v.
Vector equations of a line can be computed with the help of any two points on the line, or
with the help of a point on the line and a parallel vector. The two methods of forming a
vector form of the equation of a line are as follows.
This is a vector equation of the line through x0 and parallel to v. In the special case
where x0 = 0, the line passes through the origin and the equation simplifies to x = tv
(−∞ < t < +∞).
Matrix Equations
An equation of the form Ax = b is called a matrix equation, to distinguish it from a vector
equation.
When we say “A is an m×n matrix,” we mean that A has m rows and n columns.
Lecture 7
Solution Set
A solution of a linear system is an assignment of values to the variables x1, x2,... , xn such
that each of the equations in the linear system is satisfied. The set of all possible solutions
is called the Solution Set.
Trivial Solution
A homogeneous system Ax = 0 always has at least one solution, namely, x = 0 (the zero
vector in Rn ). This zero solution is usually called the trivial solution of the homogeneous
system.
Nontrivial solution
A solution of a linear system other than the trivial one is called a nontrivial solution, i.e., a
solution of a homogeneous equation Ax = 0 with x ≠ 0 is called a nontrivial solution,
that is, a nonzero vector x that satisfies Ax = 0.
The homogeneous equation Ax = 0 has a nontrivial solution if and only if the equation
has at least one free variable.
Geometric Interpretation
Since neither u nor v is a scalar multiple of the other, the two vectors are not parallel.
Steps of Writing a Solution Set (of a Consistent System) in a Parametric Vector Form
Lecture 8
Linear Independence
An indexed set of vectors {v1, v2, ..., vp} in Rn is said to be linearly independent if the
vector equation x1v1 + x2v2 + ... + xpvp = 0 has only the trivial solution. An indexed set of
vectors {v1, v2, ..., vp} is said to be linearly dependent if there exist weights c1, ..., cp, not
all zero, such that c1v1 + c2v2 + ... + cpvp = 0.
The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has
only the trivial solution.
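For two vectors in R2, the columns-of-A criterion reduces to a determinant check: the 2 × 2 matrix [u v] has only the trivial solution to Ax = 0 exactly when its determinant is nonzero. A small Python sketch (our own illustration):

```python
def independent_in_r2(u, v):
    """Two vectors in R2 are linearly independent iff the matrix with
    columns u and v has nonzero determinant, i.e. Ax = 0 has only the
    trivial solution."""
    return u[0] * v[1] - u[1] * v[0] != 0

print(independent_in_r2([1, 0], [0, 1]))  # True: the standard basis vectors
print(independent_in_r2([1, 2], [2, 4]))  # False: v = 2u, so the set is dependent
```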
Lecture 9
Linear Transformations
Matrix Equation
Solutions of Ax = b consist of those vectors x in the domain that are transformed into
the vector b in the range. The matrix equation Ax = b is an important tool in linear algebra.
The set R n is called the domain of T, and Rm is called the co-domain of T. For x in Rn the
set of all images T(x) is called the range of T.
Shear transformation
Properties
Theorem
Let T : Rn → Rm be a linear transformation. Then there exists a unique matrix A such that
T(x) = Ax for all x in Rn.
In fact, A is the m × n matrix whose jth column is the vector T(ej), where ej is the jth
column of the identity matrix in Rn.
The matrix A above is called the standard matrix for the linear transformation T. We
know that every linear transformation from Rn to Rm is a matrix transformation and vice
versa.
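The column-by-column construction of the standard matrix can be carried out mechanically: feed each standard basis vector through T and collect the images as columns. A plain-Python sketch (our own illustration; the names `standard_matrix` and `rot90` are assumptions):

```python
def standard_matrix(T, n):
    """Standard matrix of a linear transformation T: R^n -> R^m.
    Column j is T(e_j), where e_j is the j-th column of the identity."""
    cols = [T([1 if i == j else 0 for i in range(n)]) for j in range(n)]
    m = len(cols[0])
    # Re-pack the list of columns as a list of rows.
    return [[cols[j][i] for j in range(n)] for i in range(m)]

# Rotation of the plane by 90 degrees: T(x, y) = (-y, x).
rot90 = lambda v: [-v[1], v[0]]
print(standard_matrix(rot90, 2))  # [[0, -1], [1, 0]]
```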
Existence
Uniqueness
Theorem
Proof:
Since T is linear, T(0) = 0. If T is one-to-one, then the equation T(x) = 0 has at most one
solution and hence only the trivial solution. If T is not one-to-one, then there is a b that is
the image of at least two different vectors in Rn (say, u and v). That is, T(u) = b and T(v)
= b.
The vector u − v is not zero, since u ≠ v. Hence the equation T(x) = 0 has more than one
solution. So either the two conditions in the theorem are both true or they are both false.
The kernel (or null space) of a linear transformation is the subset of the domain that is
transformed into the zero vector.
One-One Linear Transformation
Lecture 11
Matrix Operations
Matrix Operations
Matrix operations mainly involve three algebraic operations, which are the addition of
matrices, subtraction of matrices, and multiplication of matrices. Matrix is a rectangular
array of numbers or expressions arranged in rows and columns. Important applications of
matrices can be found in Mathematics.
The element aij appearing in the ith row and jth column of the matrix is said to be the
ijth element. The matrix is also denoted by the symbol (aij)m×n.
Horizontal lines of a matrix are called rows. Vertical lines of a matrix are called columns.
Diagonal Matrix
A square matrix in which every element except the principal diagonal elements is zero is
called a Diagonal Matrix. A square matrix D = [dij]n x n will be called a diagonal matrix if
dij = 0, whenever i is not equal to j.
An m × n matrix whose entries are all zero is a null or zero matrix and is always
written as O. A null matrix may be of any order.
Equal Matrices
Two matrices are said to be equal if they have the same size (i.e., the same number of
rows and columns) and same corresponding entries.
Matrix Multiplication
Matrix multiplication, also known as the matrix product, is the multiplication of two
matrices that produces a single matrix. It is a type of binary operation. If A and B are two
matrices, then the product of the two matrices A and B is denoted by X = AB.
Example:
Row-Column Rule for Computing AB
The number of rows of the resulting matrix equals the number of rows of the first matrix,
and the number of columns of the resulting matrix equals the number of columns of the
second matrix.
Example:
Properties of Matrix Multiplication
Transpose of a Matrix
The transpose of a matrix is found by interchanging its rows and columns. The transpose
of a matrix is denoted by the letter “T” in the superscript of the given matrix. For example,
if A is the given matrix, then the transpose of the matrix is represented by A′ or AT.
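The row-column interchange can be written as one comprehension (a plain-Python sketch, our own illustration):

```python
def transpose(A):
    """Transpose of A: entry (i, j) of A becomes entry (j, i) of the result,
    so rows become columns and columns become rows."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

print(transpose([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```

Note the transpose of a 2 × 3 matrix is a 3 × 2 matrix, as expected.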
Lecture 12
Invertible Matrix
If the inverse of a square matrix A exists, it is called an invertible matrix: if there is a
matrix C such that CA = AC = I, we say that A is invertible and we call C an inverse of A.
Elementary Matrices
There are three types of elementary row operations that can be performed on a matrix:
Elementary matrix
The operations on the right side of this table are called the inverse operations of the
corresponding operations on the left side.
Example
Lecture 13
Theorem
Let A be an invertible n × n matrix. For any b ∈ Rn, the equation Ax = b …..(1) has the
unique solution x = A-1 b.
Proof
Since A is invertible, let b ∈ Rn be any vector. Then the vector A-1 b is a solution of
eq. (1), since A(A-1 b) = (A A-1)b = I b = b.
Uniqueness
For uniqueness, we assume that there is another solution u. Since u is a solution of eq.
(1), it must satisfy u = A-1 b, which means x = A-1 b = u. This shows that u = x.
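For a 2 × 2 system the solution x = A-1 b can be computed directly from the closed-form inverse (a plain-Python sketch, our own illustration; the names `inverse2` and `solve2` are assumptions):

```python
def inverse2(A):
    """Inverse of a 2x2 matrix via the ad - bc formula (fails if singular)."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    if d == 0:
        raise ValueError("matrix is singular")
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

def solve2(A, b):
    """Unique solution of Ax = b for invertible 2x2 A: x = A^{-1} b."""
    Ai = inverse2(A)
    return [Ai[0][0] * b[0] + Ai[0][1] * b[1],
            Ai[1][0] * b[0] + Ai[1][1] * b[1]]

# System 2x + y = 3, x + y = 2 has the unique solution x = 1, y = 1.
print(solve2([[2, 1], [1, 1]], [3, 2]))  # [1.0, 1.0]
```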
Theorem
Let A and B be square matrices such that AB = I. Then A and B are invertible, with B
= A-1 and A = B-1.
Theorem (Invertible Matrix Theorem)
Let A be a square n × n matrix. Then the following statements are equivalent. (Means if
any one holds then all are true).
Invertible Linear Transformations
Lecture 14
Partitioned Matrices
Partitioned Matrices
A matrix can be partitioned (subdivided) into submatrices (also called blocks) in various
ways by inserting lines between selected rows and columns.
Example
Partition (a)
Partition (b)
The ci in the partitioned matrix are sometimes called the column matrices of A.
Partition (c)
The ri in the partitioned matrix are sometimes called the row matrices of A.
Toeplitz matrix
A matrix in which each descending diagonal from left to right is constant is called a
Toeplitz matrix or diagonal-constant matrix.
A block matrix in which blocks are repeated down the diagonals of
the matrix is called a block Toeplitz matrix.
Example
Lecture 15 Matrix Factorizations
Matrix Factorization
A factorization of a matrix as a product of two or more matrices is called Matrix
Factorization.
LU Factorization or LU-decomposition
LU factorization is another name for LU decomposition; both titles indicate that a
given matrix can be expressed as the product of two matrices: an upper triangular
matrix and a lower triangular matrix. The product of these two matrices gives back the
given matrix.
If a square matrix A can be reduced to row echelon form with no row interchanges, then
A has an LU-decomposition.
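A Doolittle-style LU factorization (unit diagonal on L, no row interchanges) can be sketched in plain Python; this is our own illustration of the idea, not code from the lectures, and it assumes no pivoting is needed:

```python
def lu_decompose(A):
    """Doolittle LU factorization, assuming no row interchanges are needed.
    Returns unit lower triangular L and upper triangular U with A = LU."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        # Row i of U, then the multipliers below the diagonal in column i of L.
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```

The multiplier 1.5 stored in L is exactly the factor used to eliminate the entry below the first pivot.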
Gaussian elimination
After we have obtained our triangular matrix, there are two different approaches we can
use to solve a system of linear equations:
1. Forward substitution
2. Back substitution
Forward substitution
The procedure of solving a system of linear algebraic equations (SLAE) with a lower
triangular coefficient matrix is known as forward substitution. Solving an SLAE with a
triangular matrix form is a variant of the generic substitution approach.
Equation
Lx=y
Forward substitution works on a lower triangular system and solves the equations
from top to bottom.
As we can see at the top, only x exists, and other values are zero, so it is easy to
find a value of x and use it for the next step.
In the second step, we find the value of y by using the value of x, which came from
the first step.
Similarly, in the third step, we use x and y values and find the value of z.
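The top-to-bottom steps described above can be sketched in a short loop (a plain-Python illustration, our own, not from the lectures):

```python
def forward_sub(L, y):
    """Solve Lx = y for lower triangular L, working from top to bottom:
    each equation introduces exactly one new unknown, so the values found
    in earlier steps are substituted into later equations."""
    n = len(L)
    x = [0.0] * n
    for i in range(n):
        x[i] = (y[i] - sum(L[i][j] * x[j] for j in range(i))) / L[i][i]
    return x

# System x = 3, 2x + y = 7: the first equation gives x, the second then gives y.
print(forward_sub([[1.0, 0.0], [2.0, 1.0]], [3.0, 7.0]))  # [3.0, 1.0]
```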
Back substitution
The procedure of solving an SLAE with an upper triangular coefficient matrix is known
as back substitution.
Equation
Ux=y
Back substitution works on an upper triangular system and solves the equations
from bottom to top.
As we can see at the bottom, only z exists and the other values are zero, so it is easy to
find the value of z and use it for the next step.
In the second step, we find the value of y using the value of z, which came from
the previous step.
Similarly, in the third step, we use y and z to find the value of x.
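The bottom-to-top steps mirror forward substitution with the loop reversed (again a plain-Python illustration, our own):

```python
def back_sub(U, y):
    """Solve Ux = y for upper triangular U, working from bottom to top:
    the last equation yields the last unknown, which is then substituted
    into the equations above it."""
    n = len(U)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# System x + y = 3, y = 1: the bottom equation gives y, the top then gives x.
print(back_sub([[1.0, 1.0], [0.0, 1.0]], [3.0, 1.0]))  # [2.0, 1.0]
```

Together, forward and back substitution solve Ax = b once A = LU is known: first solve Ly = b, then Ux = y.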
Lecture 16
In numerical linear algebra, the Gauss–Seidel method, also known as the Liebmann
method or the method of successive displacement, is an iterative method used to solve
a system of linear equations. It is named after the German mathematicians Carl Friedrich
Gauss and Philipp Ludwig von Seidel, and is similar to the Jacobi method.
The Gauss–Seidel method is an iterative technique for solving a square system of n linear
equations.
The difference between the Gauss–Seidel and Jacobi methods is that the Jacobi method
uses only the values obtained from the previous step, while the Gauss–Seidel method
always applies the latest updated values during the iterative procedure.
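The "latest updated values" behaviour is visible in code: within one sweep, later unknowns are computed using values already updated in that same sweep. A plain-Python sketch (our own illustration; it assumes the system is diagonally dominant so the iteration converges):

```python
def gauss_seidel(A, b, sweeps=25):
    """Gauss-Seidel iteration for Ax = b. Each sweep updates the unknowns
    in turn, reusing values already updated in the same sweep (unlike
    Jacobi, which uses only the previous sweep's values)."""
    n = len(A)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant system 4x + y = 1, 2x + 3y = 2; exact solution (0.1, 0.6).
x = gauss_seidel([[4.0, 1.0], [2.0, 3.0]], [1.0, 2.0])
print(round(x[0], 6), round(x[1], 6))  # 0.1 0.6
```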
Lecture 17
Introduction to Determinant
In algebra, the determinant is a special number associated with any square matrix. As we
have studied in earlier classes, the determinant of a 2 × 2 matrix is defined as the
product of the entries on the main diagonal minus the product of the entries off the main
diagonal. The determinant of a matrix A is denoted by det(A) or |A|.
Example
Cofactor of an element
A cofactor is a number obtained by deleting the row and the column of a particular
element and taking the determinant of the remaining square submatrix. The cofactor is
preceded by a negative or positive sign based on the element's position: Cij = (−1)i+j Mij,
where Mij is the corresponding minor.
Determinants of the triangular matrices are also easy to evaluate regardless of size.
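Cofactor expansion along the first row gives a recursive determinant, and it also confirms that a triangular determinant is just the product of the diagonal entries. A plain-Python sketch (our own illustration; the function names are assumptions):

```python
def minor_matrix(M, i, j):
    """Submatrix obtained by deleting row i and column j."""
    return [row[:j] + row[j + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det(minor_matrix(M, 0, j))
               for j in range(len(M)))

def cofactor(M, i, j):
    """C_ij = (-1)^(i+j) times the minor of entry (i, j)."""
    return (-1) ** (i + j) * det(minor_matrix(M, i, j))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 0], [5, 3, 0], [1, 4, 6]]))   # 36: triangular, 2 * 3 * 6
```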
Example
Lecture 18
Properties of Determinants
Some of them have already been discussed and you will be familiar with these. These
properties become helpful, while computing the values of the determinants. The secret of
determinants lies in how they change when row or column operations are performed.
(Row Operations):
Example
An Algorithm to evaluate the determinant
Algorithm means a sequence of a finite number of steps to get a desired result. The word
Algorithm comes from the name of the famous Muslim mathematician Al-Khwarizmi,
from whose work the word algebra is also derived.
Lecture 19
Cramer’s Rule
The inverse of matrix is a matrix, which on multiplication with the given matrix gives
the multiplicative identity. For a square matrix A, its inverse is A-1, and A · A-1 = A-1· A
= I, where I is the identity matrix. The matrix whose determinant is non-zero and for
which the inverse matrix can be calculated is called an invertible matrix.
In the case of real numbers, the inverse of any real number a was the number a-1, such
that a times a-1 equals 1. We knew that for a real number, the inverse of the number was
the reciprocal of the number, as long as the number wasn't zero. The inverse of a square
matrix A, denoted by A-1, is the matrix so that the product of A and A-1 is the identity
matrix. The identity matrix that results will be the same size as matrix A.
The formula to find the inverse of a matrix is: A-1 = 1/|A| · Adj A, where
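For a 2 × 2 system, Cramer's rule expresses each unknown as a ratio of determinants: x_i = det(A_i) / det(A), where A_i is A with column i replaced by b. A plain-Python sketch (our own illustration; it assumes det(A) ≠ 0):

```python
def cramer2(A, b):
    """Cramer's rule for a 2x2 system Ax = b: each unknown is the ratio
    det(A_i) / det(A), where A_i is A with column i replaced by b."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det(A), must be nonzero
    dx = b[0] * A[1][1] - A[0][1] * b[1]        # det with column 1 replaced by b
    dy = A[0][0] * b[1] - b[0] * A[1][0]        # det with column 2 replaced by b
    return [dx / d, dy / d]

# Same system as before: 2x + y = 3, x + y = 2.
print(cramer2([[2, 1], [1, 1]], [3, 2]))  # [1.0, 1.0]
```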
Determinants can be interpreted geometrically as areas and volumes. This might make
intuitive sense if we observe that the absolute value of the determinant of a 2 × 2 matrix is
the area of the parallelogram determined by its two rows. We are used to working with
column vectors. In this section, however, when we use vectors to form a matrix we will
regard them as row vectors.
Lecture 20
A subspace is a vector space that is contained within another vector space. So every
subspace is a vector space in its own right, but it is also defined relative to some other
(larger) vector space. We will discover shortly that we are already familiar with a wide
variety of subspaces from previous sections.
Subspace
Suppose that V and W are two vector spaces that have identical definitions of vector
addition and scalar multiplication, and that W is a subset of V, W ⊆ V. Then W is
a subspace of V.
Suppose that V is a vector space and W is a subset of V, W ⊆ V. Endow W with the same
operations as V. Then W is a subspace if and only if three conditions are met:
1. W is nonempty (it contains the zero vector).
2. W is closed under vector addition.
3. W is closed under scalar multiplication.
So just three conditions, plus being a subset of a known vector space, gets us all ten
properties. Fabulous! This theorem can be paraphrased by saying that a subspace is “a
nonempty subset (of a vector space) that is closed under vector addition and scalar
multiplication.”
Lecture 21
Null Spaces
"Null spaces" refer to a concept in linear algebra. In the context of a matrix, the null
space, also known as the kernel, represents the set of all vectors that, when multiplied by
the matrix, yield the zero vector.
To put it simply, the null space of a matrix consists of all the possible solutions to the
equation Ax = 0, where A is the matrix and x is a vector. These solutions form a subspace
of the vector space in which the matrix operates.
The null space is important in various applications, such as solving systems of linear
equations, understanding the rank of a matrix, and finding a basis for the solution space.
It helps us explore the linear dependencies and determine the solutions to homogeneous
systems of equations.
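Membership in the null space is easy to check: multiply the matrix by the candidate vector and see whether the result is the zero vector. A plain-Python sketch (our own illustration):

```python
def mat_vec(A, x):
    """Product Ax: each entry is a row of A dotted with the vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# For A = [[1, 2], [2, 4]] the vector (-2, 1) satisfies Ax = 0,
# so it lies in the null space (kernel) of A.
print(mat_vec([[1, 2], [2, 4]], [-2, 1]))  # [0, 0]
```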
Column Spaces
To put it simply, if you have a matrix A, the column space is the set of all possible linear
combinations of the columns of A. It represents the entire range of values that can be
obtained by multiplying the matrix with a vector.
The column space is useful in understanding the properties and behavior of a matrix. It
helps determine the dimension of the range of a matrix, which is equivalent to the number
of linearly independent columns. The column space also plays a crucial role in
applications such as solving systems of linear equations, finding bases for vector spaces,
and performing matrix transformations.
Linear Transformations
Linear map or linear function is a function between two vector spaces that preserves
addition and scalar multiplication.
In simpler terms, a linear transformation takes vectors as inputs and produces vectors as
outputs, while preserving vector addition and scalar multiplication.
Linear transformations have several important properties and applications. They are used
to study the relationship between vector spaces, analyze systems of linear equations, and
solve problems in various fields such as physics, engineering, and computer science.
They can be represented by matrices, and their properties can be analyzed through
concepts like null spaces, column spaces, rank, and eigenvectors.
Lecture 22
Let V be an arbitrary nonempty set of objects on which two operations are defined,
addition and multiplication by scalars. If the following axioms are satisfied by all objects
u, v, w in V and all scalars l and m, then we call V a vector space.
Subspace
A subset W of a vector space V is called a subspace of V if W itself is a vector space
under the addition and scalar multiplication defined on V.
Linearly independent, linearly independent set, linearly dependent, linearly dependent set
If the trivial solution is the only solution to this equation then the vectors in the set are
called linearly independent and the set is called a linearly independent set. If there is
another solution then the vectors in the set are called linearly dependent and the set is
called a linearly dependent set.
Useful results
The Spanning Set Theorem As we will see, a basis is an “efficient” spanning set that
contains no unnecessary vectors. In fact, a basis can be constructed from a spanning set
by discarding unneeded vectors.
The Spanning Set Theorem states that for a vector space V, if a set of vectors S can
generate or "span" the entire vector space V, then any vector in V can be expressed as a
linear combination of the vectors in S.
In other words, if S = {v1, v2, v3, ..., vn} is a set of vectors in V, and every vector in V
can be written as a linear combination of v1, v2, v3, ..., vn, then S is said to span V.
Additionally, the Spanning Set Theorem is closely related to the concept of linear
independence. If a set of vectors spans a vector space and is also linearly independent, it
forms a basis for that vector space. A basis is a set of vectors that can generate the entire
vector space and has the property that no vector can be expressed as a linear combination
of the other vectors in the set.
Two Views of a Basis
When the Spanning Set Theorem is used, the deletion of vectors from a spanning set must
stop when the set becomes linearly independent.