20MCA203 Design & Analysis of Algorithms
The efficiency of an algorithm can be analyzed at two different stages, before and after implementation:
A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an algorithm is measured by assuming that all other factors, e.g. speed of processor, are constant and have no effect on the implementation.
A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected algorithm is implemented and executed on a target computer machine. In this analysis, actual statistics like running time and space required are collected.
Suppose X is an algorithm and N is the size of the input data. The time and space used by the algorithm X are the two main factors which determine the efficiency of X.
Time Factor − The time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
Space Factor − The space is measured by counting the maximum memory space required by the algorithm.
The complexity of an algorithm f(N) gives the running time and/or storage space needed by the algorithm with respect to N, the size of the input data.
Space Complexity
The space complexity of an algorithm represents the amount of memory space needed by the algorithm in its life cycle.
The space needed by an algorithm is equal to the sum of the following two components:
A fixed part, which is the space required to store certain data and variables (i.e. simple variables and constants, program size, etc.) that are independent of the size of the problem.
A variable part, which is the space required by variables whose size depends entirely on the size of the problem, for example recursion stack space, dynamically allocated memory, etc.
The space complexity S(P) of any algorithm P is S(P) = A + Sp(I), where A is the fixed part and Sp(I) is the variable part of the algorithm, which depends on an instance characteristic I.
Algorithm: SUM(P, Q)
Step 1 - START
Step 2 - R ← P + Q + 10
Step 3 - STOP
Here we have three variables (P, Q and R) and one constant (10). Hence S(P) = 1 + 3. The actual space also depends on the data types of the given variables and constants, and is multiplied by the size of each type accordingly.
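The same algorithm as a small Python sketch (an illustration, not part of the original notes), with the space components marked:

    def algorithm_sum(p, q):
        # P, Q and R are the three variables; 10 is the one constant
        r = p + q + 10
        return r

    # Fixed part: the space for P, Q, R and the constant 10.
    # There is no variable part: nothing grows with the size of the input.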
Time Complexity
The time complexity of an algorithm represents the amount of time required by the algorithm to run to completion. Time requirements can be defined as a numerical function t(N), where t(N) can be measured as the number of steps, provided each step consumes constant time.
For example, in the case of the addition of two N-bit integers, N steps are taken. Consequently, the total computational time is t(N) = c*N, where c is the time consumed for the addition of two bits. Here, we observe that t(N) grows linearly as the input size increases.
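A Python sketch of this bit-by-bit addition (an assumed illustration) that counts one step per bit position, so the step count equals N:

    def add_n_bit(a_bits, b_bits):
        # add two N-bit integers given as bit lists (least significant bit first)
        steps, carry, result = 0, 0, []
        for x, y in zip(a_bits, b_bits):
            s = x + y + carry        # one constant-time bit addition, cost c
            result.append(s % 2)
            carry = s // 2
            steps += 1               # one step per bit position
        result.append(carry)
        return result, steps         # steps == N, so t(N) = c * N

    bits, steps = add_n_bit([1, 0, 1, 1], [1, 1, 0, 1])   # N = 4
    print(steps)                     # prints 4: the time grows linearly with N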
There are different types of time complexities used:
Constant time – O(1)
Linear time – O(n)
Logarithmic time – O(log n)
Quadratic time – O(n^2)
and many more complex notations like Exponential time, Quasilinear time, Factorial time, etc. are used based on the type of functions defined.
Constant time – O (1)
An algorithm is said to have constant time with order O(1) when it is not dependent on the input size n. Irrespective of the input size n, the runtime will always be the same.
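For instance (a small Python sketch assumed for illustration), reading one element of a list costs the same no matter how large the list is:

    def first_element(arr):
        # a single operation regardless of len(arr): O(1)
        return arr[0]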
Linear time – O (n)
An algorithm is said to have a linear time complexity when the running time increases linearly with the length of the input. When the function involves checking all the values in the input data, such a function has time complexity of order O(n).
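A sketch of such a function in Python (an assumed illustration): a linear scan that may check every value in the input:

    def contains(arr, target):
        for value in arr:          # checks the values in the input data one by one
            if value == target:
                return True
        return False               # up to n comparisons in the worst case: O(n)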
Logarithmic time – O (log n)
An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step. This indicates that the number of operations does not grow in step with the input size; instead, the operation count grows much more slowly as the input size increases. Algorithms with logarithmic time complexity are found in binary trees and binary search functions. Binary search looks for a given value in a sorted array by splitting the array into two halves and continuing the search in only one of them. This ensures the operation is not done on every element of the data.
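A minimal Python sketch of binary search (an assumed illustration of the idea described above):

    def binary_search(arr, target):
        # arr must be sorted; the search range is halved in each step: O(log n)
        low, high = 0, len(arr) - 1
        while low <= high:
            mid = (low + high) // 2
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                low = mid + 1      # keep searching in the right split
            else:
                high = mid - 1     # keep searching in the left split
        return -1                  # target is not present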
Quadratic time – O (n^2)
An algorithm is said to have a non-linear (quadratic) time complexity when the running time increases non-linearly (n^2) with the length of the input. Generally, nested loops fall under this time complexity order: one loop takes O(n), and if the function involves a loop within a loop, then it goes for O(n)*O(n) = O(n^2) order, as in the sketch below.
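A typical nested-loop example in Python (an assumed illustration of the O(n)*O(n) pattern):

    def has_duplicate(arr):
        n = len(arr)
        for i in range(n):                 # outer loop: O(n)
            for j in range(i + 1, n):      # inner loop: O(n)
                if arr[i] == arr[j]:
                    return True
        return False                       # O(n) * O(n) = O(n^2)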
Similarly, if there are 'm' loops defined in the function, then the order is given by O(n^m).
To express the running time of an algorithm formally, there are three different notations: big O, big Theta (Θ) and big Omega (Ω). Big-Θ is used when the running time is the same for all cases, big-O for the worst case running time, and big-Ω for the best case running time of an algorithm.
Big Oh Notation, Ο
The notation Ο(n) is the formal way to express the upper bound of
an algorithm's running time. It measures the worst case time
complexity or the longest amount of time an algorithm can possibly
take to complete.
Omega Notation, Ω
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best case time complexity or the minimum amount of time an algorithm can possibly take to complete. For example, the lower bound of a simple sorting algorithm can be Ω(n), because in the best case scenario the list is already sorted, and a single pass is enough to confirm it.
Theta Notation, θ
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time. It is represented as follows −
θ(f(n)) = { g(n) if and only if g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0 }
For example, it always takes θ(n) runtime to retrieve the first value inserted in a Stack, because it is all the way at the bottom.
Recurrence Equation
A recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs. A recurrence can be used to represent the running duration of a recursive algorithm. There are four methods for solving recurrence equations: the substitution method, the iteration method, the recursion tree method, and the master method.
1) Substitution Method
The substitution method consists of two steps: guess the form of the solution, then use mathematical induction to show that the guess is correct.
Example: Solve the recurrence T (n) = T (n/2) + 1 by the substitution method.
Solution: We guess that the solution is O(log n), i.e. that for some constant c,
T (n) ≤ c log n.
Put this in the given recurrence equation:
T (n) ≤ c log(n/2) + 1 = c log n − c log 2 + 1 ≤ c log n for c ≥ 1 (logarithm base 2),
so the guess holds and T (n) = O(log n).
2) Iteration Method
It means to expand the recurrence and express it as a summation of terms of n and the initial condition.
Example: Consider the recurrence
T (n) = 1            if n = 1
      = 2T (n-1)     if n > 1
Solution:
T (n) = 2T (n-1)
      = 2[2T (n-2)] = 2^2 T (n-2)
      = 2^2 [2T (n-3)] = 2^3 T (n-3)
      = 2^3 [2T (n-4)] = 2^4 T (n-4)    (Eq. 1)
Repeat the procedure i times:
T (n) = 2^i T (n-i)
Put n-i = 1, i.e. i = n-1, in (Eq. 1):
T (n) = 2^(n-1) T (1)
      = 2^(n-1) · 1    {T (1) = 1, given}
      = 2^(n-1)
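A quick Python check (an illustrative sketch, not part of the original notes) confirming that the closed form 2^(n-1) satisfies the recurrence:

    def t(n):
        # the given recurrence: T(1) = 1, T(n) = 2T(n-1) for n > 1
        return 1 if n == 1 else 2 * t(n - 1)

    for n in range(1, 11):
        assert t(n) == 2 ** (n - 1)   # matches the closed form from iteration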
3) Recurrence Tree Method:
We sum the costs within each level of the tree to obtain a set of per-level costs, and then sum all the per-level costs to determine the total cost of all levels of the recursion.
A recursion tree is best used to generate a good guess, which can then be verified by the Substitution Method.
Example: Given a recurrence, draw the corresponding recursion tree, compute the per-level costs, and sum them to obtain the asymptotic bound, as in the illustration below.
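As an illustration (a typical textbook recurrence assumed here), take T (n) = 2T (n/2) + n:
Level 0 (the root) has one node of cost n.
Level 1 has 2 nodes, each of cost n/2, for a per-level cost of n.
Level i has 2^i nodes, each of cost n/2^i, so every level again costs n.
The tree has log n + 1 levels, so the total cost is n(log n + 1), giving the guess T (n) = Θ(n log n), which can then be verified by the substitution method.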
4) Master Method
The Master Method is used for solving recurrences of the form
T (n) = a T (n/b) + f (n)
where a ≥ 1 and b > 1 are constants and f (n) is an asymptotically positive function.
In applying it to the analysis of a recursive algorithm, the constants and function take on the following significance:
n is the size of the problem.
a is the number of subproblems in the recursion.
n/b is the size of each subproblem (all subproblems are assumed to have essentially the same size).
f (n) is the cost of the work done outside the recursive calls, i.e. dividing the problem and combining the solutions.
The solution is obtained by comparing f (n) with n^(log_b a):
Case 1: If f (n) = O(n^(log_b a − ε)) for some constant ε > 0, then T (n) = Θ(n^(log_b a)).
Case 2: If f (n) = Θ(n^(log_b a)), then T (n) = Θ(n^(log_b a) log n).
Case 3: If f (n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a·f (n/b) ≤ c·f (n) for some constant c < 1 and all sufficiently large n, then T (n) = Θ(f (n)).
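For example (using a merge sort style recurrence as an assumed illustration), consider T (n) = 2T (n/2) + n: here a = 2 and b = 2, so n^(log_b a) = n^(log_2 2) = n. Since f (n) = n = Θ(n^(log_b a)), Case 2 applies and T (n) = Θ(n log n).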
Divide and Conquer
Divide and Conquer is an algorithmic pattern. In this design method, we take a problem with a large input, break the input into smaller pieces, solve the problem on each of the small pieces, and then merge the piecewise solutions into a global solution. This mechanism of solving the problem is called the Divide & Conquer strategy.
A Divide and Conquer algorithm solves a problem using the following three steps:
Divide: Break the given problem into subproblems of the same type.
Conquer: Recursively solve these subproblems.
Combine: Put together the solutions of the subproblems to get the solution to the whole problem.
Advantages of Divide and Conquer
Divide and Conquer successfully solves some classically hard problems, such as the Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems for which you have no basic idea, but the divide and conquer approach lessens the effort, since it works by dividing the main problem into halves and then solving them recursively. The resulting algorithms are often much faster than straightforward alternatives.
It uses cache memory efficiently without occupying much space, because small subproblems can be solved entirely inside the cache instead of repeatedly accessing the slower main memory.
Since these algorithms exhibit parallelism, they do not require any modification to be handled by systems incorporating parallel processing.
Disadvantages of Divide and Conquer
Since most of its algorithms are designed with recursion, divide and conquer demands careful memory management; it may even crash the system if the recursion is performed rigorously beyond the stack capacity of the CPU.
Control Abstraction
A control abstraction is a procedure whose flow of control is clear, but whose primary operations are specified by other procedures whose precise meanings are left undefined.
The control abstraction for the divide and conquer technique is DANDC(P), where P is the problem to be solved.
SMALL(P) is a Boolean-valued function which determines whether the input size is small enough that the answer can be computed without splitting. If this is so, the function S is invoked; otherwise, the problem P is divided into smaller subproblems.
These subproblems P1, P2, . . . , Pk are solved by recursive applications of DANDC.
If the sizes of the two subproblems are approximately equal, then the computing time of DANDC is described by the recurrence T(n) = g(n) for small n, and T(n) = 2T(n/2) + f(n) otherwise, where f(n) is the time for dividing the problem and combining the solutions, and g(n) is the time to compute the answer directly.
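A minimal Python sketch of this control abstraction (the four operations are supplied by the caller, mirroring how the text leaves them unspecified; the toy usage below is an assumed example):

    def dandc(p, small, s, divide, combine):
        # generic divide-and-conquer control abstraction
        if small(p):                 # answer computable without splitting
            return s(p)              # invoke function S directly
        # split P into p1, p2, ..., pk and apply DANDC recursively
        return combine([dandc(pi, small, s, divide, combine) for pi in divide(p)])

    # toy usage: summing a list by repeatedly splitting it in half
    total = dandc(
        [3, 1, 4, 1, 5, 9],
        small=lambda p: len(p) <= 1,
        s=lambda p: p[0] if p else 0,
        divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
        combine=sum,
    )
    print(total)   # 23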
Merge Sort
Suppose we have an array A, and our main concern is to sort its subsection that starts at index p and ends at index r, represented by A[p..r].
Divide
If q is assumed to be the central point somewhere between p and r, then we fragment the subarray A[p..r] into the two subarrays A[p..q] and A[q+1..r].
Conquer
After splitting the array into two halves, the next step is to conquer. In this step, we individually sort both of the subarrays A[p..q] and A[q+1..r]. If we have not yet reached the base case, we again follow the same procedure, i.e. we further segment these subarrays and sort them separately.
Combine
As soon as the conquer step reaches the base case and we successfully obtain our sorted subarrays A[p..q] and A[q+1..r], we merge them back to form a new sorted array A[p..r].
Merge( ) Function
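A minimal Python sketch of one possible merge( ) routine and the surrounding merge sort, assuming the A[p..r] conventions above (an illustration, not taken verbatim from the notes):

    def merge(a, p, q, r):
        # merge the sorted subarrays a[p..q] and a[q+1..r] into a[p..r]
        left, right = a[p:q + 1], a[q + 1:r + 1]
        i = j = 0
        for k in range(p, r + 1):
            if j >= len(right) or (i < len(left) and left[i] <= right[j]):
                a[k] = left[i]
                i += 1
            else:
                a[k] = right[j]
                j += 1

    def merge_sort(a, p, r):
        if p < r:                     # base case: a single element is already sorted
            q = (p + r) // 2          # divide at the central point q
            merge_sort(a, p, q)       # conquer the left half A[p..q]
            merge_sort(a, q + 1, r)   # conquer the right half A[q+1..r]
            merge(a, p, q, r)         # combine the two sorted halves

    A = [36, 25, 40, 2, 7, 80, 15]
    merge_sort(A, 0, len(A) - 1)
    print(A)                          # [2, 7, 15, 25, 36, 40, 80]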
Consider the following example of an unsorted array, which we are going to sort
with the help of the Merge Sort algorithm.
A= (36,25,40,2,7,80,15)
Step 1: The merge sort algorithm iteratively divides the array into equal halves until we reach atomic values. If there is an odd number of elements in the array, then one of the halves will have one more element than the other.
Step 2: After dividing the array into two subarrays, we notice that the order of the elements is unchanged from the original array. We then further divide these two arrays into halves.
Step 3: Again, we divide these arrays until we achieve atomic values, i.e. values that cannot be divided further.
Step 4: Next, we merge them back in the same way as they were broken down.
Step 5: For each pair of lists, we first compare the elements and then combine them to form a new sorted list.
Step 6: In the next iteration, we compare the lists of two data values and merge them back into lists of four data values, all placed in sorted order.
Analysis of Merge Sort:
Let T (n) be the total time taken by the merge sort algorithm on n elements. Dividing takes constant time, the two recursive calls take 2T (n/2), and merging takes c·n time, so
T (n) = c               if n ≤ 1
      = 2T (n/2) + c·n  if n > 1
Solving this recurrence (for example by the Master Method, Case 2) gives T (n) = O(n log n).
Best Case Complexity: The merge sort algorithm has a best-case time complexity of O(n log n), even for an already sorted array.
Quick Sort
Quick sort also follows the divide and conquer approach:
Divide: Rearrange the elements and split the array into two subarrays, with an element in between, such that each element in the left subarray is less than or equal to the middle element (the pivot) and each element in the right subarray is larger than the middle element.
Conquer: Recursively sort the two subarrays.
Combine: Since the subarrays are sorted in place, no work is needed to combine them.
For example, suppose the chosen pivot is 44. Compare 44 to the right-side elements, and if a right-side element is smaller than 44, swap it. As 22 is smaller than 44, swap them.
Worst Case Analysis: This is the case when the items are already in sorted form and we try to sort them again; every partition is then maximally unbalanced, and the sort takes a lot of time and space.
Equation:
T (n) = T (1) + T (n-1) + n
where T (1) is the time taken by the pivot element and T (n-1) is the time for the remaining n-1 elements.
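Expanding this equation term by term shows the quadratic behaviour (a worked step added for clarity, writing the constant T (1) = c):
T (n) = n + c + T (n-1)
      = n + (n-1) + 2c + T (n-2)
      = ...
      = n + (n-1) + ... + 2 + (n-1)c + T (1)
      = n(n+1)/2 − 1 + nc
      = O(n^2)
Hence quick sort takes O(n^2) time in the worst case.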
Example of randomized quick sort: suppose that, in the first call, the randpartition() function chooses 7 randomly as the pivot and swaps it with the first element of the array; then the partition() function call takes place, which divides the array into two halves: the first half has elements less than 7, and the second half has elements greater than 7.
For the elements less than 7, in the 5th call, the randpartition() function randomly chooses 2 as the pivot element, swaps it with the first element, and the call to the partition() function takes place. After the 7th and 8th calls, no further calls can take place, as only one element is left in each of those calls. Similarly, you can observe the sequence of recursive calls for the elements greater than 7.
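Putting the pieces described above together, a minimal Python sketch of randomized quick sort (the names randpartition() and partition() follow the text; the exact partitioning scheme is an assumption):

    import random

    def partition(a, low, high):
        # partition a[low..high] around the pivot stored at a[low];
        # return the pivot's final index
        pivot = a[low]
        i = low + 1
        for j in range(low + 1, high + 1):
            if a[j] < pivot:                      # move smaller elements to the left half
                a[i], a[j] = a[j], a[i]
                i += 1
        a[low], a[i - 1] = a[i - 1], a[low]       # place the pivot between the halves
        return i - 1

    def randpartition(a, low, high):
        # choose the pivot randomly and swap it with the first element
        k = random.randint(low, high)
        a[low], a[k] = a[k], a[low]
        return partition(a, low, high)

    def quick_sort(a, low, high):
        if low < high:                            # one element left: nothing to do
            mid = randpartition(a, low, high)
            quick_sort(a, low, mid - 1)           # elements less than the pivot
            quick_sort(a, mid + 1, high)          # elements greater than the pivot

    A = [36, 25, 40, 2, 7, 80, 15]
    quick_sort(A, 0, len(A) - 1)
    print(A)                                      # [2, 7, 15, 25, 36, 40, 80]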