Ics 121 DS: Class 4 Asymptotic Notations
To go from city "A" to city "B", there are many ways of accomplishing this: by bus, by plane,
by train, or by bicycle.
Likewise, multiple algorithms are available for solving the same problem, for example sorting.
Algorithm analysis helps us to determine which algorithm is most efficient in terms of time and
space consumed.
How to Compare Algorithms
• Execution times?
• Ideal solution? Let us assume that we express the running time of a given
algorithm as a function of the input size n (i.e., f(n)) and compare these different
functions corresponding to running times.
1. First, there is the effort involved in programming and testing two algorithms when at best you
want to keep only one.
2. Second, when empirically comparing two algorithms there is always the chance that one of
the programs was “better written” than the other.
3. Third, the choice of empirical test cases might unfairly favour one algorithm.
4. Fourth, you could find that even the better of the two algorithms does not fall within your
resource budget; you would then have to design and implement a new algorithm.
• Let us assume that you go to a shop to buy a car and a bicycle. If your friend sees you there and
asks what you are buying, then in general you say you are buying a car.
• We can represent the cost of the car and the cost of the bicycle as functions and, for a
given function, ignore the low-order terms that are relatively insignificant.
• As an example, for the cost function n^4 + 2n^2 + 100n + 500, we approximate the whole
expression to n^4, since n^4 has the highest rate of growth.
An algorithm that takes a time of 200n^2 will be faster than some other algorithm that takes n^3 time, for
any value of n larger than 200.
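This crossover can be checked numerically. The sketch below (function names are illustrative) evaluates both cost functions on either side of n = 200:

```python
# Compare the growth of 200*n^2 and n^3 around the crossover at n = 200.
def t_quadratic(n):
    return 200 * n * n

def t_cubic(n):
    return n ** 3

for n in [100, 200, 300, 1000]:
    faster = "200n^2" if t_quadratic(n) < t_cubic(n) else "n^3 (or tie)"
    print(n, t_quadratic(n), t_cubic(n), faster)
```

For n = 100 the cubic algorithm is cheaper, at n = 200 the two are equal, and for every larger n the 200n^2 algorithm wins.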
Contd.,
• If we have two algorithms with the following expressions representing the time required by them for
execution, then:
• Expression 1: (20n^2 + 3n - 4)
• The main idea behind casting aside the less important part is to make things manageable.
• First analyse the algorithm to find an expression that defines its time requirement, and then
analyse how that expression will grow as the input size n grows.
Basics of Asymptotic analysis
• In computer programming, asymptotic analysis tells us how the running time of an algorithm
grows with the input size. The slower this growth, the better the performance of the algorithm.
• For example, let’s assume we have to add an element at the beginning of an array: every existing element must be shifted one position, so the time taken grows with the size of the array.
• For example, traversing through an array of 5 elements will take less time as compared to
traversing through an array of 500 elements.
• Hence, if the input size is ‘n’, then f(n) is a function of ‘n’ denoting the time complexity.
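As a minimal illustration of such an f(n), the hypothetical helper below counts the steps taken to traverse arrays of different sizes:

```python
# Count the steps taken to traverse an array; the step count
# is the f(n) described above, and here f(n) = n.
def traverse(arr):
    steps = 0
    for _ in arr:       # visit each element once
        steps += 1
    return steps

print(traverse(list(range(5))))    # 5 steps
print(traverse(list(range(500))))  # 500 steps
```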
How to calculate f(n) in data structure?
• We usually compare data structures by comparing their f(n) values: the growth rate of an
algorithm.
For a given input size n:
• Worst case: the maximum time required for the execution (slowest time to complete).
• Best case: the minimum time taken for the execution (fastest time to complete).
Terms used to describe the running-time equation for an algorithm: Lower and Upper bound.
Lower bound: a value that is less than or equal to every element of a set of data.
Upper bound: a value that is greater than or equal to every element of a set of data.
• Consider function f(n) as the time complexity of an algorithm and g(n) as its most significant
term. If f(n) <= C g(n) for all n >= n0, for some constants C > 0 and n0 >= 1, then we can represent f(n) as O(g(n)).
f(n) = 3n + 2
If we want to represent f(n) as O(g(n)), then it must satisfy f(n) <= C g(n) for all n >= n0,
for some constants C > 0 and n0 >= 1
⇒ 3n + 2 <= C n, which holds for C = 4 and all n >= 2.
By using Big - Oh notation we can represent the time complexity as f(n) = 3n + 2 = O(n).
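The inequality can be sanity-checked numerically; C = 4 and n0 = 2 below are one valid choice of witness constants, not the only one:

```python
# Check the Big-O witnesses C = 4, n0 = 2 for f(n) = 3n + 2 <= C*n.
def f(n):
    return 3 * n + 2

C, n0 = 4, 2
assert all(f(n) <= C * n for n in range(n0, 1000))
```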
Contd.,
f(n) = 3n + 2
If we want to represent f(n) as Ω(g(n)), then it must satisfy f(n) >= C g(n) for all n >= n0,
for some constants C > 0 and n0 >= 1
⇒ 3n + 2 >= C n, which holds for C = 1 and all n >= 1.
By using Big - Omega notation we can represent the time complexity as f(n) = 3n + 2 = Ω(n)
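The lower bound can be checked the same way; C = 1 and n0 = 1 are one valid choice of witnesses:

```python
# Check the Big-Omega witnesses C = 1, n0 = 1 for f(n) = 3n + 2 >= C*n.
def f(n):
    return 3 * n + 2

C, n0 = 1, 1
assert all(f(n) >= C * n for n in range(n0, 1000))
```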
Big - Theta Notation (Θ)
• Big - Theta notation is used to define a tight bound on an algorithm's time complexity.
• That means Big - Theta notation bounds the running time of an algorithm from both above and below for all
sufficiently large input values.
• Consider function f(n) as time complexity of an algorithm and g(n) is the most significant
term. If C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, C1 > 0, C2 > 0 and n0 >= 1. Then we can
represent f(n) as Θ(g(n)).
If we want to represent f(n) as Θ(g(n)), then it must satisfy C1 g(n) <= f(n) <= C2 g(n) for all n >= n0,
for some constants C1 > 0, C2 > 0 and n0 >= 1
For f(n) = 3n + 2 we have 3n <= 3n + 2 <= 4n for all n >= 2, so by using Big - Theta notation we can represent the time complexity as 3n + 2 = Θ(n)
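Both sides of the sandwich can be verified numerically; C1 = 3, C2 = 4 and n0 = 2 are one valid choice of witnesses:

```python
# Check the Big-Theta witnesses C1 = 3, C2 = 4, n0 = 2
# so that C1*n <= 3n + 2 <= C2*n for all n >= n0.
def f(n):
    return 3 * n + 2

C1, C2, n0 = 3, 4, 2
assert all(C1 * n <= f(n) <= C2 * n for n in range(n0, 1000))
```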
Overview
• Big - Oh (O): upper bound (at most)
• Big - Omega (Ω): lower bound (at least)
• Big - Theta (Θ): exact (tight) bound
Why have three different notations?
• Let’s try to find out the complexity of the linear search algorithm. Suppose you
want to search an element in an array using the linear search algorithm.
• If the element is found at the first position itself, it is a best-case and the program
will take the least time for execution. The complexity will be Ω(1).
• If the element is found at the last position, it will be the worst-case scenario and
the program will take maximum time for execution. Its time complexity will be
O(n).
• The average case lies between the worst case and the best case: on average the element is
found after about (n + 1)/2 comparisons, i.e. θ((n + 1)/2). Since we can ignore constant
factors in asymptotic notations, the average-case complexity is θ(n).
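The three cases above can be seen directly by counting comparisons in a small, illustrative linear search:

```python
# Linear search that also reports how many comparisons it made,
# matching the best/worst cases described above.
def linear_search(arr, target):
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))  # best case: index 0, 1 comparison
print(linear_search(arr, 5))  # worst case: index 4, 5 comparisons
```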
Masters Theorem
• Finding the time complexity of an algorithm from a recurrence relation can be tricky.
• For divide and conquer algorithms, the master theorem gives a direct way of solving such
recurrences.
• How do divide and conquer algorithms work? Here are the steps involved:
• Divide: Break the problem into smaller sub-problems of the same type.
• Conquer: Solve the smaller sub-problems recursively. If a sub-problem is small enough, then
solve it directly.
• Combine: Combine the solutions of the sub-problems that are part of the recursive process to
solve the actual problem.
• The complexity of the divide and conquer algorithm is calculated using the master theorem.
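The three steps above can be sketched with a minimal merge sort, a classic divide and conquer algorithm:

```python
# Minimal merge sort labelling the Divide / Conquer / Combine steps.
def merge_sort(arr):
    if len(arr) <= 1:               # small enough: solve directly
        return arr
    mid = len(arr) // 2             # Divide: split into two halves
    left = merge_sort(arr[:mid])    # Conquer: sort each half recursively
    right = merge_sort(arr[mid:])
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Its recurrence, T(n) = 2T(n/2) + θ(n), is exactly the shape the master theorem handles.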
Masters Theorem .. Contd.,
T(n) = a T(n/b) + θ(n^k log^p n), where a >= 1, b > 1, k >= 0, and:
n = size of the problem,
a = number of sub-problems in the recursion,
n/b = size of each sub-problem,
θ(n^k log^p n) = work done on dividing into sub-problems and combining them to get the solution.
Masters Theorem .. Contd.,
• T(n) = 3T(n/2) + n^2
Here a = 3, b = 2, k = 2 and p = 0, so b^k = 2^2 = 4.
Clearly, a < b^k.
So, we follow case 3.
Since p = 0 (>= 0), we have: T(n) = θ(n^k log^p n), which is T(n) = θ(n^2 log^0 n)
Thus, T(n) = θ(n^2)
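The case selection can be sketched as a small helper; the case labels below follow the extended master theorem form T(n) = a T(n/b) + θ(n^k log^p n) used in this lecture:

```python
# Classify a recurrence T(n) = a*T(n/b) + theta(n^k * log^p n)
# by comparing a with b^k (extended master theorem cases).
def master_case(a, b, k):
    bk = b ** k
    if a > bk:
        return "case 1: T(n) = theta(n^(log_b a))"
    elif a == bk:
        return "case 2: compare p with -1"
    else:
        return "case 3: T(n) = theta(n^k log^p n) when p >= 0"

print(master_case(3, 2, 2))  # a = 3 < b^k = 4 -> case 3, giving theta(n^2)
```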
Masters Theorem .. Contd.,
• T(n) = 2T(n/2) + n log n
Here a = 2, b = 2, k = 1 and p = 1, so b^k = 2^1 = 2.
Clearly, a = b^k and p > -1.
So, we follow case 2: T(n) = θ(n^(log_b a) log^(p+1) n) = θ(n log^2 n)