
ICS 121 DS

Class 4 Asymptotic Notations


Mathematically:

Data Structure + Algorithm = Program

Data Structure: Organization of data needed to solve the problem.


Algorithm: an outline, the essence of a computational procedure; step-by-step instructions.
Program: an implementation of an algorithm in some programming language.
What is an algorithm: Let us consider the problem of preparing an omelette
1) Get the frying pan.

2) Get the oil.

a. Do we have oil?

i. If yes, put it in the pan.

ii. If no, do we want to buy oil?

1. If yes, then go out and buy.

2. If no, we can terminate.

3) Turn on the stove, etc...

An algorithm is the step-by-step unambiguous instructions to solve a given problem.


Algorithm..Contd.,
• There are two main criteria for judging the merits of algorithms:
Correctness
Efficiency

• Why the Analysis of Algorithms?

To go from city “A” to city “B”, there are many ways of accomplishing this: by bus, by plane,
by train, or by bicycle.

Likewise, multiple algorithms are available for solving the same problem, for example, sorting.

Algorithm analysis helps us to determine which algorithm is most efficient in terms of time and
space consumed.
How to Compare Algorithms
• Execution times?

• Number of statements executed?

• Ideal solution? Let us assume that we express the running time of a given
algorithm as a function of the input size n (i.e., f(n)) and compare these different
functions corresponding to running times.

This kind of comparison is independent of machine time, programming style, etc.


How do you compare two algorithms for solving some problem in terms of
efficiency?
• Implement both algorithms as computer programs and then run them.

• This approach is often unsatisfactory for four reasons.

1. First, there is the effort involved in programming and testing two algorithms when at best you
want to keep only one.
2. Second, when empirically comparing two algorithms there is always the chance that one of
the programs was “better written” than the other.
3. Third, the choice of empirical test cases might unfairly favour one algorithm.
4. Fourth, you could find that even the better of the two algorithms does not fall within your
resource budget; in that case you must go looking for a new algorithm.

Asymptotic analysis measures the efficiency of an algorithm, or its implementation as a


program, as the input size becomes large.
Rate of Growth?
• The rate at which the running time increases as a function of input is called rate of growth.

• Let us assume that you go to a shop to buy a car and a bicycle. If your friend sees you there and
asks what you are buying, then in general you would say you are buying a car.

Total cost = cost of car + cost of bicycle

Total cost = cost of car (approximately)

• We can represent the cost of the car and the cost of the bicycle as functions, and for a
given function ignore the low-order terms that are relatively insignificant.

• As an example, in the expression below, n^4, 2n^2, 100n and 500 are the individual cost terms,
and we approximate the sum by n^4, since n^4 has the highest rate of growth.

n^4 + 2n^2 + 100n + 500 ≈ n^4


Contd.,

Let us take an example: if some algorithm has a time complexity of T(n) = n^2 + 3n + 4, which is a
quadratic function, then for large values of n the 3n + 4 part becomes insignificant compared to
the n^2 part.

For n = 1000, n^2 will be 1,000,000 while 3n + 4 will be only 3004.

An algorithm that takes 200n^2 time will be faster than some other algorithm that takes n^3 time, for
any value of n larger than 200.
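A quick Python sketch (illustrative, not from the slides) that locates this crossover numerically:

# Find the smallest n where n^3 overtakes 200*n^2.
def crossover():
    n = 1
    while 200 * n**2 >= n**3:  # the quadratic (with its big constant) still wins
        n += 1
    return n

print(crossover())  # 201 -> for every n > 200, n^3 > 200*n^2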
Contd.,
• If we have two algorithms with the following expressions representing the time required by them for
execution:

• Expression 1: 20n^2 + 3n - 4

• Expression 2: n^3 + 100n - 2

• then we compare them by their dominant terms, 20n^2 and n^3.

• The main idea behind casting aside the less important parts is to make the analysis manageable.

• First analyse the algorithm to find an expression that defines its time requirements, and then
analyse how that expression grows as the input n grows.
Basics of Asymptotic analysis
• In computer programming, asymptotic analysis tells us how the execution time of an algorithm
grows with the input size. The smaller the growth, the better the performance of the algorithm.

• For example, let’s assume we have to add an element at the start of an array: every existing
element must be shifted, so the work grows with the array’s size.

• Similarly, traversing through an array of 5 elements will take less time than
traversing through an array of 500 elements.

• Hence, if the input size is ‘n’, then f(n) is a function of ‘n’ denoting the time complexity.
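A minimal sketch (the function here is hypothetical, written for illustration) that counts the steps of an array traversal, so f(n) can be read off directly:

def traverse(arr):
    """Visit every element, counting one unit of work per element."""
    steps = 0
    for _ in arr:
        steps += 1
    return steps

print(traverse(list(range(5))))    # 5 steps
print(traverse(list(range(500))))  # 500 steps -> f(n) = n for traversal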
How to calculate f(n) in data structure?
• We usually compare data structures by comparing their f(n) values: the growth rate of an
algorithm.

• For example: let’s assume a function f(n) = 8n^2 + 5n + 12.

Here n represents the input size and f(n) the number of instructions executed.

If n = 1:

Percentage of time taken due to 8n^2 = (8/(8+5+12))*100 = 32%

Percentage of time taken due to 5n = (5/(8+5+12))*100 = 20%

Percentage of time taken due to 12 = (12/(8+5+12))*100 = 48%


Calculate f(n)..Contd.,
n       8n^2      5n       12
1       32%       20%      48%
10      92.8%     5.8%     1.4%
100     99.36%    0.62%    0.015%
1000    99.93%    0.06%    ≈0%

Therefore, the complexity of this algorithm is f(n) ≈ 8n^2.

This approximate time complexity is very close to the actual result.
This measure of approximate time complexity is known as asymptotic complexity.
Here, we keep the term that accounts for most of the time and eliminate the insignificant terms.
Though we are not calculating the exact running time, we get very close to it.
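A short Python sketch (an illustration, not part of the original slides) that reproduces the contribution table above:

# Percentage contribution of each term of f(n) = 8n^2 + 5n + 12.
def contributions(n):
    terms = {"8n^2": 8 * n**2, "5n": 5 * n, "12": 12}
    total = sum(terms.values())
    return {name: round(100 * value / total, 2) for name, value in terms.items()}

for n in (1, 10, 100, 1000):
    print(n, contributions(n))
# As n grows, the 8n^2 term approaches 100% of the total, so f(n) ≈ 8n^2.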
Efficiency of Algorithms

• The performance of algorithms can be measured on the scales of time and space.

• Time Complexity: amount of computer time it needs to run to completion.


• Space Complexity: amount of space needed by the algorithm or program to run
to completion.
Time complexity
• The time required for the execution of an algorithm is categorized into:

• Worst case: the maximum time required for the execution (slowest time to complete).

• Best case: the minimum time taken for the execution (fastest time to complete).

• Average case: average time taken for the execution.

Terms used to describe the running-time equation for an algorithm: Lower and Upper bound.

Lower bound: a value that is less than or equal to every element of a set of data.

Upper bound: a value that is greater than or equal to every element of a set of data.

Example: in {3,5,11,20,22} 3 is a lower bound, and 22 is an upper bound.


Asymptotic notations
• Asymptotic notation of an algorithm is a mathematical representation of its
complexity

• Big O notation (O)

• Omega notation (Ω)

• Theta notation (Θ)


Big ‘O’ Notation
• Big - Oh notation is used to define the upper bound of an algorithm, which means the worst case of
an algorithm in terms of time complexity.
• Big - Oh notation can be defined as follows...

• Consider the function f(n) as the time complexity of an algorithm and g(n) as its most significant
term. If there exist constants C > 0 and n0 >= 1 such that f(n) <= C g(n) for all n >= n0,

• then we can represent f(n) as O(g(n)).


Big ‘O’ Notation .. Contd.,
• Example

f(n) = 3n + 2

If we want to represent f(n) as O(g(n)), then we must find constants C > 0 and n0 >= 1 such that
f(n) <= C g(n) for all n >= n0:

f(n) <= C g(n)

⇒ 3n + 2 <= C n

The above condition holds with C = 4 for all n >= 2.

By using Big - Oh notation we can represent the time complexity as f(n) = 3n + 2 = O(n).
Contd.,

• Find an upper bound for f(n) = n^2 + 1

n^2 + 1 <= 2n^2 with c = 2 and n0 = 1
f(n) = O(n^2)

• Find an upper bound for f(n) = 2n^3 - 2n^2

2n^3 - 2n^2 <= 2n^3 with c = 2 and n0 = 1
f(n) = O(n^3)
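A small Python sketch (a numeric spot-check over a finite range, not a proof; the helper name is hypothetical) for testing whether a chosen pair (C, n0) witnesses f(n) = O(g(n)):

def witnesses_big_o(f, g, C, n0, n_max=10_000):
    """Check f(n) <= C*g(n) for every n in [n0, n_max]."""
    return all(f(n) <= C * g(n) for n in range(n0, n_max + 1))

# f(n) = 3n + 2 is O(n) with C = 4, n0 = 2
print(witnesses_big_o(lambda n: 3 * n + 2, lambda n: n, C=4, n0=2))    # True
# f(n) = n^2 + 1 is O(n^2) with C = 2, n0 = 1
print(witnesses_big_o(lambda n: n**2 + 1, lambda n: n**2, C=2, n0=1))  # True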
Big - Omega Notation (Ω)
• The Big - Omega notation always indicates the minimum time required by an algorithm for all input values.
That means Big - Omega notation describes the best case of an algorithm's time complexity.
• Consider the function f(n) as the time complexity of an algorithm and g(n) as its most significant
term. If there exist constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all n >= n0,

• then we represent f(n) = Ω(g(n))


Big - Omega Notation .. Contd.,
• Example

f(n) = 3n + 2

If we want to represent f(n) as Ω(g(n)), then we must find constants C > 0 and n0 >= 1 such that
f(n) >= C g(n) for all n >= n0:

f(n) >= C g(n)

⇒ 3n + 2 >= C n

The above condition holds with C = 1 for all n >= 1.

By using Big - Omega notation we can represent the time complexity as f(n) = 3n + 2 = Ω(n)
Big - Theta Notation (Θ)
• Big - Theta notation is used to define a tight bound of an algorithm in terms of time complexity:
f(n) is bounded both above and below by multiples of g(n).
• That is why Big - Theta notation is commonly described as indicating the average (exact) time
required by an algorithm for all input values.
• Consider the function f(n) as the time complexity of an algorithm and g(n) as its most significant
term. If there exist constants C1 > 0, C2 > 0 and n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n)
for all n >= n0,

• then we can represent f(n) = Θ(g(n))


Big - Theta Notation (Θ) .. Contd.,
• f(n) = 3n + 2

If we want to represent f(n) as Θ(g(n)), then we must find constants C1 > 0, C2 > 0 and n0 >= 1
such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0:

C1 g(n) <= f(n) <= C2 g(n)

⇒ C1 n <= 3n + 2 <= C2 n

The above condition holds with C1 = 1 and C2 = 4 for all n >= 2.

By using Big - Theta notation we can represent the time complexity as 3n + 2 = Θ(n)
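A matching sketch (again a finite spot-check, not a proof; the helper name is hypothetical) for the two-sided Theta condition:

def witnesses_theta(f, g, C1, C2, n0, n_max=10_000):
    """Check C1*g(n) <= f(n) <= C2*g(n) for every n in [n0, n_max]."""
    return all(C1 * g(n) <= f(n) <= C2 * g(n) for n in range(n0, n_max + 1))

# 3n + 2 = Theta(n) with C1 = 1, C2 = 4, n0 = 2
print(witnesses_theta(lambda n: 3 * n + 2, lambda n: n, C1=1, C2=4, n0=2))  # True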
Overview

Worst case: upper bound (at most)
Best case: lower bound (at least)
Average case: exact (tight) time
Why have three different notations?
• Let’s try to find the complexity of the linear search algorithm. Suppose you
want to search for an element in an array using linear search.
• If the element is found at the first position itself, it is the best case and the program
will take the least time for execution. The complexity will be Ω(1).
• If the element is found at the last position, it is the worst-case scenario and
the program will take the maximum time for execution. Its time complexity will be
O(n).
• The average case lies between the worst case and the best case: roughly θ(n/2)
comparisons. Since we can ignore constant factors in asymptotic notation, the
average-case complexity will be θ(n).
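A minimal linear search in Python (an illustrative sketch) showing where the three cases come from:

def linear_search(arr, target):
    """Scan left to right; return the index of target, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

data = [7, 3, 9, 1, 5]
print(linear_search(data, 7))  # best case: first position, 1 comparison -> Ω(1)
print(linear_search(data, 5))  # worst case: last position, n comparisons -> O(n)
# On average the target sits near the middle (~n/2 comparisons) -> θ(n).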
Masters Theorem
• Finding the time complexity of an algorithm from a recurrence relation is tricky.

• The master theorem gives a direct way to solve the recurrences that arise from divide and
conquer algorithms.
• How do divide and conquer algorithms work? Here are the steps involved (see the merge sort sketch after this list):

• Divide: Divide the given problem into sub-problems using recursion.

• Conquer: Solve the smaller sub-problems recursively. If the subproblem is small enough, then
solve it directly.

• Combine: Combine the solutions of the sub-problems that are part of the recursive process to
solve the actual problem.
• The complexity of the divide and conquer algorithm is calculated using the master theorem.
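As referenced above, a merge sort sketch in Python (a classic divide and conquer example, used here purely as an illustration) showing the divide, conquer and combine steps:

def merge_sort(arr):
    if len(arr) <= 1:                # small enough: solve directly
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])     # Divide + Conquer: recurse on each half
    right = merge_sort(arr[mid:])
    return merge(left, right)        # Combine: merge the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 8, 1, 9]))   # [1, 2, 5, 8, 9]
# Its running time satisfies T(n) = 2T(n/2) + θ(n), solved by the master theorem below.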
Masters Theorem .. Contd.,
The master theorem applies to recurrences of the form

T(n) = a T(n/b) + θ(n^k log^p n), where a >= 1, b > 1, k >= 0 and p is a real number.

n = size of the problem,
a = number of sub-problems,
n/b = size of each sub-problem,
θ(n^k log^p n) = work done on dividing into sub-problems and combining them to get the solution.

The three cases are:
Case 1: if a > b^k, then T(n) = θ(n^(log_b a)).
Case 2: if a = b^k, then for p > -1, T(n) = θ(n^(log_b a) · log^(p+1) n); for p = -1, T(n) = θ(n^(log_b a) · log log n); for p < -1, T(n) = θ(n^(log_b a)).
Case 3: if a < b^k, then for p >= 0, T(n) = θ(n^k log^p n); for p < 0, T(n) = θ(n^k).
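A Python sketch (the helper is hypothetical; it simply encodes the cases listed above) that mechanically classifies such a recurrence:

import math

def master(a, b, k, p):
    """Classify T(n) = a*T(n/b) + theta(n^k * log^p n) by the extended master theorem."""
    log_b_a = math.log(a, b)
    if a > b**k:                                  # Case 1
        return f"theta(n^{log_b_a:.2f})"
    if a == b**k:                                 # Case 2
        if p > -1:
            return f"theta(n^{log_b_a:.2f} * log^{p + 1} n)"
        if p == -1:
            return f"theta(n^{log_b_a:.2f} * log log n)"
        return f"theta(n^{log_b_a:.2f})"
    if p >= 0:                                    # Case 3
        return f"theta(n^{k} * log^{p} n)"
    return f"theta(n^{k})"

print(master(3, 2, 2, 0))  # T(n) = 3T(n/2) + n^2     -> theta(n^2 * log^0 n)
print(master(2, 2, 1, 1))  # T(n) = 2T(n/2) + n log n -> theta(n^1.00 * log^2 n)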
Masters Theorem .. Contd.,
• T(n) = 3T(n/2) + n^2

• We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

• Then, we have: a = 3, b = 2, k = 2, p = 0

Now, a = 3 and b^k = 2^2 = 4.
Clearly, a < b^k.
So, we follow case 3.
Since p = 0, we have: T(n) = θ(n^k log^p n) = θ(n^2 log^0 n)
Thus, T(n) = θ(n^2)
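As a numeric sanity check (an illustration, not a proof), evaluating the recurrence directly shows T(n)/n^2 settling near a constant:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Evaluate T(n) = 3*T(n/2) + n^2, using integer division as an approximation."""
    if n <= 1:
        return 1
    return 3 * T(n // 2) + n**2

for n in (2**8, 2**12, 2**16):
    print(n, T(n) / n**2)  # the ratio levels off (near 4), consistent with theta(n^2)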
Masters Theorem .. Contd.,
• T(n) = 2T(n/2) + n log n

• We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

• Then, we have: a = 2, b = 2, k = 1, p = 1
• Now, a = 2 and b^k = 2^1 = 2.
• Clearly, a = b^k.
• So, we follow case 2.
• Since p = 1, we have: T(n) = θ(n^(log_b a) · log^(p+1) n) = θ(n^(log_2 2) · log^(1+1) n)
Thus, T(n) = θ(n log^2 n)
Masters Theorem .. Contd.,

• T(n) = 2T(n/4) + n^0.51

• We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

• Then, we have: a = 2, b = 4, k = 0.51, p = 0
• Now, a = 2 and b^k = 4^0.51 ≈ 2.0279.
• Clearly, a < b^k.
• So, we follow case 3.
• Since p = 0, we have: T(n) = θ(n^k log^p n) = θ(n^0.51 log^0 n)
• Thus, T(n) = θ(n^0.51)
Masters Theorem .. Contd.,

• Solve the following recurrence relation using Master’s theorem-


• T(n) = √2 T(n/2) + log n
Masters Theorem .. Contd.,

• We compare the given recurrence relation with T(n) = aT(n/b) + θ(n^k log^p n).

• Then, we have: a = √2, b = 2, k = 0, p = 1
• Now, a = √2 ≈ 1.414 and b^k = 2^0 = 1.
• Clearly, a > b^k.
• So, we follow case 1.
• So, we have: T(n) = θ(n^(log_b a)) = θ(n^(log_2 √2))
T(n) = θ(n^(1/2))
Thus, T(n) = θ(√n)
Master theorem limitations

The master theorem cannot be applied in some situations, for example:

• where f(n) is not a polynomial, e.g. when f(n) = 2^n;

• where T(n) is not monotone, e.g. when T(n) = sin(n).
Problems
• T(n) = T(n/2) + n^2
• T(n) = 2T(n/2) + n
• T(n) = 64T(n/8) - n^2 log n
• T(n) = 3T(n/2) + n
• T(n) = 3T(n/4) + n log n
• T(n) = 2T(n/2) + n/log n
