
Algorithm-Complexity Reviewer


ALGORITHM COMPLEXITY

LECTURE 1

What are algorithms?
-Algorithms are step-by-step instructions for solving a particular problem.
-In Computer Science, algorithms are at the heart of software development.
Examples: Sorting a list of numbers, searching for an item in a database, routing directions in a mapping app.

How to Write an Algorithm?
-There are no well-defined standards for writing algorithms.
-It is problem- and resource-dependent.
-Algorithms are never written to support a particular programming code.
-We usually write algorithms in a step-by-step manner, but that is not always the case.
-Algorithm writing is a process and is executed after the problem domain is well-defined.
-In the design and analysis of algorithms, usually the second method is used to describe an algorithm.
-It makes it easy for the analyst to analyze the algorithm, ignoring all unwanted definitions.
-We can observe what operations are being used and how the process is flowing.

Characteristics of Algorithms
Unambiguous − An algorithm should be clear and unambiguous. Each of its steps (or phases), and their inputs/outputs, should be clear and must lead to only one meaning.
Input − An algorithm should have 0 or more well-defined inputs.
Output − An algorithm should have 1 or more well-defined outputs, which should match the desired output.
Finiteness − Algorithms must terminate after a finite number of steps.
Feasibility − An algorithm should be feasible with the available resources.
Independent − An algorithm should have step-by-step directions, which should be independent of any programming code.

Pseudocode
-Pseudocode gives a high-level description of an algorithm without the ambiguity associated with plain text, but also without the need to know the syntax of a particular programming language.
-The running time can be estimated in a more general manner by using pseudocode to represent the algorithm as a set of fundamental operations, which can then be counted.

Difference between Algorithm and Pseudocode

Algorithm
-An algorithm is a formal definition with some specific characteristics that describes a process.
-It could be executed by a Turing-complete machine to perform a specific task.

Pseudocode
-Pseudocode is an informal, human-readable description of an algorithm that leaves out many of its granular details.
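To make the algorithm-versus-pseudocode contrast above concrete, here is a small sketch (the linear-search example is my own illustration, not from the lecture) of the same idea written first as pseudocode and then in Python:

```python
# Pseudocode (informal, language-independent):
#   for each element x at position i in list A
#       if x equals the target, the answer is i
#   otherwise the answer is "not found"

def linear_search(items, target):
    """Return the index of target in items, or -1 if it is absent."""
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

print(linear_search([4, 2, 7, 1], 7))   # 2
print(linear_search([4, 2, 7, 1], 9))   # -1
```

The pseudocode describes the steps without committing to any syntax; the Python version commits to one particular language.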
-Writing pseudocode has no restriction of style; its only objective is to describe the high-level steps of the algorithm in natural language.

Importance of Algorithms
-Efficient algorithms are crucial for optimizing software performance.
Algorithms impact real-world applications:
-Finance
-Healthcare
-Transportation
-Education
-Entertainment
-Etc.
-Good algorithms save time, energy, and resources.

Algorithm Analysis
-Different algorithms can solve the same problem, but their efficiency varies.
-Analysis helps us compare algorithms and choose the most suitable one.
-By considering an algorithm for a specific problem, we can begin to develop pattern recognition so that similar types of problems can be solved with the help of this algorithm.
-Analysis of algorithms is the process of analyzing the problem-solving capability of an algorithm in terms of the time and size required (the size of memory for storage during implementation).
-However, the main concern of the analysis of algorithms is the required time, or performance.

Generally, we perform the following types of analysis:
Worst-case − The maximum number of steps taken on any instance of size a.
Best-case − The minimum number of steps taken on any instance of size a.
Average case − An average number of steps taken on any instance of size a.
Amortized − A sequence of operations applied to the input of size a, averaged over time.

-To solve a problem, we need to consider space complexity as well as time complexity, as the program may run on a system where memory is limited but adequate space is available, or vice versa.
-In this context, we can compare bubble sort and merge sort: bubble sort does not require additional memory, but merge sort requires additional space.
-Though the time complexity of bubble sort is higher compared to merge sort, we may need to apply bubble sort if the program needs to run in an environment where memory is very limited.

Measuring Algorithm Efficiency
Time Complexity – How execution time grows with input size.
Space Complexity – How memory usage grows with input size.

LECTURE 3

Sorting Algorithms

The efficiency of an algorithm depends on two parameters:
-Time Complexity
-Space Complexity

Time Complexity
-Time Complexity is defined as the number of times a particular instruction set is executed
rather than the total time taken. This is because the total time taken also depends on external factors like the compiler used, the processor's speed, etc.

Space Complexity
-Space Complexity is the total memory space required by the program for its execution.

One important thing here is that, in spite of these parameters, the efficiency of an algorithm also depends upon the nature and size of the input.

Best Time Complexity
Define the input for which the algorithm takes the least or minimum time. In the best case, we calculate the lower bound of an algorithm. Example: In linear search, when the search data is present at the first location of a large dataset, the best case occurs.

Average Time Complexity
In the average case, we take all random inputs and calculate the computation time for all of them. We then divide it by the total number of inputs.

Worst Time Complexity
Define the input for which the algorithm takes the longest or maximum time. In the worst case, we calculate the upper bound of an algorithm. Example: In linear search, when the search data is present at the last location of a large dataset, the worst case occurs.

Following are the various notations used for expressing time complexity:

Big Oh Notation, O
The notation O(n) is used to express the upper bound of an algorithm's running time.

Omega Notation, Ω
The notation Ω(n) is used to express the lower bound of an algorithm's running time.

Theta Notation, θ
The notation θ(n) lies between O(n) and Ω(n) and is used to express the exact asymptotic behavior of an algorithm's running time.

"Generally, the algorithm's performance is heavily reliant on the input data and its type; therefore, the worst-case time complexity is normally used because sometimes it's impossible to predict all variations in the input data."

Bubble Sort
-In bubble sort, we compare each adjacent pair. If they are not in the correct order, we swap them. We keep repeating the previous step until no swaps are needed, which indicates all the elements are sorted.

Time Complexity:

Worst case: O(n²)
-When the array is reverse-sorted, we iterate through the array (n - 1) times. In the first iteration, we do (n - 1) swaps, (n - 2) in the second, and so on, until in the last iteration we do only one swap. Thus the total number of swaps sums up to n * (n - 1) / 2.

Average case: O(n²)
-For a completely random array, the total number of swaps averages out to be around n² / 4, which is again O(n²).

Best case: O(n)
-If we do not perform any swap in the first iteration of the array, we know the array is already sorted and can stop sorting, so the time complexity turns out to be linear.

Space Complexity:
-Since we use only a constant amount of additional memory apart from the input array, the space complexity is O(1).

Selection Sort
-Selection sort is a simple sorting algorithm that divides the array into two parts: a subarray of already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray is initially empty. We iterate over the array (n - 1) times. In each iteration, we find the smallest element in the unsorted subarray and place it at the end of the sorted subarray.

Time Complexity:

Worst case = Average case = Best case = O(n²)
-We perform the same number of comparisons for an array of any given size. In the first iteration, we perform (n - 1) comparisons, (n - 2) in the second, and so on, until the last iteration where we perform only one comparison. Thus the total number of comparisons sums up to n * (n - 1) / 2. The number of swaps performed is at most n - 1. So the overall time complexity is quadratic.

Space Complexity:
-Since we are not using any extra data structure apart from the input array, the space complexity is O(1).

Insertion Sort
-Like selection sort, the insertion sort algorithm also divides the array into two parts: a subarray of already sorted elements and a subarray of remaining elements to be sorted. The sorted subarray initially contains only the first element of the array. We iterate over each of the remaining elements and put it in its correct position in the sorted subarray.

Time Complexity:

Worst case: O(n²)
-When we apply insertion sort on a reverse-sorted array, it will insert each element at the beginning of the sorted subarray, which gives the worst time complexity of insertion sort.

Average case: O(n²)
-When the array elements are in random order, the average running time is O(n² / 4) = O(n²).

Best case: O(n)
-When we run insertion sort on an already sorted array, it will only compare each element to its predecessor, thereby requiring n steps to sort the already sorted array of n elements.

Space Complexity:
-Since we use only a constant amount of additional memory apart from the input array, the space complexity is O(1).

Quick Sort
-Quicksort is a relatively more complex algorithm. It uses a divide-and-conquer strategy to divide the array into two subarrays. We choose an element called the pivot, place it at its correct index, and then reorder the array by moving all the elements less than the pivot to its left and all the elements greater than it to its right.
-We then recursively sort these subarrays until the entire array is sorted. The efficiency of the quicksort algorithm heavily depends on the selection of the pivot element.
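The partition-and-recurse scheme just described can be sketched as follows. This is a minimal illustrative implementation; the Lomuto-style partition with the rightmost element as pivot is my own choice, not something specified in the lecture:

```python
def quicksort(a, lo=0, hi=None):
    """Sort list a in place using divide-and-conquer partitioning."""
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)   # pivot lands at its correct index p
        quicksort(a, lo, p - 1)    # recursively sort elements left of the pivot
        quicksort(a, p + 1, hi)    # recursively sort elements right of the pivot
    return a

def partition(a, lo, hi):
    """Move elements <= pivot to its left and greater ones to its right;
    return the pivot's final index."""
    pivot = a[hi]                  # rightmost element as pivot (illustrative choice)
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # place the pivot at its correct index
    return i

print(quicksort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]
```

A fixed rightmost pivot exhibits exactly the worst case described next when the input is already sorted; the pivot-selection strategies discussed below avoid it.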
Time Complexity:

Worst case: O(n²)
-When the array is sorted and we choose the leftmost element as pivot, or the array is reverse-sorted and we choose the rightmost element as pivot, the time complexity becomes quadratic, since partitioning the array results in highly unbalanced subarrays in such cases.
-Also, when there are a large number of identical elements in the array, optimal partitioning becomes hard, resulting in quadratic time complexity.

Average case and best case: O(n log n)
-The best case for quicksort happens when we successfully pick the median element for partitioning every time. Such partitioning allows us to divide the array in half every time.
-We can almost always avoid the worst case in quicksort by choosing an appropriate pivot. There are various ways to achieve this:
-Pick the pivot from the middle of the array.
-Adopt a random selection of pivots.
-Take the median of three pivot candidates, i.e., choose the median of the first, middle, and last elements of the array as the pivot.
-These methods result in almost equal partitioning of the array, on average. This way, the average-case time complexity of quicksort practically becomes O(n log n).

Space Complexity:
-Although quicksort doesn't use auxiliary space to store array elements, additional space is required for creating stack frames in recursive calls.

Worst case: O(n)
-This happens when the pivot element is the largest or smallest element of the array in every recursive call. The sizes of the subarrays after partitioning will be n - 1 and 1. In this case, the size of the recursive tree will be n.

Best case: O(log n)
-This happens when the pivot element's correct position in the partitioned array is in the middle every time. The size of each subarray will be half the size of the original array. In this case, the recursive tree's size will be O(log n).
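For reference, the three elementary sorts discussed earlier can be sketched in Python as follows. These are minimal illustrative versions of the descriptions above, not code from the lecture:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; stop early if no swaps occur."""
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:              # no swaps: already sorted, the O(n) best case
            break
    return a

def selection_sort(a):
    """Grow a sorted prefix by selecting the minimum of the unsorted suffix."""
    for i in range(len(a) - 1):
        smallest = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[smallest] = a[smallest], a[i]
    return a

def insertion_sort(a):
    """Insert each element into its correct position in the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]          # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a

print(bubble_sort([5, 1, 4, 2]))      # [1, 2, 4, 5]
print(selection_sort([5, 1, 4, 2]))   # [1, 2, 4, 5]
print(insertion_sort([5, 1, 4, 2]))   # [1, 2, 4, 5]
```

All three sort in place and use O(1) extra space, matching the space-complexity discussion above.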
