Data Structure and Algorithms

Instructor: Dessalew G.
Haramaya University
Chapter One: Introduction
This Chapter Covers:
Primitive and Non-primitive Data structures
Algorithm Analysis:
Computational and asymptotic complexity
Best case, Worst Case, Average Case Analysis

Order of Magnitude: Big-O, Ω, Θ, little-o notations


Data Structure
A data structure is a way of storing data in a computer so
that it can be used efficiently.
It is the specification of the elements of the structure, the
relationships between them, and the operations that may be
performed upon them.
Consider, as an example, books in a library: they can be stored
on a shelf arbitrarily, or in some defined order such as sorted
by title, by author, and so on.
It can be said that the study of data structures involves the
storage, retrieval and manipulation of information.
In other words, the possible logical and physical
arrangement of data items to provide an efficient solution
for a problem is called Data Structure.
Operations on data Structures
The various operations that can be performed over Data
Structures are
Create
Destroy
Access
Update
Primitive Data Structures
Integers, Real Numbers, Characters, Logical, Pointers,…
Non-Primitive Data Structure
Non-primitive data structures are those that can be built
from the primitive data structures.
The number of such non-primitive data structures is not fixed.
If the allocation of memory is sequential or continuous then
such a data structure is called a non-primitive linear
structure.
Example: arrays, stacks, queues, linked lists, etc.
If the allocation of memory is not sequential but random or
discontinuous, then such a data structure is called a
non-primitive non-linear structure. Example: trees, graphs, etc.
Algorithm Analysis
An algorithm is a step-by-step procedure to solve a problem.
E.g. baking a cake, industrial activities, student registration,
etc. all need an algorithm to follow.
The purpose of an algorithm is to accept input values, to
change a value held by a data structure, to re-organize the
data structure itself (e.g. sorting), to display the content of
the data structure, and so on.
More than one algorithm is possible for the same task.
Properties of algorithms
Finiteness: an algorithm should have a finite number of
steps to be followed.
Absence of ambiguity: the algorithm should have one and
only one interpretation during execution.
Sequential: it should have a start step and a halt step for a
final result.
Feasibility: it should be possible to execute each step of the
algorithm.
Input/Output: zero or more inputs, one or more outputs.
And so on.
Algorithm Analysis Concepts
Complexity analysis is concerned with determining the
efficiency of algorithms.
How do we measure the efficiency of algorithms?
1. Empirical (Computational) Analysis
Here the total running time of the program is
considered.
It uses the system time to calculate the running time,
so it cannot reliably be used to compare the
efficiency of algorithms.
This is because the total running time of the
program varies with the:
Processor speed
Current processor load
Input size of the given algorithm
and software environment (multitasking, single tasking, …)
What does efficiency depend on?
Execution speed (most important)
Amount of memory used
Efficiency differences may not be noticeable for small data,
but become very important for large amounts of data
Exercise One
Write a program that displays the total running time of a
given algorithm based on different situations such as
processor speed, input size, processor load, and software
environment (DOS and Windows).
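As a starting point, here is a minimal C++ sketch of such a measurement
using the standard <chrono> clock (the counting loop merely stands in for
whatever algorithm is under test, and the value of n is an arbitrary choice):

#include <chrono>
#include <iostream>

int main()
{
    long long n = 100000000;                  // input size: vary this and re-run
    long long k = 0;
    auto start = std::chrono::steady_clock::now();
    for (long long i = 0; i < n; i++)         // the "algorithm" being measured
        k++;
    auto stop = std::chrono::steady_clock::now();
    std::chrono::duration<double> elapsed = stop - start;
    std::cout << "k = " << k << ", n = " << n
              << ", time = " << elapsed.count() << " s\n";
    return 0;
}

Running this on machines with different processor speeds and loads gives
different times for the same n, which is exactly why empirical timing is a
poor basis for comparing algorithms.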
2. Theoretical (Asymptotic Complexity) Analysis
In most real-world examples it is not so easy to calculate the
complexity function (the relationship between t and n is very complex).
But this is not normally necessary; e.g. consider t = f(n) = n² + 5n. For all
n>5, n² is the largest term, and for very large n the 5n term is insignificant.
Therefore we can approximate f(n) by the n² term only. This is
called asymptotic complexity:
An approximation of the computational complexity that holds for
large n.
Used when it is difficult or unnecessary to determine the true
computational complexity.
Usually it is difficult or impossible to determine the true computational
complexity. So, asymptotic complexity is the most common
measure.
How do we estimate the complexity of
algorithms?
1. Algorithm analysis – Find f(n)- helps to determine the
complexity of an algorithm
2. Order of Magnitude – g(n) belongs to O(f(n)) – helps to
determine the category of the complexity to which it
belongs.
Analysis Rules:
• 1. Assume an arbitrary time unit.
• 2. Execution of one of the following operations takes time unit 1:
– Assignment statement. E.g. sum=0;
– Single I/O statement. E.g. cin>>sum; cout<<sum;
– Single Boolean expression. E.g. !done
– Single arithmetic operation. E.g. a+b
– Function return. E.g. return(sum);
• 3. Selection statement
– Time for condition evaluation + the maximum time of its clauses
• 4. Loop statement
– 1 (initialization) + (n+1) (condition checks) + n (updates) +
∑(time of the body over all iterations)
• 5. Function call
– 1 + time(parameters) + body time
Estimation of Complexity of Algorithm
• Calculate T(n) for the following
• 1. k=0;
cout<<"enter an integer";
cin>>n;
for (i=0;i<n;i++)
k++;
• T(n) = 1+1+1+(1+(n+1)+n+n)
= 5+3n

• 2. i=0;
while (i<n)
{
x++;
i++;
}
j=1;
while (j<=10)
{
x++;
j++;
}
• T(n) = (1+(n+1)+n+n) + (1+11+10+10)
= 3n+34

3. for (i=1;i<=n;i++)
for (j=1;j<=n;j++)
k++;
T(n) = 1+(n+1)+n+n(1+(n+1)+n+n) = 3n²+4n+2

4. sum=0;
if (test==1)
{
for (i=1;i<=n;i++)
sum=sum+i;
}
else
{
cout<<sum;
}
• T(n) = 1+1+Max(1+(n+1)+n+n+n, 1) = 4n+4
Exercise
• Calculate T(n) for the following codes
A. sum=0;
for (i=1;i<=n;i++)
for (j=1;j<=m;j++)
sum++;

B. sum=0;
for (i=1;i<=n;i++)
for (j=1;j<=i;j++)
sum++;
Algorithm Analysis Categories
• Algorithms must be examined under different situations to
correctly determine their efficiency for accurate
comparisons.
• Best Case Analysis: Assumes that data are arranged in
the most advantageous order. It also assumes the
minimum input size.
• E.g.
• For sorting – the best case is if the data are arranged in the
required order.
• For searching – the required item is found at the first position.
• Note: Best Case computes the lower boundary of T(n)
• It causes the fewest number of executions.
Worst case analysis
• Assumes that data are arranged in the most disadvantageous
order.
• It also assumes the maximum possible input size.
• Eg.
• For sorting – data are arranged in opposite required order
• For searching – the required item is found at the end of the
item or the item is missing
• It computes the upper bound of T(n) and causes the maximum
number of executions.
Average case Analysis
Assumes that data are found in random order.
It also assumes random or average input size.
E.g.
For sorting – data are in random order.
For searching – the required item is found at any position
or missing.
It computes the optimal bound of T(n).
It also causes the average number of executions.
Best-case and average-case analysis cannot, by themselves, be used
to determine the complexity of algorithms;
worst-case analysis is the most useful for determining the
complexity of algorithms.
Order Of Magnitude
Order of magnitude refers to the rate at which the storage
or time grows as a function of the problem size n.
It is expressed in terms of its relationship to some known
functions – asymptotic analysis.
Asymptotic complexity of an algorithm is an
approximation of the computational complexity that holds
for large amounts of input data.

Types of Asymptotic Notations


1. Big-O Notation
2. Big – Omega (Ω)
3. Big –Theta (Θ)
4. Little o (small o)
1. Big-O Notation
Definition : The function T(n) is O(F(n)) if there exist
constants c and N such that T(n) ≤ c.F(n) for all n ≥ N.
As n increases, T(n) grows no faster than F(n) or in the long
run (for large n) T grows at most as fast as F
It computes the tight upper bound of T(n)
Describes the worst case analysis.
Properties of Big-O Notation
If T(n) is O(h(n)) and F(n) is O(h(n)), then T(n) + F(n) is
O(h(n)).
The function a·nᵏ is O(nᵏ) for any a and k.
The function log_a n is O(log_b n) for any positive numbers a and
b ≠ 1, since logarithms of different bases differ only by a
constant factor.
Examples
Find F(n) such that T(n) = O(F(n)) for T(n) = 3n+5
Solution:
3n+5 ≤ c·n
c=6, N=2
F(n)=n because 3n+5 ≤ 6n for all n ≥ 2
T(n) = O(n)
Example
T(n) = n² + 5n
T(n) ≤ c·F(n)
n² + 5n ≤ c·n²
1 + (5/n) ≤ c
F(n) = n²
T(n) = O(n²)
• Therefore if we choose N=5, then c=2; if we choose N=6,
then c=1.83, and so on.
So what are the 'correct' values for c and N? They should be
determined by finding the value of N at which a particular term
of T(n) becomes, and stays, the largest.
In the above example, the n² term becomes larger than the 5n
term at n>5, so N=5, c=2 is a good choice.
2. Big – Omega
Definition: The function T(n) is Ω(F(n)) if there exist
constants c and N such that
T(n) ≥ c·F(n) for all n ≥ N.
As n increases T(n) grows no slower than F(n) or in the
long run (for large n) T grows at least as fast as F
It computes the tight lower bound of T(n).
Describes the best case analysis
Examples
Find F(n) such that T(n) = Ω(F(n))
for T(n) = 3n+5
F(n) = n, as 3n+5 ≥ c·n for c=1 and N=0
3. Big Theta Notation
Definition: The function T(n) is Θ(F(n)) if there exist
constants c1, c2 and N such that c1.F(n) ≤ T(n) ≤ c2.F(n)
for all n ≥ N.
As n increases, T(n) grows as fast as F(n)
It computes the tight optimal bound of T(n).
Describes the average case analysis.
Example:
Find F(n) such that T(n) = Θ(F(n)) for T(n)=2n2.
Solution: c1n2 ≤ 2n2 ≤ c2n2
F(n)=n2 because for c1=1 and c2=3,
n2 ≤ 2n2 ≤ 3n2 for all n.
4. Little-o (small o) notation
Defn: The function T(n) is o(F(n)) if T(n) is O(F(n)) but
T(n) is not Θ(F(n)); equivalently, for every constant c > 0
there is an N such that T(n) < c·F(n) for all n ≥ N.
As n increases, F(n) grows strictly faster than T(n).
It computes a non-tight upper bound of T(n).
Example:
Find F(n) such that T(n) = o(F(n)) for T(n) = n².
Solution: F(n) = n³, because for every c > 0,
n² < c·n³ for all n > 1/c.
(Note that F(n) = n² would not work here, since n² is Θ(n²).)
Big O Analysis
(Growth-rate comparison of the common complexity categories: figure not reproduced.)
Assignment
Write a program that displays the total running time for
various input sizes of the complexity category functions and
plot a graph (input size vs. time) for each function. By
observing the graph, write the functions in ascending order of
complexity.

Deadline :
Finding Asymptotic Complexity:
Examples
Rules to find Big-O from a given T(n)
Take the highest-order term.
Ignore its coefficient. E.g. T(n) = 3n²+4n+2 gives O(n²).
Reading Assignment
Big-OO notation
Amortized complexity
Finding Big O of given Algorithm
1. for (i=1;i<=n;i++)
cout<<i;
T(n) = 1+(n+1)+n+n
= 3n+2
T(n) = O(n)

2. for (i=1;i<=n;i++)
for (j=1;j<=n;j++)
cout<<i;
T(n) = 1+(n+1)+n+n(1+(n+1)+n+n)
= 3n²+4n+2
T(n) = O(n²)

Exercise
Find Big O of the following algorithm
1. for (i=1;i<=n;i++)
sum=sum+i;
for (i=1;i<=n;i++)
for (j=1;j<=m;j++)
sum++;

2. if (k==1)
{
for (i=1;i<=100;i++)
for (j=1;j<=1000;j++)
cout<<i;
}
else if (k==2)
{
for (i=1;i<=n;i=i*2)
cout<<i;
}
else
{
for (i=1;i<=n;i++)
sum++;
}

3. for (i=1;i<=n;i++)
{
if (i<10)
for (j=1;j<=n;j=j*2)
cout<<j;
else
cout<<i;
}

4. for (int i=1; i<=N; ++i)
for (int j=i; j>0; --j)
{} // do nothing
5. for (int i=1; i<N; i=i*2)
{}
6. for (int i=0; i<N; ++i)
for (int j=i; j>0; j=j/2)
{}

7. for (int i=1; i<N; ++i)
for (int j=0; j<N; j+=i)
{}
8. int i=0, sum=0;
while (sum < N)
sum = sum + i++;
9. for (int i=1; i<N; i=i*2)
for (int j=0; j<i; ++j)
{}

10. i=1;
while (i<n)
{
i=i*2;
}
11. for ( int i = 0; i < n; i++ )
for ( int j = 0; j <= n - i; j++ )
int a = i;
for (int i = 0; i < n; i ++ )
for (int j = 0; j < n*n*n ; j++ )
sum++;

12. void Test(int N){
for (int i=1; i<N; i=i*4){
for (int j=N; j>2; j=j/3){
cout<<"I have no idea what this does";
}
}
}

13. for (int i = 0; i < n; i++)
sum++;
for (int j = 0; j < n; j++)
sum++;
Quiz one - 5%
1. Determine the big-O expression for
T(n) = 3 log n² + 2
2. Big-O computes the worst case analysis (T/F).
Chapter Two: Searching and Sorting
Algorithms
This Chapter Covers:
Sequential search
Binary search
Simple sort
Bubble sort
Selection sort
Insertion sort
Advanced Sorting Algorithms
Shell Sort, Heap Sort, Quick Sort, Merge Sort
Why do we study sorting and searching
algorithms?
they are among the most common and useful tasks in any
software development;
they take more than 25% of the running time of
computers' tasks.
Example:
Searching documents over the internet
Searching files and folders in a hard disk
Sorting students by their name, year and so on
Sorting files or search results by file name, date created, and so
on
Deleting and recording data from a database
Simple searching Algorithm
Searching is a process of finding an element from a
collection of elements.
Searching algorithms to be considered:
Sequential search
Binary search
Sequential Searching
It is the natural way of searching.
The algorithm does not assume that the data is sorted.
Algorithm:
Search list from the beginning until key is found or end of list
reached.
Implementation:
int i;
bool found = false;
for (i=0; i<n; i++)
if (dataElement[i] == key)
{
found = true;
break;
}
return found;
Example

List index: 0 1 2 3 4 5 6 7
Data:       C A E F N Q R K
Key=Q: after six comparisons the algorithm returns found = true.
Key=X: after 8 comparisons the algorithm returns found = false.

What is the Big O of the sequential searching algorithm?
For searching algorithms the dominant operation is the
comparison of keys.
How many comparisons are made in sequential searching?
The maximum number of comparisons is n, i.e. O(n).
Binary Searching
It works only on a sorted list.
Algorithm:
1. Divide the list into halves.
2. See which half of the list the key belongs to.
3. Repeat 1 and 2 until only one element remains.
4. If the remaining element is equal to the search key, then
found = true; else the key is not found.
Implementation:
int top = n-1, bottom = 0, middle;
bool found = false;
while (top > bottom)
{
middle = (top + bottom)/2;
if (dataElement[middle] < key)
bottom = middle + 1;
else
top = middle;
}
if (dataElement[top] == key)
found = true;
return found;
How many comparisons are made in
binary search algorithm?
n    Max. number of comparisons
1    1
2    2
4    3
8    4
16   5
32   6
Then for n data items, log₂n + 1 comparisons are
needed; so it is O(log n).
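To experiment with the two algorithms, the snippets above can be wrapped
into functions and called from a small test program. A possible sketch
(the function names and test data here are our own illustrations):

#include <iostream>

bool sequentialSearch(const int data[], int n, int key)
{
    for (int i = 0; i < n; i++)      // scan from the beginning
        if (data[i] == key)
            return true;             // key found
    return false;                    // end of list reached
}

bool binarySearch(const int data[], int n, int key)
{
    int bottom = 0, top = n - 1;
    while (top > bottom)             // narrow down to one element
    {
        int middle = (top + bottom) / 2;
        if (data[middle] < key)
            bottom = middle + 1;
        else
            top = middle;
    }
    return data[top] == key;         // data[] must be sorted
}

int main()
{
    int data[] = {1, 3, 4, 6, 8, 9, 11};   // sorted, as binary search requires
    std::cout << sequentialSearch(data, 7, 9) << "\n";  // 1 (found)
    std::cout << binarySearch(data, 7, 9) << "\n";      // 1 (found)
    std::cout << binarySearch(data, 7, 5) << "\n";      // 0 (not found)
    return 0;
}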
Simple (Elementary) Sorting
Algorithms

For searching, sorting is important:
take as an example a telephone directory.
The efficiency of the algorithm should be assessed.
Sorting algorithms commonly consist of two types of
operation: comparisons and data movements.
A comparison is simply comparing one value in a list
with another, and a data movement (swapping) is an
assignment.
Sorting algorithms to be considered:
Simple sort, bubble sort, selection sort, insertion sort
Simple Sort Algorithm
In simple sort algorithm,
the first element is compared with the second, third, and
all subsequent elements.

If any of these other elements is less than the current first
element then the first element is swapped with that
element.

The above step is repeated with the second, third and all
other subsequent elements.
Implementation:
void swap( dataType x, dataType y)
{
dataType temp;
temp=x;
x=y;
y=temp;
}
for (i=0;i<=n-2;i++)
for(j=i+1;j<=n-1;j++)
if(dataElement[i]>dataElement[j])
Swap(dataElement[i],dtataElement[j]);
Example

i   j (pass)   Data elements   Remarks
0   1          2,4,3,1         The smallest element
    2          2,4,3,1         is placed in its
    3          1,4,3,2         proper position.
1   2          1,3,4,2         The second smallest
    3          1,2,4,3         element is placed in its proper place.
2   3          1,2,3,4         All the data elements are sorted
                               at the (n-1)th pass.
Algorithm Analysis
Analysis involves the number of comparisons and the
number of swaps.
Pass      Comparisons   Swaps
1st       n-1           n-1
2nd       n-2           n-2
...       ...           ...
(n-1)th   1             1
Total     n(n-1)/2      n(n-1)/2
Therefore the Big O of the simple sort algorithm is
n(n-1)/2 + n(n-1)/2 = O(n²)
Bubble Sort Algorithm
It is a method which works by interchanging adjacent pairs
of elements in the array.
After each pass through the data, one element (the largest or
smallest element) will have been moved to its proper place at one
end of the array.

Implementation:
for (int i = 0; i < n-1; i++)
for (int j = n-1; j > i; --j)
if (data[j] < data[j-1])
swap(data[j],data[j-1]);
Example: data elements = 5,2,3,8,1. With the code above the smallest
element bubbles to the front on each pass; after the first pass the
data is 1,5,2,3,8.
Analysis
Pass    Comparisons   Swaps
1       n-1           n-1
2       n-2           n-2
...     ...           ...
n-1     1             1
Total   n(n-1)/2      n(n-1)/2
= O(n²)
Selection Sort Algorithm
It is in many ways similar to both the simple and bubble sort
algorithms.
Rather than swapping neighbors continuously as it
traverses the subarray to be sorted, as done in
the bubble sort case, this algorithm finds the minimum
element of the subarray and swaps it with the pivot
element.
Implementation
for (i=0; i<=n-2; i++)
{
min_index = i;
for (j=i+1; j<=n-1; j++)
if (dataElement[j] <= dataElement[min_index])
min_index = j;
swap(dataElement[i], dataElement[min_index]);
}
Analysis:
Pass    Comparisons   Swaps
1       n-1           1
2       n-2           1
...     ...           ...
n-1     1             1
Total   n(n-1)/2      n-1
It is in O(n²)
Insertion Sort Algorithm
As each element in the list is examined, it is put into
its proper place among the elements that have already been
examined (and put in correct order).
When the last element is put into its proper position,
the list is sorted and the algorithm is done.
In the worst case it behaves like the inefficient
bubble sort and selection sort algorithms,
but in the average case it performs better.


Implementation
for (i=1; i<=n-1; i++)
{
inserted = false;
j = i;
while ((j>=1) && (inserted == false))
{
if (dataElement[j] < dataElement[j-1])
swap(dataElement[j], dataElement[j-1]);
else
inserted = true;
j--;
}
}
Analysis
Pass    Comparisons   Swaps
1       1             1
2       2             2
...     ...           ...
n-1     n-1           n-1
Total   n(n-1)/2      n(n-1)/2
It is in O(n²)
Assignment - II
Write a C++ program that implements all of the
simple sorting algorithms. The program should
accept different unsorted data items from the user,
sort them using all the algorithms, and tell which
algorithm is efficient for those particular unsorted data
items. (Hint: count how many comparisons and swaps
are done; the algorithm with the smaller number of swaps
and comparisons is considered the most efficient for those
particular data items.)
Advanced Sorting Algorithms

This chapter covers:
Shell Sort
Heap Sort
Quick Sort
Merge Sort
Shell Sort
The shell sort, also known as the diminishing increment
sort, was developed by Donald L. Shell in 1959.
The idea behind shell sort is that it is faster to sort an
array if parts of it are already sorted.
The original array is first divided into a number of smaller
subarrays, these subarrays are sorted, and then they are
combined into the overall array and this is sorted.

Pseudocode for shell sort:
Divide data into h subarrays
for (i = 1; i <= h; i++)
    Sort subarray i
Sort array data
(A complete C++ sketch is given at the end of this section.)
How should the original array be divided into
subarrays?
One approach would be to divide the array into a
number of subarrays consisting of contiguous
elements (i.e. elements that are next to each other).
For example, the array [abcdef] could be divided into
the subarrays [abc] and [def]. However, shell sort uses
a different approach: the subarrays are constructed by
taking elements that are regularly spaced from each
other.
For example, a subarray may consist of every second
element in an array, or every third element, etc.
For example, dividing the array [abcdef] into two
subarrays by taking every second element results in
the subarrays [ace] and [bdf]

Actually, shell sort uses several iterations of this technique.
First, a large number of subarrays, consisting of widely spaced
elements, are sorted.
Then, these subarrays are combined into the overall array, a
new division is made into a smaller number of subarrays and
these are sorted.
In the next iteration a still smaller number of subarrays is
sorted.
This process continues until eventually only one subarray is
sorted, the original array itself.


(Figure: shell sort example, showing a 5-sort, a 3-sort and a 1-sort on sample data.)
In the above example:

In the first iteration, five subarrays are constructed by taking
every fifth element.
These subarrays are sorted (this phase is known as the 5-sort).
In the second iteration, three subarrays are constructed by
taking every third element, and these arrays are sorted (the 3-sort).
In the final iteration, the overall array is sorted (the 1-sort).
Notice that the data after the 3-sort but before the 1-sort is
almost correctly ordered, so the complexity of the final 1-sort is
reduced.

79
In the above example, we used three iterations: a 5-sort,
a 3-sort and a 1-sort.
This sequence is known as the diminishing increment.
But how do we decide on this sequence of increments?
Unfortunately, there is no definitive answer to this
question. In the original algorithm proposed by
Donald L. Shell, powers of 2 were used for the
increments, e.g. 16, 8, 4, 2, 1.
However, this is not the most efficient technique.
Experimental studies have shown that increments
calculated according to the following conditions lead
to better efficiency:
h₁ = 1
hᵢ₊₁ = 3hᵢ + 1
For example, for a list of length 10,000 the sequence of
increments would be 3280, 1093, 364, 121, 40, 13, 4, 1.
Another decision that must be made with shell sort is
what sorting algorithms should be used to sort the
subarrays at each iteration?
A number of different choices have been made: for
example, one technique is to use insertion sort for
every iteration and bubble sort for the last iteration.
Actually, whatever simple sorting algorithms are used
for the different iterations, shell sort performs better
than the simple sorting algorithms on their own.
It may seem that shell sort should be less efficient,
since it performs a number of different sorts, but
remember that most of these sorts are on small arrays,
and in most cases the data is already almost sorted.
An experimental analysis has shown that the
complexity of shell sort is approximately O(n^1.25), which
is better than the O(n²) offered by the simple
algorithms.
Example 2:
Sort: (5, 8, 2, 4, 1, 3, 9, 7, 6, 0)
Solution:
4-sort:
Sort(5, 1, 6) → 1, 8, 2, 4, 5, 3, 9, 7, 6, 0
Sort(8, 3, 0) → 1, 0, 2, 4, 5, 3, 9, 7, 6, 8
Sort(2, 9) → 1, 0, 2, 4, 5, 3, 9, 7, 6, 8
Sort(4, 7) → 1, 0, 2, 4, 5, 3, 9, 7, 6, 8

1-sort:
Sort(1, 0, 2, 4, 5, 3, 9, 7, 6, 8) → 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
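Putting the pieces together, the following is a compact C++ sketch of
shell sort (our own illustration: each gap phase is an insertion sort with
stride h, and the gap sequence is generated from h₁ = 1, hᵢ₊₁ = 3hᵢ + 1 as
described above; for n = 10 it yields the gaps 4 and 1 used in Example 2):

#include <iostream>

void shellSort(int data[], int n)
{
    int h = 1;
    while (3 * h + 1 < n)            // largest increment below n
        h = 3 * h + 1;
    for (; h >= 1; h /= 3)           // diminishing increments, ending at 1
        for (int i = h; i < n; i++)  // insertion sort with stride h
        {
            int value = data[i];
            int j = i;
            while (j >= h && data[j - h] > value)
            {
                data[j] = data[j - h];   // shift larger elements right
                j -= h;
            }
            data[j] = value;
        }
}

int main()
{
    int data[] = {5, 8, 2, 4, 1, 3, 9, 7, 6, 0};
    shellSort(data, 10);
    for (int i = 0; i < 10; i++)
        std::cout << data[i] << " ";     // prints 0 1 2 3 4 5 6 7 8 9
    std::cout << "\n";
    return 0;
}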
Heap Sort
A heap is a binary tree that has two properties:
The value of each node is greater than or equal to the values stored
in each of its children.
The tree is perfectly balanced, and the leaves in the last level are
all in the leftmost positions. (filled from left to right)
A property of the heap data structure is that the largest
element is always at the root of the tree.
A common way of implementing a heap is to use an array,
with the children of the node at index i stored at indices
2i+1 and 2i+2.
Heap Sort…
The heap sort works by first rearranging the input array so
that it is a heap, and then removing the largest elements
from the heap one by one. Pseudocode for the heap sort is
given below:
HeapSort(data):
Transform data into a heap
for (i = n-1; i > 0; i--)
    Swap the root with the element in position i
    Restore the heap property for the tree data[0] … data[i-1]
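A C++ sketch along the lines of this pseudocode (our own illustration,
using the array representation of the heap described above):

void siftDown(int data[], int root, int size)
{
    // Restore the heap property for the subtree rooted at index root,
    // considering only data[0..size-1].
    int value = data[root];
    while (2 * root + 1 < size)
    {
        int child = 2 * root + 1;                       // left child
        if (child + 1 < size && data[child + 1] > data[child])
            child++;                                    // take the larger child
        if (data[child] <= value)
            break;                                      // heap property holds
        data[root] = data[child];                       // move the child up
        root = child;
    }
    data[root] = value;
}

void heapSort(int data[], int n)
{
    for (int i = n / 2 - 1; i >= 0; i--)   // transform data into a heap
        siftDown(data, i, n);
    for (int i = n - 1; i > 0; i--)        // remove the largest one by one
    {
        int tmp = data[0]; data[0] = data[i]; data[i] = tmp;
        siftDown(data, 0, i);              // restore heap for data[0..i-1]
    }
}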
Example

(Heap sort example figure not reproduced.)
Quick Sort
Quicksort was developed by C. A. R. Hoare in 1961, and is
probably the most famous and widely used of sorting
algorithms.
The guiding principle behind the algorithm is similar to
that of shell sort: it is more efficient to sort a number of
smaller subarrays than to sort one big array

The Algorithm
In quicksort the original array is first divided into two
subarrays, the first of which contains only elements that
are less than or equal to a chosen element, called the
bound or pivot.
The second subarray contains elements that are greater
than or equal to the bound. If each of these subarrays is
sorted separately they can be combined into a final
sorted array.
To sort each of the subarrays, they are both subdivided
again using two new bounds, making a total of 4
subarrays.
The partitioning is repeated again for each of these
subarrays, and so on until the subarrays consist of a
single element each and do not need to be sorted.

Quicksort is inherently recursive in nature, since it consists
of the same simple operation (the partitioning) applied to
successively smaller subarrays.
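As a rough illustration, quicksort can be sketched in C++ as follows (our
own code, assuming integer data; the middle element is used as the bound,
matching the example that follows):

void quickSort(int data[], int left, int right)
{
    if (left >= right)
        return;                              // 0 or 1 element: already sorted
    int bound = data[(left + right) / 2];    // pivot from the middle
    int i = left, j = right;
    while (i <= j)                           // partition around the bound
    {
        while (data[i] < bound) i++;         // find element >= bound on the left
        while (data[j] > bound) j--;         // find element <= bound on the right
        if (i <= j)
        {
            int tmp = data[i]; data[i] = data[j]; data[j] = tmp;
            i++; j--;
        }
    }
    quickSort(data, left, j);                // sort the two subarrays
    quickSort(data, i, right);
}

// Initial call: quickSort(data, 0, n - 1);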

The pivot element can be selected in many ways: we
may select the first element as the pivot, we may select
the middle element of the array, or a randomly
selected pivot (bound) element can be used. The
following example demonstrates a pivot element selected
from the middle of the data. On average, quicksort
has a complexity of O(n log n).
Example [ pivot selected from the middle]

(Quicksort example figure not reproduced; see the description below.)
In the above Example
To begin with, the middle element 6 is chosen as the
bound, and the array partitioned into two subarrays.
Each of these subarrays is then partitioned using the
bounds 2 and 10 respectively.
In the third phase, two of the four subarrays only have
one element, so they are not partitioned further.
The other two subarrays are partitioned once more,
using the bounds 3 and 7.
Finally we are left with only subarrays consisting of a
single element. At this point, the subarrays and bounds
are recombined successively, resulting in the sorted
array.
Merge Sort
Mergesort also works by successively partitioning the
array into two subarrays, but it guarantees that the
subarrays are of approximately equal size.
This is possible because in mergesort the array is
partitioned without regard to the values of their
elements: it is simply divided down the middle into
two halves.
Each of these halves is recursively sorted using the
same algorithm.
After the two subarrays have been sorted, they are
merged back together again.
The recursion continues until the subarrays consist of
a single element, in which case they are already sorted.
Pseudocode for merge sort
(Pseudocode figure not reproduced.)
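A recursive C++ sketch of the algorithm (our own illustration; the
subarray data[left..right] is split down the middle, each half sorted
recursively, and the two sorted halves merged through a temporary buffer):

#include <vector>

void mergeSort(std::vector<int>& data, int left, int right)
{
    if (left >= right)
        return;                            // 0 or 1 element: already sorted
    int middle = (left + right) / 2;
    mergeSort(data, left, middle);         // sort the two halves
    mergeSort(data, middle + 1, right);

    std::vector<int> merged;               // merge the sorted halves
    int i = left, j = middle + 1;
    while (i <= middle && j <= right)
        merged.push_back(data[i] <= data[j] ? data[i++] : data[j++]);
    while (i <= middle) merged.push_back(data[i++]);
    while (j <= right)  merged.push_back(data[j++]);
    for (int k = 0; k < (int)merged.size(); k++)
        data[left + k] = merged[k];        // copy back into place
}

// Initial call: mergeSort(data, 0, (int)data.size() - 1);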
Example

(Merge sort example figure not reproduced.)
