
SDA


• This recap of topics, questions (most with answers), and notes serves as a study guide: it shows you the areas to focus on and the type of questions that may appear on the exam. So, please keep these slides at hand while you study.
ADT and data structure
• An abstract data type (ADT) is the specification of a data type within
some language, independent of an implementation.
• The interface for the ADT is defined in terms of a type and a set of
operations on that type.
• The behavior of each operation is determined by its inputs and
outputs.
• An ADT does not specify how the data type is implemented.
• These implementation details are hidden from the user of the ADT and
protected from outside access, a concept referred to as encapsulation.
• An ADT is a mathematical/logical/abstract model of an entity.
• That is, an abstract data type logically defines an interface through its operations.
• A data structure is a physical/implementation model of an entity (a small C++ sketch below illustrates the distinction).
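For illustration only (a minimal sketch, not taken from the course slides; the class name IntStack and its members are chosen here): the public member functions below play the role of an ADT's interface, while the private array is the hidden data structure that implements it. A linked list could replace the array without changing the interface, which is exactly the encapsulation idea described above.

#include <iostream>
using namespace std;

class IntStack //a stack ADT for ints
{
public:
    void push(int x) { data[count++] = x; } //operation defined only by its input/output behaviour
    int pop() { return data[--count]; }     //no bounds checking, to keep the sketch short
    bool empty() const { return count == 0; }
private:
    int data[100]; //implementation detail, hidden from the user (encapsulation)
    int count = 0;
};

int main()
{
    IntStack s;
    s.push(10); s.push(20);
    cout << s.pop() << endl; //prints 20
    return 0;
}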
Algorithm
• What?
• Analysis
• The two approaches/methods
– A priori analysis: affected only by the input size and the nature of the input
– A posteriori analysis: affected by the input size and the nature of the input plus …..

• Why analysis?
To determine:
– its time requirement
– its space requirement
– its bandwidth requirement
– its power consumption //if the problem is to develop system programs
• Properties of algorithms
– Briefly and correctly explain.

Q. There are some algorithms that do not terminate for certain inputs.//true or false?
Algorithm…
Q. Given the growth functions:
n!, 2^n, 2n^2, 5nlogn, 20n, 10n, what is the value of n for
which the growth function 20n is the most efficient of the six?
A. 1
B. 5
C. 11
D. None //you cannot find such a value
Q2. Arrange the above growth functions in increasing order.
Q3. Arrange the following growth functions in increasing order.
2^(log3 n), log2 n, nloglogn, 2^(logn)
Asymptotic notations

• Big-oh: O
• Big omega: Ω
• Theta: Θ
• Give the formal (mathematical) definition of each.
• Their implications (interpretations)
Proving big-oh, omega and theta

• Example, given f(n)=2n^2 +6n +1 and g(n)=n^2,


Show that:
f(n)=O(g(n))
f(n)=Ω(g(n))
f(n)=Θ(g(n))
• It just involves finding c (or c1 & c2) and n0 so that the inequalities in the corresponding definition hold (see the worked example below).
• Remember that you can find more than one value for c, c1 & c2, and n0.
• Define growth rate of an algorithm.
• The higher the growth rate of an algorithm, the slower (less efficient) it is, and vice versa.
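As a worked illustration (one possible choice of constants, not the only one), take f(n)=2n^2+6n+1 and g(n)=n^2:
f(n) = 2n^2+6n+1 <= 2n^2+6n^2+n^2 = 9n^2 for all n >= 1, so c=9 and n0=1 show that f(n)=O(g(n)).
f(n) = 2n^2+6n+1 >= 2n^2 for all n >= 1, so c=2 and n0=1 show that f(n)=Ω(g(n)).
Taking c1=2, c2=9 and n0=1 gives 2n^2 <= f(n) <= 9n^2 for all n >= 1, so f(n)=Θ(g(n)).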
Finding time complexity using the step (frequency) count method

• Determine i) the time complexity (derive a function of n) and ii) big-oh, O, for the following code fragments. Assume all variables are of type int.
• Example 1
void f(int n)
{
int i,j,k,count=0;………………………1…………….c1
for(i=n/2;i<=n;i++)……………………n+1………..c2(n+1)
for(j=1;j+n/2<=n;j++)……..n*(n+1)……c3(n*(n+1))
for(k=1;k<=n;k*=2)…..n^2logn…c4(n^2logn)
count++;……1*n^2logn….c5(n^2logn)
}
After multiplying each term by its constant and adding, you get a function of the form a*n^2logn + b*n^2 + c*n + d ~ the time complexity.
Therefore, the asymptotic time complexity (big-oh) is O(n^2logn).
Finding time complexity using the step (frequency) count method…

• Example 2. Find the asymptotic time complexity of the following code fragment.

void f(int n)
{
int i,j,k,count=0;
for(i=n/2;i<=n;i++) //executes O(n) times
for(j=1;j<=n;j*=2) //executes O(n)*O(logn) times //O(nlogn)
for(k=1;k<=n;k++) //executes O(nlogn)*O(n) times //O(n^2logn)
count++; //executes 1*O(n^2logn)
}
So, the asymptotic time complexity is O(n^2logn)
Finding time complexity using the step (frequency) count method…

• Example 3. What is the time complexity of the ff. code fragment?

void f(int n)
{
if(n<=1)………………………………..O(1)
return; //assumed early exit for trivial input
int i,j;………………………………….O(1)
for(i=1;i<=n;i++)……………………O(n)
for(j=1;j<=n;j++){…………O(1) //executes only once for each value of i because of the break
cout<<"welcome\n";…..O(1)
break;
}
}
So, the time complexity is O(n)
Finding time complexity using the step (frequency) count method…

• Example 4. What is the time complexity of the ff. code fragment?

void f(int n)
{
int i,j;…………………………………….1
for(i=1;i<=n/3;i++)…………………..n+1 //worst-case bound
for(j=1;j<=n;j+=4){………..n*(n+1) //worst-case bound
cout<<"welcome\n";…..1*n^2
}
}
Thus, time complexity T(n)=2+2n+2n^2=O(n^2)
Finding time complexity using the step (frequency) count method…

• Example 5. What is the time complexity of the ff. code fragment?
int i;
for(i=1;i<=n; ) //no increment in the header; i is doubled in the body
{
i=i*2;
}
Time complexity, T(n)=O(logn). Why? (See the note below.)
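One way to see it (a sketch of the reasoning): i doubles on every pass, so after k passes i = 2^k; the loop stops as soon as 2^k > n, i.e., after about log2(n) passes, each doing O(1) work, hence T(n)=O(logn).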
Finding time complexity using the step (frequency) count method…

• Example 5 (variation). What is the time complexity of the ff. code fragment?
int i;
for(i=1;i<=32; )
{
i=i*2;
}
Time complexity, T(n)=?
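A possible answer (the loop bound here is the constant 32, not n): the loop always runs the same number of passes (i = 1, 2, 4, 8, 16, 32, i.e., 6 passes) regardless of the input, so T(n)=O(1).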
Finding time complexity using the step (frequency) count method…
• Example 5 (another variation). What is the time complexity of the ff. code fragment?
int i;
for(i=n;i>=1; ) //assumed: i starts at n and is halved in the body
{
i=i/2;
}
Time complexity, T(n)=O(logn) too
Finding time complexity…

• Example 6. What is the time complexity of the ff. code fragment?

void fun(int n)
{
int i=1,j;
while(i<n){ //i doubles each pass, so this executes about logn times
j=n; //logn
while(j>0){ //about logn tests per pass of the outer loop → logn*logn
j=j/2; //logn*logn
}
i=i*2; //executes logn times //end of the outer while body
}
} //≈ 1 + 3logn + 2(logn)^2
So, time complexity, T(n)=O(logn*logn)=O((logn)^2)
Finding time complexity…

• Example 7. What is the time complexity of the ff. code fragment?
int i;
for(i=1;i<=100;i++)
cout<<i;
a. O(n) b. O(1) c. O(100) d. b & c
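A short note on the options: the loop bound 100 is a constant independent of n, so the loop body runs a fixed number of times and the complexity is constant; since O(100) denotes the same set of functions as O(1), option d (b & c) is the answer usually expected here.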
Faster computer or faster algorithm?
Q1. Suppose that a particular algorithm has time complexity T(n)=8n
and that executing an implementation of it on a particular machine
takes t seconds for n inputs. Now suppose that we are presented with
a machine that is 64 times as fast. How many inputs could we process
on the new machine in t seconds?
• Let C be the total number of CPU cycles PC A takes to process n inputs in t seconds, i.e., C = T(n) = 8n. What input size n1 can PC B (64 times faster than PC A) process in the same amount of time (t seconds)?
PC A in t secs: C = T(n) = 8n
PC B in t secs: 64C = T(n1) = 8n1
8n1 = 64C = 64(8n)
n1 = 64n
PC B can solve a problem of input size 64 times the original input size
in the same amount of time.
Faster computer or faster algorithm?
Q2. Suppose that a particular algorithm has time complexity T(n)=n^2 and
that executing an implementation of it on a particular machine takes t seconds for
n inputs. Now suppose that we are presented with a machine that is 64 times as
fast. How many inputs could we process on the new machine in t seconds?
• Follow the same method above to solve the problem.
PC A in t secs: C = T(n) = n^2
PC B in t secs: 64C = T(n1) = n1^2
n1^2 = 64C = 64n^2
n1 = sqrt(64n^2)
n1 = 8n
That is, the faster PC can handle an input 8 times as large as the slower PC can in the same amount of time.
Faster computer or faster algorithm?
Q3. Suppose that a particular algorithm has time complexity
T(n)=2^n and that executing an implementation of it on a
particular machine takes t seconds for n inputs. Now suppose that we
are presented with a machine that is 10 times as fast. How many
inputs could we process on the new machine in t seconds?
By following the same procedure: 2^n1 = 10(2^n), so n1 = n + log2(10) ≈ n + 3.
Conclusion:
Based on the above observations, buying a faster computer to solve a problem of bigger size in less time is not the best solution; the better solution is designing a faster (more efficient) algorithm.
Sorting and searching Algorithms

 Given the input array [55, 33, 44, 22, 11],


show how each of the ff. algorithms works when applied to sort the array (as discussed in class):
– Bubble sorting algorithm
– Insertion sorting algorithm
– Selection sorting algorithm
• For example, what is the intermediate result of each sorting algorithm
after the 3rd pass?
• And you have to be able to implement (write a function for) each sorting and searching algorithm; a sketch of one sorting function and binary search follows after this list.
• And also show how binary search algorithm works when applied to the
array.
• You have to know (at least theoretically) the asymptotic time complexity
of the searching and sorting algorithms in all cases.
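As a starting point, here is a minimal sketch (one possible implementation; the names bubbleSort and binarySearch are chosen here, not taken from the slides) of one sorting function and binary search on an int array:

#include <iostream>
using namespace std;

void bubbleSort(int a[], int n)
{
    for(int pass = 0; pass < n-1; pass++) //n-1 passes over the array
        for(int i = 0; i < n-1-pass; i++) //compare adjacent pairs
            if(a[i] > a[i+1]) //swap if out of order
            {
                int tmp = a[i];
                a[i] = a[i+1];
                a[i+1] = tmp;
            }
}

int binarySearch(int a[], int n, int key) //a[] must already be sorted
{
    int low = 0, high = n-1;
    while(low <= high)
    {
        int mid = (low + high) / 2;
        if(a[mid] == key) return mid; //found: return the index
        else if(a[mid] < key) low = mid + 1; //search the right half
        else high = mid - 1; //search the left half
    }
    return -1; //not found
}

int main()
{
    int a[] = {55, 33, 44, 22, 11};
    bubbleSort(a, 5);
    for(int i = 0; i < 5; i++) cout << a[i] << " "; //prints 11 22 33 44 55
    cout << "\nindex of 44: " << binarySearch(a, 5, 44) << endl; //prints 3
    return 0;
}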
Functions to create and display a singly-linked list

• //First, declare node structure – global declaration


#include <iostream>
using namespace std;
struct node
{
int data;
node *next;
};
node *head=0,*p,*newnode; //globally declared pointers
//next slides for the functions
void createSLL()
{
p=head; //p and head contain NULL or 0 - no node created
int choice;
do
{
newnode=new node; //allocates memory for a new node & stores its address in newnode
cout<<"Enter an integer ";
cin>>newnode->data;
newnode->next=0;
if(head==NULL) //no node created yet
{
head=p=newnode; //need p b/c can’t move head pointer
}
else{
p->next=newnode;
p=newnode;
}
cout<<"Do you want to continue(enter 0 or 1)? "; //need the loop to create more
cin>>choice;
}while(choice); //end of loop, you can also use for…loop
}//end of the function
Function to display elements of singly linked list
void display()
{
p=head; //p should point to the head node
while(p!=NULL) //stop when p=NULL/0- no more node
{
cout<<p->data<<" "; //display data

p=p->next; //advance p to the next node


}
}
//main function –call the functions from here
int main()
{

createSLL();
display();
return 0;
}
And also be aware of the following…

• Think of writing functions to (a sketch of two of these follows after this list):


– insert node at the beginning
– Insert node at end
– Insert node at position
in already created linked list
– Delete node from the beginning
– Delete node from the end
– Delete node at position
from an existing linked list
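A minimal sketch of two of these operations (the function names are chosen here; the code reuses the struct node and the globally declared head and newnode pointers from the earlier slides):

void insertAtBeginning(int x)
{
newnode=new node; //allocate a new node
newnode->data=x;
newnode->next=head; //the new node points to the old first node
head=newnode; //head now points to the new node
}
void deleteFromBeginning()
{
if(head==NULL) return; //empty list – nothing to delete
node *temp=head;
head=head->next; //the second node becomes the first
delete temp; //free the removed node
}
Both operations touch only the head pointer, which is why they run in O(1).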
Time complexity of a singly linked list basic operations

• create(): O(n)
• Traverse(): O(n)
• insertatBeg(x): O(1)
• insertAtPos(x): O(n/2) ~O(n)
• insertatEnd(x): O(n) //need to traverse to get there
• deletefromBeg(): O(1) //only the head pointer needs to change
• deletefromPos(x): O(n/2)~ O(n)
• deletefromEnd(x): O(n)
• display(): O(n)
struct Node
{
int data;
Node* next;
};

Assume, based on the node structure definition above, that the following sequence of operations was performed on the singly linked list shown on the slide (judging from the answer choices, a list of five nodes A→B→C→D→E, with First pointing to node A).
Node* P; //P is a pointer
1. P=First->next->next
2. P->next->next->next = First
3. P=P->next->next
4. cout<<P->data;
The cout statement outputs ____________. //the answer is (a) – see the trace below
a. E
b. D
c. C
d. B
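One possible trace (assuming the list A→B→C→D→E described above, with First pointing to A):
1. P=First->next->next makes P point to the third node, C.
2. P->next->next->next = First sets the next field of the node two places after C, i.e. of E, so E now points back to A.
3. P=P->next->next advances P two nodes, from C through D to E.
4. cout<<P->data therefore prints E's data, which is option (a).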
struct Node
{
int data;
Node* next;
};

[Diagram: a singly linked list of five nodes stored at addresses 100, 150, 200, 400, 500, with P pointing to the head node at address 100 and q pointing to a new node at address 300 that is to be inserted.]
Use the node structure above and write a C++ code snippet that
inserts the given node after C (between C & D). Notice P is
pointing to the head node & you need to use it.
Hint: Establish the right link first and then the left link.
Solution:
P=P->next->next; //P now points to the third node, C
q->next=P->next; //right link established: the new node now points to D
P->next=q; //left link set: C now points to the new node
• Which one of the following data structures is the best to answer a question like
"What is the item at position n?"
A. array //yes, an array is a random-access data structure
B. singly linked list
C. doubly linked list
D. all
• What would be the asymptotic time complexity to find an element in the linked
list?
A. O(1)
B. O(n) //need to traverse the list
C. O(n^2)
D. O(n^4)
• What would be the asymptotic time complexity to add a node at the end of a
singly linked list, if the pointer is initially pointing to the head of the list?
A. O(1)
B. O(n)
C. θ(n)
D. θ(1)
// B and C are correct answers
• Which one of the following statements is not true about the linked list data
structure when it is compared with an array?
a) Arrays have better cache locality that can make them better in terms of
performance //true, since array elements are contiguous
b) It is easy to insert and delete elements in a linked list //true
c) Random access is possible in a typical implementation of linked lists //not true – only a node you already hold a pointer to (e.g., the head) is directly reachable; getting to position i requires traversal
d) Access of elements in a linked list takes less time than in an array //not true either
• In a linked list, each node contains a minimum of two fields. One field is the data
field to store the data, and the second field contains:
A. Pointer to the next node
B. Pointer to character
C. NULL
D. A & C //yes – the second field of the last node points to nothing (NULL)
