DAA
ALGORITHM? An algorithm is a step-by-step procedure for solving a problem. It
usually takes one or more inputs, runs them systematically through a series of steps, and
produces one or more outputs. Algorithms are typically associated with computing and are
an essential element of computer programming.
NEED: 1. Problem-solving: Algorithms help us solve complex problems by breaking them
down into smaller, manageable steps. 2. Efficiency: Algorithms enable us to solve problems
efficiently, using the least amount of resources (such as time and memory) necessary.
3. Accuracy: By following a set of well-defined instructions, algorithms produce consistent,
accurate, and reliable results. 4. Communication: Algorithms provide a common language
and framework for communication among developers, making it easier to understand and
work with each other's code. 5. Optimization: Algorithms can be optimized to improve their
performance, making it possible to achieve better results with fewer resources.
FACTORS AFFECTING RUNNING TIME: 1. Parallelism: A multi-threaded program running on
a multicore machine can be faster. 2. CPU Utilization: If the CPU is already busy with other
processes, the running time of the algorithm will increase. 3. Recursion: Recursion can add
considerable overhead, increasing the running time. 4. Disk Layout: The layout of storage
can affect the running time, with RAID potentially being faster than NAS, SAN, or other
storage layouts. 5. Buggy Synchronization: Deadlocks or other synchronization issues can
cause the program to stall, increasing the running time.
CRITERIA: 1. Input: The inputs should be finite in number. 2. Output: The output should be
finite in size. 3. Definiteness: The algorithm should be clear and easy to understand, with
each step precisely defined. 4. Finiteness: The algorithm should terminate after a finite
number of steps. 5. Effectiveness: Each step should be basic enough to be feasible to
execute. 6. Language Independence: The algorithm can be implemented in more than one
programming language. 7. Unambiguity: Each step should lead to only one meaning.
8. Termination: The algorithm should not get stuck in an infinite loop. 9. Correctness: The
algorithm should be free from errors and produce the desired result. 10. Efficiency: The
algorithm should use time and space economically.
ASYMPTOTIC NOTATION? Asymptotic notation is a mathematical tool used to describe the
complexity of algorithms, which is the amount of time or space an algorithm requires to
complete a task. It provides a way to analyze the performance of algorithms by ignoring
lower-order terms and focusing on the most significant factors. 1. Big O Notation (Upper
Bound): Big O notation represents the upper bound of an algorithm’s complexity. It gives
an upper limit on the amount of time or space an algorithm requires to complete a task.
Example: O(n^2) - The algorithm’s complexity grows quadratically with the input size n. 2.
Big Omega Notation (Lower Bound): Big Omega notation represents the lower bound of an
algorithm’s complexity. It gives a lower limit on the amount of time or space an algorithm
requires to complete a task. Example: Ω(n) - The algorithm’s complexity grows linearly with
the input size n. 3. Big Theta Notation (Tight Bound): Big Theta notation represents the tight
bound of an algorithm’s complexity. It gives a bound on both the upper and lower limits of
an algorithm’s complexity. Example: Θ(n^2) - The algorithm’s complexity grows
quadratically with the input size n.
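The growth rates above can be illustrated with a small Python sketch (function names are ours) that counts basic operations for a linear and a quadratic loop:

```python
def count_linear(n):
    """O(n): one pass over the input."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_quadratic(n):
    """O(n^2): a nested pass, as when comparing every pair of elements."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

# Doubling n doubles the linear count but quadruples the quadratic count.
```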
MAX AND MIN HEAP? In a Max-Heap, the key at the root node must be greater than or
equal to the keys of all of its children, and the same property must hold recursively for
all sub-trees of that binary tree. Hence the maximum key is always present at the root of
a Max-Heap. MIN: In a Min-Heap, the key at the root node must be less than or equal to
the keys of all of its children, and the same property must hold recursively for all
sub-trees of that binary tree. Hence the minimum key is always present at the root of a
Min-Heap.
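The two heap properties can be demonstrated with Python's standard heapq module, which implements a min-heap (a common trick, shown here, simulates a max-heap by negating keys):

```python
import heapq

# Min-heap: heapq keeps the smallest key at the root (index 0).
h = []
for key in [5, 1, 9, 3]:
    heapq.heappush(h, key)
smallest = h[0]            # the minimum key sits at the root
popped = heapq.heappop(h)  # removes and returns the minimum

# Max-heap: heapq has no built-in max-heap, so store negated keys.
mh = []
for key in [5, 1, 9, 3]:
    heapq.heappush(mh, -key)
largest = -mh[0]           # the maximum key sits at the root
```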
BINARY SEARCH :- Binary search is a classic algorithm used to find the position of a target
value within a sorted array. It operates by repeatedly dividing the search interval in half. If
the value of the search key is less than the item in the middle of the interval, the search
narrows to the lower half. Otherwise, it narrows to the upper half. Here's a step-by-step
description of the algorithm. Initialization: Initialize two pointers, left and right, to the start
and end of the list, respectively.
Loop until left is greater than right: Calculate the middle index, ‘mid’, using the formula
‘(left + right) // 2’. Compare the element at the middle index with the target: If the element
at ‘mid’ is equal to the target, return ‘mid’ (index of the target). If the element at ‘mid’ is
less than the target, move the ‘left’ pointer to ‘mid + 1’ to search the right half. If the
element at ‘mid’ is greater than the target, move the ‘right’ pointer to ‘mid – 1’ to search
the left half. If the target is not found: If the loop exits without returning, the target is not
in the list, so return -1.
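The steps above can be sketched directly in Python (a minimal version; the function name is ours):

```python
def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid            # found: return the index
        elif arr[mid] < target:
            left = mid + 1        # search the right half
        else:
            right = mid - 1       # search the left half
    return -1                     # target is not in the list
```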
STRASSEN'S MATRIX MULTIPLICATION? Strassen's algorithm, developed by Volker Strassen in 1969, is a fast
algorithm for matrix multiplication. It is an efficient divide-and-conquer method that
reduces the number of arithmetic operations required to multiply two matrices compared
to the conventional matrix multiplication algorithm (the naive approach). The traditional
matrix multiplication algorithm has a time complexity of O(n^3) for multiplying two n x n
matrices. However, Strassen's algorithm improves this to O(n^log2(7)), which is
approximately O(n^2.81). The algorithm achieves this improvement by recursively breaking
down the matrix multiplication into smaller subproblems and combining the results.
Given two matrices A and B, of size n x n, we divide each matrix into four equal-sized
submatrices, each of size n/2 x n/2:
A = | A11 A12 |    B = | B11 B12 |
    | A21 A22 |        | B21 B22 |
We then define seven intermediate matrices: P1 = A11 * (B12 - B22), P2 = (A11 + A12) * B22,
P3 = (A21 + A22) * B11, P4 = A22 * (B21 - B11), P5 = (A11 + A22) * (B11 + B22), P6 = (A12 -
A22) * (B21 + B22) , P7 = (A11 - A21) * (B11 + B12)
Next, we recursively compute seven products of these submatrices, i.e., P1, P2, P3, P4, P5,
P6, and P7.
Finally, we combine the results to obtain the four submatrices of the resulting matrix C of
size n x n:
C11 = P5 + P4 - P2 + P6 , C12 = P1 + P2 , C21 = P3 + P4 , C22 = P5 + P1 - P3 - P7
Concatenate the four submatrices C11, C12, C21, and C22 to obtain the final result matrix
C.
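The seven products and four combinations above can be checked with a plain-Python sketch of the 2x2 base case (scalar entries; a full implementation would recurse on n/2 x n/2 submatrices instead):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven intermediate products P1..P7.
    p1 = a11 * (b12 - b22)
    p2 = (a11 + a12) * b22
    p3 = (a21 + a22) * b11
    p4 = a22 * (b21 - b11)
    p5 = (a11 + a22) * (b11 + b22)
    p6 = (a12 - a22) * (b21 + b22)
    p7 = (a11 - a21) * (b11 + b12)
    # Combine into the four submatrices of C.
    c11 = p5 + p4 - p2 + p6
    c12 = p1 + p2
    c21 = p3 + p4
    c22 = p5 + p1 - p3 - p7
    return [[c11, c12], [c21, c22]]
```

The result agrees with the conventional 8-multiplication product; the saving of one multiplication per level is what yields the O(n^log2(7)) bound when applied recursively.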
GREEDY METHOD? The greedy method is a problem-solving approach used in algorithms
where, at each step, the best decision is made without regard for future consequences. It's
like navigating a maze by always taking the path that seems most promising at the moment
without considering whether it leads to the best outcome in the end. Elements of the
greedy method are: Greedy Choice Property: At each step of the algorithm, the locally optimal
choice is made without considering the entire problem. This means that the choice made
at each step appears to be the best option at that moment, with no consideration for future
consequences. Optimal Substructure: The problem can be solved by breaking it down into
smaller subproblems, each of which can be solved independently. Greedy Algorithm: The
algorithm iteratively makes the greedy choice and builds the solution incrementally. Proof
of Correctness: Proving that a greedy algorithm produces an optimal solution often involves
demonstrating that the locally optimal choices lead to a globally optimal solution. This may
require mathematical induction or other techniques to show that the greedy choice is part
of an optimal solution. Examples: Greedy algorithms are commonly used for optimization
problems such as minimum spanning trees, shortest path algorithms, and scheduling
problems.
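A classic illustration is activity selection, a scheduling problem where the greedy choice (always pick the compatible activity that finishes earliest) is known to be optimal. A minimal Python sketch (names are ours):

```python
def activity_selection(activities):
    """Greedy: sort by finish time, then repeatedly pick the
    earliest-finishing activity compatible with the last one chosen."""
    chosen = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:      # compatible with the last pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```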
4-QUEENS PROBLEM? To construct the state space tree, we can utilize the backtracking
technique, which allows us to explore all possible solutions.
Here is the step-by-step logic for generating the state space tree: 1.Begin by considering
each position (column) in the current row. 2.Check all the previous rows to determine if
there is already a queen placed in any of the columns. 3.Additionally, check the previous
diagonal columns to verify if there is a queen placed in any of those positions. 4.If either of
these conditions is true, indicating that two queens would be attacking each other,
backtrack to the previous row and move the previous queen one step forward. 5.If none of
the previous positions conflict, place the current queen in the current position and move
to the next row.
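The backtracking logic above can be sketched in Python for any board size n (function names are ours; board[r] records the column of the queen in row r):

```python
def solve_n_queens(n):
    """Backtracking search: returns one valid placement, or None."""
    board = []  # board[r] = column of the queen placed in row r

    def safe(row, col):
        # Check previous rows for a queen in the same column or diagonal.
        for r, c in enumerate(board):
            if c == col or abs(row - r) == abs(col - c):
                return False
        return True

    def place(row):
        if row == n:                  # all queens placed
            return True
        for col in range(n):
            if safe(row, col):
                board.append(col)
                if place(row + 1):
                    return True
                board.pop()           # backtrack: undo and try next column
        return False

    return board if place(0) else None
```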
MINIMUM COST SPANNING TREE? The minimum spanning tree has all the properties of a
spanning tree with an added constraint of having the minimum possible weights among all
possible spanning trees. Like a spanning tree, there can also be many possible MSTs for a
graph. Properties of a Spanning Tree: The spanning tree holds the below-mentioned
principles: 1. The number of vertices (V) in the graph and the spanning tree is the same.
2.There is a fixed number of edges in the spanning tree which is equal to one less than the
total number of vertices (E = V - 1). 3. The spanning tree should not be disconnected;
it should consist of a single connected component. 4. The spanning tree
should be acyclic, which means there would not be any cycle in the tree. 5.The total cost
(or weight) of the spanning tree is defined as the sum of the edge weights of all the edges
of the spanning tree. 6.There can be many possible spanning trees for a graph.
There are several algorithms to find the minimum spanning tree from a given graph, some
of them are listed below: Kruskal's MST Algorithm, Prim's MST Algorithm, Boruvka's
Algorithm, Reverse-Delete Algorithm
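As one example, Kruskal's algorithm can be sketched in Python with a union-find structure to reject cycle-forming edges (the edge format and names are ours):

```python
def kruskal(num_vertices, edges):
    """Kruskal's MST: edges is a list of (weight, u, v) tuples.
    Returns (mst_edges, total_cost)."""
    parent = list(range(num_vertices))   # union-find parent array

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):        # consider edges by ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # different components: no cycle
            parent[ru] = rv              # union the two components
            mst.append((u, v, w))
            total += w
    return mst, total
```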
0/1 KNAPSACK PROBLEM? In the 0/1 knapsack problem, each item is either taken
completely or not taken at all; items cannot be divided. For example, if we have two items
weighing 2 kg and 3 kg, we cannot take 1 kg out of the 2 kg item; we must either take the
2 kg item whole or leave it. This is the 0/1 knapsack problem: for each item, we either pick
it completely or skip it. The 0/1 knapsack problem is solved by dynamic programming. The
fractional knapsack problem, in contrast, allows dividing items: from a 3 kg item we can
take 2 kg and leave 1 kg. The fractional knapsack problem is solved by the greedy
approach. Example of the 0/1 knapsack problem:
Consider a problem with weights {3, 4, 6, 5} and profits {2, 3, 1, 4}. The capacity of the
knapsack is 8 kg and the number of items is 4. Candidate solutions can be written as
vectors xi, e.g. {1, 0, 0, 1}, {0, 0, 0, 1}, {0, 1, 0, 1}, where 1 denotes that the item is picked
completely and 0 means that the item is not picked. Since there are 4 items, the number
of possible combinations is 2^4 = 16. Among the combinations whose total weight does
not exceed the capacity of the knapsack, we select the one that provides the maximum
profit.
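The example can be checked with a standard 1-D dynamic-programming sketch (the function name is ours); for these weights and profits the best achievable profit is 6, from the items of weight 3 and 5:

```python
def knapsack_01(weights, profits, capacity):
    """Classic 0/1 knapsack DP: dp[w] = best profit within capacity w."""
    dp = [0] * (capacity + 1)
    for wt, p in zip(weights, profits):
        # Iterate capacity downwards so each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + p)
    return dp[capacity]
```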
P AND NP CLASS? A P class problem can be solved in "polynomial time," which means that
an algorithm exists for its solution such that the number of steps in the algorithm is
bounded by a polynomial function of n, where n corresponds to the length of the input for
the problem. Such problems are tractable. A problem belongs to the class NP
(Nondeterministic Polynomial time) if it can be solved in polynomial time by a
nondeterministic Turing machine; equivalently, a proposed solution can be verified in
polynomial time. Solutions to NP problems may be hard to find, but they are easy to check.
Relation of P and NP classes: 1. P is contained in NP. 2. Whether P = NP is unknown.
1. P is contained in NP: if we can solve a problem in polynomial time, we can certainly
verify a solution in polynomial time. More formally, we do not even need a certificate
(such as the vertices of a specific path) to verify the answer, because we can simply
solve the problem from scratch in polynomial time.
2. However, it is not known whether P = NP. If P = NP, then every problem whose solution
can be verified in polynomial time could also be solved in polynomial time. Whether this
holds is the famous open P versus NP question; no proof is known in either direction.
POLYNOMIAL TIME REDUCTION? Polynomial time reduction is a way of solving problem A
by the hypothetical routine for solving different problem B, which runs in polynomial time
. Basically, the polynomial reduction is a way of showing that problem A is not harder than
problem B. For example, we have some hypothetical algorithm, which can sort numerical
data in some polynomial time. The input to the algorithm can only be in numeric form.
Suppose we have a new problem to sort names of the cities across the country. What can
be done? Suppose we don’t have an efficient algorithm to handle string data. We can apply
some hashing function to the city names to map them to numeric values. Now the problem
is reduced to the numeric sorting problem. Thus, a polynomial reduction is a way of turning one problem
into another problem whose solution can be found in polynomial time.
Reduction takes one of three forms :1.Restriction 2.Local Replacement 3.Component
design.
Consider two decision problems A and B. A reduction from A to B transforms an input x of
A into an equivalent input f(x) of B by applying some transformation function to x. So,
given an input x to A, the reduction algorithm produces an intermediate input f(x) to
problem B such that B answers "yes" on f(x) if and only if A answers "yes" on x.
MERGE SORT? Merge sort is a sorting algorithm that follows the divide-and-conquer
approach. It works by recursively dividing the input array into smaller subarrays and sorting
those subarrays then merging them back together to obtain the sorted array. In simple
terms, we can say that the process of merge sort is to divide the array into two halves, sort
each half, and then merge the sorted halves back together. This process is repeated until
the entire array is sorted. Merge sort is known for its efficiency and stability.
Here's a step-by-step explanation of how merge sort works:
Divide: Divide the list or array recursively into two halves until it can be divided no further.
Conquer: Each subarray is sorted individually using the merge sort algorithm. Merge: The
sorted subarrays are merged back together in sorted order. The process continues until all
elements from both subarrays have been merged. Step 1: If it is only one element in the
list, consider it already sorted, so return. Step 2: Divide the list recursively into two halves
until it can be divided no further. Step 3: Merge the smaller lists into a new list in sorted order.
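The divide, conquer, and merge steps can be sketched as a minimal top-down Python implementation (the function name is ours):

```python
def merge_sort(arr):
    """Recursively split the list, sort each half, then merge in order."""
    if len(arr) <= 1:
        return arr                     # a single element is already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])       # conquer: sort each half
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # append any leftover elements
    merged.extend(right[j:])
    return merged
```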
QUICK SORT? QuickSort is a sorting algorithm based on the Divide and Conquer algorithm
that picks an element as a pivot and partitions the given array around the picked pivot by
placing the pivot in its correct position in the sorted array. WORKING: The key process in
quickSort is the partition() function. The goal of partitioning is to place the pivot (any
element can be chosen as the pivot) at its correct position in the sorted array, putting all
smaller elements to the left of the pivot and all greater elements to the right. Partitioning is done
recursively on each side of the pivot after the pivot is placed in its correct position and this
finally sorts the array.
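The partition-and-recurse process can be sketched in Python using the Lomuto partition scheme, which picks the last element as the pivot (names and scheme choice are ours; other pivot strategies are common):

```python
def partition(arr, low, high):
    """Lomuto partition: place arr[high] (the pivot) in its final spot."""
    pivot = arr[high]
    i = low - 1                        # boundary of the "smaller" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1                       # final index of the pivot

def quick_sort(arr, low=0, high=None):
    """In-place quicksort: partition, then recurse on each side."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)    # sort elements left of the pivot
        quick_sort(arr, p + 1, high)   # sort elements right of the pivot
    return arr
```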
BRANCH AND BOUND? The Least Cost (LC) Branch and Bound algorithm is a technique used
to solve optimization problems, such as the traveling salesman problem or the 0/1
knapsack problem. The goal is to systematically explore all possible solutions while using a
strategy to "prune" or eliminate large portions of the search space that cannot yield better
results than the current best solution. Here's a simplified explanation: Branching: This
involves dividing the problem into smaller subproblems. For example, in the 0/1 knapsack
problem, at each step, you decide whether to include an item in the knapsack or not.
Bounding: This involves calculating an optimistic estimate (lower bound for minimization
problems or upper bound for maximization problems) of the best possible solution that can
be obtained from a subproblem. Least Cost Node: This is the node (subproblem) with the
best optimistic estimate that hasn't been explored yet. The algorithm always expands this
node next. Algorithm steps : Initialization: Start with the root node, representing the initial
state of the problem. Calculate the bound for this node. Branching: Generate child nodes
by dividing the problem into smaller subproblems. Bounding: Calculate the bound for each
child node. This bound estimates the best solution that can be obtained from that
subproblem. Selection: Select the node with the least cost (best bound) that hasn't been
explored yet. Pruning: If the bound of a node is worse than the current best solution,
discard the node. Repeat: Repeat the branching, bounding, and selection process until all
nodes are either explored or pruned. Termination: The algorithm terminates when there
are no more nodes to explore.
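The LC (best-first) strategy can be sketched for the 0/1 knapsack problem, using the greedy fractional-knapsack profit as the optimistic bound and a priority queue to always expand the most promising node first (all names are ours; this is an illustrative sketch, not an optimized implementation):

```python
import heapq

def knapsack_lc_branch_and_bound(weights, profits, capacity):
    """Best-first branch and bound for 0/1 knapsack."""
    # Sort items by profit density; the bound fills remaining capacity
    # greedily, allowing one fractional item (an optimistic estimate).
    items = sorted(zip(weights, profits), key=lambda x: x[1] / x[0], reverse=True)
    n = len(items)

    def bound(level, weight, profit):
        remaining = capacity - weight
        b = profit
        for w, p in items[level:]:
            if w <= remaining:
                remaining -= w
                b += p
            else:
                b += p * remaining / w   # fractional part of the next item
                break
        return b

    best = 0
    # Max-heap via negated bound: the least-cost (best-bound) node is on top.
    heap = [(-bound(0, 0, 0), 0, 0, 0)]  # (-bound, level, weight, profit)
    while heap:
        neg_b, level, weight, profit = heapq.heappop(heap)
        if -neg_b <= best or level == n:  # prune: cannot beat current best
            continue
        w, p = items[level]
        # Branch 1: include item `level` if it fits.
        if weight + w <= capacity:
            if profit + p > best:
                best = profit + p
            heapq.heappush(heap, (-bound(level + 1, weight + w, profit + p),
                                  level + 1, weight + w, profit + p))
        # Branch 2: exclude item `level`.
        heapq.heappush(heap, (-bound(level + 1, weight, profit),
                              level + 1, weight, profit))
    return best
```

Note how every listed step appears: branching (include/exclude), bounding (the fractional estimate), selection (the heap pop), and pruning (the bound-versus-best test).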