
Dynamic Programming


Course Packet 10

Dynamic Programming

Maria Lolita G. Masangcap


Introduction
This course packet defines the concept of dynamic programming and shows its significance in algorithm design. It explains how this technique is applied in solving different types of computing problems.
At the end of this activity, students are expected to apply dynamic programming concepts to the given problems. Students are also expected to design an algorithm that applies this design technique to solve a computing problem.
At the end of this course packet, you will be able to define and solve computing problems using the concept of dynamic programming.

Design and Analysis of Algorithm
Dynamic Programming

Optimization Problems – maximize or minimize something
Optimal Substructure – a solution to a problem is recursively described in terms of solutions to subproblems
Memoization – solutions to subproblems are remembered

Dynamic Programming (DP) is applicable to optimization problems when a solution to a problem has optimal substructure and becomes efficient using memoization.
Dynamic Programming Parts

Simple Subproblems
• Recursively break the original problem into smaller subproblems
• Bottom-up, solve subproblems starting with the base cases
• Remember solutions to subproblems

Optimal Substructure of the Problems
• Characterize the optimal solution to contain solutions to subproblems
• Combine solutions to subproblems to solve bigger containing problems
Dynamic Programming Efficiency

Subproblem Overlap
• Similar problems can contain subproblems in common: two subproblems may contain a common sub-problem.
• This is when DP becomes more efficient than other methods.
• Brute-force and divide-and-conquer methods may solve the same subproblems over and over again, because they solve them independently.
Fibonacci Number
F0 = 0
F1 = 1
Fn = Fn-1 + Fn-2 for n > 1

The initial terms of the sequence:

(F0, F1, …) = (0, 1, 1, 2, 3, 5, 8, 13, …)

int fib(int n) {
    if (n <= 1)
        return n;
    return fib(n - 1) + fib(n - 2);
}

Recursive Tree for Fib(5): 15 calls
Fibonacci Number
This is a wasteful approach: it unnecessarily repeats work.
• For instance, in the calculation of Fib(5), the value Fib(2) is computed three times.
• Instead, compute Fib(2) once, store the result in a table, and access it when needed (memoization).

Recursive Tree for Fib(5)

Memo table F[0..5], before and after:
index:   0   1   2   3   4   5
before: -1  -1  -1  -1  -1  -1
after:   0   1   1   2   3   5
Fibonacci Number
More Efficient Recursive Fib

✓ Computes each F(i) only once
✓ This technique is called memoization.
✓ NIL – absence of a value
Fibonacci Number
Subproblem Dependencies
Figure out which subproblems rely on which other subproblems
Example:

Subproblem Computing Order


Then figure out an order for computing the subproblems that respects the
dependencies:
✓ When you are solving a subproblem, you must first solve all the subproblems on which it depends
✓ Example: Just solve them in order: F0, F1, F2, F3, …
Fibonacci: DP
DP Fibonacci Algorithm

Subproblem Computing Order

✓ Can perform application-specific optimizations
✓ Save space by keeping only the last two numbers computed
✓ Time reduced from exponential to linear
0/1 Knapsack Problem
Recall:

✓ Given a knapsack with maximum capacity W, and a set S consisting of n items
✓ Each item i has some weight wi and benefit value bi (all wi, bi and W are integer values)
✓ Problem: How to pack the knapsack to achieve the maximum total value of packed items?
✓ It is called the “0/1 Knapsack Problem” because each item must be entirely taken or left behind.
✓ Do not confuse it with the “Fractional Knapsack Problem”, where fractions of items can be taken.
0/1 Knapsack Problem
Recursive formula for subproblems

✓ If wk > w, item k cannot be part of the solution
✓ Else, the best subset of Sk that has total weight w is the better of:
  1. The best subset of Sk-1 that has total weight at most w, or
  2. The best subset of Sk-1 that has total weight at most w − wk, plus item k
0/1 Knapsack Algorithm

Example:
Let us run our algorithm on the
following data:

n = 4 (# of elements)
W = 8 (max weight capacity)
Elements (weight, benefit) :
(2, 1), (3, 2), (4, 5), (5, 6)

Running Time Analysis: O(n*W)


0/1 Knapsack Example
n = 4 (# of elements)
W = 8 (max weight capacity)
Elements (weight, benefit) :
(2, 1), (3, 2), (4, 5), (5, 6)
B[i, w] = max {B[i-1, w], B[i-1, w-wi] + bi}
B[3, 3] = max{B[3-1, 3], B[3-1, 3-4] + 5}
= max{B[2, 3], B[2, -1] + 5}
= max{2, undefined}
= B[2, 3] = 2

B[3, 5] = max{B[3-1, 5], B[3-1, 5-4] + 5} = max{B[2, 5], B[2, 1] + 5} = max{3, 0 + 5} = 5
B[3, 6] = max{B[3-1, 6], B[3-1, 6-4] + 5} = max{B[2, 6], B[2, 2] + 5} = max{3, 1 + 5} = 6
B[3, 7] = max{B[3-1, 7], B[3-1, 7-4] + 5} = max{B[2, 7], B[2, 3] + 5} = max{3, 2 + 5} = 7
0/1 Knapsack Example
n = 4 (# of elements)
W = 8 (max weight capacity)
Elements (weight, benefit) :
(2, 1), (3, 2), (4, 5), (5, 6)
Which of the items are included
and what is the maximum total
value (benefit) taken?

8 − 6 = 2
2 − 2 = 0

Items 2 and 4 are taken in the knapsack, with a total value of 8.
Shortest Path Problem
§ Given: A set of objects (called vertices) and a set of distances between objects (called edges)
§ Find: The shortest path between any pair of vertices

Floyd’s Algorithm
• The idea is to construct the solution through a series of matrices D(0), …, D(n), using increasing subsets of the vertices allowed as intermediates.
Floyd’s Algorithm

On the kth iteration, the algorithm determines the shortest paths between every pair of vertices i, j that use only vertices among 1, …, k as intermediates.
Floyd’s Algorithm

Given Digraph, Weight Matrix, and Distance Matrix (the output from the application of Floyd’s Algorithm)

The weight matrix represents the distance between every pair of vertices in the form of given weights. In constructing the weight matrix, consider the following:
• For diagonal elements (representing self-loops), the distance value is 0.
• For vertices having a direct edge between them, the distance value is the weight of that edge.
• For vertices having no direct edge between them, the distance value is ∞.
Floyd’s Algorithm

Analysis: O(n³)
Floyd’s Algorithm

D[i, j] = min {D[i, j], D[i, k] + D[k, j]}


D[2, 3] = min{D[2, 3], D[2, 1] + D[1, 3]} = min{∞, 2 + 3} = 5
D[2, 4] = min{D[2, 4], D[2, 1] + D[1, 4]} = min{∞, 2 + ∞} = ∞
D[3, 2] = min{D[3, 2], D[3, 1] + D[1, 2]} = min{7, ∞ + ∞} = 7
D[3, 4] = min{D[3, 4], D[3, 1] + D[1, 4]} = min{1, ∞ + ∞} = 1
D[4, 2] = min{D[4, 2], D[4, 1] + D[1, 2]} = min{∞, 6 + ∞} = ∞
D[4, 3] = min{D[4, 3], D[4, 1] + D[1, 3]} = min{∞, 6 + 3} = 9
Floyd’s Algorithm

D[i, j] = min {D[i, j], D[i, k] + D[k, j]}


Application of Floyd’s Algorithm to the given graph. Updated elements are shown in bold.
Floyd’s Algorithm

D[i, j] = min{D[i, j], D[i, k] + D[k, j]}
For this graph, the final matrix contains no ∞, since every vertex can reach every other vertex.
Application of Floyd’s Algorithm to the given graph. Updated elements are shown in bold.
Warshall’s Algorithm
✓ It computes the transitive closure of a relation
✓ Alternatively: all paths in a directed graph
✓ Example of transitive closure
Warshall’s Algorithm
Main idea: a path exists between two vertices i, j, if and only if:
✓ There is an edge from i to j; or
✓ There is a path from i to j through vertex 1; or
✓ There is a path from i to j through vertex 1 and/or 2; or
✓ There is a path from i to j through vertex 1, 2 and/or 3; or
✓ …
✓ There is a path from i to j going through any of the other vertices
Warshall’s Algorithm

On the kth iteration, the algorithm determines if a path exists between two vertices i, j using just vertices among 1, …, k as intermediates:

R(k)[i, j] = R(k-1)[i, j]                        (path using just 1, …, k − 1)
          or (R(k-1)[i, k] and R(k-1)[k, j])     (path from i to k and from k to j using just 1, …, k − 1)
Warshall’s Algorithm

Given Digraph, its Adjacency Matrix, and its Transitive Closure


Warshall’s Algorithm
Matrix Generation
The recurrence relating elements of R(k) to elements of R(k-1) is:

R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j])

It implies the following rules for generating R(k) from R(k-1):


Rule 1: If an element in row i and column j is 1 in R(k-1) , it remains 1 in R(k)
Rule 2: If an element in row i and column j is 0 in R(k-1) , it has to be changed
to 1 in R(k) if and only if the element in its row i and column k and
the element in its column j and row k are both 1’s in R(k-1)
Warshall’s Algorithm

Rule for changing zeros in Warshall’s Algorithm


Warshall’s Algorithm

Analysis: O(n³)
Warshall’s Algorithm

R(k)[i, j] = R(k-1)[i, j] or (R(k-1)[i, k] and R(k-1)[k, j] )


R(1)[2, 2] = R(0)[2, 2] or (R(0)[2, 1] and R(0)[1, 2]) = 0 or (0 and 1) = 0
R(1)[2, 3] = R(0)[2, 3] or (R(0)[2, 1] and R(0)[1, 3]) = 0 or (0 and 0) = 0
R(1)[2, 4] = R(0)[2, 4] or (R(0)[2, 1] and R(0)[1, 4]) = 1 or (0 and 0) = 1
R(1)[3, 2] = R(0)[3, 2] or (R(0)[3, 1] and R(0)[1, 2]) = 0 or (0 and 1) = 0
R(1)[3, 3] = R(0)[3, 3] or (R(0)[3, 1] and R(0)[1, 3]) = 0 or (0 and 0) = 0
R(1)[3, 4] = R(0)[3, 4] or (R(0)[3, 1] and R(0)[1, 4]) = 0 or (0 and 0) = 0
R(1)[4, 2] = R(0)[4, 2] or (R(0)[4, 1] and R(0)[1, 2]) = 0 or (1 and 1) = 1
R(1)[4, 3] = R(0)[4, 3] or (R(0)[4, 1] and R(0)[1, 3]) = 1 or (1 and 0) = 1
R(1)[4, 4] = R(0)[4, 4] or (R(0)[4, 1] and R(0)[1, 4]) = 0 or (1 and 0) = 0
Warshall’s Algorithm

Application of Warshall’s Algorithm to the given graph. New ones are in bold.
Q&A

Any Questions?
CP10 Dynamic Programming
