Module 4 AOA

Dynamic Programming
• An algorithm design technique (like divide and
conquer)
• Divide and conquer
– Partition the problem into independent subproblems
– Solve the subproblems recursively
Dynamic Programming
• Applicable when subproblems are not independent
– Subproblems share subsubproblems
E.g.: Combinations:

C(n, k) = C(n-1, k) + C(n-1, k-1)
C(n, n) = 1,  C(n, 1) = n
– A divide and conquer approach would repeatedly solve the
common subproblems
– Dynamic programming solves every subproblem just once and
stores the answer in a table
Example: Combinations
C(n, k) = C(n-1, k) + C(n-1, k-1)
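The recurrence above can be evaluated bottom-up so that each subproblem is solved only once. A sketch in Python (the one-row-at-a-time array is an implementation choice for illustration, not from the slides):

```python
def comb_dp(n, k):
    """Compute C(n, k) bottom-up, solving each subproblem just once."""
    # c[j] holds C(row, j) as we build Pascal's triangle row by row
    c = [0] * (k + 1)
    c[0] = 1
    for row in range(1, n + 1):
        # go right-to-left so c[j-1] is still the previous row's value
        for j in range(min(row, k), 0, -1):
            c[j] = c[j] + c[j - 1]   # C(row, j) = C(row-1, j) + C(row-1, j-1)
    return c[k]
```

In contrast to the naive recursion, this runs in O(nk) time because every shared subsubproblem is stored in the table rather than recomputed.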
Dynamic Programming
• Used for optimization problems
– A set of choices must be made to get an optimal
solution
– Find a solution with the optimal value (minimum or
maximum)
– There may be many solutions that lead to an optimal
value
– Our goal: find an optimal solution
Dynamic Programming Algorithm
1. Characterize the structure of an optimal
solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in a
bottom-up fashion
4. Construct an optimal solution from computed
information (not always necessary)
Assembly Line Scheduling
• Automobile factory with two assembly lines
– Each line has n stations: S1,1, . . . , S1,n and S2,1, . . . , S2,n
– Corresponding stations S1, j and S2, j perform the same function
but can take different amounts of time a1, j and a2, j
– Entry times are: e1 and e2; exit times are: x1 and x2
Assembly Line Scheduling
• After going through a station, can either:
– stay on same line at no cost, or
– transfer to other line: cost after Si,j is ti,j , j = 1, . . . , n - 1
Assembly Line Scheduling
• Problem:
which stations should be chosen from line 1 and which
from line 2 in order to minimize the total time through the
factory for one car?
One Solution
• Brute force
– Enumerate all possibilities of selecting stations
– Compute how long it takes in each case and choose
the best one
• A solution can be represented as a sequence of line choices, one per station, e.g.:
  station:  1  2  3  4  …  n
  line:     1  0  0  1  …  1
– With 2 choices per station there are 2^n possibilities, so brute force is infeasible
1. Structure of the Optimal Solution
• Let’s consider all possible ways to get from the
starting point through station S1,j
– We have two choices of how to get to S1, j:
• Through S1, j - 1, then directly to S1, j
• Through S2, j - 1, then transfer over to S1, j
(diagram: on line 1, S1,j-1 → S1,j with processing times a1,j-1 and a1,j; from line 2, S2,j-1 (time a2,j-1) transfers into S1,j at cost t2,j-1)
1. Structure of the Optimal Solution
• Suppose that the fastest way through S1, j is
through S1, j – 1
– We must have taken a fastest way from entry through S1, j – 1
– If there were a faster way through S1, j - 1, we would use it instead
• Similarly for S2, j – 1
(diagram as on the previous slide: S1,j-1 and S2,j-1 feeding into S1,j; this property is called optimal substructure)
Optimal Substructure
• Generalization: an optimal solution to the
problem “find the fastest way through S1, j” contains
within it an optimal solution to subproblems: “find
the fastest way through S1, j - 1 or S2, j – 1”.
2. A Recursive Solution (cont.)
• Definitions:
– f* : the fastest time to get through the entire factory
– fi[j] : the fastest time to get from the starting point through station Si,j
2. A Recursive Solution (cont.)
• Base case: j = 1, i=1,2 (getting through station 1)
f1[1] = e1 + a1,1
f2[1] = e2 + a2,1
2. A Recursive Solution (cont.)
• General Case: j = 2, 3, …,n, and i = 1, 2
• Fastest way through S1, j is either:
– the way through S1, j - 1 then directly through S1, j, or
f1[j - 1] + a1,j
– the way through S2, j - 1, transfer from line 2 to line 1, then through S1, j
f2[j -1] + t2,j-1 + a1,j
(diagram: S1,j-1 → S1,j on line 1; S2,j-1 transfers from line 2 at cost t2,j-1)

f1[j] = e1 + a1,1                                          if j = 1
        min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j )    if j ≥ 2

f2[j] = e2 + a2,1                                          if j = 1
        min( f2[j-1] + a2,j , f1[j-1] + t1,j-1 + a2,j )    if j ≥ 2
3. Computing the Optimal Solution
f* = min (f1[n] + x1, f2[n] + x2)
f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j)
f2[j] = min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j)
 j:      1      2      3        4        5
 f1[j]:  f1(1)  f1(2)  f1(3)    f1(4)    f1(5)
                       4 times  2 times

A naive top-down computation of f* would evaluate f1(4) 2 times, f1(3) 4 times, and so on, doubling the repeated work at each level.
3. Computing the Optimal Solution
• For j ≥ 2, each value fi[j] depends only on the
values of f1[j – 1] and f2[j - 1]
• Idea: compute the values of fi[j] as follows:
in increasing order of j
1 2 3 4 5
f1[j]
f2[j]
• Bottom-up approach
– First find optimal solutions to subproblems
– Find an optimal solution to the problem from the subproblems
Example
f1[j] = e1 + a1,1                                          if j = 1
        min( f1[j-1] + a1,j , f2[j-1] + t2,j-1 + a1,j )    if j ≥ 2

 j:      1    2      3      4      5
 f1[j]:  9    18[1]  20[2]  24[1]  32[1]
 f2[j]:  12   16[1]  22[2]  25[1]  30[2]

f* = 35[1]
FASTEST-WAY(a, t, e, x, n)
 1.  f1[1] ← e1 + a1,1                                    ▹ compute initial values of f1 and f2
 2.  f2[1] ← e2 + a2,1
 3.  for j ← 2 to n                                       ▹ Θ(n)
 4.      do if f1[j-1] + a1,j ≤ f2[j-1] + t2,j-1 + a1,j
 5.            then f1[j] ← f1[j-1] + a1,j                ▹ compute the values of
 6.                 l1[j] ← 1                             ▹ f1[j] and l1[j]
 7.            else f1[j] ← f2[j-1] + t2,j-1 + a1,j
 8.                 l1[j] ← 2
 9.         if f2[j-1] + a2,j ≤ f1[j-1] + t1,j-1 + a2,j
10.            then f2[j] ← f2[j-1] + a2,j                ▹ compute the values of
11.                 l2[j] ← 2                             ▹ f2[j] and l2[j]
12.            else f2[j] ← f1[j-1] + t1,j-1 + a2,j
13.                 l2[j] ← 1
FASTEST-WAY(a, t, e, x, n) (cont.)
14. if f1[n] + x1 ≤ f2[n] + x2                 ▹ compute the fastest time
15.    then f* ← f1[n] + x1                    ▹ through the entire factory
16.         l* ← 1
17.    else f* ← f2[n] + x2
18.         l* ← 2
4. Construct an Optimal Solution
Alg.: PRINT-STATIONS(l, n)
    i ← l*
    print “line ” i “, station ” n
    for j ← n downto 2
        do i ← li[j]
           print “line ” i “, station ” j - 1

 j:            1    2      3      4      5
 f1[j]/l1[j]:  9    18[1]  20[2]  24[1]  32[1]
 f2[j]/l2[j]:  12   16[1]  22[2]  25[1]  30[2]

l* = 1
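FASTEST-WAY and PRINT-STATIONS can be combined into one runnable sketch. The 3-station instance below is made up for illustration (it is not the slides' 5-station example):

```python
def fastest_way(a, t, e, x):
    """a[i][j]: time at station j of line i; t[i][j]: transfer cost after
    station j of line i; e[i]/x[i]: entry/exit times (0-indexed)."""
    n = len(a[0])
    f = [[0] * n for _ in range(2)]   # f[i][j]: fastest time through S(i,j)
    l = [[0] * n for _ in range(2)]   # l[i][j]: line used before S(i,j)
    f[0][0], f[1][0] = e[0] + a[0][0], e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            stay = f[i][j - 1] + a[i][j]
            switch = f[1 - i][j - 1] + t[1 - i][j - 1] + a[i][j]
            f[i][j], l[i][j] = (stay, i) if stay <= switch else (switch, 1 - i)
    # pick the better exit, then walk the l table backwards
    last = 0 if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1] else 1
    best = f[last][n - 1] + x[last]
    route, i = [], last
    for j in range(n - 1, -1, -1):
        route.append((i + 1, j + 1))      # report 1-indexed (line, station)
        if j > 0:
            i = l[i][j]
    return best, route[::-1]

# hypothetical 3-station factory
best, route = fastest_way(a=[[7, 9, 3], [8, 5, 6]],
                          t=[[2, 3], [2, 1]], e=[2, 4], x=[3, 2])
# best == 23; route == [(1, 1), (2, 2), (1, 3)]
```

Because each fi[j] depends only on the two values at j-1, the whole computation is Θ(n).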
Longest Common Subsequence
Longest Common Subsequence
• Given two sequences
X = 〈x1, x2, …, xm〉
Y = 〈y1, y2, …, yn〉
find a maximum length common subsequence
(LCS) of X and Y
• E.g.:
X = 〈A, B, C, B, D, A, B〉
• Subsequences of X:
– obtained from X by deleting zero or more elements; the remaining elements keep their relative order
– e.g.: 〈A, B, D〉, 〈B, C, D, B〉, etc.
Example
X = 〈A, B, C, B, D, A, B〉
Y = 〈B, D, C, A, B, A〉
Notations
• Given a sequence X = 〈x1, x2, …, xm〉 we define
the i-th prefix of X, for i = 0, 1, 2, …, m
Xi = 〈x1, x2, …, xi〉
A Recursive Solution
• Define c[i, j] = the length of an LCS of the prefixes Xi and Yj
Case 1: xi = yj
  e.g.: Xi = 〈A, B, D, E〉
        Yj = 〈Z, B, E〉
  ⇒ append xi = yj to an LCS of Xi-1 and Yj-1:
  c[i, j] = c[i - 1, j - 1] + 1
A Recursive Solution
Case 2: xi ≠ yj
  e.g.: Xi = 〈A, B, D, G〉
        Yj = 〈Z, B, D〉
  ⇒ an LCS cannot end with both xi and yj:
  c[i, j] = max( c[i - 1, j], c[i, j - 1] )
3. Computing the Length of the LCS
c[i, j] = 0                             if i = 0 or j = 0
          c[i-1, j-1] + 1               if xi = yj
          max( c[i, j-1], c[i-1, j] )   if xi ≠ yj

The values c[i, j] are stored in an (m+1) × (n+1) table (rows indexed by xi, i = 0…m; columns by yj, j = 0…n). Row 0 and column 0 are all 0, and the table is filled one row at a time: first row 1, then row 2, and so on up to row m.
Additional Information

c[i, j] = 0                             if i = 0 or j = 0
          c[i-1, j-1] + 1               if xi = yj
          max( c[i, j-1], c[i-1, j] )   if xi ≠ yj

A matrix b[i, j]: for a subproblem [i, j] it tells us what choice was made to obtain the optimal value:
• If xi = yj: b[i, j] = “↖”
• Else, if c[i - 1, j] ≥ c[i, j - 1]: b[i, j] = “↑”
• Else: b[i, j] = “←”
LCS-LENGTH(X, Y, m, n)
 1. for i ← 1 to m
 2.     do c[i, 0] ← 0                          ▹ the length of the LCS is zero
 3. for j ← 0 to n                              ▹ if one of the sequences is empty
 4.     do c[0, j] ← 0
 5. for i ← 1 to m
 6.     do for j ← 1 to n
 7.         do if xi = yj
 8.               then c[i, j] ← c[i - 1, j - 1] + 1     ▹ Case 1: xi = yj
 9.                    b[i, j] ← “↖”
10.               else if c[i - 1, j] ≥ c[i, j - 1]
11.                  then c[i, j] ← c[i - 1, j]
12.                       b[i, j] ← “↑”                  ▹ Case 2: xi ≠ yj
13.                  else c[i, j] ← c[i, j - 1]
14.                       b[i, j] ← “←”
15. return c and b

Running time: Θ(mn)
Example

X = 〈A, B, C, B, D, A, B〉,  Y = 〈B, D, C, A, B, A〉

If xi = yj:                      c[i, j] ← c[i - 1, j - 1] + 1,  b[i, j] = “↖”
Else, if c[i - 1, j] ≥ c[i, j - 1]:  c[i, j] ← c[i - 1, j],     b[i, j] = “↑”
Else:                            c[i, j] ← c[i, j - 1],         b[i, j] = “←”

        j:  0    1    2    3    4    5    6
       yj:       B    D    C    A    B    A
 0          0    0    0    0    0    0    0
 1  A       0   ↑0   ↑0   ↑0   ↖1   ←1   ↖1
 2  B       0   ↖1   ←1   ←1   ↑1   ↖2   ←2
 3  C       0   ↑1   ↑1   ↖2   ←2   ↑2   ↑2
 4  B       0   ↖1   ↑1   ↑2   ↑2   ↖3   ←3
 5  D       0   ↑1   ↖2   ↑2   ↑2   ↑3   ↑3
 6  A       0   ↑1   ↑2   ↑2   ↖3   ↑3   ↖4
 7  B       0   ↖1   ↑2   ↑2   ↑3   ↖4   ↑4
4. Constructing a LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j] ⇒ xi = yj is an element
of the LCS

        j:  0    1    2    3    4    5    6
       yj:       B    D    C    A    B    A
 0          0    0    0    0    0    0    0
 1  A       0   ↑0   ↑0   ↑0   ↖1   ←1   ↖1
 2  B       0   ↖1   ←1   ←1   ↑1   ↖2   ←2
 3  C       0   ↑1   ↑1   ↖2   ←2   ↑2   ↑2
 4  B       0   ↖1   ↑1   ↑2   ↑2   ↖3   ←3
 5  D       0   ↑1   ↖2   ↑2   ↑2   ↑3   ↑3
 6  A       0   ↑1   ↑2   ↑2   ↖3   ↑3   ↖4
 7  B       0   ↖1   ↑2   ↑2   ↑3   ↖4   ↑4

Following the arrows from b[7, 6] yields the LCS 〈B, C, B, A〉.
PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.    then return
3. if b[i, j] = “↖”
4.    then PRINT-LCS(b, X, i - 1, j - 1)
5.         print xi
6. elseif b[i, j] = “↑”
7.    then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)

Running time: Θ(m + n)
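LCS-LENGTH and PRINT-LCS combine into one runnable sketch (returning the string instead of printing; 'diag'/'up'/'left' stand for the ↖/↑/← arrows):

```python
def lcs(X, Y):
    """Return one longest common subsequence of X and Y via the c/b tables."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    b = [[None] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                # Case 1: x_i = y_j
                c[i][j], b[i][j] = c[i - 1][j - 1] + 1, 'diag'
            elif c[i - 1][j] >= c[i][j - 1]:        # Case 2: go up
                c[i][j], b[i][j] = c[i - 1][j], 'up'
            else:                                   # Case 2: go left
                c[i][j], b[i][j] = c[i][j - 1], 'left'
    # walk back from b[m][n], collecting a match on every diagonal move
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if b[i][j] == 'diag':
            out.append(X[i - 1]); i, j = i - 1, j - 1
        elif b[i][j] == 'up':
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

# lcs("ABCBDAB", "BDCABA") → "BCBA", matching the table above
```

Filling the table is Θ(mn); the backward walk is Θ(m + n).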
Lecture 13: The Knapsack Problem

0-1 Knapsack Problem

Recall of the Divide-and-Conquer

The Idea of Dynamic Programming

The Idea of Developing a DP Algorithm

Remarks on the Dynamic Programming Approach

Developing a DP Algorithm for Knapsack
Correctness of the Method for Computing B[i, x]

Lemma: For 1 ≤ i ≤ n and 0 ≤ x ≤ W:

B[i, x] = B[i-1, x]                                   if w[i] > x
B[i, x] = max{ B[i-1, x], v[i] + B[i-1, x - w[i]] }   if w[i] ≤ x

Proof: To compute B[i, x] we note that we have only two choices for file i: leave it out, in which case B[i, x] = B[i-1, x]; or take it (possible only if w[i] ≤ x), in which case B[i, x] = v[i] + B[i-1, x - w[i]]. B[i, x] is the better of the two.
Example of the Bottom-up Computation

Let W = 10, n = 4, and:

 i      1   2   3   4
 v[i]  10  40  30  50
 w[i]   5   4   6   3

 B[i, x]:
 i \ x   0   1   2   3   4   5   6   7   8   9  10
 0       0   0   0   0   0   0   0   0   0   0   0
 1       0   0   0   0   0  10  10  10  10  10  10
 2       0   0   0   0  40  40  40  40  40  50  50
 3       0   0   0   0  40  40  40  40  40  50  70
 4       0   0   0  50  50  50  50  90  90  90  90

Remarks:
• The final output is B[4, 10] = 90.
• The method described does not tell which subset gives the optimal solution. (It is {2, 4} in this example.)
The Dynamic Programming Algorithm

KnapSack(v, w, n, W)
    for (x = 0 to W)
        B[0, x] = 0;
    for (i = 1 to n)
        for (x = 0 to W)
            if (w[i] ≤ x)
                B[i, x] = max{ B[i-1, x], v[i] + B[i-1, x - w[i]] };
            else B[i, x] = B[i-1, x];
    return B[n, W];
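The algorithm can be checked in Python against the bottom-up table of the example (`knapsack_value` is an illustrative name; indices are shifted so that item i lives at v[i-1], w[i-1]):

```python
def knapsack_value(v, w, W):
    """B[i][x] = best total value using files 1..i with capacity x."""
    n = len(v)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            if w[i - 1] <= x:
                # either skip file i, or take it and solve for the rest
                B[i][x] = max(B[i - 1][x], v[i - 1] + B[i - 1][x - w[i - 1]])
            else:
                B[i][x] = B[i - 1][x]
    return B

# the example instance: W = 10, values 10/40/30/50, weights 5/4/6/3
B = knapsack_value(v=[10, 40, 30, 50], w=[5, 4, 6, 3], W=10)
# B[4][10] == 90, matching the table
```

Running time and space are both O(nW).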
Constructing the Optimal Solution

Question: How do we use the values B[i, x] to determine the subset of files having the maximum computing time?
Constructing the Optimal Solution

Record in keep[i, x] whether file i is used in an optimal solution for B[i, x]:
• If keep[i, K] is 1, then i ∈ S, and we repeat this argument for keep[i-1, K - w[i]].
• If keep[i, K] is 0, then i ∉ S, and we repeat the argument for keep[i-1, K].

Therefore, the following partial program will output the elements of S:

K = W;
for (i = n downto 1)
    if (keep[i, K] == 1)
    {
        output i;
        K = K - w[i];
    }
The Complete Algorithm for the Knapsack Problem

KnapSack(v, w, n, W)
    for (x = 0 to W)
        B[0, x] = 0;
    for (i = 1 to n)
        for (x = 0 to W)
            if ((w[i] ≤ x) and (v[i] + B[i-1, x - w[i]] > B[i-1, x]))
            {
                B[i, x] = v[i] + B[i-1, x - w[i]];
                keep[i, x] = 1;
            }
            else
            {
                B[i, x] = B[i-1, x];
                keep[i, x] = 0;
            }
    K = W;
    for (i = n downto 1)
        if (keep[i, K] == 1)
        {
            output i;
            K = K - w[i];
        }
    return B[n, W];
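The complete algorithm, as runnable Python; on the example instance it recovers the subset {2, 4} mentioned in the remarks:

```python
def knapsack(v, w, W):
    """0-1 knapsack returning (best value, chosen 1-based item indices)."""
    n = len(v)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    keep = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            if w[i - 1] <= x and v[i - 1] + B[i - 1][x - w[i - 1]] > B[i - 1][x]:
                B[i][x] = v[i - 1] + B[i - 1][x - w[i - 1]]
                keep[i][x] = 1          # file i is taken at capacity x
            else:
                B[i][x] = B[i - 1][x]
    # walk the keep[] table backwards to reconstruct the subset S
    chosen, K = [], W
    for i in range(n, 0, -1):
        if keep[i][K]:
            chosen.append(i)
            K -= w[i - 1]
    return B[n][W], sorted(chosen)

best, subset = knapsack(v=[10, 40, 30, 50], w=[5, 4, 6, 3], W=10)
# best == 90, subset == [2, 4]
```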
Travelling Salesman Problem
Ms. Mrunali Desai
Problem Statement
• A traveler needs to visit all the cities from a list, where the distances
between all the cities are known and each city must be visited just
once. What is the shortest possible route such that he visits each city
exactly once and returns to the origin city?
Problem Statement
• You are given-
• A set of some cities
• Distance between every pair of cities
• The main problem of the TSP is to find a tour of a given number (n) of
cities with minimum total distance, where each city is visited exactly
once and the tour returns to the starting city.
Brute-force solution
• The travelling salesman problem is one of the most notorious
computational problems. A brute-force approach evaluates every
possible tour and selects the best one. For n vertices in a graph,
there are (n - 1)! possible tours.
Dynamic Programming Solution
• Using a dynamic programming approach instead of brute force, the
solution can be obtained in less time, though there is still no known
polynomial-time algorithm.
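The standard dynamic-programming solution for TSP is the Held-Karp recurrence, which runs in O(n²·2ⁿ) time: still exponential, but far better than enumerating (n-1)! tours. The slides give no code, so this is a sketch; the 4-city distance matrix below is made up for illustration:

```python
from itertools import combinations

def held_karp(dist):
    """Minimum cost of a tour that starts and ends at city 0."""
    n = len(dist)
    # C[(S, j)]: cost of the cheapest path that starts at 0, visits every
    # city in set S exactly once, and ends at j (j in S, 0 not in S)
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in range(1, n))

# hypothetical instance with 4 cities
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```

On this instance `held_karp(dist)` returns 80, the cost of the tour 0 → 1 → 3 → 2 → 0.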
Dynamic method vs. Greedy method
■ Comparison: in the greedy method, each decision is locally optimal.
■ The hope is that these locally optimal choices add up to a globally optimal solution, but this does not always happen.
The Greedy Method
■ E.g. Find a shortest path from v0 to v3.
■ The greedy method can solve this problem.
■ The shortest path: 1 + 2 + 4 = 7.
The Greedy Method
■ E.g. Find a shortest path from v0 to v3 in the multi-stage
graph.
Algorithm
Greedy Method vs. Multistage
graph
■ E.g.
Dynamic Programming
■ Dynamic programming approach:
Dynamic Programming
Dynamic Programming
■ d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.
■ d(C, T) = min{ 2+d(F, T) } = 2+2 = 4
■ d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
= min{1+22, 2+18, 5+4} = 9.
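The d-values above come from a multi-stage graph whose figure did not survive extraction. Assuming edge weights consistent with the computations shown (S→A:1, S→B:2, S→C:5; A→D:4, A→E:11; B→D:9, B→E:5, B→F:16; C→F:2; D→T:18, E→T:13, F→T:2), the backward dynamic program is a few lines of memoized Python:

```python
# hypothetical edge weights, chosen to reproduce the d-values above
edges = {
    'S': {'A': 1, 'B': 2, 'C': 5},
    'A': {'D': 4, 'E': 11},
    'B': {'D': 9, 'E': 5, 'F': 16},
    'C': {'F': 2},
    'D': {'T': 18}, 'E': {'T': 13}, 'F': {'T': 2},
    'T': {},
}

memo = {}
def d(u):
    """Shortest distance from u to T; each vertex is solved only once."""
    if u == 'T':
        return 0
    if u not in memo:
        memo[u] = min(w + d(v) for v, w in edges[u].items())
    return memo[u]
```

With these weights, d('B') = min{9+18, 5+13, 16+2} = 18 and d('S') = 9, matching the slides.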
The advantages of dynamic
programming approach
■ To avoid exhaustively searching the entire solution
space (to eliminate some impossible solutions and
save computation).
■ To solve the problem stage by stage
systematically.
■ To store intermediate solutions in a table (array)
so that they can be retrieved from the table in
later stages of computation.
Comment
■ If a problem can be described by a
multistage graph then it can be solved
by dynamic programming.
Floyd’s Algorithm

All pairs shortest path
The weight matrix and the graph
(figure: a 5-vertex weighted digraph on v1…v5 and its weight matrix W; the matrix entries did not survive extraction)
The subproblems
• Let D(k)[i,j]=weight of a shortest path from vi to vj
using only vertices from {v1,v2,…,vk} as
intermediate vertices in the path
– D(0)=W
– D(n)=D which is the goal matrix
The Recursive Definition:
Case 1: A shortest path from vi to vj restricted to using
only vertices from {v1,v2,…,vk} as intermediate vertices
does not use vk. Then D(k)[i,j] = D(k-1)[i,j].
Case 2: Such a shortest path does use vk. Then it consists of a
shortest path from vi to vk followed by one from vk to vj, each
using only {v1,…,vk-1} as intermediates:
D(k)[i,j] = D(k-1)[i,k] + D(k-1)[k,j].
Therefore: D(k)[i,j] = min( D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] )
The pointer array P
• Used to enable finding a shortest path
• Initially the array contains 0
Floyd's Algorithm Using n+1 D matrices
k = 1 (vertex 1 can be an intermediate node)
(graph: 1→2 weight 4, 1→3 weight 5, 2→1 weight 2, 3→2 weight -3)

D0:       1   2   3
     1  [ 0   4   5 ]
     2  [ 2   0   ∞ ]
     3  [ ∞  -3   0 ]

D1[2,3] = min( D0[2,3], D0[2,1] + D0[1,3] ) = min(∞, 7) = 7

D1:       1   2   3        P:       1   2   3
     1  [ 0   4   5 ]           1 [ 0   0   0 ]
     2  [ 2   0   7 ]           2 [ 0   0   1 ]
     3  [ ∞  -3   0 ]           3 [ 0   0   0 ]
k = 2 (vertices 1, 2 can be intermediate)

D1:       1   2   3
     1  [ 0   4   5 ]
     2  [ 2   0   7 ]
     3  [ ∞  -3   0 ]

D2[1,3] = min( D1[1,3], D1[1,2] + D1[2,3] ) = min(5, 4+7) = 5
D2[3,1] = min( D1[3,1], D1[3,2] + D1[2,1] ) = min(∞, -3+2) = -1

D2:       1   2   3        P:       1   2   3
     1  [ 0   4   5 ]           1 [ 0   0   0 ]
     2  [ 2   0   7 ]           2 [ 0   0   1 ]
     3  [ -1 -3   0 ]           3 [ 2   0   0 ]
k = 3 (vertices 1, 2, 3 can be intermediate)

D2:       1   2   3
     1  [ 0   4   5 ]
     2  [ 2   0   7 ]
     3  [ -1 -3   0 ]

D3[1,2] = min( D2[1,2], D2[1,3] + D2[3,2] ) = min(4, 5+(-3)) = 2
D3[2,1] = min( D2[2,1], D2[2,3] + D2[3,1] ) = min(2, 7+(-1)) = 2

D3:       1   2   3        P:       1   2   3
     1  [ 0   2   5 ]           1 [ 0   3   0 ]
     2  [ 2   0   7 ]           2 [ 0   0   1 ]
     3  [ -1 -3   0 ]           3 [ 2   0   0 ]
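The three iterations above can be reproduced with a compact Floyd-Warshall implementation (0-based indices internally; P stores the highest-numbered intermediate vertex, 1-based as in the slides):

```python
INF = float('inf')

def floyd_warshall(W):
    """Return (D, P): all-pairs shortest distances and the pointer array,
    where P[i][j] is the highest-numbered intermediate vertex (1-based)
    on a shortest i→j path, or 0 if the direct edge is shortest."""
    n = len(W)
    D = [row[:] for row in W]          # D(0) = W
    P = [[0] * n for _ in range(n)]    # initially all 0
    for k in range(n):                 # now allow vertex k+1 as intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k + 1
    return D, P

# the 3-vertex example worked through above
D, P = floyd_warshall([[0, 4, 5], [2, 0, INF], [INF, -3, 0]])
# D == [[0, 2, 5], [2, 0, 7], [-1, -3, 0]]
# P == [[0, 3, 0], [0, 0, 1], [2, 0, 0]]
```

The single in-place D matrix replaces the n+1 matrices D(0)…D(n); the triple loop runs in Θ(n³).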
Example

The final distance matrix and P