Module 4 AOA

The document discusses the algorithm design technique of dynamic programming, which is applicable when subproblems overlap and need to be solved just once rather than repeatedly. It provides an example of using dynamic programming to solve the assembly line scheduling problem by characterizing an optimal solution, defining a recursive solution, and computing the optimal solution in a bottom-up manner. Dynamic programming is used to find optimal solutions to optimization problems by breaking them down into subproblems.

Uploaded by Niramay K

Analysis of Algorithms

Dynamic Programming
Dynamic Programming
• An algorithm design technique (like divide and conquer)
• Divide and conquer:
  – Partition the problem into independent subproblems
  – Solve the subproblems recursively
  – Combine the solutions to solve the original problem
2
Dynamic Programming
• Applicable when subproblems are not independent
– Subproblems share subsubproblems

E.g.: Combinations:

  C(n, k) = C(n-1, k) + C(n-1, k-1)

  with base cases C(n, 1) = n and C(n, n) = 1
– A divide and conquer approach would repeatedly solve the
common subproblems
– Dynamic programming solves every subproblem just once and
stores the answer in a table

3
Example: Combinations

  C(n, k) = C(n-1, k) + C(n-1, k-1)

4
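The recurrence above maps directly onto a bottom-up table (Pascal's triangle), so every subproblem is solved once instead of repeatedly. A minimal Python sketch; the function name and the single-row space optimization are choices of this example, not from the slides:

```python
def binomial(n, k):
    # C[j] holds C(row, j) for the current row of Pascal's triangle,
    # so each subproblem C(n-1, k) and C(n-1, k-1) is solved only once.
    C = [1] + [0] * k
    for row in range(1, n + 1):
        # sweep right-to-left so C[j - 1] still holds the previous row's value
        for j in range(min(row, k), 0, -1):
            C[j] += C[j - 1]
    return C[k]
```

Compare with divide and conquer: the naive recursion recomputes the shared subproblems, while this table fills each entry exactly once in O(nk) time.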
Dynamic Programming
• Used for optimization problems
– A set of choices must be made to get an optimal
solution
– Find a solution with the optimal value (minimum or
maximum)
– There may be many solutions that lead to an optimal
value
– Our goal: find an optimal solution

5
Dynamic Programming Algorithm
1. Characterize the structure of an optimal
solution
2. Recursively define the value of an optimal
solution
3. Compute the value of an optimal solution in a
bottom-up fashion
4. Construct an optimal solution from computed
information (not always necessary)

6
Assembly Line Scheduling
• Automobile factory with two assembly lines
– Each line has n stations: S1,1, . . . , S1,n and S2,1, . . . , S2,n
– Corresponding stations S1, j and S2, j perform the same function
but can take different amounts of time a1, j and a2, j
– Entry times are: e1 and e2; exit times are: x1 and x2

7
Assembly Line Scheduling
• After going through a station, a car can either:
  – stay on the same line at no cost, or
  – transfer to the other line: cost after Si,j is ti,j , j = 1, . . . , n - 1

8
Assembly Line Scheduling
• Problem:
what stations should be chosen from line 1 and which
from line 2 in order to minimize the total time through the
factory for one car?

9
One Solution
• Brute force
– Enumerate all possibilities of selecting stations
– Compute how long it takes in each case and choose
the best one
• Solution: can be encoded as a sequence of n bits, e.g. 1 0 0 1 … 1,
  where bit j is 1 if line 1 is chosen at step j and 0 if line 2 is
  chosen at step j
– There are 2^n possible ways to choose stations
– Infeasible when n is large!!

10
1. Structure of the Optimal Solution

• How do we compute the minimum time of going through a station?

11
1. Structure of the Optimal Solution
• Let’s consider all possible ways to get from the
starting point through station S1,j
– We have two choices of how to get to S1, j:
• Through S1, j - 1, then directly to S1, j
• Through S2, j - 1, then transfer over to S1, j

[Diagram: S1,j is reached either from S1,j-1 on line 1 (then a1,j),
or from S2,j-1 on line 2 via transfer cost t2,j-1 (then a1,j)]
12
1. Structure of the Optimal Solution
• Suppose that the fastest way through S1, j is
through S1, j – 1
– We must have taken a fastest way from entry through S1, j – 1
– If there were a faster way through S1, j - 1, we would use it instead
• Similarly for S2, j – 1
Optimal Substructure
[Diagram: the fastest way through S1,j extends a fastest way
through S1,j-1 or S2,j-1]
13
Optimal Substructure
• Generalization: an optimal solution to the problem “find the
  fastest way through S1, j” contains within it an optimal solution
  to the subproblems: “find the fastest way through S1, j - 1 or S2, j - 1”
• This is referred to as the optimal substructure property
• We use this property to construct an optimal solution to a
  problem from optimal solutions to subproblems
14
2. A Recursive Solution

• Define the value of an optimal solution in terms of the optimal
  solution to subproblems

15
2. A Recursive Solution (cont.)
• Definitions:
– f* : the fastest time to get through the entire factory
– fi[j] : the fastest time to get from the starting point through station Si,j

f* = min (f1[n] + x1, f2[n] + x2)

16
2. A Recursive Solution (cont.)
• Base case: j = 1, i=1,2 (getting through station 1)
f1[1] = e1 + a1,1
f2[1] = e2 + a2,1

17
2. A Recursive Solution (cont.)
• General Case: j = 2, 3, …,n, and i = 1, 2
• Fastest way through S1, j is either:
– the way through S1, j - 1 then directly through S1, j, or
f1[j - 1] + a1,j
– the way through S2, j - 1, transfer from line 2 to line 1, then through S1, j
f2[j -1] + t2,j-1 + a1,j
f1[j] = min( f1[j - 1] + a1,j , f2[j - 1] + t2,j-1 + a1,j )

[Diagram: S1,j reached from S1,j-1 directly, or from S2,j-1 with
transfer cost t2,j-1]
18
2. A Recursive Solution (cont.)

e1 + a1,1 if j = 1
f1[j] =
min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j) if j ≥ 2

e2 + a2,1 if j = 1
f2[j] =
min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j) if j ≥ 2

19
3. Computing the Optimal Solution
f* = min (f1[n] + x1, f2[n] + x2)
f1[j] = min(f1[j - 1] + a1,j ,f2[j -1] + t2,j-1 + a1,j)
f2[j] = min(f2[j - 1] + a2,j ,f1[j -1] + t1,j-1 + a2,j)
1 2 3 4 5
f1[j] f1(1) f1(2) f1(3) f1(4) f1(5)

f2[j] f2(1) f2(2) f2(3) f2(4) f2(5)

• The same values fi(j) are recomputed over and over
• Solving top-down would result in exponential running time
20
3. Computing the Optimal Solution
• For j ≥ 2, each value fi[j] depends only on the
values of f1[j – 1] and f2[j - 1]
• Idea: compute the values of fi[j] as follows:

in increasing order of j
1 2 3 4 5
f1[j]
f2[j]
• Bottom-up approach
– First find optimal solutions to subproblems
– Find an optimal solution to the problem from the
subproblems 21
Example

         e1 + a1,1                                            if j = 1
f1[j] =
         min( f1[j - 1] + a1,j , f2[j - 1] + t2,j-1 + a1,j )  if j ≥ 2

  j      1    2      3      4      5
f1[j]    9   18[1]  20[2]  24[1]  32[1]
f2[j]   12   16[1]  22[2]  25[1]  30[2]     f* = 35[1]
22
FASTEST-WAY(a, t, e, x, n)
1.  f1[1] ← e1 + a1,1                  // compute initial values of f1 and f2
2.  f2[1] ← e2 + a2,1
3.  for j ← 2 to n                     // O(n)
4.      do if f1[j - 1] + a1,j ≤ f2[j - 1] + t2,j-1 + a1,j
5.            then f1[j] ← f1[j - 1] + a1,j       // compute f1[j] and l1[j]
6.                 l1[j] ← 1
7.            else f1[j] ← f2[j - 1] + t2,j-1 + a1,j
8.                 l1[j] ← 2
9.         if f2[j - 1] + a2,j ≤ f1[j - 1] + t1,j-1 + a2,j
10.           then f2[j] ← f2[j - 1] + a2,j       // compute f2[j] and l2[j]
11.                l2[j] ← 2
12.           else f2[j] ← f1[j - 1] + t1,j-1 + a2,j
13.                l2[j] ← 1
23
FASTEST-WAY(a, t, e, x, n) (cont.)
14. if f1[n] + x1 ≤ f2[n] + x2
15.    then f* = f1[n] + x1            // compute the fastest time through
16.         l* = 1                     // the entire factory
17.    else f* = f2[n] + x2
18.         l* = 2

24
4. Construct an Optimal Solution
Alg.: PRINT-STATIONS(l, n)
    i ← l*
    print “line ” i “, station ” n
    for j ← n downto 2
        do i ← li[j]
           print “line ” i “, station ” j - 1

  j            1    2      3      4      5
f1[j]/l1[j]    9   18[1]  20[2]  24[1]  32[1]
f2[j]/l2[j]   12   16[1]  22[2]  25[1]  30[2]     l* = 1

25
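The recurrences and the FASTEST-WAY pseudocode translate directly into code. A Python sketch with 0-indexed lists; the tiny two-station instance used below is made up for illustration, since the slides' a, t, e, x values are not given in the text:

```python
def fastest_way(a, t, e, x, n):
    # a[i][j]: assembly time at station j on line i (lines and stations 0-indexed)
    # t[i][j]: transfer cost after station j on line i
    # e[i], x[i]: entry and exit times for line i
    # Returns (f*, l*, l): fastest total time, exit line (1 or 2),
    # and the line-choice table l, as in the slides.
    f = [[0] * n for _ in range(2)]
    l = [[0] * n for _ in range(2)]
    f[0][0] = e[0] + a[0][0]
    f[1][0] = e[1] + a[1][0]
    for j in range(1, n):
        for i in range(2):
            other = 1 - i
            stay = f[i][j - 1] + a[i][j]                       # same line
            switch = f[other][j - 1] + t[other][j - 1] + a[i][j]  # transfer
            if stay <= switch:
                f[i][j], l[i][j] = stay, i + 1
            else:
                f[i][j], l[i][j] = switch, other + 1
    if f[0][n - 1] + x[0] <= f[1][n - 1] + x[1]:
        return f[0][n - 1] + x[0], 1, l
    return f[1][n - 1] + x[1], 2, l
```

For instance, with a = [[3, 4], [2, 5]], t = [[1], [1]], e = [1, 2], x = [1, 1] and n = 2, the fastest time through the factory is 9, exiting on line 1.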
Longest Common Subsequence

26
Longest Common Subsequence
• Given two sequences
X = 〈x1, x2, …, xm〉
Y = 〈y1, y2, …, yn〉
find a maximum length common subsequence
(LCS) of X and Y
• E.g.:
X = 〈A, B, C, B, D, A, B〉
• Subsequences of X:
– A subset of elements in the sequence taken in order
〈A, B, D〉, 〈B, C, D, B〉, etc.
27
Example

X = 〈A, B, C, B, D, A, B〉
Y = 〈B, D, C, A, B, A〉

• 〈B, C, B, A〉 and 〈B, D, A, B〉 are longest common
  subsequences of X and Y (length = 4)
• 〈B, C, A〉, however, is not an LCS of X and Y: it is a common
  subsequence, but not one of maximum length

28
Brute-Force Solution
• For every subsequence of X, check whether it’s a subsequence of Y
• There are 2^m subsequences of X to check
• Each subsequence takes Θ(n) time to check
  – scan Y for the first letter, from there scan for the second, and so on
• Running time: Θ(n·2^m)

29
Notations
• Given a sequence X = 〈x1, x2, …, xm〉 we define
the i-th prefix of X, for i = 0, 1, 2, …, m
Xi = 〈x1, x2, …, xi〉

• c[i, j] = the length of a LCS of the sequences


Xi = 〈x1, x2, …, xi〉 and Yj = 〈y1, y2, …, yj〉

30
A Recursive Solution
Case 1: xi = yj
    e.g.: Xi = 〈A, B, D, E〉
          Yj = 〈Z, B, E〉

    c[i, j] = c[i - 1, j - 1] + 1

– Append xi = yj to the LCS of Xi-1 and Yj-1
– Must find a LCS of Xi-1 and Yj-1 ⇒ optimal solution to a
  problem includes optimal solutions to subproblems

31
A Recursive Solution
Case 2: xi ≠ yj
    e.g.: Xi = 〈A, B, D, G〉
          Yj = 〈Z, B, D〉

    c[i, j] = max { c[i - 1, j], c[i, j - 1] }

– Must solve two problems
  • find a LCS of Xi-1 and Yj: Xi-1 = 〈A, B, D〉 and Yj = 〈Z, B, D〉
  • find a LCS of Xi and Yj-1: Xi = 〈A, B, D, G〉 and Yj-1 = 〈Z, B〉
• Optimal solution to a problem includes optimal solutions to
  subproblems
32
Overlapping Subproblems
• To find a LCS of X and Y

– we may need to find the LCS between X and Yn-1 and

that of Xm-1 and Y

– Both of the above subproblems have the subproblem of
  finding the LCS of Xm-1 and Yn-1

• Subproblems share subsubproblems

33
3. Computing the Length of the LCS

           0                          if i = 0 or j = 0
c[i, j] =  c[i-1, j-1] + 1            if xi = yj
           max(c[i, j-1], c[i-1, j])  if xi ≠ yj

• The table c[0..m, 0..n] is filled row by row: row 0 and column 0
  are initialized to 0, then row 1 (x1 against y1 … yn) is computed
  first, then row 2 (second), and so on up to row m
34
Additional Information

           0                          if i = 0 or j = 0
c[i, j] =  c[i-1, j-1] + 1            if xi = yj
           max(c[i, j-1], c[i-1, j])  if xi ≠ yj

A matrix b[i, j]:
• For a subproblem [i, j] it tells us what choice was made to
  obtain the optimal value
• If xi = yj
      b[i, j] = “↖”
• Else, if c[i - 1, j] ≥ c[i, j - 1]
      b[i, j] = “↑”
  else
      b[i, j] = “←”
35
LCS-LENGTH(X, Y, m, n)
1.  for i ← 1 to m
2.      do c[i, 0] ← 0           // the length of the LCS is zero
3.  for j ← 0 to n               // if one of the sequences is empty
4.      do c[0, j] ← 0
5.  for i ← 1 to m
6.      do for j ← 1 to n
7.          do if xi = yj
8.                then c[i, j] ← c[i - 1, j - 1] + 1   // Case 1: xi = yj
9.                     b[i, j] ← “↖”
10.               else if c[i - 1, j] ≥ c[i, j - 1]
11.                  then c[i, j] ← c[i - 1, j]
12.                       b[i, j] ← “↑”                // Case 2: xi ≠ yj
13.                  else c[i, j] ← c[i, j - 1]
14.                       b[i, j] ← “←”
15. return c and b

Running time: Θ(mn)
36
Example
X = 〈A, B, C, B, D, A, B〉
Y = 〈B, D, C, A, B, A〉

           0                          if i = 0 or j = 0
c[i, j] =  c[i-1, j-1] + 1            if xi = yj
           max(c[i, j-1], c[i-1, j])  if xi ≠ yj

(b[i, j] = “↖” if xi = yj; “↑” if c[i-1, j] ≥ c[i, j-1]; “←” otherwise)

          0   1   2   3   4   5   6
    yj        B   D   C   A   B   A
0  xi     0   0   0   0   0   0   0
1  A      0  ↑0  ↑0  ↑0  ↖1  ←1  ↖1
2  B      0  ↖1  ←1  ←1  ↑1  ↖2  ←2
3  C      0  ↑1  ↑1  ↖2  ←2  ↑2  ↑2
4  B      0  ↖1  ↑1  ↑2  ↑2  ↖3  ←3
5  D      0  ↑1  ↖2  ↑2  ↑2  ↑3  ↑3
6  A      0  ↑1  ↑2  ↑2  ↖3  ↑3  ↖4
7  B      0  ↖1  ↑2  ↑2  ↑3  ↖4  ↑4
37
4. Constructing a LCS
• Start at b[m, n] and follow the arrows
• When we encounter a “↖” in b[i, j] ⇒ xi = yj is an element of the LCS

          0   1   2   3   4   5   6
    yj        B   D   C   A   B   A
0  xi     0   0   0   0   0   0   0
1  A      0  ↑0  ↑0  ↑0  ↖1  ←1  ↖1
2  B      0  ↖1  ←1  ←1  ↑1  ↖2  ←2
3  C      0  ↑1  ↑1  ↖2  ←2  ↑2  ↑2
4  B      0  ↖1  ↑1  ↑2  ↑2  ↖3  ←3
5  D      0  ↑1  ↖2  ↑2  ↑2  ↑3  ↑3
6  A      0  ↑1  ↑2  ↑2  ↖3  ↑3  ↖4
7  B      0  ↖1  ↑2  ↑2  ↑3  ↖4  ↑4

• Following the arrows from b[7, 6], the “↖” cells visited give the
  LCS 〈B, C, B, A〉 (read from bottom to top)
38
PRINT-LCS(b, X, i, j)
1. if i = 0 or j = 0
2.    then return
3. if b[i, j] = “↖”
4.    then PRINT-LCS(b, X, i - 1, j - 1)
5.         print xi
6. elseif b[i, j] = “↑”
7.    then PRINT-LCS(b, X, i - 1, j)
8. else PRINT-LCS(b, X, i, j - 1)

Running time: Θ(m + n)

Initial call: PRINT-LCS(b, X, length[X], length[Y])

39
Example: Let X be “XMJYAUZ” and Y be “MZJAWXU”.
Find a longest common subsequence of X and Y.

40
• “MJAU” (length 4)

41
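LCS-LENGTH and PRINT-LCS can be combined into one function. A Python sketch; instead of storing the arrow table b, it re-derives the direction from the c table while walking back, using the same tie-break as the slides (prefer "↑" when c[i-1, j] ≥ c[i, j-1]):

```python
def lcs(X, Y):
    m, n = len(X), len(Y)
    # c[i][j] = length of an LCS of the prefixes X[:i] and Y[:j]
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:                 # Case 1: xi = yj
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                    # Case 2: xi != yj
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk back from c[m][n], mirroring PRINT-LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:                     # "diagonal" arrow
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:             # "up" arrow
            i -= 1
        else:                                        # "left" arrow
            j -= 1
    return "".join(reversed(out))
```

This reproduces both worked examples: lcs("ABCBDAB", "BDCABA") follows the arrow table to "BCBA", and lcs("XMJYAUZ", "MZJAWXU") yields "MJAU".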
Lecture 13: The Knapsack Problem

Outline of this Lecture

Introduction of the 0-1 Knapsack Problem.

A dynamic programming solution to this problem.

1
0-1 Knapsack Problem

Informal Description: We have computed data files that we want
to store, and we have W bytes of storage available.
File i has size wi bytes and takes vi minutes to recompute.
We want to avoid as much recomputing as possible, so we want
to find a subset of files to store such that

  The files have combined size at most W.

  The total computing time of the stored files is as large as
  possible.

We cannot store parts of files; it is the whole file or nothing.

How should we select the files?

2
0-1 Knapsack Problem

Formal description: Given two n-tuples of positive numbers
(v1, v2, ..., vn) and (w1, w2, ..., wn), and W > 0, we wish to
determine the subset T ⊆ {1, 2, ..., n} (of files to store) that

  maximizes Σ_{i∈T} vi

  subject to Σ_{i∈T} wi ≤ W.

Remark: This is an optimization problem.

Brute force: Try all 2^n possible subsets T.

Question: Any solution better than the brute-force?

3
Recall of the Divide-and-Conquer

1. Partition the problem into subproblems.

2. Solve the subproblems.

3. Combine the solutions to solve the original one.

Remark: If the subproblems are not independent, i.e.
subproblems share subsubproblems, then a divide-and-conquer
algorithm repeatedly solves the common subsubproblems.
Thus, it does more work than necessary!

Question: Any better solution?

Yes: dynamic programming (DP)!

4
The Idea of Dynamic Programming

Dynamic programming is a method for solving


optimization problems.

The idea: Compute the solutions to the subsubproblems once and
store the solutions in a table, so that they can be reused
(repeatedly) later.

Remark: We trade space for time.

5
The Idea of Developing a DP Algorithm

Step 1: Structure: Characterize the structure of an optimal
solution.

  – Decompose the problem into smaller problems, and find a
    relation between the structure of the optimal solution of the
    original problem and the solutions of the smaller problems.

Step 2: Principle of Optimality: Recursively define the value of
an optimal solution.

  – Express the solution of the original problem in terms of
    optimal solutions for smaller problems.

6
The Idea of Developing a DP Algorithm

Step 3: Bottom-up computation: Compute the value


of an optimal solution in a bottom-up fashion by
using a table structure.

Step 4: Construction of optimal solution: Construct


an optimal solution from computed information.

Steps 3 and 4 may often be combined.

7
Remarks on the Dynamic Programming Approach

Steps 1-3 form the basis of a dynamic-programming


solution to a problem.

Step 4 can be omitted if only the value of an optimal solution is
required.

8
Developing a DP Algorithm for Knapsack

Step 1: Decompose the problem into smaller problems.

We construct an array V[0..n, 0..W].
For 0 ≤ i ≤ n and 0 ≤ w ≤ W, the entry V[i, w] will store the
maximum (combined) computing time of any subset of files
{1, 2, ..., i} of (combined) size at most w.

If we can compute all the entries of this array, then the array
entry V[n, W] will contain the maximum computing time of files
that can fit into the storage W, that is, the solution to our
problem.

9
Developing a DP Algorithm for Knapsack

Step 2: Recursively define the value of an optimal solution in
terms of solutions to smaller problems.

Initial Settings: Set

  V[0, w] = 0     for 0 ≤ w ≤ W, no files
  V[i, w] = -∞    for w < 0, illegal

Recursive Step: Use

  V[i, w] = max( V[i-1, w], vi + V[i-1, w-wi] )

for 1 ≤ i ≤ n, 0 ≤ w ≤ W.

10
Correctness of the Method for Computing V[i, w]

Lemma: For 1 ≤ i ≤ n, 0 ≤ w ≤ W,

  V[i, w] = max( V[i-1, w], vi + V[i-1, w-wi] )

Proof: To compute V[i, w] we note that we have only two choices
for file i:

  Leave file i: The best we can do with files {1, 2, ..., i-1} and
  storage limit w is V[i-1, w].

  Take file i (only possible if wi ≤ w): Then we gain vi of
  computing time, but have spent wi bytes of our storage. The
  best we can do with the remaining files {1, 2, ..., i-1} and
  storage w - wi is V[i-1, w-wi]. Totally, we get vi + V[i-1, w-wi].

Note that if wi > w, then vi + V[i-1, w-wi] = -∞, so the lemma is
correct in any case.
11
Developing a DP Algorithm for Knapsack

Step 3: Bottom-up computation of V[i, w] (using iteration, not
recursion).

Bottom: V[0, w] = 0 for all 0 ≤ w ≤ W.

Bottom-up computation: Compute the table, using

  V[i, w] = max( V[i-1, w], vi + V[i-1, w-wi] )

row by row.

  V[i,w]  w=0  1  2  3  ...  ...  W
  i= 0     0   0  0  0  ...  ...  0     bottom
     1
     2
     ...
     n                                  up

12
Example of the Bottom-up computation

Let W = 10 and

   i    1   2   3   4
  vi   10  40  30  50
  wi    5   4   6   3

  V[i,w]  w=0  1  2   3   4   5   6   7   8   9  10
  i= 0     0   0  0   0   0   0   0   0   0   0   0
     1     0   0  0   0   0  10  10  10  10  10  10
     2     0   0  0   0  40  40  40  40  40  50  50
     3     0   0  0   0  40  40  40  40  40  50  70
     4     0   0  0  50  50  50  50  90  90  90  90

Remarks:

  The final output is V[4, 10] = 90.

  The method described does not tell which subset gives the
  optimal solution. (It is {2, 4} in this example.)

13
The Dynamic Programming Algorithm

KnapSack(v, w, n, W)
{
    for (r = 0 to W)
        V[0, r] = 0;
    for (i = 1 to n)
        for (r = 0 to W)
            if (w[i] ≤ r)
                V[i, r] = max{ V[i-1, r], v[i] + V[i-1, r - w[i]] };
            else
                V[i, r] = V[i-1, r];
    return V[n, W];
}

Time complexity: Clearly, O(nW).

14
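The pseudocode translates into Python as follows. This sketch also records the keep[i, w] choices described on the following slides, so it can return the chosen subset as well as the optimal value (0-indexed input lists, files numbered 1..n in the output):

```python
def knapsack(v, w, W):
    # v[i], w[i]: value (recompute time) and size of file i+1; W: storage
    n = len(v)
    # V[i][x]: best total value using files 1..i with combined size <= x
    V = [[0] * (W + 1) for _ in range(n + 1)]
    keep = [[False] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for x in range(W + 1):
            V[i][x] = V[i - 1][x]                       # leave file i
            if w[i - 1] <= x and v[i - 1] + V[i - 1][x - w[i - 1]] > V[i][x]:
                V[i][x] = v[i - 1] + V[i - 1][x - w[i - 1]]  # take file i
                keep[i][x] = True
    # reconstruct the chosen subset T from keep[][]
    T, x = [], W
    for i in range(n, 0, -1):
        if keep[i][x]:
            T.append(i)
            x -= w[i - 1]
    return V[n][W], sorted(T)
```

On the slides' example, knapsack([10, 40, 30, 50], [5, 4, 6, 3], 10) returns the value 90 with subset {2, 4}, matching the table above.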
Constructing the Optimal Solution

The algorithm for computing V[i, w] described in the previous
slide does not keep a record of which subset of items gives the
optimal solution.

To compute the actual subset, we can add an auxiliary boolean
array keep[i, w] which is 1 if we decide to take the i-th file in
V[i, w] and 0 otherwise.

Question: How do we use all the values keep[i, w] to determine
the subset of files having the maximum computing time?

15
15
Constructing the Optimal Solution

Question: How do we use the values keep[i, w] to determine the
subset T of items having the maximum computing time?

If keep[n, W] is 1, then n ∈ T. We can now repeat this argument
for keep[n - 1, W - wn].
If keep[n, W] is 0, then n ∉ T and we repeat the argument for
keep[n - 1, W].

Therefore, the following partial program will output the
elements of T:

    K = W;
    for (i = n downto 1)
        if (keep[i, K] == 1)
        {
            output i;
            K = K - w[i];
        }

16
The Complete Algorithm for the Knapsack Problem

KnapSack(v, w, n, W)
{
    for (r = 0 to W)
        V[0, r] = 0;
    for (i = 1 to n)
        for (r = 0 to W)
            if ((w[i] ≤ r) and (v[i] + V[i-1, r - w[i]] > V[i-1, r]))
            {
                V[i, r] = v[i] + V[i-1, r - w[i]];
                keep[i, r] = 1;
            }
            else
            {
                V[i, r] = V[i-1, r];
                keep[i, r] = 0;
            }
    K = W;
    for (i = n downto 1)
        if (keep[i, K] == 1)
        {
            output i;
            K = K - w[i];
        }
    return V[n, W];
}

17
Travelling Salesman Problem
Ms. Mrunali Desai
Problem Statement
• A traveler needs to visit all the cities from a list, where distances
between all the cities are known and each city should be visited just
once. What is the shortest possible route that he visits each city
exactly once and returns to the origin city?
Problem Statement
• You are given-
• A set of some cities
• Distance between every pair of cities

• Travelling Salesman Problem states-


• A salesman has to visit every city exactly once.
• He has to come back to the city from where he starts his journey.
• What is the shortest possible route that the salesman must follow to
complete his tour?
TSP
• The traveling salesman problem (TSP) is a problem in combinatorial
optimization and has several applications, such as vehicle routing
problems, logistics, planning and scheduling.

• The main problem of the TSP is to find a tour of a given number (n)
  of cities with minimum total distance, where each city is visited
  exactly once before returning to the starting city.
Brute-force solution
• The travelling salesman problem is one of the most notorious
  computational problems. We can use a brute-force approach to
  evaluate every possible tour and select the best one. For n
  vertices in a graph, there are (n - 1)! possibilities.
Dynamic Programming Solution
• Compared to brute force, a dynamic programming approach obtains
  the solution in less time, though there is still no polynomial-time
  algorithm.

• Let us consider a graph G = (V, E), where V is a set of cities and E is a


set of weighted edges. An edge e(u, v) represents that
vertices u and v are connected. Distance between
vertex u and v is d(u, v), which should be non-negative.
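The DP formulation commonly used here is the Held-Karp recurrence: let C(S, v) be the cost of the cheapest path that starts at city 0, visits exactly the cities in subset S, and ends at v; subsets are processed in increasing size, giving O(n^2 · 2^n) time, far smaller than (n - 1)! though still exponential. A Python sketch, assuming the distances d(u, v) are given as a matrix d:

```python
from itertools import combinations

def tsp(d):
    # d[u][v]: non-negative distance between cities u and v.
    # C[(S, v)]: cost of the cheapest path starting at city 0, visiting
    # exactly the cities in bitmask S (which always contains 0), ending at v.
    n = len(d)
    C = {(1, 0): 0}                       # only city 0 visited, cost 0
    for size in range(2, n + 1):
        for subset in combinations(range(1, n), size - 1):
            S = 1 | sum(1 << v for v in subset)      # cities in this subset
            for v in subset:
                prev = S ^ (1 << v)                  # S without the endpoint v
                C[(S, v)] = min(C[(prev, u)] + d[u][v]
                                for u in ([0] if prev == 1 else subset)
                                if u != v)
    full = (1 << n) - 1
    # close the tour by returning to city 0
    return min(C[(full, v)] + d[v][0] for v in range(1, n))
```

The 4-city distance matrix used in the test below is made up for illustration; its optimal tour 0 → 1 → 3 → 2 → 0 has length 80.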
Algorithm for TSP
Time complexity
Example
Dynamic Programming:
Multi Stage Graph

1
Dynamic method vs. Greedy method
■ Comparison: In the greedy method, any decision is locally
  optimal.
■ The hope is that these locally optimal decisions will finally
  add up to a globally optimal solution.

2
The Greedy Method
■ E.g. Find a shortest path from v0 to v3.
■ The greedy method can solve this problem.
■ The shortest path: 1 + 2 + 4 = 7.

3
The Greedy Method
■ E.g. Find a shortest path from v0 to v3 in the multi-stage
graph.

■ Greedy method: v0 → v1,2 → v2,1 → v3 = 23
■ Optimal: v0 → v1,1 → v2,2 → v3 = 7
■ The greedy method does not work for this problem.
■ This is because decisions at different stages influence
one another.
4
Multistage graph
■ A multistage graph G=(V,E) is a directed
graph in which the vertices are partitioned
into k≥2 disjoint sets Vi, 1≤i ≤ k
■ In addition, if <u,v> is an edge in E then
  u∈Vi and v∈Vi+1 for some i, 1≤i<k
■ The set V1 and Vk are such that ⏐V1⏐ =⏐Vk⏐=1
■ The multistage graph problem is to find a
minimum cost path from s in V1 to t in Vk
■ Each set Vi defines a stage in the graph

5
Algorithm

6
Greedy Method vs. Multistage
graph
■ E.g.

■ The greedy method cannot be applied to this
  case: S → A → D → T, 1+4+18 = 23.
■ The shortest path is:
  S → C → F → T, 5+2+2 = 9.

7
Dynamic Programming
■ Dynamic programming approach:

■ d(S, T) = min{1+d(A, T), 2+d(B, T),


5+d(C, T)}

8
Dynamic Programming

■ d(A, T) = min{4+d(D, T), 11+d(E, T)}
          = min{4+18, 11+13} = 22.

9
Dynamic Programming
■ d(B, T) = min{9+d(D, T), 5+d(E, T), 16+d(F, T)}
= min{9+18, 5+13, 16+2} = 18.
■ d(C, T) = min{ 2+d(F, T) } = 2+2 = 4
■ d(S, T) = min{1+d(A, T), 2+d(B, T), 5+d(C, T)}
= min{1+22, 2+18, 5+4} = 9.

10
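The backward, stage-by-stage computation above can be written as a short DP. The stage sets and edge costs below are the ones from this example, for which d(S, T) = 9:

```python
def multistage_shortest(stages, edges):
    # stages: list of per-stage vertex lists, stages[0] = [source],
    #         stages[-1] = [sink]
    # edges: dict mapping (u, v) -> cost
    # Backward DP: d[u] = min over successors v of cost(u,v) + d[v]
    sink = stages[-1][0]
    d = {sink: 0}
    for stage in reversed(stages[:-1]):
        for u in stage:
            d[u] = min(c + d[v] for (x, v), c in edges.items() if x == u)
    return d

stages = [["S"], ["A", "B", "C"], ["D", "E", "F"], ["T"]]
edges = {("S", "A"): 1, ("S", "B"): 2, ("S", "C"): 5,
         ("A", "D"): 4, ("A", "E"): 11,
         ("B", "D"): 9, ("B", "E"): 5, ("B", "F"): 16,
         ("C", "F"): 2,
         ("D", "T"): 18, ("E", "T"): 13, ("F", "T"): 2}
d = multistage_shortest(stages, edges)
```

Each intermediate value matches the slides: d(A, T) = 22, d(B, T) = 18, d(C, T) = 4, and d(S, T) = 9. (Scanning all edges per vertex keeps the sketch short; an adjacency list would make it O(V + E).)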
The advantages of dynamic
programming approach
■ To avoid exhaustively searching the entire solution
space (to eliminate some impossible solutions and
save computation).
■ To solve the problem stage by stage
systematically.
■ To store intermediate solutions in a table (array)
so that they can be retrieved from the table in
later stages of computation.

11
Comment
■ If a problem can be described by a
multistage graph then it can be solved
by dynamic programming.

12
Floyd’s Algorithm

All pairs shortest path

Floyd’s Algorithm 1
All pairs shortest path

• The problem: find the shortest path between every


pair of vertices of a graph

• The graph: may contain negative edges but no


negative cycles

• A representation: a weight matrix W where
  W(i,j) = 0 if i = j
  W(i,j) = ∞ if there is no edge between i and j
  W(i,j) = weight of the edge (i,j) otherwise

Floyd’s Algorithm 2
The weight matrix and the graph
[Figure: a 5-vertex example graph v1, ..., v5 with its weight matrix]
Floyd’s Algorithm 3
The subproblems
• Let D(k)[i,j]=weight of a shortest path from vi to vj
using only vertices from {v1,v2,…,vk} as
intermediate vertices in the path

– D(0)=W
– D(n)=D which is the goal matrix

• How do we compute D(k) from D(k-1) ?

Floyd’s Algorithm 4
The Recursive Definition:
Case 1: A shortest path from vi to vj restricted to using
only vertices from {v1,v2,…,vk} as intermediate vertices
does not use vk. Then D(k)[i,j]= D(k-1)[i,j].

Case 2: A shortest path from vi to vj restricted to using


only vertices from {v1,v2,…,vk} as intermediate vertices
does use vk. Then D(k)[i,j]= D(k-1)[i,k]+ D(k-1)[k,j].
[Figure: a shortest path from Vi to Vj using intermediate vertices
{V1, ..., Vk} either avoids Vk, or passes through Vk via two
subpaths whose intermediate vertices are in {V1, ..., Vk-1}]
Floyd’s Algorithm 5
The recursive definition
• Since
D(k)[i,j]= D(k-1)[i,j] or
D(k)[i,j]= D(k-1)[i,k]+ D(k-1)[k,j].
We conclude:
D(k)[i,j]= min{ D(k-1)[i,j], D(k-1)[i,k]+ D(k-1)[k,j] }.

[Figure: the path through Vk splits into Vi→Vk and Vk→Vj subpaths
whose intermediate vertices are in {V1, ..., Vk-1}]
Floyd’s Algorithm 6
The pointer array P
• Used to enable finding a shortest path
• Initially the array contains 0

• Each time that a shorter path from i to j is found


the k that provided the minimum is saved
(highest index node on the path from i to j)

• To print the intermediate nodes on the shortest path, a
  recursive procedure that prints the shortest paths from i to k
  and from k to j can be used

Floyd’s Algorithm 7
Floyd's Algorithm Using n+1 D matrices

Floyd  // Computes the shortest distance between all pairs of
       // nodes, and saves P to enable finding the shortest paths
1. D0 ← W            // initialize D array to W [ ]
2. P ← 0             // initialize P array to [0]
3. for k ← 1 to n
4.     do for i ← 1 to n
5.         do for j ← 1 to n
6.             if (Dk-1[ i, j ] > Dk-1[ i, k ] + Dk-1[ k, j ])
7.                 then Dk[ i, j ] ← Dk-1[ i, k ] + Dk-1[ k, j ]
8.                      P[ i, j ] ← k;
9.                 else Dk[ i, j ] ← Dk-1[ i, j ]

Time Complexity: O(n^3)


Floyd’s Algorithm 8
Example

[Graph: edges 1→2 (4), 1→3 (5), 2→1 (2), 3→2 (-3)]

               1    2    3
       0   1 [ 0    4    5 ]
  W = D  = 2 [ 2    0    ∞ ]
           3 [ ∞   -3    0 ]

               1    2    3
           1 [ 0    0    0 ]
  P =      2 [ 0    0    0 ]
           3 [ 0    0    0 ]
Floyd’s Algorithm 9
           1    2    3              k = 1
   0   1 [ 0    4    5 ]            Vertex 1 can be an
  D =  2 [ 2    0    ∞ ]            intermediate node
       3 [ ∞   -3    0 ]

           1    2    3
   1   1 [ 0    4    5 ]    D1[2,3] = min( D0[2,3], D0[2,1]+D0[1,3] )
  D =  2 [ 2    0    7 ]            = min( ∞, 2+5 )
       3 [ ∞   -3    0 ]            = 7

           1    2    3      D1[3,2] = min( D0[3,2], D0[3,1]+D0[1,2] )
       1 [ 0    0    0 ]            = min( -3, ∞ )
  P =  2 [ 0    0    1 ]            = -3
       3 [ 0    0    0 ]
Floyd’s Algorithm 10
           1    2    3              k = 2
   1   1 [ 0    4    5 ]            Vertices 1, 2 can be
  D =  2 [ 2    0    7 ]            intermediate nodes
       3 [ ∞   -3    0 ]

           1    2    3
   2   1 [ 0    4    5 ]    D2[1,3] = min( D1[1,3], D1[1,2]+D1[2,3] )
  D =  2 [ 2    0    7 ]            = min( 5, 4+7 )
       3 [ -1  -3    0 ]            = 5

           1    2    3      D2[3,1] = min( D1[3,1], D1[3,2]+D1[2,1] )
       1 [ 0    0    0 ]            = min( ∞, -3+2 )
  P =  2 [ 0    0    1 ]            = -1
       3 [ 2    0    0 ]

Floyd’s Algorithm 11
           1    2    3              k = 3
   2   1 [ 0    4    5 ]            Vertices 1, 2, 3 can be
  D =  2 [ 2    0    7 ]            intermediate nodes
       3 [ -1  -3    0 ]

           1    2    3
   3   1 [ 0    2    5 ]    D3[1,2] = min( D2[1,2], D2[1,3]+D2[3,2] )
  D =  2 [ 2    0    7 ]            = min( 4, 5+(-3) )
       3 [ -1  -3    0 ]            = 2

           1    2    3      D3[2,1] = min( D2[2,1], D2[2,3]+D2[3,1] )
       1 [ 0    3    0 ]            = min( 2, 7+(-1) )
  P =  2 [ 0    0    1 ]            = 2
       3 [ 2    0    0 ]

Floyd’s Algorithm 12
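The whole computation can be checked with a direct implementation. A Python sketch that updates a single D matrix in place (equivalent to keeping the n+1 matrices, since row k and column k do not change during iteration k), with W taken from this 3-vertex example:

```python
INF = float("inf")

def floyd(W):
    # Returns (D, P): D[i][j] = shortest i->j distance; P[i][j] = the
    # highest-index intermediate vertex (1-based) on that path, 0 if none.
    n = len(W)
    D = [row[:] for row in W]          # copy so W is left untouched
    P = [[0] * n for _ in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k + 1    # record vertex k (1-based)
    return D, P

W = [[0, 4, 5],
     [2, 0, INF],
     [INF, -3, 0]]
D, P = floyd(W)
```

The result matches the slides: D = D3 above, and P marks vertex 3 on the path 1→2, vertex 1 on 2→3, and vertex 2 on 3→1.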
Example

Shortest Path between each pair

Floyd’s Algorithm 13
Example

Floyd’s Algorithm 14
The final distance matrix and P

The values in parentheses are the nonzero P values.

Floyd’s Algorithm 15