The document discusses dynamic programming and its general method. It provides examples of the 0/1 knapsack problem and multistage graphs. For multistage graphs, it explains the forward and backward approaches for finding the shortest path, including pseudocode algorithms. It also discusses Warshall's algorithm for transitive closure, Floyd's algorithm for the all-pairs shortest paths problem, the memory-function knapsack method, the Bellman-Ford algorithm, and the travelling salesperson problem.

Module 4

Dynamic Programming
TB2: Chapter 5
Dynamic Programming: General Method with
Examples, Multistage Graphs
5.1 Dynamic Programming
Dynamic programming is a technique for solving problems with
overlapping subproblems.
– Subproblems arise from a recurrence relating a solution of the
given problem to solutions of its smaller subproblems of the
same type.
– Rather than solving overlapping subproblems again and
again, dynamic programming suggests solving each of the smaller
subproblems only once and recording the results in a table from
which a solution to the original problem can then be obtained.
Dynamic programming is an algorithm design method that can be
used when the solution to a problem can be viewed as the result of
a sequence of decisions leading to an optimal solution.
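As a minimal illustration of this idea (not from the text), computing Fibonacci numbers with a memoized recursion solves each overlapping subproblem only once:

```python
from functools import lru_cache

# Each fib(n) is computed once and recorded in a table (the cache);
# later calls for the same n are simple lookups.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, the same call tree would recompute each smaller subproblem exponentially many times.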
Consider the following example.
0/1 knapsack problem
The 0/1 knapsack problem is similar to the knapsack problem discussed
earlier, except that the solution vector values xi are restricted to either
0 or 1.
Here KNAP(l, j, y) represents the problem: maximize Σ pi xi for l ≤ i ≤ j,
subject to Σ wi xi ≤ y for l ≤ i ≤ j, where y is the knapsack capacity, and
xi = 0 or 1 for l ≤ i ≤ j.
Let y1, y2, ..., yn be an optimal sequence of 0/1 values for
x1, x2, ..., xn, respectively.
If y1 = 0, then y2, y3, ..., yn must constitute an optimal sequence for the
problem KNAP(2, n, m); if it did not, then y1, y2, ..., yn would not be an
optimal sequence for KNAP(1, n, m).
If y1 = 1, then y2, y3, ..., yn must constitute an optimal sequence for the
problem KNAP(2, n, m − w1).

If g0(m) is the value of an optimal solution to KNAP(1, n, m), then

g0(m) = max { g1(m), g1(m − w1) + p1 }

This is the principle of optimality.
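The recurrence above can be sketched directly as a recursive function; the item data below is illustrative, not from the text:

```python
def g(i, y, w, p):
    """Value of an optimal solution to KNAP(i+1, n, y); items 0-indexed."""
    if i == len(w):
        return 0
    best = g(i + 1, y, w, p)                      # decision x_{i+1} = 0
    if w[i] <= y:                                 # decision x_{i+1} = 1 is feasible
        best = max(best, g(i + 1, y - w[i], w, p) + p[i])
    return best

w, p, m = [2, 3, 4], [3, 4, 5], 5
print(g(0, m, w, p))  # g0(m) = 7, achieved by the items of weights 2 and 3
```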
5.2 Multistage Graphs
Forward approach
Algorithm for forward approach

Complexity is Θ(|V| + |E|)
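The forward approach referenced above can be sketched as follows; the vertex numbering convention and the small graph at the bottom are assumed for illustration. For each vertex j, taken in decreasing order, cost[j] = min over edges (j, r) of c(j, r) + cost[r].

```python
import math

def multistage_forward(n, adj):
    """adj[j]: list of (r, c) edges leaving j; vertices 1..n are numbered in
    stage order and vertex n is the sink. Returns (min cost of a 1->n path,
    the path itself)."""
    cost = [math.inf] * (n + 1)
    nxt = [0] * (n + 1)
    cost[n] = 0
    for j in range(n - 1, 0, -1):          # process vertices in reverse order
        for r, c in adj.get(j, []):
            if c + cost[r] < cost[j]:
                cost[j] = c + cost[r]
                nxt[j] = r                 # record the decision for path recovery
    path, v = [1], 1
    while v != n:
        v = nxt[v]
        path.append(v)
    return cost[1], path

# Small assumed 5-vertex multistage graph
adj = {1: [(2, 4), (3, 2)], 2: [(4, 2)], 3: [(4, 1)], 4: [(5, 3)]}
print(multistage_forward(5, adj))  # (6, [1, 3, 4, 5])
```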
Example:
According to the formula, we compute cost(i, j) in the following steps.
Step 1: Cost(K − 2, j)
In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three
minimum-cost values to compute at this step.
Cost(3, 4) = min {c(4, 7) + Cost(7, 9), c(4, 8) + Cost(8, 9)} = 7
Cost(3, 5) = min {c(5, 7) + Cost(7, 9), c(5, 8) + Cost(8, 9)} = 5
Cost(3, 6) = min {c(6, 7) + Cost(7, 9), c(6, 8) + Cost(8, 9)} = 5
Step 2: Cost(K − 3, j)
Two nodes are selected as j because at stage K − 3 = 2 there are two nodes, 2 and
3; so i = 2 and j = 2, 3.
Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9),
c(2, 6) + Cost(6, 8) + Cost(8, 9)} = 8
Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9),
c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10
Step 3: Cost(K − 4, j)
Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9)} = 12
Hence, the path having the minimum cost is 1 → 3 → 5 → 8 → 9.
[Figure: the minimum-cost path 1 → 3 → 5 → 8 → 9 highlighted in the multistage graph, cost = 12]
Backward Approach

Algorithm for Backward Approach

Complexity is Θ(|V| + |E|)
Backward Approach
[Figure: the same multistage graph solved with the backward approach; the minimum-cost path is again 1 → 3 → 5 → 8 → 9, cost = 12]
TB1: 8.2 Transitive Closure
The transitive closure of a directed graph with n vertices
can be defined as the n-by-n boolean matrix T = {tij}, in
which the element in the ith row and the jth column is 1 if
there exists a nontrivial path (i.e., directed path of a
positive length) from the ith vertex to the jth vertex;
otherwise, tij is 0.
Transitive closure of a digraph can be generated using
depth-first-search or breadth-first-search.

Starting either of these traversals from the ith vertex gives the
information about the vertices reachable from the ith vertex.
Performing such a traversal for every vertex as a starting point
yields the entire transitive closure.
However, this method traverses the same digraph several times.
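The traversal-based method just described can be sketched as a depth-first search launched from every vertex (a sketch, assuming 0-indexed adjacency lists):

```python
def transitive_closure_dfs(adj):
    """adj[v]: list of vertices w with an edge v -> w (vertices 0..n-1).
    T[i][j] = 1 iff there is a nontrivial directed path from i to j."""
    n = len(adj)
    T = [[0] * n for _ in range(n)]

    def dfs(start, v):
        for w in adj[v]:
            if not T[start][w]:
                T[start][w] = 1
                dfs(start, w)

    for i in range(n):        # one full traversal per starting vertex
        dfs(i, i)
    return T

# Digraph with edges a->b, b->d, d->a, d->c (a=0, b=1, c=2, d=3)
print(transitive_closure_dfs([[1], [3], [], [0, 2]]))
```

Each vertex triggers its own traversal, which is exactly the repeated work that Warshall's algorithm below avoids.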


Warshall’s Algorithm
Construct the transitive closure of a digraph
with n vertices through a series of n-by-n
boolean matrices:

R(0), ..., R(k-1), R(k), ..., R(n)
Each of these matrices provides certain
information about directed paths in the
digraph.

Specifically, the element in the ith row and jth
column of matrix R(k) (i, j = 1, 2, ..., n; k = 0,
1, ..., n) is equal to 1 if and only if there exists a
directed path of a positive length from the ith vertex
to the jth vertex with each intermediate vertex
numbered not higher than k.
Thus, the series starts with R(0) , which does not allow any
intermediate vertices in its paths; hence, R(0) is nothing
other than the adjacency matrix of the digraph.
R(1) contains the information about paths that can use the
first vertex as an intermediate vertex; thus, it may contain
more 1's than R(0).
In general, each subsequent matrix in the series has
one more vertex to use as an intermediate vertex.
The last matrix in the series, R (n) , reflects paths that can
use all n vertices of the digraph as intermediate and hence
is the digraph’s transitive closure.
Thus the algorithm computes all the elements of each
matrix R(k) from its immediate predecessor R(k−1) in
series.
Let rij(k), the element in the ith row and jth column of
matrix R(k), be equal to 1. This means that there exists a
path from the ith vertex vi to the jth vertex vj with each
intermediate vertex numbered not higher than k:
vi, [a list of intermediate vertices each numbered not higher than k], vj
This implies the following rule:
If an element rij is 1 in R(k−1), it remains 1 in R(k). If
an element rij is 0 in R(k−1), it has to be changed to 1
in R(k) if and only if the element in its row i and
column k and the element in its column j and row k
are both 1's in R(k−1).
From the R(0) matrix:
r41 = 1 and r12 = 1, so r42 = 1 in matrix R(1).
From the R(1) matrix:
r12 = 1 and r24 = 1, so r14 = 1 in matrix R(2);
r42 = 1 and r24 = 1, so r44 = 1 in matrix R(2).
From the R(2) matrix: no new 1's arise through vertex 3, so R(3) = R(2) (no change).
From the R(3) matrix:
r14 = 1 and r41 = 1, so r11 = 1 in matrix R(4);
r14 = 1 and r43 = 1, so r13 = 1;
r24 = 1 and r41 = 1, so r21 = 1;
r24 = 1 and r42 = 1, so r22 = 1.
Analysis
Its time efficiency is Θ(n3)
We can make the algorithm run faster by treating
matrix rows as bit strings and employing the bitwise or
operation available in most modern computer
languages.

Space efficiency
Although the algorithm as described uses separate matrices
for recording intermediate results, this can be avoided by
updating a single matrix in place.
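Warshall's algorithm itself is short; the sketch below updates a single matrix in place, which is why the separate R(k) matrices can be avoided. The adjacency matrix corresponds to the digraph reconstructed from the worked example (edges 1→2, 2→4, 4→1, 4→3):

```python
def warshall(A):
    """Transitive closure of a digraph given by boolean adjacency matrix A."""
    n = len(A)
    R = [row[:] for row in A]        # R starts as R(0), the adjacency matrix
    for k in range(n):               # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if R[i][k] and R[k][j]:
                    R[i][j] = 1      # rule: rik = rkj = 1 forces rij = 1
    return R

# Adjacency matrix for edges 1->2, 2->4, 4->1, 4->3 (rows/columns 0..3)
A = [[0, 1, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 0],
     [1, 0, 1, 0]]
for row in warshall(A):
    print(row)
```

Vertex 3 (row index 2) has no outgoing edges, so its row stays all zeros, matching the "no change" step in the example.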
TB1: Floyd’s Algorithm: All-Pairs
Shortest Paths
Problem definition:
Given a weighted connected graph (undirected or directed),
the all-pairs shortest paths problem asks to find the
distances (i.e., the lengths of the shortest paths) from each
vertex to all other vertices.
Applications:
– Communications, transportation networks, and operations
research.
– pre-computing distances for motion planning in computer
games.
The lengths of shortest paths are recorded in an n × n matrix
D called the distance matrix: the element dij in the ith row
and the j th column of this matrix indicates the length of the
shortest path from the ith vertex to the j th vertex.
We can generate the distance matrix with an algorithm that is very
similar to Warshall’s algorithm.
Algorithm is applicable to both undirected and directed weighted
graphs provided that they do not contain a cycle of a negative length.
This algorithm is called Floyd’s algorithm
Floyd’s Algorithm computes the distance
matrix of a weighted graph with n vertices
through a series of n-by-n matrices:
D(0), ..., D(k-1), D(k), ..., D(n)

Each of these matrices contains the lengths of the shortest


paths with certain constraints on the paths considered for
the matrix in question. Specifically, the element dij (k) in the
ith row and the j th column of matrix D(k) (i, j = 1, 2, . . . , n,
k = 0, 1, . . . , n) is equal to the length of the shortest path
among all paths from the ith vertex to the j th vertex with
each intermediate vertex, if any, numbered not higher
than k.
The series starts with D(0) , which does not allow any
intermediate vertices in its paths; hence, D(0) is the
weight matrix of the graph. The last matrix in the series,
D(n) , contains the lengths of the shortest paths among
all paths that can use all n vertices as intermediate and
hence is the distance matrix being sought.
The algorithm computes all the elements of each
matrix D(k) from its immediate predecessor D(k−1) in the series.
Let dij(k) be the element in the ith row and the jth column
of matrix D(k). This means that dij(k) is equal to the
length of the shortest path among all paths from the ith
vertex vi to the jth vertex vj with their intermediate
vertices numbered not higher than k:
vi, [a list of intermediate vertices each numbered not higher than k], vj
We can partition all such paths into two disjoint subsets:
1. Those that do not use the kth vertex vk as an intermediate
vertex; their intermediate vertices are numbered not higher
than k − 1, and the shortest of them is, by definition of our
matrices, of length dij(k−1).
2. Those that use vertex vk as an intermediate vertex
exactly once. All such paths have the following form:
vi, [vertices numbered ≤ k − 1], vk, [vertices numbered ≤ k − 1], vj
The length of the shortest path in this subset is dik(k−1) + dkj(k−1).
Taking into account the lengths of the shortest paths
in both subsets:
dij(k) = min { dij(k−1), dik(k−1) + dkj(k−1) } for k ≥ 1, with dij(0) = wij
This is the recurrence used in Floyd’s algorithm.
To put it another way, the element in row i and
column j of the current distance matrix D(k−1) is
replaced by the sum of the element in the same
row i and column k and the element in the same column j
and row k if and only if that sum is smaller
than its current value.
Analysis
Its time efficiency is Θ(n3), similar to Warshall’s algorithm.
Shortest path lengths with vertex a as an intermediate vertex:
d23 = min {d23, d21 + d13} = min {∞, 2 + 3} = 5
d43 = min {d43, d41 + d13} = min {∞, 6 + 3} = 9
Shortest path lengths with vertices a and b as intermediate vertices:
d31 = min {d31, d32 + d21} = min {∞, 7 + 2} = 9
Shortest path lengths with vertices a, b and c as intermediate vertices:
d12 = min {d12, d13 + d32} = min {∞, 3 + 7} = 10
d14 = min {d14, d13 + d34} = min {∞, 3 + 1} = 4
d24 = min {d24, d23 + d34} = min {∞, 5 + 1} = 6
d42 = min {d42, d43 + d32} = min {∞, 9 + 7} = 16
Shortest path lengths with vertices a, b, c and d as intermediate vertices:
d31 = min {d31, d34 + d41} = min {9, 1 + 6} = 7
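The recurrence translates directly into three nested loops; the weight matrix below is reconstructed from the edge weights appearing in the worked steps (d21 = 2, d13 = 3, d32 = 7, d34 = 1, d41 = 6):

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest path lengths; W is the weight matrix."""
    n = len(W)
    D = [row[:] for row in W]        # D starts as D(0), the weight matrix
    for k in range(n):               # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# Vertices a, b, c, d as rows/columns 0..3 (reconstructed example)
W = [[0,   INF, 3,   INF],
     [2,   0,   INF, INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)
```

The resulting matrix reproduces the values computed step by step above, e.g. d23 = 5, d42 = 16 and the final d31 = 7.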
Consider the following directed weighted graph- Using Floyd Warshall
Algorithm, find the shortest path distance between every pair of vertices.

Solution-

Step-01:

•Remove all self-loops and parallel edges (keeping the lowest-weight edge) from the graph.
•In the given graph, there are neither self-loops nor parallel edges.
Step-02:

•Write the initial distance matrix.


•It represents the distance between every pair of vertices in the form of given weights.
•For diagonal elements (representing self-loops), distance value = 0.
•For vertices having a direct edge between them, distance value = weight of that edge.
•For vertices having no direct edge between them, distance value = ∞.

Initial distance matrix for the given graph is-


Step-03:

Using Floyd Warshall Algorithm, write the following 4 matrices-


8.2 TB1: The Knapsack Problem and
Memory Functions
Algorithm MFKnapsack(i, j)
Implements the memory function method for the knapsack
problem

Input: A nonnegative integer i indicating the number of the first


items being considered and a nonnegative integer j indicating the
knapsack capacity

Output: The value of an optimal feasible subset of the first i items

Note: Uses as global variables input arrays Weights[1..n],
Values[1..n], and table F[0..n, 0..W] whose entries are initialized
with −1's, except for row 0 and column 0, which are initialized with 0's
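A sketch of MFKnapsack as described; the instance data at the bottom is assumed for illustration:

```python
def mf_knapsack(i, j, weights, values, F):
    """Memory-function knapsack: value of an optimal subset of the first i
    items with capacity j. F starts at -1 except row 0 and column 0 (zeros),
    so each needed subproblem is computed exactly once."""
    if F[i][j] < 0:                              # not computed yet
        if j < weights[i - 1]:                   # item i (1-based) does not fit
            value = mf_knapsack(i - 1, j, weights, values, F)
        else:
            value = max(mf_knapsack(i - 1, j, weights, values, F),
                        values[i - 1]
                        + mf_knapsack(i - 1, j - weights[i - 1],
                                      weights, values, F))
        F[i][j] = value
    return F[i][j]

# Illustrative instance (assumed, not from the text)
weights, values, W = [2, 1, 3, 2], [12, 10, 20, 15], 5
n = len(weights)
F = [[0 if i == 0 or j == 0 else -1 for j in range(W + 1)]
     for i in range(n + 1)]
print(mf_knapsack(n, W, weights, values, F))  # 37
```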
Analysis
The classic dynamic programming approach works
bottom up: it fills a table with solutions to all smaller
subproblems, each of which is solved only once.

Drawback: Some unnecessary subproblems are also


solved.

The time efficiency and space efficiency of this algorithm


are both in Θ(nW).

The time needed to find the composition of an optimal


solution is
in O(n+W).
Bellman-Ford Algorithm
Like other Dynamic Programming Problems, the
algorithm calculates shortest paths in bottom-up manner.
It first calculates the shortest distances
– for the shortest paths which have at-most one edge in the
path.
– Then, it calculates shortest paths with at-most 2 edges,
and so on.
In general, iteration i finds all shortest paths that use at most i
edges.
Let distL[u] be the length of a shortest path from the
source vertex v to vertex u under the constraint that
the path contains at most L edges. Then,
dist1[u] = cost[v, u], 1 ≤ u ≤ n
When there are no cycles of negative length, we can
limit our search for shortest paths to paths with at most
n − 1 edges. Hence, distn-1[u] is the length of an
unrestricted shortest path from v to u. Our goal, then, is
to compute distn-1[u] for all u.
This can be done using the dynamic
programming methodology.
First, we make the following observations.
For a shortest path from v to u with at most k (k > 1)
edges, there are two possibilities:
1. The shortest path has no more than k − 1 edges; then
distk[u] = distk-1[u].
2. The shortest path has exactly k edges; then
it is made up of a shortest path from v to some vertex j
followed by the edge (j, u).
The path from v to j has k − 1 edges, and its length is distk-1[j].
Here all vertices i such that the edge (i, u) is in the graph are
candidates for j.
Since we are interested in a shortest path, the i that
minimizes distk-1[i] + cost[i, u] is the correct value for j.
Hence, distk[u] = min { distk-1[u], min over i of (distk-1[i] + cost[i, u]) }.
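This recurrence is usually implemented by relaxing every edge in each of the n − 1 iterations; the edge list below is an assumed example (it contains negative edge weights but no negative-length cycle):

```python
import math

def bellman_ford(n, edges, src):
    """Single-source shortest paths; edges is a list of (i, u, cost) triples.
    After iteration k, dist[u] is at most the length of a shortest path
    from src to u that uses at most k+1 edges, so n-1 iterations suffice
    when there is no negative-length cycle."""
    dist = [math.inf] * n
    dist[src] = 0
    for _ in range(n - 1):
        for i, u, c in edges:
            if dist[i] + c < dist[u]:        # relax edge (i, u)
                dist[u] = dist[i] + c
    return dist

# Assumed 7-vertex example graph
edges = [(0, 1, 6), (0, 2, 5), (0, 3, 5), (1, 4, -1), (2, 1, -2),
         (2, 4, 1), (3, 2, -2), (3, 5, -1), (4, 6, 3), (5, 6, 3)]
print(bellman_ford(7, edges, 0))  # [0, 1, 3, 5, 0, 4, 3]
```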
Example : Using Bellman ford Algorithm to find
shortest path from source1 to all other vertices
2.Example : Using Bellman ford Algorithm to find the shortest path from
source1 to all other vertices
Analysis
The overall complexity is
– O(ne) for an adjacency list
– O(n3) for an adjacency matrix
TB2: 5.9 The Travelling Salesperson Problem (TSP)
The TSP is to visit each of the cities exactly once,
starting from a certain one and returning to the same
city.
For a given graph G(V, E) with edge costs cij > 0, the TSP is to
find the shortest tour that includes every vertex in V.
The TSP finds many applications in a variety of situations:
– routing a postal van to pick up mail from mailboxes
located at n different sites
– using a robot arm to tighten the nuts on some piece of
machinery on an assembly line
– a production environment in which several commodities are
manufactured on the same set of machines
In general, in the TSP a tour is a simple path that starts
and ends at vertex 1. Every tour consists of an
edge ⟨1, k⟩ for some k ∈ V − {1} and a path from
vertex k to vertex 1. The path from vertex k to vertex
1 goes through each vertex in V − {1, k} exactly
once.
If the tour is optimal, then the path from k to 1
must be the shortest k-to-1 path going through all
vertices in V − {1, k}. Hence, the principle of
optimality holds.
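The principle of optimality leads to the Held-Karp recurrence g(k, S) = min over m in S of g(m, S − {m}) + c(m, k), with g(k, ∅) = c(1, k). A sketch, with vertex 1 renamed vertex 0 and an assumed cost matrix:

```python
from itertools import combinations

def tsp(cost):
    """Held-Karp DP. g[(k, S)] = length of the shortest path from vertex 0
    to vertex k passing through every vertex of S exactly once.
    Returns the length of an optimal tour starting and ending at vertex 0."""
    n = len(cost)
    g = {(k, frozenset()): cost[0][k] for k in range(1, n)}
    for size in range(1, n - 1):
        for S in combinations(range(1, n), size):
            Sf = frozenset(S)
            for k in range(1, n):
                if k in Sf:
                    continue
                g[(k, Sf)] = min(g[(m, Sf - {m})] + cost[m][k] for m in S)
    full = frozenset(range(1, n))
    return min(g[(k, full - {k})] + cost[k][0] for k in range(1, n))

# Assumed 4-city cost matrix
cost = [[0, 10, 15, 20],
        [5,  0,  9, 10],
        [6, 13,  0, 12],
        [8,  8,  9,  0]]
print(tsp(cost))  # 35
```

This solves O(n·2^n) subproblems, far fewer than the (n − 1)! tours of brute force, though still exponential.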
