Mod 4
Dynamic Programming
TB2: Chapter 5
Dynamic Programming: General Method with
examples, Multistage Graph
5.1 Dynamic Programming
Dynamic programming is a technique for solving problems with
overlapping subproblems.
– Subproblems arise from a recurrence relating a solution to a
given problem with solutions of its smaller subproblems of the
same type.
– Rather than solving overlapping subproblems again and
again, it suggests solving each of the smaller subproblems
only once and recording the results in a table from which a
solution to the original problem can then be obtained (a
minimal sketch of this idea follows below).
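As a minimal sketch of the record-the-results idea, the memoized computation of the Fibonacci numbers below stores each subproblem value in a table so it is computed only once (Fibonacci is used purely as an illustration; it is not an example from the text):

# Memoization: each subproblem F(k) is solved once and its value
# recorded in a table for reuse.
def fib(n, table=None):
    if table is None:
        table = {0: 0, 1: 1}            # base cases recorded up front
    if n not in table:
        table[n] = fib(n - 1, table) + fib(n - 2, table)
    return table[n]

print(fib(10))   # 55, computed with O(n) subproblem evaluations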
Dynamic programming is an algorithm design method that can be
used when the solution to a problem can be viewed as the result of
a sequence of decisions that yields an optimal solution.
Consider the following examples --
0/1 knapsack problem
The 0/1 knapsack problem is similar to the knapsack problem discussed
earlier, except that the solution vector values xi are restricted to a value
of either 0 or 1.
Here KNAP(l, j, y) represents the problem:
maximize Σ pi xi for l ≤ i ≤ j
subject to Σ wi xi ≤ y for l ≤ i ≤ j, where y is the knapsack capacity, and
xi = 0 or 1 for l ≤ i ≤ j.
Let y1, y2, …, yn be an optimal sequence of 0/1 values for
x1, x2, …, xn, respectively.
If y1 = 0, then y2, y3, …, yn must constitute an optimal sequence for the
problem KNAP(2, n, m); if it does not, then y1, y2, …, yn is not an optimal
sequence for KNAP(1, n, m).
If y1 = 1, then y2, y3, …, yn must constitute an optimal sequence for the
problem KNAP(2, n, m - w1). This is the principle of optimality on which the
dynamic programming solution is built.
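A sketch of the corresponding tabulation for KNAP(1, n, m) is given below; it follows the recurrence above, and the profits, weights and capacity used in the final call are illustrative values only, not data from the text:

# 0/1 knapsack by dynamic programming: f[i][y] is the best profit using
# items 1..i with remaining capacity y (the tabulation behind KNAP(1, n, m)).
def knapsack_01(p, w, m):
    n = len(p)
    f = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for y in range(m + 1):
            f[i][y] = f[i - 1][y]                      # choose x_i = 0
            if w[i - 1] <= y:                          # choose x_i = 1 if it fits
                f[i][y] = max(f[i][y], f[i - 1][y - w[i - 1]] + p[i - 1])
    return f[n][m]

# Illustrative data only: profits, weights, knapsack capacity.
print(knapsack_01(p=[1, 2, 5], w=[2, 3, 4], m=7))   # -> 7 (items 2 and 3)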
Multistage Graph (Forward Approach)
Complexity is Θ(|V| + |E|).
Example:
According to the forward-approach formula, Cost(i, j) = min {c(j, l) + Cost(i + 1, l)}
taken over all edges (j, l), we calculate Cost(i, j) using the following steps
(the graph has K = 5 stages, with vertex 1 as the source and vertex 9 as the sink).
Step 1: Cost(K-2, j)
In this step, three nodes (nodes 4, 5, 6) are selected as j. Hence, we have three
options to choose the minimum cost at this step.
Cost(3, 4) = min {c(4, 7) + Cost(7, 9), c(4, 8) + Cost(8, 9)} = 7
Cost(3, 5) = min {c(5, 7) + Cost(7, 9), c(5, 8) + Cost(8, 9)} = 5
Cost(3, 6) = min {c(6, 7) + Cost(7, 9), c(6, 8) + Cost(8, 9)} = 5
Step 2: Cost(K-3, j)
Two nodes are selected as j because at stage K - 3 = 2 there are two nodes, 2 and
3. So i = 2 and j = 2, 3.
Cost(2, 2) = min {c(2, 4) + Cost(4, 8) + Cost(8, 9), c(2, 6) + Cost(6, 8) + Cost(8, 9)} = 8
Cost(2, 3) = min {c(3, 4) + Cost(4, 8) + Cost(8, 9), c(3, 5) + Cost(5, 8) + Cost(8, 9),
c(3, 6) + Cost(6, 8) + Cost(8, 9)} = 10
Step 3: Cost (K-4, j)
Cost(1, 1) = min {c(1, 2) + Cost(2, 6) + Cost(6, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 5) + Cost(5, 8) + Cost(8, 9),
c(1, 3) + Cost(3, 6) + Cost(6, 8) + Cost(8, 9)} = 12
(the last option, through vertex 6 after vertex 3, costs 13)
Hence, the path having the minimum cost is 1→ 3→ 5→ 8→ 9.
[Figure: the multistage graph with the minimum-cost path 1 → 3 → 5 → 8 → 9 highlighted; cost = 12]
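A forward-approach sketch is given below. The function computes Cost values from the sink back towards the source, exactly as in the steps above; the small instance passed to it is illustrative only, since the edge weights of the example graph are not fully recoverable from the figure:

# Multistage graph, forward approach: cost[j] = min over edges (j, l) of
# c(j, l) + cost[l], computed from the last stage back towards the source.
def multistage_forward(n, edges, source=1, sink=None):
    # edges: dict mapping node j -> list of (l, c(j, l)) pairs
    sink = sink if sink is not None else n
    cost, decision = {sink: 0}, {}
    for j in range(n - 1, 0, -1):            # vertices assumed numbered by stage
        cost[j], decision[j] = min((c + cost[l], l) for l, c in edges.get(j, []))
    # recover the minimum-cost path from source to sink
    path, j = [source], source
    while j != sink:
        j = decision[j]
        path.append(j)
    return cost[source], path

# Hypothetical 5-vertex, 4-stage instance (weights are illustrative only).
edges = {1: [(2, 1), (3, 2)], 2: [(4, 3)], 3: [(4, 1)], 4: [(5, 2)]}
print(multistage_forward(5, edges))          # -> (5, [1, 3, 4, 5])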
Backward Approach
In the backward approach, the minimum cost bcost(i, j) of a path from the source to
vertex j in stage i is computed, proceeding from the source towards the sink.
Complexity is Θ(|V| + |E|).
[Figures: the example multistage graph with its edge weights, and the minimum-cost path 1 → 3 → 5 → 8 → 9 obtained by the backward approach; cost = 12]
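A matching backward-approach sketch follows; it fills in bcost values from the source towards the sink and is run on the same illustrative instance as the forward sketch above (again, not the textbook graph):

# Multistage graph, backward approach: bcost[j] = min over incoming edges
# (l, j) of bcost[l] + c(l, j), filled in from the source towards the sink.
def multistage_backward(n, edges, source=1, sink=None):
    sink = sink if sink is not None else n
    incoming = {}                            # incoming[j] = list of (l, c(l, j))
    for l, out in edges.items():
        for j, c in out:
            incoming.setdefault(j, []).append((l, c))
    bcost, decision = {source: 0}, {}
    for j in range(2, n + 1):                # vertices assumed numbered by stage
        bcost[j], decision[j] = min((bcost[l] + c, l) for l, c in incoming.get(j, []))
    # recover the path by walking predecessors back from the sink
    path, j = [sink], sink
    while j != source:
        j = decision[j]
        path.append(j)
    return bcost[sink], list(reversed(path))

edges = {1: [(2, 1), (3, 2)], 2: [(4, 3)], 3: [(4, 1)], 4: [(5, 2)]}
print(multistage_backward(5, edges))         # -> (5, [1, 3, 4, 5])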
TB1: 8.2 Transitive Closure
The transitive closure of a directed graph with n vertices
can be defined as the n-by-n boolean matrix T = {tij}, in
which the element in the ith row and the jth column is 1 if
there exists a nontrivial path (i.e., directed path of a
positive length) from the ith vertex to the jth vertex;
otherwise, tij is 0.
The transitive closure of a digraph can be generated using
depth-first search or breadth-first search.
Alternatively, Warshall's algorithm constructs the transitive closure
through a series of n-by-n boolean matrices
R(0), …, R(k-1), R(k), …, R(n).
Each of these matrices provides certain
information about directed paths in the
digraph.
Space efficiency
Although separate matrices for recording
intermediate results of the algorithm are used in the description,
they can be avoided: the computation can be carried out in place,
in a single matrix.
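A sketch of Warshall's algorithm along these lines, reusing one matrix for all of R(0), …, R(n); the small boolean adjacency matrix at the end is an illustrative input, not an example from the text:

# Warshall's algorithm: t[i][j] becomes 1 if there is a directed path of
# positive length from vertex i to vertex j.  One matrix is reused for all R(k).
def warshall(adj):
    n = len(adj)
    t = [row[:] for row in adj]              # R(0) = adjacency matrix
    for k in range(n):                       # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if t[i][k] and t[k][j]:
                    t[i][j] = 1
    return t

# Illustrative digraph: 0 -> 1 -> 2.
adj = [[0, 1, 0],
       [0, 0, 1],
       [0, 0, 0]]
print(warshall(adj))    # -> [[0, 1, 1], [0, 0, 1], [0, 0, 0]]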
TB1: Floyd's Algorithm: All-Pairs
Shortest Paths
Problem definition:
Given a weighted connected graph (undirected or directed),
the all-pairs shortest paths problem asks to find the
distances, i.e., the lengths of the shortest paths, from each
vertex to all other vertices.
Applications:
– Communications, transportation networks, and operations
research.
– Pre-computing distances for motion planning in computer
games.
The lengths of shortest paths are recorded in an n × n matrix
D called the distance matrix: the element dij in the ith row
and the jth column of this matrix indicates the length of the
shortest path from the ith vertex to the jth vertex.
We can generate the distance matrix with an algorithm that is very
similar to Warshall’s algorithm.
The algorithm is applicable to both undirected and directed weighted
graphs provided that they do not contain a cycle of negative length.
This algorithm is called Floyd's algorithm.
Floyd's algorithm computes the distance
matrix of a weighted graph with n vertices
through a series of n-by-n matrices:
D(0), …, D(k-1), D(k), …, D(n)
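A sketch of Floyd's algorithm, again updating a single matrix in place; INF marks a missing edge, and the small weight matrix at the end is illustrative only (it is not the graph of the example that follows):

# Floyd's algorithm: d[i][j] is the length of the shortest path from i to j
# that uses only intermediate vertices from {0, ..., k} after iteration k.
INF = float('inf')

def floyd(weights):
    n = len(weights)
    d = [row[:] for row in weights]          # D(0) = weight matrix
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Illustrative weight matrix: 0 on the diagonal, INF where no edge exists.
w = [[0,     3, INF],
     [INF,   0,   1],
     [4,   INF,   0]]
print(floyd(w))    # -> [[0, 3, 4], [5, 0, 1], [4, 7, 0]]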
Solution:
Step-01:
•Remove all the self-loops and parallel edges (keeping the lowest-weight edge) from the graph.
•In the given graph, there are neither self-loops nor parallel edges.
Step-02: