
UNIT IV: Dynamic Programming

Syllabus: Basic strategy, multistage graphs, all pairs shortest path-Floyd Warshall algorithm, single source shortest
paths-Bellman Ford algorithm, optimal binary search trees, traveling salesman problem, longest common
subsequence problem, chained matrix multiplication.

Basic strategy:
Dynamic programming is a technique that breaks a problem into subproblems and saves their results for
future use, so that we do not need to compute the same result again. The property that an optimal
overall solution can be built from optimal solutions to subproblems is known as the optimal
substructure property. The main use of dynamic programming is to solve optimization problems, i.e.,
problems in which we are trying to find the minimum or maximum solution. Dynamic programming
is guaranteed to find the optimal solution of a problem if one exists.
The definition of dynamic programming says that it is a technique for solving a complex problem by first
breaking it into a collection of simpler subproblems, solving each subproblem just once, and then storing
their solutions to avoid repetitive computation.

How does the dynamic programming approach work?


The following are the steps that dynamic programming follows:
i. It breaks down the complex problem into simpler subproblems.
ii. It finds the optimal solution to these subproblems.
iii. It stores the results of the subproblems. The process of storing the results of
subproblems is known as memoization.
iv. It reuses those results so that the same subproblem is not calculated more than once.
v. Finally, it calculates the result of the complex problem.
The above five steps are the basic steps of dynamic programming. Dynamic programming is
applicable to problems that have the following properties:
overlapping subproblems and optimal substructure. Here, optimal
substructure means that the solution of an optimization problem can be obtained by combining the
optimal solutions of its subproblems.
In the case of dynamic programming, the space complexity increases because we store the
intermediate results, but the time complexity decreases.

Approaches of dynamic programming:


There are two approaches to dynamic programming:
• Top-down approach
• Bottom-up approach
1. Top-down approach
The top-down approach follows the memoization technique, while the bottom-up approach follows the
tabulation method. Here memoization equals recursion plus caching: recursion means
calling the function itself, while caching means storing the intermediate results.
Advantages
• It is very easy to understand and implement.
• It solves the subproblems only when they are required.
• It is easy to debug.
Disadvantages
• It uses the recursion technique, which occupies memory in the call stack. When the
recursion is too deep, a stack overflow can occur.
• It occupies more memory, which degrades overall performance.
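To make the memoization idea concrete, here is a minimal sketch in Python (the Fibonacci example is our own illustration, not from the syllabus): the function recurses as usual but caches each result, so every subproblem is computed only once.

# Top-down DP: recursion + caching (memoization).
# Fibonacci is a hypothetical illustration, not an example from these notes.
def fib(n, memo=None):
    if memo is None:
        memo = {}                      # cache of already-solved subproblems
    if n <= 1:                         # base cases
        return n
    if n not in memo:                  # solve each subproblem only once
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                     # reuse the cached result

print(fib(40))  # 102334155, in O(n) calls instead of O(2^n)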
2. Bottom-Up approach
The bottom-up approach is another technique that can be used to implement dynamic
programming. It uses the tabulation technique. It
solves the same kind of problems but removes the recursion: with no recursion there is no
stack overflow issue and no overhead of recursive function calls. In the tabulation technique, we solve the
subproblems and store the results in a table (matrix).
• Starts with the smallest subproblems and gradually builds up to the final solution.
• Fills a table with solutions to subproblems in a bottom-up manner.
• Suitable when the number of subproblems is small and the optimal solution can be directly
computed from the solutions to smaller subproblems.
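For comparison, the same hypothetical Fibonacci computation written bottom-up: a table is filled from the smallest subproblem upward, with no recursion at all.

# Bottom-up DP: tabulation, no recursion and hence no stack-overflow risk.
def fib_tab(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)              # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):          # build up from smaller subproblems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_tab(40))  # 102334155, same answer as the top-down version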

Multistage Graphs(Shortest Path):


A multistage graph is a directed, weighted graph in which the nodes can be divided into a set of stages
such that all edges go from one stage to the next stage only (in other words, there is no edge between vertices of the
same stage, and no edge from a vertex of the current stage to a previous stage).
The vertices of a multistage graph are divided into n disjoint subsets S = { S1, S2, S3,
..., Sn }, where S1 contains the source and Sn contains the sink (destination). The cardinality of S1 and Sn is
1, i.e., |S1| = |Sn| = 1.
Given a multistage graph, a source, and a destination, we need to find the shortest path from source to
destination. By convention, we consider the source to be at stage 1 and the destination at the last stage.
Following is an example graph we will consider (the figure is not reproduced here).
Now there are various strategies we can apply:
• The brute-force method: find all possible paths between source and destination and then
pick the minimum. That is the worst possible strategy.
• Dijkstra's algorithm for single-source shortest paths. This method finds the shortest paths from
the source to all other nodes, which is not required in this case, so it takes more time than necessary, and it
does not even exploit the special structure that a multistage graph has.
• Simple greedy method: at each node, choose the shortest outgoing edge. If we apply this
approach to the example graph given above, we get the solution 1 + 4 + 18 = 23. But a quick
look at the graph shows that much shorter paths than 23 are available. So the greedy method fails!
• The best option is dynamic programming. We need to find the optimal substructure, the recursive
equations, and the overlapping subproblems.

Time Complexity:
• O(E) where E is the number of edges in the graph.
• If E is not given, assume O(n²) for dense graphs.
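A minimal sketch of the dynamic-programming solution: working backwards from the sink, the shortest distance from each vertex is the minimum, over its outgoing edges, of the edge cost plus the already-computed distance of the edge's target. The small graph used here is a made-up example, since the notes' figure is not reproduced.

# Multistage graph shortest path via backward dynamic programming.
# graph[u] = list of (v, cost) edges; vertices are numbered so every edge
# goes from a lower-numbered vertex (earlier stage) to a higher one.
def multistage_shortest(graph, source, sink):
    INF = float('inf')
    dist = {v: INF for v in graph}
    dist[sink] = 0
    for u in sorted(graph, reverse=True):           # reverse stage order
        for v, cost in graph[u]:
            dist[u] = min(dist[u], cost + dist[v])  # optimal substructure
    return dist[source]

# Hypothetical 3-stage graph: source 1, sink 4.
graph = {1: [(2, 1), (3, 2)], 2: [(4, 4)], 3: [(4, 1)], 4: []}
print(multistage_shortest(graph, 1, 4))  # 3, via the path 1 -> 3 -> 4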

All pairs shortest path-Floyd Warshall Algorithm:


The Floyd-Warshall algorithm, named after its creators Robert Floyd and Stephen Warshall, is a
fundamental algorithm in computer science and graph theory. It is used to find the shortest paths between
all pairs of nodes in a weighted graph. This algorithm is highly efficient and can handle graphs with both
positive and negative edge weights, making it a versatile tool for solving a wide range of network and
connectivity problems.
The Floyd-Warshall algorithm is an all-pairs shortest path algorithm, unlike Dijkstra and Bellman-Ford,
which are single-source shortest path algorithms. The algorithm works for both directed and
undirected weighted graphs. But it does not work for graphs with negative cycles (where the sum of
the edges in a cycle is negative). It follows the dynamic programming approach, checking every possible path
going via every possible node in order to calculate the shortest distance between every pair of nodes.

Working of Floyd-Warshall Algorithm:


The algorithm works as follows:
i. Initialize a distance matrix D where D[i][j] represents the shortest distance between vertex i and
vertex j.
ii. Set the diagonal entries of the matrix to 0, and all other entries to infinity.
iii. For every edge (u, v) in the graph, update the distance matrix to reflect the weight of the edge:
D[u][v] = weight(u, v).
iv. For every vertex k in the graph, consider all pairs of vertices (i, j) and check if the path
from i to j through k is shorter than the current best path. If it is, update the distance matrix:
D[i][j] = min(D[i][j], D[i][k] + D[k][j]).

v. After all iterations, the matrix D will contain the shortest path distances between all pairs of
vertices.
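A compact sketch of the steps above (the 4-vertex adjacency matrix is a made-up example; INF marks the absence of an edge):

# Floyd-Warshall: all-pairs shortest paths, O(V^3) time, O(V^2) space.
INF = float('inf')

def floyd_warshall(w):
    n = len(w)
    d = [row[:] for row in w]          # steps i-iii: copy of the weight matrix
    for k in range(n):                 # step iv: allow k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d                           # step v: all-pairs shortest distances

w = [[0, 5, INF, 10],
     [INF, 0, 3, INF],
     [INF, INF, 0, 1],
     [INF, INF, INF, 0]]
for row in floyd_warshall(w):
    print(row)   # e.g. the 0 -> 3 distance drops from 10 to 9 (0 -> 1 -> 2 -> 3)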

Complexity Analysis of Floyd Warshall Algorithm:


• Time Complexity: O(V^3), where V is the number of vertices in the graph; we run three nested
loops, each of size V.
• Auxiliary Space: O(V^2), to create a 2-D matrix in order to store the shortest distance for each pair
of nodes.
Note: The Floyd-Warshall algorithm is better suited to dense graphs than to sparse graphs (graphs with few edges).
Advantages of the Floyd-Warshall algorithm include:
i. It can find the shortest path between all pairs of vertices in a weighted graph, including
graphs with negative edge weights.
ii. It is a simple, easy-to-implement algorithm, making it accessible to developers of all skill levels.
iii. It works on both dense and sparse graphs.
iv. It has a time complexity of O(N^3), which is acceptable for most real-world
applications.
v. It can be used to detect negative weight cycles in a graph.
Disadvantages of the Floyd-Warshall algorithm include:
i. It requires a matrix of size N^2 to store the intermediate results, which may be prohibitively large
for extremely large graphs.
ii. It is not the most efficient algorithm for the all-pairs shortest path problem on certain types of
graphs, such as sparse graphs or graphs with non-negative edge weights.
iii. It may not be suitable for real-time applications or applications with strict memory constraints, as it
can take a long time to compute the shortest paths in very large graphs.
iv. It can be less intuitive than other algorithms, such as Dijkstra's algorithm or the
Bellman-Ford algorithm, making it harder to understand for some developers.
Applications of the Floyd-Warshall Algorithm:
The Floyd-Warshall algorithm is a dynamic programming algorithm used for finding the shortest path
between all pairs of vertices in a weighted graph. Here are some of its applications:
i. Routing Algorithms: The Floyd-Warshall algorithm is widely used in routing algorithms,
such as in the OSPF (Open Shortest Path First) protocol used in Internet routing. It can
help determine the shortest route between two nodes in a network and is useful in finding the least
congested path.
ii. Airline Networks: The Floyd-Warshall algorithm can also be used in airline networks to
find the shortest path between two cities with the lowest cost. It can help airlines plan their
routes and minimize fuel costs.
iii. Traffic Networks: The algorithm is used to find the shortest path between two points in a traffic
network. It can help reduce congestion and improve the flow of traffic in urban areas.
iv. Computer Networks: The Floyd-Warshall algorithm is likewise used in computer networks to
determine the shortest path between two hosts in a network. It can help minimize network
latency and improve overall network performance.
v. Game Development: The algorithm can be used in game development to find the shortest
path between two objects in a game world. It is useful in games where the player
needs to navigate through complex environments, such as a maze or a city.

Single Source Shortest paths-Bellman Ford Algorithm:


Imagine you have a map with different cities connected by roads, each road having a certain distance.
The Bellman–Ford algorithm is like a guide that helps you find the shortest path from one city to all other
cities, even if some roads have negative lengths. It’s like a GPS for computers, useful for figuring out the
quickest way to get from one point to another in a network. In this article, we’ll take a closer look at how
this algorithm works and why it’s so handy in solving everyday problems.

Bellman-Ford is a single source shortest path algorithm that determines the shortest path
between a given source vertex and every other vertex in a graph. This algorithm can be used
on both weighted and unweighted graphs.
The Bellman-Ford algorithm, like Dijkstra's algorithm, is guaranteed to find the shortest path in a
graph. Although Bellman-Ford is slower than Dijkstra's algorithm, it is capable of handling graphs
with negative edge weights, which makes it more versatile. The shortest path cannot be found if there
exists a negative cycle in the graph. If we continue to go around the negative cycle an infinite number of
times, then the cost of the path will continue to decrease (even though the length of the path is increasing).
As a result, Bellman-Ford is also capable of detecting negative cycles, which is an important feature.

The idea behind Bellman Ford Algorithm:


The Bellman-Ford algorithm’s primary principle is that it starts with a single source and calculates the
distance to each node. The distance is initially unknown and assumed to be infinite, but as time goes on,
the algorithm relaxes those paths by identifying a few shorter paths. Hence it is said that Bellman-Ford is
based on “Principle of Relaxation“.
Principle of Relaxation of Edges for Bellman-Ford:
• It states that for a graph with N vertices, all the edges should be relaxed N-1 times to compute
the single-source shortest paths.
• To detect whether a negative cycle exists, relax all the edges one more time; if the
shortest distance of any node still decreases, a negative cycle exists. In short, if we
relax the edges N times and there is any change in the shortest distance of any node between the
(N-1)th and Nth relaxation, then a negative cycle exists; otherwise it does not.
Rule of this algorithm:
We keep relaxing all the edges (n - 1) times, where n = number of vertices.
Consider the below graph (the figure is not reproduced here):
As we can observe, some of the edge weights in the graph are negative. The graph contains 6
vertices, so we will relax all the edges 6 - 1 = 5 times. The loop will
iterate 5 times to get the correct answer. If the loop is iterated more than 5 times, the answer will still
be the same, i.e., there would be no further change in the distances of the vertices.
Relaxing means:
if d(u) + c(u, v) < d(v) then
    d(v) = d(u) + c(u, v)
To find the shortest path in the above graph, the first step is to note down all the edges, which are given below:
(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)
Let's consider the source vertex to be 'A'; therefore, the distance value at vertex A is 0 and the distance value
at all the other vertices is infinity, as shown below:

Since the graph has six vertices, it will have five iterations.
First iteration
Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing formula:
d(u) = 0
d(v) = ∞
c(u, v) = 6
Since (0 + 6) is less than ∞, we update
d(v) = d(u) + c(u, v)
d(v) = 0 + 6 = 6
Therefore, the distance of vertex B becomes 6. Relaxing the remaining edges of the first iteration in the
same way (in the order listed above) gives the distances A: 0, B: 1, C: 3, D: 5, E: 5, F: 4.
Second iteration:
In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is greater than
1 so there would be no updation in the vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation in the vertex C.
The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the vertex D.
The next edge is (B, E). Since (1 - 1) equals to 0 which is less than 5 so update:
d(v) = d(u) + c(u, v)
d(E) = d(B) +c(B , E)
=1-1=0
The next edge is (C, E). Since (3 + 3) equals to 6 which is greater than 5 so there would be no updation in
the vertex E.
The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the vertex C.
The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the vertex F.
The next edge is (E, F). Since (5 + 3) equals to 8 which is greater than 4 so there would be no updation in
the vertex F.
The next edge is (C, B). Since (3 - 2) equals to 1` so there would be no updation in the vertex B.

Third iteration
We perform the same steps as in the previous iterations and observe that there is no further
update in the distances of the vertices.
The final distances of the vertices are:
A: 0
B: 1
C: 3
D: 5
E: 0
F: 3
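The whole procedure traced above fits in a few lines. The sketch below uses the same edge list and source as the example; the negative-cycle check is the extra relaxation pass described earlier.

# Bellman-Ford: relax every edge V-1 times, then once more to detect
# a negative cycle (edge list taken from the worked example above).
def bellman_ford(vertices, edges, source):
    INF = float('inf')
    dist = {v: INF for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):     # relax all edges n-1 times
        for u, v, c in edges:
            if dist[u] + c < dist[v]:      # the relaxation condition
                dist[v] = dist[u] + c
    for u, v, c in edges:                  # one extra pass: negative cycle?
        if dist[u] + c < dist[v]:
            raise ValueError("graph contains a negative cycle")
    return dist

edges = [('A','B',6), ('A','C',4), ('A','D',5), ('B','E',-1), ('C','E',3),
         ('D','C',-2), ('D','F',-1), ('E','F',3), ('C','B',-2)]
print(bellman_ford("ABCDEF", edges, 'A'))
# {'A': 0, 'B': 1, 'C': 3, 'D': 5, 'E': 0, 'F': 3}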
Complexity Analysis of Bellman-Ford Algorithm:
• Time Complexity when graph is connected:
• Best Case: O(E), when the distance array is unchanged between the 1st and 2nd relaxation
passes, we can simply stop further processing
• Average Case: O(V*E)
• Worst Case: O(V*E)
• Time Complexity when graph is disconnected:
• All the cases: O(E*(V^2))
• Auxiliary Space: O(V), where V is the number of vertices in the graph.
Bellman Ford’s Algorithm Applications:
• Network Routing: Bellman-Ford is used in computer networking to find the shortest paths in
routing tables, helping data packets navigate efficiently across networks.
• GPS Navigation: GPS devices use Bellman-Ford to calculate the shortest or fastest routes between
locations, aiding navigation apps and devices.
• Transportation and Logistics: Bellman-Ford’s algorithm can be applied to determine the optimal
paths for vehicles in transportation and logistics, minimizing fuel consumption and travel time.
• Game Development: Bellman-Ford can be used to model movement and navigation within virtual
worlds in game development, where different paths may have varying costs or obstacles.
• Robotics and Autonomous Vehicles: The algorithm aids in path planning for robots or autonomous
vehicles, considering obstacles, terrain, and energy consumption.
Drawback of Bellman Ford’s Algorithm:
• Bellman-Ford algorithm will fail if the graph contains any negative edge cycle.

Optimal Binary Search Trees:


An Optimal Binary Search Tree (OBST), also known as a Weighted Binary Search Tree, is a binary search
tree that minimizes the expected search cost. In a binary search tree, the search cost is the number of
comparisons required to search for a given key.
In an OBST, each node is assigned a weight that represents the probability of its key being searched for.
The sum of all the weights in the tree is 1.0. The expected search cost of the tree is the sum, over all
nodes, of the node's weight multiplied by the number of comparisons needed to reach it (its depth + 1).
To construct an OBST, we start with a sorted list of keys and their probabilities. We then build a table that
contains the expected search cost for all possible sub-trees of the original list. We can use dynamic
programming to fill in this table efficiently. Finally, we use this table to construct the OBST.
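A sketch of that table-filling step, using only key-access probabilities (unsuccessful-search probabilities are omitted for brevity, and the probabilities in the example are made up). cost[i][j] holds the minimum expected search cost of a BST built from keys i..j; picking a root r splits the range, and every key in the range costs one extra comparison, hence the added sum of probabilities.

# Optimal BST cost via dynamic programming over key ranges.
def optimal_bst_cost(p):                   # p[i] = probability of key i
    n = len(p)
    cost = [[0.0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = p[i]                  # single-key subtree
    for length in range(2, n + 1):         # grow the subtree size
        for i in range(n - length + 1):
            j = i + length - 1
            best = float('inf')
            for r in range(i, j + 1):      # try each key in i..j as root
                left = cost[i][r - 1] if r > i else 0.0
                right = cost[r + 1][j] if r < j else 0.0
                best = min(best, left + right)
            cost[i][j] = best + sum(p[i:j + 1])
    return cost[0][n - 1]

print(optimal_bst_cost([0.1, 0.2, 0.4, 0.3]))  # expected number of comparisons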
The OBST is a useful data structure in applications where the keys have different probabilities of being
searched for. It can be used to improve the efficiency of searching and retrieval operations in databases,
compilers, and other computer programs.
Traveling Salesman Problem:
Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible
route that visits every city exactly once and returns to the starting point. Note the difference
between Hamiltonian Cycle and TSP. The Hamiltonian cycle problem is to find if there exists a tour that
visits every city exactly once. Here we know that Hamiltonian Tour exists (because the graph is complete)
and in fact, many such tours exist, the problem is to find a minimum weight Hamiltonian Cycle.

For example, consider the graph shown in the figure (not reproduced here). A TSP tour in the graph is 1-2-4-3-
1. The cost of the tour is 10 + 25 + 30 + 15 = 80. The problem is a famous NP-hard problem: there is
no known polynomial-time solution for it. The following are different solutions to the traveling
salesman problem.
Naive Solution:
1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of the remaining cities.
3) Calculate the cost of every permutation and keep track of the minimum-cost permutation.
4) Return the permutation with minimum cost.
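The naive solution above translates directly into code. The cost matrix below is chosen to be consistent with the 1-2-4-3-1 = 80 example (cities renumbered 0-3):

from itertools import permutations

# Naive TSP: fix city 0 as the start/end and try all (n-1)! orderings.
def tsp_naive(dist):
    n = len(dist)
    best = float('inf')
    for perm in permutations(range(1, n)):   # step 2: all permutations
        cost, prev = 0, 0
        for city in perm:                    # step 3: cost of this tour
            cost += dist[prev][city]
            prev = city
        cost += dist[prev][0]                # return to the starting city
        best = min(best, cost)               # keep the minimum
    return best                              # step 4

dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_naive(dist))  # 80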

Longest Common subsequence Problem:


Given two strings, S1 and S2, the task is to find the length of the longest common subsequence. If there
is no common subsequence, return 0. A subsequence is a string generated from the original string by
deleting 0 or more characters without changing the relative order of the remaining characters. For
example, the subsequences of "ABC" are "", "A", "B", "C", "AB", "AC", "BC" and "ABC". In general, a
string of length n has 2^n subsequences.
The LCS problem has important applications, such as the diff utility (finding the difference between two files) that we use in
our day-to-day software development.
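A standard tabulation sketch for the LCS length (the example strings are a common textbook pair, not from these notes): dp[i][j] is the LCS length of the first i characters of S1 and the first j characters of S2.

# LCS length via a (m+1) x (n+1) DP table.
def lcs_length(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if s1[i - 1] == s2[j - 1]:            # last characters match:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend the common part
            else:                                 # otherwise drop one char
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4 (one LCS is "BCBA")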

Chained Matrix Multiplication:


Given the dimensions of a sequence of matrices in an array arr[], where the dimension of the ith matrix
is (arr[i-1] x arr[i]), the task is to find the most efficient way to multiply these matrices together such that
the total number of element multiplications is minimum. When two matrices of size m x n and n x p are
multiplied, they generate a matrix of size m x p, and the number of multiplications performed is m*n*p.
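A sketch of the standard dynamic program for this problem (the dimensions in the example are made up): m[i][j] is the minimum number of scalar multiplications needed to compute the product of matrices i..j, minimized over every split point k.

# Matrix-chain multiplication: matrix k has dimensions arr[k-1] x arr[k].
def matrix_chain_order(arr):
    n = len(arr) - 1                       # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):         # length of the matrix chain
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):          # split: (i..k) x (k+1..j)
                cost = m[i][k] + m[k + 1][j] + arr[i - 1] * arr[k] * arr[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

# Hypothetical chain: 10x20, 20x30, 30x40 matrices.
print(matrix_chain_order([10, 20, 30, 40]))  # 18000 = (10*20*30) + (10*30*40)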
