Module 3 (Aad)

The document discusses various divide-and-conquer and greedy algorithms. It explains concepts like merge sort, matrix multiplication using Strassen’s algorithm, and greedy algorithms like the fractional knapsack problem and Kruskal’s minimum spanning tree algorithm.

MODULE III

DIVIDE & CONQUER AND GREEDY STRATEGY


The Control Abstraction of Divide and Conquer- 2-way Merge sort, Strassen’s
Algorithm for Matrix Multiplication-Analysis.
The Control Abstraction of Greedy Strategy- Fractional Knapsack Problem,
Minimum Cost Spanning Tree Computation- Kruskal’s Algorithms - Analysis,
Single Source Shortest Path Algorithm - Dijkstra’s Algorithm-Analysis.

DIVIDE & CONQUER TECHNIQUE


Divide and Conquer is one of the best-known general algorithm design
techniques. Divide-and-conquer algorithms work according to the following
general plan:
 Break the problem into smaller sub-problems, ideally of about the same
size.
 Solve each of the sub-problems recursively.
 Combine the solutions to obtain the solution to the original problem.

3 STEPS:
1. DIVIDE
• The problem is divided into smaller sub-problems
• Each sub-problem is similar to the original problem
• Sub-problems are roughly of the same size
2. CONQUER
• The sub-problems are solved recursively
3. COMBINE
• The solutions of the sub-problems are combined to get the final solution
CONTROL ABSTRACTION
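The control-abstraction figure is not reproduced here; below is a generic sketch in Python, with small, solve, divide and combine supplied as problem-specific hooks (the function and parameter names are illustrative, not from the original):

```python
def d_and_c(p, small, solve, divide, combine):
    """Generic divide-and-conquer control abstraction.

    small(p)   -- True if problem p is small enough to solve directly
    solve(p)   -- direct solution for a small problem
    divide(p)  -- split p into a list of sub-problems
    combine(s) -- merge the list of sub-solutions s
    """
    if small(p):
        return solve(p)
    subproblems = divide(p)
    return combine([d_and_c(q, small, solve, divide, combine)
                    for q in subproblems])

# Example: summing a list by repeatedly halving it
total = d_and_c(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
# total == 23
```

Any of the example problems below (binary search, merge sort, etc.) can be expressed by filling in these four hooks.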

COMPLEXITY CALCULATION
• Assume the size of problem P is n and the sizes of the k sub-problems are n1, n2, n3, …, nk
• The computing time of D and C is described by the recurrence relation:

T(n) = g(n), when n is small enough to solve directly
T(n) = T(n1) + T(n2) + … + T(nk) + f(n), otherwise

where g(n) is the time to solve small inputs directly and f(n) is the time for dividing P and combining the sub-solutions.
• When the problem is split into a sub-problems, each of size n/b, the general recurrence for the complexity of divide & conquer algorithms is:

T(n) = aT(n/b) + f(n)
EXAMPLE PROBLEMS
• BINARY SEARCH
• 2 WAY MERGE SORT
• FINDING MAXIMUM & MINIMUM
• STRASSEN’S MATRIX MULTIPLICATION
• QUICK SORT
• CONVEX HULL

2 WAY MERGE SORT


 Merge Sort is a Divide and Conquer algorithm.
 It divides the input array into two halves, calls itself for the two halves,
and then merges the two sorted halves.
 It is one of the classic sorting techniques and a textbook example of a
recursive algorithm.
Divide:
 Given a sequence of n elements(array), a[1]….a[n]
 Split into 2 sets a[1]….a[n/2] and a[n/2 +1] … a[n]
Conquer:
 After splitting the arrays into two halves, the next step is to conquer.
 In this step, we individually sort both of the a[1]….a[n/2] and a[n/2 +1] …
a[n]
 If we have not reached the base case, we again follow the
same procedure, i.e., we further split these subarrays and
sort them separately.
Combine:
 After we successfully get our sorted sub arrays a[1]….a[n/2] and a[n/2 +1]
… a[n] we merge them back to form a new sorted a[1]….a[n]
ALGORITHM:
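The original pseudocode figure is not reproduced here; the divide/conquer/combine steps described above can be sketched in Python as follows (function names are illustrative):

```python
def merge_sort(a):
    """Sort list a with 2-way merge sort; returns a new sorted list."""
    if len(a) <= 1:               # base case: already sorted
        return a[:]
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # conquer: sort a[1..n/2]
    right = merge_sort(a[mid:])   # conquer: sort a[n/2+1..n]
    return merge(left, right)     # combine the sorted halves

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])          # append whichever half remains
    out.extend(right[j:])
    return out
```

For example, merge_sort([5, 2, 9, 1, 5, 6]) returns [1, 2, 5, 5, 6, 9].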
ANALYSIS
The recurrence for merge sort is T(n) = 2T(n/2) + cn (two half-size
sub-problems plus a linear-time merge), which solves to T(n) = O(n log n)
in the best, average, and worst cases.

STRASSEN'S MATRIX MULTIPLICATION


 Let A & B be two n x n matrices.
 The product C = A x B is also an n x n matrix whose (i,j)th element is formed by
multiplying the elements of the ith row of A by the corresponding elements of
the jth column of B and adding the products.
 To compute C(i,j) we need n multiplications.
 Since C has n² elements, the algorithm’s complexity is θ(n³).
 The divide & conquer strategy can be used to find the product of two n x n matrices.
 Let A and B be 2 matrices of size n (n a power of 2).
 A and B are partitioned into 4 square matrices, each of size n/2 x n/2.
 Then the product is calculated as follows:

C11 = A11B11 + A12B21    C12 = A11B12 + A12B22
C21 = A21B11 + A22B21    C22 = A21B12 + A22B22

 To compute AB, we need 8 multiplications of n/2 x n/2 matrices and 4
additions of n/2 x n/2 matrices.
 Two n/2 x n/2 matrices can be added in cn² time, so the computing time is given
by:

T(n) = b, for n ≤ 2
T(n) = 8T(n/2) + cn², for n > 2

 Solving the above recurrence we get T(n) = O(n³).
 There is no improvement over the conventional method.

Strassen’s Method
 Volker Strassen discovered a method to compute C = AB using only 7
multiplications and 18 additions/subtractions.
 This method involves computing 7 n/2 x n/2 intermediate matrices named
P, Q, R, S, T, U & V:

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 − B22)
S = A22(B21 − B11)
T = (A11 + A12)B22
U = (A21 − A11)(B11 + B12)
V = (A12 − A22)(B21 + B22)

 These sub-matrices are computed using 7 multiplications and 10
additions/subtractions. The Cij’s are then calculated using 8 more
additions/subtractions:

C11 = P + S − T + V
C12 = R + T
C21 = Q + S
C22 = P + R − Q + U

ANALYSIS
The resulting recurrence is T(n) = b for n ≤ 2 and T(n) = 7T(n/2) + an²
for n > 2, which solves to T(n) = O(n^log₂ 7) ≈ O(n^2.81).

Example:

Q: Multiply the 2 x 2 matrices A and B, where A11 = 1, A12 = 3, A21 = 5,
A22 = 7 and B11 = 8, B12 = 4, B21 = 6, B22 = 2.
 P = (A11+A22) (B11+B22) =(1 + 7)(8+2) = 80


 Q= (A21 + A22) B11 = (5+7)8 = 96
 R = A11(B12 - B22) = 1(4 – 2) =2
 S = A22(B21 – B11) = 7(6 – 8) = -14
 T = (A11 + A12)B22 = (1+3)2= 8
 U = (A21 – A11) (B11 + B12) = (5 - 1)(8 + 4) = 48
 V = (A12 – A22) (B21 +B22) = (3 - 7) (6 + 2) = -32
 C11 = P + S – T + V = 80 +(-14) - 8 + (-32) = 26
 C12 = R + T = 2 + 8 =10
 C21 = Q + S =96 + (-14) = 82
 C22 = P + R – Q + U = 80 + 2 – 96 + 48 = 34
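The seven products and four combinations can be checked with a short Python sketch (shown for the 2 × 2 base case only; the full algorithm applies the same formulas recursively to n/2 × n/2 blocks — the function name is illustrative):

```python
def strassen_2x2(A, B):
    """Strassen's 7-multiplication scheme for 2x2 matrices.
    A and B are given as [[a11, a12], [a21, a22]]."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven products P..V
    p = (a11 + a22) * (b11 + b22)
    q = (a21 + a22) * b11
    r = a11 * (b12 - b22)
    s = a22 * (b21 - b11)
    t = (a11 + a12) * b22
    u = (a21 - a11) * (b11 + b12)
    v = (a12 - a22) * (b21 + b22)
    # Combine into the four entries of C
    c11 = p + s - t + v
    c12 = r + t
    c21 = q + s
    c22 = p + r - q + u
    return [[c11, c12], [c21, c22]]

# The worked example above:
C = strassen_2x2([[1, 3], [5, 7]], [[8, 4], [6, 2]])
# C == [[26, 10], [82, 34]]
```

The result matches the conventional product of the two matrices.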

GREEDY ALGORITHMS
 Algorithm design technique to solve optimization problems.
 The goal is to find BEST SOLUTION.
 Builds up solution piece by piece.
 Chooses the next piece that offers the most obvious and immediate benefit.
2 Characteristics:
 Greedy choice property
 A globally optimal solution can be arrived at by making a
locally optimal (greedy) choice at each step.
 Optimal sub-structure
 An optimal solution of the problem contains optimal
solutions to its sub-problems.
Components:
1. A candidate set − A solution is created from this set.
2. A selection function − Used to choose the best candidate to be added to the
solution.
3. A feasibility function − Used to determine whether a candidate can be
used to contribute to the solution.
4. An objective function − Used to assign a value to a solution or a partial
solution.
5. A solution function − Used to indicate whether a complete solution has
been reached.
Control abstraction
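The control-abstraction figure is not reproduced here; below is a generic sketch in Python with the selection and feasibility functions passed in as hooks (the names and the coin-change example are illustrative, not from the original):

```python
def greedy(candidates, select, feasible):
    """Generic greedy control abstraction.

    select(cs)     -- selection function: pick the best remaining candidate
    feasible(s, c) -- feasibility function: True if c can extend solution s
    """
    solution = []
    candidates = list(candidates)
    while candidates:
        c = select(candidates)        # choose the most promising candidate
        candidates.remove(c)
        if feasible(solution, c):     # keep it only if it stays feasible
            solution.append(c)
    return solution

# Example: make change for 28 greedily from coins {10, 5, 1}
coins = [10, 10, 10, 5, 5, 5, 1, 1, 1, 1, 1, 1, 1, 1]
picked = greedy(
    coins,
    select=max,                                  # largest coin first
    feasible=lambda s, c: sum(s) + c <= 28,      # never overshoot 28
)
# picked == [10, 10, 5, 1, 1, 1]
```

The example problems below plug different candidate sets and selection/feasibility functions into this same skeleton.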

EXAMPLE PROBLEMS
 Minimum spanning tree
 Knapsack problem
 Single source Shortest path
 Optimal storage on Tapes
 Tree vertex splitting
 Job sequencing

FRACTIONAL KNAPSACK PROBLEM


 Knapsack Problem in combinatorial optimization
 Given a set of items, each with a weight and a value, determine the
number of each item to include in a collection so that the total weight is
less than or equal to a given limit and the total value is as large as
possible.
 2 Types:
 Fractional Knapsack problem
 0/1 Knapsack Problem
 Given n objects and a knapsack with capacity m. Each object i has weight
wi and profit pi. A fraction xi (0 ≤ xi ≤ 1) of object i can be placed into the
knapsack, earning a profit of pixi. The objective is to obtain a filling of the
knapsack that maximizes the total profit earned within the total weight limit m.
 Formally the problem can be stated as:

maximize Σ pixi
subject to Σ wixi ≤ m and 0 ≤ xi ≤ 1, for 1 ≤ i ≤ n

 A feasible solution is any set x1, x2, …, xn satisfying the above conditions.
Example
 Consider the following instance n=3, m=20, (P1, P2, P3 ) =( 25,24,15), (w1,
w2, w3 ) =( 18,15,10)
 There are 4 feasible solutions

Best method:
1. Calculate the profit/weight ratio pi/wi.
2. Include the object which has the maximum profit per unit of capacity used, i.e.,
the object with the maximum pi/wi value.
3. Calculate the fraction taken (x), the profit so far (p), and the remaining capacity.
4. Repeat steps 2 & 3 until the maximum capacity has been reached.
Example
 Consider a Knapsack problem instance with N=7,M=15, P={
10,5,15,7,6,18,3}, W = {2,3,5,7,1,4,1}
 Step 1: Calculate pi/wi
 pi/wi = {10/2, 5/3, 15/5, 7/7, 6/1, 18/4, 3/1}
 pi/wi = {5, 1.67, 3, 1, 6, 4.5, 3}
 Select objects in the decreasing order of pi/wi value
 Object order for selection 5, 1, 6, 3, 7, 2, 4
 5th object x5 =1 , p5 = 6, tp=6, remaining capacity= 15-1 =14
 1st object x1 =1 , p1 = 10, tp =6+10=16, remaining capacity= 14-2 =12
 6th object x6 =1 , p6 = 18, tp =16+18=34, remaining capacity= 12-4 =8
 3rd object x3 =1 , p3 = 15, tp =34+15=49, remaining capacity= 8-5 =3
 7th object x7 =1 , p7 = 3, tp =49+3= 52, remaining capacity= 3-1 = 2
 2nd object x2 =2/3 , p2 = 5, tp = 52+5x2/3 = 55.3, remaining capacity= 2-2=0
 Capacity has reached the maximum, so no more objects can be added. Set the
fraction taken of the remaining objects to 0. Here only one object (4)
remains; its fraction is set to 0.
 So the fractions are x = {1, 2/3, 1, 0, 1, 1, 1}
 Total profit is 55.3

ALGORITHM:
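The pseudocode figure is not reproduced here; below is a Python sketch of the greedy procedure traced above (the function name and return convention are illustrative):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: returns (total_profit, fractions)."""
    n = len(profits)
    # Consider objects in decreasing order of profit/weight ratio
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)
    x = [0.0] * n
    total = 0.0
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        take = min(weights[i], remaining)  # whole object, or the fraction that fits
        x[i] = take / weights[i]
        total += profits[i] * x[i]
        remaining -= take
    return total, x

# The worked example above: N=7, M=15
tp, x = fractional_knapsack([10, 5, 15, 7, 6, 18, 3],
                            [2, 3, 5, 7, 1, 4, 1], 15)
# tp ≈ 55.3 and x == [1, 2/3, 1, 0, 1, 1, 1]
```

This reproduces the trace above: objects 5, 1, 6, 3, 7 are taken fully, 2/3 of object 2 is taken, and object 4 is left out.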
MINIMUM COST SPANNING TREE
A tree T is said to be a spanning tree of a connected graph G if T is a sub-
graph of G and T contains all the vertices of G. For instance, the sub-graph in
heavy lines in the figure below is a spanning tree of the graph shown.

A spanning tree is sometimes referred to as a skeleton or scaffolding of G.
Since spanning trees are the largest trees among all trees in G, a spanning
tree is also called a maximal tree subgraph or maximal tree of G. Spanning
trees are defined only for connected graphs, and a graph can have multiple
spanning trees. A spanning tree is minimally connected: adding any edge to it
creates a circuit, and removing any edge disconnects it. An edge in the
spanning tree is called a branch, and an edge of the graph that is not in the
spanning tree is called a chord.
MINIMUM SPANNING TREE
 Given a connected and undirected graph, a spanning tree of that graph is
a sub-graph that is a tree and connects all the vertices together.
 A single graph can have many different spanning trees.
 A minimum spanning tree (MST) or minimum weight spanning tree for a
weighted, connected and undirected graph is a spanning tree with weight
less than or equal to the weight of every other spanning tree.
 The weight of a spanning tree is the sum of weights given to each edge of
the spanning tree.
 There are two algorithms to find MST: PRIMS and KRUSKAL

KRUSKAL’S ALGORITHM
 Kruskal's algorithm to find the minimum cost spanning tree uses the
greedy approach.
 This algorithm treats the graph as a forest in which every node starts as an
individual tree.
 A tree is connected to another only if the connecting edge has the least cost
among all available options and does not violate the MST properties.
ALGORITHM:
1. Remove all loops and parallel edges from the given graph. In case of
parallel edges, keep the one with the least cost and remove the others.
2. Arrange all edges in increasing order of weight.
3. Add the edge with the least weight, provided it does not form a circuit
with the edges already chosen.
4. Repeat step 3 until all vertices are connected.

Greedy Characteristics of Kruskal’s Algorithm:


1. A candidate set − Set of edges in increasing order of weight.
2. A selection function − Choosing the edge with smallest weight.
3. A feasibility function − Check whether it creates a cycle.
4. An objective function − Adding edge to MST.
5. A solution function − Check if all vertices included.
Analysis of Kruskal’s Algorithm:
In the above algorithm, E is the set of all edges in G. The only functions we
wish to perform on this set are:
 Determine an edge with minimum cost
 Delete this edge
Both these functions can be performed efficiently if the edges in E are maintained
as a sorted list. If the edges are maintained as a min-heap, then the next edge
to consider can be obtained in O(log |E|) time, and construction of the heap
takes O(|E|) time.
So the total time taken is O(|E| log |E|), where |E| is the number of edges.
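The algorithm and its feasibility check can be sketched in Python; the union-find structure below plays the role of the cycle test, and the edge list in the usage example is illustrative:

```python
def kruskal(n, edges):
    """Kruskal's MST. Vertices are 0..n-1; edges are (weight, u, v) tuples.
    Returns (total_cost, list of chosen (u, v, weight) edges)."""
    parent = list(range(n))           # union-find forest: each vertex its own tree

    def find(x):
        """Find the set representative of x (with path halving)."""
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, cost = [], 0
    for w, u, v in sorted(edges):     # candidates in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                  # feasible: does not create a cycle
            parent[ru] = rv           # union the two trees
            mst.append((u, v, w))
            cost += w
            if len(mst) == n - 1:     # solution complete: all vertices connected
                break
    return cost, mst

# Small illustrative graph: 4 vertices, 4 weighted edges
cost, mst = kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)])
# cost == 7; edge (0, 2) of weight 3 is rejected because it would close a cycle
```

Sorting the edge list up front corresponds to the O(|E| log |E|) bound derived above.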

EXAMPLE:
SINGLE SOURCE SHORTEST PATH ALGORITHM
In a shortest-paths problem, we are given a weighted, directed graph G = (V,
E), with weight function w: E → R mapping edges to real-valued weights. The
weight of a path p = (v0, v1, …, vk) is the sum of the weights of its constituent
edges:

w(p) = Σ w(vi−1, vi), for i = 1 to k

We define the shortest-path weight from u to v by δ(u,v) = min{w(p) : p is a
path from u to v} if there is a path from u to v, and δ(u,v) = ∞ otherwise.
A shortest path from vertex s to vertex t is then defined as any path p with
weight w(p) = δ(s,t).
In the Single Source Shortest Paths Problem, we are given a graph G = (V, E) and we
want to find a shortest path from a given source vertex s ∈ V to every vertex v
∈ V.
SHORTEST PATH ALGORITHMS
Let G be a graph and let s, t be two specified vertices of G. In a shortest
path problem, we have to find a path from s to t, if there is any, with the
smallest total weight. Such a path, if it exists, is called a shortest
path from s to t. There are different types of shortest path problems:
1. Shortest path between 2 specified vertices.

2. Shortest path between all pairs of vertices.

3. Shortest path from a specified vertex to all others.

4. Shortest path between specified vertices that passes through specified
vertices.

Shortest path between 2 specified vertices

The problem of finding a shortest path from a specified vertex s to another
specified vertex t can be stated as follows:
A simple weighted digraph G of n vertices is described by an n by n matrix
D = [dij], where
dij = length of the directed edge from vertex i to vertex j, dij ≥ 0
dii = 0
dij = ∞, if there is no edge from i to j

In general, dij ≠ dji, and the triangle inequality need not be satisfied.
That is, dij + djk may be less than dik. The length of a directed path P is
defined as the sum of the lengths of the edges in P. The problem is to find the
shortest path and its length from the starting vertex s to the terminal vertex t.
The most efficient algorithm for finding such a shortest path is Dijkstra’s
algorithm.

Dijkstra’s algorithm is an algorithm for finding the shortest paths
between nodes in a graph. Dijkstra’s algorithm labels the vertices of the given
digraph. At each stage in the algorithm some vertices have permanent labels
and others temporary labels. The algorithm begins by assigning a permanent
label 0 to the starting vertex s and a temporary label ∞ to the remaining n − 1
vertices. In each iteration another vertex gets a permanent label, according to
the following rules:
1. Every vertex j that is not permanently labelled gets a new temporary label
whose value is given by:

min[old label of j, (old label of i) + dij]

where i is the latest vertex permanently labelled in the previous iteration
and dij is the direct distance between vertices i and j. If i and j are not
joined by an edge, then dij = ∞.
2. The smallest value among all the temporary labels becomes the permanent
label of the corresponding vertex. In case of a tie, select any one.
Steps 1 and 2 are repeated alternately until the destination vertex t gets a
permanent label. The first vertex to be permanently labelled is at a distance
of zero from s. The second vertex to get a permanent label is the vertex closest
to s. From the remaining n − 2 vertices, the next one to be permanently labelled
is the second closest vertex to s, and so on. The permanent label of each vertex
is the shortest distance of that vertex from s.

Example of Dijkstra’s algorithm

Consider the graph in Fig. 26. We have to find the distance from A to H.

Figure 26:

1. The initial labelling with A as starting vertex is given by the following


matrix. Here the vertices which are directly connected to A are given
a label value, which is the weight of the edge connecting that vertex to
A. For example the weight of edge AB=8, AC=2 and AD=5. All other
vertices are labelled ∞.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞

2. Now C is the vertex with the smallest label. So we are making the
value of C as the next permanent label.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
3. Now calculate the distance from A to all other vertices through C. If
there is an edge from C to any other vertex, add the permanent label
of C with the weight of the edge from C. Check the newly calculated
value is smaller than the previous value. If so, replace the previous
value with new smaller value. For example: there is an edge CD of
weight 2. The permannet label of C is 2. So adding these we get the
value 4. The previous value of vertex D was 5. The newly calculated
value is smaller than the previous value; so replace 5 by 4.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
4. Now D is the vertex with the smallest label. So we are making the
value of D as the next permanent label.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
5. Now calculate the distance from A to all other vertices through C and
D. If there is an edge from D to any other vertex, add the permanent
label of D with the weight of the edge from D. Check whether the newly
calculated value is smaller than the previous value. If so, replace the
previous value with the new smaller value.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
6. Now E is the vertex with the smallest label. So we are making the
value of E as the next permanent label.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
7. Now calculate the distance from A to all other vertices through C, D
and E. If there is an edge from E to any other vertex, add the permanent
label of E with the weight of the edge from E. Check whether the newly
calculated value is smaller than the previous value. If so, replace the
previous value with the new smaller value.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
8. Now B and G are the vertices with the smallest label. We can arbitrarily
choose either; we choose B. So we are making the value of B the
next permanent label.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
9. Now calculate the distance from A to all other vertices through C, D,
E and B. If there is an edge from B to any other vertex, add the
permanent label of B with the weight of the edge from B. Check whether the
newly calculated value is smaller than the previous value. If so, replace
the previous value with the new smaller value.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
B 0 6 2 4 5 8 6 ∞
10. Now G is the vertex with the smallest label. So we are making the
value of G as the next permanent label.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
B 0 6 2 4 5 10 6 ∞
11. Now calculate the distance from A to all other vertices through C, D,
E, B and G. If there is an edge from G to any other vertex, add the
permanent label of G with the weight of the edge from G. Check whether the
newly calculated value is smaller than the previous value. If so, replace
the previous value with the new smaller value.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
B 0 6 2 4 5 8 6 ∞
G 0 6 2 4 5 8 6 12
12. Now F is the vertex with the smallest temporary label. So we are making
the value of F the next permanent label.

vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
B 0 6 2 4 5 8 6 ∞
G 0 6 2 4 5 8 6 12
13. Now calculate the distance from A to all other vertices through C, D,
E, B, G and F. If there is an edge from F to any other vertex, add the
permanent label of F with the weight of the edge from F. Check whether the
newly calculated value is smaller than the previous value. If so, replace
the previous value with the new smaller value.
vi A B C D E F G H
A 0 8 2 5 ∞ ∞ ∞ ∞
C 0 8 2 4 7 ∞ ∞ ∞
D 0 6 2 4 5 10 7 ∞
E 0 6 2 4 5 10 6 ∞
B 0 6 2 4 5 8 6 ∞
G 0 6 2 4 5 8 6 12
F 0 6 2 4 5 8 6 11
14. Now H holds the smallest (and only) remaining temporary label, 11, which
becomes its permanent label. So the distance from A to H is 11.
The algorithm is represented by the flowchart in Fig. 27.
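The labelling procedure above is equivalent to the standard priority-queue formulation of Dijkstra's algorithm, sketched below in Python with heapq. The adjacency list for the example is reconstructed from the label tables above (Fig. 26 is not reproduced here), so treat the edge weights as an assumption:

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra's single-source shortest paths with a min-heap.
    adj: {vertex: [(neighbour, weight), ...]}; returns {vertex: distance}."""
    dist = {v: float('inf') for v in adj}   # temporary labels: all ∞
    dist[s] = 0                             # permanent label 0 for the source
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)          # smallest label becomes permanent
        if d > dist[u]:
            continue                        # stale heap entry; u already done
        for v, w in adj[u]:                 # relax the temporary labels via u
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Edge list inferred from the label tables in the worked example
graph = {
    'A': [('B', 8), ('C', 2), ('D', 5)],
    'B': [('A', 8), ('D', 2), ('F', 2)],
    'C': [('A', 2), ('D', 2), ('E', 5)],
    'D': [('A', 5), ('B', 2), ('C', 2), ('E', 1), ('F', 6), ('G', 3)],
    'E': [('C', 5), ('D', 1), ('G', 1)],
    'F': [('B', 2), ('D', 6), ('H', 3)],
    'G': [('D', 3), ('E', 1), ('H', 6)],
    'H': [('F', 3), ('G', 6)],
}
dist = dijkstra(graph, 'A')
# dist['H'] == 11, matching the final table above
```

The final distances agree with the permanent labels in the last table: C = 2, D = 4, E = 5, B = 6, G = 6, F = 8, H = 11.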

Shortest path between all pairs of vertices

This is the problem of finding the shortest path between all n(n−1) ordered
pairs of vertices in a graph. The standard algorithm for this problem is the
Floyd-Warshall algorithm.
ALGORITHM:

EXAMPLE:
ANALYSIS
EXAMPLE:
