GREEDY ALG.
Greedy Method:
The greedy method is perhaps the most straightforward design
technique. It is used to determine a feasible solution that may or may not be optimal.
Feasible solution:- Most problems have n inputs, and a solution contains a subset of those inputs
that satisfies a given constraint (condition). Any subset that satisfies the constraint is called a
feasible solution.
Optimal solution:- The goal is to find a feasible solution that either maximizes or minimizes a given
objective function. A feasible solution that does this is called an optimal solution.
The greedy method suggests that an algorithm works in stages, considering one input at a
time. At each stage, a decision is made regarding whether a particular input is in an optimal
solution.
Greedy algorithms neither postpone nor revise their decisions (i.e., no backtracking).
Example: Kruskal’s minimal spanning tree. Select an edge from a sorted list, check, decide,
and never visit it again.
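The staged structure described above can be written as a small control abstraction. The sketch below is illustrative Python (not from the notes); select, feasible, and union are hypothetical helpers that a concrete problem would supply.

# An illustrative control abstraction for the greedy method (helper functions are hypothetical).
def greedy(inputs, select, feasible, union):
    solution = []                  # partial solution built up in stages
    candidates = list(inputs)
    while candidates:
        x = select(candidates)     # greedy choice for this stage
        candidates.remove(x)
        if feasible(solution, x):  # can x be added without violating the constraint?
            solution = union(solution, x)
        # the decision on x is never revised later (no backtracking)
    return solution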
Application of Greedy Method:
➢ Job sequencing with deadline
➢ 0/1 knapsack problem
➢ Minimum cost spanning trees
➢ Single source shortest path problem.
➢ To find the single source shortest paths (SSSP) in a directed graph G(V,E) there are two different algorithms:
➢ Bellman-Ford Algorithm
➢ Dijkstra’s algorithm
➢ Bellman-Ford Algorithm:- allows –ve weight edges in the input graph. This algorithm
either finds a shortest path from the source vertex S∈V to every other vertex v∈V, or detects a
–ve weight cycle in G reachable from S, in which case no solution exists. If no negative weight
cycle is reachable from the source vertex S∈V, it finds the shortest paths from S to every other vertex v∈V.
➢ Dijkstra’s algorithm:- allows only +ve weight edges in the input graph and finds a
shortest path from source vertex S∈V to every other vertex v∈V.
➢ As an optimization measure we can use the sum of the lengths of all paths so far
generated.
➢ If we have already constructed 'i' shortest paths, then using this optimization measure,
the next path to be constructed should be the one of next-smallest length.
➢ The greedy way to generate the shortest paths from v0 to the remaining vertices is to
generate these paths in non-decreasing order of path length.
➢ For this, first a shortest path to the nearest vertex is generated; then a shortest path to
the 2nd nearest vertex is generated, and so on (see the sketch after this list).
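A minimal Python sketch of Dijkstra's greedy strategy described above, assuming the graph is supplied as an adjacency list of (neighbor, weight) pairs with non-negative weights; vertices are settled in non-decreasing order of their shortest-path length.

import heapq

# A sketch of Dijkstra's algorithm (adjacency-list input assumed, non-negative weights).
def dijkstra(adj, source):
    # adj[u] is a list of (v, w) pairs for edges u -> v with weight w >= 0.
    dist = {source: 0}
    heap = [(0, source)]              # (current best length, vertex)
    settled = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in settled:
            continue
        settled.add(u)                # u is the next nearest unsettled vertex
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Small assumed example graph on vertices 0..3.
adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
print(dijkstra(adj, 0))    # {0: 0, 1: 3, 2: 1, 3: 4}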
Minimum cost spanning tree: For a given graph 'G' there can be more than one spanning
tree. If weights are assigned to the edges of 'G', then the spanning tree whose edges have the
minimum total cost is called the minimal spanning tree.
The greedy method suggests that a minimum cost spanning tree can be obtained by constructing
the tree edge by edge. The next edge to be included in the tree is the edge that results in a
minimum increase in the sum of the costs of the edges included so far.
There are two basic algorithms for finding minimum-cost spanning trees, and both are greedy
algorithms
→Prim’s Algorithm
→Kruskal’s Algorithm
Prim’s Algorithm: Start with any one node in the spanning tree, and repeatedly add the
cheapest edge, and the node it leads to, for which the node is not already in the spanning tree.
Notes:- At every stage a decision is made about an edge of minimum cost to be included
in the spanning tree, chosen from the edges that have exactly one endpoint among the vertices
already in the spanning tree; i.e., at every stage the sub-graph obtained is a tree.
The algorithm takes four arguments: E, the set of edges; cost, an n×n adjacency matrix where
cost(i,j) is a +ve integer if an edge exists between i & j and infinity otherwise; n, the number of
vertices; and t, an (n-1)×2 matrix which holds the edges of the spanning tree.
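A minimal Python sketch of Prim's algorithm in the spirit described above, assuming the graph is given as an n×n cost matrix with float('inf') where no edge exists; the tree grows one cheapest frontier edge at a time.

# A sketch of Prim's algorithm over an n x n cost matrix (representation assumed as above).
def prim(cost, n):
    INF = float("inf")
    in_tree = [False] * n
    in_tree[0] = True                 # start from an arbitrary vertex (vertex 0)
    tree_edges = []                   # the (n-1) x 2 result, built edge by edge
    for _ in range(n - 1):
        best = (INF, None, None)
        # pick the cheapest edge with exactly one endpoint already in the tree
        for u in range(n):
            if in_tree[u]:
                for v in range(n):
                    if not in_tree[v] and cost[u][v] < best[0]:
                        best = (cost[u][v], u, v)
        _, u, v = best
        if v is None:                 # no connecting edge: the graph is disconnected
            break
        in_tree[v] = True
        tree_edges.append((u, v))
    return tree_edges

# Illustrative 4-vertex cost matrix (assumed data).
INF = float("inf")
cost = [[INF, 1, 4, INF], [1, INF, 2, 6], [4, 2, INF, 3], [INF, 6, 3, INF]]
print(prim(cost, 4))    # [(0, 1), (1, 2), (2, 3)]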
E = { (1,2), (1,6), (2,3), (3,4), (4,5), (4,7), (5,6), (5,7), (2,7) }
n = {1,2,3,4,5,6,7}
Consider the above graph. Using Kruskal's method, the edges of this graph are considered
for inclusion in the minimum cost spanning tree in the order (1, 2), (3, 6), (4, 6), (2, 6), (1, 4),
(3, 5), (2, 5), (1, 5), (2, 3), and (5, 6). This corresponds to the cost sequence 10, 15, 20, 25,
30, 35, 40, 45, 50, 55. The first four edges are included in T. The next edge to be considered
is (1, 4). This edge connects two vertices already connected in T and so it is rejected. Next,
the edge (3, 5) is selected and that completes the spanning tree.
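A minimal Python sketch of Kruskal's method used in the walk-through above, assuming the input is a list of (cost, u, v) edges; edges are examined in non-decreasing cost order, and an edge is rejected when it would connect two vertices already connected in T (checked here with a simple union-find).

# A sketch of Kruskal's algorithm over a weighted edge list (input format assumed).
def kruskal(n, edges):
    # edges: list of (cost, u, v); vertices are numbered 1..n.
    parent = list(range(n + 1))

    def find(x):                      # representative of x's component
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for cost, u, v in sorted(edges):  # consider edges in non-decreasing cost order
        ru, rv = find(u), find(v)
        if ru != rv:                  # u and v are not yet connected in T: accept the edge
            parent[ru] = rv
            tree.append((u, v))
        # otherwise the edge would form a cycle and is rejected
        if len(tree) == n - 1:
            break
    return tree

# Edges of the 6-vertex example traced above, as (cost, u, v) triples.
edges = [(10, 1, 2), (15, 3, 6), (20, 4, 6), (25, 2, 6), (30, 1, 4),
         (35, 3, 5), (40, 2, 5), (45, 1, 5), (50, 2, 3), (55, 5, 6)]
print(kruskal(6, edges))    # [(1, 2), (3, 6), (4, 6), (2, 6), (3, 5)]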
Knapsack problem: Maximize ∑ pi·xi subject to ∑ wi·xi ≤ m, with 0 ≤ xi ≤ 1; i.e.,
maximize the sum of the values (profits) of the items placed in the knapsack so that the sum of their
weights does not exceed the knapsack's capacity m.
Greedy algorithm for knapsack
Algorithm GreedyKnapsack(m,n)
// p[1:n] and w[1:n] contain the profits and weights respectively,
// the n objects are ordered such that p[i]/w[i] >= p[i+1]/w[i+1], m → capacity of the knapsack and
// x[1:n] → the solution vector
{
    for i := 1 to n do x[i] := 0.0;     // initialize the solution vector
    U := m;                             // U is the remaining (unused) capacity
    for i := 1 to n do
    {
        if (w[i] > U) then break;       // object i no longer fits wholly
        x[i] := 1.0;                    // take object i completely
        U := U - w[i];
    }
    if (i <= n) then x[i] := U/w[i];    // take a fraction of the first object that does not fit
}
Consider a knapsack of capacity 20. Determine the optimum strategy for placing the objects
into the knapsack. The problem can be solved by the greedy approach wherein the inputs
are arranged according to a selection process (greedy strategy) and the problem is solved in
stages. The various greedy strategies for the problem could be: consider the objects in
decreasing order of profit, in increasing order of weight, or in decreasing order of the
profit/weight ratio.
Analysis:- If we do not consider the time taken for sorting the inputs, then the complexity
of all three greedy strategies is O(n).
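A runnable Python sketch of the greedy fractional knapsack above. The capacity of 20 is taken from the example; the three profits and weights are assumed for illustration, since the original object table is not reproduced here.

# A sketch of the greedy fractional knapsack (object data assumed for illustration).
def greedy_knapsack(profits, weights, m):
    # order objects by decreasing profit/weight ratio (the greedy measure used above)
    order = sorted(range(len(profits)), key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)       # solution vector: x[i] = fraction of object i taken
    U = m                          # unused capacity
    for i in order:
        if weights[i] > U:         # object i no longer fits wholly
            x[i] = U / weights[i]  # take the fitting fraction and stop
            break
        x[i] = 1.0
        U -= weights[i]
    return x, sum(p * xi for p, xi in zip(profits, x))

# Assumed data: capacity 20, profits (25, 24, 15), weights (18, 15, 10).
profits, weights = [25, 24, 15], [18, 15, 10]
print(greedy_knapsack(profits, weights, 20))    # ([0.0, 1.0, 0.5], 31.5)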
Job sequencing with deadlines: We are given n jobs; associated with job i is a deadline di ≥ 1 and
a profit pi > 0, and the profit is earned only if the job is completed by its deadline. To complete a
job one has to process the job on a machine for one unit of time. Only one machine is available
for processing jobs.
A feasible solution for this problem is a subset J of jobs such that each job in this subset can
be completed by its deadline.
The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., ∑i∈J pi.
An optimal solution is a feasible solution with maximum value.
The problem involves identification of a subset of jobs, each of which can be completed by its
deadline. Therefore the problem suits the subset methodology and can be solved by the
greedy method.
In the example, solution 3 is the optimal one. In this solution only jobs 1 & 4 are processed and
the value is 127. These jobs must be processed in the order j4 followed by j1: the processing of
job 4 begins at time 0 and ends at time 1, and the processing of job 1 begins at time 1 and
ends at time 2. Therefore both jobs are completed within their deadlines. The optimization
measure for determining the next job to be selected into the solution is the profit. The next
job to include is the one that increases ∑pi the most, subject to the constraint that the resulting
J is a feasible solution. Therefore the greedy strategy is to consider the jobs in decreasing
order of profits.
We must formulate an optimization measure to determine how the next job is chosen.
Algorithm JS(d, j, n)
// d → deadlines, j → subset of jobs, n → total number of jobs
// d[i] ≥ 1, 1 ≤ i ≤ n are the deadlines,
// the jobs are ordered such that p[1] ≥ p[2] ≥ ... ≥ p[n]
// j[i] is the ith job in the optimal solution, 1 ≤ i ≤ k, k → subset size
{
    d[0] := j[0] := 0;      // sentinel
    j[1] := 1;              // the highest-profit job is always included
    k := 1;
    for i := 2 to n do
    {
        r := k;
        // find where job i should be inserted, scanning from the right
        while ((d[j[r]] > d[i]) and (d[j[r]] ≠ r)) do
            r := r - 1;
        if ((d[j[r]] ≤ d[i]) and (d[i] > r)) then
        {
            // shift jobs to the right and insert job i at position r+1
            for q := k to (r+1) step -1 do j[q+1] := j[q];
            j[r+1] := i;
            k := k + 1;
        }
    }
    return k;
}
Note: The size of subset J must be less than or equal to the maximum deadline in the given list.
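A runnable Python sketch of the same greedy job-sequencing strategy. It uses a slot-array variant rather than the insertion-based JS routine above, but it follows the same rule: consider jobs in decreasing order of profit and keep a job only if it can still meet its deadline. The profits and deadlines below are assumed for illustration, chosen to be consistent with the example value of 127.

# A sketch of greedy job sequencing with deadlines (example data assumed).
def job_sequencing(profits, deadlines):
    n = len(profits)
    # consider jobs in decreasing order of profit (the greedy measure)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    max_d = max(deadlines)
    slot = [None] * (max_d + 1)       # slot[t] = job processed in time slot (t-1, t]
    for i in order:
        # place job i in the latest free slot not later than its deadline
        for t in range(min(deadlines[i], max_d), 0, -1):
            if slot[t] is None:
                slot[t] = i
                break
    chosen = [i for i in slot[1:] if i is not None]
    return chosen, sum(profits[i] for i in chosen)

# Assumed data: jobs 1..4 with profits (100, 10, 15, 27) and deadlines (2, 1, 2, 1).
profits, deadlines = [100, 10, 15, 27], [2, 1, 2, 1]
jobs, value = job_sequencing(profits, deadlines)
print([j + 1 for j in jobs], value)   # prints: [4, 1] 127 (jobs 4 and 1 are processed, value 127)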
➢ Graphs can be used to represent the highway structure of a state or country with
vertices representing cities and edges representing sections of highway.
➢ The edges have assigned weights which may be either the distance between the 2
cities connected by the edge or the average time to drive along that section of
highway.
➢ For example, if a motorist wishes to drive from city A to city B, then we must answer the
following questions:
o Is there a path from A to B?
o If there is more than one path from A to B, which is the shortest path?
➢ The length of a path is defined to be the sum of the weights of the edges on that path.
Given a directed graph G(V,E) with weighted edges w(u,v), we have to find a shortest path from
the source vertex S∈V to every other vertex v∈V.
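The notes mention the Bellman-Ford algorithm above but do not list it, so a minimal Python sketch is given here, assuming the graph is supplied as a list of weighted directed edges (u, v, w); it returns the shortest distances from the source or reports a reachable negative-weight cycle.

import math

# A minimal sketch of the Bellman-Ford algorithm (edge-list input assumed).
def bellman_ford(n, edges, source):
    # edges: list of (u, v, w) triples for directed edges u -> v of weight w.
    dist = [math.inf] * n
    dist[source] = 0
    # relax every edge n-1 times; after that all shortest paths are settled
    # unless a negative-weight cycle is reachable from the source
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one more pass: any further improvement means a negative-weight cycle
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle reachable from the source")
    return dist

# Small assumed example: 4 vertices, one negative edge, no negative cycle.
edges = [(0, 1, 4), (0, 2, 5), (1, 2, -2), (2, 3, 3)]
print(bellman_ford(4, edges, 0))    # [0, 4, 2, 5]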