402 ADA Lab Manual
Program- 1
Iterative Binary Search Algorithm
Binary search requires an ordered list of elements and is applicable only to arrays. A binary search (or half-interval search) algorithm locates the position of an item in a sorted array. Binary search works by comparing an input value to the middle element of the array. The comparison determines whether the middle element is equal to, less than, or greater than the input. When the element being compared equals the input, the search stops and typically returns the position of the element. If the element is not equal to the input, the comparison determines whether the input is less than or greater than the element, and the search continues in the corresponding half. Binary search algorithms typically halve the number of items to check with each successive iteration, thus locating the given item (or determining its absence) in logarithmic time.
Note:
1. It is applicable to arrays, not to linked lists, because the middle of a linked list cannot be located in constant time.
2. The elements of the array must be sorted.
3. Performance is good for searching in a large collection of elements, but poor for very small collections.
Iterative Algorithm: An iterative method attempts to solve a problem (for example, finding the
root of an equation or system of equations) by finding successive approximations to the solution
starting from an initial guess. This approach is in contrast to direct methods, which attempt to
solve the problem by a finite sequence of operations, and, in the absence of rounding errors,
would deliver an exact solution.
Example: Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.
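A worked trace (reconstructed here), using 0-based indices as in the C program below:
l = 0, r = 9: m = 4, a[4] = 19 < 103, so l = 5
l = 5, r = 9: m = 7, a[7] = 78 < 103, so l = 8
l = 8, r = 9: m = 8, a[8] = 102 < 103, so l = 9
l = 9, r = 9: m = 9, a[9] = 114 > 103, so r = 8
Now l > r, so 103 is not present in the array.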
IMPLEMENTATION:
#include<stdio.h>
/* Iterative binary search: returns the index of x in arr[l..r], or -1 */
int binarySearch(int arr[], int l, int r, int x)
{
    while (l <= r)
    {
        int m = l + (r-l)/2; /* middle index, written to avoid overflow */
        if (arr[m] == x)     /* element found */
            return m;
        if (arr[m] < x)      /* ignore the left half */
            l = m + 1;
        else                 /* ignore the right half */
            r = m - 1;
    }
    return -1;               /* element is not present */
}
int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int n = sizeof(arr)/ sizeof(arr[0]);
    int x = 10;
    int result = binarySearch(arr, 0, n-1, x);
    (result == -1)? printf("Element is not present in array")
                  : printf("Element is present at index %d", result);
    return 0;
}
Program- 2
Recursive Binary Search Algorithms
Recursive Algorithm: A recursive algorithm is an algorithm which calls itself with "smaller (or
simpler)" input values, and which obtains the result for the current input by applying simple
operations to the returned value for the smaller (or simpler) input. More generally if a problem
can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions
reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. In
general, recursive computer programs require more memory and computation than iterative ones, but they are simpler and, for many problems, a natural way of thinking about the solution. A recursive algorithm must always have a stopping criterion.
In the recursive binary search algorithm, the list of elements is divided into two equal-sized halves, and searching then continues recursively in the one half of the list where the element can possibly be present.
Algorithm
Complexity: The complexity of the recursive binary search algorithm is O(log2 n), where n is the number of elements in the array.
IMPLEMENTATION:
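The recursive listing itself is missing from this copy; a minimal recursive version in C (same interface as the iterative program above):

#include <stdio.h>

/* Recursive binary search: returns the index of x in arr[l..r], or -1 */
int binarySearch(int arr[], int l, int r, int x)
{
    if (l > r)
        return -1;                             /* empty range: absent */
    int m = l + (r - l) / 2;
    if (arr[m] == x)
        return m;
    if (arr[m] > x)                            /* search the left half */
        return binarySearch(arr, l, m - 1, x);
    return binarySearch(arr, m + 1, r, x);     /* search the right half */
}

int main(void)
{
    int arr[] = {2, 3, 4, 10, 40};
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, 10);
    (result == -1) ? printf("Element is not present in array")
                   : printf("Element is present at index %d", result);
    return 0;
}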
Program- 3
Merge Sort
Idea:
The Mergesort algorithm is based on divide and conquer strategy. First, the sequence to be sorted
is decomposed into two halves (Divide). Each half is sorted independently (Conquer). Then the
two sorted halves are merged to a sorted sequence (Combine).
The procedure mergesort sorts a sequence from index “low” to index “high”. First, index “mid” in
the middle between “low” and “high” is determined. Then the first part of the sequence (from low
to mid) and the second part (from mid+1 to high) are sorted by recursive calls of mergesort. Then
the two sorted halves are merged by procedure merge. Recursion ends when low = high, i.e. when
a subsequence consists of only one element.
The main work of the Mergesort algorithm is performed by the function merge. Merge is usually implemented in the following way: the two halves are first copied into an auxiliary array b. Then the two halves are scanned by pointers i and j, and at each step the smaller of the two current elements is copied back to array a.
At the end a situation occurs where one index has reached the end of its half, while the other has
not. Then, in principle, the rest of the elements of the corresponding half have to be copied back.
Actually, this is not necessary for the second half, since (copies of) the remaining elements are
already at their proper places.
Algorithm
IMPLEMENTATION:
#include <stdio.h>
#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11];

/* merge the sorted runs a[low..mid] and a[mid+1..high] via auxiliary array b */
void merging(int low, int mid, int high) {
    int l1, l2, i;
    for(l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
        if(a[l1] <= a[l2])
            b[i] = a[l1++];
        else
            b[i] = a[l2++];
    }
    while(l1 <= mid)   /* copy what is left of the first half */
        b[i++] = a[l1++];
    while(l2 <= high)  /* copy what is left of the second half */
        b[i++] = a[l2++];
    for(i = low; i <= high; i++)
        a[i] = b[i];
}

/* recursive mergesort of a[low..high] */
void sort(int low, int high) {
    int mid;
    if(low < high) {
        mid = (low + high) / 2;
        sort(low, mid);
        sort(mid + 1, high);
        merging(low, mid, high);
    }
}

int main() {
    int i;
    sort(0, max);
    printf("List after sorting: ");
    for(i = 0; i <= max; i++)
        printf("%d ", a[i]);
    return 0;
}
Program- 4
Quick Sort
Divide: Rearrange the elements and split the array into two subarrays and an element in between, so that each element in the left subarray is less than or equal to the middle element and each element in the right subarray is greater than the middle element.
Conquer: Recursively sort the two subarrays.
Combine: None.
Example:
Sort the numbers: 65, 70, 75, 80, 85, 60, 55, 50, 45
(1)  (2)  (3)  (4)  (5)  (6)  (7)  (8)  (9)  (10)    i   j
65   70   75   80   85   60   55   50   45   +∞      2   9
65   45   75   80   85   60   55   50   70   +∞      3   8
65   45   50   80   85   60   55   75   70   +∞      4   7
65   45   50   55   85   60   80   75   70   +∞      5   6
65   45   50   55   60   85   80   75   70   +∞      6   5   (i > j: swap pivot with a[j])
60   45   50   55   65   85   80   75   70   +∞
Sublist-1 = {60, 45, 50, 55} (left of the pivot 65); Sublist-2 = {85, 80, 75, 70} (right of the pivot)
Algorithm
Algorithm Quicksort (a, low, high)
{
// Termination condition!
if ( high > low )
{
pivot := Partition( a, low, high );
Quicksort( a, low, pivot - 1 );
Quicksort( a, pivot + 1, high );
}
}
Algorithm Partition (a, low, high)
{
// a[low] is taken as the pivot element v
v := a[low]; i := low; j := high + 1;
repeat
{
repeat i := i + 1; until ( a[i] >= v );
repeat j := j - 1; until ( a[j] <= v );
if ( i < j )
{
temp := a[i];
a[i] := a[j];
a[j] := temp;
}
} until ( i >= j );
// place the pivot into its final position and return that position
a[low] := a[j]; a[j] := v;
return j;
}
Complexity:
Best Case: O(n log n)
Worst Case: O(n²)
Average Case: O(n log n)
IMPLEMENTATION:
#include<stdio.h>
void quicksort(int number[25],int first,int last){
int i, j, pivot, temp;
if(first<last){
pivot=first;
i=first;
j=last;
while(i<j){
while(number[i]<=number[pivot]&&i<last)
i++;
while(number[j]>number[pivot])
j--;
if(i<j){
temp=number[i];
number[i]=number[j];
number[j]=temp;
}
}
temp=number[pivot];
number[pivot]=number[j];
number[j]=temp;
quicksort(number,first,j-1);
quicksort(number,j+1,last);
}
}
int main(){
    int i, count, number[25];
    printf("Enter the number of elements (max 25): ");
    scanf("%d", &count);
    printf("Enter %d elements: ", count);
    for(i = 0; i < count; i++)
        scanf("%d", &number[i]);
    quicksort(number, 0, count - 1);
    printf("The sorted order is: ");
    for(i = 0; i < count; i++)
        printf(" %d", number[i]);
    return 0;
}
Program- 5
Strassen’s Matrix Multiplication
Let A, B be two square matrices of size n × n. We want to calculate the matrix product C = A · B, whose entries are given by
C(i, j) = Σ (k = 1 to n) A(i, k) · B(k, j)
This standard method (brute force approach) of multiplying two n × n matrices takes O(n³) operations.
The usual multiplication of two 2 × 2 matrices takes 8 multiplications and 4 additions. Strassen showed how two 2 × 2 matrices can be multiplied using only 7 multiplications and 18 additions. This reduction is achieved with a divide and conquer approach: divide each matrix into sub-matrices, apply Strassen's equations, recursively multiply the sub-matrices, and terminate the recursion with a simple base case.
The equations that perform 7 multiplications and 10 additions/subtractions of matrices of size n/2 × n/2 are as follows:
P1 = (A11 + A22) * (B11 + B22)
P2 = (A21 + A22) * B11
P3 = A11 * (B12 - B22)
P4 = A22 * (B21 - B11)
P5 = (A11 + A12) * B22
P6 = (A21 - A11) * (B11 + B12)
P7 = (A12 - A22) * (B21 + B22)
The remaining 8 additions/subtractions of matrices of size n/2 × n/2 are then performed to calculate the Cij entries.
C11 = P1 + P4 - P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 - P2 + P6
For contrast, the listing below shows the plain divide-and-conquer recursion (8 half-size multiplications rather than Strassen's 7). It assumes the n-element matrices are stored quadrant-by-quadrant, so that each quadrant occupies a contiguous block of n/4 elements; the function header and base case are reconstructed around the original fragment:
void matmul(int *A, int *B, int *R, int n)
{
    if (n == 1)  /* base case: 1 x 1 block, accumulate the product */
    {
        (*R) += (*A) * (*B);
    }
    else
    {
        matmul(A, B, R, n/4);
        matmul(A, B+(n/4), R+(n/4), n/4);
        matmul(A+2*(n/4), B, R+2*(n/4), n/4);
        matmul(A+2*(n/4), B+(n/4), R+3*(n/4), n/4);
        matmul(A+(n/4), B+2*(n/4), R, n/4);
        matmul(A+(n/4), B+3*(n/4), R+(n/4), n/4);
        matmul(A+3*(n/4), B+2*(n/4), R+2*(n/4), n/4);
        matmul(A+3*(n/4), B+3*(n/4), R+3*(n/4), n/4);
    }
}
Complexity: The plain divide-and-conquer recursion above performs 8 half-size multiplications, giving T(n) = 8T(n/2) + O(n²) = O(n³); Strassen's 7 multiplications give T(n) = 7T(n/2) + O(n²) = O(n^log2 7) ≈ O(n^2.81).
#include<stdio.h>
int main(){
    int a[2][2] = {{1, 2}, {3, 4}}; /* example input matrices */
    int b[2][2] = {{5, 6}, {7, 8}};
    int c[2][2], i, j;
    int m1, m2, m3, m4, m5, m6, m7;
    /* Strassen's seven products (P1..P7 above) */
    m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    m2 = (a[1][0] + a[1][1]) * b[0][0];
    m3 = a[0][0] * (b[0][1] - b[1][1]);
    m4 = a[1][1] * (b[1][0] - b[0][0]);
    m5 = (a[0][0] + a[0][1]) * b[1][1];
    m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    /* the eight combining additions/subtractions (C11..C22 above) */
    c[0][0] = m1 + m4 - m5 + m7;
    c[0][1] = m3 + m5;
    c[1][0] = m2 + m4;
    c[1][1] = m1 - m2 + m3 + m6;
    printf("The product matrix is:\n");
    for(i = 0; i < 2; i++){
        for(j = 0; j < 2; j++)
            printf("%d\t", c[i][j]);
        printf("\n");
    }
    return 0;
}
Program- 6
The Greedy Knapsack Problem
Greedy algorithm: A greedy algorithm for an optimization problem always makes the choice
that looks best at the moment and adds it to the current subsolution. A greedy algorithm obtains
an optimal solution to a problem by making a sequence of choices. At each decision point, the
algorithm chooses the locally optimal solution. In other words, when we are considering which
choice to make, we make the choice that looks best in the current situation, without considering
results from subproblems.
Knapsack Problem: Given a set of items, each with a weight and a value, determine the number
of each item to include in a collection so that the total weight is less than or equal to a given limit
and the total value is as large as possible. It derives its name from the problem faced by someone
who is constrained by a fixed-size knapsack and must fill it with the most useful items.
We have n kinds of items, 1 through n. Each item i has a profit value pi and a weight wi. We usually assume that all values and weights are nonnegative. To simplify the representation, we can also assume that the items are listed in increasing order of weight. The maximum weight that we can carry in the bag is W. The goal is to choose fractions xi that
maximize Σ (i = 1 to n) pi·xi subject to Σ (i = 1 to n) wi·xi ≤ W and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n.
Concept:
1. Calculate pi/wi for each item i.
2. Then Sort the items in decreasing order of their pi/wi.
3. Select an item from the sorted list and place it into the bag such that the total weight of the
objects should not exceed the total capacity of the knapsack. Perform this step repeatedly.
Example :
i: (1, 2, 3, 4) pi: (5, 9, 4, 8) wi: (1, 3, 2, 2) and W= 4
now pi/wi: (5, 3, 2, 4)
Solution:
1st: take all of item 1: x[1] = 1, x[1]*w[1] = 1
2nd: take all of item 4: x[4] = 1, x[4]*w[4] = 2
3rd: take 1/3 of item 2: x[2] = 1/3, x[2]*w[2] = 1
(x[3] = 0)
The total weight is now 4 = W, and the total profit is 5 + 8 + 9/3 = 16.
The Algorithm:
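The listing itself is absent from this copy; a minimal sketch in the manual's pseudocode style, assuming the items are already sorted by decreasing pi/wi (this is the while-loop referred to in the complexity note below):
Algorithm GreedyKnapsack(p, w, n, W)
// p[1..n], w[1..n] sorted so that p[i]/w[i] >= p[i+1]/w[i+1];
// the solution fractions are returned in x[1..n]
{
for i := 1 to n do x[i] := 0.0;
U := W; // remaining capacity
i := 1;
while ( i <= n and w[i] <= U ) // take whole items while they fit
{
x[i] := 1.0;
U := U - w[i];
i := i + 1;
}
if ( i <= n ) then x[i] := U / w[i]; // take a fraction of the next item
}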
Complexity
If the items are already sorted into decreasing order of pi/wi, then the while-loop takes time in O(n); therefore, the total time including the sort is in O(n log n).
IMPLEMENTATION:
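A possible C implementation (a sketch; the original listing is missing). The item data is the example above, and the order[] array hard-codes the sort by decreasing pi/wi:

#include <stdio.h>

/* Greedy (fractional) knapsack: process items in decreasing p/w order */
int main()
{
    int n = 4;
    float p[] = {5, 9, 4, 8};   /* profits (example above) */
    float w[] = {1, 3, 2, 2};   /* weights */
    float W = 4;                /* knapsack capacity */
    int order[] = {0, 3, 1, 2}; /* indices by decreasing p[i]/w[i]: 5, 4, 3, 2 */
    float x[4] = {0}, U = W, profit = 0;
    int k, i;
    for (k = 0; k < n; k++) {
        i = order[k];
        if (w[i] <= U) {        /* the whole item fits */
            x[i] = 1;
            U -= w[i];
            profit += p[i];
        } else {                /* take the largest fitting fraction and stop */
            x[i] = U / w[i];
            profit += p[i] * x[i];
            break;
        }
    }
    for (i = 0; i < n; i++)
        printf("x[%d] = %.2f\n", i + 1, x[i]);
    printf("Total profit = %.2f\n", profit); /* expected: 16.00 */
    return 0;
}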
Program- 7
Optimal Merge Patterns
Given n sorted files, there are many ways to pair-wise merge them into a single sorted file. Different pairings require different amounts of computing time. The optimal way to pair-wise merge n sorted files can be found using the greedy method.
A greedy attempt to obtain an optimal merge pattern is easy to formulate. Since merging an n-record file and an m-record file requires possibly n + m record moves, the obvious selection criterion is: at each step, merge the two smallest files together.
For Example: Suppose there are 3 sorted lists L1, L2, and L3, of sizes 30, 20, and 10, respectively,
which need to be merged into a combined sorted list, but we can merge only two at a time. We
intend to find an optimal merge pattern which minimizes the total number of comparisons. For
example, we can merge L1 and L2, which uses 30 + 20 = 50 comparisons resulting in a list of size
50. We can then merge this list with list L3, using another 50 + 10 = 60 comparisons, so the total
number of comparisons is 50 + 60 = 110. Alternatively, we can merge lists L2 and L3, using 20 +
10 = 30 comparisons, the resulting list (size 30) can then be merged with list L1, for another 30 +
30 = 60 comparisons. So the total number of comparisons is 30 + 60 = 90. It doesn’t take long to
see that this latter merge pattern is the optimal one.
Binary Merge Trees: We can depict the merge patterns using a binary tree, built from the leaf
nodes (the initial lists) towards the root in which each merge of two nodes creates a parent node
whose size is the sum of the sizes of the two children. For example, the two previous merge
patterns are depicted in the following two figures:
[Figure: the two merge patterns drawn as binary merge trees]
Left tree (merge L1 and L2 first, then with L3): cost = 30*2 + 20*2 + 10*1 = 110
Right tree (merge L2 and L3 first, then with L1): cost = 30*1 + 20*2 + 10*2 = 90
Merge cost = sum of all weighted external path lengths.
Fig 7.1: Optimal merge tree
If di is the distance from the root to the external node for file xi, and qi is the length of file xi, then the total number of record moves for this binary merge tree is
Σ (i = 1 to n) di·qi
This sum is called the weighted external path length of the tree.
Algorithm
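The algorithm listing is missing from this copy; the following sketch (the names OptimalMerge and deletemin are illustrative) uses a min-heap of file lengths, consistent with the greedy rule above:
Algorithm OptimalMerge(q, n)
// q[1..n] holds the lengths of the n sorted files
{
insert all q[i] into a min-heap H;
totalcost := 0;
while ( H contains more than one element )
{
x := deletemin(H); // remove the two smallest files
y := deletemin(H);
z := x + y; // cost of merging them
totalcost := totalcost + z;
insert z into H;
}
return totalcost; // equals the weighted external path length
}
IMPLEMENTATION: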
#include<iostream>
using namespace std;
int main()
{
int i,k,a[10],c[10],n,l;
cout<<"Enter the no. of elements\t";
cin>>n;
cout<<"\nEnter the sorted elments for optimal merge pattern";
for(i=0;i<n;i++)
{
cout<<"\t";
cin>>a[i];
}
i=0;
k=0;
c[k]=a[i]+a[i+1];
i=2;
while(i<n)
{
k++;
if((c[k-1]+a[i])<=(a[i]+a[i+1]))
{
c[k]=c[k-1]+a[i];
}
else
{
c[k]=a[i]+a[i+1];
i=i+2;
while(i<n)
{
k++;
if((c[k-1]+a[i])<=(c[k-2]+a[i]))
{
c[k]=c[k-1]+a[i];
}
else
{
c[k]=c[k-2]+a[i];
}
i++;
}
i++;
}
k++;
c[k]=c[k-1]+c[k-2];
cout<<"\n\nThe optimal sum are as follows......\n\n";
for(k=0;k<n-1;k++)
{
cout<<c[k]<<"\t";
}
l=0;
for(k=0;k<n-1;k++)
{
l=l+c[k];
}
cout<<"\n\n The external path length is ......"<<l;
return 0;
}
Program-8
Huffman Coding
This problem is that of finding the minimum-length bit string which can be used to encode a string of symbols. One application is text compression.
Prefix(-free) code: no codeword is a prefix of any other codeword (which makes decoding unambiguous).
An optimal data compression achievable by a character code can always be achieved with a prefix code. An optimal code is always represented by a full binary tree, in which every non-leaf node has two children.
Huffman's scheme uses a table of frequency of occurrence for each symbol (or character) in the
input. This table may be derived from the input itself or from data which is representative of the
input. For instance, the frequency of occurrence of letters in normal English might be derived
from processing a large number of text documents and then used for encoding all text documents.
We then need to assign a variable-length bit string to each character that unambiguously
represents that character. This means that the encoding for each character must have a unique
prefix. If the characters to be encoded are arranged in a binary tree:
An encoding for each character is found by following the tree from the root to the character's leaf: the encoding is the string of symbols on each branch followed.
For example:
String Encoding
TEA 10 00 010
SEA 011 00 010
TEN 10 00 110
Algorithm
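The algorithm listing is missing here; a sketch consistent with the heap-based description in the complexity note below (the names Huffman and extractMin are illustrative):
Algorithm Huffman(C)
// C is the set of n characters c, each with frequency freq[c]
{
build a min-heap Q of n leaf nodes, one per character, keyed on frequency;
for i := 1 to n - 1 do
{
x := extractMin(Q); // the two cheapest trees
y := extractMin(Q);
z := new internal node with z.left := x, z.right := y, z.freq := x.freq + y.freq;
insert(Q, z);
}
return extractMin(Q); // root of the Huffman tree
}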
Complexity: The time complexity of the Huffman algorithm is O(n log n). Using a heap to store the weight of each tree, each iteration requires O(log n) time to determine the cheapest weight and to insert the new weight. There are O(n) iterations, one for each item.
IMPLEMENTATION:
#include <stdio.h>
#include <stdlib.h>
#define MAX_TREE_HT 100

// A node of the Huffman tree
struct MinHeapNode {
    char data;                        // one of the input characters
    unsigned freq;                    // frequency of the character
    struct MinHeapNode *left, *right;
};

// A min-heap (priority queue) of tree nodes
struct MinHeap {
    unsigned size;
    unsigned capacity;
    struct MinHeapNode** array;
};

struct MinHeapNode* newNode(char data, unsigned freq)
{
    struct MinHeapNode* temp
        = (struct MinHeapNode*)malloc(sizeof(struct MinHeapNode));
    temp->left = temp->right = NULL;
    temp->data = data;
    temp->freq = freq;
    return temp;
}

struct MinHeap* createMinHeap(unsigned capacity)
{
    struct MinHeap* minHeap = (struct MinHeap*)malloc(sizeof(struct MinHeap));
    minHeap->size = 0; // current size is 0
    minHeap->capacity = capacity;
    minHeap->array
        = (struct MinHeapNode**)malloc(minHeap->
          capacity * sizeof(struct MinHeapNode*));
    return minHeap;
}

// A utility function to swap two min heap nodes
void swapMinHeapNode(struct MinHeapNode** a, struct MinHeapNode** b)
{
    struct MinHeapNode* t = *a;
    *a = *b;
    *b = t;
}

// Standard min-heapify at index idx
void minHeapify(struct MinHeap* minHeap, int idx)
{
    int smallest = idx;
    int left = 2 * idx + 1, right = 2 * idx + 2;
    if (left < (int)minHeap->size
        && minHeap->array[left]->freq < minHeap->array[smallest]->freq)
        smallest = left;
    if (right < (int)minHeap->size
        && minHeap->array[right]->freq < minHeap->array[smallest]->freq)
        smallest = right;
    if (smallest != idx) {
        swapMinHeapNode(&minHeap->array[smallest], &minHeap->array[idx]);
        minHeapify(minHeap, smallest);
    }
}

// Remove and return the minimum-frequency node
struct MinHeapNode* extractMin(struct MinHeap* minHeap)
{
    struct MinHeapNode* temp = minHeap->array[0];
    minHeap->array[0] = minHeap->array[minHeap->size - 1];
    --minHeap->size;
    minHeapify(minHeap, 0);
    return temp;
}

// Insert a new node, sifting it up to its proper place
void insertMinHeap(struct MinHeap* minHeap, struct MinHeapNode* minHeapNode)
{
    ++minHeap->size;
    int i = minHeap->size - 1;
    while (i && minHeapNode->freq < minHeap->array[(i - 1) / 2]->freq) {
        minHeap->array[i] = minHeap->array[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    minHeap->array[i] = minHeapNode;
}

void buildMinHeap(struct MinHeap* minHeap)
{
    int n = minHeap->size - 1;
    int i;
    for (i = (n - 1) / 2; i >= 0; --i)
        minHeapify(minHeap, i);
}

// Create a min heap with one leaf node per character
struct MinHeap* createAndBuildMinHeap(char data[], int freq[], int size)
{
    struct MinHeap* minHeap = createMinHeap(size);
    int i;
    for (i = 0; i < size; ++i)
        minHeap->array[i] = newNode(data[i], freq[i]);
    minHeap->size = size;
    buildMinHeap(minHeap);
    return minHeap;
}

// Repeatedly merge the two cheapest trees until one tree remains
struct MinHeapNode* buildHuffmanTree(char data[], int freq[], int size)
{
    struct MinHeapNode *left, *right, *top;
    struct MinHeap* minHeap = createAndBuildMinHeap(data, freq, size);
    while (minHeap->size != 1) {
        left = extractMin(minHeap);
        right = extractMin(minHeap);
        top = newNode('$', left->freq + right->freq);
        top->left = left;
        top->right = right;
        insertMinHeap(minHeap, top);
    }
    return extractMin(minHeap);
}

// Print the code of every leaf: 0 for a left branch, 1 for a right branch
void printCodes(struct MinHeapNode* root, int arr[], int top)
{
    if (root->left) {
        arr[top] = 0;
        printCodes(root->left, arr, top + 1);
    }
    if (root->right) {
        arr[top] = 1;
        printCodes(root->right, arr, top + 1);
    }
    if (!root->left && !root->right) { // leaf node: print its code
        int i;
        printf("%c: ", root->data);
        for (i = 0; i < top; ++i)
            printf("%d", arr[i]);
        printf("\n");
    }
}

int main()
{
    // example input: characters with their frequencies
    char data[] = { 'a', 'b', 'c', 'd', 'e', 'f' };
    int freq[] = { 5, 9, 12, 13, 16, 45 };
    int size = sizeof(data) / sizeof(data[0]);
    int arr[MAX_TREE_HT];
    // Construct Huffman Tree
    struct MinHeapNode* root = buildHuffmanTree(data, freq, size);
    printCodes(root, arr, 0);
    return 0;
}
Program-9
Minimum Spanning Trees using Kruskal’s algorithm
Let G = (V, E) be an undirected connected graph. A subgraph T = (V, E') of G is a spanning tree of G iff T is a tree.
A spanning tree is a minimal subgraph G' of G such that V(G') = V(G) and G' is connected.
Any connected graph with n vertices must have at least n − 1 edges, and a spanning tree has exactly n − 1 edges.
Any connected graph with n vertices and exactly n − 1 edges is a tree.
Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a
connected weighted graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized. If the
graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for
each connected component). Kruskal's algorithm is an example of a greedy algorithm.
Example:
Solution:
Iteration   Edge     Components
0           -        {1}; {2}; {3}; {4}; {5}; {6}; {7}
1           {1,2}    {1,2}; {3}; {4}; {5}; {6}; {7}
2           {2,3}    {1,2,3}; {4}; {5}; {6}; {7}
3           {4,5}    {1,2,3}; {4,5}; {6}; {7}
4           {6,7}    {1,2,3}; {4,5}; {6,7}
5           {1,4}    {1,2,3,4,5}; {6,7}
6           {2,5}    not included (would add a cycle)
7           {4,7}    {1,2,3,4,5,6,7}
Algorithm
Algorithm MST_Kruskal ( G, t )
{
// G is the graph, with edges E(G) and vertices V(G).
// w(u,v) gives the weight of edge (u,v).
// t is the set of edges in the minimum spanning tree.
// Build a heap out of the edges using edge cost as the comparison criteria
// using the heapify algorithm
heap = heapify ( E(G) )
t = NULL;
// Change the parent of each vertex to a NULL
// Each vertex is in different set
for ( i = 0; i < |V(G)|; i++ )
{
parent[i] = NULL
}
i = 0
while ( ( i < n - 1 ) AND (heap is not empty) ) // n = |V(G)|
{
e = delete ( heap ) // Get minimum cost edge from heap
adjust ( heap ) // Reheapify heap
// Find both sides of edge e = (u,v) in the tree grown so far
j = find ( u(e), t )
k = find ( v(e), t )
if ( j != k )// Both are in different sets
{
i++
t[i,1] = u
t[i,2] = v
union ( j, k )
}
}
}
Complexity: First sort the edges by weight using a comparison sort in O(E log E) time; this allows the step "remove an edge with minimum weight" to operate in constant time. Next, we use a disjoint-set data structure (union & find) to keep track of which vertices are in which components. We need to perform O(E) operations: two 'find' operations and possibly one union for each edge. Even a simple disjoint-set data structure, such as disjoint-set forests with union by rank, can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V).
IMPLEMENTATION:
#include <bits/stdc++.h>
using namespace std;

// An edge of the weighted undirected graph
struct Edge {
    int src, dest, weight;
};

struct Graph {
    int V, E;
    vector<Edge> edge;
};

Graph* createGraph(int V, int E)
{
    Graph* graph = new Graph;
    graph->V = V;
    graph->E = E;
    graph->edge.resize(E);
    return graph;
}

// Disjoint-set 'find' with path compression
int find(vector<int>& parent, int i)
{
    if (parent[i] != i)
        parent[i] = find(parent, parent[i]);
    return parent[i];
}

void KruskalMST(Graph* graph)
{
    // Step 1: sort all edges in non-decreasing order of weight
    sort(graph->edge.begin(), graph->edge.end(),
         [](const Edge& a, const Edge& b) { return a.weight < b.weight; });
    // initially every vertex is its own component
    vector<int> parent(graph->V);
    for (int v = 0; v < graph->V; v++)
        parent[v] = v;
    cout << "Edges in the constructed MST:\n";
    int taken = 0;
    for (const Edge& e : graph->edge) {
        if (taken == graph->V - 1)
            break;                   // MST is complete
        int x = find(parent, e.src);
        int y = find(parent, e.dest);
        if (x != y) {                // edge joins two components: no cycle
            cout << e.src << " -- " << e.dest << " == " << e.weight << "\n";
            parent[x] = y;           // union of the two sets
            taken++;
        }
    }
}

// Driver code
int main()
{
    /* Let us create the following weighted graph
              10
         0--------1
         |  \     |
        6|   5\   |15
         |      \ |
         2--------3
              4          */
    int V = 4; // Number of vertices in graph
    int E = 5; // Number of edges in graph
    Graph* graph = createGraph(V, E);
    graph->edge[0] = {0, 1, 10}; // edge 0-1
    graph->edge[1] = {0, 2, 6};  // edge 0-2
    graph->edge[2] = {0, 3, 5};  // edge 0-3
    graph->edge[3] = {1, 3, 15}; // edge 1-3
    graph->edge[4] = {2, 3, 4};  // edge 2-3
    KruskalMST(graph);
    return 0;
}
Program-10
Minimum Spanning Trees using Prim’s algorithm
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected weighted undirected graph. This means it finds a subset of the edges that forms a tree that includes every vertex, where the total weight of all the edges in the tree is minimized. The algorithm was developed in 1930 by the Czech mathematician Vojtěch Jarník, rediscovered independently by the computer scientist Robert C. Prim in 1957, and again by Edsger Dijkstra in 1959. Therefore it is also sometimes called the DJP algorithm, the Jarník algorithm, or the Prim–Jarník algorithm.
The Concept:
The algorithm continuously increases the size of a tree, one edge at a time, starting with a tree
consisting of a single vertex, until it spans all vertices.
Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V, Enew = {}
Repeat until Vnew = V:
Choose an edge (u, v) with minimal weight such that u is in Vnew and v is not (if there are
multiple edges with the same weight, any of them may be picked)
Add v to Vnew, and (u, v) to Enew
Complexity
A simple implementation using an adjacency matrix graph representation and searching an array of weights to find the minimum weight edge to add requires O(V²) running time. Using a simple binary heap data structure and an adjacency list representation, Prim's algorithm can be shown to run in time O(E log V), where E is the number of edges and V is the number of vertices. Using a more sophisticated Fibonacci heap, this can be brought down to O(E + V log V), which is asymptotically faster when the graph is dense enough that E is Ω(V).
IMPLEMENTATION:
#include<stdio.h>
#include<stdlib.h>

#define MAX 20
#define infinity 9999

int G[MAX][MAX],spanning[MAX][MAX],n;
int prims();

int main()
{
    int i,j,total_cost;
    printf("Enter no. of vertices:");
    scanf("%d",&n);
    printf("Enter the adjacency cost matrix (0 = no edge):\n");
    for(i=0;i<n;i++)
        for(j=0;j<n;j++)
            scanf("%d",&G[i][j]);
    total_cost=prims();
    printf("\nspanning tree matrix:\n");
    for(i=0;i<n;i++)
    {
        printf("\n");
        for(j=0;j<n;j++)
            printf("%d\t",spanning[i][j]);
    }
    printf("\n\nTotal cost of spanning tree = %d\n",total_cost);
    return 0;
}

int prims()
{
    int cost[MAX][MAX];
    int u,v,min_distance,distance[MAX],from[MAX];
    int visited[MAX],no_of_edges,i,min_cost,j;

    /* treat missing edges (0 entries) as infinity */
    for(i=0;i<n;i++)
        for(j=0;j<n;j++)
            cost[i][j] = (G[i][j]==0) ? infinity : G[i][j];

    /* start from vertex 0 */
    distance[0]=0;
    visited[0]=1;
    for(i=1;i<n;i++)
    {
        distance[i]=cost[0][i];
        from[i]=0;
        visited[i]=0;
    }
    min_cost=0;        /* cost of the spanning tree */
    no_of_edges=n-1;   /* edges still to be added */
    while(no_of_edges>0)
    {
        /* find the vertex at minimum distance from the tree */
        min_distance=infinity;
        for(i=1;i<n;i++)
            if(visited[i]==0&&distance[i]<min_distance)
            {
                v=i;
                min_distance=distance[i];
            }
        u=from[v];
        /* insert the edge (u,v) into the spanning tree */
        spanning[u][v]=distance[v];
        spanning[v][u]=distance[v];
        no_of_edges--;
        visited[v]=1;
        /* update the distances of the remaining vertices */
        for(i=1;i<n;i++)
            if(visited[i]==0&&cost[i][v]<distance[i])
            {
                distance[i]=cost[i][v];
                from[i]=v;
            }
        min_cost=min_cost+cost[u][v];
    }
    return(min_cost);
}
Program-11
Single Source Shortest path algorithm using Greedy
Method
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and
published in 1959, is a graph search algorithm that solves the single-source shortest path
problem for a graph with nonnegative edge path costs, producing a shortest path tree. This
algorithm is often used in routing. An equivalent algorithm was developed by Edward F. Moore
in 1957.
For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e.
the shortest path) between that vertex and every other vertex. It can also be used for finding
costs of shortest paths from a single vertex to a single destination vertex by stopping the
algorithm once the shortest path to the destination vertex has been determined. For example, if
the vertices of the graph represent cities and edge path costs represent driving distances between
pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest
route between one city and all other cities.
Concept:
Let the node at which we are starting be called the initial node. Let the distance of node Y be the
distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance values
and will try to improve them step by step.
1. Assign to every node a distance value: set it to zero for our initial node and to infinity for all
other nodes.
2. Mark all nodes as unvisited. Set initial node as current.
3. For the current node, consider all its unvisited neighbors and calculate their tentative distances. For example, if the current node (A) has a distance of 6, and the edge connecting it with another node (B) has length 2, the distance to B through A will be 6 + 2 = 8. If this distance is less than the previously recorded distance, overwrite the distance.
4. When we are done considering all neighbors of the current node, mark it as visited. A visited
node will not be checked ever again; its distance recorded now is final and minimal.
5. If all nodes have been visited, finish. Otherwise, set the unvisited node with the smallest distance
(from the initial node, considering all nodes in graph) as the next "current node" and continue
from step 3.
#include <limits.h>
#include <stdio.h>
#include <stdbool.h>

#define V 9 // number of vertices in the example graph

// A utility function to find the vertex with minimum distance value, from
// the set of vertices not yet included in shortest path tree
int minDistance(int dist[], bool sptSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;
    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min) {
            min = dist[v];
            min_index = v;
        }
    return min_index;
}
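/* The dijkstra routine itself and the driver are missing from the original
   listing; a minimal completion follows. printSolution and the example
   graph are assumptions introduced here. */

// Print the computed shortest distances from the source
void printSolution(int dist[])
{
    printf("Vertex \t Distance from Source\n");
    for (int i = 0; i < V; i++)
        printf("%d \t %d\n", i, dist[i]);
}

// Single-source shortest paths from src on an adjacency matrix,
// where graph[u][v] == 0 means "no edge"
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // dist[i]: shortest distance from src to i found so far
    bool sptSet[V]; // true once vertex i is finalized
    for (int i = 0; i < V; i++) {
        dist[i] = INT_MAX;
        sptSet[i] = false;
    }
    dist[src] = 0;
    for (int count = 0; count < V - 1; count++) {
        int u = minDistance(dist, sptSet); // closest unfinalized vertex
        sptSet[u] = true;
        // relax all edges leaving u
        for (int v = 0; v < V; v++)
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    printSolution(dist);
}

int main()
{
    // example graph as a V x V adjacency matrix (0 = no edge)
    int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                        { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                        { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                        { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                        { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                        { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                        { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                        { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                        { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };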
dijkstra(graph, 0);
return 0;
}
Program-12
Floyd-Warshal Algorithm using Dynamic
Programming
The Floyd–Warshall algorithm (sometimes known as the WFI Algorithm, Roy–Floyd
algorithm or just Floyd's algorithm) is a graph analysis algorithm for finding shortest paths in a
weighted graph (with positive or negative edge weights). A single execution of the algorithm
will find the lengths (summed weights) of the shortest paths between all pairs of vertices
though it does not return details of the paths themselves. The algorithm is an example of
dynamic programming. It was published in its currently recognized form by Robert Floyd in
1962. However, it is essentially the same as algorithms previously published by Bernard Roy
in 1959 and also by Stephen Warshall in 1962.
Let dist(k, i, j) be the length of the shortest path from i to j that uses only the vertices {1, 2, ..., k} as intermediate vertices. The following recurrence holds:
k = 0 is our base case: dist(0, i, j) is the length of the edge from vertex i to vertex j if it exists, and ∞ otherwise.
dist(k, i, j) = min(dist(k-1, i, k) + dist(k-1, k, j), dist(k-1, i, j)): for any vertices i and j, the shortest path from i to j whose intermediate vertices all lie in {1, ..., k} either does not involve the vertex k at all (in which case it is the same as dist(k-1, i, j)), or it goes through vertex k, in which case it is the combination of the shortest path from vertex i to k and the shortest path from vertex k to j.
Algorithm:
for k = 1 to n
for i = 1 to n
for j = 1 to n
dist[k][i][j] = min(dist[k-1][i][j], dist[k-1][i][k] + dist[k-1][k][j])
Complexity: This algorithm takes Θ(n³) time and, as written, Θ(n³) space, and has the distinct advantage of hiding a small constant in its behavior, since very little work is done in the innermost loop. Furthermore, the space bound can be reduced to Θ(n²) by noticing that stage k depends only on stage k − 1, so a single two-dimensional array can be updated in place.
IMPLEMENTATION:
#include<stdio.h>
int min(int,int);
void floyds(int p[10][10],int n)
{
int i,j,k;
for(k=1;k<=n;k++)
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
if(i==j)
p[i][j]=0;
else
p[i][j]=min(p[i][j],p[i][k]+p[k][j]);
}
int min(int a,int b)
{
if(a<b)
return(a);
else
return(b);
}
int main()
{
int p[10][10],w,n,e,u,v,i,j;
printf("\n Enter the number of vertices:");
scanf("%d",&n);
printf("\n Enter the number of edges:\n");
scanf("%d",&e);
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
p[i][j]=999;
}
for(i=1;i<=e;i++)
{
printf("\n Enter the end vertices of edge%d with its weight \n",i);
scanf("%d%d%d",&u,&v,&w);
p[u][v]=w;
}
printf("\n Matrix of input data:\n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
printf("%d \t",p[i][j]);
printf("\n");
}
floyds(p,n);
printf("\n Transitive closure:\n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
printf("%d \t",p[i][j]);
printf("\n");
}
printf("\n The shortest paths are:\n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
{
if(i!=j)
printf("\n <%d,%d>=%d",i,j,p[i][j]);
}
return 0;
}
Important Viva Questions
1. What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.
2. What do you mean by Amortized Analysis?
Amortized analysis finds the average running time per operation over a worst case sequence of
operations. Amortized analysis differs from average-case performance in that probability is not
involved; amortized analysis guarantees the time per operation over worst-case performance.
3. What are important problem types? (or) Enumerate some important types of
problems.
a) Sorting
b) Searching
c) Numerical problems
d) Geometric problems
e) Combinatorial Problems
f) Graph Problems
g) String processing Problems
4. Name some basic Efficiency classes
a) Constant
b) Logarithmic
c) Linear
d) nlogn
e) Quadratic
f) Cubic
g) Exponential
h) Factorial
5. What are algorithm design techniques?
Algorithm design techniques (or strategies, or paradigms) are general approaches to solving problems algorithmically, applicable to a variety of problems from different areas of computing. General design techniques are:
a) Brute force
b) divide and conquer
c) decrease and conquer
d) transform and conquer
e) greedy technique
f) dynamic programming
g) backtracking
h) branch and bound
6. How is an algorithm’s time efficiency measured?
Time efficiency indicates how fast the algorithm runs. An algorithm’s time efficiency is
measured as a function of its input size by counting the number of times its basic operation
(running time) is executed. Basic operation is the most time consuming operation in the
algorithm’s innermost loop.
7. What is Big ‘Oh’ notation?
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that:
t(n) ≤ c·g(n) for all n ≥ n0
For example, 100n + 5 ∈ O(n²), with c = 105 and n0 = 1.
8. What is an Activation frame?
It is a storage area for an invocation of a recursive program (parameters, local variables, return address/value etc.). Activation frames are allocated from the frame stack, which is pointed to by the frame pointer.
9. Define order of an algorithm
Measuring the performance of an algorithm in relation with the input size n is known as order
of growth.
10. What is recursive call?
A call that an algorithm makes to itself is known as a recursive call. An algorithm that calls itself is direct recursive. An algorithm "A" is said to be indirect recursive if it calls another algorithm which in turn calls "A".
11. What do you mean by stepwise refinement?
In the top-down design methodology, solving the problem in sequence (step by step) is known as stepwise refinement.
12. How is the efficiency of the algorithm defined?
The efficiency of an algorithm is defined with the components.
a) Time efficiency -indicates how fast the algorithm runs
b) Space efficiency -indicates how much extra memory the algorithm needs
13. Define direct recursive and indirect recursive algorithms.
Recursion occurs where the definition of an entity refers to the entity itself. Recursion can be
direct when an entity refers to itself directly or indirect when it refers to other entities which
refers to it. A (Directly) recursive routine calls itself. Mutually recursive routines are an
example of indirect recursion. A (Directly) recursive data type contains pointers to instances of
the data type.
14. What are the characteristics of an algorithm?
Every algorithm should have the following five characteristics
a) Input
b) Output
c) Definiteness
d) Effectiveness
e) Termination
Therefore, an algorithm can be defined as a sequence of definite and effective instructions which terminates with the production of correct output from the given input. In other words, viewed a little more formally, an algorithm is a step-by-step formalization of a mapping function that maps an input set onto an output set.
15. What do you mean by time complexity and space complexity of an algorithm?
Time complexity indicates how fast the algorithm runs. Space complexity deals with the extra memory it requires. Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size; the basic operation is the operation that contributes most towards the running time of the algorithm. The running time of an algorithm is the function defined by the number of steps (or amount of memory) required to solve input instances of size n.
16. Define Big Omega Notations
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0
17. What are the different criteria used to improve the effectiveness of algorithm?
(i) The effectiveness of an algorithm is improved when the design keeps the following costs to a minimum:
Time efficiency - how fast the algorithm in question runs
Space efficiency - the extra space the algorithm requires
(ii) The algorithm has to provide correct results for all valid inputs.
18. Analyze the time complexity of the following segment:
for(i=0;i<N;i++)
for(j=N/2;j>0;j--)
sum++;
Time Complexity = N * (N/2) = N²/2 = O(N²)
19. write general plan for analyzing non-recursive algorithms.
a) Decide on a parameter indicating the input's size.
b) Identify the algorithm's basic operation.
c) Check whether the number of times the basic operation is executed depends only on the size of the input; if it also depends on some additional property, then the best, worst and average cases need to be investigated.
d) Set up a sum expressing the number of times the basic operation is executed, and establish its order of growth.
20. How will you measure input size of algorithms?
The time taken by an algorithm grows with the size of the input, so the running time of the program depends on the size of its input. The input size is measured as the number of items in the input; a parameter n indicates the algorithm's input size.
21. Define the terms: pseudocode, flow chart
A pseudocode is a mixture of a natural language and programming language like constructs. A
pseudocode is usually more precise than natural language. A flowchart is a method of
expressing an algorithm by a collection of connected geometric shapes containing descriptions
of the algorithm’s steps.
22. write general plan for analyzing recursive algorithms.
a) Decide on a parameter indicating the input's size.
b) Identify the algorithm's basic operation.
c) Check whether the number of times the basic operation is executed depends only on the size of the input; if it also depends on some additional property, then the best, worst and average cases need to be investigated.
d) Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
e) Solve the recurrence (establish its order of growth).
23. What do you mean by Combinatorial Problem?
Combinatorial problems are problems that ask us to find a combinatorial object (such as a permutation, a combination, or a subset) that satisfies certain constraints and has some desired property.
24. Define Big Theta Notations
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by positive constant multiples of g(n) for all large n, i.e., if there exist positive constants c1 and c2 and some nonnegative integer n0 such that c1·g(n) ≤ t(n) ≤ c2·g(n) for all n ≥ n0.
25. What is performance measurement?
Performance measurement is concerned with obtaining the space and the time requirements of
a particular algorithm.
26. What is an algorithm?
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following criteria:
a) input
b) Output
c) Definiteness
d) Finiteness
e) Effectiveness.
27. Define Program.
A program is the expression of an algorithm in a programming language. Sometimes words such as procedure, function and subroutine are used synonymously with program.
28. What is recursive algorithm?
An algorithm is said to be recursive if the same algorithm is invoked in the body.
29. What is space complexity and time complexity ?
The space complexity of an algorithm is the amount of memory it needs to run to completion.
The time complexity of an algorithm is the amount of computer time it needs to run to
completion.
30. Give the two major phases of performance evaluation
Performance evaluation can be loosely divided into two major phases:
a) a priori estimates (performance analysis)
b) a posteriori testing (performance measurement)
31. Define input size.
The input size of any instance of a problem is defined to be the number of words(or the
number of elements) needed to describe that instance.
32. Define best-case step count.
The best-case step count is the minimum number of steps that can be executed for the given
parameters.
33. Define worst-case step count.
The worst-case step count is the maximum number of steps that can be executed for the given
parameters.
34. Define average step count.
The average step count is the average number of steps executed over all instances with the given parameters.
35. Define Little "oh".
The function f(n) = o(g(n)) iff lim (n→∞) f(n)/g(n) = 0.
36. Define Little Omega.
The function f(n) = ω(g(n)) iff lim (n→∞) g(n)/f(n) = 0.
37. Write an algorithm using an iterative function to find the sum of n numbers.
Algorithm sum(a, n)
{
S := 0.0;
for i := 1 to n do
S := S + a[i];
return S;
}
38. Write an algorithm using a recursive function to find the sum of n numbers.
Algorithm Rsum(a, n)
{
if (n ≤ 0) then
return 0.0;
else return Rsum(a, n - 1) + a[n];
}
39. Define the divide and conquer method.
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the inputs into 'k' distinct subsets, 1 < k ≤ n, yielding 'k' subproblems. The subproblems must be solved, and then a method must be found to combine the subsolutions into a solution of the whole. If the subproblems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied.
40. Define control abstraction.
By a control abstraction we mean a procedure whose flow of control is clear but whose primary operations are performed by other procedures whose precise meanings are left undefined.
The control abstraction for divide-and-conquer:
Algorithm DAndC(P)
{
if Small(P) then return S(P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
apply DAndC to each of these subproblems;
return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
}
}
41.What is the substitution method?
One of the methods for solving any such recurrence relation is called the substitution method.
42. What is binary search?
If 'q' is always chosen such that 'aq' is the middle element (that is, q = ⌊(n+1)/2⌋), then the resulting search algorithm is known as binary search.
43. Give the computing time for binary search.
In conclusion, we are now able to completely describe the computing time of binary search by giving formulas that describe the best, average and worst cases:
Successful searches: Θ(1) best, Θ(log n) average, Θ(log n) worst
Unsuccessful searches: Θ(log n) best, average and worst
45. Define external path length.
The external path length E is defined analogously as the sum of the distances of all external nodes from the root.
46. Define internal path length.
The internal path length ‘I’ is the sum of the distances of all internal nodes from the root.
47. What is the maximum and minimum problem?
The problem is to find the maximum and minimum items in a set of 'n' elements. Though this problem may look so simple as to be contrived, it allows us to demonstrate divide-and-conquer in a simple setting.
48. What is Quick sort?
Quicksort is a divide-and-conquer strategy that works by partitioning its input elements according to their values relative to some preselected element (the pivot). It uses recursion, and the method is also called partition-exchange sort.
49. Write the analysis for Quick sort.
O(n log n) in the average and best cases, O(n²) in the worst case.
50. What is Merge sort? Is insertion sort better than merge sort?
Merge sort is a divide-and-conquer strategy that works by dividing an input array into two halves, sorting them recursively, and then merging the two sorted halves to get the original array sorted. Insertion sort works exceedingly fast on arrays of fewer than about 16 elements, though for large 'n' its computing time is O(n²).
51. Write an algorithm for straightforward maximum and minimum.
Algorithm StraightMaxMin(a, n, max, min)
// set max to the maximum and min to the minimum of a[1:n]
{
max := min := a[1];
for i := 2 to n do
{
if (a[i] > max) then max := a[i];
if (a[i] < min) then min := a[i];
}
}
52. What is the general divide and conquer recurrence?
The time efficiency T(n) of many divide-and-conquer algorithms satisfies the equation T(n) = a·T(n/b) + f(n). This is the general recurrence relation.
53. What is Master's theorem?
Let a be an integer greater than or equal to 1 and b be a real number greater than 1. Let c be a positive real number and d a nonnegative real number. Given a recurrence of the form
T(n) = a·T(n/b) + c·n^d for n > 1, with T(1) a given constant,
then (completing the standard statement for these constants):
T(n) = Θ(n^d) if a < b^d
T(n) = Θ(n^d log n) if a = b^d
T(n) = Θ(n^(log_b a)) if a > b^d