402 Ada Lab Manual

INDEX

Experiment No.   Aim

1. Write a program for Iterative Binary Search.

2. Write a program for Recursive Binary Search.

3. Write a program for Merge Sort.

4. Write a program for Quick Sort.

5. Write a program for Strassen’s Matrix Multiplication.

6. Write a program for The Greedy Knapsack Problem.

7. Write a program for Optimal merge patterns using the Greedy method.

8. Write a program for Huffman Coding.

9. Write a program for Minimum spanning trees using Kruskal’s algorithm.

10. Write a program for Minimum spanning trees using Prim’s algorithm.

11. Write a program for the single source shortest path algorithm.

12. Write a program for the Floyd-Warshall algorithm.


Program- 1

Iterative Binary Search Algorithms


Searching is the process of finding an item with specified properties among a collection of
items. The items may be stored individually as records in a database, or may be elements of a
search space defined by a mathematical formula or procedure.

Binary search requires an ordered list of elements and is applicable only to arrays. A
binary search (or half-interval search) algorithm locates the position of an item in a sorted array.
Binary search works by comparing an input value to the middle element of the array. The
comparison determines whether the middle element is equal to, less than, or greater than the input.
When the middle element equals the input, the search stops and typically returns the
position of that element. Otherwise, the comparison determines whether the input lies in the
lower or the upper half of the array. Binary search halves the number of items to check with
each successive iteration, thus locating the given item (or determining its absence) in
logarithmic time.

Note:

1. It is applicable to arrays, not to linked lists, because it is not possible to locate the middle
element of a linked list in constant time.
2. The elements in the array must be sorted.
3. Performance is good for searching a large collection of elements; for very few elements the
gain over linear search is small.

Iterative Algorithm: An iterative method attempts to solve a problem (for example, finding the
root of an equation or system of equations) by finding successive approximations to the solution
starting from an initial guess. This approach is in contrast to direct methods, which attempt to
solve the problem by a finite sequence of operations, and, in the absence of rounding errors,
would deliver an exact solution.

Example: Find 103 in {-1, 5, 6, 18, 19, 25, 46, 78, 102, 114}.

Step 1 (middle element is 19 < 103): search the right half {25, 46, 78, 102, 114}

Step 2 (middle element is 78 < 103): search the right half {102, 114}

Step 3 (middle element is 102 < 103): search the right half {114}

Step 4 (middle element is 114 > 103): search the left half, which is empty

Step 5 (low > high): the searched value is absent.


Algorithm

Algorithm IterativeBinarySearch( A[], value, low, high)


// The list is sorted in ascending order.
// Here A[] is an array of elements with lower bound “low” and upper bound “high”.
// This algorithm iteratively searches “value” in the array A[].
{
while (low <= high)
{
mid = (low + high) / 2;
if (A[mid] == value)
return mid; // found
else if (A[mid] < value)
low = mid + 1; // Search in Second half of the list.
else
high = mid - 1; // Search in First half of the list.

}
return null; // not found
}//end binary search

Complexity: The complexity of the iterative binary search algorithm is O(log2 n), where n is the
number of elements in the array.
IMPLEMENTATION:

// An iterative binary search function. It returns the location of x in
// the given array arr[l..r] if present, otherwise -1

#include<stdio.h>
int binarySearch(int arr[], int l, int r, int x)
{
while (l <= r)
{
int m = l + (r-l)/2;

// Check if x is present at mid


if (arr[m] == x)
return m;

// If x greater, ignore left half


if (arr[m] < x)
l = m + 1;

// If x is smaller, ignore right half


else
r = m - 1;
}

// if we reach here, then element was not present


return -1;
}

int main(void)
{
int arr[] = {2, 3, 4, 10, 40};
int n = sizeof(arr)/ sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n-1, x);
(result == -1)? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}
Program- 2
Recursive Binary Search Algorithms
Recursive Algorithm: A recursive algorithm is an algorithm which calls itself with "smaller (or
simpler)" input values, and which obtains the result for the current input by applying simple
operations to the returned value for the smaller (or simpler) input. More generally if a problem
can be solved utilizing solutions to smaller versions of the same problem, and the smaller versions
reduce to easily solvable cases, then one can use a recursive algorithm to solve that problem. In
general, recursive computer programs require more memory and computation than iterative
algorithms, but they are simpler and, in many cases, a natural way of thinking about the
problem. A recursive algorithm must include a stopping criterion.

In the recursive binary search algorithm, the list of elements is divided into two halves, and the
search then continues recursively in the one half where the element may be present.

Note: It is a divide and conquer algorithm

Algorithm

Algorithm RecursiveBinarySearch( A[], value, low, high)


// The list is sorted in ascending order.
// Here A[] is an array of elements with lower bound “low” and upper bound “high”.
// This algorithm recursively searches “value” in the array A[].
{
if (low > high)
return NULL; // value is not found.
else
{
mid = (low + high) / 2;
if (A[mid] == value)
return mid; // value is found at mid.
else if (A[mid] > value)
return RecursiveBinarySearch( A[], value, low, mid - 1);
// Search in first half of the list.
else
return RecursiveBinarySearch( A[], value, mid + 1, high);
// Search in second half of the list.
}
}

Complexity: The complexity of the recursive binary search algorithm is O(log2 n), where n is the
number of elements in the array.
IMPLEMENTATION:

#include<stdio.h>

// A recursive binary search function. It returns the location of x in
// the given array arr[l..r] if present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
if (r >= l)
{
int mid = l + (r - l)/2;

// If the element is present at the middle itself


if (arr[mid] == x) return mid;

// If element is smaller than mid, then it can only be present


// in left subarray
if (arr[mid] > x) return binarySearch(arr, l, mid-1, x);

// Else the element can only be present in right subarray


return binarySearch(arr, mid+1, r, x);
}

// We reach here when element is not present in array


return -1;
}
int main(void)
{
int arr[] = {2, 3, 4, 10, 40};
int n = sizeof(arr)/ sizeof(arr[0]);
int x = 10;
int result = binarySearch(arr, 0, n-1, x);
(result == -1)? printf("Element is not present in array")
: printf("Element is present at index %d", result);
return 0;
}
Program- 3
Merge Sort Algorithm
The sorting algorithm Mergesort produces a sorted sequence by sorting its two halves and
merging them.

Idea:
The Mergesort algorithm is based on divide and conquer strategy. First, the sequence to be sorted
is decomposed into two halves (Divide). Each half is sorted independently (Conquer). Then the
two sorted halves are merged into a sorted sequence (Combine).

Fig 3.1: Merge sort

The procedure mergesort sorts a sequence from index “low” to index “high”. First, index “mid” in
the middle between “low” and “high” is determined. Then the first part of the sequence (from low
to mid) and the second part (from mid+1 to high) are sorted by recursive calls of mergesort. Then
the two sorted halves are merged by procedure merge. Recursion ends when low = high, i.e. when
a subsequence consists of only one element.

The main work of the Mergesort algorithm is performed by the function merge. Merge is
usually implemented in the following way: the two halves are first copied into an auxiliary
array b. Then the two halves are scanned by pointers i and j, and at each step the smaller of the
two current elements is copied back to array a.
At the end a situation occurs where one index has reached the end of its half, while the other has
not. Then, in principle, the rest of the elements of the corresponding half have to be copied back.
Actually, this is not necessary for the second half, since (copies of) the remaining elements are
already at their proper places.

Algorithm

Algorithm Merge-Sort (low, high)


{
// Divide the problem into subproblems.
if (low<high)
{
mid=(low + high)/2;// Split the list from the middle.
// Solve the subproblems.
Merge-Sort(low, mid);
Merge-Sort(mid+1, high);
// Combine the solutions.
Merge(low, mid, high);
}
}

Algorithm Merge (low, mid, high)


{
//B[] is an auxiliary global array.
//A[low:high] is a global array that contains two sorted subsets in A[low: mid] and in
//A[mid+1: high]. This algorithm merges these two subsets.

i=low; j=mid+1; k=low;


while ( i<= mid and j<=high)
{
if (A[i]<=A[j])
{
B[k]=A[i];
i++;
}
else
{
B[k]=A[j];
j++;
}
k++;
}
// copy the remaining half in the array B.
if ( i> mid) then
for ( p= j to high)
{
B[k]= A[p];
k++;
}
else
for ( p= i to mid)
{
B[k]= A[p];
k++;
}

// copy the array B back into A.


for ( p= low to high)
{
A[p]= B[p];
}
}
Complexity: The time complexity of Mergesort is O(n log2 n).
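This bound can be derived from the divide-and-conquer recurrence (a standard derivation, sketched here for reference): splitting and merging both take linear time, so

```latex
T(n) = 2\,T(n/2) + cn, \qquad T(1) = c
```

Unfolding the recurrence gives log2 n levels of recursion, each contributing cn work in total, hence T(n) = O(n log2 n).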
IMPLEMENTATION:

#include <stdio.h>

#define max 10

int a[11] = { 10, 14, 19, 26, 27, 31, 33, 35, 42, 44, 0 };
int b[11]; /* auxiliary array; must cover the same index range as a[] */

void merging(int low, int mid, int high) {


int l1, l2, i;

for(l1 = low, l2 = mid + 1, i = low; l1 <= mid && l2 <= high; i++) {
if(a[l1] <= a[l2])
b[i] = a[l1++];
else
b[i] = a[l2++];
}

while(l1 <= mid)


b[i++] = a[l1++];

while(l2 <= high)


b[i++] = a[l2++];

for(i = low; i <= high; i++)


a[i] = b[i];
}

void sort(int low, int high) {


int mid;

if(low < high) {


mid = (low + high) / 2;
sort(low, mid);
sort(mid+1, high);
merging(low, mid, high);
} else {
return;
}
}

int main() {
int i;

printf("List before sorting\n");

for(i = 0; i <= max; i++)


printf("%d ", a[i]);

sort(0, max);

printf("\nList after sorting\n");


for(i = 0; i <= max; i++)
printf("%d ", a[i]);
}
Program- 4
Quick Sort Algorithm
Quick sort was discovered by Tony Hoare in 1962. In this algorithm, the hard work is splitting the
array into subsets so that merging the final result is trivial.

Quicksort is a divide-and-conquer sorting algorithm in which division is dynamically carried out
(as opposed to the static division in Mergesort). The three steps of Quicksort are as follows:

Divide: Rearrange the elements and split the array into two subarrays and an element in between,
such that each element in the left subarray is less than or equal to the middle element and
each element in the right subarray is greater than the middle element.
Conquer: Recursively sort the two subarrays.
Combine: None

Example:
Sort the numbers: 65, 70, 75, 80, 85, 60, 55, 50, 45

(1) (2) (3) (4) (5) (6) (7) (8) (9) (10) i j
65 70 75 80 85 60 55 50 45 +∞ 2 9
65 45 75 80 85 60 55 50 70 +∞ 3 8
65 45 50 80 85 60 55 75 70 +∞ 4 7
65 45 50 55 85 60 80 75 70 +∞ 5 6
65 45 50 55 60 85 80 75 70 +∞ 6 5
60 45 50 55 65 85 80 75 70 +∞
Sublist 1: {60, 45, 50, 55}    pivot: 65    Sublist 2: {85, 80, 75, 70}

Algorithm
Algorithm Quicksort (a, low, high)
{
/* Termination condition! */
if ( high > low )
{
pivot = partition( a, low, high );
quicksort( a, low, pivot-1 );
quicksort( a, pivot+1, high );
}
}

Algorithm partition (a, low, high)


{
v := a[low]; /* choose the first element as the pivot */
i := low;
j := high;
while ( i < j )
{
/* Move i right while item <= pivot */
while ( a[i] <= v and i < high )
i++;
/* Move j left while item > pivot */
while ( a[j] > v )
j--;

if ( i < j )
{
temp := a[i];
a[i] := a[j];
a[j] := temp;
}
}

/* j is the final position for the pivot */

a[low] := a[j];
a[j] := v;
return j;
}

Complexity:
Best Case: O(n log n)
Worst Case: O(n^2)
Average Case: O(n log n)
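These bounds follow from the recurrences for the two extreme partition shapes (a standard derivation, included here for reference):

```latex
\begin{aligned}
\text{Best case (balanced split):}\quad & T(n) = 2\,T(n/2) + cn && \Rightarrow\; T(n) = O(n\log n)\\
\text{Worst case (extreme pivot):}\quad & T(n) = T(n-1) + cn && \Rightarrow\; T(n) = O(n^2)
\end{aligned}
```

The worst case occurs, for example, when the input is already sorted and the first element is always chosen as the pivot, so every partition produces one empty subarray.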
IMPLEMENTATION:

#include<stdio.h>
void quicksort(int number[25],int first,int last){
int i, j, pivot, temp;

if(first<last){
pivot=first;
i=first;
j=last;

while(i<j){
while(number[i]<=number[pivot]&&i<last)
i++;
while(number[j]>number[pivot])
j--;
if(i<j){
temp=number[i];
number[i]=number[j];
number[j]=temp;
}
}

temp=number[pivot];
number[pivot]=number[j];
number[j]=temp;
quicksort(number,first,j-1);
quicksort(number,j+1,last);

}
}

int main(){
int i, count, number[25];

printf("How many elements are u going to enter?: ");


scanf("%d",&count);

printf("Enter %d elements: ", count);


for(i=0;i<count;i++)
scanf("%d",&number[i]);

quicksort(number,0,count-1);

printf("Order of Sorted elements: ");


for(i=0;i<count;i++)
printf(" %d",number[i]);

return 0;
}
Program- 5
Strassen’s Matrix Multiplication
Let A, B be two square matrices of size n × n. We want to calculate the matrix product
C = A × B, whose entries are

C[i,j] = Σ (k=1..n) A[i,k] * B[k,j],   1 ≤ i, j ≤ n.
The algorithm for this calculation is as follows:

Algorithm MatrixMultiplication (A, B, C, n)


{
for i:= 1 to n do
{ for j:=1 to n do
{
C[i,j] := 0;
for k := 1 to n do
C[i,j] := C[i,j] + A[i,k] * B[k,j]
}
}
}

This standard method (brute-force approach) of multiplying two n × n matrices takes
O(n^3) operations.

The usual multiplication of two 2 × 2 matrices takes 8 multiplications and 4 additions. Strassen
showed how two 2 × 2 matrices can be multiplied using only 7 multiplications and 18
additions/subtractions. This reduction is obtained with a divide-and-conquer approach: divide
each matrix into four sub-matrices, apply the equations below, and recursively multiply the
sub-matrices, terminating the recursion with a simple base case.

The equations that perform the 7 multiplications and 10 additions/subtractions on matrices of
size n/2 × n/2 are as follows:

P1 = (A11+ A22)(B11+B22)
P2 = (A21 + A22) * B11
P3 = A11 * (B12 - B22)
P4 = A22 * (B21 - B11)
P5 = (A11 + A12) * B22
P6 = (A21 - A11) * (B11 + B12)
P7 = (A12 - A22) * (B21 + B22)

Now remaining 8 additions / subtractions of size n/2 X n/2 are performed to calculate Cijs.

C11 = P1 + P4 - P5 + P7
C12 = P3 + P5
C21 = P2 + P4
C22 = P1 + P3 - P2 + P6

A divide-and-conquer skeleton for the matrix multiplication is as follows (note that this version
still performs 8 recursive sub-multiplications; Strassen's scheme replaces them with the 7
products above):

Algorithm StrassenMatMul ( *A, *B, *R, n)


{
if (n == 1)
{
(*R) += (*A) * (*B);
}

else
{
matmul(A, B, R, n/4);
matmul(A, B+(n/4), R+(n/4), n/4);
matmul(A+2*(n/4), B, R+2*(n/4), n/4);
matmul(A+2*(n/4), B+(n/4), R+3*(n/4), n/4);
matmul(A+(n/4), B+2*(n/4), R, n/4);
matmul(A+(n/4), B+3*(n/4), R+(n/4), n/4);
matmul(A+3*(n/4), B+2*(n/4), R+2*(n/4), n/4);
matmul(A+3*(n/4), B+3*(n/4), R+3*(n/4), n/4);
}
}

Complexity

Strassen's algorithm is a divide-and-conquer algorithm that is asymptotically faster than the
traditional approach: O(n^lg 7) = O(n^2.81).
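This exponent comes from the recurrence for the recursive scheme (a standard derivation, sketched here for reference): each call performs 7 half-size sub-multiplications plus Θ(n²) work for the 18 matrix additions/subtractions:

```latex
T(n) = 7\,T(n/2) + \Theta(n^2)
\;\Rightarrow\;
T(n) = \Theta\!\left(n^{\log_2 7}\right) \approx \Theta(n^{2.81})
```

by case 1 of the master theorem, since log2 7 ≈ 2.81 > 2.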
IMPLEMENTATION:

#include<stdio.h>
int main(){
int a[2][2], b[2][2], c[2][2], i, j;
int m1, m2, m3, m4 , m5, m6, m7;

printf("Enter the 4 elements of first matrix: ");


for(i = 0;i < 2; i++)
for(j = 0;j < 2; j++)
scanf("%d", &a[i][j]);

printf("Enter the 4 elements of second matrix: ");


for(i = 0; i < 2; i++)
for(j = 0;j < 2; j++)
scanf("%d", &b[i][j]);

printf("\nThe first matrix is\n");


for(i = 0; i < 2; i++){
printf("\n");
for(j = 0; j < 2; j++)
printf("%d\t", a[i][j]);
}

printf("\nThe second matrix is\n");


for(i = 0;i < 2; i++){
printf("\n");
for(j = 0;j < 2; j++)
printf("%d\t", b[i][j]);
}

m1= (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);


m2= (a[1][0] + a[1][1]) * b[0][0];
m3= a[0][0] * (b[0][1] - b[1][1]);
m4= a[1][1] * (b[1][0] - b[0][0]);
m5= (a[0][0] + a[0][1]) * b[1][1];
m6= (a[1][0] - a[0][0]) * (b[0][0]+b[0][1]);
m7= (a[0][1] - a[1][1]) * (b[1][0]+b[1][1]);

c[0][0] = m1 + m4- m5 + m7;


c[0][1] = m3 + m5;
c[1][0] = m2 + m4;
c[1][1] = m1 - m2 + m3 + m6;

printf("\nAfter multiplication using Strassen's algorithm \n");


for(i = 0; i < 2 ; i++){
printf("\n");
for(j = 0;j < 2; j++)
printf("%d\t", c[i][j]);
}

return 0;
}
Program- 6
The Greedy Knapsack Problem

Greedy algorithm: A greedy algorithm for an optimization problem always makes the choice
that looks best at the moment and adds it to the current subsolution. A greedy algorithm obtains
an optimal solution to a problem by making a sequence of choices. At each decision point, the
algorithm chooses the locally optimal solution. In other words, when we are considering which
choice to make, we make the choice that looks best in the current situation, without considering
results from subproblems.

Knapsack Problem: Given a set of items, each with a weight and a value, determine the number
of each item to include in a collection so that the total weight is less than or equal to a given limit
and the total value is as large as possible. It derives its name from the problem faced by someone
who is constrained by a fixed-size knapsack and must fill it with the most useful items.
We have n kinds of items, 1 through n. Each kind of item i has a profit value pi and a weight wi.
We usually assume that all values and weights are nonnegative. To simplify the representation,
we can also assume that the items are listed in increasing order of weight. The maximum weight
that we can carry in the bag is W.

Mathematically the knapsack problem can be formulated as:


Maximize    Σ (i=1..n) pi * xi

subject to  Σ (i=1..n) wi * xi ≤ W

and 0 ≤ xi ≤ 1,  1 ≤ i ≤ n

Concept:
1. Calculate pi/wi for each item i.
2. Then Sort the items in decreasing order of their pi/wi.
3. Select an item from the sorted list and place it into the bag such that the total weight of the
objects should not exceed the total capacity of the knapsack. Perform this step repeatedly.

Example :
i: (1, 2, 3, 4) pi: (5, 9, 4, 8) wi: (1, 3, 2, 2) and W= 4
now pi/wi: (5, 3, 2, 4)

Solution:
1st: all of item 1, x[1]=1, x[1]*W[1]=1
2nd: all of item 4, x[4]=1, x[4]*W[4]=2
3rd: 1/3 of item 2, x[2]=1/3, x[2]*W[2]=1
Now the total weight is 4=W
(x[3]=0)

The Algorithm:

Algorithm Greedy-fractional-knapsack (w, p/w, W)


{
for ( i = 1 to n)
do x[i] = 0
weight = 0
while ( weight < W)
do{
i = best remaining item // largest p[i]/w[i] not yet chosen
if (weight + w[i] ≤ W)
then
{
x[i] = 1
weight = weight + w[i]
}
else
{
x[i] = (W - weight) / w[i]
weight = W
}
}
return x
}

Complexity

If the items are already sorted into decreasing order of pi / wi, then the while-loop takes time
in O(n); therefore, the total time including the sort is in O(n log n).
IMPLEMENTATION:

#include <iostream>
#include <algorithm>
using namespace std;

// Structure for an item: profit value and weight
struct Item {
    int value, weight;
};

// Comparison function to sort items by value/weight ratio (decreasing)
bool cmp(struct Item a, struct Item b)
{
    double r1 = (double)a.value / a.weight;
    double r2 = (double)b.value / b.weight;
    return r1 > r2;
}

// Main greedy function to solve the problem
double fractionalKnapsack(int W, struct Item arr[], int n)
{
// sorting Items on basis of value/weight ratio
sort(arr, arr + n, cmp);

// Uncomment to see new order of Items with their ratio


/*
for (int i = 0; i < n; i++)
{
cout << arr[i].value << " " << arr[i].weight << " : "
<< ((double)arr[i].value / arr[i].weight) << endl;
}
*/

int curWeight = 0; // Current weight in knapsack


double finalvalue = 0.0; // Result (value in Knapsack)

// Looping through all Items


for (int i = 0; i < n; i++)
{
// If adding Item won't overflow, add it completely
if (curWeight + arr[i].weight <= W)
{
curWeight += arr[i].weight;
finalvalue += arr[i].value;
}

// If we can't add current Item, add fractional part of it


else
{
int remain = W - curWeight;
finalvalue += arr[i].value * ((double) remain / arr[i].weight);
break;
}
}

// Returning final value


return finalvalue;
}

// driver program to test above function


int main()
{
int W = 50; // Weight of knapsack
Item arr[] = {{60, 10}, {100, 20}, {120, 30}};

int n = sizeof(arr) / sizeof(arr[0]);

cout << "Maximum value we can obtain = "


<< fractionalKnapsack(W, arr, n);
return 0;
}
Program-7
Optimal merge patterns using Greedy method

Given n sorted files, there are many ways to pair-wise merge them into a single sorted
file. Different pairings require different amounts of computing time. The optimal way to
pair-wise merge n sorted files can be found using the greedy method.

A greedy attempt to obtain an optimal merge pattern is easy to formulate. Since merging an
n-record file and an m-record file requires possibly n + m record moves, the obvious
selection criterion is: at each step, merge the two smallest files together.

For Example: Suppose there are 3 sorted lists L1, L2, and L3, of sizes 30, 20, and 10, respectively,
which need to be merged into a combined sorted list, but we can merge only two at a time. We
intend to find an optimal merge pattern which minimizes the total number of comparisons. For
example, we can merge L1 and L2, which uses 30 + 20 = 50 comparisons resulting in a list of size
50. We can then merge this list with list L3, using another 50 + 10 = 60 comparisons, so the total
number of comparisons is 50 + 60 = 110. Alternatively, we can merge lists L2 and L3, using 20 +
10 = 30 comparisons, the resulting list (size 30) can then be merged with list L1, for another 30 +
30 = 60 comparisons. So the total number of comparisons is 30 + 60 = 90. It doesn’t take long to
see that this latter merge pattern is the optimal one.

Binary Merge Trees: We can depict the merge patterns using a binary tree, built from the leaf
nodes (the initial lists) towards the root in which each merge of two nodes creates a parent node
whose size is the sum of the sizes of the two children. For example, the two previous merge
patterns are depicted in the following two figures:

Fig 7.1: Two binary merge trees. Merging L1 and L2 first, then merging the result with L3,
gives cost 30*2 + 20*2 + 10*1 = 110. Merging L2 and L3 first, then merging the result with
L1, gives cost 30*1 + 20*2 + 10*2 = 90. In general, merge cost = sum of all weighted
external path lengths.

Example of the optimal merge tree algorithm:

Here external nodes (leaf nodes, the basic files) are represented by squares and internal nodes
are represented by circles.

Initially: 5 leaf nodes with sizes 2, 3, 5, 7, 9.
Iteration 1: merge 2 and 3 into 5.
Iteration 2: merge 5 and 5 into 10.
Iteration 3: merge 7 and 9 into 16 (chosen among 7, 9, and 10).
Iteration 4: merge 10 and 16 into 26.
Cost = 2*3 + 3*3 + 5*2 + 7*2 + 9*2 = 57.

Fig 7.2: Construction of optimal merge tree

If di is the distance from the root to the external node for file xi, and qi is the length of file xi,
then the total number of record moves for this binary merge tree is given by:

Σ (i=1..n) di * qi

This sum is called the weighted external path length of the tree.

Algorithm

Algorithm OptimalMergeTree (n)


{
// list is a global list of n single node.
for ( i:=1 to n-1) do
{
pt≔ new treenode; // Get a new tree node
(pt→lchild) ≔ Least(list); // Merge two trees with smallest length
(pt→rchild) ≔ Least(list);
(pt→weight) ≔ ((pt→lchild) →weight)+(pt→rchild) →weight);
Insert(list,pt);
}
return Least(list); // Tree left in the list is the merge tree.
}

Complexity: The total time complexity of the algorithm is O (nlog n)


IMPLEMENTATION:

#include <iostream>
#include <queue>
#include <vector>
#include <functional>
using namespace std;

int main()
{
int n;
cout << "Enter the no. of files\t";
cin >> n;

// Min-heap of file sizes: always merge the two smallest files first
priority_queue<int, vector<int>, greater<int> > pq;
cout << "\nEnter the sizes of the files for the optimal merge pattern";
for (int i = 0; i < n; i++)
{
int x;
cout << "\t";
cin >> x;
pq.push(x);
}

int l = 0;
cout << "\n\nThe merge costs are as follows......\n\n";
while (pq.size() > 1)
{
int a = pq.top(); pq.pop();
int b = pq.top(); pq.pop();
int cost = a + b;   // cost of merging the two smallest files
cout << cost << "\t";
l = l + cost;       // running weighted external path length
pq.push(cost);
}
cout << "\n\n The weighted external path length is ......" << l;
return 0;
}
Program-8
Huffman Coding

This problem is that of finding the minimum length bit string which can be used to encode a
string of symbols. One application is text compression:

Prefix (free) code: no codeword is a prefix of any other codeword (unambiguous).
An optimal data compression achievable by a character code can always be achieved with a prefix
code. An optimal code is always represented by a full binary tree, in which every non-leaf node
has two children

Huffman's scheme uses a table of frequency of occurrence for each symbol (or character) in the
input. This table may be derived from the input itself or from data which is representative of the
input. For instance, the frequency of occurrence of letters in normal English might be derived
from processing a large number of text documents and then used for encoding all text documents.
We then need to assign a variable-length bit string to each character that unambiguously
represents that character. This means that the encoding for each character must have a unique
prefix. If the characters to be encoded are arranged in a binary tree:
An encoding for each character is found by following the tree from the root to the character's
leaf: the encoding is the string of symbols on each branch followed.

For example:
String Encoding
TEA 10 00 010
SEA 011 00 010
TEN 10 00 110

Fig 8.1: Huffman tree

Algorithm

Algorithm Huffman(c) ; Analysis


{ n = |c| ; Q is a binary heap
Q=c ; O(n) BuildHeap
for i = 1 to n-1 ; O(n)
z = Allocate-Node()
x = Extract-Min(Q) ; O(lgn) O(n) times
y = Extract-Min(Q) ; O(lgn) O(n) times
left(z) = x
right(z) = y
f(z) = f(x) + f(y)
Insert(Q,z) ; O(lgn) O(n) times
return Extract-Min(Q) ; -------------
; O(nlgn)
}

Complexity: The time complexity of the Huffman algorithm is O (n log n). Using a heap to store
the weight of each tree, each iteration requires O (log n) time to determine the cheapest weight
and insert the new weight. There are O (n) iterations, one for each item.

IMPLEMENTATION:

#include <stdio.h>
#include <stdlib.h>

// This constant can be avoided by explicitly


// calculating height of Huffman Tree
#define MAX_TREE_HT 100

// A Huffman tree node


struct MinHeapNode {

// One of the input characters


char data;

// Frequency of the character


unsigned freq;

// Left and right child of this node


struct MinHeapNode *left, *right;
};

// A Min Heap: Collection of


// min-heap (or Huffman tree) nodes
struct MinHeap {

// Current size of min heap


unsigned size;

// capacity of min heap


unsigned capacity;

// Array of minheap node pointers


struct MinHeapNode** array;
};

// A utility function allocate a new


// min heap node with given character
// and frequency of the character
struct MinHeapNode* newNode(char data, unsigned freq)
{
struct MinHeapNode* temp
= (struct MinHeapNode*)malloc
(sizeof(struct MinHeapNode));

temp->left = temp->right = NULL;


temp->data = data;
temp->freq = freq;

return temp;
}

// A utility function to create


// a min heap of given capacity
struct MinHeap* createMinHeap(unsigned capacity)
{

struct MinHeap* minHeap


= (struct MinHeap*)malloc(sizeof(struct MinHeap));

// current size is 0
minHeap->size = 0;

minHeap->capacity = capacity;

minHeap->array
= (struct MinHeapNode**)malloc(minHeap->
capacity * sizeof(struct MinHeapNode*));
return minHeap;
}

// A utility function to
// swap two min heap nodes
void swapMinHeapNode(struct MinHeapNode** a,
struct MinHeapNode** b)
{

struct MinHeapNode* t = *a;


*a = *b;
*b = t;
}

// The standard minHeapify function.


void minHeapify(struct MinHeap* minHeap, int idx)
{

int smallest = idx;


int left = 2 * idx + 1;
int right = 2 * idx + 2;

if (left < minHeap->size && minHeap->array[left]->


freq < minHeap->array[smallest]->freq)
smallest = left;

if (right < minHeap->size && minHeap->array[right]->


freq < minHeap->array[smallest]->freq)
smallest = right;

if (smallest != idx) {
swapMinHeapNode(&minHeap->array[smallest],
&minHeap->array[idx]);
minHeapify(minHeap, smallest);
}
}

// A utility function to check


// if size of heap is 1 or not
int isSizeOne(struct MinHeap* minHeap)
{

return (minHeap->size == 1);


}

// A standard function to extract


// minimum value node from heap
struct MinHeapNode* extractMin(struct MinHeap* minHeap)
{

struct MinHeapNode* temp = minHeap->array[0];


minHeap->array[0]
= minHeap->array[minHeap->size - 1];

--minHeap->size;
minHeapify(minHeap, 0);

return temp;
}

// A utility function to insert


// a new node to Min Heap
void insertMinHeap(struct MinHeap* minHeap,
struct MinHeapNode* minHeapNode)
{

++minHeap->size;
int i = minHeap->size - 1;

while (i && minHeapNode->freq < minHeap->array[(i - 1) / 2]->freq) {

minHeap->array[i] = minHeap->array[(i - 1) / 2];


i = (i - 1) / 2;
}

minHeap->array[i] = minHeapNode;
}

// A standard function to build min heap


void buildMinHeap(struct MinHeap* minHeap)
{

int n = minHeap->size - 1;
int i;
for (i = (n - 1) / 2; i >= 0; --i)
minHeapify(minHeap, i);
}

// A utility function to print an array of size n


void printArr(int arr[], int n)
{
int i;
for (i = 0; i < n; ++i)
printf("%d", arr[i]);

printf("\n");
}

// Utility function to check if this node is leaf


int isLeaf(struct MinHeapNode* root)
{

return !(root->left) && !(root->right);


}

// Creates a min heap of capacity


// equal to size and inserts all character of
// data[] in min heap. Initially size of
// min heap is equal to capacity
struct MinHeap* createAndBuildMinHeap(char data[], int freq[], int size)
{

struct MinHeap* minHeap = createMinHeap(size);

for (int i = 0; i < size; ++i)


minHeap->array[i] = newNode(data[i], freq[i]);

minHeap->size = size;
buildMinHeap(minHeap);

return minHeap;
}

// The main function that builds Huffman tree


struct MinHeapNode* buildHuffmanTree(char data[], int freq[], int size)

{
struct MinHeapNode *left, *right, *top;

// Step 1: Create a min heap of capacity


// equal to size. Initially, there are
// nodes equal to size.
struct MinHeap* minHeap = createAndBuildMinHeap(data, freq, size);

// Iterate while size of heap doesn't become 1


while (!isSizeOne(minHeap)) {

// Step 2: Extract the two minimum


// freq items from min heap
left = extractMin(minHeap);
right = extractMin(minHeap);

// Step 3: Create a new internal


// node with frequency equal to the
// sum of the two nodes frequencies.
// Make the two extracted node as
// left and right children of this new node.
// Add this node to the min heap
// '$' is a special value for internal nodes, not used
top = newNode('$', left->freq + right->freq);

top->left = left;
top->right = right;

insertMinHeap(minHeap, top);
}

// Step 4: The remaining node is the


// root node and the tree is complete.
return extractMin(minHeap);
}

// Prints huffman codes from the root of Huffman Tree.


// It uses arr[] to store codes
void printCodes(struct MinHeapNode* root, int arr[], int top)
{

// Assign 0 to left edge and recur


if (root->left) {

arr[top] = 0;
printCodes(root->left, arr, top + 1);
}

// Assign 1 to right edge and recur


if (root->right) {

arr[top] = 1;
printCodes(root->right, arr, top + 1);
}

// If this is a leaf node, then


// it contains one of the input
// characters, print the character
// and its code from arr[]
if (isLeaf(root)) {
printf("%c: ", root->data);
printArr(arr, top);
}
}

// The main function that builds a


// Huffman Tree and print codes by traversing
// the built Huffman Tree
void HuffmanCodes(char data[], int freq[], int size)

{
// Construct Huffman Tree
struct MinHeapNode* root
= buildHuffmanTree(data, freq, size);

// Print Huffman codes using


// the Huffman tree built above
int arr[MAX_TREE_HT], top = 0;

printCodes(root, arr, top);


}

// Driver program to test above functions


int main()
{

char arr[] = { 'a', 'b', 'c', 'd', 'e', 'f' };


int freq[] = { 5, 9, 12, 13, 16, 45 };

int size = sizeof(arr) / sizeof(arr[0]);

HuffmanCodes(arr, freq, size);

return 0;
}
Program-9
Minimum Spanning Trees using Kruskal’s algorithm
Let G = (V, E) be an undirected connected graph. A subgraph T = (V, E’) of G is a spanning
tree of G iff T is a tree.
 A spanning tree is a minimal subgraph G’ of G such that V (G’) = V (G) and G’ is connected
 Any connected graph with n vertices must have n − 1 edges
 All connected graphs with n − 1 edges are trees

Kruskal's algorithm is an algorithm in graph theory that finds a minimum spanning tree for a
connected weighted graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized. If the
graph is not connected, then it finds a minimum spanning forest (a minimum spanning tree for
each connected component). Kruskal's algorithm is an example of a greedy algorithm.

Concept: Start with each vertex in its own set.

1. Examine each edge in increasing order of cost.
2. If the edge connects vertices in the same component, discard it. Otherwise, add the edge to
the MST and merge their sets. Repeat until all vertices are in a single set.

Example:

Fig 9.1: graph

Solution:
Iteration   Edge    Components
0           -       {1}; {2}; {3}; {4}; {5}; {6}; {7}
1           {1,2}   {1,2}; {3}; {4}; {5}; {6}; {7}
2           {2,3}   {1,2,3}; {4}; {5}; {6}; {7}
3           {4,5}   {1,2,3}; {4,5}; {6}; {7}
4           {6,7}   {1,2,3}; {4,5}; {6,7}
5           {1,4}   {1,2,3,4,5}; {6,7}
6           {2,5}   Not included (adds cycle)
7           {4,7}   {1,2,3,4,5,6,7}

The Minimum Spanning Tree will be:

Fig 9.2: Minimum Spanning Tree

Algorithm

Algorithm MST_Kruskal ( G, t )
{
// G is the graph, with edges E(G) and vertices V(G).
// w(u,v) gives the weight of edge (u,v).
// t is the set of edges in the minimum spanning tree.
// Build a heap out of the edges using edge cost as the comparison criteria
// using the heapify algorithm
heap = heapify ( E(G) )
t = NULL;
// Change the parent of each vertex to a NULL
// Each vertex is in different set
for ( i = 0; i < |V(G)|; i++ )
{
parent[i] = NULL
}
i=0
while ( ( i < n - 1 ) AND (heap is not empty) )
{
e = delete ( heap ) // Get minimum cost edge from heap
adjust ( heap ) // Reheapify heap
// Find both sides of edge e = (u,v) in the tree grown so far
j = find ( u(e), t )
k = find ( v(e), t )
if ( j != k )// Both are in different sets
{
i++
t[i,1] = u
t[i,2] = v
union ( j, k )
}
}
}

Complexity: First sort the edges by weight using a comparison sort in O(E log E) time; this
allows the step "remove an edge with minimum weight" to operate in constant time. Next, we
use a disjoint-set data structure (union-find) to keep track of which vertices are in which
components. We need to perform O(E) operations: two find operations and possibly one union
for each edge. Even a simple disjoint-set data structure such as disjoint-set forests with union
by rank can perform O(E) operations in O(E log V) time. Thus the total time is O(E log E) = O(E log V).
IMPLEMENTATION:

#include <bits/stdc++.h>
using namespace std;

// a structure to represent a weighted edge in the graph
class Edge
{
public:
int src, dest, weight;
};

// a structure to represent a connected, undirected
// and weighted graph
class Graph
{
public:
// V-> Number of vertices, E-> Number of edges
int V, E;

// graph is represented as an array of edges.
// Since the graph is undirected, the edge
// from src to dest is also an edge from dest
// to src; both are counted as 1 edge here.
Edge* edge;
};

// Creates a graph with V vertices and E edges
Graph* createGraph(int V, int E)
{
Graph* graph = new Graph;
graph->V = V;
graph->E = E;

graph->edge = new Edge[E];

return graph;
}

// A structure to represent a subset for union-find
class subset
{
public:
int parent;
int rank;
};

// A utility function to find the set of an element i
// (uses path compression technique)
int find(subset subsets[], int i)
{
// find root and make root as parent of i
// (path compression)
if (subsets[i].parent != i)
subsets[i].parent = find(subsets, subsets[i].parent);

return subsets[i].parent;
}

// A function that does union of the two sets of x and y
// (uses union by rank)
void Union(subset subsets[], int x, int y)
{
int xroot = find(subsets, x);
int yroot = find(subsets, y);

// Attach smaller rank tree under root of higher
// rank tree (Union by Rank)
if (subsets[xroot].rank < subsets[yroot].rank)
subsets[xroot].parent = yroot;
else if (subsets[xroot].rank > subsets[yroot].rank)
subsets[yroot].parent = xroot;

// If ranks are same, then make one as root and
// increment its rank by one
else
{
subsets[yroot].parent = xroot;
subsets[xroot].rank++;
}
}

// Compare two edges according to their weights.
// Used in qsort() for sorting an array of edges;
// returns negative, zero or positive as qsort() requires
int myComp(const void* a, const void* b)
{
Edge* a1 = (Edge*)a;
Edge* b1 = (Edge*)b;
return a1->weight - b1->weight;
}

// The main function to construct MST using Kruskal's algorithm
void KruskalMST(Graph* graph)
{
int V = graph->V;
Edge result[V]; // This will store the resultant MST
int e = 0; // An index variable, used for result[]
int i = 0; // An index variable, used for sorted edges

// Step 1: Sort all the edges in non-decreasing
// order of their weight. If we are not allowed to
// change the given graph, we can create a copy of
// the array of edges
qsort(graph->edge, graph->E, sizeof(graph->edge[0]), myComp);

// Allocate memory for creating V subsets
subset* subsets = new subset[V];

// Create V subsets with single elements
for (int v = 0; v < V; ++v)
{
subsets[v].parent = v;
subsets[v].rank = 0;
}

// Number of edges to be taken is equal to V-1
while (e < V - 1 && i < graph->E)
{
// Step 2: Pick the smallest edge. And increment
// the index for next iteration
Edge next_edge = graph->edge[i++];

int x = find(subsets, next_edge.src);
int y = find(subsets, next_edge.dest);

// If including this edge doesn't cause a cycle,
// include it in the result and increment the
// index of result for the next edge
if (x != y)
{
result[e++] = next_edge;
Union(subsets, x, y);
}
// Else discard the next_edge
}

// print the contents of result[] to display the built MST
cout<<"Following are the edges in the constructed MST\n";
for (i = 0; i < e; ++i)
cout<<result[i].src<<" -- "<<result[i].dest<<" == "<<result[i].weight<<endl;
return;
}

// Driver code
int main()
{
/* Let us create the following weighted graph
        10
    0--------1
    |  \     |
   6|    \5  |15
    |      \ |
    2--------3
        4          */
int V = 4; // Number of vertices in graph
int E = 5; // Number of edges in graph
Graph* graph = createGraph(V, E);
// add edge 0-1
graph->edge[0].src = 0;
graph->edge[0].dest = 1;
graph->edge[0].weight = 10;

// add edge 0-2
graph->edge[1].src = 0;
graph->edge[1].dest = 2;
graph->edge[1].weight = 6;

// add edge 0-3
graph->edge[2].src = 0;
graph->edge[2].dest = 3;
graph->edge[2].weight = 5;

// add edge 1-3
graph->edge[3].src = 1;
graph->edge[3].dest = 3;
graph->edge[3].weight = 15;

// add edge 2-3
graph->edge[4].src = 2;
graph->edge[4].dest = 3;
graph->edge[4].weight = 4;

KruskalMST(graph);

return 0;
}
Program-10
Minimum Spanning Trees using Prim’s algorithm
Prim's algorithm is a greedy algorithm that finds a minimum spanning tree for a connected
weighted undirected graph. This means it finds a subset of the edges that forms a tree that
includes every vertex, where the total weight of all the edges in the tree is minimized. The
algorithm was developed in 1930 by Czech mathematician Vojtěch Jarník, rediscovered
independently by computer scientist Robert C. Prim in 1957, and again by Edsger Dijkstra in
1959. Therefore it is also sometimes called the DJP algorithm, the Jarník algorithm, or the
Prim–Jarník algorithm.

The Concept:
The algorithm continuously increases the size of a tree, one edge at a time, starting with a tree
consisting of a single vertex, until it spans all vertices.
 Initialize: Vnew = {x}, where x is an arbitrary node (starting point) from V, Enew = {}
 Repeat until Vnew = V:
 Choose an edge (u, v) with minimal weight such that u is in Vnew and v is not (if there are
multiple edges with the same weight, any of them may be picked)
 Add v to Vnew, and (u, v) to Enew

Example:

Fig 10.1: Construction of MST using Prim’s Algorithm


Algorithm

Algorithm Prim (E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost adjacency matrix of an n-vertex
// graph such that cost[i,j] is either a positive real number, or infinity if no edge (i,j)
// exists. A minimum spanning tree is computed and stored as a set of edges in the
// array t[1:n-1, 1:2]. (t[i,1], t[i,2]) is an edge in the minimum-cost spanning tree.
// The final cost is returned.
{
Let (k,l) be an edge of minimum cost in E.
mincost≔ cost[k,l];
t[1,1] ≔ k;
t[1,2] ≔l;
for( i≔1 to n ) do // Initialize near
{
if ( cost[i,l] < cost [i,k]) then
near[i] ≔ l;
else
near[i] ≔ k;
}
near[k] ≔ near [l]≔0;

for ( i≔2 to n-1) do
{ // find additional n-2 edges for t.
Let j be an index such that near[j]≠ 0 and cost[j, near[j]] is minimum.
t[i,1]≔j;
t[i,2]≔ near[j];
mincost≔ mincost+ cost[j,near[j]];
near[j]≔0;

for ( k≔1 to n) do // Update near[].
{
if ((near[k]≠0) and (cost[k, near[k]] > cost[k, j])) then
near[k]≔ j;
}
}
return mincost;
}

Complexity
A simple implementation using an adjacency matrix graph representation and searching an array
of weights to find the minimum weight edge to add requires O(V2) running time. Using a simple
binary heap data structure and an adjacency list representation, Prim's algorithm can be shown to
run in time O(E log V) where E is the number of edges and V is the number of vertices. Using a
more sophisticated Fibonacci heap, this can be brought down to O(E + V log V), which is
asymptotically faster when the graph is dense enough that E is Ω(V).
IMPLEMENTATION:

#include<stdio.h>
#include<stdlib.h>

#define infinity 9999
#define MAX 20

int G[MAX][MAX],spanning[MAX][MAX],n;

int prims();

int main()
{
int i,j,total_cost;
printf("Enter no. of vertices:");
scanf("%d",&n);

printf("\nEnter the adjacency matrix:\n");

for(i=0;i<n;i++)
for(j=0;j<n;j++)
scanf("%d",&G[i][j]);

total_cost=prims();
printf("\nspanning tree matrix:\n");

for(i=0;i<n;i++)
{
printf("\n");
for(j=0;j<n;j++)
printf("%d\t",spanning[i][j]);
}

printf("\n\nTotal cost of spanning tree=%d",total_cost);


return 0;
}

int prims()
{
int cost[MAX][MAX];
int u,v,min_distance,distance[MAX],from[MAX];
int visited[MAX],no_of_edges,i,min_cost,j;

//create cost[][] matrix, spanning[][]
for(i=0;i<n;i++)
for(j=0;j<n;j++)
{
if(G[i][j]==0)
cost[i][j]=infinity;
else
cost[i][j]=G[i][j];
spanning[i][j]=0;
}

//initialise visited[], distance[] and from[]
distance[0]=0;
visited[0]=1;

for(i=1;i<n;i++)
{
distance[i]=cost[0][i];
from[i]=0;
visited[i]=0;
}

min_cost=0; //cost of spanning tree
no_of_edges=n-1; //no. of edges to be added

while(no_of_edges>0)
{
//find the vertex at minimum distance from the tree
min_distance=infinity;
for(i=1;i<n;i++)
if(visited[i]==0&&distance[i]<min_distance)
{
v=i;
min_distance=distance[i];
}

u=from[v];

//insert the edge in spanning tree
spanning[u][v]=distance[v];
spanning[v][u]=distance[v];
no_of_edges--;
visited[v]=1;

//update the distance[] array
for(i=1;i<n;i++)
if(visited[i]==0&&cost[i][v]<distance[i])
{
distance[i]=cost[i][v];
from[i]=v;
}

min_cost=min_cost+cost[u][v];
}

return(min_cost);
}
Program-11
Single Source Shortest path algorithm using Greedy
Method
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and
published in 1959, is a graph search algorithm that solves the single-source shortest path
problem for a graph with nonnegative edge path costs, producing a shortest path tree. This
algorithm is often used in routing. An equivalent algorithm was developed by Edward F. Moore
in 1957.

For a given source vertex (node) in the graph, the algorithm finds the path with lowest cost (i.e.
the shortest path) between that vertex and every other vertex. It can also be used for finding
costs of shortest paths from a single vertex to a single destination vertex by stopping the
algorithm once the shortest path to the destination vertex has been determined. For example, if
the vertices of the graph represent cities and edge path costs represent driving distances between
pairs of cities connected by a direct road, Dijkstra's algorithm can be used to find the shortest
route between one city and all other cities.

Concept:
Let the node at which we are starting be called the initial node. Let the distance of node Y be the
distance from the initial node to Y. Dijkstra's algorithm will assign some initial distance values
and will try to improve them step by step.

1. Assign to every node a distance value: set it to zero for our initial node and to infinity for all
other nodes.
2. Mark all nodes as unvisited. Set initial node as current.
3. For current node, consider all its unvisited neighbors and calculate their tentative distance. For
example, if current node (A) has distance of 6, and an edge connecting it with another node (B)
is 2, the distance to B through A will be 6+2=8. If this distance is less than the previously
recorded distance, overwrite the distance.
4. When we are done considering all neighbors of the current node, mark it as visited. A visited
node will not be checked ever again; its distance recorded now is final and minimal.
5. If all nodes have been visited, finish. Otherwise, set the unvisited node with the smallest distance
(from the initial node, considering all nodes in graph) as the next "current node" and continue
from step 3.

The basic formula for finding the distance is:

D[i] = min(D[i], min over j∈S of (D[j] + C[j,i]))

Example: Consider the digraph


Algorithm:

Algorithm Dijkstra (C[ ][ ], D[ ], v)
// V is the set of vertices of graph G.
// C[1:n, 1:n] is a cost adjacency matrix where C[i, j] is the cost of an edge <i, j> in G.
// D[1:n] is a distance vector that indicates the cost of each destination from the source.
{
S= { v };
for (i = 1; i <= n; i++)
D[i] = C[v,i];
for (i=1; i < = n-1; i++)
{
choose a vertex w Є V-S such that D[w] is a minimum;
S = S ∪ {w};
for each vertex v Є V-S
D[v] = min (D[v], D[w] + C[w, v])
}
}

Complexity: The complexity of the algorithm is O (n2).


IMPLEMENTATION:

// A C program for Dijkstra's single source shortest path algorithm.
// The program is for adjacency matrix representation of the graph

#include <limits.h>
#include <stdbool.h> // for bool, needed when compiling as C
#include <stdio.h>

// Number of vertices in the graph
#define V 9

// A utility function to find the vertex with minimum distance value, from
// the set of vertices not yet included in shortest path tree
int minDistance(int dist[], bool sptSet[])
{
// Initialize min value
int min = INT_MAX, min_index;

for (int v = 0; v < V; v++)
if (sptSet[v] == false && dist[v] <= min)
min = dist[v], min_index = v;

return min_index;
}

// A utility function to print the constructed distance array
void printSolution(int dist[], int n)
{
printf("Vertex Distance from Source\n");
for (int i = 0; i < V; i++)
printf("%d \t\t %d\n", i, dist[i]);
}

// Function that implements Dijkstra's single source shortest path algorithm
// for a graph represented using adjacency matrix representation
void dijkstra(int graph[V][V], int src)
{
int dist[V]; // The output array. dist[i] will hold the shortest
// distance from src to i

bool sptSet[V]; // sptSet[i] will be true if vertex i is included in shortest
// path tree or shortest distance from src to i is finalized

// Initialize all distances as INFINITE and sptSet[] as false
for (int i = 0; i < V; i++)
dist[i] = INT_MAX, sptSet[i] = false;

// Distance of source vertex from itself is always 0
dist[src] = 0;

// Find shortest path for all vertices
for (int count = 0; count < V - 1; count++) {
// Pick the minimum distance vertex from the set of vertices not
// yet processed. u is always equal to src in the first iteration.
int u = minDistance(dist, sptSet);

// Mark the picked vertex as processed
sptSet[u] = true;

// Update dist value of the adjacent vertices of the picked vertex.
for (int v = 0; v < V; v++)

// Update dist[v] only if v is not in sptSet, there is an edge from
// u to v, and the total weight of the path from src to v through u is
// smaller than the current value of dist[v]
if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
&& dist[u] + graph[u][v] < dist[v])
dist[v] = dist[u] + graph[u][v];
}

// print the constructed distance array
printSolution(dist, V);
}

// driver program to test above function
int main()
{
/* Let us create the example graph discussed above */
int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
{ 4, 0, 8, 0, 0, 0, 0, 11, 0 },
{ 0, 8, 0, 7, 0, 4, 0, 0, 2 },
{ 0, 0, 7, 0, 9, 14, 0, 0, 0 },
{ 0, 0, 0, 9, 0, 10, 0, 0, 0 },
{ 0, 0, 4, 14, 10, 0, 2, 0, 0 },
{ 0, 0, 0, 0, 0, 2, 0, 1, 6 },
{ 8, 11, 0, 0, 0, 0, 1, 0, 7 },
{ 0, 0, 2, 0, 0, 0, 6, 7, 0 } };

dijkstra(graph, 0);

return 0;
}
Program-12
Floyd-Warshall Algorithm using Dynamic Programming
The Floyd–Warshall algorithm (sometimes known as the WFI Algorithm, Roy–Floyd
algorithm or just Floyd's algorithm) is a graph analysis algorithm for finding shortest paths in a
weighted graph (with positive or negative edge weights). A single execution of the algorithm
will find the lengths (summed weights) of the shortest paths between all pairs of vertices
though it does not return details of the paths themselves. The algorithm is an example of
dynamic programming. It was published in its currently recognized form by Robert Floyd in
1962. However, it is essentially the same as algorithms previously published by Bernard Roy
in 1959 and also by Stephen Warshall in 1962.

Concept: The Floyd-Warshall Algorithm is an application of Dynamic Programming.

Let dist(k,i,j) be the length of the shortest path from i to j that uses only the vertices
1, 2, ..., k as intermediate vertices. The recurrence is:

k = 0 is our base case: dist(0,i,j) is the length of the edge from vertex i to vertex j if it
exists, and ∞ otherwise.

dist(k,i,j) = min(dist(k - 1,i,k) + dist(k - 1,k,j), dist(k - 1,i,j)): for any vertex i and vertex j,
the shortest path from i to j with intermediate vertices drawn from 1..k either does not involve
vertex k at all (in which case it is the same as dist(k - 1,i,j)), or it goes through vertex k, in
which case it is the combination of the shortest path from vertex i to k and the shortest path
from vertex k to j.

Example: Consider the graph


Solution:

Fig 12.1: All pairs shortest path construction

Algorithm:

Algorithm AllPairs(dist)
{
for i = 1 to n
for j = 1 to n
if there is an edge from i to j
dist[0][i][j] = C[i, j]
else
dist[0][i][j] = ∞

for k = 1 to n
for i = 1 to n
for j = 1 to n
dist[k][i][j] = min(dist[k-1][i][j], dist[k-1][i][k] + dist[k-1][k][j])
}

Complexity: This algorithm takes Θ(n3) time and, as written, Θ(n3) space; it has the distinct
advantage of hiding a small constant in its behavior, since very little work is done in the
innermost loop. Furthermore, the space bound can be reduced to Θ(n2) by noticing that
dist(k,i,j) depends only on the dist(k-1, ·, ·) values, so the update can be performed in place
on a single n × n matrix.
IMPLEMENTATION:

#include<stdio.h>
int min(int,int);
void floyds(int p[10][10],int n)
{
int i,j,k;
for(k=1;k<=n;k++)
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
if(i==j)
p[i][j]=0;
else
p[i][j]=min(p[i][j],p[i][k]+p[k][j]);
}
int min(int a,int b)
{
if(a<b)
return(a);
else
return(b);
}
int main()
{
int p[10][10],w,n,e,u,v,i,j;
printf("\n Enter the number of vertices:");
scanf("%d",&n);
printf("\n Enter the number of edges:\n");
scanf("%d",&e);
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
p[i][j]=999;
}
for(i=1;i<=e;i++)
{
printf("\n Enter the end vertices of edge%d with its weight \n",i);
scanf("%d%d%d",&u,&v,&w);
p[u][v]=w;
}
printf("\n Matrix of input data:\n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
printf("%d \t",p[i][j]);
printf("\n");
}
floyds(p,n);
printf("\n All-pairs shortest path matrix:\n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
printf("%d \t",p[i][j]);
printf("\n");
}
printf("\n The shortest paths are:\n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
{
if(i!=j)
printf("\n <%d,%d>=%d",i,j,p[i][j]);
}
}
Important Viva Questions

1. What is an algorithm?
An algorithm is a sequence of unambiguous instructions for solving a problem. i.e. for
obtaining a required output for any legitimate input in a finite amount of time.
2. What do you mean by Amortized Analysis?
Amortized analysis finds the average running time per operation over a worst case sequence of
operations. Amortized analysis differs from average-case performance in that probability is not
involved; amortized analysis guarantees the time per operation over worst-case performance.
3. What are important problem types? (or) Enumerate some important types of
problems.
a) Sorting
b) Searching
c) Numerical problems
d) Geometric problems
e) Combinatorial Problems
f) Graph Problems
g) String processing Problems
4. Name some basic Efficiency classes
a) Constant
b) Logarithmic
c) Linear
d) nlogn
e) Quadratic
f) Cubic
g) Exponential
h) Factorial
5. What are algorithm design techniques?
Algorithm design techniques ( or strategies or paradigms) are general approaches to solving
problems algorithmatically, applicable to a variety of problems from different areas of
computing. General design techniques are:
a) Brute force
b) divide and conquer
c) decrease and conquer
d) transform and concquer
e) greedy technique
f) dynamic programming
g) backtracking
h) branch and bound
6. How is an algorithm’s time efficiency measured?
Time efficiency indicates how fast the algorithm runs. An algorithm’s time efficiency is
measured as a function of its input size by counting the number of times its basic operation
(running time) is executed. Basic operation is the most time consuming operation in the
algorithm’s innermost loop.
7. What is Big ‘Oh’ notation?
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some
nonnegative integer n0 such that:
t(n) ≤ c·g(n) for all n ≥ n0
8. What is an Activation frame?
It is a storage area for an invocation of recursive program (parameters, local variables, return
address/value etc.). Activation frame allocated from frame stack pointed by frame pointer.
9. Define order of an algorithm
Measuring the performance of an algorithm in relation with the input size n is known as order
of growth.
10. What is a recursive call?
A call that a recursive algorithm makes to itself is a recursive call. An algorithm that calls
itself directly is direct recursive. An algorithm "A" is said to be indirect recursive if it calls
another algorithm which in turn calls "A".
11. What do you mean by stepwise refinement?
In top down design methodology the problem is solved in sequence (step by step) is known as
stepwise refinement.
12. How is the efficiency of the algorithm defined?
The efficiency of an algorithm is defined with the components.
a) Time efficiency -indicates how fast the algorithm runs
b) Space efficiency -indicates how much extra memory the algorithm needs
13. Define direct recursive and indirect recursive algorithms.
Recursion occurs where the definition of an entity refers to the entity itself. Recursion can be
direct when an entity refers to itself directly or indirect when it refers to other entities which
refers to it. A (Directly) recursive routine calls itself. Mutually recursive routines are an
example of indirect recursion. A (Directly) recursive data type contains pointers to instances of
the data type.
14. What are the characteristics of an algorithm?
Every algorithm should have the following five characteristics
a) Input
b) Output
c) Definiteness
d) Effectiveness
e) Termination
Therefore, an algorithm can be defined as a sequence of definite and effective instructions,
which terminates with the production of correct output from the given input. In other words,
viewed little more formally, an algorithm is a step by step formalization of a mapping function
to map input set onto an output set.
15. What do you mean by time complexity and space complexity of an algorithm?
Time complexity indicates how fast the algorithm runs. Space complexity deals with extra
memory it require. Time efficiency is analyzed by determining the number of repetitions of the
basic operation as a function of input size. Basic operation: the operation that contributes most
towards the running time of the algorithm The running time of an algorithm is the function
defined by the number of steps (or amount of memory) required to solve input instances of size
n.
16. Define Big Omega Notations
A function t(n) is said to be in Ω(g(n)), denoted t(n) Є Ω(g(n)), if t(n) is bounded below by
some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant
c and some nonnegative integer n0 such that
t(n) ≥ c·g(n) for all n ≥ n0
17. What are the different criteria used to improve the effectiveness of algorithm?
(i) The effectiveness of algorithm is improved, when the design, satisfies the following
constraints to be minimum.
Time efficiency - how fast an algorithm in question runs.
Space efficiency – an extra space the algorithm requires
(ii) The algorithm has to provide result for all valid inputs.
18. Analyze the time complexity of the following segment:
for(i=0;i<N;i++)
for(j=N/2;j>0;j--)
sum++;
Time Complexity = N * (N/2) = N2/2 = O(N2)
19. write general plan for analyzing non-recursive algorithms.
a) Decide on a parameter indicating the input's size.
b) Identify the algorithm's basic operation.
c) Check whether the number of times the basic operation is executed depends only on the size
of the input; if it also depends on some additional property, then the best, worst and average
cases need to be investigated separately.
d) Set up a sum expressing the number of times the basic operation is executed (establishing
the order of growth).
20. How will you measure input size of algorithms?
The time taken by an algorithm grows with the size of the input. So the running time of the
program depends on the size of its input. The input size is measured as the number of items in
the input that is a parameter n is indicating the algorithm’s input size.
21. Define the terms: pseudocode, flow chart
A pseudocode is a mixture of a natural language and programming language like constructs. A
pseudocode is usually more precise than natural language. A flowchart is a method of
expressing an algorithm by a collection of connected geometric shapes containing descriptions
of the algorithm’s steps.
22. write general plan for analyzing recursive algorithms.
a) Decide on a parameter indicating the input's size.
b) Identify the algorithm's basic operation.
c) Check whether the number of times the basic operation is executed depends only on the size
of the input; if it also depends on some additional property, then the best, worst and average
cases need to be investigated separately.
d) Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
e) Solve the recurrence (establishing the order of growth).
23. What do you mean by Combinatorial Problem?
Combinatorial Problems are problems that ask to find a combinatorial object-such as
permutation, a combination, or a subset--that satisfies certain constraints and has some desired
property.
24. Define Big Theta Notations
A function t(n) is said to be in Θ(g(n)) , denoted t(n) Є Θ (g(n)) , if t(n) is bounded both above
and below by some positive constant multiple of g(n) for all large n, i.e., if there exist some
positive constants c1 and c2 and some nonnegative integer n0 such that c1 g(n) ≤t(n) ≤ c2g(n)
for all n ≥ n0
25. What is performance measurement?
Performance measurement is concerned with obtaining the space and the time requirements of
a particular algorithm.
26. What is an algorithm?
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In
addition, all algorithms must satisfy the following criteria:
a) input
b) Output
c) Definiteness
d) Finiteness
e) Effectiveness.
27. Define Program.
A program is the expression of an algorithm in a programming language. Sometimes works
such as procedure, function and subroutine are used synonymously program.
28. What is recursive algorithm?
An algorithm is said to be recursive if the same algorithm is invoked in the body.
29. What is space complexity and time complexity ?
The space complexity of an algorithm is the amount of memory it needs to run to completion.
The time complexity of an algorithm is the amount of computer time it needs to run to
completion.
30. Give the two major phases of performance evaluation
Performance evaluation can be loosely divided into two major phases:
a) a prior estimates (performance analysis)
b) a Posterior testing(performance measurement)
31. Define input size.
The input size of any instance of a problem is defined to be the number of words(or the
number of elements) needed to describe that instance.
32. Define best-case step count.
The best-case step count is the minimum number of steps that can be executed for the given
parameters.
33. Define worst-case step count.
The worst-case step count is the maximum number of steps that can be executed for the given
parameters.
34. Define average step count.
The average step count is the average number of steps executed an instances with the given
parameters.
35. Define Little "oh".
The function f(n) = o(g(n)) iff lim (n→∞) f(n)/g(n) = 0.
36. Define Little Omega.
The function f(n) = ω(g(n)) iff lim (n→∞) g(n)/f(n) = 0, i.e., lim (n→∞) f(n)/g(n) = ∞.
37. Write algorithm using iterative function to fine sum of n numbers.
Algorithm sum(a, n)
{
S := 0.0;
for i := 1 to n do
S := S + a[i];
return S;
}
38. Write an algorithm using Recursive function to fine sum of n numbers,
Algorithm Rsum(a, n)
{
if (n ≤ 0) then
return 0.0;
else
return Rsum(a, n-1) + a[n];
}
39. Define the divide an conquer method.
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting
the inputs into 'k' distinct subsets, 1 < k ≤ n, yielding 'k' subproblems. The subproblems must
be solved, and then a method must be found to combine subsolutions into a solution of the
whole. If the subproblems are still relatively large, then the divide-and-conquer strategy can
possibly be reapplied.
40. Define control abstraction.
By a control abstraction we mean a procedure whose flow of control is clear but whose primary
operations are specified by other procedures whose precise meanings are left undefined.
The control abstraction for divide-and-conquer:
Algorithm DAndC(P)
{
if small(p) then return S(P);
else
{
divide P into smaller instances p1, p2, …, pk, k ≥ 1;
Apply DAndC to each of these subproblems;
return combine(DAndC(p1), DAndC(p2), …, DAndC(pk));
}
}
41.What is the substitution method?
One of the methods for solving any such recurrence relation is called the substitution method.
42. What is binary search?
If 'q' is always chosen such that 'aq' is the middle element (that is, q = ⌊(n+1)/2⌋), then the
resulting search algorithm is known as binary search.
43. Give computing time for Binary search?
In conclusion, we are now able to completely describe the computing time of binary search by
giving formulas for the best, average and worst cases:
Successful searches: Θ(1) best, Θ(log n) average, Θ(log n) worst.
Unsuccessful searches: Θ(log n) in the best, average and worst cases.
45. Define external path length?
The external path length E is defined analogously as the sum of the distances of all external
nodes from the root.
46. Define internal path length.
The internal path length ‘I’ is the sum of the distances of all internal nodes from the root.
47. What is the maximum and minimum problem?
The problem is to find the maximum and minimum items in a set of 'n' elements. Though
this problem may look so simple as to be contrived, it allows us to demonstrate
divide-and-conquer in a simple setting.
48. What is the Quick sort?
Quicksort is a divide and conquer strategy that works by partitioning its input elements
according to their value relative to some preselected element (the pivot). It uses recursion and
the method is also called partition-exchange sort.
49. Write the Analysis for the Quick sort.
O(nlogn) in average and best cases, O(n2) in worst case
50. What is merge sort? Is insertion sort better than merge sort?
Merge sort is a divide and conquer strategy that works by dividing an input array into two
halves, sorting them recursively and then merging the two sorted halves to get the original
array sorted. Insertion sort works exceedingly fast on arrays of fewer than 16 elements, though
for large 'n' its computing time is O(n2).
51. Write an algorithm for straightforward maximum and minimum.
Algorithm StraightMaxMin(a, n, max, min)
// Set max to the maximum and min to the minimum of a[1:n].
{
    max := min := a[1];
    for i := 2 to n do
    {
        if (a[i] > max) then max := a[i];
        if (a[i] < min) then min := a[i];
    }
}
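The same algorithm as runnable Python (0-based, returning the pair instead of using reference parameters; the function name is illustrative):

```python
def straight_max_min(a):
    """Return (max, min) of a non-empty list using 2(n - 1) comparisons."""
    mx = mn = a[0]
    for x in a[1:]:
        if x > mx:      # new maximum found
            mx = x
        if x < mn:      # new minimum found
            mn = x
    return mx, mn
```

For example, `straight_max_min([3, 7, 1, 9, 4])` returns `(9, 1)`.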
52. What is the general divide-and-conquer recurrence?
The time efficiency T(n) of many divide-and-conquer algorithms satisfies the equation
T(n) = aT(n/b) + f(n). This is the general recurrence relation.
53. What is the Master theorem?
Let a be an integer greater than or equal to 1 and b be a real number greater than 1. Let c be
a positive real number and d a nonnegative real number. Given a recurrence of the form
    T(n) = aT(n/b) + n^c for n > 1, with T(1) = d,
then for n a power of b,
1. if logb a < c, T(n) = Θ(n^c),
2. if logb a = c, T(n) = Θ(n^c log n),
3. if logb a > c, T(n) = Θ(n^(logb a)).
For example, merge sort has a = 2, b = 2, c = 1, so logb a = c and T(n) = Θ(n log n).

54. Write the algorithm for iterative binary search.
Algorithm BinSearch(a, n, x)
// Given an array a[1:n] of elements in nondecreasing
// order, n > 0, determine whether x is present.
{
    low := 1;
    high := n;
    while (low ≤ high) do
    {
        mid := ⌊(low + high)/2⌋;
        if (x < a[mid]) then high := mid - 1;
        else if (x > a[mid]) then low := mid + 1;
        else return mid;
    }
    return 0;
}
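An equivalent runnable Python version (the list is 0-indexed internally, but it returns the 1-based position as in the pseudocode, or 0 when x is absent):

```python
def bin_search(a, x):
    """Return the 1-based index of x in sorted list a, or 0 if absent."""
    low, high = 1, len(a)
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid - 1]:       # x lies in the lower half
            high = mid - 1
        elif x > a[mid - 1]:     # x lies in the upper half
            low = mid + 1
        else:
            return mid           # found at position mid
    return 0                     # unsuccessful search
```

For example, `bin_search([2, 5, 7, 11], 7)` returns 3, and `bin_search([2, 5, 7, 11], 4)` returns 0.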
55. Describe the recurrence relation for merge sort.
If the time for the merging operation is proportional to n, then the computing time of merge
sort is described by the recurrence relation
    T(n) = a                 if n = 1, a a constant
    T(n) = 2T(n/2) + cn      if n > 1, c a constant
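A runnable Python sketch matching this recurrence — two recursive calls on the halves plus a linear-time merge:

```python
def merge_sort(a):
    """Return a new sorted list; T(n) = 2T(n/2) + cn gives O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in time proportional to n.
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

For example, `merge_sort([5, 2, 9, 1])` returns `[1, 2, 5, 9]`.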

56. Explain the greedy method.
The greedy method is an important design technique which makes the choice that looks best
at the moment. Given n inputs, we are required to obtain a subset that satisfies some
constraints; such a subset is a feasible solution. The greedy method suggests that one can
devise an algorithm that works in stages, considering one input at a time.
57. Define feasible and optimal solution.
Given n inputs, a subset that satisfies some given constraints is called a feasible solution. A
feasible solution that either maximizes or minimizes the given objective function is called an
optimal solution.
58. Write the control abstraction for greedy method.
Algorithm Greedy(a, n)
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
59. What are the constraints of the knapsack problem?
Maximize Σ pixi over 1 ≤ i ≤ n,
subject to the constraint Σ wixi ≤ m over 1 ≤ i ≤ n, and 0 ≤ xi ≤ 1 for 1 ≤ i ≤ n,
where m is the bag capacity, n is the number of objects, and for each object i, wi and pi are the
weight and profit of the object respectively.
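These constraints describe the fractional knapsack, which the greedy method solves exactly by filling the bag in order of decreasing profit/weight ratio. A minimal Python sketch (the function name is illustrative):

```python
def fractional_knapsack(profits, weights, capacity):
    """Greedy fractional knapsack: take items by decreasing profit/weight ratio."""
    items = sorted(zip(profits, weights), key=lambda pw: pw[0] / pw[1],
                   reverse=True)
    total, remaining = 0.0, capacity
    for p, w in items:
        if remaining <= 0:
            break
        take = min(w, remaining)      # the fraction taken is x_i = take / w
        total += p * take / w         # profit earned from this fraction
        remaining -= take
    return total
```

On the classic instance p = (25, 24, 15), w = (18, 15, 10), m = 20 it returns the optimal profit 31.5.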
60. What is a minimum cost spanning tree?
A spanning tree of a connected graph is a connected acyclic subgraph that contains all
vertices of the graph. A minimum spanning tree of a weighted connected graph is its spanning
tree of the smallest weight, where the weight of the tree is the sum of the weights of all its
edges. A minimum spanning subtree of a weighted graph (G, w) is a spanning subtree of G of
minimum weight
    w(T) = Σ w(e), summed over e ∈ T.
Minimum Spanning Subtree Problem: given a weighted connected undirected graph (G, w),
find a minimum spanning subtree.
61. Specify the algorithms used for constructing Minimum cost spanning tree.
a) Prim’s Algorithm
b) Kruskal’s Algorithm
62. State the single-source shortest path problem (Dijkstra's algorithm).
For a given vertex called the source in a weighted connected graph, find shortest paths to all
its other vertices. Dijkstra's algorithm applies to graphs with non-negative weights only.
63. What is the knapsack problem?
A bag or sack of given capacity m and n objects are given. Each object has weight wi and
profit pi. The fraction of object i taken is xi (i.e., 0 ≤ xi ≤ 1). If the fraction is 1 then the entire
object is put into the sack. When we place this fraction into the sack we get weight wixi and
profit pixi.
64. Write any two characteristics of a greedy algorithm.
To solve a problem in an optimal way, construct the solution from the given set of candidates.
As the algorithm proceeds, two other sets get accumulated: one contains the candidates that
have already been considered and chosen, while the other contains the candidates that have
been considered but rejected.
65. What is the greedy approach?
The method suggests constructing the solution through a sequence of steps, each expanding
the partially constructed solution obtained so far, until a complete solution is reached. On each
step, the choice must be:
Feasible (it satisfies the problem constraints)
Locally optimal (it is the best local choice among all feasible choices available on that step)
Irrevocable (once made, it cannot be changed)
66. What are the steps required to develop a greedy algorithm?
a) Determine the optimal substructure of the problem.
b) Develop a recursive solution.
c) Prove that at any stage of recursion one of the optimal choices is greedy choice. Thus it is
always safe to make greedy choice.
d) Show that all but one of the sub problems induced by having made the greedy choice are
empty.
e) Develop a recursive algorithm and convert into iterative algorithm.
67. Define forest.
Collection of subtrees that are obtained when the root node is eliminated is known as a forest.
68. Write the difference between the greedy method and dynamic programming.
Greedy method: only one sequence of decisions is generated, and an optimal solution is not
guaranteed.
Dynamic programming: many sequences of decisions are generated, and it always gives an
optimal solution.
69. State the requirement in the optimal storage on tapes problem.
Find a permutation of the n programs so that when they are stored on the tape in this order the
mean retrieval time (MRT) is minimized. This problem fits the ordering paradigm.
70. State Kruskal's algorithm.
The algorithm looks at an MST for a weighted connected graph as an acyclic subgraph with
|V| − 1 edges for which the sum of the edge weights is the smallest.
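A compact Python sketch of Kruskal's algorithm, using a union-find structure to reject edges that would form a cycle (the names and the `(weight, u, v)` edge format are assumptions of this sketch):

```python
def kruskal(n, edges):
    """Kruskal's MST on vertices 0..n-1; edges is a list of (w, u, v).
    Returns (total weight, list of chosen (u, v) edges)."""
    parent = list(range(n))

    def find(x):                      # union-find root with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):     # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # accept only edges joining two trees
            parent[ru] = rv
            mst.append((u, v))
            total += w
        if len(mst) == n - 1:         # |V| - 1 edges: the MST is complete
            break
    return total, mst
```

For a triangle with edge weights 1, 2, 3, the two cheapest edges are chosen and the total weight is 3.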
71. State the efficiency of Dijkstra's algorithm.
O(|V|²) (weight matrix and priority queue as an unordered array)
O(|E| log |V|) (adjacency list and priority queue as a min-heap)
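A Python sketch of the O(|E| log |V|) variant, with an adjacency list and the standard-library heapq module as the min-heap:

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest paths for non-negative edge weights.
    adj maps u -> list of (v, weight). Returns a dict of distances."""
    dist = {source: 0}
    heap = [(0, source)]              # min-heap keyed by tentative distance
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                  # stale heap entry; skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd          # relax the edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```

For example, `dijkstra({0: [(1, 4), (2, 1)], 2: [(1, 2)], 1: []}, 0)` returns `{0: 0, 2: 1, 1: 3}`.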
72. Differentiate the subset paradigm and the ordering paradigm.
Subset paradigm: at each stage a decision is made regarding whether a particular input is in an
optimal solution. Examples: Knapsack, MST.
Ordering paradigm: for problems that do not call for the selection of an optimal subset, in the
greedy manner we make decisions by considering the inputs in some order. Example: optimal
storage on tapes.
73. Write the difference between the greedy method and dynamic programming.
1. In the greedy method only one sequence of decisions is generated, while in dynamic
programming many sequences of decisions are generated.
2. The greedy method does not guarantee an optimal solution always, but dynamic
programming always gives an optimal solution.
74. Define dynamic programming.
Dynamic programming is an algorithm design method that can be used when the solution to a
problem is viewed as the result of a sequence of decisions. It is a technique for solving
problems with overlapping subproblems.
75. What are the features of dynamic programming?
Optimal solutions to sub problems are retained so as to avoid recomputing of their values.
Decision sequences containing subsequences that are sub optimal are not considered.
It definitely gives the optimal solution always.
76. What are the drawbacks of dynamic programming?
Time and space requirements are high, since storage is needed for all levels.
Optimality should be checked at all levels.
77. Write the general procedure of dynamic programming.
The development of dynamic programming algorithm can be broken into a sequence of 4
steps.
a) Characterize the structure of an optimal solution.
b) Recursively define the value of the optimal solution.
c) Compute the value of an optimal solution in the bottom-up fashion.
d) Construct an optimal solution from the computed information.
78. Define principle of optimality.
It states that an optimal solution to any of its instances must be made up of optimal solutions to
its sub-instances.
79. Define multistage graph.
A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into
k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. The multistage graph problem is to find a minimum-cost path
from s (the source) to t (the sink). There are two approaches (forward and backward).
80. Define the all-pairs shortest path problem.
Given a weighted connected graph, the all-pairs shortest path problem asks to find the lengths
of the shortest paths from each vertex to all other vertices.
81. Define distance matrix.
Recording the lengths of the shortest paths in an n × n matrix is called the distance matrix D.
82. Define Floyd's algorithm.
a) It finds all-pairs shortest paths.
b) The algorithm computes the distance matrix of a weighted graph with n vertices through a
series of n × n matrices: D(0), ..., D(k−1), D(k), ..., D(n).
83. State the time efficiency of Floyd's algorithm.
O(n³); it is cubic.
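A direct Python sketch of the triple loop behind this cubic bound:

```python
def floyd(W):
    """All-pairs shortest paths. W is an n x n weight matrix with
    float('inf') where no edge exists. Returns the distance matrix."""
    n = len(W)
    D = [row[:] for row in W]         # D(0) is the weight matrix itself
    for k in range(n):                # D(k): intermediates restricted to 0..k
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

For example, with edges 0→1 (3), 1→2 (2), 2→0 (1), the computed distance from 0 to 2 is 5 and from 1 to 0 is 3.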
84. Define OBST.
a) Dynamic programming is used.
b) If the probabilities of searching for the elements of a set are known, the problem is to find
an optimal BST for which the average number of comparisons in a search is the smallest
possible.
85. Define Catalan number.
The total number of binary search trees with n keys is equal to the nth Catalan number
C(n) = C(2n, n) · 1/(n + 1) for n > 0, with C(0) = 1.
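A one-line Python check of this formula using the standard-library math.comb:

```python
from math import comb

def catalan(n):
    """nth Catalan number: C(n) = C(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)   # the division is always exact
```

For example, `catalan(3)` returns 5: there are exactly five distinct binary search trees on three keys.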
86. State time and space efficiency of OBST
Space efficiency: quadratic.
Time efficiency: cubic.
87. What are the requirements that are needed for performing Backtracking?
To solve any problem using backtracking, it requires that all the solutions satisfy a
complex set of constraints. They are:
i. Explicit constraints.
ii. Implicit constraints.
88. Define explicit constraint.
They are rules that restrict each xi to take on values only from a given set. They depend on the
particular instance I of the problem being solved. All tuples that satisfy the explicit constraints
define a possible solution space.
89. Define implicit constraint.
They are rules that determine which of the tuples in the solution space of I satisfy the criterion
function. They describe the way in which the xi must relate to each other.
90. Define state space tree.
The tree organization of the solution space is referred to as the state space tree.
91. Define state space of the problem.
All the paths from the root of the organization tree to all the nodes constitute the state space
of the problem.
92. Define answer states.
Answer states are those solution states s for which the path from the root to s defines a tuple
that is a member of the set of solutions of the problem.
93. What are static trees?
The tree organizations that are independent of the problem instance being solved are called
static trees.
94. What are dynamic trees?
The tree organizations that depend on the problem instance being solved are called dynamic
trees.
95. Define a live node.
A node which has been generated and all of whose children have not yet been generated is
called a live node.
96. Define an E-node.
E-node stands for "node being expanded". Any live node whose children are currently being
generated is called an E-node.
97. Define a dead node.
A dead node is a generated node which is not to be expanded further or all of whose children
have been generated.
98. What are the factors that influence the efficiency of the backtracking algorithm?
The efficiency of the backtracking algorithm depends on the following four factors:
a) the time needed to generate the next xk,
b) the number of xk satisfying the explicit constraints,
c) the time for the bounding functions Bk,
d) the number of xk satisfying Bk.
99.Define Branch-and-Bound method.
The term Branch-and-Bound refers to all the state space methods in which all children of the
E-node are generated before any other live node can become the E- node.
100. What are the searching techniques that are commonly used in the Branch-and-Bound
method?
The searching techniques that are commonly used in Branch-and-Bound method are:
FIFO
LIFO
LC
Heuristic search
101. State the 8-Queens problem.
The problem is to place eight queens on an 8 × 8 chessboard so that no two queens "attack"
each other, that is, so that no two of them are on the same row, column or diagonal.
102. State the sum of subsets problem.
Given n distinct positive numbers usually called as weights, the problem calls for finding all
the combinations of these numbers whose sums are m.
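A backtracking sketch of sum of subsets in Python (the weights are assumed positive, the pruning bounds are the standard ones, and the names are illustrative):

```python
def sum_of_subsets(weights, m):
    """Return every subset of the positive weights summing exactly to m."""
    weights = sorted(weights)          # nondecreasing order enables pruning
    solutions = []

    def backtrack(i, chosen, total, remaining):
        if total == m:                 # found a solution tuple
            solutions.append(list(chosen))
            return
        if i == len(weights):
            return
        if total + remaining < m:      # even taking everything left falls short
            return
        if total + weights[i] <= m:    # branch 1: include weights[i]
            chosen.append(weights[i])
            backtrack(i + 1, chosen, total + weights[i],
                      remaining - weights[i])
            chosen.pop()
        # branch 2: exclude weights[i]
        backtrack(i + 1, chosen, total, remaining - weights[i])

    backtrack(0, [], 0, sum(weights))
    return solutions
```

On the classic instance w = (5, 10, 12, 13, 15, 18), m = 30 it finds the three solutions {5, 10, 15}, {5, 12, 13} and {12, 18}.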
103. State m – colorability decision problem.
Let G be a graph and m be a given positive integer. We want to discover whether the nodes of
G can be colored in such a way that no two adjacent nodes have the same color yet only m
colors are used.
104.Define chromatic number of the graph.
The m – colorability optimization problem asks for the smallest integer m for which the graph
G can be colored. This integer is referred to as the chromatic number of the graph.
105. Define a planar graph.
A graph is said to be planar iff it can be drawn in such a way that no two edges cross each
other.
106. What are NP-hard and NP-complete problems?
A problem is NP-hard if every problem in NP can be reduced to it in polynomial time; it is
NP-complete if, in addition, it itself belongs to NP. (Problems whose computing times are
bounded by polynomials of small degree form the class P.)
107. What is a decision problem?
Any problem for which the answer is either zero or one is called a decision problem.
108. What is an approximate solution?
A feasible solution with value close to the value of an optimal solution is called an
approximate solution.
109. What are promising and non-promising nodes?
A node in a state space tree is said to be promising if it corresponds to a partially constructed
solution from which a complete solution can be obtained. The nodes which are not promising
for a solution in a state space tree are called non-promising nodes.
110. Write the formula for the bounding function in the knapsack problem.
In the knapsack problem the upper bound value is computed by the formula
    UB = v + (W − w) · (vi+1 / wi+1),
where v and w are the value and weight accumulated so far, W is the capacity, and
vi+1 / wi+1 is the value-to-weight ratio of the next item.
111. Write about the traveling salesperson problem.
Let G = (V, E) be a directed graph. A tour of G is a directed simple cycle that includes every
vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The traveling
salesperson problem is to find a tour of minimum cost. In the branch-and-bound technique for
the TSP, the lower bound is lb = s/2.
112. Write some applications of traveling salesperson problem.
a) Routing a postal van to pick up mail from boxes located at n different sites.
b) Using a robot arm to tighten the nuts on some piece of machinery on an assembly line.
c) Production environment in which several commodities are manufactured on the same set of
machines.
113. Give the time complexity and space complexity of the traveling salesperson problem.
Time complexity is O(n² 2ⁿ). Space complexity is O(n 2ⁿ).
114. Differentiate decision problem and optimization problem.
Any problem for which the answer is either zero or one is called a decision problem. Any
problem that involves the identification of an optimal (maximum or minimum) value of a
given cost function is called an optimization problem.
115. What are the classes P and NP?
P is the set of all decision problems solvable by deterministic algorithms in polynomial time.
NP is the set of all decision problems solvable by nondeterministic algorithms in polynomial
time.
116. Define NP-Hard and NP-Complete problems
Problem L is NP-Hard if and only if satisfiability reduces to L. A Problem L is NP-Complete if
and only if L is NP-Hard and L belongs to NP.