DCA7104 Analysis and Design of Algorithms

Deepak Sarswat is a 3rd semester MCA student taking the course DCA7104 Analysis and Design of Algorithms. In set 1, he discusses properties of algorithms and provides examples of branch and bound algorithms and the traveling salesman problem. He also discusses analyzing the efficiency of recursive algorithms and provides an example of calculating the factorial of a number recursively. In set 2, he describes bottom-up and top-down heap construction algorithms and provides a comparison of top-down vs bottom-up approaches. Finally, he discusses insertion sort, providing time complexities for different cases and noting it is best for small, nearly sorted data.


Name :- DEEPAK SARSWAT

Roll No. :- 2114100601

Program :- MCA

Semester:- 3RD

Course code & Name :- DCA7104 Analysis and Design of Algorithms

SET- 1

Ans.1(A) :- Properties of algorithms :- An algorithm can have zero or more external inputs
and must produce one or more outputs. It must also stop after a finite number of steps. The
properties of an algorithm are:

• Accuracy - Must provide correct and accurate output for any valid input.

• Clarity – Every instruction should be clear, precise and unambiguous. Inputs and outputs
must be clearly defined.

• Finiteness – In all cases when following the instructions of an algorithm, it must stop after a
finite number of steps.

• Effectiveness – All instructions should be very basic so that they can basically be done
using only pencil and paper.

• Generality – Algorithms need general instructions that can be applied to any case.

Branch and bound algorithm: Branch and bound algorithms are usually used for
optimization problems. The algorithm forms a tree of subproblems, treating the main
problem as the "root problem". A bounding method constructs upper and lower bounds for a
given problem, and at each node this bounding method is applied.

• If the bounds match, a feasible solution has been found for that particular subproblem.

• If the bounds do not match, the problem represented by that node is split and the two
subproblems become child nodes. The search continues, using the best known feasible
solution to prune the remaining nodes of the tree.

Branch-and-Bound Algorithm Example: Traveling Salesman Problem: A salesman needs
to visit each of n cities once and wants to minimize the total distance traveled.

• Consider the routing problem as finding the shortest path through a set of cities, visiting
each city once.

• Split the node into two child problems:

1. The shortest route that visits city A first
2. The shortest route that does not visit city A first

• Continue splitting as the tree grows.

• Heuristics: Rules of thumb that help determine which possibilities to explore first.

• Optimization: how to eliminate some possibilities without fully investigating them.
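The branching-and-pruning idea above can be sketched in Python. This is a minimal illustration, not from the assignment text: the bound used is simply the cost of the partial route, and a branch is pruned as soon as that cost reaches the best complete tour found so far. The function name and sample matrix are hypothetical.

```python
import math

def tsp_branch_and_bound(dist):
    """Branch and bound for the traveling salesman problem on a
    distance matrix. Bound = cost of the partial route so far."""
    n = len(dist)
    best_cost = math.inf
    best_tour = None

    def branch(tour, cost):
        nonlocal best_cost, best_tour
        if cost >= best_cost:              # bound: prune this subtree
            return
        if len(tour) == n:                 # leaf: close the tour back to the start
            total = cost + dist[tour[-1]][tour[0]]
            if total < best_cost:
                best_cost, best_tour = total, tour[:]
            return
        for city in range(n):              # branch on the next city to visit
            if city not in tour:
                branch(tour + [city], cost + dist[tour[-1]][city])

    branch([0], 0)                         # fix city 0 as the starting point
    return best_tour, best_cost
```

With a 4-city matrix such as [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]], the optimal tour cost found is 80. Real solvers use tighter lower bounds (e.g., reduced cost matrices) to prune far more aggressively.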

(B). A general scheme for analyzing the efficiency of recursive algorithms is as follows:

1) Determine the input size based on the n parameter.

2) Identify and analyze the basic operations of recursive algorithms.

3) Determine the number of elementary operations performed on different inputs of the same
size.

4) If the count can differ for inputs of the same size, the best, worst and average cases for
input size n must be determined and analyzed separately.

5) Set up a recurrence relation, with proper initial conditions, for the number of basic
operations.

6) Solve the recurrence relation by forward and backward substitution to obtain the order of
growth. Using mathematical induction, we can prove the answer. These steps are made
clearer with a few examples.

Example 1: Computing the factorial of a number n. We can find the factorial of a number n
by performing repeated multiplication. Consider n = 5; then n factorial (n!) is computed by
the following steps:

1) n! = 5!

2) 4! * 5

3) 3! * 4 * 5

4) 2! * 3 * 4 * 5

5) 1! * 2 * 3 * 4 * 5

6) 0! * 1 * 2 * 3 * 4 * 5

7) 1 * 1 * 2 * 3 * 4 * 5 = 120

The recursive algorithm for finding factorial of a number is given as:

Algorithm Tracing to Find the Factorial of a Number


n = 2

Pass 1: factorial(2) // this is a recursive function which calls itself
  if (2 == 0) // this condition is not met
    return 1
  else return factorial(2 - 1) * 2 // recursively calls factorial(1)

Pass 2: factorial(1)
  if (1 == 0) // condition is false
    return 1
  else return factorial(1 - 1) * 1 // recursively calls factorial(0)

Pass 3: factorial(0)
  if (0 == 0) // condition is true
    return 1 // recursion stops; the results multiply back up: 1 * 1 * 2 = 2
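The traced routine can be written as a short Python function (a minimal sketch; the name `factorial` is the natural choice, not mandated by the text). Counting multiplications gives the recurrence M(n) = M(n-1) + 1 with M(0) = 0, so the algorithm performs Θ(n) multiplications.

```python
def factorial(n):
    # Base case: 0! = 1, so the recursion stops after a finite number of steps
    if n == 0:
        return 1
    # Recursive case: n! = (n - 1)! * n, mirroring the trace above
    return factorial(n - 1) * n
```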

ANS. 2 :- Bottom-up heap construction: It prepares the essentially complete binary tree with n
nodes by putting the keys in the order given and then "heapifies" the tree as follows. Starting
with the last parental node and ending with the root, the algorithm checks whether parental
dominance holds for the key at this node.

If it does not, the algorithm swaps the node's key K with the larger key of its children and
checks whether parental dominance holds for K in its new position.

This process continues until the parental dominance requirement for K is fulfilled. After
completing the "heapification" of the sub-tree rooted at the current parental node, the
algorithm continues to do the same for the node's immediate predecessor. The algorithm
stops after this is done for the tree's root, giving the final heap.

The numbers above the nodes in the tree indicate their positions in the array shown below.
Since the value of a node's key does not change while it is shifted down the tree, it need not
be involved in intermediary swaps: the "emptied" node is repeatedly filled with the larger of
its children's keys until the final position is reached, where it receives the "erased" value
again.

Algorithm for bottom-up heap construction:

Algorithm HeapBottomUp(H[1..n])
// Constructs a heap from the elements of a given array by the bottom-up algorithm
// Input: An array H[1..n] of orderable items
// Output: A heap H[1..n]
for i ← ⌊n/2⌋ downto 1 do
    k ← i; v ← H[k]
    heap ← false
    while not heap and 2 * k ≤ n do
        j ← 2 * k
        if j < n // there are two children
            if H[j] < H[j + 1]
                j ← j + 1
        if v ≥ H[j]
            heap ← true
        else
            H[k] ← H[j]
            k ← j
    H[k] ← v
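The pseudocode above translates directly into Python. This is a sketch using 0-based indices, so the children of index k sit at 2k+1 and 2k+2; the function name is illustrative.

```python
def heap_bottom_up(h):
    """Build a max-heap in place from list h, following the bottom-up
    pseudocode above (0-based indices)."""
    n = len(h)
    for i in range(n // 2 - 1, -1, -1):   # last parental node down to the root
        k, v = i, h[i]
        heap = False
        while not heap and 2 * k + 1 <= n - 1:
            j = 2 * k + 1                 # left child
            if j < n - 1 and h[j] < h[j + 1]:
                j += 1                    # there are two children: pick the larger
            if v >= h[j]:
                heap = True               # parental dominance holds
            else:
                h[k] = h[j]               # shift the larger child's key up
                k = j
        h[k] = v                          # place the key in its final position
    return h
```

For example, [2, 9, 7, 6, 5, 8] heapifies to [9, 6, 8, 2, 5, 7].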

Top-down heap construction algorithm: The top-down heap construction algorithm (which is
less efficient) forms a heap by successive insertions of a new key into a previously
constructed heap.
To insert a new key into a heap, first attach a new node with key K in it next to the last leaf
of the existing heap. Then sift K up to its proper place in the new heap:

Compare K with its parent's key. Stop if the parent key is greater than or equal to K (the
structure is a heap); otherwise, swap these two keys and compare K with its new parent.
This swapping continues until K is not greater than its new parent or it reaches the root. In
this algorithm, too, we can sift up an empty node until it reaches its proper position, where it
will receive K's value.

This insertion operation doesn't require more key comparisons than the heap's height. Since
the height of a heap with n nodes is about log2 n, the time efficiency of insertion is in O(log
n).
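The sift-up insertion just described can be sketched as follows (0-based indices; the function name is illustrative):

```python
def heap_insert(h, key):
    """Insert key into max-heap h by attaching it as the last leaf and
    sifting it up, as described above."""
    h.append(key)                          # attach next to the last leaf
    k = len(h) - 1
    while k > 0:
        parent = (k - 1) // 2
        if h[parent] >= h[k]:
            break                          # parental dominance restored
        h[parent], h[k] = h[k], h[parent]  # swap key with its parent
        k = parent
    return h
```

Inserting 10 into the heap [9, 6, 8, 2, 5, 7] yields [10, 6, 9, 2, 5, 7, 8] after two swaps, consistent with the O(log n) bound.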

BASIS FOR COMPARISON  | TOP-DOWN APPROACH                  | BOTTOM-UP APPROACH
----------------------|------------------------------------|----------------------------------
Basic                 | Breaks the massive problem into    | Solves the fundamental low-level
                      | smaller subproblems.               | problems and integrates them into
                      |                                    | a larger one.
Process               | Submodules are analyzed in         | Examines what data is to be
                      | isolation.                         | encapsulated, applying the concept
                      |                                    | of information hiding.
Communication         | Not required in the top-down       | Needs a specific amount of
                      | approach.                          | communication.
Redundancy            | May hold redundant information.    | Redundancy can be minimized.
Programming languages | Structured/procedural languages    | Object-oriented languages (e.g.,
                      | (e.g., C) follow the top-down      | C++, Java) follow the bottom-up
                      | approach.                          | approach.
Mainly used in        | Module documentation, test case    | Testing
                      | creation, code implementation      |
                      | and debugging.                     |

ANS.3 (A):- Insertion sort is a simple algorithm that follows the decrease-and-conquer
(incremental) approach. Insertion sort executes in O(n^2) time, but it's about twice as fast as
bubble sort and somewhat faster than selection sort in normal situations. It's also not too
complex, although it's slightly more complex than the bubble and selection sorts. It's often
used as the final stage of more sophisticated sorts, such as quicksort.

There are numerous approaches to sorting an unsorted array with insertion sort; sorting may
start from:

• The left

• The right

• Any desired place where the marker is placed.

Algorithm      | Worst-Case running time | Best-Case running time | Average-Case running time
---------------|-------------------------|------------------------|--------------------------
Selection Sort | Θ(n^2)                  | Θ(n^2)                 | Θ(n^2)
Insertion Sort | Θ(n^2)                  | Θ(n)                   | Θ(n^2)
Merge Sort     | Θ(n log n)              | Θ(n log n)             | Θ(n log n)
Quick Sort     | Θ(n^2)                  | Θ(n log n)             | Θ(n log n)

(B). The best-case input is an array that is already sorted. In this case insertion sort has a
linear running time (i.e., O(n)): in each cycle, the first remaining element of the input is
compared only with the right-most element of the sorted subsection of the array.
The worst-case input is an array sorted in reverse order. In this case every cycle of the inner
loop examines the entire sorted subsection of the array before inserting the next element, so
insertion sort has a quadratic running time (i.e., O(n^2)).

The average case is also quadratic, which makes insertion sort impractical for sorting large
arrays. However, insertion sort is among the fastest algorithms for sorting arrays having
fewer than about ten elements.
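A minimal Python sketch of insertion sort as described above, scanning from the left (the function name is illustrative):

```python
def insertion_sort(a):
    """In-place insertion sort. On sorted input the inner loop exits
    immediately (O(n)); on reverse-sorted input it scans the whole
    sorted subsection each time (O(n^2))."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift elements of the sorted subsection that are larger than key
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key        # insert key in its proper place
    return a
```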

SET-2

ANS.1 (A) :- Distribution counting is an input-enhancement method wherein a separate array
is used to store information generated during the sorting process, and this array speeds up
the sorting. Horspool's and Boyer-Moore algorithms are string-matching algorithms wherein
the pattern is compared with the text and the pattern is shifted by a precomputed shift size,
so the searching operation becomes faster.

Hashing is a technique that uses a hash key to find items. A collision occurs when the hash
key values of two items turn out to be the same. This is rectified by the two collision
resolution methods: separate chaining and open addressing. The high branching factor of the
B-tree speeds up access time. Thus, by using all the above techniques, we can reduce the
execution time of an algorithm.
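The distribution-counting idea mentioned above can be sketched as follows, assuming integer keys in a known range 0..k (a hypothetical helper, not taken from the text):

```python
def distribution_counting_sort(a, k):
    """Sort list a of integers in range 0..k using an auxiliary count
    array, per the distribution-counting (input-enhancement) idea."""
    count = [0] * (k + 1)
    for x in a:                  # count the frequency of each key
        count[x] += 1
    for i in range(1, k + 1):    # distribution: last position of each key
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):        # stable placement, scanning from the right
        count[x] -= 1
        out[count[x]] = x
    return out
```

Because the auxiliary array replaces comparisons with direct indexing, the running time is linear in len(a) + k.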

(B). : Given a knapsack with following:

M-capacity of the knapsack

n-number of objects

w- an array consisting of weights w1,w2…wn

p- an array consisting of profits p1,p2…pn

x- an array consisting of either 0 or 1, where 0 indicates that object i has not been selected
and 1 indicates that it has been selected.

The main goal is to place the objects into the knapsack so that maximum profit is

obtained and the weights of objects chosen should not exceed the capacity of knapsack.

This problem can be stated as: maximize the total profit p1x1 + p2x2 + ... + pnxn subject to
w1x1 + w2x2 + ... + wnxn <= M, with each xi in {0, 1}.

Design Methodology:

Case 1: If there are no objects (i.e., i = 0) or the capacity of the knapsack itself is 0 (i.e.,
j = 0), then the profit will be 0. The total profit is obtained as:

V[i, j] = 0 if (i = 0 or j = 0) ---- Eq 1

Case 2: Suppose we have a situation that weight of the ith object wi is greater than remaining
capacity j, in such case ith object cannot be selected, so the profit obtained is

V [ i, j ] = V [ i-1, j ] if wi > j -----------------------------------------------------Eq 2

Case 3: Suppose we have a situation where wi is less than remaining capacity j, in such case
we have two alternatives:

• We reject ith object, if so, profit obtained is V[ i,j ] = V [ i-1, j ]

• We select ith object, if so, profit obtained is : V [ i, j ]= V [ i-1, j-wi ] + Pi

Out of these situations, whichever yields the maximum profit must be considered, so we
select the maximum of the two. The profit obtained if wi <= j is therefore:

V[i, j] = max { V[i-1, j], V[i-1, j - wi] + pi } if wi <= j ---- Eq 3

Considering all the cases, the final recurrence relation for the knapsack problem can be
written as:

V[i, j] = 0                                       if i = 0 or j = 0
V[i, j] = V[i-1, j]                               if wi > j
V[i, j] = max { V[i-1, j], V[i-1, j - wi] + pi }  otherwise

Profit Table: The profit table V is filled row by row using this recurrence, and V[n, M]
gives the maximum profit.
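The recurrence above can be implemented directly. This is a sketch; the function name and the sample instance in the usage note are illustrative, not from the text.

```python
def knapsack(weights, profits, capacity):
    """0/1 knapsack by the recurrence above: v[i][j] is the best profit
    using the first i objects with remaining capacity j."""
    n = len(weights)
    # Case 1: V[0][j] = V[i][0] = 0
    v = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, capacity + 1):
            if weights[i - 1] > j:
                v[i][j] = v[i - 1][j]            # Case 2: object i doesn't fit
            else:
                v[i][j] = max(v[i - 1][j],       # Case 3: reject vs. select
                              v[i - 1][j - weights[i - 1]] + profits[i - 1])
    return v[n][capacity]
```

For instance, with weights (2, 1, 3, 2), profits (12, 10, 20, 15) and capacity 5, the maximum profit is 37 (objects 1, 2 and 4).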

ANS.2 Let a1, a2, ..., an be distinct elements given in ascending order and let p1, p2, ..., pn
be the probabilities of searching for them. Let C[i, j] be the smallest average number of
comparisons made in a binary search tree of elements ai, ..., aj, where i, j are integer indices,
1 <= i <= j <= n. We first find the values of C[i, j] for the sub-instances. To derive a
recurrence relation using dynamic programming, we choose a root ak for the keys ai, ..., aj.
For such a binary search tree, the root contains the key ak, the left sub-tree T(i, k-1)
contains the keys ai, ..., ak-1 optimally arranged, and the right sub-tree T(k+1, j) contains
the keys ak+1, ..., aj, also optimally arranged.

Here we are taking advantage of the Principle of Optimality. If we start counting tree levels
at 1, we can derive the following recurrence relation:

C[i, j] = min over i <= k <= j of { C[i, k-1] + C[k+1, j] } + (pi + ... + pj) ---- Eq 1

In the recurrence relation given by Eq 1, we assume that C[i, i-1] = 0 for 1 <= i <= n + 1,
which we interpret as the number of comparisons in the empty tree. The figure shows the
values required to compute the C[i, j] formula.

To compute C[i, j], we need the values in row i to the left of column j and in column j below
row i. The arrows point to the pairs of entries that are added up; the smallest sum is recorded
as the value of C[i, j]. We fill the table along its diagonals, starting with zeros for the empty
trees and the given probabilities pi, 1 <= i <= n, on the main diagonal, and moving toward
the upper right corner. This algorithm computes C[1, n], the average number of comparisons
for successful searches in the optimal binary search tree. We maintain another
two-dimensional table to record the value of k for which the minimum is achieved. It is of the
same shape, is filled in the same manner starting with the entries R[i, i] for 1 <= i <= n, and
is used to reconstruct the optimal solution.
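The diagonal-by-diagonal table filling described above can be sketched in Python. The function name and table variables are illustrative; keys are numbered 1..n as in the text, so the tables carry a little index padding.

```python
def optimal_bst_cost(p):
    """Return (C[1, n], root table R) for an optimal BST whose keys have
    search probabilities p[0..n-1], filling the tables diagonal by
    diagonal as described above. c[i][i-1] = 0 models the empty tree."""
    n = len(p)
    c = [[0.0] * (n + 2) for _ in range(n + 2)]
    r = [[0] * (n + 2) for _ in range(n + 2)]
    for i in range(1, n + 1):               # single-key trees on the main diagonal
        c[i][i] = p[i - 1]
        r[i][i] = i
    for d in range(1, n):                   # diagonal: j - i = d
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float("inf"), i
            for k in range(i, j + 1):       # try each key ak as the root
                cost = c[i][k - 1] + c[k + 1][j]
                if cost < best:
                    best, best_k = cost, k
            c[i][j] = best + sum(p[i - 1:j])  # add pi + ... + pj per Eq 1
            r[i][j] = best_k
    return c[1][n], r
```

For probabilities (0.1, 0.2, 0.4, 0.3) this gives C[1, 4] = 1.7 with key 3 as the overall root.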

ANS.3 (A):- Greedy Choice Property :- In greedy algorithms, a globally optimal solution is
arrived at by making a series of locally optimal (greedy) choices. At each step, the choice
that looks best in the current problem is selected, without considering results from
subproblems. The choices made by a greedy algorithm can depend on choices made in the
past, but they cannot depend on any future choices or on the solutions to subproblems. This
is where a greedy algorithm differs from dynamic programming. In dynamic programming,
choices are also made at each step, but each choice usually depends on the solutions to
subproblems. Dynamic programming solves problems in a bottom-up manner, that is, from
smaller subproblems to larger ones, whereas a greedy strategy usually progresses in a
top-down fashion, making one greedy choice after another and reducing each problem
instance to a smaller one.

(B) : A decision tree is a program in the form of a tree with branches, where each node
stands for a decision. First the root node is tested, and then control passes to one of its
subtrees, depending on the result of the test. This flow continues until a leaf node with the
element of interest is reached. A real-life example of a decision tree: assume income tax is
calculated for salaried citizens under the age of 60. First we check whether the citizen is
male or female. If the citizen is male, check whether the income is less than 2,00,000. If yes,
the citizen is not liable for tax. If not, check whether the income is less than 5,00,000. If
yes, the tax is 10%; otherwise check whether the income is less than 10,00,000. If that is
true, the tax is 20%, else it is 30%. If the citizen is female, first check whether the income is
less than 2,50,000 and then proceed with the same process as for a male citizen.
Some algorithms, like sorting and searching, need to compare their input elements. To study
such algorithms, we use decision trees.
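The income-tax decision tree above maps naturally onto nested conditionals. This is a sketch: the slab boundaries come from the example, while the function name and return convention are hypothetical.

```python
def tax_rate(gender, income):
    """Decision tree for salaried citizens under 60, per the example:
    each `if` is one internal node of the tree."""
    # Root decision: the tax-free threshold differs by gender
    threshold = 250000 if gender == "female" else 200000
    if income < threshold:
        return 0.0           # leaf: not liable for tax
    if income < 500000:
        return 0.10          # leaf: 10% slab
    if income < 1000000:
        return 0.20          # leaf: 20% slab
    return 0.30              # leaf: 30% slab
```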

The problem of sorting can be viewed as following.

Input: A sequence of n numbers <a1, a2, . . . , an>.

Output: A permutation (reordering) <a'1, a'2, . . . , a'n> of the input sequence such that a'1
<= a'2 <= ... <= a'n. A sorting algorithm is comparison based if it uses comparison operators
to find the order between two numbers. Comparison sorts can be viewed abstractly in terms
of decision trees. A decision tree is a full binary tree that represents the comparisons
between elements performed by a particular sorting algorithm operating on an input of a
given size. The execution of the sorting algorithm corresponds to tracing a path from the
root of the decision tree to a leaf. At each internal node, a comparison ai <= aj is made. The
left subtree then dictates subsequent comparisons for ai <= aj, and the right subtree dictates
subsequent comparisons for ai > aj.
When we come to a leaf, the sorting algorithm has proven the ordering. So we can say
following about the decision tree.

1) Each of the n! permutations on n elements must appear as one of the leaves of the decision
tree for the sorting algorithm to sort properly.

2) Let x be the maximum number of comparisons made by the sorting algorithm; the
maximum height of the decision tree is then x. A tree of height x has at most 2^x leaves.
Combining the above two facts, we get the following relation:

n! <= 2^x

Taking log2 on both sides:

log2(n!) <= x

Since log2(n!) = Θ(n log n), we can say

x = Ω(n log2 n)

Therefore, any comparison-based sorting algorithm must make Ω(n log2 n) comparisons in
the worst case to sort the input array, and heapsort and merge sort are asymptotically
optimal comparison sorts.
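The bound ceil(log2(n!)) can be evaluated numerically (a small sketch; the function name is illustrative):

```python
import math

def comparison_lower_bound(n):
    # Any comparison sort needs at least ceil(log2(n!)) comparisons in the
    # worst case: a decision tree of height x has at most 2^x leaves, and
    # it needs at least n! of them.
    return math.ceil(math.log2(math.factorial(n)))
```

For n = 5 this gives 7, while merge sort may use up to n*ceil(log2 n) - n + 1 = 8 comparisons, illustrating how close the standard algorithms come to the lower bound.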
