
Unit 1: Introduction

1.1 What is an Algorithm?

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for


obtaining a required output for any legitimate input in a finite amount of time.

An algorithm is thus a sequence of computational steps that transform the input into the
output in a finite amount of time.

Basic notion of the algorithm (a problem is solved by an algorithm, which a computer
executes to transform the input into the output):

    Problem → Algorithm → Computer: Input → Output

An algorithm is any well-defined computational procedure that takes some value, or set of
values, as input and produces some value, or set of values, as output in a finite amount of
time.
Example:
Euclid’s algorithm for computing gcd(m, n)
Step 1: If n = 0, return the value of m as the answer and stop; otherwise,
proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

Alternatively, we can express the same algorithm in pseudocode:

ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
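The pseudocode translates directly into Python (the function name is ours):

```python
def euclid_gcd(m: int, n: int) -> int:
    """Compute gcd(m, n) by Euclid's algorithm.

    Assumes m and n are nonnegative integers, not both zero.
    """
    while n != 0:
        # r <- m mod n; m <- n; n <- r
        m, n = n, m % n
    return m

print(euclid_gcd(60, 24))  # 12
```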
1.2 What kinds of problems are solved by algorithms?

Practical applications of algorithms are ubiquitous and include the following examples:

- Sophisticated algorithms are required for storing information about human DNA in
databases and for developing tools to analyze such data. Dynamic programming is an
important technique for solving several of these biological problems, particularly ones that
involve determining the similarity between DNA sequences.
- The internet enables people all around the world to quickly access and retrieve large
amounts of information. Clever algorithms let internet sites manage and manipulate large
volumes of data. Algorithms are required for finding good routes on which the data travels
and for enabling a search engine to quickly find the pages on which particular information
resides.
- The core technologies used in electronic commerce include public-key cryptography and
digital signatures, which are based on numerical algorithms and number theory.
- Manufacturing and other commercial enterprises often need to allocate scarce
resources in the most beneficial way. Such problems can be solved by modeling them as
linear programs.
- Finding the shortest route from one intersection to another can be solved by modeling
the road map as a graph and finding the shortest path from one vertex to another.
- Image processing in the medical field and compression techniques likewise require
efficient algorithms.

1.3 Fundamentals of Algorithmic Problem Solving

We now list and briefly discuss a sequence of steps one typically goes through
in designing and analyzing an algorithm

Understanding the problem


The first thing you need to do before designing an algorithm is to understand the given
problem completely. Read the problem description carefully, do a few small examples by
hand, think about special cases, and ask questions again if needed.
An input to an algorithm specifies an instance of the problem the algorithm solves. It is
very important to specify exactly the set of instances the algorithm needs to handle.

Ascertaining the Capabilities of the Computational Device


Once you completely understand a problem, you need to ascertain the capabilities
of the computational device the algorithm is intended for. Algorithms designed for the
RAM (random-access machine) model of computation are sequential. For newer computers
that can execute operations concurrently, i.e., in parallel, we can design parallel
algorithms.

Choosing between Exact and Approximate problem solving


The next principal decision is to choose between solving the problem exactly or solving it
approximately. In the former case, an algorithm is called an exact algorithm; in the latter
case, an approximation algorithm.
We opt for an approximation algorithm because, first, there are important problems that
simply cannot be solved exactly for most of their instances; examples include extracting
square roots, solving nonlinear equations, and evaluating definite integrals.
Second, available algorithms for solving a problem exactly can be unacceptably slow
because of the problem's intrinsic complexity. Third, an approximation algorithm can be
part of a more sophisticated algorithm that solves a problem exactly.

The overall design and analysis process:

Understand the problem
    ↓
Decide on the computational means, exact or approximate algorithm, algorithm design
technique, etc.
    ↓
Design an algorithm
    ↓
Prove the correctness
    ↓
Analyze the algorithm
    ↓
Code the algorithm

Designing Algorithm and Data Structure


While the algorithm design techniques do provide a powerful set of general approaches to
algorithmic problem solving, designing an algorithm for a particular problem may still be a
challenging task. Some design techniques can be simply inapplicable to the problem in
question. Sometimes, several techniques need to be combined, and there are algorithms
that are hard to pinpoint as applications of the known design techniques. One should pay
close attention to choosing data structures appropriate for the operations performed by the
algorithm. Some of the algorithm design techniques depend intimately on structuring or
restructuring the data specifying a problem's instance.

Methods of Specifying an Algorithm


Once you have designed an algorithm, you need to specify it in some fashion. The two
options most widely used for specifying algorithms are natural language and pseudocode.
The inherent ambiguity of any natural language makes a succinct and clear description of
algorithms surprisingly difficult.
Pseudocode is a mixture of a natural language and programming-language-like constructs;
it is usually more precise than natural language.

Proving an Algorithm’s Correctness


Once an algorithm has been specified, you have to prove its correctness. That is, you
have to prove that the algorithm yields a required result for every legitimate input in a finite
amount of time. A common technique for proving correctness is to use mathematical
induction because an algorithm’s iterations provide a natural sequence of steps needed for
such proofs.

Analyzing an Algorithm
After correctness, by far the most important property is efficiency. In fact, there are two
kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs, and
space efficiency, indicating how much extra memory it uses. Another desirable
characteristic of an algorithm is simplicity: simpler algorithms are easier to understand and
easier to program, and consequently the resulting programs usually contain fewer bugs.
Yet another desirable characteristic is generality. As to the set of inputs, your main
concern should be designing an algorithm that can handle a set of inputs that is natural
for the problem at hand.

Coding an Algorithm
Most algorithms are destined to be ultimately implemented as computer programs.
Programming an algorithm presents both a peril and an opportunity. Of course,
implementing an algorithm correctly is necessary but not sufficient: you would not like to
diminish your algorithm’s power by an inefficient implementation.

1.4 The Analysis Framework

There are two kinds of efficiency: time efficiency and space efficiency.

Time efficiency, also called time complexity, indicates how fast an algorithm in question
runs.
Space efficiency, also called space complexity, refers to the amount of memory units
required by the algorithm in addition to the space needed for its input and output.

Nowadays, the amount of extra space required by an algorithm is typically not of as much
concern, with the caveat that there is still, of course, a difference between the fast main
memory, the slower secondary memory, and the cache. The concern for running time,
however, has not diminished to the same extent.

Measuring an Input’s size


Almost all algorithms run longer on larger inputs. For example, it takes longer to sort larger
arrays, multiply larger matrices, and so on. Therefore, it is logical to investigate an
algorithm’s efficiency as a function of some parameter n indicating the algorithm’s input
size. In most cases, selecting such a parameter is quite straightforward. For example, it
will be the size of the list for problems of sorting, searching, finding the list’s smallest
element, and most other problems dealing with lists. There are situations, of course, where
the choice of a parameter indicating an input size does matter. One such example is
computing the product of two n × n matrices. There are two natural measures of size for
this problem. The first and more frequently used is the matrix order n. This metric usually
gives a better idea about the efficiency of algorithms in question.

Units for measuring an algorithm's running time


We can simply use some standard unit of time measurement, a second or a millisecond
and so on, to measure the running time of a program implementing the algorithm. There
are obvious drawbacks to such an approach: the required time depends on the speed of a
particular computer, the quality of the program implementing the algorithm, and the
compiler used in generating the machine code.
So, as the measure of an algorithm's efficiency, we would like a metric that does not
depend on these extraneous factors. One possible approach is to identify the most
important operation of the algorithm, called the basic operation, and compute the number
of times the basic operation is executed. The basic operation is usually the most
time-consuming operation in the algorithm's innermost loop.

Order of Growth
A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones. For large values of n, it is the function’s order of growth
that counts.

Input size n | log2 n | n^2   | n^3   | 2^n        | n!
10           | 3.3    | 10^2  | 10^3  | 1024       | 3.6*10^6
100          | 6.6    | 10^4  | 10^6  | 1.3*10^30  | 9.3*10^157

An algorithm with a logarithmic basic-operation count runs practically instantaneously on
inputs of all realistic sizes. The polynomial functions (n, n^2, n^3) are acceptable for
moderately sized inputs. On the other end of the spectrum are the exponential function
2^n and the factorial function n!; both grow so fast that their values become astronomically
large even for rather small values of n.
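As an illustrative sketch, the entries in the table above can be recomputed with a few
lines of Python (the function name is ours):

```python
import math

def growth_values(n: int) -> dict:
    """Values of the common growth functions for input size n."""
    return {
        "log2(n)": math.log2(n),
        "n^2": n ** 2,
        "n^3": n ** 3,
        "2^n": 2 ** n,
        "n!": math.factorial(n),
    }

for n in (10, 100):
    v = growth_values(n)
    print(n, round(v["log2(n)"], 1), v["n^2"], v["n^3"], v["2^n"], v["n!"])
```

Running it shows how quickly 2^n and n! dwarf the polynomial functions even at n = 100.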

Worst-Case, Best-Case, and Average-Case Efficiencies


There are many algorithms for which running time depends not only on an input size but
also on the specifics of a particular input.
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the longest among all
possible inputs of that size.
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n,
which is an input (or inputs) of size n for which the algorithm runs the fastest among all
possible inputs of that size.
To analyze the algorithm's average-case efficiency, we must make some assumptions
about possible inputs of size n. Let's consider sequential search. The standard
assumptions are that (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1) and
(b) the probability of the first match occurring in the ith position of the list is the same for
every i.
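Under these assumptions with p = 1 (every search succeeds) and the match equally likely
at each of the n positions, the average number of key comparisons is (n + 1)/2. A short
Python sketch confirms this exactly (function names are illustrative):

```python
def comparisons_to_find(a, key):
    """Number of key comparisons sequential search makes to locate key in a."""
    for count, item in enumerate(a, start=1):
        if item == key:
            return count
    return len(a)  # unsuccessful search compares against every element

# Assumption (a) with p = 1 and assumption (b): average over all n positions.
n = 10
a = list(range(n))
exact_avg = sum(comparisons_to_find(a, k) for k in a) / n
print(exact_avg)  # 5.5, which equals (n + 1) / 2
```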
1.5 Asymptotic Notations and Basic Efficiency Classes

The efficiency analysis framework concentrates on the order of growth of an algorithm’s


basic operation count as the principal indicator of the algorithm’s efficiency. To compare
and rank such orders of growth, computer scientists use three notations: Ο(big oh), Ω(big
omega), and Θ(big theta).

O-notation
DEFINITION: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is
bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some
positive constant c and some nonnegative integer n0 such that

t(n) ≤ c·g(n) for all n ≥ n0.

The definition is illustrated in Figure 2.1

Examples:

1. 100n + 5

100n + 5 ∈ O(n^2).

Indeed, 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n^2.

Thus, as values of the constants c and n0 required by the definition, we can take 101 and
5, respectively.
Note that the definition gives us a lot of freedom in choosing specific values for the
constants c and n0.
For example, we could also reason that 100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n to
complete the proof with c = 105 and n0 = 1.
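Both choices of constants can be sanity-checked numerically over a finite range (a spot
check of the inequality, not a proof):

```python
def t(n):
    """The function being bounded: t(n) = 100n + 5."""
    return 100 * n + 5

# First choice of constants: c = 101, n0 = 5, against g(n) = n^2.
assert all(t(n) <= 101 * n * n for n in range(5, 10_000))

# Alternative constants: c = 105, n0 = 1, against g(n) = n.
assert all(t(n) <= 105 * n for n in range(1, 10_000))

print("both bounds hold on the sampled range")
```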

Ω-notation

DEFINITION: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is
bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist
some positive constant c and some nonnegative integer n0 such that

t(n) ≥ c·g(n) for all n ≥ n0.
In the pseudocode below, note that the second condition, A[i] ≠ K, will not be checked if
the first one, which checks that the array's index does not exceed its upper bound, fails.

ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K, or −1 if there are no
//matching elements
i ← 0
while i < n and A[i] ≠ K do
    i ← i + 1
if i < n return i
else return −1
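A direct Python translation of this pseudocode (0-based indexing; the function name is
ours):

```python
def sequential_search(a, key):
    """Return the index of the first element of a equal to key, or -1 if none matches."""
    i = 0
    # 'i < len(a)' is evaluated first, so a[i] is never read out of bounds
    while i < len(a) and a[i] != key:
        i += 1
    return i if i < len(a) else -1

A = [19, 15, 34, 73, 27, 12, 9, 21, 56, 45]
print(sequential_search(A, 27))  # 4
print(sequential_search(A, 30))  # -1
```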

Explanation with example: Let array A = {19, 15, 34, 73, 27, 12, 9, 21, 56, 45} and K = 27.

First initialize i = 0 and check the condition in the while loop, i < n; here n is the length
of the array, n = 10.
i=0, n=10; while (i<n and A[i] ≠ K) => (0<10 and 19 ≠ 27); both conditions are true, so
increment the index: i = 1
i=1: 1<10 and 15 ≠ 27; condition true, increment i to 2
i=2: 2<10 and 34 ≠ 27; condition true, increment i to 3
i=3: 3<10 and 73 ≠ 27; condition true, increment i to 4
i=4: 4<10 and 27 ≠ 27; condition false, 27 is matched, hence the loop breaks and
the index value i = 4 is returned. This is the successful case.

Let array A = {19, 15, 34, 73, 27, 12, 9, 21, 56, 45} and K = 30.

First initialize i = 0 and check the condition in the while loop, i < n; here n is the length
of the array, n = 10.
i=0, n=10; while (i<n and A[i] ≠ K) => (0<10 and 19 ≠ 30); both conditions are true, so
increment the index: i = 1
i=1: 1<10 and 15 ≠ 30; condition true, increment i to 2
i=2: 2<10 and 34 ≠ 30; condition true, increment i to 3
i=3: 3<10 and 73 ≠ 30; condition true, increment i to 4
i=4: 4<10 and 27 ≠ 30; condition true, increment i to 5
i=5: 5<10 and 12 ≠ 30; condition true, increment i to 6
i=6: 6<10 and 9 ≠ 30; condition true, increment i to 7
i=7: 7<10 and 21 ≠ 30; condition true, increment i to 8
i=8: 8<10 and 56 ≠ 30; condition true, increment i to 9
i=9: 9<10 and 45 ≠ 30; condition true, increment i to 10
Now i=10; 10<10 is false, so the loop exits and −1 is returned, as the key is not
present in the array.

Insertion sort

Insertion sort is an efficient algorithm for sorting a small number of elements. Insertion sort
works the way you might sort a hand of playing cards. You find the correct position for a
card by comparing it with each of the cards already in your left hand, starting at the right
and moving left.
The pseudocode for insertion sort is given as below,
ALGORITHM INSERTION-SORT(A, n)
//Input: An array A[1..n] of elements to be sorted
//Output: The array sorted in non-decreasing order
for i = 2 to n
    key = A[i]
    //Insert A[i] into the sorted subarray A[1..i−1]
    j = i − 1
    while j > 0 and A[j] > key
        A[j+1] = A[j]
        j = j − 1
    A[j+1] = key
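The same algorithm in Python (0-based lists, so the outer loop starts at index 1 rather
than 2; the function name is ours):

```python
def insertion_sort(a):
    """Sort list a in place in non-decreasing order; returns a for convenience."""
    for i in range(1, len(a)):        # corresponds to i = 2..n in the 1-based pseudocode
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements one position right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # insert key into its proper position
    return a

print(insertion_sort([20, 56, 32, 19, 46, 67, 23, 47]))
# [19, 20, 23, 32, 46, 47, 56, 67]
```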

Explanation with example: Let array A = {20, 56, 32, 19, 46, 67, 23, 47}.

Here, the index of the array starts from 1, A[1..n]; therefore the for loop starts from i = 2,
and the length of the array is n = 8.
i=2: key=A[2]=56, j=i−1=1
i=2, j=1: while (j>0 and A[1]>key) => 1>0 and 20>56; condition false, A[2]=key, no change
in the array. Go to the next value of i.
i=3: key=A[3]=32, j=i−1=2
i=3, j=2: while (j>0 and A[2]>key) => 2>0 and 56>32; condition true, A[3]=A[2] and
decrement j to 1; the array becomes A={20,56,56,19,46,67,23,47}
i=3, j=1: while (j>0 and A[1]>key) => 1>0 and 20>32; condition false, A[2]=key=32 and the
array becomes A={20,32,56,19,46,67,23,47}. Go to the next value of i.
i=4: key=A[4]=19, j=3
i=4, j=3: while (j>0 and A[3]>key) => 3>0 and 56>19; condition true, A[4]=A[3] and j=2;
the array becomes A={20,32,56,56,46,67,23,47}
i=4, j=2: while (j>0 and A[2]>key) => 2>0 and 32>19; condition true, A[3]=A[2] and j=1;
the array becomes A={20,32,32,56,46,67,23,47}
i=4, j=1: while (j>0 and A[1]>key) => 1>0 and 20>19; condition true, A[2]=A[1] and j=0;
the while loop exits as j=0, and the array becomes A={20,20,32,56,46,67,23,47}
A[1]=key=19. The key element is now in its proper place: A={19,20,32,56,46,67,23,47}
Similarly, for i=5, key=A[5]=46 is placed at its proper position and the array becomes
A={19,20,32,46,56,67,23,47}; then for i=6, key=67 is already in its proper position:
A={19,20,32,46,56,67,23,47}; then for i=7, key=A[7]=23, and after the iteration the array
becomes A={19,20,23,32,46,56,67,47}; for i=8, A[8]=47 is inserted at its proper position:
A={19,20,23,32,46,47,56,67}

Loop invariants and the correctness of insertion sort

Loop invariants help us understand why an algorithm is correct. When you are using a
loop invariant, you need to show three things:

Initialization: It is true prior to the first iteration of the loop.


Maintenance: If it is true before an iteration of the loop, it remains true before the next
iteration.
Termination: The loop terminates, and when it terminates, the invariant --usually along
with the reason that the loop terminated--gives us a useful property that helps show that
the algorithm is correct.

Let’s see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop
iteration, when i = 2. The subarray A[1:i−1] consists of just the single element A[1], which
is the original element in A[1]. Moreover, this subarray is sorted (a single element is
trivially sorted), which shows that the loop invariant holds prior to the first iteration of
the loop.
Maintenance: Next, we tackle the second property: showing that each iteration maintains
the loop invariant. The body of the for loop works by moving the values in A[i-1] , A [i-2],
A[i-3], and so on by one position to the right until it finds the proper position for A[i] (lines 4-
7), at which point it inserts the value of A[i] (line 8).
The subarray A[1:i] then consists of the elements originally in A[1:i] , but in sorted order.
Incrementing i (increasing its value by 1 ) for the next iteration of the for loop then
preserves the loop invariant.
Termination: Finally, we examine loop termination. The loop variable i starts at 2 and
increases by 1 in each iteration. Once i's value exceeds n in line 1, the loop terminates.
That is, the loop terminates once i equals n+1. Substituting n+1 for i in the wording of the
loop invariant yields that the subarray A[1:n] consists of the elements originally in A[1:n],
but in sorted order. Hence, the algorithm is correct.
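The invariant can also be checked mechanically by asserting it before each iteration. This
is an illustrative sketch (helper names are ours), not a substitute for the proof above:

```python
def is_sorted(prefix):
    """True if the sequence is in non-decreasing order."""
    return all(prefix[k] <= prefix[k + 1] for k in range(len(prefix) - 1))

def insertion_sort_checked(a):
    """Insertion sort that asserts the loop invariant before each iteration."""
    for i in range(1, len(a)):
        # Invariant: a[0:i], the 0-based analogue of A[1:i-1], is sorted
        assert is_sorted(a[:i])
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    # Termination: on exit the whole array is sorted
    assert is_sorted(a)
    return a

print(insertion_sort_checked([20, 56, 32, 19, 46, 67, 23, 47]))
# [19, 20, 23, 32, 46, 47, 56, 67]
```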

Analysis of INSERTION SORT

To analyze the INSERTION-SORT procedure, for each i = 2, 3, ..., n, let t_i denote the
number of times the while loop test in line 5 is executed for that value of i. When a for or
while loop exits in the usual way, because the test in the loop header comes up FALSE,
the test is executed one more time than the loop body.
The running time of the algorithm is the sum of the running times for each statement
executed. A statement that takes c_k steps to execute and executes m times contributes
c_k · m to the total running time.
