An algorithm is thus a sequence of computational steps that transform the input into the
output in a finite amount of time.
[Figure: a problem is solved by designing an algorithm; a computer runs the algorithm on the input to produce the output.]
An algorithm is any well-defined computational procedure that takes some value, or set of
values, as input and produces some value, or set of values, as output in a finite amount of
time.
Example:
Euclid’s algorithm for computing gcd(m, n)
Step 1: If n = 0, return the value of m as the answer and stop; otherwise,
proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
ALGORITHM Euclid(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
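The pseudocode above translates directly into a few lines of Python (a sketch; the function name is mine):

```python
def euclid_gcd(m, n):
    """Compute gcd(m, n) by Euclid's algorithm (m, n nonnegative, not both zero)."""
    while n != 0:
        m, n = n, m % n  # r <- m mod n; m <- n; n <- r, done in one step
    return m
```

Note that when n = 0 the loop body never runs and m is returned immediately, matching Step 1 of the informal description.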
1.2 What kinds of problems are solved by algorithms?
Practical applications of algorithms are ubiquitous and include the following examples:
-Sophisticated algorithms are required for storing the information about human DNA in
databases and developing tools for analyzing such data. Dynamic programming is an
important technique for solving several of these biological problems, particularly ones that
involve determining similarity between DNA sequences.
-The internet enables people all around the world to quickly access and retrieve large
amounts of information. With clever algorithms, sites on the internet are able to
manage and manipulate large volumes of data. Algorithms are required for finding good
routes on which the data travels and for enabling a search engine to quickly find pages
on which particular information resides.
-The core technologies used in electronic commerce include public-key cryptography and
digital signatures, which are based on numerical algorithms and number theory.
-Manufacturing and other commercial enterprises often need to allocate scarce
resources in the most beneficial way. Such problems can be solved by modeling them as
linear programs.
-Design algorithms to determine the shortest route from one intersection to another by
modeling the road map as a graph and finding the shortest path from one vertex to another.
-Image processing in the medical field and compression techniques: all of these
computations require efficient algorithms.
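As an illustration of the shortest-route problem mentioned above, here is a minimal sketch using Dijkstra's algorithm on a made-up road map (the graph, edge weights, and names are hypothetical, chosen only for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Return shortest distances from source in a weighted graph
    given as {vertex: [(neighbor, weight), ...]}."""
    dist = {source: 0}
    pq = [(0, source)]                 # priority queue of (distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                   # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd           # found a shorter route to v
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical road map: intersections A-D with travel times as edge weights
roads = {"A": [("B", 4), ("C", 2)],
         "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)],
         "D": []}
```

Calling `dijkstra(roads, "A")` gives the shortest travel time from intersection A to every other intersection reachable from it.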
We now list and briefly discuss a sequence of steps one typically goes through
in designing and analyzing an algorithm.
Design an Algorithm
Analyze an Algorithm
After correctness, by far the most important characteristic is efficiency. In fact, there are
two kinds of algorithm efficiency: time efficiency, indicating how fast the algorithm runs,
and space efficiency, indicating how much extra memory it uses. Another desirable
characteristic of an algorithm is simplicity: simpler algorithms are easier to understand and
easier to program; consequently, the resulting programs usually contain fewer bugs. Yet
another desirable characteristic of an algorithm is generality. As to the set of inputs, your
main concern should be designing an algorithm that can handle a set of inputs that is
natural for the problem at hand.
Coding an Algorithm
Most algorithms are destined to be ultimately implemented as computer programs.
Programming an algorithm presents both a peril and an opportunity. Of course,
implementing an algorithm correctly is necessary but not sufficient: you would not like to
diminish your algorithm’s power by an inefficient implementation.
There are two kinds of efficiency: time efficiency and space efficiency.
Time efficiency, also called time complexity, indicates how fast an algorithm in question
runs.
Space efficiency, also called space complexity, refers to the amount of memory units
required by the algorithm in addition to the space needed for its input and output.
Nowadays the amount of extra space required by an algorithm is typically not of as much
concern, with the caveat that there is still, of course, a difference between the fast main
memory, the slower secondary memory, and the cache. The time issue, however, has not
diminished to the same extent.
Order of Growth
A difference in running times on small inputs is not what really distinguishes efficient
algorithms from inefficient ones. For large values of n, it is the function’s order of growth
that counts.
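A quick numerical comparison (illustrative values only) makes the point: for large n, the value of the function is dominated entirely by its order of growth, not by its constants.

```python
import math

# Compare the growth of common functions; for large n only the order of growth matters.
for n in (10, 100, 10_000):
    print(f"n={n:>6}  log2 n={math.log2(n):6.1f}  "
          f"n log2 n={n * math.log2(n):12.0f}  n^2={n * n:>12}")
```

At n = 10 the columns are close together; at n = 10,000 they are separated by several orders of magnitude.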
O-notation
DEFINITION: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is
bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some
positive constant c and some nonnegative integer n0 such that
t(n) ≤ c·g(n) for all n ≥ n0.
Examples:
1. 100n + 5 ∈ O(n): 100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n.
Thus, as values of the constants c and n0 required by the definition, we can take 101 and
5, respectively.
Note that the definition gives us a lot of freedom in choosing specific values for the
constants c and n0.
For example, we could also reason that 100n + 5 ≤ 100n + 5n (for all n ≥ 1) = 105n to
complete the proof with c = 105 and n0 = 1.
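The two (c, n0) pairs above can be checked numerically over a range of n (a finite sanity check, not a proof):

```python
def t(n):
    # the function from the example above
    return 100 * n + 5

# c = 101, n0 = 5 satisfies t(n) <= c*n, and so does c = 105, n0 = 1
# (verified here for n up to 10^4)
assert all(t(n) <= 101 * n for n in range(5, 10_001))
assert all(t(n) <= 105 * n for n in range(1, 10_001))
```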
Ω-notation
DEFINITION: A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is
bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist
some positive constant c and some nonnegative integer n0 such that
t (n) ≥ cg(n) for all n ≥ n0.
(In sequential search, for example, the loop condition is written as i < n and A[i] ≠ K; the
second part, A[i] ≠ K, will not be checked if the first one, which checks that the array's
index does not exceed its upper bound, fails.)
Insertion sort
Insertion sort is an efficient algorithm for sorting a small number of elements. Insertion sort
works the way you might sort a hand of playing cards. You find the correct position for a
card by comparing it with each of the cards already in your left hand, starting at the right
and moving left.
The pseudocode for insertion sort is given as below,
Algorithm: INSERTION-SORT(A, n)
//Input: Array A[1..n] of elements to be sorted
//Output: The array A sorted in non-decreasing order
1  for i = 2 to n
2      key = A[i]
3      // Insert A[i] into the sorted subarray A[1:i-1]
4      j = i - 1
5      while j > 0 and A[j] > key
6          A[j+1] = A[j]
7          j = j - 1
8      A[j+1] = key
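The pseudocode can be rendered as a short runnable sketch in Python (0-based indexing, so the pseudocode's i = 2..n becomes i = 1..n-1 and the while test becomes j >= 0):

```python
def insertion_sort(a):
    """Sort list a in place into non-decreasing order by insertion sort."""
    for i in range(1, len(a)):          # pseudocode's i = 2..n
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:    # shift larger elements one step right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                  # drop key into its proper position
    return a
```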
Loop invariants and the correctness of insertion sort
Loop invariants help us understand why an algorithm is correct. When you are using a
loop invariant, you need to show three things:
Initialization: We start by showing that the loop invariant holds before the first loop
iteration, when i = 2. The subarray A[1:i-1] consists of just the single element A[1], which
is the original element in A[1]. Moreover, this subarray is sorted (a single-element
subarray is trivially sorted), which shows that the loop invariant holds prior to the first
iteration of the loop.
Maintenance: Next, we tackle the second property: showing that each iteration maintains
the loop invariant. The body of the for loop works by moving the values in A[i-1], A[i-2],
A[i-3], and so on by one position to the right until it finds the proper position for A[i]
(lines 4-7), at which point it inserts the value of A[i] (line 8).
The subarray A[1:i] then consists of the elements originally in A[1:i], but in sorted order.
Incrementing i (increasing its value by 1) for the next iteration of the for loop then
preserves the loop invariant.
Termination: Finally, we examine loop termination. The loop variable i starts at 2 and
increases by 1 in each iteration. Once i's value exceeds n in line 1, the loop terminates.
That is, the loop terminates once i equals n + 1. Substituting n + 1 for i in the wording of
the loop invariant yields that the subarray A[1:n] consists of the elements originally in
A[1:n], but in sorted order. Hence, the algorithm is correct.
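The invariant can also be checked mechanically: the sketch below is an instrumented variant I've added (not part of the original algorithm) that asserts sortedness of the prefix at the start of every iteration and asserts full sortedness at termination.

```python
def insertion_sort_checked(a):
    """Insertion sort that asserts the loop invariant as it runs."""
    for i in range(1, len(a)):
        # Invariant: at the start of each iteration, the prefix a[:i] is sorted
        assert a[:i] == sorted(a[:i])
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    assert a == sorted(a)  # termination: the whole array is sorted
    return a
```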
To analyze the INSERTION-SORT procedure, for each i = 2, 3, ..., n, let t_i denote the
number of times the while-loop test in line 5 is executed for that value of i. When a for or
while loop exits in the usual way (because the test in the loop header comes up FALSE),
the test is executed one more time than the loop body.
The running time of the algorithm is the sum of the running times of all statements
executed. A statement that takes c_k steps to execute and executes m times contributes
c_k · m to the total running time.
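The counts t_i can be observed directly by instrumenting the while loop. In the sketch below (counter names are mine), each t_i is the number of successful tests plus the one final failing test:

```python
def insertion_sort_counts(a):
    """Insertion sort instrumented to record t_i, the number of times
    the while-loop test runs for each i (the final failing test included)."""
    t = []
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        tests = 1                      # accounts for the test that finally fails
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
            tests += 1                 # one more successful test
        a[j + 1] = key
        t.append(tests)
    return t
```

On an already-sorted array every t_i is 1 (the best case); on a reverse-sorted array t_i grows with i (the worst case).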