
Efficiency of Algorithms

Algorithm
 Any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
 A sequence of computational steps that transform the input into the output.
 A tool for solving a well-specified computational problem.
Algorithm - Etymology
Al-Khwārizmī (Abu Abdullah Muhammad ibn Musa al-Khwarizmi),
Persian astronomer and mathematician, wrote a treatise in
825 AD, On Calculation with Hindu Numerals. It was
translated into Latin in the 12th century as Algoritmi de
numero Indorum (al-Daffa 1977), whose title was likely
intended to mean "Algoritmi on the numbers of the Indians",
where "Algoritmi" was the translator's rendition of the
author's name; but people misunderstanding the title
treated Algoritmi as a Latin plural and this led to the word
"algorithm" (Latin algorismus) coming to mean "calculation
method". The intrusive "th" is most likely due to a
false cognate with the Greek ἀριθμός (arithmos) meaning
"number".
Measuring the Efficiency of Algorithms
Consider two algorithms that perform the same task.
 What does it mean to compare the algorithms?
 How can we conclude that one is better, say, more efficient, than the other?
 What are the factors that affect the efficiency of an algorithm?
Measuring the Efficiency of Algorithms
 How are the algorithms coded? Does one algorithm run faster than another because of better programming?
 What computer should you use? The only fair way would be to use the same computer for both programs.
 What data should the programs use?
Analysis of Algorithms
The area of computer science that provides tools for contrasting the efficiency of different methods of solution.
Notice the use of the term methods of solution rather than programs; it is important to emphasize that the analysis concerns itself primarily with significant differences in efficiency – differences that you can usually obtain only through superior methods of solution and rarely through clever tricks in coding.
Primitive Operations
 Assigning a value to a variable
 Calling a method
 Performing an arithmetic operation
 Comparing two numbers
 Indexing into an array
 Following an object reference
 Returning from a method
Example: Counting primitive operations

    for (int i = 0; i < n; i++)
        a = i;

    init    comp    inc/dec    loop body
     ?       ?         ?           ?

T(n) = ?
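
One common accounting, assuming each primitive operation costs one time unit (this tally is a standard convention, not the only possible one): the initialization i = 0 runs once; the comparison i < n runs n + 1 times (once more than the body, to detect termination); the increment i++ runs n times; and the loop body a = i runs n times. So T(n) = 1 + (n + 1) + n + n = 3n + 2, which is O(n).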
Time Complexity
- How long does it take to find a solution?
Space Complexity
- How much memory does it need to perform the computation?

Note: The difference between space complexity and time complexity is that space can be reused.
Order of an Algorithm
Algorithm A is said to be of order f(N), which is denoted as O(f(N)); f(N) is called the algorithm's growth-rate function. Because the notation uses the capital letter O to denote order, it is called Big O notation.
Growth-rate functions

1
- A growth-rate function of 1 implies a problem whose time requirement is constant and, therefore, independent of the problem's size: an algorithm that runs in the same amount of time no matter what the input is. For instance, an algorithm that always returns the same value regardless of input could be considered an algorithm of efficiency O(1).
log2N
- The time requirement for a logarithmic algorithm increases slowly as the problem size increases. The binary search algorithm has this behaviour. Algorithms based on binary trees are often O(log2N): a perfectly balanced binary search tree has log2N levels, and searching for any element requires visiting a single node on each level.
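
A sketch of iterative binary search on a sorted array (class and method names invented for this example): each comparison halves the remaining search range, so at most about log2N comparisons are needed.

    public class BinarySearchDemo {
        // Returns the index of target in the sorted array a, or -1 if absent.
        static int binarySearch(int[] a, int target) {
            int low = 0, high = a.length - 1;
            while (low <= high) {
                int mid = low + (high - low) / 2; // avoids overflow of low + high
                if (a[mid] == target) return mid;
                if (a[mid] < target) low = mid + 1;  // discard the lower half
                else high = mid - 1;                 // discard the upper half
            }
            return -1;
        }

        public static void main(String[] args) {
            int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
            System.out.println(binarySearch(sorted, 23)); // 5
            System.out.println(binarySearch(sorted, 4));  // -1
        }
    }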
N
- The time requirement for a linear algorithm increases directly with the size of the problem. Algorithms of efficiency N typically make a single pass over the entire input.
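
For instance, finding the largest value in an unsorted array takes one pass, hence O(N) time (a minimal sketch; names invented here):

    public class LinearScan {
        // One visit per element: N - 1 comparisons for N elements.
        static int max(int[] a) {
            int best = a[0];
            for (int i = 1; i < a.length; i++) {
                if (a[i] > best) best = a[i];
            }
            return best;
        }

        public static void main(String[] args) {
            System.out.println(max(new int[]{3, 9, 1, 7})); // 9
        }
    }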

Nlog2N
- The time requirement increases more rapidly than for a linear algorithm. Such algorithms usually divide a problem into smaller problems that are each solved separately.
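
Merge sort is the textbook instance of this pattern. In the sketch below (names invented for illustration), the array is halved about log2N times, and each level of recursion does O(N) merging work, giving O(Nlog2N) overall.

    import java.util.Arrays;

    public class MergeSortDemo {
        // Sorts a[lo..hi) by recursively sorting each half and merging.
        static void mergeSort(int[] a, int lo, int hi) {
            if (hi - lo <= 1) return;      // 0 or 1 elements: already sorted
            int mid = (lo + hi) / 2;
            mergeSort(a, lo, mid);         // sort the left half
            mergeSort(a, mid, hi);         // sort the right half
            merge(a, lo, mid, hi);         // combine in O(hi - lo) time
        }

        // Merges the sorted runs a[lo..mid) and a[mid..hi).
        static void merge(int[] a, int lo, int mid, int hi) {
            int[] tmp = new int[hi - lo];
            int i = lo, j = mid, k = 0;
            while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i < mid) tmp[k++] = a[i++];
            while (j < hi) tmp[k++] = a[j++];
            System.arraycopy(tmp, 0, a, lo, tmp.length);
        }

        public static void main(String[] args) {
            int[] data = {5, 2, 9, 1, 6, 3};
            mergeSort(data, 0, data.length);
            System.out.println(Arrays.toString(data)); // [1, 2, 3, 5, 6, 9]
        }
    }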
N²
- The time requirement for a quadratic algorithm increases rapidly with the size of the problem. Algorithms that use two nested loops are often quadratic. This is still a fairly reasonable efficiency, within the polynomial-time range; the typical examples of this order come from sorting algorithms, such as selection sort.
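
A selection sort sketch (a standard formulation, written here for illustration): the two nested loops perform roughly N²/2 comparisons, which is why the running time grows as O(N²).

    import java.util.Arrays;

    public class SelectionSortDemo {
        static void selectionSort(int[] a) {
            for (int i = 0; i < a.length - 1; i++) {
                int min = i;
                for (int j = i + 1; j < a.length; j++) { // scan the unsorted tail
                    if (a[j] < a[min]) min = j;
                }
                int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // swap into place
            }
        }

        public static void main(String[] args) {
            int[] data = {64, 25, 12, 22, 11};
            selectionSort(data);
            System.out.println(Arrays.toString(data)); // [11, 12, 22, 25, 64]
        }
    }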
N³
- The time requirement for a cubic algorithm increases more rapidly with the size of the problem than the time requirement for a quadratic algorithm. Algorithms that use three nested loops are often cubic and are practical only for small problems.
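
A typical cubic example is naive multiplication of two N x N matrices (a minimal sketch; names invented here): three nested loops give N · N · N = N³ scalar multiplications.

    public class MatrixMultiply {
        static double[][] multiply(double[][] a, double[][] b) {
            int n = a.length;
            double[][] c = new double[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    for (int k = 0; k < n; k++)      // N * N * N iterations
                        c[i][j] += a[i][k] * b[k][j];
            return c;
        }

        public static void main(String[] args) {
            double[][] a = {{1, 2}, {3, 4}};
            double[][] b = {{5, 6}, {7, 8}};
            double[][] c = multiply(a, b);
            System.out.println(c[0][0] + " " + c[0][1]); // 19.0 22.0
            System.out.println(c[1][0] + " " + c[1][1]); // 43.0 50.0
        }
    }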
2^N
- As the size of a problem increases, the time requirement for an exponential algorithm usually increases too rapidly to be practical. The most important non-polynomial efficiency is this exponential time increase. Many important problems can be solved only by algorithms with this (or worse) efficiency.
One example is factoring large numbers expressed in binary. No polynomial-time algorithm is known; a naive approach is trial division, dividing the number by every smaller number until one divides it evenly. For every additional binary digit, this requires roughly twice as many tests.
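
A sketch of the naive trial-division idea described above (simplified for illustration; real factoring code would use arbitrary-precision arithmetic): the loop tests candidate divisors in order, and the number of candidates roughly doubles with each additional binary digit of n.

    public class TrialDivision {
        // Returns the smallest factor of n greater than 1 (n itself if prime).
        static long smallestFactor(long n) {
            for (long d = 2; d < n; d++) {   // up to n - 2 divisibility tests
                if (n % d == 0) return d;
            }
            return n;
        }

        public static void main(String[] args) {
            System.out.println(smallestFactor(91)); // 7
            System.out.println(smallestFactor(97)); // 97 (prime)
        }
    }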
Note:
If algorithm A requires time that is
proportional to function f and algorithm B
requires time that is proportional to a
slower-growing function g, it is apparent
that B will always be significantly more
efficient than A for large enough problems.
For large problems, the proportional growth
rate dominates all other factors in
determining an algorithm’s efficiency.
Big O notation
Properties of Big O notation help to simplify the analysis of an algorithm. O(f(N)) means "is of order f(N)" or "has order f(N)." O is not a function.
1. You can ignore low-order terms in an algorithm's growth-rate function.
2. You can ignore a multiplicative constant in the high-order term of an algorithm's growth-rate function.
3. O(f(N)) + O(g(N)) = O(f(N) + g(N)). You can combine growth-rate functions.
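
For instance (a standard worked example, with the coefficients chosen here for illustration): if an algorithm requires T(N) = 3N² + 5N + 7 operations, rule 1 drops the low-order terms 5N and 7, and rule 2 drops the multiplicative constant 3, leaving O(N²).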
O-notation
A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e. there exist some positive constant c and some non-negative integer n₀ such that:
t(n) ≤ c·g(n) for all n ≥ n₀.
Ω-notation
A function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n.
Θ-notation
A function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e. there must exist some positive constants c₁ and c₂ and some non-negative integer n₀ such that:
c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀.
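
For example (a standard illustration, with the constants chosen for this sketch): t(n) = ½n(n − 1) is in Θ(n²). The upper bound holds with c₁ = ½, since ½n(n − 1) ≤ ½n² for all n ≥ 0; the lower bound holds with c₂ = ¼, since ½n(n − 1) = ½n² − ½n ≥ ¼n² for all n ≥ 2. So one may take n₀ = 2.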
