Lecture 2
Reasons to analyze algorithms
It enables us to:
Compare algorithms.
Understand their performance characteristics.
How to Measure Efficiency/Performance?
There are two approaches to measuring an algorithm's efficiency/performance:
Empirical
Implement the algorithms,
try them on different instances of input, and
use/plot actual clock time to pick one (see the timing sketch below).
Theoretical/Asymptotic Analysis
Mathematically determine the quantity of resources required by each algorithm.
Example: Empirical
[Plot: actual clock time vs. input size]
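A plot like the one above would typically be produced by timing runs such as the following minimal sketch (my illustration, not from the slides; the algorithm being timed and the input sizes are placeholders):

#include <chrono>
#include <iostream>

// Hypothetical algorithm under test; substitute the one being measured.
void algorithm_under_test(int n) {
    volatile long long sum = 0;   // volatile keeps the loop from being optimized away
    for (int i = 0; i < n; i++)
        sum += i;
}

int main() {
    for (int n = 1000; n <= 1000000; n *= 10) {
        auto start = std::chrono::steady_clock::now();
        algorithm_under_test(n);
        auto stop = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::milli> elapsed = stop - start;
        std::cout << "n = " << n << ": " << elapsed.count() << " ms\n";
    }
    return 0;
}

Plotting the resulting (n, time) pairs for each candidate algorithm gives the curves used to pick one empirically.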
Drawbacks of empirical methods
It is difficult to use actual clock time because it varies based on:
Specific processor speed
Current processor load
Specific data for a particular run of the program
Input size
Input properties
Programming language (C++, Java, Python, …)
The programmer (you, me, Bill Gates, …)
Operating environment/platform (PC, Sun workstation, smartphone, etc.)
Therefore, empirical measurement is quite machine dependent.
Machine independent analysis
Critical resources:
Time, space (disk, RAM), programmer's effort, ease of use (user's effort).
Factors affecting running time:
System dependent effects:
Hardware: CPU, memory, cache, …
Software: compiler, interpreter, garbage collector, …
System: operating system, network, other apps, …
System independent effects:
Algorithm.
Input data / problem size.
Machine independent analysis…
For most algorithms, running time depends on the "size" of the input.
Size is often the number of items processed; e.g., in sorting, the number of items to be sorted is the input size n.
Machine independent analysis
Efficiency of an algorithm is measured in terms of the number of basic operations it performs.
Not based on actual clock time.
We assume that every basic operation takes constant time (one arbitrary time unit).
Examples of Basic Operations:
Single Arithmetic Operation (Addition, Subtraction, Multiplication)
Assignment Operation
Single Input/Output Operation
Single Boolean Operation
Function Return
We do not distinguish between the basic operations.
Examples of non-basic operations are sorting and searching.
Examples: Count of Basic Operations T(n)
Sample Code
int count()
{
    int k = 0;
    int n;                        // n was undeclared in the original slide
    cout << "Enter an integer";
    cin >> n;
    for (int i = 0; i < n; i++)
        k = k + 1;
    return 0;
}
Examples: Count of Basic Operations T(n)
Sample Code                              Count of Basic Operations (Time Units)
int count()
{
  int k = 0;                             1 for the assignment statement: int k = 0
  cout << "Enter an integer";            1 for the output statement
  cin >> n;                              1 for the input statement
  for (int i = 0; i < n; i++)            In the for loop: 1 assignment, n+1 tests, and n increments
    k = k + 1;                           n loops of 2 units for an assignment and an addition
  return 0;                              1 for the return statement
}
T(n) = 1+1+1+(1+(n+1)+n)+2n+1 = 4n+6
Examples: Count of Basic Operations T(n)
int total(int n)
{
    int sum = 0;
    for (int i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}
Examples: Count of Basic Operations T(n)
Sample Code                              Count of Basic Operations (Time Units)
int total(int n)
{
  int sum = 0;                           1 for the assignment statement: int sum = 0
  for (int i = 1; i <= n; i++)           In the for loop: 1 assignment, n+1 tests, and n increments
    sum = sum + i;                       n loops of 2 units for an assignment and an addition
  return sum;                            1 for the return statement
}
T(n) = 1 + (1+(n+1)+n) + 2n + 1 = 4n+4
Examples: Count of Basic Operations T(n)
void func()
{
    int x = 0;
    int i = 0;
    int j = 1;
    int n;                        // n was undeclared in the original slide
    cout << "Enter an Integer value";
    cin >> n;
    while (i < n) {
        x++;
        i++;
    }
    while (j < n) {
        j++;
    }
}
Examples: Count of Basic Operations T(n)
Sample Code                              Count of Basic Operations (Time Units)
void func()
{
  int x = 0;                             1 for the first assignment statement: x = 0
  int i = 0;                             1 for the second assignment statement: i = 0
  int j = 1;                             1 for the third assignment statement: j = 1
  cout << "Enter an Integer value";      1 for the output statement
  cin >> n;                              1 for the input statement
  while (i < n) {                        In the first while loop: n+1 tests
    x++;                                 n loops of 2 units for the two increment (addition) operations
    i++;
  }
  while (j < n) {                        In the second while loop: n tests
    j++;                                 n-1 increments
  }
}
T(n) = 1+1+1+1+1+(n+1)+2n+n+(n-1) = 5n+5
Examples: Count of Basic Operations T(n)
Sample Code
int sum(int n)
{
    int partial_sum = 0;
    for (int i = 1; i <= n; i++)
        partial_sum = partial_sum + (i * i * i);
    return partial_sum;
}
Examples: Count of Basic Operations T(n)
Sample Code                              Count of Basic Operations (Time Units)
int sum(int n)
{
  int partial_sum = 0;                   1 for the assignment statement
  for (int i = 1; i <= n; i++)           In the for loop: 1 assignment, n+1 tests, and n increments
    partial_sum = partial_sum + (i*i*i); n loops of 4 units for an assignment, an addition, and two multiplications
  return partial_sum;                    1 for the return statement
}
T(n) = 1 + (1+(n+1)+n) + 4n + 1 = 6n+4
Simplified Rules to Compute Time Units (Formal Method)
Loops: the running time of a loop is at most the running time of the statements inside the loop multiplied by the number of iterations.
Nested loops: analyze from the inside out; the total running time is the running time of the statements multiplied by the product of the sizes of all the loops.
Consecutive statements: add the time units of each statement.
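A small sketch of the loop rules (my example, not from the slides): the constant-time body below executes n times for each of the n outer iterations, so the total time is proportional to n^2.

int sum = 0;
for (int i = 0; i < n; i++)         // outer loop: n iterations
    for (int j = 0; j < n; j++)     // inner loop: n iterations per outer pass
        sum = sum + 1;              // constant-time body, executed n*n times -> O(n^2)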
Simplified Rules to Compute Time Units
Conditionals:
if (test) s1 else s2: compute the maximum of the running times of s1 and s2.
if (test == 1) {
    for (int i = 1; i <= N; i++) {
        sum = sum + i;
    }
}
else {
    for (int i = 1; i <= N; i++) {
        for (int j = 1; j <= N; j++) {
            sum = sum + i + j;
        }
    }
}
Here the if branch takes O(N) time and the else branch takes O(N^2) time, so the conditional as a whole is counted as O(N^2).
Example: Computation of Run-time
Suppose we have hardware capable of executing 10^6 instructions per second. How long would it take to execute an algorithm whose complexity function is T(n) = 2n^2 on an input of size n = 10^8?
Example: Computation of Run-time
Solution: T(10^8) = 2 × (10^8)^2 = 2 × 10^16 instructions.
At 10^6 instructions per second, this takes 2 × 10^16 / 10^6 = 2 × 10^10 seconds, which is roughly 634 years.
Types of Algorithm complexity analysis
Best case.
Lower bound on cost.
Determined by “easiest” input.
Provides a goal for all inputs.
Worst case.
Upper bound on cost.
Determined by “most difficult” input.
Provides a guarantee for all inputs.
Average case. Expected cost for random input.
Need a model for “random” input.
Provides a way to predict performance.
Best, Worst and Average Cases
Not all inputs of a given size take the same time.
Sequential search for K in an array of n integers:
Begin at first element in array and look at each element in turn until
K is found.
Best Case: [Find at first position: 1 compare]
Worst Case: [Find at last position: n compares]
Average Case: [(n + 1)/2 compares]
While average time seems to be the fairest measure, it may be difficult
to determine.
Depends on distribution. Assumption for above analysis: Equally
likely at any position.
When is worst case time important?
For algorithms in time-critical systems.
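A sketch of the sequential search being analyzed (the slide gives no code; this assumes an int array and returns -1 when K is absent):

// Returns the index of K in arr[0..n-1], or -1 if K is not present.
int sequential_search(const int arr[], int n, int K) {
    for (int i = 0; i < n; i++)     // best case: K at arr[0], 1 compare
        if (arr[i] == K)            // worst case: K at arr[n-1] (or absent), n compares
            return i;
    return -1;
}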
Order of Growth and Asymptotic Analysis
Suppose an algorithm for processing a retail store’s inventory takes:
10,000 milliseconds to read the initial inventory from disk, and then
10 milliseconds to process each transaction (items acquired or sold).
Processing n transactions takes (10,000 + 10n) milliseconds.
Even though 10,000 >> 10, the 10n term will be more important if the number of transactions is very large; for n = 10,000,000 transactions, 10n = 100,000,000 ms, which dwarfs the fixed 10,000 ms.
We also know that these coefficients will change if we buy a faster
computer or disk drive, or use a different language or compiler.
We therefore want to ignore constant factors (which get smaller and smaller as technology improves).
In fact, we will not worry about the exact values, but will look at “broad
classes" of values.
Growth rates
The growth rate for an algorithm is the rate at which the cost of the
algorithm grows as the size of its input grows.
Rate of Growth
Consider the example of buying elephants and goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
since the cost of the goldfish is insignificant compared with the cost of the elephants.
Similarly, the low order terms in a function are relatively insignificant for
large n
n^4 + 100n^2 + 10n + 50 ~ n^4
i.e., we say that n^4 + 100n^2 + 10n + 50 and n^4 have the same rate of growth.
More examples: f_B(n) = n^2 + 1 ~ n^2
f_A(n) = 30n + 8 ~ n
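To see the dominance of the leading term numerically, here is a quick sketch (my illustration, not from the slides): the ratio of the full polynomial to n^4 approaches 1 as n grows.

#include <cmath>
#include <iostream>

int main() {
    // Ratio of the full polynomial to its leading term tends to 1 for large n.
    for (double n = 10; n <= 1e6; n *= 100) {
        double full = std::pow(n, 4) + 100 * n * n + 10 * n + 50;
        double lead = std::pow(n, 4);
        std::cout << "n = " << n << ", ratio = " << full / lead << "\n";
    }
    return 0;
}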
Visualizing Orders of Growth
On a graph, as you go to the right, a faster growing function eventually becomes larger...
[Graph: value of function vs. increasing n; f_B(n) = n^2 + 1 eventually exceeds f_A(n) = 30n + 8]
Asymptotic analysis
Refers to the study of an algorithm as the input size "gets big" or
reaches a limit.
To compare two algorithms with running times f(n) and g(n), we need a rough measure that characterizes how fast each function grows: its growth rate.
Ignore constants (especially when the input size is very large),
but constants may have an impact on small input sizes.
Several notations are used to describe the running-time equation for an
algorithm.
Big-Oh (O), Little-Oh (o)
Big-Omega (Ω), Little-Omega (ω)
Theta (Θ)
Big-Oh Notation
Definition
For f(n) a non-negatively valued function, f(n) is in the set O(g(n)) if there exist two positive constants c and n0 such that f(n) ≤ c·g(n) for all n > n0.
Usage: The algorithm is in O(n^2) in the [best, average, worst] case.
Meaning: For all data sets big enough (i.e., n > n0), the algorithm always executes in fewer than c·g(n) steps [in the best, average, or worst case].
Big-Oh Notation - Visually
[Figure: f(n) and c·g(n) plotted against n; beyond n0, f(n) stays below c·g(n)]
Big-O Visualization
O(g(n)) is the set of functions with a smaller or the same order of growth as g(n).
Big-O
Demonstrating that a function f(n) is in big-O of a function g(n) requires that we find specific constants c and n0 for which the inequality holds.
The following facts can be used for Big-Oh problems:
1 ≤ n for all n ≥ 1
n ≤ n^2 for all n ≥ 1
2^n ≤ n! for all n ≥ 4
log2(n) ≤ n for all n ≥ 2
n ≤ n·log2(n) for all n ≥ 2
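These inequalities can be spot-checked numerically; a quick sketch (my illustration, not part of the slides):

#include <cassert>
#include <cmath>

int main() {
    for (int n = 4; n <= 12; n++) {
        double fact = std::tgamma(n + 1.0);   // n! computed via the gamma function
        assert(std::pow(2.0, n) <= fact);     // 2^n <= n! for all n >= 4
        assert(std::log2(n) <= n);            // log2(n) <= n for all n >= 2
        assert(n <= n * std::log2(n));        // n <= n*log2(n) for all n >= 2
    }
    return 0;
}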
Examples
f(n) = 10n + 5 and g(n) = n. Show that f(n) is in O(g(n)).
To show that f(n) is O(g(n)) we must find constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
Take c = 15 and n0 = 1: 10n + 5 ≤ 10n + 5n = 15n for all n ≥ 1, so 10n + 5 is O(n).
1000n^2 + 1000n = O(n^2):
Take c = 2000 and n0 = 1: 1000n^2 + 1000n ≤ 1000n^2 + 1000n^2 = 2000n^2 for all n ≥ 1.
More Examples
Show that 30n + 8 is O(n).
We must show constants c and n0 such that 30n + 8 ≤ c·n for all n > n0.
Take c = 31 and n0 = 8: 30n + 8 ≤ 30n + n = 31n for all n ≥ 8.
No Uniqueness
There is no unique set of values for n0 and c in proving the asymptotic bounds.
For example, 30n + 8 ≤ 31n holds for all n ≥ 8, but 30n + 8 ≤ 38n also holds for all n ≥ 1; both choices prove 30n + 8 is O(n).
Common growth rates:
O(log n)    Logarithmic    Finding an item in a sorted array with a binary search or a search tree (best case)
O(n)        Linear         Finding an item in an unsorted list or a malformed tree (worst case); adding two n-digit numbers
O(n log n)  Linearithmic   Performing a fast Fourier transform; heap sort, quick sort (best case), or merge sort
O(n^2)      Quadratic      Multiplying two n-digit numbers by a simple algorithm; adding two n×n matrices; bubble sort (worst case or naive implementation), shell sort, quick sort (worst case), or insertion sort
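As one concrete instance from the table, a sketch of binary search (my illustration): the search range halves on every iteration, giving the O(log n) behavior listed above.

// Returns the index of key in sorted arr[0..n-1], or -1 if absent.
int binary_search(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;       // avoids overflow of (lo + hi) / 2
        if (arr[mid] == key) return mid;
        if (arr[mid] < key) lo = mid + 1;   // discard the lower half
        else hi = mid - 1;                  // discard the upper half
    }
    return -1;
}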
Some properties of Big-O
Constant factors may be ignored:
For all k > 0, kf is O(f).
The growth rate of a sum of terms is the growth rate of its fastest growing term:
e.g., an^3 + bn^2 is O(n^3), since an^3 + bn^2 ≤ (a + b)n^3 for all n ≥ 1 (a, b > 0).
The growth rate of a polynomial is given by the growth rate of its leading term:
If f is a polynomial of degree d, then f is O(n^d).
Implication of Big-Oh notation
We use Big-Oh notation to bound how slowly code might run as its input grows.
Suppose we know that our algorithm uses at most O(f(n)) basic steps for any input of size n, and n is sufficiently large. Then we know that our algorithm will terminate after executing at most a constant times f(n) basic steps.
We know that a basic step takes constant time on a given machine.
Hence, our algorithm will terminate in at most a constant times f(n) units of time, for all large n.
Other notations
Reading Assignments
End of Lecture 2