
ALGORITHM ANALYSIS
TIME AND SPACE COMPLEXITY
Short History
• It started around 1860, when Babbage invented a
mechanical computation device and calculated how
many times a crank needed to be turned to produce
the result.
• In 1947, Alan Turing measured the amount of work in a
computing process by counting up the number of times each
elementary operation is applied in a computing task.
• In 1960, Knuth came up with a scientific approach to
algorithm analysis.
• Knuth's approach had some drawbacks, which were
corrected by Aho, Hopcroft and Ullman in 1970.
• To the present day, Cormen, Leiserson, Rivest and Stein
are still working on algorithm analysis.
In Conclusion……
• Different methodologies have been used, from
empirical to scientific, but the methodology we are
using in this course is called ASYMPTOTIC analysis.
• Asymptotic Analysis or Asymptotic Algorithm
Analysis is a methodology that attempts to
measure the resource consumption of an algorithm
using mathematical techniques.
• Thus, asymptotic analysis measures the efficiency
(resource consumption, e.g. time and space) of an
implementation of an algorithm as the input size
grows.
Empirical versus Analytical
Drawbacks of the empirical approach (implement both and measure):
• wasted effort in coding and testing two algorithms
when you intend to keep only one
• one implementation may be coded better than the other
because of bias
• test cases might favor one algorithm over the other
• the so-called better algorithm may still not meet the
budgeted resources
Advantages of asymptotic analysis
• It tells whether an algorithm is worthy of implementation
at all.
• It helps the algorithm designer to know whether the
proposed solution will meet the resource
constraints for a problem before implementation.
Disadvantages
• Asymptotic Analysis is just an estimate; it is not
perfect. For example:
• Two sorting algorithms that take 1000n log n and
2n log n steps are asymptotically equal, because their
order of growth is n log n once the constants are
ignored.
Algorithm Growth Rate
• The growth rate is the output of your asymptotic
analysis
• It is the rate at which the cost (e.g. time) of an
algorithm grows as the input size increases
Examples
• cn ---> a linear growth rate,
• meaning that the running time grows in the same
proportion as the value of n grows
• when n is squared, this is called a quadratic
growth rate
• when n appears in the exponent, it is called an
exponential growth rate, e.g. 2 raised to the power
n
• n! (factorial) also counts as an exponential-type
growth rate; in fact it grows even faster than 2^n
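
These rates can be made concrete with a small C program (an
illustrative sketch of my own, not from the slides) that tabulates
each function for n = 1 to 10:

    #include <stdio.h>

    /* Tabulate common growth-rate functions for small n so their
       relative growth is visible. */
    int main(void) {
        double fact = 1.0;                  /* running value of n! */
        printf("%4s %8s %8s %10s %12s\n", "n", "2n", "n^2", "2^n", "n!");
        for (int n = 1; n <= 10; n++) {
            fact *= n;
            printf("%4d %8d %8d %10lu %12.0f\n",
                   n, 2 * n, n * n, 1ul << n, fact);
        }
        return 0;
    }

Already at n = 10, the table shows 2n = 20 while 2^n = 1024 and
n! = 3628800, which is why constants matter far less than the
growth rate itself.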
Calculating the running time of an
Algorithm
• Constant time or constant growth rate
• Linear time or linear growth rate
• Logarithmic time or logarithm growth rate
• Linear logarithmic time or linearithmic growth
rate
• Quadratic time or quadratic growth rate
Constant Growth Rate
• Example: An assignment operation or arithmetic
expression, invoking a method or function,
comparison in selection statement, variable
declaration
• a=b
• sum=0
• add=c+d…….O(1)
• sum(a,b)
• int a;
• Each of the above will take constant time O(1)
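
As a runnable C sketch (my own illustration of the same point), each
of these statements costs O(1) no matter how large the eventual
input is:

    #include <stdio.h>

    int sum(int a, int b) { return a + b; }  /* function invocation: O(1) */

    int main(void) {
        int b = 7, c = 3, d = 5;  /* variable declarations: O(1) each */
        int a = b;                /* assignment: O(1) */
        int add = c + d;          /* arithmetic expression: O(1) */
        if (add > a)              /* comparison in a selection statement: O(1) */
            printf("%d\n", sum(a, add));   /* prints 15 */
        return 0;
    }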
Linear Growth Rate
• This occurs for single for loops, e.g.
• for (i=0; i<n; i++)
      statement;
f(n) = n
The for loop executes n times, thus the running time is O(f(n)) = O(n)

Another example:
for (i=0; i<n; i+=2)
      statement;
If n = 10, then i = 0, 2, 4, 6, 8, making the loop body execute
5 times.
Because the value of i increases by 2 after each iteration, the
number of executions of the for loop is n/2.
Thus f(n) = n/2 and the running time is in O(f(n/2)) = O(f(n))
……..ignoring the constant 1/2
A practical application: age increasing with the years is linear
growth.
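
A short C sketch (my own, with an added count variable) that counts
the loop-body executions for both loops above, confirming n and n/2
iterations:

    #include <stdio.h>

    int main(void) {
        int n = 10, count = 0;

        for (int i = 0; i < n; i++)      /* executes n times: O(n) */
            count++;
        printf("first loop:  %d executions\n", count);   /* prints 10 */

        count = 0;
        for (int i = 0; i < n; i += 2)   /* executes n/2 times, still O(n) */
            count++;
        printf("second loop: %d executions\n", count);   /* prints 5 */
        return 0;
    }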
Notice
• In a linear growth rate, the loop update
either adds or subtracts
• In a logarithmic growth rate, the loop update
either multiplies or divides
Logarithm Growth Rate
• for(i=1; i<n; i*=2)
statement; if n=1000, it will be 1, 2, 4, 8,16,
32, 64, 128, 356, 712 (10 times)
Thus the execution of the for loop is 10 times which
is log 1000 base 2(the multiples you are taken in
each iteration for modification in this case 2).
f(n)=log n
The running time of the algorithm is in O(f(log n))
Example of logarithm growth rate is binary search
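
Since the slide names binary search as the standard example, here is
a minimal C sketch (an assumed implementation, not from the slides):
the search range halves in each iteration, so the loop runs at most
about log2(n) times.

    #include <stdio.h>

    /* Returns index of key in sorted array a[0..n-1], or -1 if absent.
       Each iteration halves the remaining range: O(log n). */
    int binary_search(const int a[], int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo+hi)/2 */
            if (a[mid] == key) return mid;
            if (a[mid] < key)  lo = mid + 1;
            else               hi = mid - 1;
        }
        return -1;
    }

    int main(void) {
        int a[] = {2, 3, 5, 7, 11, 13, 17, 19};
        printf("%d\n", binary_search(a, 8, 11));  /* prints 4 */
        return 0;
    }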
Growth rate for Nested Loops
• A nested loop is when you have a loop inside
another loop
• The different kinds are:
– linear logarithmic (linearithmic) loop
– quadratic loop
– dependent quadratic loop
• For nested loops, the
growth rate is computed as the product of the
growth rates of the individual loops
Linearithmic Growth rate
• This is illustrated using a nested for loop
• for(i=0; i<n; i++) //outer loop: linear, n times
      for(j=1; j<n; j*=2) //inner loop: log n times (for n=10: j = 1, 2, 4, 8, i.e. 4 ≈ log2 10 times)
            statement;
In this case, to find the running time, find the product of the
running times of the inner loop and the outer loop:
outer loop running time is n
inner loop running time is log n
product = n log n
Thus the running time of the nested for loop is in O(f(n log n))
Examples of this are divide-and-conquer algorithms such as merge sort.
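
A compact merge sort sketch in C (my own illustrative version, not
from the slides): the array is halved for log n levels and each level
does n merging work, giving the O(n log n) linearithmic bound.

    #include <stdio.h>
    #include <string.h>

    /* Merge sorted halves a[lo..mid] and a[mid+1..hi] using buffer tmp. */
    static void merge(int a[], int tmp[], int lo, int mid, int hi) {
        int i = lo, j = mid + 1, k = lo;
        while (i <= mid && j <= hi)
            tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i <= mid) tmp[k++] = a[i++];
        while (j <= hi)  tmp[k++] = a[j++];
        memcpy(&a[lo], &tmp[lo], (size_t)(hi - lo + 1) * sizeof(int));
    }

    /* Divide in half (log n levels), merge each level (n work). */
    static void merge_sort(int a[], int tmp[], int lo, int hi) {
        if (lo >= hi) return;
        int mid = lo + (hi - lo) / 2;
        merge_sort(a, tmp, lo, mid);
        merge_sort(a, tmp, mid + 1, hi);
        merge(a, tmp, lo, mid, hi);
    }

    int main(void) {
        int a[] = {5, 2, 9, 1, 7, 3}, tmp[6];
        merge_sort(a, tmp, 0, 5);
        for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* 1 2 3 5 7 9 */
        printf("\n");
        return 0;
    }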
Quadratic Growth Rate
• A quadratic loop is when the outer loop execution
count is the same as the inner loop execution count
• For example:
• for(i=0; i<n; i++) //outer loop: linear, O(f(n))
      for(j=0; j<n; j++) //inner loop: linear, O(f(n))
            statement;
Outer loop runs n times, inner loop runs n times, product = n*n = n^2
Thus the running time or growth rate of the algorithm is in O(f(n^2))
Examples: shortest path between two nodes, multiplication tables
involving two integers.
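
A small C sketch of the multiplication-table example from the slide
(the count variable is my own addition): two nested loops over n
values each, so the body runs n*n = n^2 times.

    #include <stdio.h>

    int main(void) {
        int n = 5, count = 0;
        for (int i = 1; i <= n; i++) {        /* outer loop: n times */
            for (int j = 1; j <= n; j++) {    /* inner loop: n times */
                printf("%3d ", i * j);        /* body runs n*n times */
                count++;
            }
            printf("\n");
        }
        printf("body executed %d times (n^2 = %d)\n", count, n * n);
        return 0;
    }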
Dependent Quadratic Growth Rate
• Here the number of iterations of the inner loop depends on the
outer loop, for example
• for(i=0; i<10; i++) //outer loop: n = 10 times
      for(j=0; j<=i; j++) //inner loop: i+1 times
            statement;
For the first iteration of the outer loop (i=0), the inner loop executes 1 time: j = 0
For the second iteration (i=1), the inner loop executes 2 times: j = 0, 1
For the third iteration (i=2), the inner loop executes 3 times: j = 0, 1, 2
For the fourth iteration, the inner loop executes 4 times, etc.
Thus the inner loop executes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 times,
whose summation is 55.
The average of the summation is 55/10 = 5.5
Dependent Quadratic Growth Rate
• 5.5 equals the iteration count of the outer loop (10)
plus 1, giving 11,
• then divided by 2: 11/2 = 5.5
• Expressed generally as (n+1)/2
• Thus the inner loop executes (n+1)/2 times on average
• The outer loop executes n times
• Their product is n(n+1)/2
• Thus the running time of the algorithm is in
O(f(n(n+1)/2)) = O(f(n^2)), ignoring constants and
lower-order terms
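
A short C sketch (my own) that counts the body executions of the
dependent loop and checks them against the n(n+1)/2 formula:

    #include <stdio.h>

    int main(void) {
        int n = 10, count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j <= i; j++)   /* inner bound depends on outer i */
                count++;
        /* total = 1 + 2 + ... + n = n(n+1)/2 */
        printf("executions: %d, n(n+1)/2 = %d\n",
               count, n * (n + 1) / 2);    /* prints 55, 55 */
        return 0;
    }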
Exponential
• An example is the Tower of Hanoi (or the naive
recursive solutions to problems that dynamic
programming is normally used to speed up)
• Growth of bacteria is a natural example
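
A minimal recursive Tower of Hanoi sketch in C (an assumed
implementation, not from the slides): moving n disks requires
2^n - 1 moves, hence the exponential growth rate.

    #include <stdio.h>

    /* Move n disks from peg 'from' to peg 'to' via peg 'via'.
       The recurrence T(n) = 2T(n-1) + 1 gives 2^n - 1 moves: O(2^n). */
    void hanoi(int n, char from, char to, char via) {
        if (n == 0) return;
        hanoi(n - 1, from, via, to);
        printf("move disk %d: %c -> %c\n", n, from, to);
        hanoi(n - 1, via, to, from);
    }

    int main(void) {
        hanoi(3, 'A', 'C', 'B');   /* prints 2^3 - 1 = 7 moves */
        return 0;
    }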
CLASS OF INPUT SIZE
• Best case
• Average case
• Worst case
Logic behind the class of input size
• There exist algorithms where different inputs result in
different running times, e.g. sequential search.
• For example, a sequential search algorithm may be given the
problem of finding a particular value K in an array (assuming
K appears only once).
• This implies that the algorithm will quit searching as soon
as it finds an element that matches K.
• Thus, K can be the first element, the middle element or the
last element. This implies that, for each position of K,
the search algorithm examines a different amount of the input.
• In the first case it examines 1 element, in the second about
half of the elements, and in the last all the elements of the array.
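
A sequential search sketch in C (my own illustrative version): the
number of comparisons depends on where K sits, which is exactly the
best/average/worst case distinction.

    #include <stdio.h>

    /* Returns index of key k in a[0..n-1], or -1; quits as soon as it
       finds a match. Best case: k is a[0] (1 comparison). Worst case:
       k is last or absent (n comparisons). Average: about n/2. */
    int sequential_search(const int a[], int n, int k) {
        for (int i = 0; i < n; i++)
            if (a[i] == k) return i;
        return -1;
    }

    int main(void) {
        int a[] = {4, 8, 15, 16, 23, 42};
        printf("%d\n", sequential_search(a, 6, 4));   /* best case: index 0 */
        printf("%d\n", sequential_search(a, 6, 42));  /* worst case: index 5 */
        return 0;
    }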
Best case
• The most favorable input, which uses the minimum running
time
• The best case is not always realistic (i.e. to have just one
such input), but there are examples, such as:
• the problem of finding the factorial of a number
• the problem of finding the largest value in a sorted array
using largest-value sequential search
• The best case is too optimistic to represent the true
behavior of an algorithm, i.e. it is rarely used in
practice
Worst case
• This is the most unfavourable input size
• It describes the behaviour of an algorithm in the
worst possible case of input instance.
Average case
• The average-case running time of an algorithm is an
estimate of the running time for an ‘average’ input.
• It specifies the expected behaviour of the algorithm
when the input is randomly drawn from a given
distribution.
• Computing the average case often assumes that all inputs
of a given size are equally likely.
Average case Input Size
• The AVERAGE CASE complexity of the algorithm is
the function defined by the average number of steps
taken on any instance of size ‘n’.
• The expected behavior when the input is randomly
drawn from a given distribution.
• The average-case running time of an algorithm is an
estimate of the running time for an "average" input.
• Computation of average-case running time entails
"knowing all possible input sequences, the probability
distribution of occurrence of these sequences, and the
running times for the individual sequences”.
• Often it is assumed that all inputs of a given size are
equally likely.
• It relies on probability theory

Worst case Input Size
The WORST CASE complexity of the algorithm is the
function defined by the maximum number of steps
taken on any instance of size ‘n’.
It is the behavior of the algorithm with respect to the worst
possible case of the input instance.
The worst-case running time of an algorithm is an
upper bound on the running time for any input.
Knowing it gives us a guarantee that the algorithm will never
take any longer (e.g. in a search, the worst case is when the
item does not occur in the data).
There is no need to make an educated guess about the
running time.
