Week2

Chapter 2 discusses algorithmic analysis, focusing on space and time complexity, and the methods of a priori and a posteriori analysis. It explains the concepts of algorithm complexity, including how to measure time and space factors, and introduces asymptotic notations such as Big-O, Omega, and Theta to evaluate algorithm efficiency. The chapter also outlines the best, average, and worst-case scenarios for algorithm performance.

Uploaded by amanthpanchal

CHAPTER 2

ALGORITHMIC ANALYSIS

● Algorithm Analysis – Space Complexity, Time Complexity.
● Run time analysis.
● Asymptotic notations – Big-O Notation, Omega Notation, Theta Notation.

The efficiency of an algorithm can be analyzed at two different stages: before implementation and after implementation. These are the following −

• A Priori Analysis − This is a theoretical analysis of an algorithm. Efficiency of an
algorithm is measured by assuming that all other factors, for example processor speed,
are constant and have no effect on the implementation.

• A Posteriori Analysis − This is an empirical analysis of an algorithm. The selected
algorithm is implemented in a programming language and then executed on a target
computer. In this analysis, actual statistics such as running time and space
required are collected.
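As a sketch of a posteriori analysis, the Python snippet below implements a simple algorithm and collects an actual statistic, its wall-clock running time. The names `linear_sum` and `measure` are illustrative, not from the chapter:

```python
import time

def linear_sum(n):
    """Sum 1..n with an explicit loop: the algorithm under test."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def measure(func, arg):
    """A posteriori analysis: execute the implementation and record
    an actual statistic (wall-clock running time)."""
    start = time.perf_counter()
    result = func(arg)
    elapsed = time.perf_counter() - start
    return result, elapsed

result, elapsed = measure(linear_sum, 100_000)
```

The same measurement on a different machine gives a different number, which is exactly why a posteriori results depend on the target computer, while a priori analysis does not.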

Algorithm Complexity

Suppose X is an algorithm and n is the size of its input data. The time and space used by
algorithm X are the two main factors that decide the efficiency of X.

• Time Factor − Time is measured by counting the number of key operations such as
comparisons in the sorting algorithm.

• Space Factor − Space is measured by counting the maximum memory space required
by the algorithm.

The complexity of an algorithm f(n) gives the running time and/or the storage space required
by the algorithm in terms of n as the size of input data.
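To illustrate the time factor, the hypothetical Python sketch below runs bubble sort while counting comparisons, the key operation in a sorting algorithm:

```python
def bubble_sort_counting(arr):
    """Bubble sort that also counts comparisons (the key operation)."""
    a = list(arr)          # work on a copy, leave the input untouched
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a, comparisons

# this version always makes n(n-1)/2 comparisons for n elements
sorted_a, comps = bubble_sort_counting([5, 1, 4, 2, 8])  # comps == 10
```

Counting the key operation, rather than timing the program, is what makes this a machine-independent measure of the time factor.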

GOVERNMENT POLYTECHNIC, KARWAR 1


Space Complexity

Space complexity of an algorithm represents the amount of memory space required by the
algorithm in its life cycle. The space required by an algorithm is equal to the sum of the
following two components −

• A fixed part, which is the space required to store data and variables that are
independent of the size of the problem. For example, simple variables and constants
used, program size, etc.

• A variable part, which is the space required by variables whose size depends on the
size of the problem. For example, dynamic memory allocation, recursion stack space, etc.

Space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and
SP(I) is the variable part of the algorithm, which depends on instance characteristic I. Following
is a simple example that illustrates the concept −

Algorithm: SUM(A, B)

Step 1 − START

Step 2 − C ← A + B + 10

Step 3 − Stop

Here we have three variables (A, B, and C) and one constant (10). Hence S(P) = 1 + 3. The
actual space further depends on the data types of the given variables and constants, and is
multiplied accordingly.
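The SUM algorithm above translates directly into Python:

```python
def SUM(A, B):
    # Step 2 of the algorithm: one constant (10) and three variables (A, B, C)
    C = A + B + 10
    return C
```

Whatever the value of A and B, the algorithm uses the same three variables and one constant, which is why its space requirement is a fixed part only.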

Time Complexity

Time complexity of an algorithm represents the amount of time required by the algorithm to run
to completion. Time requirements can be defined as a numerical function T(n), where T(n) can
be measured as the number of steps, provided each step consumes constant time.

For example, addition of two n-bit integers takes n steps. Consequently, the total
computational time is T(n) = c ∗ n, where c is the time taken for the addition of two bits. Here,
we observe that T(n) grows linearly as the input size increases.
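As a sketch of this idea, the hypothetical Python function below adds two n-bit numbers one bit position at a time and counts the steps, so that the count equals n and T(n) = c ∗ n:

```python
def add_n_bit(x_bits, y_bits):
    """Ripple-carry addition of two equal-length bit lists (least
    significant bit first). Returns the sum bits and the number of
    constant-time single-bit additions performed."""
    steps = 0
    carry = 0
    out = []
    for xb, yb in zip(x_bits, y_bits):
        steps += 1                 # one constant-time step per bit position
        s = xb + yb + carry
        out.append(s % 2)
        carry = s // 2
    if carry:
        out.append(carry)
    return out, steps

# adding two 3-bit numbers (5 = [1,0,1], 3 = [1,1,0]) takes exactly 3 steps
bits, steps = add_n_bit([1, 0, 1], [1, 1, 0])  # bits == [0, 0, 0, 1], i.e. 8
```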

The efficiency and accuracy of algorithms have to be analysed in order to compare them and to
choose a specific algorithm for a given scenario. The process of making this analysis is called
asymptotic analysis. It refers to computing the running time of any operation in mathematical
units of computation.

For example, the running time of one operation may be computed as f(n), while for another
operation it is computed as g(n²). This means the running time of the first operation will increase
linearly with n, while the running time of the second operation will increase quadratically as n
increases. Conversely, the running times of both operations will be nearly the same if n is
sufficiently small.
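A small numerical illustration of this contrast (hypothetical Python; f and g stand for the two operations' cost functions):

```python
def f(n):
    """Cost of the first operation: grows linearly with n."""
    return n

def g(n):
    """Cost of the second operation: grows quadratically with n."""
    return n * n

# for small n the two costs are close; for large n, g(n) dominates
for n in (2, 10, 1000):
    print(n, f(n), g(n))
```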

Usually, the time required by an algorithm falls under three types −

• Best Case − Minimum time required for program execution.

• Average Case − Average time required for program execution.

• Worst Case − Maximum time required for program execution.
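Linear search is a standard illustration of these three cases. The Python sketch below (illustrative, not from the chapter) counts comparisons so the best and worst cases can be observed directly:

```python
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, x in enumerate(arr):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

data = [7, 3, 9, 1, 5]
best = linear_search(data, 7)    # target is first: 1 comparison  -> (0, 1)
worst = linear_search(data, 2)   # target absent: 5 comparisons   -> (-1, 5)
```

The average case falls between the two: if the target is equally likely to be at any position, about (n + 1) / 2 comparisons are needed.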

Asymptotic Notations

The following asymptotic notations are commonly used to calculate the running time
complexity of an algorithm −

• Ο Notation

• Ω Notation

• θ Notation

Big Oh Notation, Ο

The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.
It measures the worst case time complexity or the longest amount of time an algorithm can
possibly take to complete.

For example, for a function f(n)

Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c.f(n) for all n > n0. }
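The definition can be checked numerically for concrete witnesses c and n0. The helper below is a hypothetical Python sketch; it tests a finite range only, so it provides evidence rather than proof:

```python
def is_big_o_witness(g, f, c, n0, upto=1000):
    """Check that g(n) <= c * f(n) for all n with n0 < n <= upto."""
    return all(g(n) <= c * f(n) for n in range(n0 + 1, upto + 1))

# g(n) = 3n + 10 is O(n), witnessed by c = 4 and n0 = 10,
# because 3n + 10 <= 4n whenever n >= 10
assert is_big_o_witness(lambda n: 3 * n + 10, lambda n: n, c=4, n0=10)
```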

Omega Notation, Ω

The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.
It measures the best case time complexity, or the minimum amount of time an algorithm can
possibly take to complete.

For example, for a function f(n)

Ω(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≥ c.f(n) for all n > n0. }
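An analogous numeric check works for the lower bound (again a hypothetical Python sketch over a finite range):

```python
def is_omega_witness(g, f, c, n0, upto=1000):
    """Check that g(n) >= c * f(n) for all n with n0 < n <= upto."""
    return all(g(n) >= c * f(n) for n in range(n0 + 1, upto + 1))

# g(n) = n^2 - 5n is Omega(n^2), witnessed by c = 0.5 and n0 = 10,
# because n^2 - 5n >= 0.5 * n^2 whenever n >= 10
assert is_omega_witness(lambda n: n * n - 5 * n, lambda n: n * n, c=0.5, n0=10)
```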

Theta Notation, θ

The notation θ(n) is the formal way to express both the lower bound and the upper bound of an
algorithm's running time. It is represented as follows −

θ(f(n)) = { g(n) : g(n) = Ο(f(n)) and g(n) = Ω(f(n)) for all n > n0. }
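Combining the two previous checks gives a numeric test for both bounds at once (hypothetical Python sketch, finite range only):

```python
def is_theta_witness(g, f, c1, c2, n0, upto=1000):
    """Check that c1 * f(n) <= g(n) <= c2 * f(n) for all n0 < n <= upto."""
    return all(c1 * f(n) <= g(n) <= c2 * f(n)
               for n in range(n0 + 1, upto + 1))

# g(n) = 2n^2 + 3n is Theta(n^2), witnessed by c1 = 2, c2 = 3, n0 = 3,
# because 2n^2 <= 2n^2 + 3n always, and 2n^2 + 3n <= 3n^2 whenever n >= 3
assert is_theta_witness(lambda n: 2 * n * n + 3 * n, lambda n: n * n,
                        c1=2, c2=3, n0=3)
```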

Common Asymptotic Notations

A list of some common asymptotic notations is mentioned below, with n the input size −

• Constant − Ο(1)
• Logarithmic − Ο(log n)
• Linear − Ο(n)
• n log n − Ο(n log n)
• Quadratic − Ο(n²)
• Cubic − Ο(n³)
• Polynomial − n^Ο(1)
• Exponential − 2^Ο(n)
