UNIT 1_DAA_NOTES_2023
Uploaded by Arun Kumar

UNIT 1

Algorithm: An Algorithm is a finite sequence of instructions, each of which has a clear meaning
and can be performed with a finite amount of effort in a finite length of time. No matter what the
input values may be, an algorithm terminates after executing a finite number of instructions.
In addition, every algorithm must satisfy the following criteria:
• Input: zero or more quantities are externally supplied;
• Output: at least one quantity is produced;
• Definiteness: each instruction must be clear and unambiguous;
• Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm terminates after a finite number of steps;
• Effectiveness: every instruction must be sufficiently basic that it can, in principle, be carried out by a person using only pencil and paper.

ALGORITHM                                   PROGRAM
Design                                      Implementation
Requires domain knowledge                   Written by a programmer
Can be expressed in any language            Written in a programming language
Independent of hardware and OS              Dependent on hardware and OS
A priori analysis (hardware independent)    A posteriori testing (hardware dependent)

An algorithm can be described in three ways:

• Natural language such as English: when this form is chosen, care should be taken to ensure that each and every statement is definite.
• Graphic representation, called a flowchart: this method works well when the algorithm is small and simple.
• Pseudo-code: in this method we describe the algorithm as program-like text resembling languages such as Pascal and Algol.

Areas of study of algorithms:

• How to devise or design an algorithm – includes the study of various design techniques and helps in writing algorithms using existing design techniques such as divide and conquer. For devising algorithms we use the following techniques:
1. Divide and Conquer
2. Branch and Bound
3. Greedy Technique
4. Backtracking
5. Dynamic Programming
• How to validate an algorithm – after the algorithm is written it is necessary to check its correctness, i.e. that for each input the correct output is produced; this is known as algorithm validation. The second phase is proving that the program implementing it is correct, known as program proving or program verification.
• How to analyze an algorithm – known as analysis of algorithms or performance analysis; refers to the task of calculating the time and space complexity of the algorithm.
• How to test a program – consists of two phases:
1. Debugging: the detection and correction of errors.
2. Profiling (performance measurement): measuring the actual amount of time required by the program to compute the result.

Advantages of Algorithms:

• An algorithm is easy to understand.
• An algorithm is a step-wise representation of a solution to a given problem.
• In an algorithm the problem is broken down into smaller pieces or steps, so it is easier for the programmer to convert it into an actual program.

Disadvantages of Algorithms:

• Writing an algorithm takes a long time, so it is time-consuming.
• Branching and looping statements are difficult to show in algorithms.
Analyzing Algorithms

As we know, an algorithm is a finite, step-by-step set of meaningful instructions used to solve a particular problem. For every process there is an algorithm that tells us how to proceed with the problem and solve it in a number of steps.
The analysis of an algorithm is a technique that measures the performance of the algorithm. The factors on which algorithms chiefly depend are their space and time complexities, though other factors are also used in the analysis of algorithms.
Importance of Analysis

• To predict the behavior of an algorithm without implementing it on a specific computer.
• It is much more convenient to have simple measures for the efficiency of an algorithm than to implement the algorithm and test the efficiency every time a certain parameter in the underlying computer system changes.
• It is impossible to predict the exact behavior of an algorithm, since there are too many influencing factors; the analysis is thus only an approximation, not a perfect measure.
• More importantly, by analyzing different algorithms, we can compare them to determine the best one for our purpose.

Performance Analysis: The performance of a program is the amount of computer memory and
time needed to run a program. We use two approaches to determine the performance of a
program. One is analytical, and the other experimental. In performance analysis we use
analytical methods, while in performance measurement we conduct experiments.
We can analyze the performance of an algorithm by means of:

i) Time Complexity, ii) Space Complexity

Time Complexity: Time complexity is the amount of time taken by an algorithm to run, as a
function of the length of the input. It measures the time taken to execute each statement of code
in an algorithm.

Space Complexity: When an algorithm runs on a computer, it requires a certain amount of memory. The amount of memory a program uses during execution is its space complexity. Because a program needs memory to store both the input data and temporary values while running, space complexity is the sum of the input space and the auxiliary space.

The algorithms are generally of two types (i) Iterative, and (ii) Recursive in nature

To find the complexity of an algorithm we use one of the following:

• Frequency Count Method
• Substitution Method
• Recursive Tree Method
• Master's Theorem

Frequency Count Method:

In this method we count how many times each statement of the algorithm executes, as a function of the input size, and add up the counts. If no iteration or recursion exists, the algorithm takes constant time, O(1).

Example 1
Example 2

Example 3

Example 4

Example 5
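As an illustration of the frequency count method, here is a minimal sketch applied to a simple array-sum loop (the function name and the specific loop are my own illustration, not taken from the examples above):

```python
def array_sum(a):
    total = 0           # executes 1 time
    for x in a:         # loop test executes n + 1 times
        total += x      # loop body executes n times
    return total        # executes 1 time

# total frequency count = (n + 1) + n + 2 = 2n + 3, so the running time is O(n)
print(array_sum([1, 2, 3, 4]))
```

Dropping the constant factors and lower-order terms of 2n + 3 gives the O(n) bound, exactly as asymptotic notation prescribes later in this unit.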

Substitution Method: This method is used to find the complexity of a recursive function. Expand the recurrence for a few steps (typically three), spot the pattern, and then back-substitute to obtain the closed-form answer.

Example 1

Example 2

Example 3

Example 4

Example 5

Example 6
Example 7
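As a sketch of the back-substitution idea (the recurrence below is my own illustrative choice, not necessarily one of the examples above): for T(n) = T(n-1) + n with T(1) = 1, expanding three steps gives T(n) = T(n-3) + (n-2) + (n-1) + n; continuing the pattern yields T(n) = 1 + 2 + ... + n = n(n+1)/2, i.e. O(n^2). A quick numeric check:

```python
def T(n):
    # recurrence T(n) = T(n-1) + n, with base case T(1) = 1
    return 1 if n == 1 else T(n - 1) + n

# the closed form obtained by back-substitution: n(n+1)/2
assert all(T(n) == n * (n + 1) // 2 for n in range(1, 50))
print("closed form T(n) = n(n+1)/2 verified for n = 1..49")
```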

RECURSIVE TREE METHOD:

This method is a pictorial representation of the iteration method in the form of a tree. We take the non-recursive term of the recurrence as the root, and at each level the recursive terms are expanded into children. It is especially useful when a divide-and-conquer algorithm is being analyzed. This method is typically used when the right-hand side of the recurrence relation has at least two recursive terms and one non-recursive term.

In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive invocations. We sum the costs within each level of the tree to obtain a per-level cost, and then we sum all the per-level costs to determine the total cost over all levels of the recursion.

Example 1

Example 2

Example 3

Example 4
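For a classic divide-and-conquer recurrence such as T(n) = 2T(n/2) + n with T(1) = 1 (my own illustrative choice), the recursion tree has cost n at each of its log2(n) internal levels plus n leaves of cost 1, for a total of n·log2(n) + n. A numeric check for powers of two:

```python
def T(n):
    # recurrence T(n) = 2 T(n/2) + n, with base case T(1) = 1
    return 1 if n <= 1 else 2 * T(n // 2) + n

# per-level cost n over k = log2(n) levels, plus n leaves of cost 1
for k in range(0, 12):
    n = 2 ** k
    assert T(n) == n * k + n
print("recursion tree total verified: T(n) = n log2(n) + n for powers of two")
```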

Master’s Theorem:

Rules of Master’s Theorem and Example 1 (case 3a)

Example 2 (Case 1)
Example 3 (case 3a)

Example 4 (Case 2c)

Example 5 (Case 1)

Example
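For recurrences of the common divide-and-conquer form T(n) = a·T(n/b) + Θ(n^k) with a ≥ 1 and b > 1, the theorem compares k against the critical exponent log_b(a). A small classifier sketch under that assumption (the function name and output strings are my own illustration; this simple polynomial form omits the log-factor sub-cases such as 2c above):

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k) by the simple Master Theorem."""
    c = math.log(a, b)                  # critical exponent log_b(a)
    if math.isclose(c, k):
        return f"Theta(n^{k} log n)"    # case 2: every level costs the same
    if c > k:
        return f"Theta(n^{c:g})"        # case 1: the leaves dominate
    return f"Theta(n^{k})"              # case 3: the root dominates

print(master_case(2, 2, 1))  # merge sort: T(n) = 2T(n/2) + n
print(master_case(8, 2, 2))  # T(n) = 8T(n/2) + n^2
print(master_case(2, 2, 2))  # T(n) = 2T(n/2) + n^2
```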

Asymptotic Notations
Asymptotic notation provides the basic vocabulary for discussing the design and analysis of
algorithms. Asymptotic means a line that tends to converge to a curve, which may or may not
eventually touch the curve. It is a line that stays within the bounds. It is a quick way to talk about
the fastest possible or slowest possible running time of an algorithm. It describes the rate of
growth of functions.

• They give a simple characterization of an algorithm's efficiency.
• They help in comparing the performance of various algorithms.

Asymptotic notation is coarse enough to suppress all the details you want to ignore, details that depend on:

• the choice of architecture,
• the choice of programming language,
• the choice of compiler.

It is useful for comparing different high-level algorithmic approaches to solving a problem, especially on larger inputs.

Asymptotic notation suppresses constant factors (system dependent) and lower-order terms (irrelevant for large inputs).

There are five asymptotic notations:

Big Oh (O) – asymptotic upper bound
Big Omega (Ω) – asymptotic lower bound
Big Theta (Θ) – asymptotic tight bound
Little-oh (o) – strict upper bound
Little-omega (ω) – strict lower bound

They are a way to compare "sizes" of functions:
O means ≤
Ω means ≥
Θ means =
o means <
ω means >

1) Big Oh (O)
The Big-O notation describes the worst-case running time of a program. We compute the Big-O of an algorithm by counting how many iterations the algorithm takes in the worst-case scenario for an input of size n. We typically consult the Big-O because we must always plan for the maximum time an algorithm can take. For example, O(log n) is the Big-O of binary search. Formally, f(n) = O(g(n)) (read as "f of n is Big-O of g of n") iff there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.

Example 1

Example 2

Example 3

NOTE : The leading term of the polynomial determines its growth.
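To see the note above numerically: for f(n) = 3n^2 + 5n + 7, the witnesses c = 4 and n0 = 7 (my own choice of constants) satisfy the Big-O definition, so f(n) = O(n^2):

```python
def f(n):
    return 3 * n * n + 5 * n + 7   # the leading term 3n^2 dominates

# Big-O definition: f(n) <= c * g(n) for all n >= n0; here c = 4, g(n) = n^2, n0 = 7
assert all(f(n) <= 4 * n * n for n in range(7, 1000))
# the bound fails below n0 (n = 6 is a counterexample), which is why n0 is needed
assert f(6) > 4 * 6 * 6
print("f(n) = O(n^2) verified with c = 4, n0 = 7")
```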

2) Big Omega (Ω)

Big-Ω describes the best-case running time of a program. We compute the Big-Ω by counting how many iterations an algorithm takes in the best-case scenario for an input of size n. For example, bubble sort (with an early-exit check) has a running time of Ω(n), because in the best-case scenario the list is already sorted and the sort terminates after the first pass.
The function f(n) = Ω(g(n)) (read as "f of n is Omega of g of n") iff there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0. Here g(n) is a lower bound on f(n).
3) Theta notation (Θ)
The function f(n) = Θ(g(n)) (read as "f of n is Theta of g of n") iff there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.

Example: 3n+2 = Θ(n), since
3n+2 ≥ 3n for all n ≥ 2, and
3n+2 ≤ 4n for all n ≥ 2.
Here c1 = 3, c2 = 4 and n0 = 2.
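The constants in the example can be checked directly; a trivial numeric sketch:

```python
# Theta witnesses for 3n + 2 = Theta(n): c1 = 3, c2 = 4, n0 = 2
assert all(3 * n <= 3 * n + 2 <= 4 * n for n in range(2, 1000))
print("3n <= 3n + 2 <= 4n holds for all n >= 2 (checked up to 999)")
```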
CLASSES OF FUNCTIONS
O(1)        Constant (independent of the input size)
O(log n)    Logarithmic
O(n)        Linear
O(n^2)      Quadratic
O(n^3)      Cubic
O(2^n)      Exponential
GROWTH OF FUNCTIONS

SORTING ALGORITHMS
INSERTION SORT: In insertion sort, each item is inserted into its proper place in the final list. The simplest implementation requires two list structures: the source list and the list into which the sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already-sorted items, repeatedly swapping it with the preceding item until it is in place.

Algorithm and Analysis
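The in-place scheme described above can be sketched as follows (the function name is my own):

```python
def insertion_sort(a):
    """Sort list a in place by inserting each item into the sorted prefix."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift already-sorted items one slot right until key's position is found
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))
```

The nested shifting loop gives the O(n^2) worst case; on an already-sorted input the inner loop never runs, giving the Ω(n) best case.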


Quick sort
Quick sort works by partitioning a given array A[p..r] into two non-empty sub-arrays A[p..q] and A[q+1..r], such that every element of A[p..q] is at most every element of A[q+1..r]. These two sub-arrays are then sorted by recursive calls to quick sort. The exact partition depends on the given array, and the index q is computed as part of the partitioning procedure.
Example and Analysis of Quick Sort
One pass of partitioning is depicted above; the process continues recursively until a fully sorted array is obtained.
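A runnable sketch of the scheme described above, using the Lomuto partition for simplicity (note this is a variation: the description above splits into A[p..q] and A[q+1..r], whereas Lomuto places the pivot itself at index q, between A[p..q-1] and A[q+1..r]):

```python
def partition(a, p, r):
    """Partition a[p..r] around the pivot a[r]; return the pivot's final index."""
    pivot = a[r]
    i = p - 1
    for j in range(p, r):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[r] = a[r], a[i + 1]
    return i + 1

def quick_sort(a, p=0, r=None):
    if r is None:
        r = len(a) - 1
    if p < r:
        q = partition(a, p, r)   # q is computed as part of partitioning
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)
    return a

print(quick_sort([3, 8, 2, 5, 1, 4, 7, 6]))
```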

Advantages of Quick Sort

The advantages of the quick sort algorithm are:
• Quick sort is an in-place sort, so it requires no significant temporary memory (only the recursion stack).
• Quick sort is typically faster in practice than other comparison sorts.
• Quick sort tends to make excellent use of the memory hierarchy, such as virtual memory and caches.
• Quick sort can be easily parallelized due to its divide-and-conquer nature.
MERGE SORT
COUNTING SORT
RADIX SORT

HEAP SORT
Binary Heap: A binary heap is an array object that can be viewed as a complete binary tree. Each node of the binary tree corresponds to an element of the array.
Length[A]: the number of elements in array A.
Heap-Size[A]: the number of elements of the heap stored within array A.
The root of the tree is A[1]. Given the index i of a node, the indices of its parent, left child, and right child can be computed as:

PARENT(i): return floor(i/2)
LEFT(i):   return 2i
RIGHT(i):  return 2i + 1

For example, starting at the root (index 1): to find the index of its left child we calculate 1*2 = 2, which takes us (correctly) to the 14. To go right from there, we calculate 2*2+1 = 5, which takes us (again, correctly) to the 6. The node 4 has index 7; to go to its parent we calculate floor(7/2) = 3, which takes us to the 17.

Heap Property: A binary heap can be classified as a max-heap or a min-heap.
1. Max-heap: In a max-heap, for every node i other than the root, the value of the node is at most the value of its parent: A[PARENT(i)] ≥ A[i]. Thus the largest element in a max-heap is stored at the root. Following is an example of a max-heap.

2. Min-heap: In a min-heap, for every node i other than the root, the value of the node is at least the value of its parent: A[PARENT(i)] ≤ A[i]. Thus the smallest element is stored at the root.

HEAPIFY
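A sketch of MAX-HEAPIFY using 0-based array indices (the notes above use 1-based indices, so LEFT and RIGHT become 2i+1 and 2i+2 here; the function names are my own):

```python
def max_heapify(a, i, n):
    """Restore the max-heap property at index i, assuming both subtrees are heaps."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, n)      # fix the subtree the swap disturbed

def build_max_heap(a):
    # leaves are already heaps, so start at the last internal node and work up
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))
    return a

print(build_max_heap([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]))
```

Each MAX-HEAPIFY call costs O(log n), and BUILD-MAX-HEAP over all internal nodes can be shown to cost O(n) overall.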
