Unit 2(ADS)

The document provides an overview of various search and sorting algorithms, including Linear Search, Binary Search, Bubble Sort, Selection Sort, Insertion Sort, Quick Sort, and Merge Sort. Each algorithm is explained with its respective algorithm steps and pseudocode, highlighting their operational principles and efficiency. The document emphasizes the importance of data organization, such as the need for sorted data in Binary Search and the divide-and-conquer approach in Quick and Merge Sort.


Unit 2 (ADS)

Linear Search
Linear search is a very simple search algorithm. In this type of search, a sequential search is made over
all items one by one. Every item is checked and if a match is found then that particular item is returned,
otherwise the search continues till the end of the data collection.

Algorithm

Linear Search ( Array A, Value x)

Step 1: Set i to 1
Step 2: if i > n then go to step 7
Step 3: if A[i] = x then go to step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print Element x Found at index i and go to step 8
Step 7: Print element not found
Step 8: Exit

Pseudocode

procedure linear_search (list, value)

   for each item in the list
      if match item == value
         return the item's location
      end if
   end for

end procedure
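The same procedure can be written as a short runnable Python sketch (the function name and the convention of returning -1 when the value is absent are our own):

```python
def linear_search(items, value):
    """Check every item in order; return the index of the first
    match, or -1 if the value is not in the list."""
    for i, item in enumerate(items):
        if item == value:
            return i
    return -1
```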

Binary Search
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search algorithm works on the principle of divide and conquer. For this algorithm to work properly, the data collection should be in the sorted form.
Binary search looks for a particular item by comparing the middle most item of the collection. If a
match occurs, then the index of item is returned. If the middle item is greater than the item, then the
item is searched in the sub-array to the left of the middle item. Otherwise, the item is searched for in
the sub-array to the right of the middle item. This process continues on the sub-array as well until the
size of the subarray reduces to zero.

How Binary Search Works?

For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the process of binary search with a worked example. The following is our sorted array, and let us assume that we need to search the location of value 31 using binary search.

10  14  19  26  27  31  33  35  42  44

First, we shall determine half of the array by using this formula −


mid = low + (high - low) / 2
Here it is, 0 + (9 - 0 ) / 2 = 4 (integer value of 4.5). So, 4 is the mid of the array.

Now we compare the value stored at location 4, with the value being searched, i.e. 31. We find that the
value at location 4 is 27, which is not a match. As the value is greater than 27 and we have a sorted
array, so we also know that the target value must be in the upper portion of the array.

We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.

The value stored at location 7 is not a match, rather it is more than what we are looking for. So, the
value must be in the lower part from this location.

Hence, we calculate the mid again. This time it is 5.

We compare the value stored at location 5 with our target value. We find that it is a match.

We conclude that the target value 31 is stored at location 5.


Binary search halves the searchable items and thus reduces the count of comparisons to be made to a very small number.

Pseudocode

The pseudocode of binary search algorithms should look like this −

Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist.

      set midPoint = lowerBound + ( upperBound - lowerBound ) / 2

      if A[midPoint] < x
         set lowerBound = midPoint + 1

      if A[midPoint] > x
         set upperBound = midPoint - 1

      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while

end procedure
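The pseudocode translates directly into Python; this sketch uses 0-based indexing rather than the 1-based bounds above, and returns -1 instead of exiting when x is absent (our own conventions):

```python
def binary_search(a, x):
    """Search a sorted list by repeatedly halving the range
    [low, high]; return the index of x, or -1 if x is absent."""
    low, high = 0, len(a) - 1
    while low <= high:
        mid = low + (high - low) // 2
        if a[mid] < x:
            low = mid + 1       # x must be in the upper half
        elif a[mid] > x:
            high = mid - 1      # x must be in the lower half
        else:
            return mid          # match found
    return -1                   # range shrank to nothing
```

Running it on the example array finds 31 at location 5, exactly as traced above.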

Bubble Sort
Bubble Sort is the simplest sorting algorithm. It works by repeatedly swapping adjacent elements
if they are in the wrong order.
Example:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, algorithm compares the first two elements, and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), algorithm does not
swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm
needs one whole pass without any swap to know it is sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Algorithm

We assume list is an array of n elements. We further assume that swap function swaps the values of
the given array elements.

begin BubbleSort(list)

   for all elements of list
      if list[i] > list[i+1]
         swap(list[i], list[i+1])
      end if
   end for

   return list

end BubbleSort

Pseudocode

Pseudocode of BubbleSort algorithm can be written as follows −

procedure bubbleSort( list : array of items )

   loop = list.count;

   for i = 0 to loop-1 do:
      swapped = false

      for j = 0 to loop-2-i do:

         /* compare the adjacent elements */
         if list[j] > list[j+1] then
            /* swap them */
            swap( list[j], list[j+1] )
            swapped = true
         end if

      end for

      /* if no number was swapped that means
         the array is sorted now, break the loop. */
      if(not swapped) then
         break
      end if

   end for

   return list

end procedure
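A runnable Python version of the above, including the early exit when a whole pass makes no swap (the function name is our own):

```python
def bubble_sort(items):
    """Repeatedly swap adjacent out-of-order pairs; after pass i the
    largest i+1 values have bubbled to the end, so the inner range
    shrinks. Stop early once a full pass makes no swap."""
    a = list(items)              # work on a copy
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:          # no swap: already sorted
            break
    return a
```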

Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering
ascending order) from unsorted part and putting it at the beginning. The algorithm maintains two
subarrays in a given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the
unsorted subarray is picked and moved to the sorted subarray.
Following example explains the above steps:

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]
// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 25 64

Algorithm
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted

Pseudocode
procedure selection sort
   list : array of items
   n : size of list

   for i = 1 to n - 1
      /* set current element as minimum */
      min = i

      /* check the element to be minimum */
      for j = i+1 to n
         if list[j] < list[min] then
            min = j;
         end if
      end for

      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if

   end for

end procedure
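The same pseudocode as a Python sketch, which can be checked against the 64 25 12 22 11 example above:

```python
def selection_sort(items):
    """Grow a sorted prefix by repeatedly swapping the minimum of
    the unsorted suffix into position i."""
    a = list(items)
    for i in range(len(a) - 1):
        min_idx = i
        for j in range(i + 1, len(a)):
            if a[j] < a[min_idx]:
                min_idx = j
        if min_idx != i:
            a[i], a[min_idx] = a[min_idx], a[i]
    return a
```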

Insertion Sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing cards in your
hands. The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are
picked and placed at the correct position in the sorted part.

Example:
12, 11, 13, 5, 6
Let us loop for i = 1 (second element of the array) to 4 (last element of the array)
i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12
11, 12, 13, 5, 6
i = 2. 13 will remain at its position as all elements in A[0..i-1] are smaller than 13
11, 12, 13, 5, 6
i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one position ahead
of their current position.
5, 11, 12, 13, 6
i = 4. 6 will move to position after 5, and elements from 11 to 13 will move one position ahead of their
current position.
5, 6, 11, 12, 13

Algorithm
Now we have a bigger picture of how this sorting technique works, so we can derive simple steps by
which we can achieve insertion sort.
Step 1 − If it is the first element, it is already sorted. return 1;
Step 2 − Pick next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the
value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until list is sorted

Pseudocode

procedure insertionSort( A : array of items )

   int holePosition
   int valueToInsert

   for i = 1 to length(A) inclusive do:

      /* select value to be inserted */
      valueToInsert = A[i]
      holePosition = i

      /* locate hole position for the element to be inserted */
      while holePosition > 0 and A[holePosition-1] > valueToInsert do:
         A[holePosition] = A[holePosition-1]
         holePosition = holePosition - 1
      end while

      /* insert the number at hole position */
      A[holePosition] = valueToInsert

   end for

end procedure
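In Python, the hole-shifting logic above looks like this (0-based indexing; the function name is our own):

```python
def insertion_sort(items):
    """For each element, shift larger sorted-part values one slot
    right, then drop the element into the hole that opens up."""
    a = list(items)
    for i in range(1, len(a)):
        value_to_insert = a[i]
        hole = i
        # move the hole left past every larger element
        while hole > 0 and a[hole - 1] > value_to_insert:
            a[hole] = a[hole - 1]
            hole -= 1
        a[hole] = value_to_insert   # insert at the hole position
    return a
```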

Quick Sort
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array
around the picked pivot. There are many different versions of quickSort that pick pivot in different
ways.
1. Always pick first element as pivot.
2. Always pick last element as pivot (implemented below)
3. Pick a random element as pivot.
4. Pick median as pivot.
The key process in quickSort is partition(). Its goal is: given an array and an element x of the
array as pivot, put x at its correct position in the sorted array, put all elements smaller than x
before x, and put all elements greater than x after x. All this should be done in linear time.

Quick Sort Algorithm

Using pivot algorithm recursively, we end up with smaller possible partitions. Each partition is then
processed for quick sort. We define recursive algorithm for quicksort as follows −
Step 1 − Make the right-most index value pivot
Step 2 − partition the array using pivot value
Step 3 − quicksort left partition recursively
Step 4 − quicksort right partition recursively

Quick Sort Pseudocode

To get more into it, let see the pseudocode for quick sort algorithm −

procedure quickSort(left, right)

if right-left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if

end procedure
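The pseudocode leaves partitionFunc undefined; the sketch below fills it in with one common scheme (Lomuto-style partitioning, matching option 2 above: the last element is the pivot). The names are our own:

```python
def quick_sort(a, left=0, right=None):
    """Sort a in place by partitioning around the right-most element,
    then recursing on the two sides of the pivot."""
    if right is None:
        right = len(a) - 1
    if right - left <= 0:
        return a
    pivot = a[right]
    store = left
    # move every element smaller than the pivot before position store
    for i in range(left, right):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    # drop the pivot into its final sorted position
    a[store], a[right] = a[right], a[store]
    quick_sort(a, left, store - 1)
    quick_sort(a, store + 1, right)
    return a
```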

Merge Sort
Merge sort is a sorting technique based on divide and conquer technique. With worst-case time
complexity being Ο(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.

How Merge Sort Works?

To understand merge sort, we take an unsorted array as the following −

14  33  27  10  35  19  42  44

We know that merge sort first divides the whole array iteratively into equal halves until the atomic
values are achieved. We see here that an array of 8 items is divided into two arrays of size 4.

This does not change the sequence of appearance of items in the original. Now we divide these two
arrays into halves.

We further divide these arrays until we achieve atomic values which can no more be divided.

Now, we combine them in exactly the same manner as they were broken down.
We first compare the element for each list and then combine them into another list in a sorted
manner. We see that 14 and 33 are in sorted positions. We compare 27 and 10 and in the target list of
2 values we put 10 first, followed by 27. We change the order of 19 and 35 whereas 42 and 44 are
placed sequentially.

In the next iteration of the combining phase, we compare lists of two data values, and merge them into
a list of four data values, placing all in a sorted order.

After the final merging, the list should look like this −

10  14  19  27  33  35  42  44

Now we should learn some programming aspects of merge sorting.

Algorithm
Merge sort keeps on dividing the list into equal halves until it can no more be divided. By definition, if it
is only one element in the list, it is sorted. Then, merge sort combines the smaller sorted lists keeping
the new list sorted too.
Step 1 − if it is only one element in the list it is already sorted, return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into new list in sorted order.

Pseudocode
We shall now see the pseudocodes for merge sort functions. As our algorithms point out two main
functions − divide & merge.
Merge sort works with recursion and we shall see our implementation in the same way.

procedure mergesort( var a as array )

   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )

end procedure

procedure merge( var a as array, var b as array )

   var c as array

   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c

end procedure
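Both functions − mergesort and merge − can be combined into one compact Python sketch that returns a new sorted list:

```python
def merge_sort(a):
    """Split the list in half, sort each half recursively, then
    merge the two sorted halves into one sorted list."""
    if len(a) <= 1:
        return a                      # a one-element list is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # merge: repeatedly take the smaller front element
    c, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            c.append(left[i])
            i += 1
        else:
            c.append(right[j])
            j += 1
    c.extend(left[i:])                # one side may have leftovers
    c.extend(right[j:])
    return c
```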

Binary Tree Data Structure


A binary tree is a tree in which each node can have a maximum of two children. The name "binary" itself
suggests 'two'; therefore, each node can have either 0, 1 or 2 children, which we typically name the left
and right child.

Representation of Binary Tree
The above tree is a binary tree because each node contains at most two children. In its logical
representation, each node stores a value together with references to its left and right children.

Properties of Binary Tree


o At each level i, the maximum number of nodes is 2^i.
o The height of the tree is defined as the longest path from the root node to a leaf node. The
tree which is shown above has a height equal to 3. Therefore, the maximum number of nodes at
height 3 is equal to (1 + 2 + 4 + 8) = 15. In general, the maximum number of nodes possible at
height h is (2^0 + 2^1 + 2^2 + … + 2^h) = 2^(h+1) − 1.
o The minimum number of nodes possible at height h is equal to h+1.
o If the number of nodes is minimum, then the height of the tree would be maximum. Conversely,
if the number of nodes is maximum, then the height of the tree would be minimum.
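The two node-count bounds above can be checked with a tiny Python sketch:

```python
def max_nodes(h):
    """Maximum nodes in a binary tree of height h: every level full,
    so 2^0 + 2^1 + ... + 2^h = 2^(h+1) - 1."""
    return 2 ** (h + 1) - 1

def min_nodes(h):
    """Minimum nodes at height h: a single chain of h+1 nodes."""
    return h + 1
```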

Depth-first search
DFS (Depth-first search) is a technique used for traversing a tree or graph. Here backtracking is used
for traversal. In this traversal the deepest node is visited first, and the algorithm then backtracks to
its parent node if no unvisited sibling of that node exists.

Therefore, the Depth First Traversals of this Tree will be:


(a) Inorder (Left, Root, Right) : 4 2 5 1 3
(b) Preorder (Root, Left, Right) : 1 2 4 5 3
(c) Postorder (Left, Right, Root) : 4 5 2 3 1
Implementing all traversals using DFS

 C++

// C++ program for different tree traversals
#include <iostream>
using namespace std;

/* A binary tree node has data, a pointer to the left child
   and a pointer to the right child */
struct Node
{
    int data;
    struct Node *left, *right;
    Node(int data)
    {
        this->data = data;
        left = right = NULL;
    }
};

/* Given a binary tree, print its nodes according to the
   "bottom-up" postorder traversal. */
void printPostorder(struct Node* node)
{
    if (node == NULL)
        return;

    // first recur on left subtree
    printPostorder(node->left);

    // then recur on right subtree
    printPostorder(node->right);

    // now deal with the node
    cout << node->data << " ";
}

/* Given a binary tree, print its nodes in inorder */
void printInorder(struct Node* node)
{
    if (node == NULL)
        return;

    /* first recur on left child */
    printInorder(node->left);

    /* then print the data of node */
    cout << node->data << " ";

    /* now recur on right child */
    printInorder(node->right);
}

/* Given a binary tree, print its nodes in preorder */
void printPreorder(struct Node* node)
{
    if (node == NULL)
        return;

    /* first print data of node */
    cout << node->data << " ";

    /* then recur on left subtree */
    printPreorder(node->left);

    /* now recur on right subtree */
    printPreorder(node->right);
}

/* Driver program to test above functions */
int main()
{
    struct Node *root = new Node(1);
    root->left = new Node(2);
    root->right = new Node(3);
    root->left->left = new Node(4);
    root->left->right = new Node(5);

    cout << "\nPreorder traversal of binary tree is \n";
    printPreorder(root);

    cout << "\nInorder traversal of binary tree is \n";
    printInorder(root);

    cout << "\nPostorder traversal of binary tree is \n";
    printPostorder(root);

    return 0;
}

Output:
Preorder traversal of binary tree is
1 2 4 5 3
Inorder traversal of binary tree is
4 2 5 1 3
Postorder traversal of binary tree is
4 5 2 3 1

 Breadth First Search


Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It
starts at the tree root and explores the neighbor nodes first, before moving to the next level neighbors.
Approach:
 Take an empty Queue.
 Start from the root; insert the root into the Queue.
 Now, while the Queue is not empty,
 extract a node from the Queue and insert all its children into the Queue.
 Print the extracted node.
Complete Code:

import java.util.LinkedList;
import java.util.Queue;

public class T_BFS {
    public void levelOrderQueue(Node root) {
        Queue<Node> q = new LinkedList<Node>();
        if (root == null)
            return;
        q.add(root);
        while (!q.isEmpty()) {
            Node n = (Node) q.remove();
            System.out.print(" " + n.data);
            if (n.left != null)
                q.add(n.left);
            if (n.right != null)
                q.add(n.right);
        }
    }

    public static void main(String[] args) throws java.lang.Exception {
        Node root = new Node(5);
        root.left = new Node(10);
        root.right = new Node(15);
        root.left.left = new Node(20);
        root.left.right = new Node(25);
        root.right.left = new Node(30);
        root.right.right = new Node(35);
        T_BFS i = new T_BFS();
        System.out.println("Breadth First Search : ");
        i.levelOrderQueue(root);
    }
}

class Node {
    int data;
    Node left;
    Node right;

    public Node(int data) {
        this.data = data;
        this.left = null;
        this.right = null;
    }
}

Output:
Breadth First Search :
5 10 15 20 25 30 35

Expression Trees (Infix, prefix, postfix).


There are three ways to read a binary tree:

 Prefix: Root node, then left child, then right child
 Infix: Left child, then root node, then right child
 Postfix: Left child, then right child, then root node
Take, for example, this really simple binary tree: a '+' node at the root, with 2 as its left child and 3 as its right child.

The ways to read this are:

 Prefix: + 2 3
 Infix: 2 + 3
 Postfix: 2 3 +
The infix reading of this tree resembles (and, in fact, is) the standard way we write and interpret simple
mathematical equations. "Two plus three equals..." (As an aside, all simple mathematical equations can
be expressed as a binary tree. I'm not happy with the tools I have available to render trees right now so I
will leave this as an exercise for you, the reader.)

The postfix reading should be familiar to anyone who owns a Hewlett-Packard graphing calculator. This
form of representing mathematical equations is most commonly referred to as Reverse Polish notation.
Postfix ordering of mathematical expressions is commonly used for rendering stack-based calculators,
usually in assignments for a programming class.

The prefix reading resembles the standard way we use constructs in programming languages. If we had
to represent "2 + 3" using a function, we would write something like plus( 2, 3 ). This is most clearly
shown with LISP's construct ( + 2 3 ). Haskell's backtick operators around infix operators, e.g. `div`, have
a side effect of reminding programmers that most functions are prefix-oriented.
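The three readings are just three recursive traversals of the expression tree. A minimal Python sketch of the "2 + 3" tree from above (class and function names are our own):

```python
class ExprNode:
    """A node of an expression tree: an operator or operand value
    with optional left and right children."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def prefix(n):    # root, then left, then right
    return [] if n is None else [n.value] + prefix(n.left) + prefix(n.right)

def infix(n):     # left, then root, then right
    return [] if n is None else infix(n.left) + [n.value] + infix(n.right)

def postfix(n):   # left, then right, then root
    return [] if n is None else postfix(n.left) + postfix(n.right) + [n.value]

# the simple tree from the text: '+' at the root, 2 and 3 as children
tree = ExprNode('+', ExprNode('2'), ExprNode('3'))
```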

Page 17 of 20
Basic Concepts of Graphs
A graph is a set of points, called nodes or vertices, which are interconnected by a set of lines called
edges. The study of graphs, or graph theory, is an important part of a number of disciplines in the fields
of mathematics, engineering and computer science.
Graph Theory
Definition − A graph (denoted as G = (V, E)) consists of a non-empty set of vertices or nodes V and a set
of edges E. A vertex a represents an endpoint of an edge. An edge joins two vertices a, b and is
represented by set of vertices it connects.
Example − Let us consider, a Graph is G = (V, E) where V = {a, b, c, d} and E = {{a, b}, {a, c}, {b, c}, {c, d}}

Degree of a Vertex − The degree of a vertex V of a graph G (denoted by deg (V)) is the number of edges
incident with the vertex V.
Vertex   Degree   Even / Odd
a        2        even
b        2        even
c        3        odd
d        1        odd

Even and Odd Vertex − If the degree of a vertex is even, the vertex is called an even vertex and if the
degree of a vertex is odd, the vertex is called an odd vertex.
Degree of a Graph − The degree of a graph is the largest vertex degree of that graph. For the above
graph the degree of the graph is 3.
The Handshaking Lemma − In a graph, the sum of all the degrees of all the vertices is equal to twice the
number of edges. For example, in the above case, the sum of all the degrees of all vertices is 8 and the
total number of edges is 4.

Storing of Graph
Every storage method has its pros and cons, and the right storage method is chosen based on the
complexity. The two most commonly used data structures to store graphs are:

1. Adjacency list

Here nodes are stored as an index of a one-dimensional array, followed by edges being stored as a list.

2. Adjacency matrix

Here nodes are represented as the index of a two-dimensional array, followed by edges represented as
non-zero values of an adjacency matrix.
Both rows and columns showcase nodes; the entire matrix is filled with either “0” or “1”, representing
true or false. Zero represents that there is no path, and 1 represents a path.
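Both storage methods can be built in Python for the example graph G from the previous section (V = {a, b, c, d}, E = {{a,b}, {a,c}, {b,c}, {c,d}}); variable names are our own:

```python
vertices = ['a', 'b', 'c', 'd']
edges = [('a', 'b'), ('a', 'c'), ('b', 'c'), ('c', 'd')]

# 1. Adjacency list: one list of neighbours per vertex
adj_list = {v: [] for v in vertices}
for u, v in edges:
    adj_list[u].append(v)
    adj_list[v].append(u)        # undirected: record both directions

# 2. Adjacency matrix: 1 marks a path, 0 marks no path
index = {v: i for i, v in enumerate(vertices)}
adj_matrix = [[0] * len(vertices) for _ in vertices]
for u, v in edges:
    adj_matrix[index[u]][index[v]] = 1
    adj_matrix[index[v]][index[u]] = 1
```

Note that len(adj_list['c']) is 3, matching the degree of vertex c computed earlier.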

 Graph Traversal
Graph traversal is a method used to search nodes in a graph. It decides the order in which nodes are
visited, and it searches edges without making a loop, which means all the nodes and edges can be
visited without creating a cycle.
There are two graph traversal structures.
1. DFS (Depth First Search): In-depth search method
The DFS search begins starting from the first node and goes deeper and deeper, exploring down until
the targeted node is found. If the targeted key is not found, the search path is changed to the path that
was stopped exploring during the initial search, and the same procedure is repeated for that branch.

A spanning tree is produced from the result of this search; this tree is without loops. A stack data
structure is used to implement DFS traversal.
Steps followed to implement DFS search:
Step 1 – Stack size needs to be defined depending on the total number of nodes.
Step 2 – Select the initial node for traversal; visit it and push it to the stack.
Step 3 – Now, visit the adjacent node that is not visited before and push that to the stack.
Step 4 – Repeat Step 3 until there is no adjacent node that is not visited.
Step 5 – Use backtracking and pop one node from the stack when there are no other nodes to be visited.
Step 6 – Empty the stack by repeating steps 3,4, and 5.
Step 7 – When the stack is empty, a final spanning tree is formed by eliminating unused edges.
Applications of DFS are:
 Solving puzzles with only one solution.
 To test if a graph is bipartite.
 Topological Sorting for scheduling the job and many others.
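The stack-based steps above can be sketched as an iterative Python function; returning the visit order instead of building the spanning tree keeps the example short (our own simplification):

```python
def dfs(graph, start):
    """Depth-first traversal with an explicit stack; returns the
    vertices in the order they are first visited."""
    visited, order, stack = set(), [], [start]
    while stack:
        node = stack.pop()           # take the most recently pushed node
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # push neighbours in reverse so they pop in listed order
        for nb in reversed(graph[node]):
            if nb not in visited:
                stack.append(nb)
    return order

# the example graph G from the graph-theory section, as an adjacency list
graph = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
```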
2. BFS (Breadth-First Search): Search is implemented using a queuing method
Breadth-First Search navigates a graph level by level and uses a Queue to jump from one node to
another after encountering an end in the path.
Steps followed to implement BFS search,
Step 1 – Based on the number of nodes, the Queue is defined.
Step 2 – Start from any node of the traversal. Visit that node and add it to the Queue.
Step 3 – Now check the non-visited adjacent nodes of the node at the front of the Queue, and add them
to the rear of the Queue, not to the front.
Step 4 – Remove from the Queue each node whose adjacent nodes have all been visited.
Step 5 – Empty the Queue by repeating steps 3 and 4.
Step 6 – Remove the unused edges and form the spanning tree only after the Queue is empty.
Applications of BFS are:
 Peer to Peer Networks- Like in Bittorrent, it is used to find all adjacent nodes.
 Crawlers in Search Engine.
 Social Networking Websites and many more.
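The queue-driven steps above can be sketched the same way in Python (collections.deque serves as the Queue; names are our own):

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first traversal: visit a node, then enqueue all of
    its unvisited neighbours, so nodes come out level by level."""
    visited, order = {start}, []
    queue = deque([start])
    while queue:
        node = queue.popleft()       # extract from the front
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)     # add to the rear
    return order

# the example graph G from the graph-theory section, as an adjacency list
graph = {'a': ['b', 'c'], 'b': ['a', 'c'], 'c': ['a', 'b', 'd'], 'd': ['c']}
```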
