Unit 2 (ADS)
Linear Search
Linear search is a very simple search algorithm. It makes a sequential pass over the items, checking them
one by one. If a match is found, that particular item (or its position) is returned; otherwise the search
continues until the end of the data collection is reached.
Algorithm
Step 1: Set i to 1
Step 2: if i > n then go to step 7
Step 3: if A[i] = x then go to step 6
Step 4: Set i to i + 1
Step 5: Go to Step 2
Step 6: Print Element x Found at index i and go to step 8
Step 7: Print element not found
Step 8: Exit
Pseudocode
procedure linear_search (list, value)
   for each item in the list
      if item == value then
         return the item's location
      end if
   end for
end procedure
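To make this concrete, here is a minimal linear search sketch in Java, assuming a plain int array (the class and method names are illustrative):

import java.util.Arrays;

public class LinearSearch {
    // Returns the index of key in arr, or -1 if it is not present.
    static int linearSearch(int[] arr, int key) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == key) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {64, 25, 12, 22, 11};
        System.out.println(linearSearch(arr, 22)); // prints 3
        System.out.println(linearSearch(arr, 99)); // prints -1
    }
}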
Binary Search
Binary search is a fast search algorithm with run-time complexity of Ο(log n). This search algorithm
works on the principle of divide and conquer. For this algorithm to work properly, the data collection
should be in the sorted form.
Binary search looks for a particular item by comparing the middle-most item of the collection. If a
match occurs, then the index of the item is returned. If the middle item is greater than the search item, then the
item is searched for in the sub-array to the left of the middle item. Otherwise, the item is searched for in
the sub-array to the right of the middle item. This process continues on the sub-array as well, until the
size of the sub-array reduces to zero.
For a binary search to work, it is mandatory for the target array to be sorted. We shall learn the process
of binary search with a worked example. Consider a sorted array of 10 elements, indexed 0 to 9, and let
us assume that we need to search for the location of the value 31 using binary search.
First, we determine the middle of the array using the formula
mid = low + (high - low) / 2
With low = 0 and high = 9, this gives mid = 4, so we start at location 4.
Now we compare the value stored at location 4 with the value being searched, i.e. 31. We find that the
value at location 4 is 27, which is not a match. Since the target value is greater than 27 and we have a sorted
array, we also know that the target value must be in the upper portion of the array.
We change our low to mid + 1 and find the new mid value again.
low = mid + 1
mid = low + (high - low) / 2
Our new mid is 7 now. We compare the value stored at location 7 with our target value 31.
The value stored at location 7 is not a match; rather, it is more than what we are looking for. So the
value must be in the lower part from this location.
We change our high to mid - 1 and calculate the new mid value again.
high = mid - 1
mid = low + (high - low) / 2
Our new mid is 5 now. We compare the value stored at location 5 with our target value. We find that it is a match.
Pseudocode
Procedure binary_search
   A ← sorted array
   n ← size of array
   x ← value to be searched

   Set lowerBound = 1
   Set upperBound = n

   while x not found
      if upperBound < lowerBound
         EXIT: x does not exist.

      set midPoint = lowerBound + (upperBound - lowerBound) / 2

      if A[midPoint] < x
         set lowerBound = midPoint + 1

      if A[midPoint] > x
         set upperBound = midPoint - 1

      if A[midPoint] = x
         EXIT: x found at location midPoint
   end while
end procedure
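An iterative Java sketch of the same procedure, assuming a sorted int array and 0-based indexing (the array and names are illustrative):

public class BinarySearch {
    // Returns the index of key in the sorted array arr, or -1 if absent.
    static int binarySearch(int[] arr, int key) {
        int low = 0;
        int high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids overflow of (low + high)
            if (arr[mid] == key) {
                return mid;
            } else if (arr[mid] < key) {
                low = mid + 1;   // search the upper half
            } else {
                high = mid - 1;  // search the lower half
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] arr = {10, 14, 19, 26, 27, 31, 33, 35, 42, 44};
        System.out.println(binarySearch(arr, 31)); // prints 5
    }
}

For this particular array the probes visit indices 4, 7 and then 5, mirroring the walkthrough above.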
Bubble Sort
Bubble Sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements
if they are in the wrong order.
Example:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements and swaps them since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not
swap them.
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm
needs one whole pass without any swap to know it is sorted.
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Algorithm
We assume list is an array of n elements. We further assume that the swap function swaps the values of
the given array elements.
begin BubbleSort(list)
   for all elements of list
      if list[i] > list[i+1]
         swap(list[i], list[i+1])
      end if
   end for
   return list
end BubbleSort
Pseudocode
procedure bubbleSort( list : array of items )
   loop = list.count;
   for i = 0 to loop-1 do:
      swapped = false
      for j = 0 to loop-i-2 do:
         if list[j] > list[j+1] then
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      if (not swapped) then
         break
      end if
   end for
end procedure
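A runnable Java sketch of the same idea, including the early-exit check described above (names are illustrative):

import java.util.Arrays;

public class BubbleSort {
    static void bubbleSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            boolean swapped = false;
            // After pass i, the last i elements are already in their final places.
            for (int j = 0; j < arr.length - 1 - i; j++) {
                if (arr[j] > arr[j + 1]) {
                    int tmp = arr[j];
                    arr[j] = arr[j + 1];
                    arr[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) {
                break; // a full pass with no swaps means the array is sorted
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = {5, 1, 4, 2, 8};
        bubbleSort(arr);
        System.out.println(Arrays.toString(arr)); // [1, 2, 4, 5, 8]
    }
}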
Selection Sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering
ascending order) from the unsorted part and putting it at the beginning. The algorithm maintains two
subarrays within the given array.
1) The subarray which is already sorted.
2) Remaining subarray which is unsorted.
In every iteration of selection sort, the minimum element (considering ascending order) from the
unsorted subarray is picked and moved to the sorted subarray.
The following example illustrates these steps on the array:
arr[] = 64 25 12 22 11
Pass 1 − find the minimum of arr[0..4] (11) and swap it with arr[0]: 11 25 12 22 64
Pass 2 − find the minimum of arr[1..4] (12) and swap it with arr[1]: 11 12 25 22 64
Pass 3 − find the minimum of arr[2..4] (22) and swap it with arr[2]: 11 12 22 25 64
Pass 4 − find the minimum of arr[3..4] (25), which is already in place: 11 12 22 25 64
Algorithm
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted
Pseudocode
procedure selection sort
list : array of items
n : size of list
for i = 1 to n - 1
/* set current element as minimum*/
min = i
for j = i+1 to n
if list[j] < list[min] then
min = j;
end if
      end for

      /* swap the minimum element with the current element */
      if min != i then
         swap list[min] and list[i]
      end if
   end for
end procedure
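A corresponding Java sketch, assuming an int array (names are illustrative):

import java.util.Arrays;

public class SelectionSort {
    static void selectionSort(int[] arr) {
        for (int i = 0; i < arr.length - 1; i++) {
            // Find the index of the minimum element in the unsorted part arr[i..n-1].
            int min = i;
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[j] < arr[min]) {
                    min = j;
                }
            }
            // Move it to the end of the sorted part.
            if (min != i) {
                int tmp = arr[i];
                arr[i] = arr[min];
                arr[min] = tmp;
            }
        }
    }

    public static void main(String[] args) {
        int[] arr = {64, 25, 12, 22, 11};
        selectionSort(arr);
        System.out.println(Arrays.toString(arr)); // [11, 12, 22, 25, 64]
    }
}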
Insertion Sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing cards in your
hands. The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are
picked and placed at the correct position in the sorted part.
Example:
12, 11, 13, 5, 6
Let us loop for i = 1 (second element of the array) to 4 (last element of the array)
i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12
11, 12, 13, 5, 6
i = 2. 13 will remain at its position as all elements in A[0..i-1] are smaller than 13
11, 12, 13, 5, 6
i = 3. 5 will move to the beginning and all other elements from 11 to 13 will move one position ahead
of their current position.
5, 11, 12, 13, 6
i = 4. 6 will move to position after 5, and elements from 11 to 13 will move one position ahead of their
current position.
5, 6, 11, 12, 13
Algorithm
Now we have a bigger picture of how this sorting technique works, so we can derive simple steps by
which we can achieve insertion sort.
Step 1 − If it is the first element, it is already sorted. return 1;
Step 2 − Pick the next element
Step 3 − Compare with all elements in the sorted sub-list
Step 4 − Shift all the elements in the sorted sub-list that are greater than the
value to be sorted
Step 5 − Insert the value
Step 6 − Repeat until the list is sorted
Pseudocode
procedure insertionSort( A : array of items )
   for i = 1 to length(A) - 1 do:
      valueToInsert = A[i]
      holePosition = i
      /* shift larger sorted elements one position to the right */
      while holePosition > 0 and A[holePosition-1] > valueToInsert do:
         A[holePosition] = A[holePosition-1]
         holePosition = holePosition - 1
      end while
      A[holePosition] = valueToInsert
   end for
end procedure
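A Java sketch of the same shifting-and-inserting idea, using the example array above (names are illustrative):

import java.util.Arrays;

public class InsertionSort {
    static void insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int value = arr[i];   // element to insert into the sorted part arr[0..i-1]
            int hole = i;
            // Shift sorted elements greater than value one position to the right.
            while (hole > 0 && arr[hole - 1] > value) {
                arr[hole] = arr[hole - 1];
                hole--;
            }
            arr[hole] = value;
        }
    }

    public static void main(String[] args) {
        int[] arr = {12, 11, 13, 5, 6};
        insertionSort(arr);
        System.out.println(Arrays.toString(arr)); // [5, 6, 11, 12, 13]
    }
}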
Quick Sort
QuickSort is a Divide and Conquer algorithm. It picks an element as pivot and partitions the given array
around the picked pivot. There are many different versions of quickSort that pick pivot in different
ways.
1. Always pick first element as pivot.
2. Always pick last element as pivot (implemented below)
3. Pick a random element as pivot.
4. Pick median as pivot.
The key process in quickSort is partition(). Given an array and an element x of the array as the pivot, the
goal of partitioning is to put x at its correct position in the sorted array, with all smaller elements
(smaller than x) before x and all greater elements (greater than x) after x. All of this should be done in linear time.
Using the pivot algorithm recursively, we end up with smaller and smaller partitions. Each partition is then
processed for quick sort. We define the recursive algorithm for quicksort as follows −
Step 1 − Make the right-most index value the pivot
Step 2 − Partition the array using the pivot value
Step 3 − Quicksort the left partition recursively
Step 4 − Quicksort the right partition recursively
To get more into it, let us see the pseudocode for the quick sort algorithm −
procedure quickSort(left, right)
if right - left <= 0
return
else
pivot = A[right]
partition = partitionFunc(left, right, pivot)
quickSort(left,partition-1)
quickSort(partition+1,right)
end if
end procedure
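A self-contained Java sketch that, like the pseudocode above, uses the right-most element as the pivot (the helper names are illustrative):

import java.util.Arrays;

public class QuickSort {
    // Places the pivot (arr[right]) at its final position and returns that index.
    static int partition(int[] arr, int left, int right) {
        int pivot = arr[right];
        int i = left - 1; // boundary of the "smaller than pivot" region
        for (int j = left; j < right; j++) {
            if (arr[j] < pivot) {
                i++;
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            }
        }
        int tmp = arr[i + 1]; arr[i + 1] = arr[right]; arr[right] = tmp;
        return i + 1;
    }

    static void quickSort(int[] arr, int left, int right) {
        if (left < right) {
            int p = partition(arr, left, right);
            quickSort(arr, left, p - 1);   // sort the left partition
            quickSort(arr, p + 1, right);  // sort the right partition
        }
    }

    public static void main(String[] args) {
        int[] arr = {10, 80, 30, 90, 40, 50, 70};
        quickSort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr)); // [10, 30, 40, 50, 70, 80, 90]
    }
}

As noted in the list of pivot choices above, picking a random element or the median instead of the last element helps avoid the worst case that occurs on already-sorted input.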
Merge Sort
Merge sort is a sorting technique based on the divide and conquer approach. With a worst-case time
complexity of Ο(n log n), it is one of the most respected algorithms.
Merge sort first divides the array into equal halves and then combines them in a sorted manner.
Consider the following unsorted array of 8 items:
14 33 27 10 35 19 42 44
Merge sort first divides the whole array iteratively into equal halves until atomic values are reached.
Here the array of 8 items is divided into two arrays of size 4: 14 33 27 10 and 35 19 42 44.
This does not change the sequence of appearance of the items in the original. Now we divide these two
arrays into halves.
We further divide these arrays until we reach atomic values, which cannot be divided any further.
Now we combine them in exactly the same manner as they were broken down.
We first compare the elements of each pair of lists and then combine them into another list in sorted
order. We see that 14 and 33 are already in sorted positions. We compare 27 and 10, and in the target list of
2 values we put 10 first, followed by 27. We change the order of 19 and 35, whereas 42 and 44 are
placed sequentially.
In the next iteration of the combining phase, we compare lists of two data values and merge them into
lists of four data values, placing all of them in sorted order: 10 14 27 33 and 19 35 42 44.
After the final merging, the list should look like this −
10 14 19 27 33 35 42 44
Algorithm
Merge sort keeps on dividing the list into equal halves until it can no longer be divided. By definition, if
there is only one element in the list, it is sorted. Merge sort then combines the smaller sorted lists, keeping
the new list sorted as well.
Step 1 − if there is only one element in the list, it is already sorted; return.
Step 2 − divide the list recursively into two halves until it can no more be divided.
Step 3 − merge the smaller lists into a new list in sorted order.
Pseudocode
We shall now see the pseudocode for the merge sort functions. As our algorithm points out, there are two main
functions − divide and merge.
Merge sort works with recursion, and we shall see our implementation in the same way.
procedure mergesort( var a as array )
   if ( n == 1 ) return a

   var l1 as array = a[0] ... a[n/2]
   var l2 as array = a[n/2+1] ... a[n]

   l1 = mergesort( l1 )
   l2 = mergesort( l2 )

   return merge( l1, l2 )
end procedure

procedure merge( var a as array, var b as array )
   var c as array
   while ( a and b have elements )
      if ( a[0] > b[0] )
         add b[0] to the end of c
         remove b[0] from b
      else
         add a[0] to the end of c
         remove a[0] from a
      end if
   end while

   while ( a has elements )
      add a[0] to the end of c
      remove a[0] from a
   end while

   while ( b has elements )
      add b[0] to the end of c
      remove b[0] from b
   end while

   return c
end procedure
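A compact Java sketch of the same divide-and-merge structure, using the example array from above (names are illustrative):

import java.util.Arrays;

public class MergeSort {
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) {
            return a;
        }
        // Divide into two halves and sort each recursively.
        int mid = a.length / 2;
        int[] l1 = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] l2 = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        return merge(l1, l2);
    }

    static int[] merge(int[] a, int[] b) {
        int[] c = new int[a.length + b.length];
        int i = 0, j = 0, k = 0;
        // Repeatedly take the smaller head element of the two lists.
        while (i < a.length && j < b.length) {
            c[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        }
        // Copy whatever remains in either list.
        while (i < a.length) c[k++] = a[i++];
        while (j < b.length) c[k++] = b[j++];
        return c;
    }

    public static void main(String[] args) {
        int[] arr = {14, 33, 27, 10, 35, 19, 42, 44};
        System.out.println(Arrays.toString(mergeSort(arr)));
        // [10, 14, 19, 27, 33, 35, 42, 44]
    }
}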
Representation of Binary Tree
A tree is a binary tree when each node contains at most two children. In the logical (linked)
representation, each node stores its data along with references to its left and right child; an absent
child is represented by a null reference.
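A minimal node type in Java capturing this linked representation; the class and field names are illustrative (a similar Node class appears in the BFS code later in this unit):

public class BinaryTreeNode {
    int data;
    BinaryTreeNode left;   // left child, or null if absent
    BinaryTreeNode right;  // right child, or null if absent

    BinaryTreeNode(int data) {
        this.data = data;
    }

    public static void main(String[] args) {
        // Build a small example tree: 1 at the root, 2 and 3 as its children,
        // and 4 and 5 as the children of 2.
        BinaryTreeNode root = new BinaryTreeNode(1);
        root.left = new BinaryTreeNode(2);
        root.right = new BinaryTreeNode(3);
        root.left.left = new BinaryTreeNode(4);
        root.left.right = new BinaryTreeNode(5);
        System.out.println(root.data); // 1
    }
}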
Depth-first search
DFS (Depth-first search) is a technique used for traversing a tree or graph. Here, backtracking is used for
traversal. In this traversal, the deepest node is visited first, and then the algorithm backtracks to its
parent node if no unvisited sibling of that node exists.
C++
void printPreorder(struct Node* node)
{
    if (node == NULL)
        return;

    /* first print the data of the node */
    cout << node->data << " ";

    /* then recur on left subtree */
    printPreorder(node->left);

    /* now recur on right subtree */
    printPreorder(node->right);
}

/* printInorder and printPostorder are analogous; only the position of the
   cout statement changes (middle and last, respectively). */
Output:
Preorder traversal of binary tree is
12453
Inorder traversal of binary tree is
42513
Postorder traversal of binary tree is
45231
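For reference, a self-contained Java sketch that reproduces the three traversal orders shown above; the tree is reconstructed from that output, and class and method names are illustrative:

public class TreeTraversals {
    static class Node {
        int data;
        Node left, right;
        Node(int data) { this.data = data; }
    }

    // Root, then left subtree, then right subtree.
    static void printPreorder(Node node) {
        if (node == null) return;
        System.out.print(node.data + " ");
        printPreorder(node.left);
        printPreorder(node.right);
    }

    // Left subtree, then root, then right subtree.
    static void printInorder(Node node) {
        if (node == null) return;
        printInorder(node.left);
        System.out.print(node.data + " ");
        printInorder(node.right);
    }

    // Left subtree, then right subtree, then root.
    static void printPostorder(Node node) {
        if (node == null) return;
        printPostorder(node.left);
        printPostorder(node.right);
        System.out.print(node.data + " ");
    }

    public static void main(String[] args) {
        Node root = new Node(1);
        root.left = new Node(2);
        root.right = new Node(3);
        root.left.left = new Node(4);
        root.left.right = new Node(5);

        System.out.println("Preorder traversal of binary tree is");
        printPreorder(root);   // 1 2 4 5 3
        System.out.println("\nInorder traversal of binary tree is");
        printInorder(root);    // 4 2 5 1 3
        System.out.println("\nPostorder traversal of binary tree is");
        printPostorder(root);  // 4 5 2 3 1
    }
}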
In a breadth-first (level-order) traversal we start from the root, add it to a queue, and then repeatedly
remove the node at the front of the queue, print the extracted node, and add its non-null children to the queue.
Complete Code:
import java.util.LinkedList;
import java.util.Queue;
public class T_BFS {
public void levelOrderQueue(Node root) {
Queue<Node> q = new LinkedList<Node>();
if (root == null)
return;
q.add(root);
while (!q.isEmpty()) {
Node n = (Node) q.remove();
System.out.print(" " + n.data);
if (n.left != null)
q.add(n.left);
if (n.right != null)
q.add(n.right);
}
}
public static void main(String[] args) throws java.lang.Exception {
Node root = new Node(5);
root.left = new Node(10);
root.right = new Node(15);
root.left.left = new Node(20);
root.left.right = new Node(25);
root.right.left = new Node(30);
root.right.right = new Node(35);
T_BFS i = new T_BFS();
System.out.println("Breadth First Search : ");
i.levelOrderQueue(root);
}
}
class Node {
int data;
Node left;
Node right;
public Node(int data) {
this.data = data;
this.left = null;
this.right = null;
}
}
Output:
Breadth First Search :
5 10 15 20 25 30 35
Consider the simple expression tree whose root is the operator + and whose two children are the operands 2 and 3. Reading this tree in the three standard orders gives:
Prefix: + 2 3
Infix: 2 + 3
Postfix: 2 3 +
The infix reading of this tree resembles (and, in fact, is) the standard way we write and interpret simple
mathematical equations. "Two plus three equals..." (As an aside, all simple mathematical equations can
be expressed as a binary tree. I'm not happy with the tools I have available to render trees right now so I
will leave this as an exercise for you, the reader.)
The postfix reading should be familiar to anyone who owns a Hewlett-Packard graphing calculator. This
form of representing mathematical equations is most commonly referred to as Reverse Polish notation.
Postfix ordering of mathematical expressions is commonly used for rendering stack-based calculators,
usually in assignments for a programming class.
The prefix reading resembles the standard way we use constructs in programming languages. If we had
to represent "2 + 3" using a function, we would write something like plus( 2, 3 ). This is most clearly
shown with LISP's construct ( + 2 3 ). Haskell's backtick operators around infix operators, e.g. `div`, have
a side effect of reminding programmers that most functions are prefix-oriented.
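A small Java sketch that builds this two-operand expression tree and prints the three readings (types and names are illustrative):

public class ExpressionTree {
    static class ExprNode {
        String token;          // an operator ("+") or an operand ("2", "3")
        ExprNode left, right;
        ExprNode(String token) { this.token = token; }
    }

    static void prefix(ExprNode n)  { if (n == null) return; System.out.print(n.token + " "); prefix(n.left); prefix(n.right); }
    static void infix(ExprNode n)   { if (n == null) return; infix(n.left); System.out.print(n.token + " "); infix(n.right); }
    static void postfix(ExprNode n) { if (n == null) return; postfix(n.left); postfix(n.right); System.out.print(n.token + " "); }

    public static void main(String[] args) {
        ExprNode plus = new ExprNode("+");
        plus.left = new ExprNode("2");
        plus.right = new ExprNode("3");
        prefix(plus);  System.out.println();  // + 2 3
        infix(plus);   System.out.println();  // 2 + 3
        postfix(plus); System.out.println();  // 2 3 +
    }
}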
Basic Concepts of Graphs
A graph is a set of points, called nodes or vertices, which are interconnected by a set of lines called
edges. The study of graphs, or graph theory, is an important part of a number of disciplines in the fields
of mathematics, engineering and computer science.
Graph Theory
Definition − A graph (denoted as G = (V, E)) consists of a non-empty set of vertices or nodes V and a set
of edges E. A vertex a represents an endpoint of an edge. An edge joins two vertices a and b and is
represented by the set of vertices it connects.
Example − Let us consider, a Graph is G = (V, E) where V = {a, b, c, d} and E = {{a, b}, {a, c}, {b, c}, {c, d}}
Degree of a Vertex − The degree of a vertex V of a graph G (denoted by deg (V)) is the number of edges
incident with the vertex V.
Vertex Degree Even / Odd
a 2 even
b 2 even
c 3 odd
d 1 odd
Even and Odd Vertex − If the degree of a vertex is even, the vertex is called an even vertex and if the
degree of a vertex is odd, the vertex is called an odd vertex.
Degree of a Graph − The degree of a graph is the largest vertex degree of that graph. For the above
graph the degree of the graph is 3.
The Handshaking Lemma − In a graph, the sum of the degrees of all the vertices is equal to twice the
number of edges. For example, in the above case, the sum of the degrees of all vertices is 8 and the total
number of edges is 4.
Storing of Graph
Every storage method has its pros and cons, and the right storage method is chosen based on its time
and space trade-offs. The two most commonly used data structures to store graphs are:
1. Adjacency list
2. Adjacency matrix
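As a sketch, the example graph G above could be stored with a simple adjacency list in Java (class and method names are illustrative):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AdjacencyListGraph {
    // Each vertex maps to the list of vertices it is connected to.
    private final Map<String, List<String>> adj = new HashMap<>();

    void addEdge(String u, String v) {
        adj.computeIfAbsent(u, k -> new ArrayList<>()).add(v);
        adj.computeIfAbsent(v, k -> new ArrayList<>()).add(u); // undirected edge
    }

    int degree(String v) {
        return adj.getOrDefault(v, List.of()).size();
    }

    public static void main(String[] args) {
        AdjacencyListGraph g = new AdjacencyListGraph();
        // Edges of the example graph: {a,b}, {a,c}, {b,c}, {c,d}
        g.addEdge("a", "b");
        g.addEdge("a", "c");
        g.addEdge("b", "c");
        g.addEdge("c", "d");
        System.out.println(g.degree("c")); // 3, matching the degree table above
    }
}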
Graph Traversal
Graph traversal is a method used to visit the nodes of a graph. It determines the order in which the
nodes are visited, and it explores edges without forming a loop, meaning all the nodes and edges can be
visited without revisiting any node.
There are two graph traversal techniques.
1. DFS (Depth First Search): in-depth search method
The DFS search begins from the first node and goes deeper and deeper, exploring down until the
targeted node is found. If the targeted key is not found, the search backtracks to the most recent branch
that was left unexplored, and the same procedure is repeated for that branch.
A spanning tree is produced as a result of this search; it contains no loops. A stack data structure is
used to implement DFS traversal, with its size bounded by the total number of nodes.
Steps followed to implement a DFS search (a code sketch follows these steps):
Step 1 – Define the stack size based on the total number of nodes.
Step 2 – Select the initial node for traversal; visit it and push it onto the stack.
Step 3 – Now visit an adjacent node that has not been visited before and push it onto the stack.
Step 4 – Repeat Step 3 until there is no unvisited adjacent node.
Step 5 – Use backtracking: pop one node from the stack when there are no further nodes to visit from the current node.
Step 6 – Repeat steps 3, 4, and 5 until the stack is empty.
Step 7 – When the stack is empty, the final spanning tree is formed by eliminating the unused edges.
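A minimal iterative DFS sketch in Java over an adjacency list, following the stack-based steps above; the graph and names are illustrative:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GraphDFS {
    static void dfs(Map<String, List<String>> adj, String start) {
        Deque<String> stack = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            String node = stack.pop();       // backtracks automatically when a branch is exhausted
            if (visited.add(node)) {
                System.out.print(node + " ");
                for (String next : adj.getOrDefault(node, List.of())) {
                    if (!visited.contains(next)) {
                        stack.push(next);    // go deeper along this neighbour next
                    }
                }
            }
        }
    }

    public static void main(String[] args) {
        // The example graph from the graph-theory section: edges {a,b}, {a,c}, {b,c}, {c,d}.
        Map<String, List<String>> adj = Map.of(
                "a", List.of("b", "c"),
                "b", List.of("a", "c"),
                "c", List.of("a", "b", "d"),
                "d", List.of("c"));
        dfs(adj, "a"); // one possible order: a c d b
    }
}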
Applications of DFS are:
Solving puzzles with only one solution.
Testing whether a graph is bipartite.
Topological sorting for job scheduling, and many others.
2. BFS (Breadth-First Search): search is implemented using a queue
Breadth-First Search navigates a graph breadth-wise and uses a queue to jump to the next node once a
dead end is reached in the current path.
Steps followed to implement a BFS search (a code sketch follows these steps):
Step 1 – Define the Queue based on the total number of nodes.
Step 2 – Start the traversal from any node; visit that node and add it to the Queue.
Step 3 – Visit the unvisited adjacent nodes of the node at the front of the Queue and add them to the rear of the Queue.
Step 4 – Remove the node at the front of the Queue once it has no unvisited adjacent nodes left.
Step 5 – Repeat steps 3 and 4 until the Queue is empty.
Step 6 – After the Queue is empty, form the spanning tree by removing the unused edges.
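The same traversal idea with a queue, sketched in Java over the adjacency list used earlier; the graph and names are illustrative:

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class GraphBFS {
    static void bfs(Map<String, List<String>> adj, String start) {
        Queue<String> queue = new ArrayDeque<>();
        Set<String> visited = new HashSet<>();
        queue.add(start);
        visited.add(start);
        while (!queue.isEmpty()) {
            String node = queue.remove();            // take the node at the front of the queue
            System.out.print(node + " ");
            for (String next : adj.getOrDefault(node, List.of())) {
                if (visited.add(next)) {             // enqueue each unvisited neighbour at the rear
                    queue.add(next);
                }
            }
        }
    }

    public static void main(String[] args) {
        // Example graph with edges {a,b}, {a,c}, {b,c}, {c,d}.
        Map<String, List<String>> adj = Map.of(
                "a", List.of("b", "c"),
                "b", List.of("a", "c"),
                "c", List.of("a", "b", "d"),
                "d", List.of("c"));
        bfs(adj, "a"); // visits level by level: a b c d
    }
}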
Applications of BFS are:
Peer-to-peer networks: for example, in BitTorrent, BFS is used to find all neighbouring nodes.
Crawlers in search engines.
Social networking websites, and many more.