DS_class_ppts(Searching,Sorting to AVLtrees) (1)

The document discusses sorting and searching algorithms, detailing various sorting methods such as selection, insertion, bubble, quick, and merge sort, along with their characteristics and algorithms. It also covers searching techniques, including sequential and binary search, explaining their use cases and efficiency. Additionally, it touches on special forms of matrices and polynomial representations in data structures.

SORTING & SEARCHING.

Looking for data.


SORTING.
One of the most common data-processing applications.
Sorting is the process through which data are arranged according to their values.
If data were not ordered, hours could be spent trying to find a single piece of information.
Example: consider the difficulty of finding someone’s telephone number in a telephone book that had no internal order.
SORT CLASSIFICATIONS.
TYPES OF SORTS.
Internal: all data are held in primary memory during the sorting process.
External: uses primary memory for the current data being sorted and secondary storage for data not fitting in primary memory.
THREE INTERNAL SORTS.

Insertion sorts: insertion sort, shell sort
Selection sorts: selection sort, heap sort
Exchange sorts: bubble sort, quick sort
SORT ORDER.
Data may be sorted in either ascending or descending sequence.
If the order of the sort is not specified, it is assumed to be ascending.
Examples of common data sorted in ascending sequence are the dictionary and the telephone book.
Examples of common descending data are percentages of games won in a sporting event such as baseball, or grade-point averages for honor students.
SORT STABILITY.
Stability is an attribute of a sort, indicating that data with equal keys maintain their relative input order in the output.
PASSES.
During the sorting process, the data are traversed many times. Each traversal of the data is referred to as a sort pass.
Depending on the algorithm, the sort pass may traverse the whole list or just a section of the list.
Also, a characteristic of a sort pass is the placement of one or more elements in the sorted list.
SELECTION SORT.
✘ In each pass of the selection sort, the smallest element is
selected from the unsorted sublist and exchanged with the
element at the beginning of the unsorted list.
✘ If there is a list of n elements, therefore, n – 1 passes are
needed to completely rearrange the data.
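The pass structure just described can be sketched in C (the function and array names are illustrative, not from the slides):

```c
/* Selection sort: in each pass the smallest element of the unsorted
   sublist a[i..n-1] is selected and exchanged with the element at the
   beginning of the unsorted sublist; n - 1 passes sort the list. */
void selection_sort(int a[], int n)
{
    int i, j, min, tmp;
    for (i = 0; i < n - 1; i++) {        /* n - 1 passes */
        min = i;
        for (j = i + 1; j < n; j++)      /* find smallest in unsorted part */
            if (a[j] < a[min])
                min = j;
        if (min != i) {                  /* exchange with first unsorted element */
            tmp = a[i]; a[i] = a[min]; a[min] = tmp;
        }
    }
}
```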
SELECTION SORT: Examples (figures).
INSERTION SORT.
✘ Given a list, it is divided into two parts: sorted and
unsorted.
✘ In each pass the first element of the unsorted sublist is
transferred to the sorted sublist by inserting it at the
appropriate place.
✘ If the list has n elements, it will take at most n – 1 passes to
sort the data.
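The pass described above can be sketched in C (names are illustrative, not from the slides):

```c
/* Insertion sort: a[0..i-1] is the sorted sublist; each pass takes the
   first element of the unsorted sublist and inserts it at its proper
   place; at most n - 1 passes are needed. */
void insertion_sort(int a[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++) {            /* at most n - 1 passes */
        key = a[i];
        for (j = i; j > 0 && a[j - 1] > key; j--)
            a[j] = a[j - 1];             /* shift larger elements right */
        a[j] = key;                      /* insert at the proper place */
    }
}
```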
INSERTION SORT: Example (figure).
BUBBLE SORT.

✘ The list is divided into two sublists: sorted and unsorted.
✘ The smallest element is bubbled from the unsorted sublist and moved to the sorted sublist.
✘ After moving the smallest to the sorted list, the wall moves one element to the right, increasing the number of sorted elements and decreasing the number of unsorted ones.
✘ Each time an element moves from the unsorted sublist to the sorted sublist, one sort pass is completed.
✘ Given a list of n elements, the bubble sort requires up to n – 1 passes to sort the data.
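The wall-based passes described above can be sketched in C (names are illustrative, not from the slides):

```c
/* Bubble sort as described: each pass bubbles the smallest element of
   the unsorted sublist up to the wall, then the wall moves one element
   to the right; up to n - 1 passes are needed. */
void bubble_sort(int a[], int n)
{
    int wall, j, tmp;
    for (wall = 0; wall < n - 1; wall++)   /* up to n - 1 passes */
        for (j = n - 1; j > wall; j--)     /* bubble smallest toward the wall */
            if (a[j] < a[j - 1]) {
                tmp = a[j]; a[j] = a[j - 1]; a[j - 1] = tmp;
            }
}
```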
BUBBLE SORT: Example (figure).
QUICK SORT.
✘ Quick sort, also an exchange sort method, was developed by C. A. R. Hoare in 1962.
✘ Quick sort is an exchange sort in which a pivot key is placed in its correct position in the array while rearranging other elements widely dispersed across the list.
✘ More efficient than the bubble sort because a typical exchange involves elements that are far apart, so fewer exchanges are required to correctly position an element.
QUICK SORT.
✘ Also called partition-exchange sort.
✘ Each iteration of the quick sort selects an element, known as pivot,
and divides the list into three groups:
- Partition of elements whose keys are less than the pivot’s key,
- Pivot element placed in its ultimately correct location in the list,
- Partition of elements greater than or equal to the pivot’s key.
✘ The pivot can be any element of the array: the first element, the last element or any random element.
✘ Approach is recursive.
LOGIC OF PARTITION.
✘ In the array {52, 37, 63, 14, 17, 8, 6, 25}, 25 is taken as pivot.
✘ First pass: {6, 8, 17, 14, 25, 63, 37, 52}
✘ After the first pass, the pivot is set at its position in the final sorted array, with all the elements smaller than it on its left and all the elements larger than it on its right.
✘ Next, {6 8 17 14} and {63 37 52} are considered as two separate subarrays.
✘ The same recursive logic is applied to them, and this continues until the complete array is sorted.
HOW DOES QUICK SORT WORK?
Selection of pivot to partition the array:
Pivot (key) = first element; find a position for it.
Left partition < pivot, right partition ≥ pivot.
Repeat recursively for the left and right partitions.

The partition step can also be used for finding the kth smallest or largest element in an array of size n without completely sorting it:
When the pivot element is placed in the (k-1)th position, it is the kth smallest element.
When the pivot element is placed in the (n-k)th position, it is the kth largest element.
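The kth-smallest idea can be sketched as a partition-based selection (quickselect). The partition below mirrors the first-element-as-pivot algorithm given later in these slides; all names are illustrative:

```c
/* Partition a[low..high] around its first element; returns the pivot's
   final index: elements left of it are smaller, right are >= pivot. */
static int part(int a[], int low, int high)
{
    int key = a[low], i = low + 1, j = high, t;
    for (;;) {
        while (i < high && key >= a[i]) i++;
        while (key < a[j]) j--;
        if (i < j) { t = a[i]; a[i] = a[j]; a[j] = t; }
        else { t = a[low]; a[low] = a[j]; a[j] = t; return j; }
    }
}

/* Returns the kth smallest (k = 1..n) of a[0..n-1]; rearranges a.
   Only the partition containing position k - 1 is recursed into. */
int kth_smallest(int a[], int n, int k)
{
    int low = 0, high = n - 1;
    for (;;) {
        int p = part(a, low, high);
        if (p == k - 1) return a[p];   /* pivot sits at position k - 1 */
        if (p > k - 1) high = p - 1;   /* answer is in the left partition */
        else low = p + 1;              /* answer is in the right partition */
    }
}
```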
QUICK SORT ALGORITHM.
✘ Two Algorithms:
- Quick Sort Recursive
- Partition
QUICK SORT EXAMPLES (figures: partitions; pivot element = 6).
QUICK SORT ALGORITHM.
Algorithm Quicksort (Array, Low, High)
1. If (Low < High) Then
1. Set Mid = Partition (Array, Low, High)
2. Quicksort (Array, Low, Mid – 1)
3. Quicksort (Array, Mid + 1, High)
2. End
QUICK SORT ALGORITHM.
Algorithm Partition (Array, Low, High)
1. Set Key = Array[Low], i = Low + 1, j = High
2. Repeat Steps A through C
   A. while (i < High && Key ≥ Array[i]) i++
   B. while (Key < Array[j]) j--
   C. if (i < j) then
          swap Array[i] with Array[j]
      else
          swap Array[Low] with Array[j]
          return j {Position for Key}
3. End
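A C rendering of the Quicksort and Partition algorithms above (first element as pivot; helper names are illustrative):

```c
static void swap_ints(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Partition a[low..high] around a[low]; returns the pivot's position. */
static int partition(int a[], int low, int high)
{
    int key = a[low];                         /* pivot key */
    int i = low + 1, j = high;
    for (;;) {
        while (i < high && key >= a[i]) i++;  /* step A */
        while (key < a[j]) j--;               /* step B */
        if (i < j)                            /* step C */
            swap_ints(&a[i], &a[j]);
        else {
            swap_ints(&a[low], &a[j]);        /* place pivot */
            return j;                         /* position for Key */
        }
    }
}

void quicksort(int a[], int low, int high)
{
    if (low < high) {
        int mid = partition(a, low, high);
        quicksort(a, low, mid - 1);           /* left partition */
        quicksort(a, mid + 1, high);          /* right partition */
    }
}
```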
DIVIDE & CONQUER.
✘ Divide-and-conquer is a general algorithm design paradigm.
✘ Divide: divide the input data S into two disjoint subsets S1 and S2.
✘ Recur: solve the subproblems associated with S1 and S2.
✘ Conquer: combine the solutions for S1 and S2 into a solution for S.
✘ The base cases for the recursion are subproblems of size 0 or 1.
MERGE SORT.
✘ Invented by John von Neumann in 1945; an example of Divide & Conquer.
✘ Repeatedly divides the data in half, sorts each half, and combines the
sorted halves into a sorted whole.
✘ Basic algorithm:
- Divide the list into two roughly equal halves.
- Sort the left half.
- Sort the right half.
- Merge the two sorted halves into one sorted list.
✘ Often implemented recursively
✘ Runtime: O(n log n).
MERGE SORT ALGORITHM.

Algorithm mergeSort(S)
Input: Sequence S with n elements
Output: Sequence S sorted
if S.size() > 1
    (S1, S2) ← partition(S, n/2)
    mergeSort(S1)
    mergeSort(S2)
    S ← merge(S1, S2)

Algorithm merge(S1, S2, S)
Input: Two arrays S1 & S2 of size n1 and n2, sorted in non-decreasing order, and an empty array S of size at least (n1+n2)
Output: S containing the elements from S1 & S2 in sorted order
i ← 1
j ← 1
while (i ≤ n1) and (j ≤ n2)
    if S1[i] ≤ S2[j] then
        S[i+j-1] ← S1[i]
        i ← i + 1
    else
        S[i+j-1] ← S2[j]
        j ← j + 1
while (i ≤ n1) do
    S[i+j-1] ← S1[i]
    i ← i + 1
while (j ≤ n2) do
    S[i+j-1] ← S2[j]
    j ← j + 1
Merge Sort example.

split:  22 18 12 -4 58 7 31 42
split:  [22 18 12 -4] [58 7 31 42]
split:  [22 18] [12 -4] [58 7] [31 42]
merge:  [18 22] [-4 12] [7 58] [31 42]
merge:  [-4 12 18 22] [7 31 42 58]
merge:  -4 7 12 18 22 31 42 58
SEARCHING.
✘ One of the more common and time-consuming operations in computer science is searching:
✘ the process used to find the location of a target among a list of objects.
✘ The two basic search algorithms are:
✘ sequential search, including three interesting variations, and
✘ binary search.
SEQUENTIAL SEARCH.
✘ Used whenever the list is not ordered.
✘ Generally, this technique is used only for small lists, or lists that are not searched often.
✘ Process: start searching for the target at the beginning of the list and continue until the target is found or the end of the list is reached.
✘ This approach has two possible outcomes: the element is found (successful) or the end of the list is reached (unsuccessful).
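The process above can be sketched in C (function and parameter names are illustrative):

```c
/* Sequential search: scan from the beginning of the list until the
   target is found or the end of the list is reached.
   Returns the index of the target, or -1 when unsuccessful. */
int seq_search(int list[], int n, int target)
{
    int i;
    for (i = 0; i < n; i++)
        if (list[i] == target)
            return i;     /* successful search */
    return -1;            /* reached end of list: unsuccessful */
}
```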
Sequential Search: examples for successful and unsuccessful search (figures).
BINARY SEARCH.
✘ The sequential search algorithm is very slow: for an array of 1000 elements, 1000 comparisons are made in the worst case.
✘ If the array is not sorted, the sequential search is the only
solution.
✘ However, if the array is sorted, we can use a more efficient
algorithm called binary search.
✘ Generally speaking, Binary search used whenever the list
starts to become large.
BINARY SEARCH.
✘ Begins by testing the data in the element at the middle of the array to
determine if the target is in the first or the second half of the list.
✘ If target in first half, there is NO need to check the second half.
✘ If target in second half, NO need to test the first half.
✘ In other words, half the list is eliminated from further consideration with
just one comparison.
✘ This process is repeated, eliminating half of the remaining list with each test, until the target is found or it is determined not to exist in the list.
✘ To find the middle of the list, three variables needed: one to identify the
beginning of the list, one to identify the middle of the list, and one to
identify the end of the list.
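The three-variable scheme just described can be sketched in C (names are illustrative):

```c
/* Binary search on a sorted array, using three index variables: one
   for the beginning, one for the middle, and one for the end of the
   remaining list. Returns the index of the target, or -1. */
int binary_search(int list[], int n, int target)
{
    int first = 0, last = n - 1, mid;
    while (first <= last) {
        mid = first + (last - first) / 2;  /* middle of remaining list */
        if (list[mid] == target)
            return mid;                    /* target found */
        if (list[mid] < target)
            first = mid + 1;               /* eliminate the first half */
        else
            last = mid - 1;                /* eliminate the second half */
    }
    return -1;                             /* target not in the list */
}
```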
Binary Search: examples for successful and unsuccessful search (figures).
✘ End
Special forms of square
matrices- Sparse, Polynomial
2-Dimensional Arrays

Matrices
• An m x n matrix is a table with m rows and n columns (m and n are the dimensions of the matrix)

• Operations: addition, subtraction, multiplication, transpose, etc.

Matrices (Continued …)
• Representations (mapped as 1-D array)

Special forms of square matrices (m = n)
• Diagonal: M (i, j) = 0 for i ≠j
• Diagonal entries = m

Special forms of square matrices (m = n)
• Tridiagonal: M (i, j) = 0 for | i – j | > 1
✓ 3 diagonals: Main → i = j; Upper → j = i + 1; Lower → j = i – 1
✓ Number of elements on the three diagonals: 3*m – 2
✓ Mapping: row-major, column-major or by diagonals (beginning with the lowest)
Special forms of square matrices (m = n)
• Triangular matrices
✓ No. of elements in the non-zero region: n (n + 1) / 2
✓ Upper triangular: M (i, j) = 0 for i > j
✓ Lower triangular: M (i, j) = 0 for i < j

Upper: M (i, j) = A (n*i + j – i*(i+1)/2) for i ≤ j

Lower: M (i, j) = A (i*(i+1)/2 + j) for i ≥ j
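The two mapping formulas can be checked with small index functions (0-based, as in the formulas; function names are illustrative):

```c
/* Index of M(i, j) inside the 1-D array A that stores only the
   non-zero triangle of an n x n matrix (0-based, row-major). */
int upper_index(int n, int i, int j)   /* valid for i <= j */
{
    return n * i + j - i * (i + 1) / 2;
}

int lower_index(int i, int j)          /* valid for i >= j */
{
    return i * (i + 1) / 2 + j;
}
```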
2-Dimensional Arrays

Special forms of square matrices (m = n)


• Symmetric: M (i, j) = M (j, i) for all i and j
✓ Can be stored as lower- or upper-triangular
✓ E.g., table of inter-city distances for a set of cities

Sparse Matrices

Sparse matrices
• Number of non-zero elements is very small compared to the total number of elements
• Represented as a 1-D array of triplets
✓ Element 1 onwards – Row, Column and Value for all non-zero elements
✓ Element 0 – number of rows, columns and non-zero elements
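A minimal C sketch of the triplet representation (the struct, the fixed column count of 4, and the function name are illustrative assumptions):

```c
/* Sparse matrix as a 1-D array of triplets: element 0 holds the
   number of rows, columns and non-zero elements; elements 1 onwards
   hold (row, column, value) for each non-zero entry. */
typedef struct { int row, col, val; } triplet;

/* Build the triplet form of an m x 4 matrix (4 columns assumed for
   this sketch). Returns the number of non-zero elements stored. */
int to_triplets(int m, int n, int a[][4], triplet t[])
{
    int i, j, count = 0;
    for (i = 0; i < m; i++)
        for (j = 0; j < n; j++)
            if (a[i][j] != 0) {
                count++;
                t[count].row = i;
                t[count].col = j;
                t[count].val = a[i][j];
            }
    t[0].row = m; t[0].col = n; t[0].val = count;  /* header element */
    return count;
}
```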

Polynomials
Polynomial
• An expression of the form
aₙxⁿ + aₙ₋₁xⁿ⁻¹ + … + a₁x¹ + a₀
• One possible way of representation
✓ Store coefficients of the terms in an array element at position equal
to the exponent
✓ Disadvantage: waste of space, especially in the case of a sparse polynomial

Polynomial(Continued …)
• Represented as a 1-D array of doublets
✓ Element 1 onwards – coefficient and exponent of all terms
✓ Element 0 – number of terms (stored in either the coefficient or exponent field)

Polynomial(Continued …)
• Represented as a linked list
✓ Store the coefficient and exponent (power) of each term in a
node of a singly linked list

//add polynomial as 1d array
#include <stdio.h>

typedef struct
{
    int coeff, exp;
} poly;

void getpoly(poly p[]);
void showpoly(poly p[]);
void add(poly p1[], poly p2[], poly p3[]);

int main()
{
    poly p1[10], p2[10], p3[20];
    printf("1st polynomial:\n");
    getpoly(p1);
    printf("\n2nd polynomial:\n");
    getpoly(p2);
    printf("\n1st polynomial: ");
    showpoly(p1);
    printf("\n2nd polynomial: ");
    showpoly(p2);
    add(p1, p2, p3);
    printf("\nSum : ");
    showpoly(p3);
    printf("\n");
    return 0;
}

/* Read a polynomial: element 0 holds the term count, elements 1..n
   hold (coefficient, exponent) pairs in decreasing order of exponent */
void getpoly(poly p[])
{
    int n, i;
    printf("Enter the number of terms: ");
    scanf("%d", &n);
    printf("Enter the terms (coefficient, exponent):\n");
    p[0].coeff = n;
    for (i = 1; i <= n; i++)
        scanf("%d %d", &p[i].coeff, &p[i].exp);
}

void showpoly(poly p[])
{
    int i;
    for (i = 1; i <= p[0].coeff; i++)
        printf("(%d,%d) ", p[i].coeff, p[i].exp);
}

/* Merge the two term lists, adding coefficients of equal exponents */
void add(poly p1[], poly p2[], poly p3[])
{
    int i = 1, j = 1, n1 = p1[0].coeff, n2 = p2[0].coeff, n3 = 0;
    while (i <= n1 && j <= n2)
    {
        if (p1[i].exp > p2[j].exp)
        {
            n3++;
            p3[n3].coeff = p1[i].coeff;
            p3[n3].exp = p1[i].exp;
            i++;
        }
        else if (p1[i].exp < p2[j].exp)
        {
            n3++;
            p3[n3].coeff = p2[j].coeff;
            p3[n3].exp = p2[j].exp;
            j++;
        }
        else
        {
            int sum = p1[i].coeff + p2[j].coeff;
            if (sum != 0)
            {
                n3++;
                p3[n3].coeff = sum;
                p3[n3].exp = p1[i].exp;
            }
            i++;
            j++;
        }
    }
    while (i <= n1)
    {
        n3++;
        p3[n3].coeff = p1[i].coeff;
        p3[n3].exp = p1[i].exp;
        i++;
    }
    while (j <= n2)
    {
        n3++;
        p3[n3].coeff = p2[j].coeff;
        p3[n3].exp = p2[j].exp;
        j++;
    }
    p3[0].coeff = n3;
}
END
GRAPHS.
Useful data structures.
WHAT WE KNOW ALREADY.
Need for ADTs and data structures:
Regular arrays (with dynamic sizing)
Linked lists
Stacks, queues
Heaps
Unbalanced and balanced search trees
Some algorithms like tree traversals
BASIC CONCEPTS: GRAPHS.

A graph is a data structure that consists of a set of nodes or vertices and a set of edges that relate the nodes to each other.
The set of edges describes relationships among the nodes.
MATHEMATICAL OR FORMAL DEFINITION.
A graph G is defined as follows:
G = (V, E)
V(G): a finite, nonempty set of vertices V = {v1, v2, …, vn}
E(G): a set of edges (pairs of vertices) E = {e1, e2, …, em}
An edge "connects" the vertices.
Graphs can be directed or undirected.
MATHEMATICAL OR FORMAL DEFINITION.
 Undirected graphs: edges have Directed graphs or digraphs: edges
have direction
no specific direction.
Thus, (u, v) ∊ E does not imply (v, u) ∊
Edges are always "two-way“ E.
Thus, (u, v) ∊ E implies (v, u) ∊ E. Let (u, v) ∊ E mean u → v
Only one of these edges needs to
Where u is the source, and
be in the set, the other is implicit.
v the destination
D
D D
A A A
C C or C
B B

B
2 edges here
APPLICATIONS OF GRAPHS.

 Web pages with links


 Facebook friends
 Methods in a program that call each other
 Road maps
 Airline routes
 Family trees
APPLICATIONS OF GRAPHS.

Graphs can be used to solve complex routing problems, such as designing and routing airlines among the airports they serve. Similarly, graphs can be used to route messages over a computer network from one node to another.
TREES & GRAPHS
In a non-linear list, each element can have more than one successor.
In a tree, an element can have only one predecessor.
In a graph, an element can have one or more predecessors.

One final point: a tree is a graph in which each vertex has only one predecessor; however, a graph is not necessarily a tree.
BASIC CONCEPTS IN COMPUTER SCIENCE.
A graph is a collection of nodes, called vertices, and a collection of segments, called lines, connecting pairs of vertices.
So, a graph consists of two sets: a set of vertices and a set of lines (we know that).

A directed graph, or digraph for short, is a graph in which each line has a direction (arrow head) to its successor.

The lines in a directed graph are known as arcs. In a directed graph, the flow along the arcs between two vertices can follow only the indicated direction.
BASIC CONCEPTS.
An undirected graph is a graph in which there is no direction (arrow head) on any of the lines, which are known as edges.

In an undirected graph, the flow between two vertices can go in either direction.
A path is a sequence of vertices in which each vertex is adjacent to the next one.
Two vertices in a graph are said to be adjacent vertices (or neighbors) if there is a path of length 1 connecting them.
{A, B, C, E} is one path and {A, B, E, F} is another.

In (a), B is adjacent to A, whereas E is not adjacent to D; on the other hand, D is adjacent to E.
In (b), E and D are adjacent, but D and F are not.
CYCLES AND LOOPS.
A cycle is a path consisting of at least three vertices that starts and ends with the same vertex.
In Fig. 11-1(b): B, C, D, E, B is a cycle.
In Figure 11-1(a), the same vertices do not constitute a cycle, because in a digraph a path can follow only the direction of the arc, whereas in an undirected graph a path can move in either direction along the edge.

A loop is a special case of a cycle in which a single arc begins and ends with the same vertex. In a loop the end points of the line are the same.

Length: the length of a path is the number of edges in the path.
PATHS AND CYCLES IN DIRECTED GRAPHS

(Figures: two directed graphs on vertices A–E.)

Is there a path from A to D? No
Does the graph contain any cycles? No
CONNECTED GRAPHS.
Two vertices are said to be connected if there is a path between them.

A graph is said to be connected if, ignoring direction, there is a path from any vertex to any other vertex.
A directed graph is strongly connected if there is a path from each vertex to every other vertex in the digraph.
A directed graph is weakly connected if at least two vertices are not connected.

A connected undirected graph would always be strongly connected, so the concept is not normally used with undirected graphs.
DISJOINT GRAPHS.

A graph is a disjoint graph if it is not connected.


DEGREE OF A VERTEX. Number of lines incident to it.

The indegree is the number of arcs entering the vertex.
The outdegree of a vertex in a digraph is the number of arcs leaving the vertex.

The degree of vertex B is 3.
The degree of vertex E is 4.
The indegree of vertex B is 1 and its outdegree is 2.
DEGREE OF A VERTEX. Number of lines incident to it.

The indegree of vertex E is 3 and its outdegree is 1.
OPERATIONS.
There are six primitive graph operations that provide the basic modules needed to maintain a graph:

Insert a vertex, Delete a vertex, Add an edge, Delete an edge, Find a vertex, Traverse a graph.
INSERT VERTEX.
Insert vertex adds a new vertex to a graph.
When a vertex is inserted, it is disjoint; that is, it is not connected to any other vertices in the list.
After a vertex is inserted, it must be connected.
Inserting a vertex is just the first step in the insertion process.

Figure: A graph before and after a new vertex is added.


DELETE VERTEX.

Delete vertex removes a vertex from the graph.
When a vertex is deleted, all connecting edges are also removed.
ADD EDGE.
Add edge connects a vertex to a destination vertex.
To add an edge, two vertices must be specified.
If a vertex requires multiple edges, add edge must be called once for each adjacent vertex.
If the graph is a digraph, one of the vertices must be specified as the source and one as the destination.

Figure: Adding an edge, {A, E}, to the graph.
DELETE EDGE.
Delete edge removes one edge from a graph.

Figure: Deleting the edge, {A, E}, from the graph.
FIND VERTEX.
Find vertex traverses a graph, looking for a specified vertex.
If the vertex is found, its data are returned. If it is not found, an error is indicated.

Figure: Find vertex traverses the graph, looking for vertex C.
END.
GRAPH STORAGE
STRUCTURES.
Ref. Books: Corman & Forouzan
REPRESENTATIONS OF GRAPHS.
Choice between two standard ways to represent a graph G = (V, E):
 As an adjacency matrix, or
 As a collection of adjacency lists.

Either way applies to both directed and undirected graphs.

Because the adjacency-list representation provides a compact way to represent sparse graphs, those for which |E| is much less than |V|², it is usually the method of choice.

An adjacency-matrix representation is chosen, however, when the graph is dense, |E| is close to |V|², or when we need to be able to tell quickly if there is an edge connecting two given vertices.
GRAPH STORAGE STRUCTURES.
To represent a graph, we need to store two sets:
 First set represents the vertices of the graph, and
 Second set represents the edges or arcs.

The two most common structures used to store these sets are arrays and linked lists.

 Although arrays offer some simplicity and processing efficiencies, the number of vertices must be known in advance. This is a major limitation.
 Only one edge can be stored between any two vertices.
1. ADJACENCY MATRIX.
The adjacency matrix uses:
 A vector (one-dimensional array) for the vertices, and
 A matrix (two-dimensional array) to store the edges.

If two vertices are adjacent, that is, if there is an edge between them, the intersection has a value of 1.

If there is no edge between them, the intersection is set to 0.

If the graph is directed, the intersection in the adjacency matrix


indicates the direction.
1. ADJACENCY MATRIX.
For an adjacency matrix representation of a graph G = (V, E), we assume that the vertices are numbered 1, 2, …, |V| in some arbitrary manner.

Then the adjacency matrix representation of G consists of a |V| x |V| matrix A = (a(i, j)) such that:

a(i, j) = 1 if (i, j) ∈ E; 0 otherwise
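A minimal C sketch of an adjacency matrix (0-based vertex numbers; MAXV and all names are illustrative assumptions):

```c
#define MAXV 10              /* illustrative capacity */

/* Vertices are numbered 0..n-1 and a[i][j] = 1 iff (i, j) is an edge. */
typedef struct {
    int n;                   /* number of vertices */
    int a[MAXV][MAXV];       /* 0/1 edge matrix */
} graph;

void add_edge(graph *g, int u, int v, int directed)
{
    g->a[u][v] = 1;
    if (!directed)
        g->a[v][u] = 1;      /* undirected: the edge goes both ways */
}
```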
1. ADJACENCY MATRIX.
Number of 1’s in the adjacency matrix:
 Undirected graph: Sum of degrees of all vertices

 Directed graph: Sum of outdegrees of all vertices


ADJACENCY MATRIX FOR UNDIRECTED GRAPH (figure).
ADJACENCY MATRIX FOR DIRECTED GRAPH (figure).
ADJACENCY MATRIX
An element of the adjacency matrix is either 0 or 1.
Any matrix whose elements are either 0 or 1 is called a bit matrix or Boolean matrix.
For a given graph G = (V, E), the adjacency matrix depends upon the ordering of the elements of V.
For different orderings of the elements of V we get different adjacency matrices.

(Figure: a digraph on vertices V1–V4 with adjacency matrix)

         V1  V2  V3  V4
     V1   0   1   0   1
A =  V2   1   0   0   0
     V3   1   1   0   1
     V4   0   1   0   0
ADJACENCY MATRIX (continued)

The number of elements in the ith row whose value is 1 is equal to the out-degree of node Vi.
The number of elements in the jth column whose value is 1 is equal to the in-degree of node Vj.
For a NULL graph, which consists of only n nodes but no edges, the adjacency matrix has all its elements 0; i.e. the adjacency matrix is the NULL matrix.
2. ADJACENCY LIST.
The adjacency list uses:
 A linked list to store the vertices, and
 A two-dimensional linked list to store the arcs.

The vertex list is a singly linked list of the vertices in the list.

Depending on the application, it could also be implemented using doubly linked lists or circularly linked lists.

The pointer at the left of the list links the vertex entries.

The pointer at the right in the vertex is a head pointer to a linked list of edges from the vertex.
2. ADJACENCY LIST.
The adjacency-list representation of a graph G = (V,E) consists of an array
Adj of |V| lists, one for each vertex in V .

For each u ∈ V , the adjacency list,


 Adj[u] contains all the vertices v s.t. there is an edge (u,v) ∈ E. That is, Adj[u]
consists of all the vertices adjacent to u in G. (Alternatively, it may contain
pointers to these vertices).

Sum of lengths of all adjacency lists:
 |E| for a directed graph (an edge (u, v) appears in Adj[u] and not in Adj[v])
 2|E| when G is undirected (an edge (u, v) appears in both Adj[u] and Adj[v])
ADJACENCY LIST: ONE MORE EXAMPLE.

Graph on vertices 0–4; adjacency lists:
0 → 1, 2, 3, 4
1 → 0, 3
2 → 0, 3, 4
3 → 0, 1, 2, 4
4 → 0, 2, 3
REPRESENTATIONS FOR UNDIRECTED GRAPH: EXAMPLE 1 (figure, vertices 1–5).
REPRESENTATIONS FOR DIRECTED GRAPH: EXAMPLE 2 (figure, vertices 1–6).
NETWORKS.
A network is a graph whose lines are weighted; it is also called a weighted graph.
The meaning of the weights depends on the application. For example, an airline might use a graph to represent the routes between the cities that it serves. Here,
 Vertices represent cities, and
 An edge is a route between 2 cities.
 An edge’s weight could be:
  Miles between the 2 cities, or
  Price of the flight.

STORING WEIGHTS IN GRAPH STRUCTURES.
▪ Since weight is an attribute of an edge, it is stored in the structure that contains the edge.
▪ In an adjacency matrix, the weight is stored as the intersection value.
▪ In an adjacency list, it is stored as the value in the adjacency linked list.
INCIDENCE MATRIX OF A DIGRAPH.
The incidence matrix is a |E| x |V| matrix B = (b(i, j)) such that:

b(i, j) = -1 if edge i leaves vertex j; 1 if edge i enters vertex j; 0 otherwise
(assuming no self loops)

It has one column for each vertex and one row for each edge.

          1    2    3    4
    E1   -1    1    0    0
    E2    0   -1    1    0
    E3   -1    0    1    0
    E4   -1    0    0    1
    E5    0    0   -1    1
DIRECTED ACYCLIC GRAPHS (DAG).
A directed graph with no directed cycles.

(Figures: a directed graph, a directed acyclic graph and a tree, each on vertices A–E.)
GRAPH TRAVERSAL
Two commonly used traversal techniques are:
 Depth First Search (DFS)
  Process all of a vertex’s descendants before moving to an adjacent vertex.
 Breadth First Search (BFS)
  Process all adjacent vertices of a vertex before going to the next level.
DEPTH FIRST SEARCH (DFS)
Steps:

1. Begin by pushing the first vertex A into the stack.
2. Then loop: pop the stack. After processing the vertex, push all of the adjacent vertices into the stack. To process X at step 2, therefore, pop X from the stack, process it, and then push G & H into the stack, giving the stack contents for step 3.
3. When the stack is empty, the traversal is complete.
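The steps above can be sketched in C with an explicit stack. Unlike the slide text, this sketch marks a vertex as visited when it is pushed, so nothing enters the stack twice; all names are illustrative:

```c
#define NVERT 16                       /* illustrative capacity */

/* Stack-based DFS over an adjacency matrix. Adjacent vertices are
   pushed in reverse order so the lowest-numbered neighbor is
   processed first. Returns the number of vertices reached. */
int dfs(int adj[NVERT][NVERT], int n, int start, int order[])
{
    int stack[NVERT], visited[NVERT] = {0};
    int top = 0, count = 0, v, w;

    stack[top++] = start;              /* 1. push the first vertex */
    visited[start] = 1;
    while (top > 0) {                  /* 2. loop until the stack is empty */
        v = stack[--top];              /* pop */
        order[count++] = v;            /* process the vertex */
        for (w = n - 1; w >= 0; w--)   /* push its adjacent vertices */
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;
                stack[top++] = w;
            }
    }
    return count;                      /* 3. traversal complete */
}
```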
DEPTH FIRST SEARCH (DFS)
It is like a preorder traversal of a tree.
Traversal can start from any vertex Vi.
Vi is visited, and then all vertices adjacent to Vi are traversed recursively using DFS.

DFS (G, 1) is given by:
Step 1: Visit (1)
Step 2: DFS (G, 2), DFS (G, 3), DFS (G, 4), DFS (G, 5)
DFS (G, 2): Step 1: Visit (2); Step 2: DFS (G, 6)
DFS (G, 6): Step 1: Visit (6); Step 2: DFS (G, 3), DFS (G, 8)
…

DFS of the given graph starting from node 1 is given by:
1 2 6 3 8 7 4 5
DEPTH FIRST SEARCH (DFS)

(Figures: two example graphs.)
DFS orders: A B D H E C F G and A B D C F E
BREADTH FIRST SEARCH (BFS)
Steps:
1. Begin by enqueuing vertex A in the queue.
2. Then loop, dequeuing the queue and processing the vertex from the front of the queue. After processing the vertex, place all of its adjacent vertices into the queue. Thus, at step 2, dequeue vertex X, process it, and then place vertices G & H in the queue. In step 3, process vertex G.
3. When the queue is empty, the traversal is complete.
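The steps above can be sketched in C with an explicit queue; as with the DFS sketch, a vertex is marked visited when enqueued, and all names are illustrative:

```c
#define NV 16                          /* illustrative capacity */

/* Queue-based BFS over an adjacency matrix: enqueue the start vertex,
   then repeatedly dequeue, process, and enqueue unvisited adjacent
   vertices. Returns the number of vertices reached. */
int bfs(int adj[NV][NV], int n, int start, int order[])
{
    int queue[NV], visited[NV] = {0};
    int front = 0, rear = 0, count = 0, v, w;

    queue[rear++] = start;             /* 1. enqueue the first vertex */
    visited[start] = 1;
    while (front < rear) {             /* 2. loop until the queue is empty */
        v = queue[front++];            /* dequeue */
        order[count++] = v;            /* process the vertex */
        for (w = 0; w < n; w++)        /* enqueue its adjacent vertices */
            if (adj[v][w] && !visited[w]) {
                visited[w] = 1;
                queue[rear++] = w;
            }
    }
    return count;                      /* 3. traversal complete */
}
```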
Breadth First Search (BFS)

(Figures: three example graphs.)
BFS orders, level by level:
1 | 2 3 4 5 | 6 7 | 8
A | B C | D E F G | H
V0 | V1 V2 | V4 V6 V3 | V5
WRITE DFS & BFS OF FOLLOWING GRAPHS

(Figures: three exercise graphs, on vertices A–R, A–F and 0–5.)
END.
THREADED TREES.
THREADED TREES.
• Binary tree traversal algorithms are written using either recursion or
programmer-written stacks. If the tree must be traversed frequently, using
stacks rather than recursion may be more efficient.
• A third alternative is a threaded tree. In a threaded tree, null pointers are
replaced with pointers to their successor nodes.
• To build a threaded tree ----> first build a standard binary search tree.
• Then traverse the tree, changing the null right pointers to point to their
successors.
• The traversal for a threaded tree is straightforward. Once you locate the
far-left node, you loop, following the thread (the right pointer) to the
next node. No recursion or stack is needed. When you find a null thread
(right pointer), the traversal is complete.
Inorder traversal (LNR)
THREADED TREES.

Without threads, after reaching the far-left leaf, backtracking must be performed to process the right subtrees passed while navigating downwards. This is especially inefficient when the parent node has no right subtree.
Binary Trees

◼ Threaded Binary Tree


⚫ A binary tree with n nodes has n + 1 null pointers
 Waste of space due to null pointers
 Replace by threads pointing to inorder successor and/or inorder predecessor
(if any)

 Single threaded: Makes inorder traversal easier

 Double threaded: Makes inorder and postorder easier


Binary Trees

◼ Threaded Binary Tree (Continued…)


⚫ Implementation requirements
 Use a boolean value – thread or child pointer (0 : child, 1 : thread)

⚫ Advantage: Stack not required for inorder traversal


⚫ Disadvantage: Adjustment of pointers during insertion and deletion of nodes
Binary Trees

◼ Threaded Binary Tree (Continued…)


⚫ Example (for inorder traversal) (figure)
Priority Queue

◼ What is a priority queue?


⚫ Queue in which each element has a priority associated with it
⚫ Priority – a unique number that determines the relative importance of
that element compared to other
◼ Operations on Priority Queue
⚫ Insertion as usual
⚫ Deletion
Max (descending) priority queue – delete the element with the highest priority
Min (ascending) priority queue – delete the element with the least priority
Double Ended Queue

◼ What is a double ended queue (deque) ?


⚫ Queue in which insertion (enqueue) and deletion (dequeue) can be
made at both ends
◼ Operations on a deque
⚫ insert_rear
⚫ insert_front
⚫ delete_front
⚫ delete_rear
◼ Types of deque
⚫ Input-restricted: Insert at rear end only, delete from any end
⚫ Output-restricted: Delete from front end only, insert at any end
Priority Queue and Heap

◼ Priority Queue
⚫ Queue: FIFO data structure
⚫ Priority Queue: Order of deletion is determined by the element’s
priority
 Deleted either in increasing or decreasing priority
 Efficiently implemented with the heap data structure
  – A heap is a complete binary tree; it can be implemented using the array representation
 Other representations are leftist trees, Fibonacci heaps, binomial heaps, skew heaps and pairing heaps
  – Implemented with linked data structures
AVL
TREES.
BALANCED TREES.
AVL TREES.
While the binary search tree is simple and easy to understand, it has one major problem: it is not guaranteed to be balanced.
AVL TREES.
Also called AVL Search Trees.
In 1962, two Russian mathematicians,
G. M. Adelson-Velskii and E. M. Landis, created
the balanced binary tree structure named
after them
“the AVL tree”
AVL SEARCH TREES.
An AVL tree is a search tree in which the heights of the subtrees differ by no more than 1; the balance factor of every node is in {-1, 0, 1}.

It is thus a balanced binary tree.

-1: Right High (RH) – LST is shorter than RST
 0: Even High (EH) – LST is equal to RST
+1: Left High (LH) – LST is longer than RST

(Figures: the same keys stored as a skewed binary search tree and as an AVL tree.)
In the skewed binary search tree it takes 2 tests to locate 12, 3 tests to locate 14, and 8 tests to locate 52, so its search effort is O(n).
In the AVL tree it takes 4 tests to locate 8 and 14, and 3 tests to locate 20 and 52; the maximum search effort is either 3 or 4, so its search effort is O(log n).
Examples for AVL Trees.
An AVL tree is a height-balanced binary
search tree
AVL SEARCH TREES – Definition.
An AVL tree is a binary tree that either is empty or consists of two AVL subtrees, TL and TR, whose heights differ by no more than 1:
| HL – HR | ≤ 1
where HL is the height of the left subtree and HR is the height of the right subtree.

Because AVL trees are balanced by working with their height, they are also known as height-balanced trees.
BALANCE FACTORS of AVL Trees.
Why BALANCE TREES?

Whenever we insert a node into a tree or delete a node from a tree, the
resulting tree may become unbalanced.

When we detect that a tree has become unbalanced, we must rebalance it.

AVL trees are balanced by rotating nodes either to the left or to the right.
Consider the basic balancing algorithms for the four cases of unbalancing:

UNBALANCED cases:

Right of right: A subtree of a tree that is right high has also become right high.
Fix: Left Rotation (Single Rotation Left).

Left of left: A subtree of a tree that is left high has also become left high.
Fix: Right Rotation (Single Rotation Right).

Right of left: A subtree of a tree that is left high has become right high.
Fix: Left-Right Rotation (Double Rotation).

Left of right: A subtree of a tree that is right high has become left high.
Fix: Right-Left Rotation (Double Rotation).
Case 1: Left Rotation or Single Rotation Left
Insertion is Right of Right.

Case 2: Right Rotation or Single Rotation Right
Insertion is Left of Left.

Case 3: Left-Right Rotation or Double Rotation
Insertion is Left and then Right (Right of Left insertion).

Case 4: Right-Left Rotation or Double Rotation
Insertion is Right and then Left (Left of Right insertion).
Iterative traversals of binary tree
Iterative preorder traversal:
• Every time a node is visited, its info is printed and node is pushed to stack, since
the address of node is needed to traverse to its right later.
10

20 40

5 30

• Here 10 is printed and this node is pushed onto stack and then move left to 20.
This is done because we need to go right of 10 later.
Similarly 20, 5, 30 and 40 are pushed to the stack as they are visited.
• Once traversing the left side is over, pop the most pushed node and go to its right.
In the previous tree, after 5 is printed, recently pushed node 20 is popped and
move right to 30.
/* function for iterative preorder traversal */
void preorder(NODEPTR root)
{
    NODEPTR cur;
    NODEPTR s[20];              /* stack of node pointers */
    int top = -1;
    if (root == NULL)
    {
        printf("tree is empty");
        return;
    }
    cur = root;
    for (; ;)
    {
        while (cur != NULL)
        {
            printf("%d ", cur->info);
            push(cur, &top, s); /* push the node to the stack */
            cur = cur->llink;
        }
        if (!stack_empty(top))  /* more nodes remaining */
        {
            cur = pop(&top, s); /* pop the most recent node */
            cur = cur->rlink;   /* traverse right */
        }
        else
            return;
    }
}
Ex: 10

20 40

5 30

1. After 1st iteration of while loop


10 is printed
Node 10 is pushed to stack
Cur=cur→llink; i.e Cur=20 10

2. After 2nd iteration of while loop


20 is printed
20
Node 20 is pushed to stack
10
Cur=cur→llink; i.e Cur=5
3. After 3rd iteration of while loop
5
5 is printed 20
Node 5 is pushed to stack 10
Cur=cur→llink; i.e Cur=NULL

4. While loop terminates since cur==NULL


Stack is not empty
20
cur=pop( );i.e cur=node 5;
10
cur=cur→rlink; i.e cur=NULL

5. cur==NULL, while loop not entered


Stack not empty
cur=pop( ); i.e cur=node 20 10
cur=cur→rlink; i.e cur=30
6. While loop is entered
30 is printed 30
10
Node 30 is pushed to stack
Cur=cur→llink; i.e cur=NULL

7. While loop terminates since cur==NULL


Stack not empty
cur=pop( );i.e cur=node 30; 10

cur=cur→rlink; i.e cur=NULL

8. cur==NULL, while loop not entered


Stack not empty
cur=pop( ); i.e cur=node 10
cur=cur→rlink; i.e cur=40
9. While loop is entered
40 is printed
Node 40 is pushed to stack 40
cur=cur→llink; i.e cur=NULL

10. While loop terminates since cur==NULL


Stack not empty
cur=pop( );i.e cur=node 40;
cur=cur→rlink; i.e cur=NULL

11. cur==NULL and stack empty


return

Hence elements printed are 10, 20, 5, 30, 40


Iterative inorder traversal:
• Every time a node is visited, it is pushed to the stack without
printing its info, and we move left.
• After finishing the left side, pop an element from the stack, print it and
move right.

10

20 40

5 30

Here 10, 20, 5 are pushed to the stack. Then pop 5, print it and move right.
Now pop 20, print it, move right to 30, push 30 and move left.
Pop 30, print it and move right.
Pop 10, print it, move right to 40, push 40, and so on.
/* function for iterative inorder traversal */
void inorder(NODEPTR root)
{
    NODEPTR cur;
    NODEPTR s[20];              /* stack of node pointers */
    int top = -1;
    if (root == NULL)
    {
        printf("tree is empty");
        return;
    }
    cur = root;
    for (; ;)
    {
        while (cur != NULL)
        {
            push(cur, &top, s); /* push the node to the stack */
            cur = cur->llink;
        }
        if (!stack_empty(top))  /* more nodes remaining */
        {
            cur = pop(&top, s); /* pop the most recent node */
            printf("%d ", cur->info);
            cur = cur->rlink;   /* traverse right */
        }
        else
            return;
    }
}
Ex: 10

20 40

5 30

1. After 1st iteration of while loop


Node 10 is pushed to stack
Cur=cur→llink; i.e Cur=20
10

2. After 2nd iteration of while loop


Node 20 is pushed to stack
20
Cur=cur→llink; i.e Cur=5
10
3. After 3rd iteration of while loop
5
Node 5 is pushed to stack
20
Cur=cur→llink; i.e Cur=NULL
10

4. While loop terminates since cur==NULL


Stack is not empty
cur=pop( );i.e cur=node 5;
Print 5
cur=cur→rlink; i.e cur=NULL 20
10
5. cur==NULL, while loop not entered
Stack not empty
cur=pop( ); i.e cur=node 20
Print 20
cur=cur→rlink; i.e cur=30 10
6. While loop is entered
Node 30 is pushed to stack 30
Cur=cur→llink; i.e cur=NULL 10

7. While loop terminates since cur==NULL


Stack not empty
cur=pop( );i.e cur=node 30;
Print 30
10
cur=cur→rlink; i.e cur=NULL

8. cur==NULL, while loop not entered


Stack not empty
cur=pop( ); i.e cur=node 10
Print 10
cur=cur→rlink; i.e cur=40
9. While loop is entered
Node 40 is pushed to stack
cur=cur→llink; i.e cur=NULL 40

10. While loop terminates since cur==NULL


Stack not empty
cur=pop( );i.e cur=node 40;
Print 40
cur=cur→rlink; i.e cur=NULL

11. cur==NULL and stack empty


return

Hence elements printed are: 5 20 30 10 40


Iterative postorder traversal
• Here a flag variable is used to keep track of the traversal. A flag is
associated with each node on the stack. Flag == -1 indicates that traversing
the right subtree of that node is over.

Algorithm:
• Traverse left and push the nodes to the stack with their flags set to 1, until
NULL is reached.
• Then the flag of the current node is set to -1 and its right subtree is
traversed. The flag is set to -1 to indicate that the right subtree of that
node has been entered.
• Hence, if the flag is -1, traversing the right subtree of that node is over
and the item can be printed. If the flag is not negative, the right subtree has
not been traversed yet, so traverse right.
/* function for iterative postorder traversal */
void postorder(NODEPTR root)
{
    struct stack
    {
        NODEPTR node;
        int flag;
    };
    NODEPTR cur;
    struct stack s[20];
    int top = -1;
    if (root == NULL)
    {
        printf("tree is empty");
        return;
    }
    cur = root;
    for (; ;)
    {
        while (cur != NULL)
        {
            s[++top].node = cur;      /* traverse left, pushing the nodes */
            s[top].flag = 1;          /* with their flags set to 1 */
            cur = cur->llink;
        }
        while (s[top].flag < 0)
        {
            cur = s[top--].node;      /* flag is negative: right subtree visited, */
            printf("%d ", cur->info); /* so pop the node and print it */
            if (stack_empty(top))     /* if the stack is empty, traversal is complete */
                return;
        }
        cur = s[top].node;            /* after the left subtree is traversed, */
        cur = cur->rlink;             /* move right and set the flag to -1 to */
        s[top].flag = -1;             /* mark that the right subtree is being traversed */
    }
}
Ex: 10

20 40

5 30

Initially cur =10;


1. After 1st iteration of 1st while loop
Node 10 is pushed to stack & its flag set to 1.
Cur=cur→llink; i.e Cur=20
10 1

2. After 2nd iteration of 1st while loop


Node 20 is pushed to stack & its flag set to 1
Cur=cur→llink; i.e Cur=5 20 1
10 1
3. After 3rd iteration of 1st while loop 5 1
Node 5 is pushed to stack & its flag set to 1. 20 1
Cur=cur→llink; i.e Cur=NULL 10 1

4. While loop terminates since cur==NULL


Since s[top].flag!=-1, 2nd while not entered. 5 -1
cur=s[top].node; i.e cur=node 5; 20 1
Cur=cur→rlink; i.e cur=NULL; 10 1
s[top].flag =-1

5. cur==NULL, 1st while loop not entered


Since s.[top].flag<0, 2nd while is entered.
cur=s[top].node; i.e cur=node 5;
20 1
Print 5 10 1
Stack is not empty, continue;
6. s[top].flag!=-1, 2nd While loop is exited
Cur=s[top].node; i.e cur=20; 20 -1
Cur=cur→rlink; i.e cur=30; 10 1

s[top].flag =-1

7. 1st while is entered 30 1


Node 30 is pushed to stack & its flag set to 1. 20 -1
Cur=cur→llink; i.e Cur=NULL 10 1

8. cur==NULL, 1st while loop exits


Since s[top].flag!=-1, 2nd while not entered. 30 -1
cur=s[top].node; i.e cur=30; 20 -1
cur=cur→rlink; i.e cur=NULL; 10 1
s[top].flag =-1
9. cur==NULL, 1st while loop not entered
Since s.[top].flag<0, 2nd while is entered. 20 -1
cur=s[top].node; i.e cur=node 30; 10 1

Print 30
Stack is not empty, continue;
10. s[top].flag<0, 2nd While loop continues
cur=s[top].node; i.e cur=node 20;
Print 20
Stack is not empty, continue; 10 1

11. s[top].flag!=-1, 2nd While loop is exited


Cur=s[top].node; i.e cur=10;
Cur=cur→rlink; i.e cur=40; 10 -1
s[top].flag =-1
12. 1st while is entered
Node 40 is pushed to stack & its flag set to 1.
40 1
Cur=cur→llink; i.e Cur=NULL 10 -1

13. cur==NULL, 1st while loop exits


Since s[top].flag!=-1, 2nd while not entered.
cur=s[top].node; i.e cur=40;
40 -1
cur=cur→rlink; i.e cur=NULL; 10 -1
s[top].flag =-1

14. cur==NULL, 1st while loop not entered


Since s.[top].flag<0, 2nd while is entered.
cur=s[top].node; i.e cur=node 40;
Print 40 10 -1
Stack is not empty, continue;
15. s[top].flag<0, 2nd While loop continues
cur=s[top].node; i.e cur=node 10;
Print 10
Stack is empty, stop;

Hence elements printed in postorder are: 5, 30, 20, 40, 10


Thank You.
