Design and Analysis of Algorithms – Full Notes

The document provides an introduction to algorithms, defining key characteristics such as input, output, definiteness, finiteness, and effectiveness. It discusses the analysis of algorithms through time and space complexity, including best-case, worst-case, and average-case scenarios, as well as asymptotic analysis using Big-O, Big-Omega, and Theta notations. Additionally, it covers algorithmic strategies like brute-force, greedy algorithms, and dynamic programming, illustrating their applications and time complexities.


MODULE 1:

Introduction to Algorithms

An algorithm is a step-by-step set of instructions used to solve a specific problem or complete a task. It’s like a recipe that tells you exactly what to do to get the desired result.

Characteristics of an Algorithm

For a set of instructions to be called an algorithm, it must satisfy the following characteristics:

1. Input

o An algorithm takes zero or more inputs.

o Example: In a recipe, the ingredients you need are the inputs.

2. Output

o An algorithm must produce at least one output or result.

o Example: In a recipe, the dish you cook is the output.

3. Definiteness

o Every step in the algorithm must be clearly and exactly defined.

o Example: “Mix well” is not definite, but “Stir for 2 minutes” is definite.

4. Finiteness

o An algorithm must complete its steps in a finite number of steps.

o Example: A recipe with infinite steps would mean you never finish cooking!

5. Effectiveness

o Every step of the algorithm must be simple and possible to perform.

o Example: If a recipe says “add magical powder,” it’s not effective because it’s
impossible!

Analysis of Algorithms

Once an algorithm is written, it's important to know how efficient it is. We want to know:

 How fast it runs


 How much memory it uses

 How well it scales (works with larger inputs)

This is done through asymptotic analysis — a fancy way of saying “studying how an
algorithm behaves as the input size increases.”

Asymptotic Analysis of Complexity Bounds

➡️1. What is Asymptotic Analysis?

Asymptotic analysis is used to measure the efficiency of an algorithm in terms of how the
running time or space requirement increases as the input size increases.

 We use symbols like O (Big-O), Ω (Big-Omega), and Θ (Theta) to describe how the
algorithm behaves.

 It helps us to predict how fast the algorithm will work as the problem size increases.

👉 Think of it like this:

 If you are walking at 1 km/hr, you’ll take 1 hour to walk 1 km.

 If you are running at 10 km/hr, you’ll take 6 minutes to cover 1 km.

 Asymptotic analysis helps us figure out which "speed" the algorithm is using!

➡️2. Complexity Bounds

An algorithm can behave differently depending on the type of input it gets. This leads to
three cases:

(i) Best-Case Complexity

 The minimum time taken by an algorithm when the input is in the best possible
state.

 Example:

o If you're searching for an item in a sorted list and it happens to be the first
one, that's the best case.

 Notation: Ω (Big-Omega)

👉 Example:

 Linear search (finding an element in an array): Best case happens when the element
is at the beginning.
 Best-case time complexity for linear search:

Ω(1)

(ii) Worst-Case Complexity

 The maximum time an algorithm can take under the worst possible input.

 Example:

o If you are searching for an item and it happens to be the last one (or not in
the list at all).

 Notation: O (Big-O)

👉 Example:

 Linear search: Worst case happens when the element is at the last position or
missing from the list.

 Worst-case time complexity for linear search:

O(n)

(iii) Average-Case Complexity

 The expected time an algorithm takes under typical conditions (random input).

 Example:

o If you're searching for an element in a list and it’s equally likely to be at any
position.

 Notation: Θ (Theta)

👉 Example:

 Linear search: On average, the element is expected to be in the middle.

 Average-case time complexity for linear search:

Θ(n)
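To make the three cases concrete, here is a minimal Python sketch of linear search (the function name and test values are illustrative, not from the notes):

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:   # best case: hit on the very first element -> Ω(1)
            return i
    return -1                 # worst case: scanned all n elements -> O(n)

print(linear_search([4, 2, 7, 9], 7))  # 2
print(linear_search([4, 2, 7, 9], 5))  # -1 (worst case: element missing)
```

On average, if the target is equally likely to be anywhere, about n/2 elements are checked, which is Θ(n).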

➡️3. Complexity Types

Complexity measures how the algorithm scales as input size increases:


Type | Description | Example
Constant Time O(1) | Takes the same time no matter the input size. | Accessing an element in an array
Logarithmic Time O(log n) | Time grows slowly as input increases. | Binary search
Linear Time O(n) | Time increases directly with input size. | Linear search
Quadratic Time O(n^2) | Time increases with the square of input size. | Bubble sort
Exponential Time O(2^n) | Time doubles with each new input. | Solving a chessboard problem
Factorial Time O(n!) | Time grows very fast with input size. | Travelling salesperson problem

✅ Example for Better Understanding

Imagine you are searching for a toy in a toy box:

1. Best Case – You find the toy immediately at the top.


→ Ω(1)

2. Worst Case – The toy is at the bottom or not there at all.


→ O(n)

3. Average Case – The toy is somewhere in the middle.


→ Θ(n)

Summary

Term | Meaning | Example
Algorithm | Step-by-step process to solve a problem | Recipe
Best Case | Fastest possible time | Finding the toy on top
Worst Case | Slowest possible time | Finding the toy at the bottom
Average Case | Expected time on average | Toy somewhere in the middle
Big-O | Upper bound on time complexity | O(n) for linear search
Big-Omega | Lower bound on time complexity | Ω(1) for linear search
Theta | Tight bound on time complexity | Θ(n) for linear search

🚀 Key Takeaways

✅ An algorithm is like a recipe — it must be clear, accurate, and effective.


✅ Best-case, worst-case, and average-case tell us how the algorithm behaves under different
conditions.
✅ Big-O, Omega, and Theta are used to describe the efficiency of an algorithm.
✅ Faster algorithms = better performance for larger inputs! 😎

Performance Measurements of an Algorithm

The performance of an algorithm is measured by how efficiently it uses two primary


resources:

1. Time Complexity – How fast the algorithm runs as the input size increases.

2. Space Complexity – How much memory the algorithm needs to execute as the input
size increases.

Understanding the performance helps us decide which algorithm to use based on the
problem size and available resources.

1. Time Complexity

Time complexity measures how the execution time of an algorithm increases as the size of
the input grows.

👉 Example:

 If an algorithm takes 1 second for 10 elements, it might take 4 seconds for 20 elements if it has a time complexity of O(n^2).

✅ Types of Time Complexity:

Type | Example | Description
Constant Time O(1) | Accessing an element in an array | Takes the same time regardless of input size
Linear Time O(n) | Linear search | Time increases directly with input size
Logarithmic Time O(log n) | Binary search | Time increases slowly as input size grows
Quadratic Time O(n^2) | Bubble sort | Time increases with the square of input size
Exponential Time O(2^n) | Solving a chessboard problem | Time doubles with each additional input element

2. Space Complexity

Space complexity measures how much memory an algorithm needs to execute. It includes:

 Input space – Memory needed to store the input.

 Auxiliary space – Extra memory used during execution (e.g., for variables and
recursion stack).

👉 Example:

 A recursive function storing intermediate values in a call stack increases space complexity.

 An algorithm with a large data structure (e.g., a matrix) will have high space
complexity.

✅ Types of Space Complexity:

Type | Example | Description
Constant Space O(1) | Counting elements | Uses fixed memory
Linear Space O(n) | Storing an array | Memory grows directly with input size
Quadratic Space O(n^2) | Storing a matrix | Memory grows with the square of input size

3. Time and Space Trade-offs

In most cases, improving the time efficiency of an algorithm increases memory usage — and
vice versa. This is known as the time-space trade-off.

✅ Example 1:

 Bubble Sort – Takes less memory (in-place sorting) but more time → High time, low
space.
 Merge Sort – Takes extra memory (for merging) but works faster → Low time, high
space.

✅ Example 2:

 Memoization in Dynamic Programming – Reduces computation time by storing results but increases memory usage.

Trade-off Principle:

 If memory is limited → Prefer time-efficient algorithms.

 If time is limited → Prefer space-efficient algorithms.

4. Analysis of Recursive Algorithms Using Recurrence Relations

Recursive algorithms often have time complexities that are defined using recurrence
relations — equations that describe how the algorithm's running time depends on the size
of the input.

Recurrence Relation Structure:

A recurrence relation for an algorithm generally looks like:

T(n) = aT(n/b) + f(n)

where:

 T(n) = time complexity for input size n

 a = number of recursive calls

 b = factor by which the problem size is reduced

 f(n) = additional work done outside the recursive calls

Methods to Solve Recurrence Relations:

➡️1. Substitution Method

This method involves guessing the solution and then proving it by mathematical induction.

✅ Steps:

1. Guess the form of the solution.

2. Use mathematical induction to prove the solution is correct.

3. Adjust the constants to make the solution work.


✅ Example:

Recurrence Relation:

T(n) = 2T(n/2) + n

1. Guess:

T(n) = cn log n

2. Substitute into the recurrence relation:

T(n) = 2(c(n/2) log(n/2)) + n = cn log n − cn + n

3. For any constant c ≥ 1, cn log n − cn + n ≤ cn log n, so the guess holds by induction.


✅ Final Answer:

T(n) = O(n log n)

➡️2. Recursion Tree Method

This method involves drawing a tree to represent the recursive calls and calculating the total
work done at each level.

✅ Steps:

1. Draw the recursion tree to visualize recursive calls.

2. Calculate the work done at each level of the tree.

3. Sum the work across all levels.

✅ Example:

Recurrence Relation:

T(n) = 2T(n/2) + n

Recursion Tree:

 Level 0: n

 Level 1: n/2 + n/2 = n

 Level 2: n/4 + n/4 + n/4 + n/4 = n

 Total work at each level = n

 Number of levels = log n

✅ Total Work:

n · log n


✅ Final Answer:

T(n) = O(n log n)

➡️3. Master’s Theorem

Master’s theorem provides a shortcut to solve recurrence relations of the form:

T(n) = aT(n/b) + f(n)

✅ Cases of Master’s Theorem:

 Case 1: If f(n) grows slower than n^(log_b a), then T(n) = O(n^(log_b a)).

 Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = O(n^(log_b a) log n).

 Case 3: If f(n) grows faster than n^(log_b a), then T(n) = O(f(n)).

✅ Example:

Recurrence Relation:

T(n) = 2T(n/2) + n

 a = 2, b = 2, f(n) = n

 log_b a = log_2 2 = 1, so n^(log_b a) = n

 Since f(n) = n^(log_b a), the second case applies.

✅ Final Answer:

T(n) = O(n log n)
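As a rough illustration (not part of the theorem itself), here is a small Python helper that classifies recurrences of the simplified form T(n) = a·T(n/b) + Θ(n^k). It assumes f(n) is a plain polynomial n^k and ignores the regularity condition of the full theorem:

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + Θ(n^k) by comparing k with log_b(a).

    A simplification: assumes f(n) = n^k exactly and skips the
    regularity condition of the general master theorem.
    """
    c = math.log(a, b)                      # critical exponent log_b(a)
    if k < c:
        return f"T(n) = O(n^{c:.2f})"       # case 1: leaves dominate
    elif k == c:
        return f"T(n) = O(n^{k} log n)"     # case 2: balanced
    else:
        return f"T(n) = O(n^{k})"           # case 3: root dominates

print(master_theorem(2, 2, 1))  # T(n) = 2T(n/2) + n  ->  O(n^1 log n)
print(master_theorem(3, 4, 1))  # T(n) = 3T(n/4) + n  ->  O(n^1), since log_4 3 < 1
```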

Summary of Recurrence Solving Methods

Method | Description | When to Use
Substitution Method | Guess the solution and prove it by induction | Small and simple recurrence relations
Recursion Tree Method | Draw the recursion tree and sum the work at each level | When the recursion has a clear tree structure
Master’s Theorem | Apply a direct formula based on comparing f(n) with n^(log_b a) | When the recurrence fits the standard form

🚀 Key Takeaways
✅ Performance measurement tells us how fast and memory-efficient an algorithm is.
✅ Time-space trade-off means you can trade memory for speed or vice versa.
✅ Recurrence relations define the running time of recursive algorithms.
✅ Master’s theorem provides a quick way to solve many recurrence relations.
✅ Efficient algorithms are essential for solving large-scale problems! 😎

MODULE 2:
This module breaks down each algorithmic strategy in detail, with examples of their application to classic problems, and then discusses their time complexities.

1. Brute-Force Approach (10 Marks)

Definition:
The brute-force method is the simplest algorithmic technique where every possible solution
is generated and checked until the correct solution is found. This technique does not involve
any sophisticated optimization, and it usually guarantees an optimal solution, though it can
be computationally expensive. The approach is very straightforward and often used when no
better algorithms are available.

Example: Bin Packing Problem

In the Bin Packing problem, we are given a collection of items of various sizes and a set of
bins, each with a fixed capacity. The goal is to minimize the number of bins used to pack all
the items.

Brute-Force Approach:
Try every possible way of packing items into bins and then check how many bins are used.
The solution with the minimum number of bins is selected.

Time Complexity:

 The time complexity of brute-force depends on the number of arrangements examined. For the Bin Packing problem it is exponential: at least O(2^n), where n is the number of items, because every possible subset and placement must be checked.
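A minimal brute-force sketch in Python, assuming every item fits in a bin on its own; this naive version enumerates all n^n assignments of items to bins (even more than the 2^n subsets mentioned above), which is only workable for tiny inputs:

```python
from itertools import product

def bin_packing_bruteforce(sizes, capacity):
    """Try every assignment of n items to at most n bins; return the
    minimum number of bins needed. Assumes each item fits in one bin."""
    n = len(sizes)
    best = n  # n bins always suffice (one item per bin)
    for assignment in product(range(n), repeat=n):  # item i -> bin assignment[i]
        loads = [0] * n
        for item, b in zip(sizes, assignment):
            loads[b] += item
        if all(load <= capacity for load in loads):        # feasible packing?
            best = min(best, sum(1 for load in loads if load > 0))
    return best

print(bin_packing_bruteforce([4, 8, 1, 4, 2, 1], 10))  # 2
```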

2. Greedy Algorithm (10 Marks)

Definition:
A greedy algorithm follows the problem-solving heuristic of making the locally optimal
choice at each step with the hope of finding the global optimum. It works in a step-by-step
manner, always choosing the best available option, without considering the consequences of
that choice in the long term.

Example: Knapsack Problem (Greedy Version)

In the Fractional Knapsack Problem, you can take fractions of items. The goal is to maximize
the total value of items placed into a knapsack with a weight limit.

Greedy Approach:

1. Compute the value-to-weight ratio for each item.

2. Sort the items by this ratio in decreasing order.

3. Take the item with the highest ratio and add as much of it as possible into the
knapsack until the capacity is full.

4. Repeat this until all items are processed or the knapsack is full.

Algorithm:

1. Sort items by value-to-weight ratio in decreasing order.

2. Initialize knapsack weight to 0 and knapsack value to 0.

3. For each item:

o If the current item fits in the remaining knapsack space, add it entirely.

o If it does not fit, add the fractional part of the item that fits.

Time Complexity:

 Sorting the items takes O(n log n), where n is the number of items.

 Selecting items takes O(n), so the total time complexity is O(n log n).
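A minimal Python sketch of the greedy procedure (the item list below is an illustrative example, not from the notes):

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the max total value
    when fractions of items are allowed (greedy by value/weight ratio)."""
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)  # O(n log n)
    total = 0.0
    for value, weight in items:
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item if it fits, else a fraction
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```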

3. Dynamic Programming (DP) (10 Marks)

Definition:
Dynamic programming is an optimization technique that solves problems by breaking them
down into simpler subproblems and storing the solutions to those subproblems to avoid
redundant computations. It is used when the problem has overlapping subproblems and
optimal substructure.

Example: 0/1 Knapsack Problem (Dynamic Programming)


In the 0/1 Knapsack Problem, we are given a set of items, each with a weight and a value,
and a knapsack with a weight limit. The goal is to determine the maximum value that can be
obtained by putting items in the knapsack without exceeding the weight limit.

DP Approach:

1. Define a DP table dp[i][w], where i represents the first i items considered and w represents the weight capacity.

2. Transition:

o If the i-th item is not included, the value is the same as dp[i−1][w].

o If the i-th item is included, the value is dp[i−1][w−w_i] + v_i, where w_i and v_i are the weight and value of the i-th item.

o dp[i][w] takes the maximum of these two options.

3. The final answer is found at dp[n][W], where n is the number of items and W is the capacity.

Time Complexity:

 Filling the DP table takes O(nW), where n is the number of items and W is the knapsack's weight capacity.
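A compact bottom-up sketch in Python (the values and weights below are an illustrative example):

```python
def knapsack_01(values, weights, W):
    """Bottom-up DP: dp[i][w] = best value using the first i items with
    capacity w. Runs in O(nW) time and space."""
    n = len(values)
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            dp[i][w] = dp[i - 1][w]                      # option 1: skip item i
            if weights[i - 1] <= w:                      # option 2: include item i
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][W]

print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # 220
```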

4. Branch and Bound (B&B) (10 Marks)

Definition:
Branch and Bound is an algorithm design paradigm for solving optimization problems. It
divides the problem into subproblems (branching) and calculates bounds on the solution of
these subproblems to prune those that cannot possibly lead to an optimal solution
(bounding).

Example: Traveling Salesman Problem (TSP)

In the Traveling Salesman Problem, the goal is to find the shortest route that visits all cities
exactly once and returns to the starting point.

B&B Approach:

1. Start with an initial solution (e.g., a path visiting the first city and extending the
route).

2. Branch by considering all possible paths and calculate the lower bound of the
distance that can be traveled.

3. Prune any branches where the lower bound exceeds the current best solution.

4. Continue branching and bounding until the optimal path is found.


Time Complexity:

 The worst-case time complexity can be O(n!) because the algorithm explores all
permutations of cities in the worst case, but pruning can reduce this dramatically in
practice.
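A simplified Python sketch of B&B for TSP, using the current path cost plus each unvisited city's cheapest outgoing edge as the lower bound (one of many possible bounds; the distance matrix is an illustrative example):

```python
import math

def tsp_branch_and_bound(dist):
    """Depth-first branching over partial tours, pruning any branch whose
    lower bound already meets or exceeds the best complete tour found."""
    n = len(dist)
    cheapest = [min(dist[i][j] for j in range(n) if j != i) for i in range(n)]
    best = math.inf

    def dfs(city, visited, cost):
        nonlocal best
        if len(visited) == n:                        # complete tour: close the cycle
            best = min(best, cost + dist[city][0])
            return
        # lower bound: every unvisited city must still be departed once
        bound = cost + sum(cheapest[j] for j in range(n) if j not in visited)
        if bound >= best:                            # prune: cannot beat current best
            return
        for nxt in range(n):
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, cost + dist[city][nxt])

    dfs(0, {0}, 0)
    return best

D = [[0, 10, 15, 20], [10, 0, 35, 25], [15, 35, 0, 30], [20, 25, 30, 0]]
print(tsp_branch_and_bound(D))  # 80 (tour 0 -> 1 -> 3 -> 2 -> 0)
```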

5. Backtracking (10 Marks)

Definition:
Backtracking is a method for solving problems incrementally by trying out different solutions
and abandoning ("backtracking") those that do not lead to a valid or optimal solution. It is
often used for constraint satisfaction problems.

Example: N-Queens Problem

In the N-Queens problem, the goal is to place N queens on an N×N chessboard such that no
two queens threaten each other.

Backtracking Approach:

1. Start with an empty board and place the first queen in the first row.

2. Move to the next row and try placing a queen in every column, checking whether it is
safe (i.e., no two queens are in the same column, row, or diagonal).

3. If placing a queen results in an invalid configuration, backtrack by removing the queen and trying the next column.

4. Repeat until all queens are placed correctly or all possibilities are exhausted.

Time Complexity:

 The worst-case time complexity is O(n!) because, in the worst case, we have to
explore all possible placements for the queens.
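A standard backtracking sketch in Python that counts solutions; sets track which columns and diagonals are already attacked, so the safety check is O(1):

```python
def solve_n_queens(n):
    """Count N-Queens solutions by placing one queen per row and
    backtracking whenever a placement is attacked."""
    count = 0
    cols, diag1, diag2 = set(), set(), set()

    def place(row):
        nonlocal count
        if row == n:
            count += 1                         # all n queens placed safely
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                       # attacked: try the next column
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)  # backtrack

    place(0)
    return count

print(solve_n_queens(8))  # 92
```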

6. Heuristics (10 Marks)

Definition:
Heuristics are techniques designed to find a good enough solution for a problem when an
exact solution is impractical due to time or computational limitations. They are often
problem-specific and do not guarantee optimal solutions but work efficiently in practice.

Characteristics of Heuristics:

 Problem-specific: Tailored to a particular problem.

 Fast: Designed to find solutions quickly, even if they are not optimal.
 Approximate: They provide good solutions but may not guarantee the best.

 Non-optimal: They may lead to suboptimal solutions.

Application Domains:

 Routing Problems: Heuristics like the nearest neighbor algorithm in TSP can provide
a quick, though non-optimal, solution.

 Optimization Problems: In problems like the Knapsack problem or the Traveling Salesman Problem, heuristics like Greedy or Simulated Annealing are used.

 Scheduling Problems: Heuristic approaches can provide feasible solutions for scheduling tasks on machines or minimizing makespan in parallel processing.

Example: Simulated Annealing is used for optimization by probabilistically exploring the solution space, accepting worse solutions in the beginning, and then gradually reducing this acceptance as the search progresses.

Conclusion

Each of these algorithmic strategies—Brute-Force, Greedy, Dynamic Programming, Branch and Bound, Backtracking, and Heuristics—has its strengths and weaknesses depending on the problem at hand. While brute-force methods are simple and guarantee correct results,
the problem at hand. While brute-force methods are simple and guarantee correct results,
they tend to be inefficient for large problems. Greedy algorithms are faster but might not
always produce the optimal solution. Dynamic programming is efficient for problems with
overlapping subproblems, while branch and bound and backtracking are more suited for
combinatorial optimization problems. Heuristics provide practical solutions for complex
problems where exact methods are too slow or infeasible.

The time complexities vary depending on the specific problem and algorithm used, but most
of these techniques provide a trade-off between solution quality and computational
efficiency.

MODULE 3:
1. Depth First Search (DFS) (10 Marks)

Definition: Depth First Search (DFS) is a graph traversal algorithm that starts at a selected
node (usually the root or an arbitrary node) and explores as far as possible along each
branch before backtracking.

Working:
1. Start at the source vertex.

2. Explore each unvisited neighboring node recursively (or using a stack in an iterative
version).

3. Once all neighbors of a vertex are visited, backtrack to the previous vertex and
continue.

4. The algorithm continues until all vertices in the connected component are visited.

DFS Algorithm:

1. Mark the current node as visited.

2. For each unvisited neighbor of the current node, recursively perform DFS.

Time Complexity:

 DFS visits each node and edge exactly once.

 If the graph has V vertices and E edges, the time complexity is O(V + E).
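A minimal recursive sketch in Python over an adjacency-list graph (the example graph g is illustrative):

```python
def dfs(graph, start, visited=None):
    """Recursive DFS over an adjacency-list graph; returns the visit order."""
    if visited is None:
        visited = []
    visited.append(start)                      # mark the current node as visited
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited)      # recurse into unvisited neighbors
    return visited

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs(g, 'A'))  # ['A', 'B', 'D', 'C', 'E']
```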

2. Breadth First Search (BFS) (10 Marks)

Definition: Breadth First Search (BFS) is a graph traversal algorithm that starts at a source
node and explores all the neighbor nodes at the present depth level before moving on to
nodes at the next depth level.

Working:

1. Start at the source vertex.

2. Visit all neighboring vertices (nodes that are one step away from the source).

3. Enqueue all the visited neighbors and proceed with the next level of vertices.

4. Repeat until all vertices are visited.

BFS Algorithm:

1. Initialize an empty queue and enqueue the source node.

2. Mark the source node as visited.

3. While the queue is not empty:

o Dequeue a node, and for each unvisited neighboring node, enqueue it and
mark it as visited.

Time Complexity:

 BFS visits each node and edge exactly once.


 If the graph has V vertices and E edges, the time complexity is O(V + E).
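The same illustrative graph traversed with a queue-based BFS sketch; note how it visits level by level rather than branch by branch:

```python
from collections import deque

def bfs(graph, start):
    """BFS over an adjacency-list graph; returns nodes in level order."""
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:        # enqueue each node exactly once
                visited.add(neighbor)
                queue.append(neighbor)
    return order

g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(bfs(g, 'A'))  # ['A', 'B', 'C', 'D', 'E']
```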

3. Shortest Path Algorithms (10 Marks)

Definition: Shortest path algorithms are used to find the shortest path between nodes in a
graph. They are widely used in applications like routing, network design, and geographical
mapping.

Dijkstra’s Algorithm (For weighted graphs with non-negative weights):

1. Initialize the distance to the source node as 0 and to all other nodes as infinity.

2. Mark all nodes as unvisited. Set the initial node as current.

3. For each unvisited neighbor, calculate its tentative distance. If it's smaller than the
known distance, update it.

4. Once all neighbors of the current node are processed, mark the current node as
visited and move to the unvisited node with the smallest tentative distance.

5. Repeat until all nodes are visited.

Time Complexity of Dijkstra’s Algorithm:

 Using a priority queue, the time complexity is O((V + E) log V), where V is the number of vertices and E is the number of edges.
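A heap-based sketch in Python (the graph, with node -> list of (neighbor, weight) pairs, is an illustrative example; weights must be non-negative):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra with a binary heap; runs in O((V + E) log V)."""
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                 # stale heap entry: u already finalized
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:         # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 5)], 'C': [('D', 1)], 'D': []}
print(dijkstra(g, 'A'))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```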

4. Transitive Closure (10 Marks)

Definition: The transitive closure of a graph is a reachability matrix that shows whether
there is a path between any pair of vertices in a directed graph. It is typically represented
using an adjacency matrix.

Working (Using Floyd-Warshall Algorithm for Transitive Closure):

1. Initialize a matrix T where T[i][j] = 1 if there is an edge from vertex i to vertex j, and T[i][j] = 0 otherwise.

2. Update the matrix iteratively: for each intermediate node k, check if a path exists from node i to node j through node k.

3. The matrix T will eventually hold the transitive closure of the graph.

Time Complexity:

 The time complexity of the Floyd-Warshall algorithm is O(V^3), where V is the number of vertices.
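A Python sketch of the Floyd-Warshall-style closure over an adjacency matrix. Here the diagonal is set to 1, treating every vertex as reachable from itself, which is one common convention:

```python
def transitive_closure(adj):
    """T[i][j] = 1 iff vertex j is reachable from vertex i. O(V^3)."""
    n = len(adj)
    T = [row[:] for row in adj]
    for i in range(n):
        T[i][i] = 1                     # convention: every vertex reaches itself
    for k in range(n):                  # allow k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                T[i][j] = T[i][j] or (T[i][k] and T[k][j])
    return T

A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]   # edges: 0 -> 1 -> 2
print(transitive_closure(A))  # [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
```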
5. Minimum Spanning Tree (MST) (10 Marks)

Definition: A Minimum Spanning Tree (MST) of a connected, undirected graph is a subset of the edges that connects all the vertices together without any cycles, and with the minimum possible total edge weight.

Prim’s Algorithm (Greedy Approach for MST):

1. Start with an arbitrary node and initialize its key value to 0 and all others to infinity.

2. Mark the starting node as part of the MST.

3. For every node connected to the MST, update its key value if a lower-weight edge is
found.

4. Select the node with the smallest key value that is not yet part of the MST.

5. Repeat the process until all nodes are included.

Time Complexity:

 Using a priority queue, the time complexity is O((V + E) log V), where V is the number of vertices and E is the number of edges.

Kruskal’s Algorithm (Another Greedy Approach):

1. Sort all edges in the graph by their weight.

2. Pick the smallest edge and add it to the MST if it doesn't form a cycle.

3. Repeat the process for all edges.

Time Complexity:

 Sorting the edges takes O(E log E), and the union-find operations take O(E log V), so the total time complexity is O(E log E).
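A compact Kruskal sketch in Python with a simple union-find (the edge list is an illustrative example; edges are (weight, u, v) on vertices 0..n-1):

```python
def kruskal(n, edges):
    """Kruskal's MST; sorting dominates, so overall O(E log E)."""
    parent = list(range(n))

    def find(x):                        # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):       # consider edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                    # skip edges that would form a cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges))  # ([(0, 1, 1), (1, 3, 2), (1, 2, 3)], 6)
```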

6. Topological Sorting (10 Marks)

Definition: Topological sorting is the process of ordering the vertices of a directed acyclic graph (DAG) in a linear sequence such that for every directed edge u → v, vertex u comes before v in the ordering.

Algorithm (Using DFS):

1. Perform DFS on the graph. For each vertex, after visiting all its neighbors, push it to a
stack.

2. Reverse the stack to get the topological order.


Time Complexity:

 Since DFS visits each vertex and edge once, the time complexity is O(V + E), where V is the number of vertices and E is the number of edges.
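A DFS-based sketch in Python; note that a DAG can have several valid topological orders, so the printed order is just one of them (the example graph is illustrative):

```python
def topological_sort(graph):
    """DFS-based topological sort of a DAG: push each vertex after all
    its descendants are done, then reverse the stack. O(V + E)."""
    visited, stack = set(), []

    def dfs(node):
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                dfs(neighbor)
        stack.append(node)              # all descendants done: push the node

    for node in graph:
        if node not in visited:
            dfs(node)
    return stack[::-1]                  # reverse to get the topological order

g = {'A': ['C'], 'B': ['C', 'D'], 'C': ['E'], 'D': [], 'E': []}
print(topological_sort(g))  # ['B', 'D', 'A', 'C', 'E'] (one valid order)
```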

7. Network Flow Algorithm (10 Marks)

Definition: A network flow algorithm is used to solve problems where there is a flow of
resources (like data or goods) through a network. The goal is to find the maximum flow from
a source node to a sink node, subject to the capacities of the edges.

Ford-Fulkerson Algorithm (for Maximum Flow):

1. Start with an initial flow of 0.

2. While there exists an augmenting path (a path from source to sink with available
capacity), increase the flow along the path.

3. Adjust the capacities and residual graph accordingly.

4. Repeat until no augmenting path exists.

Time Complexity:

 The time complexity of the Ford-Fulkerson algorithm depends on the number of augmenting paths. In the worst case, it can be O(E · F), where E is the number of edges and F is the maximum flow.

Edmonds-Karp Algorithm (Improved version of Ford-Fulkerson using BFS):

1. Use BFS to find the shortest augmenting path.

2. Update the flow and residual graph, and repeat until no augmenting path exists.

Time Complexity:

 The time complexity of the Edmonds-Karp algorithm is O(V · E^2), where V is the number of vertices and E is the number of edges.
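A matrix-based Edmonds-Karp sketch in Python (the capacity matrix C is an illustrative example); note that it overwrites the capacity matrix with residual capacities as it runs:

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow on an n x n capacity matrix. BFS finds the shortest
    augmenting path each round; O(V * E^2) overall. Mutates cap."""
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:    # BFS in the residual graph
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:             # no augmenting path left: done
            return flow
        bottleneck, v = float('inf'), t # bottleneck capacity along the path
        while v != s:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while v != s:                   # push flow and update residuals
            u = parent[v]
            cap[u][v] -= bottleneck     # reduce forward residual capacity
            cap[v][u] += bottleneck     # add reverse (residual) capacity
            v = u
        flow += bottleneck

C = [[0, 3, 2, 0], [0, 0, 1, 2], [0, 0, 0, 3], [0, 0, 0, 0]]
print(edmonds_karp(C, 0, 3))  # 5
```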

Conclusion:

Graph and tree algorithms form the foundation for solving various problems in computer
science. Traversal algorithms like DFS and BFS help explore graphs systematically. Shortest
path algorithms, including Dijkstra's algorithm, solve path-finding problems. Transitive
closure and MST algorithms are crucial for network analysis and optimization. Topological
sorting is vital for tasks like scheduling, and network flow algorithms are fundamental for
resource allocation problems. Each of these algorithms has well-defined time complexities,
and their choice depends on the nature of the problem and the graph structure.

MODULE 4:
1. Tractable and Intractable Problems (10 Marks)

Definition:

 Tractable Problems: These are problems for which an algorithm exists that can solve
the problem in polynomial time. These problems can be efficiently solved for large
inputs, and their solutions scale well as the problem size increases.

 Intractable Problems: These are problems for which no known algorithm can solve
them in polynomial time, making it infeasible to solve them for large inputs.
Intractable problems often require exponential time to solve, which becomes
impractical as the size of the problem grows.

Example of Tractable Problems:

 Sorting algorithms like Merge Sort and Quick Sort. These have time complexities of
O(n log n), making them feasible for large inputs.

Example of Intractable Problems:

 The Traveling Salesman Problem (TSP), where the time complexity can grow
exponentially, particularly when trying to find the optimal solution for large numbers
of cities. The brute-force approach has a time complexity of O(n!), making it
infeasible for large n.

Time Complexity of Tractable vs. Intractable Problems:

 Tractable: Time complexity is polynomial (e.g., O(n^2), O(n log n), etc.).

 Intractable: Time complexity is exponential (e.g., O(2^n), O(n!), etc.).

2. Computability of Algorithms (10 Marks)

Definition: Computability refers to whether a problem can be solved by an algorithm. In the context of algorithms, computability is typically discussed in terms of Turing machines, which are abstract mathematical models of computation.

 A problem is computable if there exists an algorithm (or Turing machine) that can
solve it in a finite amount of time.
 A problem is non-computable if there is no algorithm that can solve it for all possible
inputs.

Example of a Computable Problem:

 The Binary Search algorithm on a sorted list is computable because there exists an
algorithm that will find an element in logarithmic time, O(log n).

Example of a Non-computable Problem:

 The Halting Problem is a classic example of a non-computable problem. It asks whether there is an algorithm that can decide whether an arbitrary program halts or runs forever. Alan Turing proved that no such algorithm can exist for all programs.

3. Computability Classes – P, NP, NP-complete, and NP-hard (10 Marks)

Definition of Classes:

1. P (Polynomial Time):
The class of problems that can be solved in polynomial time by a deterministic Turing
machine. These problems are considered "tractable" since their solutions can be
computed in reasonable time for large inputs.

Example:

o Sorting (like Merge Sort, Quick Sort): Time complexity O(n log n), which is
polynomial.

2. NP (Nondeterministic Polynomial Time):
The class of problems for which a proposed solution can be verified in polynomial
time by a deterministic Turing machine. However, there is no known polynomial-time
algorithm to find the solution.

Example:

o Boolean satisfiability problem (SAT): Given a Boolean expression, checking whether a given assignment of truth values to variables makes the expression true can be done in polynomial time, but finding such an assignment might take exponential time.

3. NP-complete:
These are problems in NP that are at least as hard as every other problem in NP. If
any NP-complete problem can be solved in polynomial time, then all NP problems
can be solved in polynomial time (i.e., P = NP).

Cook’s Theorem:
Cook's Theorem, proven by Stephen Cook in 1971, states that the Boolean satisfiability
problem (SAT) is NP-complete. This was the first such result, showing that SAT is both in NP
and NP-hard.

Example of NP-complete problems:

o Traveling Salesman Problem (TSP): Finding the shortest possible route that
visits each city exactly once.

o Knapsack Problem: Given a set of items, find the subset that fits into a
knapsack of limited capacity while maximizing the total value.

4. NP-hard:
These problems are at least as hard as the hardest problems in NP, but they are not
necessarily in NP (i.e., they might not have a solution that can be verified in
polynomial time). An NP-hard problem does not have to be a decision problem, and
it can be harder than NP-complete problems.

Example of NP-hard problems:

o Halting Problem: The problem of determining whether an arbitrary program halts or runs forever is NP-hard because it is at least as difficult as any other problem in NP, though it is not in NP.

Time Complexity of P, NP, NP-complete, and NP-hard:

 P: Solvable in polynomial time.

 NP: Solutions can be verified in polynomial time, but finding the solution may take
non-polynomial time.

 NP-complete: These are the hardest problems in NP, and solving any one NP-
complete problem in polynomial time would imply that all NP problems can be
solved in polynomial time (i.e., P = NP).

 NP-hard: These are at least as difficult as NP-complete problems, but they may not
belong to NP (e.g., non-decision problems).

4. Cook's Theorem (10 Marks)

Definition: Cook’s Theorem, proved by Stephen Cook in 1971, states that the Boolean
satisfiability problem (SAT) is NP-complete. This was a groundbreaking result as it was the
first proof that a problem is both in NP and NP-hard.

Cook’s Theorem essentially shows that any problem in NP can be reduced to SAT in
polynomial time. Therefore, SAT is the "hardest" problem in NP. If SAT can be solved in
polynomial time, then every problem in NP can also be solved in polynomial time, implying
P = NP.
Cook’s Theorem Concept:

1. SAT is in NP: Given a proposed solution (an assignment of truth values to the
variables), we can verify in polynomial time whether the Boolean formula evaluates
to true.

2. SAT is NP-hard: Every problem in NP can be polynomially reduced to SAT.

Time Complexity of Cook’s Theorem:

 The proof of Cook’s theorem involves polynomial-time reductions, meaning the transformation from any NP problem to SAT takes polynomial time.

5. Standard NP-complete Problems (10 Marks)

NP-complete problems are the hardest problems in NP, and every NP problem can be
reduced to them in polynomial time. Here are some classic examples of NP-complete
problems:

1. Travelling Salesman Problem (TSP):

o Problem: Given a set of cities and distances between each pair, find the
shortest possible route that visits every city once and returns to the starting
city.

o Reduction: The decision version of TSP (i.e., does a tour exist with a cost less
than a given threshold?) is NP-complete.

2. Knapsack Problem:

o Problem: Given a set of items, each with a weight and value, determine the
maximum value that can be placed in a knapsack of limited capacity.

o Reduction: The decision version of the Knapsack Problem (i.e., can we achieve at least a certain value with a weight limit?) is NP-complete.

3. Boolean Satisfiability Problem (SAT):

o Problem: Given a Boolean formula, determine whether there is an assignment of truth values to variables that makes the formula true.

o Reduction: SAT is NP-complete, as it is both in NP and NP-hard.

4. Subset Sum Problem:

o Problem: Given a set of integers and a target sum, determine if there is a subset of the integers whose sum is exactly equal to the target.

o Reduction: The Subset Sum Problem is NP-complete.


6. Reduction Techniques (10 Marks)

Definition: Reduction techniques are used to prove that a problem belongs to a certain
complexity class, such as NP-complete. A reduction involves transforming one problem into
another problem in such a way that the solution to the transformed problem can be used to
solve the original problem.

Types of Reductions:

1. Polynomial-time Reduction:
A reduction from problem A to problem B means that if we can solve problem B in
polynomial time, we can solve problem A in polynomial time. This is the core idea
behind proving NP-completeness.

2. Many-One Reduction:
In a many-one reduction, an instance of problem A is transformed into an instance of
problem B in polynomial time, and a solution to B corresponds directly to a solution
to A.

Example of Reduction (SAT to 3-SAT):

The 3-SAT problem is an NP-complete problem that asks whether a Boolean formula in
conjunctive normal form (CNF) can be satisfied, where each clause has exactly three literals.
Using a reduction from the SAT problem (which may have clauses with more than three
literals) to 3-SAT, we can prove that 3-SAT is NP-complete.

Time Complexity of Reductions:

 Polynomial-time reductions mean the transformation itself takes polynomial time, and the overall complexity depends on solving the transformed problem.

Conclusion:

The study of tractable and intractable problems, along with computability and complexity
classes like P, NP, NP-complete, and NP-hard, plays a central role in understanding the limits
of what can be computed efficiently. Cook's theorem and the concept of reductions have
deepened our understanding of NP-completeness, showing that if any NP-complete problem
can be solved in polynomial time, all NP problems can. The importance of these results lies
in their applications to optimization problems, cryptography, and algorithm design.

MODULE 5:
1. Approximation Algorithms (10 Marks)

Definition: Approximation algorithms are algorithms used to find near-optimal solutions to optimization problems, where finding the exact optimal solution is computationally hard (often NP-hard). These algorithms provide a solution that is close to the optimal solution, usually within some guaranteed factor or bound.

Key Concepts:

 Approximation Ratio: The ratio of the algorithm's solution to the optimal solution. If A(x) is the solution returned by the approximation algorithm and OPT(x) is the optimal solution, the approximation ratio is defined as A(x)/OPT(x).

 Greedy Algorithms: Many approximation algorithms use greedy strategies, where a locally optimal choice is made at each step with the hope that it leads to a globally optimal solution.

Example: Vertex Cover Problem

 Problem: Given an undirected graph, find the smallest subset of vertices such that
every edge in the graph is incident to at least one vertex in the subset.

 Approximation Algorithm: A simple 2-approximation algorithm works by repeatedly selecting an uncovered edge, adding both its endpoints to the cover, and removing all edges covered by those vertices.

Time Complexity:

o The algorithm runs in O(E) time, where E is the number of edges, since each edge is processed at most once.

Performance Bound:

 For the Vertex Cover Problem, the approximation ratio of this greedy algorithm is 2.
This means the solution is at most twice the size of the optimal solution.
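A minimal Python sketch of this 2-approximation (the edge list is an illustrative example). The picked edges form a matching, and any optimal cover must contain at least one endpoint of each matched edge, which is where the factor 2 comes from:

```python
def vertex_cover_2approx(edges):
    """2-approximation for Vertex Cover: take both endpoints of any
    still-uncovered edge until every edge is covered. O(E)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge still uncovered
            cover.add(u)
            cover.add(v)                        # both endpoints join the cover
    return cover

edges = [(0, 1), (0, 2), (1, 3), (3, 4)]
print(vertex_cover_2approx(edges))  # {0, 1, 3, 4}; an optimal cover here is {0, 3}
```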

Other Examples of Approximation Algorithms:

1. Traveling Salesman Problem (TSP) – Christofides' Algorithm: Provides a 3/2 approximation for the metric TSP.

2. Set Cover Problem – Greedy approximation algorithm with an approximation ratio of O(log n).

Time Complexity:

 The time complexity of approximation algorithms depends on the specific problem but often involves greedy or dynamic programming-based approaches that operate within polynomial time.
2. Randomized Algorithms (10 Marks)

Definition: Randomized algorithms make use of randomization to improve performance or simplify the problem-solving process. These algorithms use random decisions during execution to achieve better average-case performance or to solve problems where a deterministic solution would be inefficient.

Types of Randomized Algorithms:

1. Las Vegas Algorithms: These algorithms always produce the correct result, but their
runtime may vary. The algorithm's correctness is guaranteed, but the time taken may
depend on the random choices made.

2. Monte Carlo Algorithms: These algorithms have a probability of giving an incorrect result. They provide a solution in a fixed amount of time, but the probability of error is bounded and can be reduced by repeating the algorithm.

Example: QuickSort (Randomized Version)

 Problem: Sort an array of numbers.

 Randomized QuickSort Algorithm:

1. Pick a random pivot element from the array.

2. Partition the array into two subarrays such that all elements less than the
pivot are on the left and all elements greater than the pivot are on the right.

3. Recursively apply the same procedure to the subarrays.

Time Complexity:

o Expected Time Complexity: O(n log n), where n is the number of elements. The randomized version avoids the worst-case time complexity of O(n^2) that occurs in deterministic QuickSort when the pivot is poorly chosen.

o Worst-case Time Complexity: O(n^2), though this is highly unlikely if the pivot is chosen randomly.
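A short out-of-place sketch in Python; an in-place partition is more memory-efficient, but this version keeps the idea visible:

```python
import random

def randomized_quicksort(arr):
    """QuickSort with a random pivot: expected O(n log n), and the
    O(n^2) worst case becomes very unlikely on any input."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                       # random pivot choice
    left = [x for x in arr if x < pivot]
    mid = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return randomized_quicksort(left) + mid + randomized_quicksort(right)

print(randomized_quicksort([7, 2, 9, 4, 4, 1]))  # [1, 2, 4, 4, 7, 9]
```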

Example: Randomized Algorithms in Primality Testing

 Problem: Determine whether a number is prime.

 Algorithm: Miller-Rabin Primality Test

o This test uses randomization to quickly determine whether a number is prime with a certain probability of error. The more iterations it runs, the lower the error probability.
Time Complexity:

 The Miller-Rabin Test runs in O(k log n), where k is the number of iterations (to reduce the probability of error) and n is the number being tested for primality.
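A compact Python sketch of the test; the witness count k and the small-prime pre-check are implementation choices, not fixed by the algorithm:

```python
import random

def is_probably_prime(n, k=20):
    """Miller-Rabin: returns False if n is composite, True if n is prime
    with error probability at most 4^(-k)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7):                  # quick check against small primes
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:                       # write n-1 = d * 2^r with d odd
        d //= 2
        r += 1
    for _ in range(k):                      # k independent random witnesses
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                    # a proves that n is composite
    return True

print(is_probably_prime(2_147_483_647))  # True (2^31 - 1, a Mersenne prime)
print(is_probably_prime(1_000_000_007))  # True
```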

Applications:

 Randomized Algorithms in Approximation: Many approximation algorithms are based on randomized techniques (e.g., randomized algorithms for graph coloring, matrix multiplication, etc.).

 Monte Carlo Simulations: Used extensively in physical systems, finance, and statistical modeling.

3. Class of Problems Beyond NP – PSPACE (10 Marks)

Definition: PSPACE is the class of decision problems that can be solved using a polynomial
amount of memory (space). It is a more general class than NP, focusing on memory usage
rather than time. A problem belongs to PSPACE if there is an algorithm that solves the
problem in polynomial space, even if it may require exponential time.

Relationship with Other Classes:

 P: Problems solvable in polynomial time.

 NP: Problems for which a solution can be verified in polynomial time.

 PSPACE: Problems solvable in polynomial space (regardless of the time complexity). PSPACE is believed to be a superset of NP, i.e., NP ⊆ PSPACE.

 EXPTIME: Problems solvable in exponential time; PSPACE is typically considered to be a subset of EXPTIME.

Example: Quantified Boolean Formula (QBF) Problem

 Problem: Given a Boolean formula with quantifiers (i.e., a formula of the form ∃x_1 ∀x_2 ∃x_3 …, where x_1, x_2, x_3 are Boolean variables), determine if the formula is true or false.

 PSPACE Algorithm: QBF can be solved using a backtracking algorithm with polynomial space. The challenge is to maintain only a small portion of the search space at a time rather than storing the entire computation tree.

Time Complexity:

 The time complexity for PSPACE algorithms can be exponential (in terms of the
number of steps), but the space complexity is constrained to be polynomial.

Space Complexity:
 PSPACE problems use a polynomial amount of space, meaning that the amount of memory used during computation does not grow exponentially with the input size. This contrasts with EXPTIME problems, which may require exponential time.

PSPACE Completeness:

 A problem is PSPACE-complete if it is in PSPACE, and every problem in PSPACE can be polynomial-time reduced to it. QBF is a PSPACE-complete problem, meaning it is one of the hardest problems in PSPACE.

Conclusion:

1. Approximation Algorithms provide solutions that are close to optimal for problems
that are computationally intractable to solve exactly. They often rely on greedy or
dynamic programming strategies with a known approximation ratio.

2. Randomized Algorithms use randomization to simplify problem-solving, improve


average performance, or solve problems more efficiently when deterministic
algorithms are too slow or complex. They offer solutions with high probability and
bounded error.

3. PSPACE covers problems that require polynomial space, but their time complexity
may still be exponential. Problems like QBF are examples of PSPACE-complete
problems, which are the hardest problems in this class. PSPACE allows for space-
efficient computation, even if the time taken may be very large.

Here is a comprehensive list of important university exam questions

✅ UNIT 1: Introduction & Analysis of Algorithms

1. Define an algorithm. What are the characteristics of a good algorithm?

2. Explain time and space complexity. What is time-space trade-off? Give examples.

3. Differentiate between best case, worst case, and average case complexity with
examples.

4. Explain Asymptotic Notations: Big-O, Big-Theta, and Big-Omega.

5. Compare different sorting algorithms based on time and space complexities.

6. Analyze the time complexity of recursive algorithms using:

o Substitution Method
o Recursion Tree Method

o Master’s Theorem

7. Solve the following recurrence using Master’s Theorem:

o T(n) = 2T(n/2) + n

o T(n) = 3T(n/4) + n

8. Explain how performance of algorithms is measured and analyzed.

✅ UNIT 2: Algorithmic Strategies

1. Explain Brute Force approach with suitable examples.

2. Describe Greedy strategy. Illustrate with a problem like activity selection or coin
change.

3. Define Dynamic Programming. Explain with an example:

o 0/1 Knapsack Problem

o Longest Common Subsequence (LCS)

4. Differentiate between Greedy and Dynamic Programming strategies.

5. What is Backtracking? Explain with an example like N-Queens or Subset Sum.

6. Explain Branch and Bound strategy with the example of 0/1 Knapsack or TSP.

7. Illustrate how different strategies can be used to solve the same problem (e.g., TSP
using Greedy, DP, and Branch and Bound).

8. What are heuristics? Discuss their characteristics and application domains.

9. Explain Bin Packing Problem and the approximation strategies to solve it.

✅ UNIT 3: Graph and Tree Algorithms

1. Explain BFS and DFS traversal algorithms. Write their applications and compare
them.

2. Describe Dijkstra’s algorithm with an example.

3. Explain Bellman-Ford algorithm. How does it handle negative weight edges?

4. Differentiate between Dijkstra’s and Bellman-Ford algorithms.

5. What is Transitive Closure of a graph? Explain how to compute it.


6. What is a Minimum Spanning Tree (MST)? Explain Kruskal’s and Prim’s algorithms.

7. Compare Kruskal’s and Prim’s algorithms.

8. What is Topological Sorting? Explain the algorithm with an example.

9. Explain Network Flow problem. Describe Ford-Fulkerson method for computing Max Flow.

✅ UNIT 4: Tractable and Intractable Problems

1. What is the meaning of tractable and intractable problems? Explain with examples.

2. Define class P and class NP problems. Give examples.

3. What are NP-Complete and NP-Hard problems? Explain with examples.

4. Explain Cook’s Theorem. Why is it significant?

5. Describe reduction technique. How is it used to prove NP-completeness?

6. List and describe standard NP-complete problems (e.g., SAT, Clique, Vertex Cover).

7. Explain the difference between NP-complete and NP-hard with examples.

✅ UNIT 5: Advanced Topics

1. What are Approximation Algorithms? Explain with the example of Vertex Cover or
TSP.

2. Discuss approximation ratio. How does it affect algorithm selection?

3. What are Randomized Algorithms? Differentiate between Las Vegas and Monte
Carlo algorithms.

4. Explain Randomized QuickSort and its time complexity.

5. What is the Miller-Rabin Primality Test? Explain its working and error probability.

6. What is PSPACE? How does it differ from P and NP?

7. Explain the Quantified Boolean Formula (QBF) problem and its significance in
PSPACE.

8. Draw a diagram to show relationships between P, NP, NP-Complete, NP-Hard, and PSPACE.
