
DSA SHORT NOTES

1. Array
📌 1️⃣ Array Definition
An array is a collection of elements stored in contiguous memory locations, indexed starting
from 0.

📌 Declaration & Initialization:
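The original code figure is missing here; a minimal sketch of common Java array declarations (values are illustrative):

```java
int[] arr = new int[5];                 // fixed size; elements default to 0
int[] nums = {10, 20, 30, 40, 50};      // declaration with an initializer list
int[][] grid = new int[3][4];           // 2-D array: 3 rows, 4 columns
```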
📌 2️⃣ Key Array Operations & Syntax

| Operation | Syntax (Java) | Time Complexity |
|---|---|---|
| Accessing an Element | arr[i] | O(1) |
| Updating an Element | arr[i] = newValue; | O(1) |
| Finding Length | arr.length | O(1) |
| Traversing an Array | for(int i = 0; i < arr.length; i++) | O(n) |
| Linear Search | for(int i = 0; i < arr.length; i++) if(arr[i] == target) return i; | O(n) |
| Binary Search (Sorted) | Arrays.binarySearch(arr, target); | O(log n) |
| Sorting an Array | Arrays.sort(arr); | O(n log n) |
| Reversing an Array | Collections.reverse(Arrays.asList(arr)); (requires a boxed Integer[] array, not int[]) | O(n) |
| Copying an Array | int[] newArr = Arrays.copyOf(arr, arr.length); | O(n) |

📌 3️⃣ Array Patterns to Remember

🔹 Sliding Window (For Subarrays & Contiguous Elements)
Used for problems that involve subarrays of a fixed/variable size.

📌 Use Case: Maximum sum subarray of size k.


🔹 Two Pointer Approach (For Sorted Arrays & Pair Problems)
Used to avoid extra loops for finding pairs or subarrays.

📌 Use Case: Finding a pair with a given sum.
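A minimal sketch for this use case, assuming the input array is sorted:

```java
// Returns true if some pair in the sorted array sums to target
static boolean hasPairWithSum(int[] sorted, int target) {
    int l = 0, r = sorted.length - 1;
    while (l < r) {
        int sum = sorted[l] + sorted[r];
        if (sum == target) return true;
        if (sum < target) l++;    // need a larger sum
        else r--;                 // need a smaller sum
    }
    return false;
}
```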

🔹 Kadane’s Algorithm (For Maximum Sum Subarray)
📌 Use Case: Maximum sum contiguous subarray.
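A minimal sketch of Kadane’s algorithm (assumes a non-empty array):

```java
// Maximum sum of any contiguous subarray
static int maxSubarraySum(int[] arr) {
    int best = arr[0], current = arr[0];
    for (int i = 1; i < arr.length; i++) {
        current = Math.max(arr[i], current + arr[i]); // extend the run or restart here
        best = Math.max(best, current);
    }
    return best;
}
```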
🔹 Prefix Sum (For Range Sum Queries in O(1))

📌 Use Case: Fast computation of subarray sums.
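A minimal prefix-sum sketch:

```java
// prefix[i] holds the sum of arr[0..i-1]; sum of arr[l..r] = prefix[r + 1] - prefix[l]
static long[] buildPrefix(int[] arr) {
    long[] prefix = new long[arr.length + 1];
    for (int i = 0; i < arr.length; i++) prefix[i + 1] = prefix[i] + arr[i];
    return prefix;
}
```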



🔹 Hashing for Frequency Count (📌 Use Case: Counting occurrences of numbers.)
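A minimal frequency-count sketch (uses java.util.HashMap):

```java
// Count occurrences of each number
static Map<Integer, Integer> frequencies(int[] arr) {
    Map<Integer, Integer> freq = new HashMap<>();
    for (int x : arr) freq.merge(x, 1, Integer::sum);  // increment the count for x
    return freq;
}
```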


📌 4️⃣ Time Complexity Recap

| Operation | Best Case | Worst Case |
|---|---|---|
| Access | O(1) | O(1) |
| Search (Linear Search) | O(1) | O(n) |
| Search (Binary Search, Sorted) | O(1) | O(log n) |
| Insertion (End) | O(1) | O(1) |
| Insertion (Middle/Start) | O(n) | O(n) |
| Deletion (End) | O(1) | O(1) |
| Deletion (Middle/Start) | O(n) | O(n) |
| Sorting | O(n log n) | O(n log n) |
2. String
📌 1️⃣ String Definition
A String is a sequence of characters stored as an array but is immutable in languages like
Java.

📌 Declaration & Initialization:

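The original code figure is missing; a minimal sketch of common String declarations in Java:

```java
String s1 = "hello";                          // string literal
char[] chars = {'a', 'b', 'c'};
String s2 = new String(chars);                // from a char array
StringBuilder sb = new StringBuilder("abc");  // mutable alternative for heavy edits
```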
📌 2️⃣ Key String Operations & Syntax

| Operation | Syntax (Java) | Time Complexity |
|---|---|---|
| Get Length | s.length() | O(1) |
| Access Character | s.charAt(i) | O(1) |
| Concatenation | s1 + s2 / s1.concat(s2) | O(n) |
| Substring | s.substring(i, j) | O(j - i) |
| Convert to Char Array | s.toCharArray() | O(n) |
| Check Equality | s1.equals(s2) | O(n) |
| Compare Strings | s1.compareTo(s2) | O(n) |
| Reverse a String | new StringBuilder(s).reverse().toString() | O(n) |
| Change Case | s.toUpperCase() / s.toLowerCase() | O(n) |
| Find Index of a Character | s.indexOf('a') | O(n) |
| Trim Whitespace | s.trim() | O(n) |
| Replace Characters | s.replace('a', 'b') | O(n) |
| Split a String | s.split(" ") | O(n) |

📌 3️⃣ String Patterns to Remember

🔹 Two Pointer Approach (For Palindromes & Reversing)

Used to compare characters from both ends efficiently.

📌 Use Case: Check if a string is a palindrome.


🔹 Sliding Window (For Substrings & Anagrams)
Used when we need to check substring properties dynamically.

📌 Use Case: Finding anagrams or longest unique substring.
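A variable-size window sketch for the longest substring without repeating characters, one of the named use cases (uses java.util.HashMap):

```java
// Length of the longest substring of s with all distinct characters
static int longestUniqueSubstring(String s) {
    Map<Character, Integer> lastSeen = new HashMap<>();
    int best = 0, left = 0;
    for (int right = 0; right < s.length(); right++) {
        char c = s.charAt(right);
        if (lastSeen.containsKey(c) && lastSeen.get(c) >= left) {
            left = lastSeen.get(c) + 1;   // shrink past the previous occurrence
        }
        lastSeen.put(c, right);
        best = Math.max(best, right - left + 1);
    }
    return best;
}
```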

🔹 Hashing for Frequency Count

Used to count occurrences of characters.

📌 Use Case: Finding first unique character.
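A minimal sketch for this use case (assumes lowercase a–z input):

```java
// Index of the first non-repeating character, or -1 if none exists
static int firstUniqueChar(String s) {
    int[] count = new int[26];
    for (char c : s.toCharArray()) count[c - 'a']++;
    for (int i = 0; i < s.length(); i++) {
        if (count[s.charAt(i) - 'a'] == 1) return i;
    }
    return -1;
}
```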
🔹 KMP Algorithm (For Pattern Matching)

Efficient for searching a pattern of length m in a text of length n in O(n + m). 📌 Use Case: Checking if a pattern exists in a string.
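The code figure is lost; a standard KMP sketch (method names are mine):

```java
// KMP search: index of the first occurrence of pattern in text, or -1
static int kmpSearch(String text, String pattern) {
    if (pattern.isEmpty()) return 0;
    int[] lps = buildLps(pattern);
    int i = 0, j = 0;                       // i -> text, j -> pattern
    while (i < text.length()) {
        if (text.charAt(i) == pattern.charAt(j)) {
            i++; j++;
            if (j == pattern.length()) return i - j;
        } else if (j > 0) {
            j = lps[j - 1];                 // fall back in the pattern, not the text
        } else {
            i++;
        }
    }
    return -1;
}

// lps[k] = length of the longest proper prefix of pattern[0..k] that is also a suffix
static int[] buildLps(String p) {
    int[] lps = new int[p.length()];
    int len = 0;
    for (int k = 1; k < p.length(); ) {
        if (p.charAt(k) == p.charAt(len)) lps[k++] = ++len;
        else if (len > 0) len = lps[len - 1];
        else lps[k++] = 0;
    }
    return lps;
}
```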

📌 4️⃣ Time Complexity Recap

| Operation | Best Case | Worst Case |
|---|---|---|
| Access | O(1) | O(1) |
| Search | O(1) | O(n) |
| Concatenation | O(1) | O(n) |
| Substring Extraction | O(1) | O(n) |
| Reversal | O(n) | O(n) |
| Sorting | O(n log n) | O(n log n) |

3. Hashing
📌 1️⃣ What is Hashing?
Hashing is a technique used to map keys to values efficiently using a hash function. It
helps in quick search, insert, and delete operations.

📌 Common Hashing Data Structures:


1.​ Hash Table - Stores key-value pairs using an array + linked list (chaining).
2.​ Hash Map - Stores key-value pairs (e.g., HashMap in Java).

3. Hash Set - Stores unique elements.
4.​ Bloom Filters - Probabilistic hash structure used for membership checking.

📌 2️⃣ Key Hashing Operations & Syntax

| Operation | Syntax (Java) | Time Complexity |
|---|---|---|
| Insert Key-Value Pair | map.put(key, value); | O(1) (Avg) |
| Access Value by Key | map.get(key); | O(1) (Avg) |
| Check if Key Exists | map.containsKey(key); | O(1) |
| Remove Key | map.remove(key); | O(1) |
| Get All Keys | map.keySet(); | O(n) |
| Get All Values | map.values(); | O(n) |
| Iterate Through HashMap | for (Map.Entry<K, V> entry : map.entrySet()) | O(n) |
| Insert into HashSet | set.add(value); | O(1) |
| Check Presence in HashSet | set.contains(value); | O(1) |
| Delete from HashSet | set.remove(value); | O(1) |

📌 3️⃣ Hashing Patterns to Remember

🔹 Hashing for Frequency Count
Used to count occurrences of elements.

📌 Use Case: Finding duplicates, majority element, anagrams.
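A minimal sketch for the duplicates use case (uses java.util.HashSet):

```java
// Returns true if the array contains any duplicate value
static boolean containsDuplicate(int[] arr) {
    Set<Integer> seen = new HashSet<>();
    for (int x : arr) {
        if (!seen.add(x)) return true;   // add() returns false when x was already present
    }
    return false;
}
```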


🔹 Hashing for Finding Pairs (Two Sum Problem)
Used to find pairs in O(n) instead of O(n²).

📌 Use Case: Finding a pair that sums to a target.
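A minimal Two Sum sketch (uses java.util.HashMap):

```java
// Indices of two numbers summing to target, or null if no such pair exists
static int[] twoSum(int[] nums, int target) {
    Map<Integer, Integer> indexOf = new HashMap<>(); // value -> index
    for (int i = 0; i < nums.length; i++) {
        Integer j = indexOf.get(target - nums[i]);
        if (j != null) return new int[]{j, i};
        indexOf.put(nums[i], i);
    }
    return null;
}
```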


🔹 Hashing for Anagram Detection

Used to check if two strings have the same character frequencies.

📌 Use Case: Checking if two strings are anagrams.
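A minimal sketch (assumes lowercase a–z strings):

```java
// Returns true if s and t are anagrams of each other
static boolean isAnagram(String s, String t) {
    if (s.length() != t.length()) return false;
    int[] count = new int[26];
    for (int i = 0; i < s.length(); i++) {
        count[s.charAt(i) - 'a']++;   // add from s
        count[t.charAt(i) - 'a']--;   // subtract from t
    }
    for (int c : count) if (c != 0) return false;
    return true;
}
```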


📌 4️⃣ Time Complexity Recap

| Operation | Best Case | Worst Case |
|---|---|---|
| Insert | O(1) | O(n) (worst-case collision) |
| Search | O(1) | O(n) |
| Delete | O(1) | O(n) |
| Sorting Keys | O(n log n) | O(n log n) |
4. Sliding Window

📌 1️⃣ Definition
The Sliding Window technique is used for efficiently processing contiguous subarrays
or substrings of a given size or dynamically adjusting window size to solve problems
involving arrays or strings.

● Helps reduce time complexity from O(N²) → O(N).
● Used in subarray sum, longest substring, and maximum/minimum in a subarray problems.

📌 2️⃣ Types of Sliding Window

1. Fixed-size window → Window size remains constant (e.g., "Find max sum of K elements").
2. Variable-size window → Window expands/contracts based on conditions (e.g., "Find the longest substring with at most K distinct characters").

📌 3️⃣ Fixed-Size Sliding Window Implementation

📌 Problem: Find the maximum sum of K consecutive elements in an array.

✅ Approach:

1. Use two pointers: left (start) and right (end).
2. Maintain a window sum of size K.
3. Slide the window by adding the right element and removing the left element.
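The code figure is lost; a minimal sketch of this approach (assumes k ≤ arr.length):

```java
// Maximum sum of K consecutive elements
static int maxSumOfSizeK(int[] arr, int k) {
    int windowSum = 0;
    for (int i = 0; i < k; i++) windowSum += arr[i];   // sum of the first window
    int best = windowSum;
    for (int right = k; right < arr.length; right++) {
        windowSum += arr[right] - arr[right - k];      // add right, remove left
        best = Math.max(best, windowSum);
    }
    return best;
}
```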
📌 4️⃣ Variable-Size Sliding Window Implementation
📌 Problem: Find the length of the longest subarray with sum ≤ S.
✅ Approach:
1.​ Expand the window by adding elements to the right.
2.​ Shrink window from the left if sum exceeds S.
3.​ Keep track of the maximum window size.
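A minimal sketch of this approach (the shrinking step assumes non-negative elements):

```java
// Length of the longest subarray with sum <= S
static int longestSubarrayWithSumAtMost(int[] arr, long S) {
    long sum = 0;
    int best = 0, left = 0;
    for (int right = 0; right < arr.length; right++) {
        sum += arr[right];                  // expand the window to the right
        while (sum > S) sum -= arr[left++]; // shrink from the left while too large
        best = Math.max(best, right - left + 1);
    }
    return best;
}
```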

5. Two Pointer

📌 1️⃣ Definition
The Two Pointer technique is an optimization method used to process pairs of elements
in an array or string by using two pointers instead of nested loops.

●​ Helps reduce time complexity from O(N²) → O(N).


●​ Used for searching, sorting, merging, partitioning, and subarray problems.

📌 2️⃣ Types of Two Pointer Approach

1. Opposite Direction Pointers
   ○ Used when dealing with sorted arrays.
   ○ Common for finding pairs (e.g., sum problems, palindromes, two-sum in a sorted array).
2. Same Direction Pointers
   ○ Used for traversing and modifying elements efficiently.
   ○ Common in merging, removing duplicates, and longest subarray problems.

📌 3️⃣ Opposite Direction Two Pointer Example

📌 Problem: Check if a given string is a palindrome.

✅ Approach

1. Use two pointers:
   ○ Left pointer (l) at the beginning.
   ○ Right pointer (r) at the end.
2. Compare characters at both pointers.
3. Move pointers inward until they meet.
✔ Time Complexity: O(N)​
✔ Space Complexity: O(1)
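A minimal sketch of this approach:

```java
// Returns true if s reads the same forwards and backwards
static boolean isPalindrome(String s) {
    int l = 0, r = s.length() - 1;
    while (l < r) {
        if (s.charAt(l) != s.charAt(r)) return false;
        l++;   // move inward from the left
        r--;   // move inward from the right
    }
    return true;
}
```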

📌 4️⃣ Same Direction Two Pointer Example


📌 Problem: Remove duplicates from a sorted array in-place and return the new length.
✅ Approach
1.​ Use two pointers:
○​ i (slow pointer) tracks the last unique element.

○ j (fast pointer) scans for new unique elements.
2.​ When arr[j] ≠ arr[i], move the unique element forward.

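A minimal sketch of this approach:

```java
// Removes duplicates from a sorted array in place; returns the new length
static int removeDuplicates(int[] arr) {
    if (arr.length == 0) return 0;
    int i = 0;                              // slow pointer: last unique position
    for (int j = 1; j < arr.length; j++) {  // fast pointer: scans for new values
        if (arr[j] != arr[i]) {
            arr[++i] = arr[j];              // move the unique element forward
        }
    }
    return i + 1;
}
```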

✔ Time Complexity: O(N)​


✔ Space Complexity: O(1)

📌 5️⃣ Key Points to Remember

● Two Pointers optimize O(N²) problems to O(N).
● Use opposite pointers for sum/palindrome problems.
● Use same-direction pointers for merging and modifying.
● Works best with sorted arrays and contiguous subarrays.
6. Stack
📌 1️⃣ What is a Stack?
A stack is a LIFO (Last-In, First-Out) data structure where elements are inserted (pushed)
and removed (popped) from the same end (the top).

📌 Key Applications of Stacks:​


✔️ Function call management (Recursion)​
✔️ Undo/Redo operations​
✔️ Expression evaluation (postfix, infix, prefix)​
✔️ Backtracking (e.g., maze solving, DFS)​

✔️ Parentheses matching

📌 2️⃣ Stack Operations & Syntax

| Operation | Syntax (Java - Stack) | Time Complexity |
|---|---|---|
| Push (Insert at Top) | stack.push(value); | O(1) |
| Pop (Remove Top) | stack.pop(); | O(1) |
| Peek (Top Element) | stack.peek(); | O(1) |
| Check if Empty | stack.isEmpty(); | O(1) |
| Size of Stack | stack.size(); | O(1) |

📌 In Java (Using Stack class):


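The original snippet is missing; a minimal usage sketch (java.util.ArrayDeque is the usual modern alternative):

```java
import java.util.Stack;

public class StackDemo {
    public static void main(String[] args) {
        Stack<Integer> stack = new Stack<>();
        stack.push(10);                      // [10]
        stack.push(20);                      // [10, 20]
        System.out.println(stack.peek());    // 20 (top, not removed)
        System.out.println(stack.pop());     // 20 (removed)
        System.out.println(stack.isEmpty()); // false: 10 remains
    }
}
```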

📌 3️⃣ Stack Implementations



1️⃣ Using Arrays (Fixed Size)

●​ Simple but has a predefined size limitation.

2️⃣ Using Linked List (Dynamic Size)

●​ No size limitation, but requires extra memory for pointers.


📌 4️⃣ Stack Patterns

🔹 Monotonic Stack → Used in problems where elements need to be processed in increasing or decreasing order.
🔹 Min Stack → Stack that supports retrieving the minimum element in O(1).
🔹 Two Stacks in One Array → Efficient way to use limited space.
🔹 Stack Implementation using Queues → Stack using two queues.
🔹 Queue Implementation using Stacks → Queue using two stacks.
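A sketch of the monotonic-stack pattern, using the classic next-greater-element problem as an illustrative example (uses java.util.ArrayDeque; not spelled out in the notes themselves):

```java
// For each element, the next element to its right that is greater (-1 if none)
static int[] nextGreater(int[] arr) {
    int[] result = new int[arr.length];
    Deque<Integer> stack = new ArrayDeque<>();   // indices with decreasing values
    for (int i = arr.length - 1; i >= 0; i--) {
        while (!stack.isEmpty() && arr[stack.peek()] <= arr[i]) stack.pop();
        result[i] = stack.isEmpty() ? -1 : arr[stack.peek()];
        stack.push(i);
    }
    return result;
}
```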
📌 5️⃣ Time Complexity Recap

| Operation | Best Case | Worst Case |
|---|---|---|
| Push | O(1) | O(1) |
| Pop | O(1) | O(1) |
| Peek | O(1) | O(1) |
| Search | O(n) | O(n) |
7. Queue
IS
📌 1️⃣ Definition

A queue is a FIFO (First-In, First-Out) data structure where elements are inserted at the rear and removed from the front. Queues are useful for managing tasks in scheduling, buffers, and scenarios where order of processing matters.

📌 Key Applications:
✔️ Task Scheduling (OS and CPU scheduling)
✔️ BFS (Breadth-First Search)
✔️ Print Queue/Job Processing
✔️ Order Processing Systems (e.g., Bank Queue)
✔️ Circular Buffers
O

📌 2️⃣ Queue Operations & Syntax

| Operation | Syntax (Java - Queue Interface) | Time Complexity |
|---|---|---|
| Enqueue (Insert at Rear) | queue.offer(value); | O(1) |
| Dequeue (Remove from Front) | queue.poll(); | O(1) |
| Peek (Front Element) | queue.peek(); | O(1) |
| Check if Empty | queue.isEmpty(); | O(1) |
| Size of Queue | queue.size(); | O(1) |

📌 In Java (Using LinkedList as Queue):


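The original snippet is missing; a minimal usage sketch:

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<Integer> queue = new LinkedList<>();
        queue.offer(10);                     // rear: [10]
        queue.offer(20);                     // rear: [10, 20]
        System.out.println(queue.peek());    // 10 (front, not removed)
        System.out.println(queue.poll());    // 10 (removed from front)
        System.out.println(queue.size());    // 1
    }
}
```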
📌 3️⃣ Types of Queues

🔹 1️⃣ Simple Queue

Basic queue where elements are inserted at the rear and removed from the front.

🔹 2️⃣ Circular Queue

In a circular queue, the rear and front pointers wrap around when they reach the end. This helps in utilizing the entire array space efficiently.

🔹 3️⃣ Priority Queue

A priority queue stores elements in order of priority, rather than the order of insertion. A higher priority element is always dequeued before a lower priority element.

● Min-Heap: The element with the lowest value has the highest priority.
● Max-Heap: The element with the highest value has the highest priority.

🔹 4️⃣ Deque (Double-Ended Queue)

A deque allows adding and removing elements from both ends (front and rear). It's often used for problems like sliding window, or when you need access to both ends.

📌 4️⃣ Time Complexity Recap

| Operation | Time Complexity |
|---|---|
| Enqueue (Insert at Rear) | O(1) |
| Dequeue (Remove from Front) | O(1) |
| Peek (Front Element) | O(1) |
| Search | O(n) |
8. Linked List

📌 1️⃣ Definition
A linked list is a linear data structure where elements (called nodes) are stored in
non-contiguous memory locations. Each node contains:

1.​ Data - The value stored.


2.​ Next (pointer/reference) - A reference to the next node in the sequence.

📌 Key Applications:
✔️ Dynamic Memory Allocation
✔️ Implementing Queues, Stacks, Graphs, and Hash Tables
✔️ Memory-Efficient Data Structures (e.g., when frequent insertions/deletions are required)

A linked list allows efficient insertions and deletions but has slower access times compared to arrays.

📌 2️⃣ Types of Linked Lists

1. Singly Linked List → Each node points to the next node.
2. Doubly Linked List → Each node has pointers to both the next and previous nodes.
3. Circular Linked List → The last node connects back to the first node.
4. Circular Doubly Linked List → A doubly linked list where the last node links to the first node.

📌 3️⃣ Node Structure (Java)

A linked list consists of nodes where each node has:

● Data (value stored in the node)
● Pointer (reference to the next node)
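The original class figure is missing; a minimal node class matching this description:

```java
// Singly linked list node
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
        this.next = null;
    }
}
```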
📌 4️⃣ Basic Operations

4.1 Insert at Head
4.2 Insert at Tail
4.3 Delete a Node by Value
4.4 Search a Node
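The original code figures are missing; minimal sketches of all four operations, assuming the Node class above (the head reference is passed in and returned):

```java
// 4.1 Insert at head: the new node becomes the head. O(1)
static Node insertAtHead(Node head, int value) {
    Node node = new Node(value);
    node.next = head;
    return node;
}

// 4.2 Insert at tail: walk to the end, then link. O(n) for a singly linked list
static Node insertAtTail(Node head, int value) {
    Node node = new Node(value);
    if (head == null) return node;
    Node cur = head;
    while (cur.next != null) cur = cur.next;
    cur.next = node;
    return head;
}

// 4.3 Delete the first node holding value. O(n)
static Node deleteByValue(Node head, int value) {
    if (head == null) return null;
    if (head.data == value) return head.next;
    Node cur = head;
    while (cur.next != null && cur.next.data != value) cur = cur.next;
    if (cur.next != null) cur.next = cur.next.next;   // unlink the match
    return head;
}

// 4.4 Search: returns true if value is present. O(n)
static boolean search(Node head, int value) {
    for (Node cur = head; cur != null; cur = cur.next) {
        if (cur.data == value) return true;
    }
    return false;
}
```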


📌 5️⃣ Operations on Linked List

| Operation | Syntax | Time Complexity |
|---|---|---|
| Insertion (at front) | addFirst(value); | O(1) |
| Insertion (at end) | addLast(value); | O(n) (for singly) or O(1) (for doubly) |
| Insertion (at specific position) | add(index, value); | O(n) |
| Deletion (from front) | removeFirst(); | O(1) |
| Deletion (from end) | removeLast(); | O(n) (for singly) or O(1) (for doubly) |
| Deletion (from specific position) | remove(index); | O(n) |
| Search (by value) | contains(value); | O(n) |
| Traversal | printList(); | O(n) |

📌 6️⃣ Time Complexity Recap

| Operation | Time Complexity |
|---|---|
| Insertion (at front) | O(1) |
| Insertion (at end) | O(n) (for singly) or O(1) (for doubly) |
| Insertion (at specific position) | O(n) |
| Deletion (from front) | O(1) |
| Deletion (from end) | O(n) (for singly) or O(1) (for doubly) |
| Deletion (from specific position) | O(n) |
| Search (by value) | O(n) |
| Traversal | O(n) |
9. Binary Search

📌 1️⃣ Definition
Binary Search is an efficient algorithm for searching an element in a sorted array by
repeatedly dividing the search interval in half.

📌 2️⃣ Binary Search Operations

| Operation | Syntax (Java) | Time Complexity |
|---|---|---|
| Binary Search | binarySearch(arr, target) | O(log n) |
| Binary Search (Recursive) | binarySearch(arr, left, right, target) | O(log n) |

Recursive Method Syntax (Java)
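The original code figure is missing; a minimal recursive sketch:

```java
// Recursive binary search on a sorted array; returns the index of target or -1
static int binarySearch(int[] arr, int left, int right, int target) {
    if (left > right) return -1;              // empty range: not found
    int mid = left + (right - left) / 2;      // avoids int overflow
    if (arr[mid] == target) return mid;
    if (arr[mid] < target) return binarySearch(arr, mid + 1, right, target);
    return binarySearch(arr, left, mid - 1, target);
}
```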

📌 3️⃣ Binary Search Variants

1. Finding the First Occurrence: Modify the algorithm to continue searching on the left even after finding the target.
2. Finding the Last Occurrence: Modify the algorithm to continue searching on the right even after finding the target.
3. Finding the Closest Element: Adjust the search space to return the closest element if the target is not found.
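A sketch of the first variant; the last-occurrence version mirrors it with left = mid + 1 after recording a match:

```java
// First index of target in a sorted array with duplicates, or -1
static int firstOccurrence(int[] arr, int target) {
    int left = 0, right = arr.length - 1, result = -1;
    while (left <= right) {
        int mid = left + (right - left) / 2;
        if (arr[mid] == target) {
            result = mid;          // record the match...
            right = mid - 1;       // ...but keep searching to the left
        } else if (arr[mid] < target) {
            left = mid + 1;
        } else {
            right = mid - 1;
        }
    }
    return result;
}
```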

📌 4️⃣ Time Complexity


●​ Best Case: O(1) (Target is at mid)
●​ Average & Worst Case: O(log n)
10. Tree

📌 1️⃣ Definition
A tree is a hierarchical data structure consisting of nodes connected by edges. The top node
is called the root, and each node contains a value or data and references (or links) to its
children.

Basic Terminology:

● Node: An element of the tree containing data.
● Root: The topmost node.
● Child: A node directly connected to another node when moving away from the root.
● Parent: A node directly connected to another node when moving towards the root.
● Leaf: A node with no children.
● Height: The length of the longest path from the root to a leaf.
● Depth: The length of the path from the root to a node.
● Subtree: A tree consisting of a node and its descendants.

📌 2️⃣ Tree Node Structure (Java)
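The original class figure is missing; a minimal node class:

```java
// Binary tree node
class TreeNode {
    int val;
    TreeNode left, right;

    TreeNode(int val) {
        this.val = val;
    }
}
```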

📌 3️⃣ Types of Trees

🔹 1️⃣ Binary Tree

Each node has at most two children (left and right).

🔹 2️⃣ Binary Search Tree (BST)

A binary tree where:

● Left subtree contains only nodes with smaller values.
● Right subtree contains only nodes with larger values.

🔹 3️⃣ AVL Tree (Self-balancing BST)

A balanced binary search tree where the difference between the heights of left and right subtrees is at most 1.

🔹 4️⃣ Heap

A binary tree that satisfies the heap property:

● Max-Heap: Parent nodes are greater than or equal to their children.
● Min-Heap: Parent nodes are smaller than or equal to their children.

📌 4️⃣ Tree Operations

| Operation | Syntax | Time Complexity |
|---|---|---|
| Insertion | insert(root, value); | O(log n) (for BST) |
| Deletion | delete(root, value); | O(log n) (for BST) |
| Search | search(root, value); | O(log n) (for BST) |
| Inorder Traversal | inOrderTraversal(root); | O(n) |
| Preorder | preOrderTraversal(root); | O(n) |
| Postorder | postOrderTraversal(root); | O(n) |
| Level Order | levelOrderTraversal(root); | O(n) |

📌 5️⃣ Tree Traversals

1. Inorder Traversal (Left, Root, Right)
   ○ Used for BSTs to get nodes in ascending order.
2. Preorder Traversal (Root, Left, Right)
   ○ Used to copy a tree or create a prefix expression.
3. Postorder Traversal (Left, Right, Root)
   ○ Used for deletion or postfix expressions.
4. Level Order Traversal (Breadth-first)
   ○ Traverse the tree level by level from top to bottom.

Sketches of all four traversals follow.
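Minimal sketches, assuming the TreeNode class above:

```java
import java.util.ArrayDeque;
import java.util.Queue;

class Traversals {
    // Inorder: Left, Root, Right
    static void inOrder(TreeNode node) {
        if (node == null) return;
        inOrder(node.left);
        System.out.print(node.val + " ");
        inOrder(node.right);
    }

    // Preorder: Root, Left, Right
    static void preOrder(TreeNode node) {
        if (node == null) return;
        System.out.print(node.val + " ");
        preOrder(node.left);
        preOrder(node.right);
    }

    // Postorder: Left, Right, Root
    static void postOrder(TreeNode node) {
        if (node == null) return;
        postOrder(node.left);
        postOrder(node.right);
        System.out.print(node.val + " ");
    }

    // Level order: visit nodes level by level using a queue
    static void levelOrder(TreeNode root) {
        if (root == null) return;
        Queue<TreeNode> queue = new ArrayDeque<>();
        queue.offer(root);
        while (!queue.isEmpty()) {
            TreeNode node = queue.poll();
            System.out.print(node.val + " ");
            if (node.left != null) queue.offer(node.left);
            if (node.right != null) queue.offer(node.right);
        }
    }
}
```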
📌 6️⃣ Time Complexity for Operations

| Operation | Time Complexity |
|---|---|
| Search | O(log n) (BST) |
| Insertion | O(log n) (BST) |
| Deletion | O(log n) (BST) |
| Traversal | O(n) |
| Level Order | O(n) |

📌 7️⃣ Common Tree Patterns

● Balanced Trees: AVL trees, Red-Black trees (balancing on insertion and deletion).
● Complete Trees: A binary tree in which every level, except possibly the last, is completely filled.
● Height of a Tree: Calculated from the root to the deepest leaf.

11. Trie
📌 1️⃣ Definition

A Trie (or Prefix Tree) is a tree-like data structure that stores a dynamic set of strings, where each node represents a character of a string. It is primarily used for fast string search and prefix matching.

Key Terminology:

● Root: The starting point of the trie, usually empty.
● Node: Each node represents a character of the string.
● Edge: A link between two nodes.
● Leaf: A node representing the end of a string.
● Word: A complete string represented by the path from the root to a leaf node.

📌 2️⃣ Trie Operations

| Operation | Syntax | Time Complexity |
|---|---|---|
| Insert | insert(root, word) | O(m) |
| Search | search(root, word) | O(m) |
| StartsWith | startsWith(root, prefix) | O(m) |
| Delete | delete(root, word) | O(m) |

Where m is the length of the string (word or prefix).
📌 3️⃣ Trie Node Structure (Java)

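The original class figure is missing; a minimal node class for lowercase a–z words:

```java
// Trie node: one child slot per letter, plus an end-of-word flag
class TrieNode {
    TrieNode[] children = new TrieNode[26];
    boolean isEndOfWord = false;
}
```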
📌 4️⃣ Trie Operations

🔹 Insert Operation

To insert a word, start from the root node and traverse through the trie. If a node does not exist for a character, create a new node. Mark the last node of the word as isEndOfWord = true.

🔹 Search Operation

To search for a word, follow the same steps as in the insert operation. If a node for a character does not exist, return false. After traversing all characters, check if the last node is marked as isEndOfWord = true.

🔹 StartsWith Operation

To check if a prefix exists, traverse the trie similarly to the search operation but do not require the last node to be isEndOfWord.

🔹 Delete Operation

To delete a word, follow the search path to find the node corresponding to the last character of the word. Then mark isEndOfWord = false. If that node has no children, delete the node. This operation is more complicated than insert/search due to potential cleanup of empty nodes.
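A minimal sketch of insert, search, and startsWith, assuming the TrieNode class above (delete is omitted because of the cleanup subtleties noted):

```java
class Trie {
    private final TrieNode root = new TrieNode();

    void insert(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            int i = c - 'a';
            if (node.children[i] == null) node.children[i] = new TrieNode();
            node = node.children[i];
        }
        node.isEndOfWord = true;     // mark the last node of the word
    }

    boolean search(String word) {
        TrieNode node = walk(word);
        return node != null && node.isEndOfWord;
    }

    boolean startsWith(String prefix) {
        return walk(prefix) != null; // end-of-word flag not required
    }

    // Follows the path for s; null if some character is missing
    private TrieNode walk(String s) {
        TrieNode node = root;
        for (char c : s.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return null;
        }
        return node;
    }
}
```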

📌 5️⃣ Time Complexity


●​ Insert Operation: O(m) where m is the length of the word.
●​ Search Operation: O(m) where m is the length of the word.
●​ StartsWith Operation: O(m) where m is the length of the prefix.
●​ Delete Operation: O(m) where m is the length of the word.
📌 6️⃣ Space Complexity

The space complexity of a trie depends on the number of words and their lengths. For n words of length up to m, the worst case is O(n * m) nodes, each of which stores 26 child pointers (one per lowercase letter).

12. Heap/Priority Queue
📌 1️⃣ Definition
A Heap is a special tree-based data structure that satisfies the heap property:

● Max Heap: The value of each node is greater than or equal to the values of its children.
● Min Heap: The value of each node is less than or equal to the values of its children.

A Priority Queue is an abstract data type that supports operations to insert elements and remove the element with the highest priority. It can be implemented using a heap, where:

● Max Priority Queue: Removes the largest element.
● Min Priority Queue: Removes the smallest element.

📌 2️⃣ Heap Operations

| Operation | Syntax | Time Complexity |
|---|---|---|
| Insert | insert(heap, element) | O(log n) |
| Extract-Max | extractMax(heap) | O(log n) |
| Extract-Min | extractMin(heap) | O(log n) |
| Peek (Max/Min) | peek(heap) | O(1) |
| Heapify | heapify(arr) | O(n) |
| Delete | delete(heap, index) | O(log n) |

Where n is the number of elements in the heap.
📌 3️⃣ Heap Node Structure (Java)

A binary heap is usually stored in an array: the node at index i has children at indices 2i + 1 and 2i + 2, and its parent at (i - 1) / 2.

📌 4️⃣ Priority Queue Operations (Java)

Max Priority Queue (Using a Max Heap)

Min Priority Queue (Using a Min Heap)
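The original snippets are missing; minimal usage sketches with java.util.PriorityQueue (a min-heap by default; Collections.reverseOrder() gives max-heap behavior):

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Max priority queue: largest element first
        PriorityQueue<Integer> maxPq = new PriorityQueue<>(Collections.reverseOrder());
        maxPq.offer(3); maxPq.offer(10); maxPq.offer(5);
        System.out.println(maxPq.poll());   // 10

        // Min priority queue: smallest element first (default ordering)
        PriorityQueue<Integer> minPq = new PriorityQueue<>();
        minPq.offer(3); minPq.offer(10); minPq.offer(5);
        System.out.println(minPq.poll());   // 3
    }
}
```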

📌 5️⃣ Time Complexity of Operations

| Operation | Time Complexity |
|---|---|
| Insert | O(log n) |
| Extract-Max/Min | O(log n) |
| Peek (Max/Min) | O(1) |
| Heapify | O(n) |
| Delete | O(log n) |
📌 6️⃣ Use Cases of Heaps/Priority Queues
●​ Dijkstra’s Algorithm (for shortest path).
●​ Huffman Encoding (compression algorithms).
●​ Task Scheduling (priority-based task execution).
●​ K-th Largest/Smallest Element in an array.
●​ Real-time Data Streaming (median finding).

13. Intervals

📌 1️⃣ Definition

An interval is a range of values, typically used to represent a continuous sequence of numbers. In the context of algorithms and data structures, intervals are often represented as pairs of integers, such as [start, end].

Types of Intervals:

● Closed Interval: [start, end] includes both endpoints.
● Open Interval: (start, end) excludes both endpoints.
● Half-Open Interval: [start, end) or (start, end] includes one endpoint but not the other.

📌 2️⃣ Interval Operations

| Operation | Syntax | Time Complexity |
|---|---|---|
| Merge Intervals | mergeIntervals(intervals) | O(n log n) |
| Insert Interval | insertInterval(intervals, newInterval) | O(n) |
| Interval Intersection | intervalIntersection(intervals1, intervals2) | O(n + m) |
| Interval Removal/Deletion | removeInterval(intervals, toRemove) | O(n) |
| Overlapping Intervals Check | hasOverlap(intervals) | O(n) |

Where n is the number of intervals.

📌 3️⃣ Merge Intervals

Problem: Given a collection of intervals, merge all overlapping intervals.

Algorithm:

1.​ Sort the intervals by the start time.


2.​ Traverse through the sorted intervals, merging intervals that overlap.
3.​ If two intervals don’t overlap, add the current interval to the result.
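A minimal sketch of this algorithm, representing each interval as an int[] {start, end} (uses java.util):

```java
// Merge all overlapping intervals
static int[][] mergeIntervals(int[][] intervals) {
    Arrays.sort(intervals, (a, b) -> Integer.compare(a[0], b[0])); // sort by start
    List<int[]> merged = new ArrayList<>();
    for (int[] interval : intervals) {
        int[] last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
        if (last != null && interval[0] <= last[1]) {
            last[1] = Math.max(last[1], interval[1]);   // overlap: extend the end
        } else {
            merged.add(interval);                       // no overlap: start a new interval
        }
    }
    return merged.toArray(new int[0][]);
}
```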
📌 4️⃣ Insert Interval
Insert a new interval into a list of non-overlapping intervals and return the new list of
intervals.

Algorithm:

1.​ Sort the intervals (if not already sorted).


2.​ Find the position to insert the new interval.
3.​ Merge any overlapping intervals.

📌 5️⃣ Interval Intersection

Given two lists of intervals, return the intersection of these two lists.

Algorithm:

1. Sort both lists by their start times.
2. Use two pointers to traverse through both lists.
3. If there’s an overlap, add the intersection to the result.
📌 6️⃣ Time Complexity

| Operation | Time Complexity |
|---|---|
| Merge Intervals | O(n log n) |
| Insert Interval | O(n) |
| Interval Intersection | O(n + m) |
| Interval Removal | O(n) |
| Overlapping Check | O(n) |

Where n and m are the number of intervals in the respective lists.

📌 7️⃣ Interval Overlap Check

Problem: Check if two intervals overlap.

Algorithm:

● Two intervals [a1, a2] and [b1, b2] overlap if a1 <= b2 and b1 <= a2.

📌 8️⃣ Use Cases of Intervals


●​ Scheduling Problems: Finding overlapping schedules or gaps between schedules.
●​ Calendar Applications: Merging events that overlap.
●​ Range Queries: Searching for data within a specified range.
●​ Resource Allocation: Ensuring resources are allocated within time slots without
conflicts.
14. Greedy Algorithms
📌 1️⃣ Definition
A Greedy Algorithm is an approach for solving optimization problems by making a
sequence of choices, each of which looks the best at the moment. The idea is to choose the
local optimum at each step with the hope of finding the global optimum.

Greedy algorithms are often used when a problem can be broken down into stages where
decisions at each stage are made based on local optimal choices.

📌 2️⃣ Characteristics of Greedy Algorithms

1. Greedy Choice Property: A global optimum can be arrived at by selecting a local optimum.
2. Optimal Substructure: A problem has optimal substructure if an optimal solution to the problem contains optimal solutions to the subproblems.

Note: Greedy algorithms do not always guarantee an optimal solution, but for certain problems (like Huffman Coding and Dijkstra’s Algorithm), they are proven to give the best result.
N

📌 3️⃣ Greedy Algorithm Steps

1. Initialize: Set the initial solution to the problem.
2. Iterate: For each step, select the next local optimal choice.
3. Check: Evaluate the selected solution.
4. Repeat: If the solution is not complete, repeat the process with the next choice.
5. Terminate: When the solution is complete, return the solution.

📌 4️⃣ Greedy Algorithm Operations/Applications

| Operation | Description | Time Complexity |
|---|---|---|
| Activity Selection | Select the maximum number of non-overlapping activities. | O(n log n) |
| Fractional Knapsack | Maximize value with a given weight capacity using fractional items. | O(n log n) |
| Huffman Coding | Build an optimal prefix code for encoding. | O(n log n) |
| Dijkstra’s Shortest Path Algorithm | Find the shortest path in a graph. | O(V log V + E) |
| Prim’s Algorithm | Find the minimum spanning tree of a graph. | O(V log V + E) |

Where n is the number of items, V is the number of vertices, and E is the number of edges.
📌 5️⃣ Greedy Algorithm Examples

1. Activity Selection Problem

Given a set of activities with start and finish times, select the maximum number of non-overlapping activities.

Algorithm:

1. Sort the activities based on their finish times.
2. Iterate through the activities, selecting those that start after the last selected one ends.
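The original code figure is missing; a minimal sketch, representing each activity as an int[] {start, finish} (uses java.util.Arrays):

```java
// Maximum number of non-overlapping activities
static int maxActivities(int[][] activities) {
    Arrays.sort(activities, (a, b) -> Integer.compare(a[1], b[1])); // by finish time
    int count = 0, lastFinish = Integer.MIN_VALUE;
    for (int[] activity : activities) {
        if (activity[0] >= lastFinish) {   // starts after the last selected one ends
            count++;
            lastFinish = activity[1];
        }
    }
    return count;
}
```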

2. Fractional Knapsack Problem

Given a set of items, each with a weight and value, and a knapsack with a given capacity, determine the maximum value that can be obtained by filling the knapsack with fractions of items.

Algorithm:

1.​ Calculate the value-to-weight ratio for each item.


2.​ Sort the items by the value-to-weight ratio in descending order.
3.​ Add items (or their fractions) to the knapsack until it is full.
3. Dijkstra’s Algorithm (Greedy for Shortest Path)

This algorithm finds the shortest path between a source node and all other nodes in a graph
with non-negative weights.

Algorithm:

1.​ Initialize the distance to the source node as 0 and all other nodes as infinity.
2.​ Use a priority queue to always choose the next node with the smallest tentative
distance.
3.​ Update the distance of the neighbors based on the chosen node.

📌 6️⃣ Time Complexity Summary

| Algorithm | Time Complexity |
|---|---|
| Activity Selection | O(n log n) |
| Fractional Knapsack | O(n log n) |
| Huffman Coding | O(n log n) |
| Dijkstra’s Algorithm | O(V log V + E) |
| Prim’s Algorithm | O(V log V + E) |

Where n is the number of activities/items and V, E are the vertices and edges of the graph, respectively.

📌 7️⃣ Use Cases of Greedy Algorithms

● Resource Allocation (e.g., Knapsack problem).
● Pathfinding Algorithms (e.g., Dijkstra's and Prim's algorithms).
● Huffman Coding for data compression.
● Scheduling Problems (e.g., Activity Selection).
● Currency Denominations (e.g., finding the least number of coins needed).
C
15. Recursion
📌 1️⃣ Definition
Recursion is a programming technique where a function calls itself to solve a smaller
instance of the same problem. It breaks a problem into smaller subproblems until it reaches
a base case, which does not require further recursion.

📌 2️⃣ Base and Recursive Cases

Every recursive function should have:

● Base Case: The condition that stops the recursion. Without this, the function would call itself indefinitely.
● Recursive Case: The part where the function calls itself with a modified argument to move closer to the base case.

📌 3️⃣ Recursion Syntax

In most programming languages, the general structure of a recursive function is:
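The original figure is missing; a minimal concrete example of the general shape (base case first, then the recursive case):

```java
// Counts down from n to 1, then prints "Done"
static void countDown(int n) {
    if (n == 0) {                 // base case: stop recursing
        System.out.println("Done");
        return;
    }
    System.out.println(n);
    countDown(n - 1);             // recursive case: strictly smaller input
}
```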

📌 4️⃣ Common Recursion Patterns

4.1 Factorial Calculation

A classic example of recursion is calculating the factorial of a number n:

Factorial Formula: n! = n × (n - 1)!
For n = 0, 0! = 1 is the base case.

4.2 Fibonacci Sequence

The Fibonacci sequence is another classic example, where each term is the sum of the two preceding ones.

Fibonacci Formula: F(n) = F(n - 1) + F(n - 2), with F(0) = 0 and F(1) = 1.
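Minimal sketches of both patterns:

```java
// Factorial: n! = n * (n-1)!, with 0! = 1
static long factorial(int n) {
    if (n == 0) return 1;          // base case
    return n * factorial(n - 1);   // recursive case
}

// Naive Fibonacci: O(2^n) without memoization (see the DP section)
static long fib(int n) {
    if (n <= 1) return n;          // base cases: F(0) = 0, F(1) = 1
    return fib(n - 1) + fib(n - 2);
}
```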

📌 6️⃣ Key Points to Remember

● Base Case: Ensure that every recursive function has a well-defined base case to terminate the recursion.
● Recursive Case: Ensure that each recursive call moves closer to the base case.
● Memory Consumption: Recursive calls consume stack space, so avoid excessive recursion depth in some cases.
● Debugging: Recursive solutions can be difficult to debug; visualize the recursion tree if necessary.

📌 7️⃣ Time Complexity

The time complexity of a recursive function depends on the number of recursive calls and the work done in each call.


C

●​ Factorial: O(n)
●​ Fibonacci: O(2^n) (Exponential without memoization)
●​ Binary Search: O(log n)
●​ Tree Traversals: O(n) (where n is the number of nodes in the tree)

📌 8️⃣ Space Complexity


The space complexity of a recursive solution depends on the depth of the recursion tree (the
maximum number of function calls on the call stack). For most recursive algorithms:
●​ Factorial: O(n) (in worst case)
●​ Fibonacci (naive): O(n)
●​ Binary Search: O(log n)
●​ Tree Traversal: O(h), where h is the height of the tree

📌 9️⃣ Recursion vs Iteration


●​ Recursion: Can be simpler and more intuitive for problems like tree traversal,
backtracking, or divide and conquer algorithms.
●​ Iteration: Generally more efficient in terms of space, as it doesn’t add new stack
frames.

16. Backtracking

📌 1️⃣ Definition

Backtracking is an algorithmic technique for solving problems by incrementally building solutions and abandoning a solution as soon as it is determined that it cannot possibly lead to a valid solution.

It is a form of depth-first search (DFS) where, when a problem cannot be solved any further, it backtracks to the previous step and tries another possibility.

📌 2️⃣ Backtracking Approach

1. Choice: Make a choice at each step.
2. Constraint: Check if the current solution satisfies the problem's constraints.
3. Goal: If the solution meets the goal, record it.
4. Backtrack: If no valid solution is found, backtrack to the previous choice and try another path.
5. Repeat until all paths are explored.

📌 3️⃣ Operations/Applications of Backtracking

| Operation | Description | Time Complexity |
|---|---|---|
| Subset Generation | Generate all subsets of a set. | O(2^n) |
| Permutations | Generate all possible permutations of a set. | O(n!) |
| N-Queens Problem | Place N queens on an N x N chessboard so that no two queens threaten each other. | O(N!) |
| Sudoku Solver | Solve a given Sudoku puzzle. | O(9^81) |
| Hamiltonian Path | Find a path in a graph that visits each node exactly once. | O(N!) |

Where n is the size of the input (e.g., number of elements in a set, or number of nodes).
📌 4️⃣ Backtracking Example Problems
1. N-Queens Problem

The N-Queens problem is a classic problem where the goal is to place N queens on an NxN
chessboard such that no two queens threaten each other.

Algorithm:

1.​ Try placing a queen on each row.


2.​ For each column in the row, check if placing a queen leads to a solution.

3. If placing a queen does not lead to a valid configuration, backtrack and try another column or row.

2. Subset Generation

Given a set of numbers, generate all possible subsets (the power set).

Algorithm:
1.​ Start from an empty subset.
2.​ For each element, either include it in the subset or exclude it.
3.​ Use recursion to generate all possible combinations.
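A minimal sketch of this include/exclude recursion (uses java.util):

```java
// Generate all subsets (power set) of nums via backtracking
static List<List<Integer>> subsets(int[] nums) {
    List<List<Integer>> result = new ArrayList<>();
    backtrack(nums, 0, new ArrayList<>(), result);
    return result;
}

static void backtrack(int[] nums, int i, List<Integer> current, List<List<Integer>> result) {
    if (i == nums.length) {
        result.add(new ArrayList<>(current));  // one complete subset
        return;
    }
    backtrack(nums, i + 1, current, result);   // exclude nums[i]
    current.add(nums[i]);                      // include nums[i]
    backtrack(nums, i + 1, current, result);
    current.remove(current.size() - 1);        // undo the choice (backtrack)
}
```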
N

📌 5️⃣ Time Complexity of Backtracking

| Problem | Time Complexity |
|---|---|
| Subset Generation | O(2^n) |
| Permutations | O(n!) |
| N-Queens Problem | O(N!) |
| Sudoku Solver | O(9^81) |
| Hamiltonian Path | O(N!) |

Where n is the size of the set (for subsets or permutations) or the size of the grid (for Sudoku).

📌 6️⃣ Use Cases of Backtracking



●​ Puzzle Solving: Sudoku, crosswords, and N-Queens.


●​ Combinatorial Problems: Generating subsets, permutations, combinations.
●​ Constraint Satisfaction Problems: Solving problems that require satisfying certain
constraints (e.g., Sudoku).
●​ Graph Traversal: Finding paths, Hamiltonian cycles.
17. Graphs
📌 1️⃣ Definition
A Graph is a collection of nodes (vertices) connected by edges (arcs). Graphs are used to
model relationships between pairs of objects, like social networks, web pages, cities, and
much more.

There are two primary types of graphs:

● Directed Graph (Digraph): Each edge has a direction, going from one vertex to another.
● Undirected Graph: Edges have no direction, meaning the relationship between the vertices is mutual.
📌 2️⃣ Graph Terminology

● Vertex (V): A node in the graph.
● Edge (E): A connection between two vertices.
● Adjacent Vertices: Two vertices connected by an edge.
● Degree: The number of edges incident to a vertex.
   ○ In-degree (for directed graphs): The number of edges coming into a vertex.
   ○ Out-degree (for directed graphs): The number of edges going out of a vertex.
● Path: A sequence of edges that connect a sequence of vertices.
● Cycle: A path that starts and ends at the same vertex without repeating any other vertices.
● Connected Graph: A graph in which there is a path between every pair of vertices.
● Disconnected Graph: A graph where at least one pair of vertices is not connected by a path.

📌 3️⃣ Types of Graphs

● Weighted Graph: Each edge has an associated weight or cost.
● Unweighted Graph: Edges do not have weights.
● Complete Graph: A graph in which there is an edge between every pair of vertices.
● Bipartite Graph: A graph whose vertex set can be divided into two disjoint sets, such that no two vertices within the same set are adjacent.
● Tree: A connected graph with no cycles.
● Forest: A collection of disjoint trees.


📌 4️⃣ Graph Representation

Graphs can be represented in two common ways:

1. Adjacency Matrix:
   ○ A 2D array of size V x V, where each element indicates whether there is an edge between the corresponding pair of vertices.
   ○ Space Complexity: O(V²).
2. Adjacency List:
   ○ A list of lists, where each list at index i contains all the vertices adjacent to vertex i.
   ○ Space Complexity: O(V + E), where E is the number of edges.

Example (for an undirected graph): both representations are sketched below.
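A minimal sketch on an illustrative 4-vertex graph with edges (0-1), (0-2), (1-2), (2-3) (uses java.util):

```java
// Adjacency list: adj.get(u) holds the neighbors of u
static List<List<Integer>> buildAdjList(int vertices, int[][] edges) {
    List<List<Integer>> adj = new ArrayList<>();
    for (int i = 0; i < vertices; i++) adj.add(new ArrayList<>());
    for (int[] e : edges) {
        adj.get(e[0]).add(e[1]);
        adj.get(e[1]).add(e[0]);    // undirected: add both directions
    }
    return adj;
}

// Equivalent adjacency matrix: matrix[u][v] == 1 means an edge between u and v
static final int[][] MATRIX = {
    {0, 1, 1, 0},
    {1, 0, 1, 0},
    {1, 1, 0, 1},
    {0, 0, 1, 0}
};
```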

📌 5️⃣ Graph Traversal

1. Depth-First Search (DFS)

DFS is a traversal technique that explores as far as possible along each branch before backtracking.

● Time Complexity: O(V + E) for both directed and undirected graphs.
● Space Complexity: O(V) due to the recursion stack.

Java Code (DFS using Recursion):
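The original snippet is missing; a minimal sketch over the adjacency list above:

```java
// Recursive DFS starting from vertex u
static void dfs(List<List<Integer>> adj, int u, boolean[] visited) {
    visited[u] = true;
    System.out.print(u + " ");
    for (int v : adj.get(u)) {
        if (!visited[v]) dfs(adj, v, visited);   // go deeper before backtracking
    }
}
```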
2. Breadth-First Search (BFS)

BFS explores the graph level by level, starting from a source node and visiting all of its neighbors before moving on to their neighbors.

● Time Complexity: O(V + E).
● Space Complexity: O(V) for the queue.

Java Code (BFS using Queue):
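The original snippet is missing; a minimal sketch (uses java.util):

```java
// BFS starting from vertex source
static void bfs(List<List<Integer>> adj, int source) {
    boolean[] visited = new boolean[adj.size()];
    Queue<Integer> queue = new ArrayDeque<>();
    visited[source] = true;
    queue.offer(source);
    while (!queue.isEmpty()) {
        int u = queue.poll();
        System.out.print(u + " ");
        for (int v : adj.get(u)) {
            if (!visited[v]) {          // enqueue each neighbor exactly once
                visited[v] = true;
                queue.offer(v);
            }
        }
    }
}
```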
📌 6️⃣ Graph Algorithms
1. Dijkstra's Shortest Path Algorithm (for Weighted Graphs)

Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a
weighted graph with non-negative weights.

●​ Time Complexity: O((V + E) log V) using a priority queue.


●​ Space Complexity: O(V).
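A sketch of Dijkstra with a priority queue; the weighted adjacency list representation (adj.get(u) holding {neighbor, weight} pairs) is an assumption of this example:

```java
// Shortest distances from source; unreachable vertices stay at Integer.MAX_VALUE
static int[] dijkstra(List<List<int[]>> adj, int source) {
    int n = adj.size();
    int[] dist = new int[n];
    Arrays.fill(dist, Integer.MAX_VALUE);
    dist[source] = 0;
    PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
    pq.offer(new int[]{source, 0});
    while (!pq.isEmpty()) {
        int[] top = pq.poll();
        int u = top[0], d = top[1];
        if (d > dist[u]) continue;              // stale queue entry, skip
        for (int[] edge : adj.get(u)) {
            int v = edge[0], w = edge[1];
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;          // relax the edge
                pq.offer(new int[]{v, dist[v]});
            }
        }
    }
    return dist;
}
```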

2. Bellman-Ford Algorithm (for Weighted Graphs)

This algorithm computes the shortest paths from a source vertex to all other vertices, even with negative weights.
●​ Time Complexity: O(V * E).
●​ Space Complexity: O(V).

3. Floyd-Warshall Algorithm (for All-Pairs Shortest Paths)

Floyd-Warshall computes shortest paths between all pairs of vertices in a weighted graph.

● Time Complexity: O(V³).
● Space Complexity: O(V²).

4. Kruskal's Algorithm (for Minimum Spanning Tree)

Kruskal's algorithm finds a minimum spanning tree for a connected, undirected graph.

● Time Complexity: O(E log E).
● Space Complexity: O(E).

5. Prim's Algorithm (for Minimum Spanning Tree)

Prim’s algorithm also finds a minimum spanning tree but grows the tree by adding vertices one by one.

● Time Complexity: O(V²) using an adjacency matrix, O(E + V log V) with a priority queue.
● Space Complexity: O(V).
📌 7️⃣ Time Complexity of Common Graph Operations

| Operation | Time Complexity |
|---|---|
| DFS | O(V + E) |
| BFS | O(V + E) |
| Dijkstra’s Algorithm | O((V + E) log V) |
| Bellman-Ford | O(V * E) |
| Floyd-Warshall | O(V³) |
| Kruskal’s Algorithm | O(E log E) |
| Prim’s Algorithm | O(V²) or O(E + V log V) |

Where V is the number of vertices and E is the number of edges.

📌 2️⃣ Advanced Graph Algorithms
C
2.1 Maximum Flow Problem - Ford-Fulkerson Algorithm
The Maximum Flow problem involves finding the maximum flow from a source node to a sink node in a flow network.

● Ford-Fulkerson Algorithm is the most common approach to solving the maximum flow problem.
● It uses augmenting paths to find the maximum flow in a graph.

Key Concepts

● Residual Graph: A graph showing remaining capacities of edges after some flow has been pushed through.
● Augmenting Path: A path from source to sink that can carry additional flow.

Time Complexity:

● O(max flow × number of edges), though it depends on how augmenting paths are chosen.

2.2 Shortest Path Algorithms (Advanced)

Dijkstra's Algorithm (Optimized)

●​ Dijkstra’s algorithm can be optimized using priority queues (min-heaps).


●​ Time Complexity: O((V + E) log V) using a priority queue.

For negative weights, Dijkstra doesn't work, and other algorithms must be used.
Bellman-Ford Algorithm (for Negative Weights)

The Bellman-Ford algorithm computes shortest paths from a source to all other vertices,
even in graphs with negative edge weights.

●​ Time Complexity: O(V * E).


●​ Can detect negative weight cycles.

Johnson's Algorithm (for All-Pairs Shortest Path)

This algorithm works by first running Bellman-Ford to reweight the edges, making all edge
weights non-negative, then running Dijkstra for each vertex.

●​ Time Complexity: O(V² log V + VE) (for sparse graphs).

2.3 Topological Sorting

Topological Sorting is a linear ordering of vertices in a Directed Acyclic Graph (DAG) such that for every directed edge (u, v), vertex u comes before vertex v in the ordering.

● Applications: Task scheduling, compilation ordering.
● Approach: Can be done using DFS or Kahn’s Algorithm (BFS-based).

Time Complexity: O(V + E)
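A minimal sketch of Kahn’s Algorithm over an adjacency-list DAG (uses java.util):

```java
// BFS-based topological sort
static List<Integer> topologicalSort(List<List<Integer>> adj) {
    int n = adj.size();
    int[] inDegree = new int[n];
    for (List<Integer> neighbors : adj)
        for (int v : neighbors) inDegree[v]++;
    Queue<Integer> queue = new ArrayDeque<>();
    for (int u = 0; u < n; u++)
        if (inDegree[u] == 0) queue.offer(u);   // start with all sources
    List<Integer> order = new ArrayList<>();
    while (!queue.isEmpty()) {
        int u = queue.poll();
        order.add(u);
        for (int v : adj.get(u))
            if (--inDegree[v] == 0) queue.offer(v);
    }
    return order;   // shorter than n => the graph contained a cycle
}
```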


2.4 Articulation Points and Bridges

● Articulation Points: A vertex whose removal increases the number of connected components in a graph.
● Bridges: An edge whose removal increases the number of connected components in a graph.

These concepts are important in network design (for identifying critical points).

DFS-based approach:

● Use DFS to detect articulation points and bridges by tracking discovery and low values of vertices.
📌 3️⃣ Advanced Graph Representation
3.1 Adjacency Matrix (for Dense Graphs)

●​ Space Complexity: O(V²) (useful for dense graphs).


●​ Operations: Checking if there’s an edge between two vertices can be done in
constant time O(1).

3.2 Adjacency List (for Sparse Graphs)

● Space Complexity: O(V + E) (useful for sparse graphs).
● Operations: Neighbors of a vertex can be accessed in O(k) time, where k is the number of neighbors.

3.3 Incidence Matrix

An Incidence Matrix represents a graph by indicating the edges incident to the vertices.

● Space Complexity: O(V * E)
● Useful for edge-centric algorithms like maximum flow.
📌 4️⃣ Important Graph Theorems

4.1 Euler’s Theorem (for Eulerian Circuits)

● Eulerian Circuit: A cycle that visits every edge of the graph exactly once.
● Conditions: A connected graph has an Eulerian Circuit if and only if every vertex has an even degree.

4.2 Fleury’s Algorithm (for Eulerian Path/Circuit)

● An algorithm to find an Eulerian path or circuit in a graph, based on conditions for edge traversal.

📌 5️⃣ Applications of Advanced Graphs


1.​ Network Flow: Used in transportation, communication, and data networks to model
the flow of resources.
2.​ Matching Problems: Bipartite graph matching for job assignments, marriage
problems, etc.
3.​ Social Networks: Analyzing relationships, influence, and connectivity.
4.​ Recommendation Systems: Collaborative filtering and graph-based
recommendation engines.
5.​ Route Planning: Shortest path algorithms applied in navigation systems (Google
Maps).
6.​ Network Reliability: Articulation points and bridges to determine critical points in
communication networks.
7.​ Cycle Detection: Used in scheduling and deadlock detection.

📌 6️⃣ Time Complexity Summary for Advanced Graph Algorithms

| Algorithm | Time Complexity |
|---|---|
| Ford-Fulkerson (Edmonds-Karp) | O(VE²) |
| Hopcroft-Karp (Bipartite Matching) | O(√V * E) |
| Dijkstra's Algorithm | O((V + E) log V) |
| Bellman-Ford Algorithm | O(V * E) |
| Floyd-Warshall Algorithm | O(V³) |
| Kosaraju’s Algorithm (SCC) | O(V + E) |
| Topological Sort (DFS) | O(V + E) |

18. Dynamic Programming

📌 1️⃣ Definition
Dynamic Programming (DP) is a method used for solving complex problems by breaking them down into simpler subproblems. It is an optimization technique used to avoid redundant calculations by storing the results of subproblems and reusing them when needed.

DP is used for optimization problems where we need to find the best solution under given constraints, and where solutions to subproblems overlap.

📌 2️⃣ Key Concepts



2.1 Optimal Substructure

A problem has optimal substructure if the optimal solution of the problem can be
constructed efficiently from optimal solutions of its subproblems.

2.2 Overlapping Subproblems

A problem has overlapping subproblems if the problem can be broken down into
subproblems that are solved multiple times.
📌 3️⃣ DP Approaches
3.1 Top-Down Approach (Memoization)

●​ Memoization involves solving the problem recursively, but storing the results of
subproblems to avoid repeated work.
●​ When a subproblem is encountered for the first time, it is solved and stored; if it’s
encountered again, the stored result is used.

Time Complexity: O(n) (depends on the number of unique subproblems)

Space Complexity: O(n) (for storing results)

Example (Fibonacci):
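The original snippet is missing; a minimal top-down sketch:

```java
// Memoized Fibonacci: each subproblem is solved once
static long fibMemo(int n, long[] memo) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];          // reuse a stored result
    memo[n] = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
    return memo[n];
}
// Usage: fibMemo(50, new long[51])
```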

3.2 Bottom-Up Approach (Tabulation)

● Tabulation involves solving the problem iteratively, starting from the base case and building up to the final solution.
● The solutions to all subproblems are stored in a table (array) in a bottom-up manner.

Time Complexity: O(n) (for iterating through the table)

Space Complexity: O(n) (for the table)

Example (Fibonacci):
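The original snippet is missing; a minimal bottom-up sketch:

```java
// Tabulated Fibonacci: fill the table from the base cases upward
static long fibTab(int n) {
    if (n <= 1) return n;
    long[] dp = new long[n + 1];
    dp[1] = 1;                                  // base cases: dp[0] = 0, dp[1] = 1
    for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
```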
📌 4️⃣ Common DP Problems
4.1 Fibonacci Sequence

●​ Problem: Find the nth Fibonacci number.


●​ Recursive Relation: F(n) = F(n-1) + F(n-2)

4.2 Knapsack Problem

● Problem: Given weights and values of items, determine the maximum value that can be obtained by selecting items such that their total weight does not exceed the knapsack capacity.
● Recursive Relation: dp[i][w] = max(dp[i-1][w], value[i] + dp[i-1][w - weight[i]]) when weight[i] ≤ w; otherwise dp[i][w] = dp[i-1][w].

Time Complexity: O(n * W), where n is the number of items, and W is the capacity of the knapsack.
IS
4.3 Longest Common Subsequence (LCS)

● Problem: Given two sequences, find the length of the longest subsequence common to both sequences.
● Recursive Relation: dp[i][j] = 1 + dp[i-1][j-1] if the characters match; otherwise dp[i][j] = max(dp[i-1][j], dp[i][j-1]).

Time Complexity: O(m * n), where m and n are the lengths of the two sequences.
D

4.4 Longest Increasing Subsequence (LIS)


● Problem: Find the longest subsequence of a given sequence such that all elements are sorted in increasing order.
● Recursive Relation: dp[i] = 1 + max(dp[j]) over all j < i with arr[j] < arr[i] (or 1 if no such j exists).

Time Complexity: O(n²) (for the dynamic programming approach)


📌 5️⃣ DP Patterns to Remember
5.1 0/1 Knapsack Pattern

This pattern is useful in problems where we are given a set of items and need to make a
decision (select or reject) based on certain constraints (e.g., weight, value).

● Recursive Relation: dp[i][w] = max(dp[i-1][w], value[i] + dp[i-1][w - weight[i]]).

5.2 Unbounded Knapsack Pattern

Similar to 0/1 Knapsack but with the difference that each item can be taken any number of times (unbounded).

● Recursive Relation: dp[w] = max(dp[w], value[i] + dp[w - weight[i]]) for each item i with weight[i] ≤ w.

5.3 Longest Common Subsequence Pattern

Used in problems where we are asked to find common subsequences (e.g., LCS, Edit Distance).

● Recursive Relation: dp[i][j] = 1 + dp[i-1][j-1] on a match; otherwise max(dp[i-1][j], dp[i][j-1]).

📌 6️⃣ Time Complexity of Common DP Problems

| Problem | Time Complexity |
|---|---|
| Fibonacci Sequence | O(n) |
| Knapsack Problem | O(n * W) |
| Longest Common Subsequence (LCS) | O(m * n) |
| Longest Increasing Subsequence (LIS) | O(n²) |
| Coin Change Problem | O(n * amount) |
📌 7️⃣ When to Use Dynamic Programming
●​ When the problem involves optimal substructure and overlapping subproblems.
●​ When brute force recursion would result in repetitive calculations.
●​ DP is ideal for optimization problems, such as maximizing profit, minimizing cost,
or counting distinct ways to achieve a result.

📌 8️⃣ Space Optimization in DP


●​ If only the current and previous states are needed, you can optimize space from O(n
* W) to O(W) (or similar), which helps to reduce memory usage.

Example (1D DP Optimization):

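The original snippet is missing; a minimal 1-D sketch for the 0/1 knapsack:

```java
// dp[w] = best value achievable with capacity w
static int knapsack(int[] weight, int[] value, int W) {
    int[] dp = new int[W + 1];
    for (int i = 0; i < weight.length; i++) {
        for (int w = W; w >= weight[i]; w--) {     // iterate backwards so each item is used once
            dp[w] = Math.max(dp[w], value[i] + dp[w - weight[i]]);
        }
    }
    return dp[W];
}
```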
19. 2-D Dynamic Programming
📌 1️⃣ Definition

2-D Dynamic Programming (DP) involves using a 2-dimensional table (usually a 2D array) to store the results of overlapping subproblems. This approach is useful for problems that involve two sets of parameters, often representing two dimensions of state.

2-D DP is typically used when the problem involves two varying parameters or indices. Common problems include:

● Finding the Longest Common Subsequence (LCS).
● 2-D Knapsack Problem.
● Matrix Chain Multiplication.
● Edit Distance (Levenshtein Distance).
● Word Break Problem.

📌 3️⃣ Common 2-D DP Problems


3.1 Longest Common Subsequence (LCS)

●​ Problem: Given two sequences, find the length of the longest subsequence common
to both sequences.
Recursive Relation: dp[i][j] = 1 + dp[i-1][j-1] if the characters match; otherwise dp[i][j] = max(dp[i-1][j], dp[i][j-1]).

Time Complexity: O(m * n), where m and n are the lengths of the two sequences.

Example (LCS):

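The original code figure is missing; a minimal bottom-up sketch:

```java
// Length of the longest common subsequence of a and b
static int lcs(String a, String b) {
    int m = a.length(), n = b.length();
    int[][] dp = new int[m + 1][n + 1];
    for (int i = 1; i <= m; i++) {
        for (int j = 1; j <= n; j++) {
            if (a.charAt(i - 1) == b.charAt(j - 1)) {
                dp[i][j] = 1 + dp[i - 1][j - 1];          // characters match
            } else {
                dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]);
            }
        }
    }
    return dp[m][n];
}
```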

3.3 Matrix Chain Multiplication


● Problem: Given a sequence of matrices, find the most efficient way to multiply them together. The problem is to find the minimum number of scalar multiplications needed to multiply the chain.

Recursive Relation: dp[i][j] = min over i ≤ k < j of (dp[i][k] + dp[k+1][j] + p[i-1] * p[k] * p[j]), where p holds the matrix dimensions.

Time Complexity: O(n³), where n is the number of matrices.


📌 4️⃣ General 2-D DP Approach
4.1 Table Initialization

For most DP problems, the 2D table is initialized to zero or a base value (e.g., dp[0][0] =
0).

4.2 Recurrence Relation

The recurrence relation should be carefully constructed based on the problem, as it determines how the values in the DP table are updated.

4.3 Backtracking
Once the table is filled, the solution is typically found by backtracking from the last cell
(dp[m][n] for LCS) to trace back the path to the optimal solution.

C
📌 5️⃣ 2-D DP Patterns to Remember
5.1 Path Problems (e.g., Grid Problems)

Many problems involve finding the number of unique paths from the top-left to the bottom-right of a grid. These types of problems can be solved using 2-D DP.

Example (Unique Paths in a Grid):
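The original code figure is missing; a minimal sketch (only right and down moves are allowed):

```java
// Number of unique paths in an m x n grid
static int uniquePaths(int m, int n) {
    int[][] dp = new int[m][n];
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            if (i == 0 || j == 0) dp[i][j] = 1;           // edge cells: only one path
            else dp[i][j] = dp[i - 1][j] + dp[i][j - 1];  // from above + from the left
        }
    }
    return dp[m - 1][n - 1];
}
```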
📌 6️⃣ Time Complexity of Common 2-D DP Problems

| Problem | Time Complexity |
|---|---|
| Longest Common Subsequence (LCS) | O(m * n) |
| Edit Distance (Levenshtein Distance) | O(m * n) |
| Matrix Chain Multiplication | O(n³) |
| 2-D Knapsack Problem | O(n * W) |
| Unique Paths in Grid | O(m * n) |

📌 7️⃣ Space Optimization in 2-D DP

You can optimize space by using a 1-D DP array in some problems where only the current and previous rows are needed. This reduces space complexity from O(m * n) to O(n) or O(m), depending on the problem.
20. Bit Manipulation


📌 1️⃣ Definition

Bit manipulation involves operations that directly manipulate bits (0s and 1s) of binary
numbers. Bitwise operations are used to perform tasks like checking bits, setting bits,
toggling bits, and shifting bits. It is commonly used for optimization, low-level programming,
and problem-solving in competitive programming.

📌 2️⃣ Common Bitwise Operators

| Operator | Description | Syntax |
|---|---|---|
| AND (&) | Bitwise AND: returns 1 if both bits are 1. | a & b |
| OR (\|) | Bitwise OR: returns 1 if at least one bit is 1. | a \| b |
| XOR (^) | Bitwise XOR: returns 1 if bits are different. | a ^ b |
| NOT (~) | Bitwise NOT: inverts the bits. | ~a |
| Left Shift (<<) | Shifts bits to the left by a given number of positions (multiplies by 2^n). | a << n |
| Right Shift (>>) | Shifts bits to the right by a given number of positions (divides by 2^n). | a >> n |
📌 3️⃣ Key Operations & Concepts
3.1 Checking if a Number is Odd or Even

To check if a number is odd or even, use the AND operator (&) with 1.

●​ Even: n & 1 == 0
●​ Odd: n & 1 == 1
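Minimal sketches of the check above, plus the two operations referenced in the recap table (counting set bits and the power-of-two check):

```java
static boolean isEven(int n)       { return (n & 1) == 0; }
static boolean isOdd(int n)        { return (n & 1) == 1; }

// n is a power of two iff it has exactly one set bit (and is positive)
static boolean isPowerOfTwo(int n) { return n > 0 && (n & (n - 1)) == 0; }

// Brian Kernighan's trick: each iteration clears the lowest set bit
static int countSetBits(int n) {
    int count = 0;
    while (n != 0) {
        n &= (n - 1);
        count++;
    }
    return count;
}
```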

📌 4️⃣ Time Complexity of Common Bit Manipulation Operations

| Operation | Time Complexity |
|---|---|
| AND (&) | O(1) |
| OR (\|) | O(1) |
| XOR (^) | O(1) |
| NOT (~) | O(1) |
| Left Shift (<<) | O(1) |
| Right Shift (>>) | O(1) |
| Counting Set Bits | O(number of bits) |
| Checking Power of Two | O(1) |
NEVER LOOK FOR THE SOLUTION, ALWAYS THINK FOR THE SOLUTION! - NISHCHAL

FOLLOW ME ON IG: https://www.instagram.com/codewithnishchal/