DSA Quick Revision Guide
1. Array
📌 1️⃣ Array Definition
An array is a collection of elements stored in contiguous memory locations, indexed starting
from 0.
📌 Declaration & Initialization:
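The original snippet did not survive conversion; a minimal sketch of the common declaration forms:

```java
import java.util.Arrays;

public class ArrayBasics {
    public static void main(String[] args) {
        int[] nums = new int[5];            // fixed size; elements default to 0
        int[] primes = {2, 3, 5, 7, 11};    // declaration with inline initialization
        nums[0] = 42;                       // update by index: O(1)
        int third = primes[2];              // access by index: O(1)
        System.out.println(third);
        System.out.println(Arrays.toString(nums));
    }
}
```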
📌 2️⃣ Key Array Operations & Syntax
Operation | Syntax (Java) | Time Complexity
Accessing an Element | arr[i] | O(1)
Updating an Element | arr[i] = newValue; | O(1)
Linear Search | for(int i = 0; i < arr.length; i++) if(arr[i] == target) return i; | O(n)
🔹 Kadane’s Algorithm (For Maximum Sum Subarray)
📌 Use Case: Maximum sum contiguous subarray.
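A minimal sketch of Kadane's algorithm in Java:

```java
public class Kadane {
    // Returns the maximum sum of any contiguous subarray in O(n).
    public static int maxSubArray(int[] arr) {
        int best = arr[0];      // best sum seen so far
        int current = arr[0];   // best sum ending at the current index
        for (int i = 1; i < arr.length; i++) {
            // Either extend the previous subarray or start fresh at arr[i].
            current = Math.max(arr[i], current + arr[i]);
            best = Math.max(best, current);
        }
        return best;
    }
}
```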
🔹 Prefix Sum (For Range Sum Queries in O(1))
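A sketch of the prefix-sum idea: O(n) preprocessing, then each range query in O(1).

```java
public class PrefixSum {
    private final long[] prefix; // prefix[i] = sum of arr[0..i-1]

    public PrefixSum(int[] arr) {
        prefix = new long[arr.length + 1];
        for (int i = 0; i < arr.length; i++) {
            prefix[i + 1] = prefix[i] + arr[i]; // O(n) preprocessing
        }
    }

    // Sum of arr[l..r] inclusive, answered in O(1).
    public long rangeSum(int l, int r) {
        return prefix[r + 1] - prefix[l];
    }
}
```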
2. String
📌 2️⃣ Key String Operations & Syntax
Operation | Syntax (Java) | Time Complexity
Get Length | s.length() | O(1)
Access Character | s.charAt(i) | O(1)
Concatenation | s1 + s2 / s.concat(s2) | O(n)
Substring | s.substring(i, j) | O(j - i)
Convert to Char Array | s.toCharArray() | O(n)
Check Equality | s1.equals(s2) | O(n)
Compare Strings | s1.compareTo(s2) | O(n)
Reverse a String | new StringBuilder(s).reverse().toString() | O(n)
Change Case | s.toUpperCase() / s.toLowerCase() | O(n)
🔹 Character Frequency Count
=> Used to count occurrences of characters.
📌 Use Case: Finding first unique character.
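A minimal sketch of the first-unique-character use case, assuming lowercase input:

```java
public class FirstUnique {
    // Returns the index of the first non-repeating character, or -1; O(n).
    public static int firstUniqChar(String s) {
        int[] freq = new int[26]; // assumes lowercase a-z only
        for (char c : s.toCharArray()) freq[c - 'a']++;
        for (int i = 0; i < s.length(); i++) {
            if (freq[s.charAt(i) - 'a'] == 1) return i;
        }
        return -1;
    }
}
```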
🔹 KMP Algorithm (For Pattern Matching)
Efficient for searching substrings in O(n + m), where n is the text length and m is the pattern length.
📌 Use Case: Checking if a pattern exists in a string.
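A sketch of KMP: precompute the longest-proper-prefix-suffix (LPS) table, then scan the text without ever moving backwards in it.

```java
public class KMP {
    // Builds the longest-proper-prefix-suffix (LPS) table for the pattern.
    private static int[] buildLps(String p) {
        int[] lps = new int[p.length()];
        int len = 0;
        for (int i = 1; i < p.length(); ) {
            if (p.charAt(i) == p.charAt(len)) {
                lps[i++] = ++len;
            } else if (len > 0) {
                len = lps[len - 1]; // fall back without advancing i
            } else {
                lps[i++] = 0;
            }
        }
        return lps;
    }

    // Returns the index of the first occurrence of pattern in text, or -1.
    public static int search(String text, String pattern) {
        if (pattern.isEmpty()) return 0;
        int[] lps = buildLps(pattern);
        int i = 0, j = 0; // i scans the text, j scans the pattern
        while (i < text.length()) {
            if (text.charAt(i) == pattern.charAt(j)) {
                i++; j++;
                if (j == pattern.length()) return i - j;
            } else if (j > 0) {
                j = lps[j - 1]; // reuse the already-matched prefix
            } else {
                i++;
            }
        }
        return -1;
    }
}
```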
3. Hash Set - Stores unique elements.
4. Bloom Filters - Probabilistic hash structure used for membership checking.
Operation | Syntax (Java) | Time Complexity
Insert Key-Value Pair | map.put(key, value); | O(1) (Avg)
Access Value by Key | map.get(key); | O(1) (Avg)
Check if Key Exists | map.containsKey(key); | O(1)
Remove Key | map.remove(key); | O(1)
Get All Keys | map.keySet(); | O(n)
Get All Values | map.values(); | O(n)

HashSet
Insert into HashSet | set.add(value); | O(1)
Delete from HashSet | set.remove(value); | O(1)
🔹 Anagram Check
Used to check if two strings have the same character frequencies.
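A minimal sketch of the frequency comparison, assuming lowercase input:

```java
public class AnagramCheck {
    // True if s and t contain the same characters with the same counts; O(n).
    public static boolean isAnagram(String s, String t) {
        if (s.length() != t.length()) return false;
        int[] count = new int[26]; // assumes lowercase a-z only
        for (char c : s.toCharArray()) count[c - 'a']++;
        for (char c : t.toCharArray()) {
            if (--count[c - 'a'] < 0) return false; // t has an extra character
        }
        return true;
    }
}
```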
4. Sliding Window
📌 1️⃣ Definition
The Sliding Window technique efficiently processes contiguous subarrays or substrings, using either a fixed window size or a dynamically adjusted one, to solve problems involving arrays or strings.
1. Fixed-size window → Window size remains constant (e.g., "Find max sum of K elements").
2. Variable-size window → Window expands/contracts based on conditions (e.g., "Find the longest substring with at most K distinct characters").
✅ Approach:
1. Compute the window result (e.g., the sum) for the first K elements.
2. Record it as the current best.
3. Slide the window by adding the right element and removing the left element.
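The fixed-size approach above can be sketched as:

```java
public class FixedWindow {
    // Maximum sum of any K consecutive elements; O(n). Assumes k <= arr.length.
    public static int maxSumOfK(int[] arr, int k) {
        int windowSum = 0;
        for (int i = 0; i < k; i++) windowSum += arr[i]; // first window
        int best = windowSum;
        for (int i = k; i < arr.length; i++) {
            windowSum += arr[i] - arr[i - k]; // slide: add right, drop left
            best = Math.max(best, windowSum);
        }
        return best;
    }
}
```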
📌 4️⃣ Variable-Size Sliding Window Implementation
📌 Problem: Find the length of the longest subarray with sum ≤ S.
✅ Approach:
1. Expand the window by adding elements to the right.
2. Shrink window from the left if sum exceeds S.
3. Keep track of the maximum window size.
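The expand/shrink steps above can be sketched as follows (assuming non-negative elements, which the shrink step requires):

```java
public class VariableWindow {
    // Length of the longest subarray with sum <= s; O(n) for non-negative arrays.
    public static int longestSubarray(int[] arr, int s) {
        int left = 0, sum = 0, best = 0;
        for (int right = 0; right < arr.length; right++) {
            sum += arr[right];            // 1. expand to the right
            while (sum > s) {
                sum -= arr[left++];       // 2. shrink from the left
            }
            best = Math.max(best, right - left + 1); // 3. track the max size
        }
        return best;
    }
}
```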
5. Two Pointer
📌 1️⃣ Definition
The Two Pointer technique is an optimization method used to process pairs of elements
in an array or string by using two pointers instead of nested loops.
1. Opposite Direction Pointers
○ Used when dealing with sorted arrays.
○ Common for finding pairs (e.g., sum problems, palindromes, two-sum in sorted array).
2. Same Direction Pointers
○ Used for traversing and modifying elements efficiently.
○ Common in merging, removing duplicates, and longest subarray problems.
✅ Approach (Remove Duplicates from Sorted Array):
1. i (slow pointer) marks the position of the last unique element.
○ j (fast pointer) scans for new unique elements.
2. When arr[j] ≠ arr[i], move the unique element forward.
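The slow/fast pointer idea above can be sketched as:

```java
public class RemoveDuplicates {
    // Removes duplicates in-place from a sorted array; returns the new length. O(n).
    public static int removeDuplicates(int[] arr) {
        if (arr.length == 0) return 0;
        int i = 0; // slow pointer: index of the last unique element
        for (int j = 1; j < arr.length; j++) { // fast pointer scans ahead
            if (arr[j] != arr[i]) {
                arr[++i] = arr[j]; // move the new unique element forward
            }
        }
        return i + 1;
    }
}
```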
6. Stack
📌 Key Applications:
✔️ Parentheses matching
📌 2️⃣ Stack Operations & Syntax
Operation | Syntax (Java - Stack) | Time Complexity
Push (Insert at Top) | stack.push(value); | O(1)
Pop (Remove Top) | stack.pop(); | O(1)
Peek (Top Element) | stack.peek(); | O(1)
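The parentheses-matching application can be sketched with a stack (ArrayDeque is the usual choice over the legacy Stack class):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ValidParentheses {
    // True if every bracket is closed in the correct order; O(n).
    // Assumes the input contains only bracket characters.
    public static boolean isValid(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': stack.push(')'); break; // push the expected closer
                case '[': stack.push(']'); break;
                case '{': stack.push('}'); break;
                default:
                    if (stack.isEmpty() || stack.pop() != c) return false;
            }
        }
        return stack.isEmpty();
    }
}
```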
Operation | Average Case | Worst Case
Push | O(1) | O(1)
Pop | O(1) | O(1)
Peek | O(1) | O(1)
Search | O(n) | O(n)
7. Queue
📌 1️⃣ Definition
A queue is a FIFO (First-In, First-Out) data structure where elements are inserted at the rear and removed from the front.
Queues are useful for managing tasks in scheduling, buffers, and scenarios where order of processing matters.
📌 Key Applications:
✔️ Circular Buffers
🔹 1️⃣ Simple Queue
Basic queue where elements are inserted at the rear and removed from the front.
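A minimal sketch of FIFO behavior with Java's Queue interface:

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    // Enqueues the values, then dequeues them all; FIFO preserves order.
    public static int[] roundTrip(int[] values) {
        Queue<Integer> q = new LinkedList<>();
        for (int v : values) q.offer(v);   // insert at the rear: O(1)
        int[] out = new int[values.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = q.poll();             // remove from the front: O(1)
        }
        return out;
    }
}
```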
🔹 2️⃣ Priority Queue
A priority queue stores elements in order of priority, rather than the order of insertion. A higher priority element is always dequeued before a lower priority element.
● Min-Heap: The element with the lowest value has the highest priority.
● Max-Heap: The element with the highest value has the highest priority.
🔹 3️⃣ Deque (Double-Ended Queue)
A deque allows adding and removing elements from both ends (front and rear). It's often used for problems like sliding window, or when you need access to both ends.
8. Linked List
📌 1️⃣ Definition
A linked list is a linear data structure where elements (called nodes) are stored in non-contiguous memory locations. Each node contains:
● Data: the value stored in the node.
● Next: a reference to the next node.
📌 Key Applications:
✔️ Dynamic Memory Allocation
✔️ Implementing Queues, Stacks, Graphs, and Hash Tables
✔️ Memory-Efficient Data Structures (e.g., when frequent insertions/deletions are required)
📌 2️⃣ Types of Linked Lists
A linked list allows efficient insertions and deletions but has slower access times compared to arrays. Its main variants:
1. Singly Linked List → Each node points to the next node.
2. Doubly Linked List → Each node has pointers to both the next and previous nodes.
3. Circular Linked List → The last node connects back to the first node.
4. Circular Doubly Linked List → A doubly linked list where the last node links to the first node.
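A minimal singly linked list sketch (addFirst and contains are illustrative names):

```java
public class SinglyLinkedList {
    static class Node {
        int data;  // the value stored in this node
        Node next; // reference to the next node
        Node(int data) { this.data = data; }
    }

    Node head;

    // Insert at the front: O(1).
    public void addFirst(int value) {
        Node node = new Node(value);
        node.next = head;
        head = node;
    }

    // Linear search by value: O(n).
    public boolean contains(int value) {
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.data == value) return true;
        }
        return false;
    }
}
```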
Operation | Syntax (Java) | Time Complexity
Search (by value) | contains(value); | O(n)
Traversal | printList(); | O(n)
📌 6️⃣ Time Complexity Recap
Operation | Time Complexity
Insertion (at front) | O(1)
Insertion (at end) | O(n) (for singly) or O(1) (for doubly)
Insertion (at specific position) | O(n)
Deletion (from front) | O(1)
Deletion (from end) | O(n) (for singly) or O(1) (for doubly)
Deletion (from specific position) | O(n)
Search (by value) | O(n)
Traversal | O(n)
9. Binary Search
📌 1️⃣ Definition
Binary Search is an efficient algorithm for searching an element in a sorted array by
repeatedly dividing the search interval in half.
Method | Syntax (Java) | Time Complexity
Binary Search (Iterative) | binarySearch(arr, target) | O(log n)
Binary Search (Recursive) | binarySearch(arr, left, right, target) | O(log n)

Recursive Method Syntax (Java):
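A sketch of the recursive method (with an iterative-style wrapper):

```java
public class BinarySearch {
    // Recursive binary search over a sorted array; O(log n).
    public static int binarySearch(int[] arr, int left, int right, int target) {
        if (left > right) return -1;            // base case: not found
        int mid = left + (right - left) / 2;    // avoids integer overflow
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) {
            return binarySearch(arr, mid + 1, right, target); // search right half
        }
        return binarySearch(arr, left, mid - 1, target);      // search left half
    }

    public static int binarySearch(int[] arr, int target) {
        return binarySearch(arr, 0, arr.length - 1, target);
    }
}
```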
🔹 Finding the Last Occurrence
Modify the algorithm to continue searching on the right even after finding the target.
10. Trees
📌 1️⃣ Definition
A tree is a hierarchical data structure consisting of nodes connected by edges. The top node
is called the root, and each node contains a value or data and references (or links) to its
children.
Basic Terminology:
● Node: An element of the tree containing data.
● Root: The topmost node.
● Child: A node directly connected to another node when moving away from the root.
● Parent: A node directly connected to another node when moving towards the root.
● Leaf: A node with no children.
● Height: The length of the longest path from the root to a leaf.
● Depth: The length of the path from the root to a node.
● Subtree: A tree consisting of a node and its descendants.
🔹 4️⃣ Heap
A binary tree that satisfies the heap property: every parent is greater than or equal to its children (max-heap) or less than or equal to its children (min-heap).
Operation | Syntax | Time Complexity
Insertion | insert(root, value); | O(log n) (for BST)
Deletion | delete(root, value); | O(log n) (for BST)
Search | search(root, value); | O(log n) (for BST)
Inorder Traversal | inOrderTraversal(root); | O(n)
Preorder Traversal | preOrderTraversal(root); | O(n)
Postorder Traversal | postOrderTraversal(root); | O(n)
Syntax:
Postorder Traversal (Left, Right, Root)
● Used for deletion or postfix expressions.
Syntax:
Syntax:
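The traversal syntax referenced above can be sketched with a minimal TreeNode class:

```java
import java.util.ArrayList;
import java.util.List;

public class Traversals {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // Inorder: Left, Root, Right — yields sorted order for a BST.
    static void inOrder(TreeNode node, List<Integer> out) {
        if (node == null) return;
        inOrder(node.left, out);
        out.add(node.val);
        inOrder(node.right, out);
    }

    // Preorder: Root, Left, Right — useful for copying a tree.
    static void preOrder(TreeNode node, List<Integer> out) {
        if (node == null) return;
        out.add(node.val);
        preOrder(node.left, out);
        preOrder(node.right, out);
    }

    // Postorder: Left, Right, Root — useful for deletion / postfix expressions.
    static void postOrder(TreeNode node, List<Integer> out) {
        if (node == null) return;
        postOrder(node.left, out);
        postOrder(node.right, out);
        out.add(node.val);
    }
}
```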
📌 5️⃣ Time Complexity for Operations
Operation | Time Complexity
Search | O(log n) (BST)
Insertion | O(log n) (BST)
Deletion | O(log n) (BST)
Traversal | O(n)
Level Order | O(n)
● Balanced Trees: AVL trees, Red-Black trees (balancing on insertion and deletion).
● Complete Trees: A binary tree in which every level, except possibly the last, is completely filled.
● Height of a Tree: Calculated from the root to the deepest leaf.
11. Trie
📌 1️⃣ Definition
A Trie (or Prefix Tree) is a tree-like data structure that stores a dynamic set of strings, where each node represents a character of a string. It is primarily used for fast string search and prefix matching.
Key Terminology:
● Root: The empty starting node of the trie.
● Children: Each node's links to its next-character nodes.
● isEndOfWord: A flag marking that a complete word ends at the node.
📌 4️⃣ Trie Operations
🔹 Insert Operation
To insert a word, start from the root node and traverse through the trie. If a node does not exist for a character, create a new node. Mark the last node of the word as isEndOfWord = true.
🔹 Search Operation
To search for a word, follow the same steps as in the insert operation. If a node for a character does not exist, return false. After traversing all characters, check if the last node is marked isEndOfWord = true.
🔹 StartsWith Operation
To check if a prefix exists, traverse the trie similarly to the search operation but do not require the last node to be isEndOfWord.
🔹 Delete Operation
To delete a word, follow the search path to find the node corresponding to the last character of the word. Then mark isEndOfWord = false. If that node has no children, delete the node. This operation is more complicated than insert/search due to potential cleanup of empty nodes.
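The insert/search/startsWith operations above can be sketched as (assuming lowercase words):

```java
public class Trie {
    private static class TrieNode {
        TrieNode[] children = new TrieNode[26]; // assumes lowercase a-z
        boolean isEndOfWord;
    }

    private final TrieNode root = new TrieNode();

    // Insert: O(L), where L is the word length.
    public void insert(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            int idx = c - 'a';
            if (node.children[idx] == null) node.children[idx] = new TrieNode();
            node = node.children[idx];
        }
        node.isEndOfWord = true; // mark the end of a complete word
    }

    public boolean search(String word) {
        TrieNode node = walk(word);
        return node != null && node.isEndOfWord;
    }

    public boolean startsWith(String prefix) {
        return walk(prefix) != null; // end node need not be a complete word
    }

    private TrieNode walk(String s) {
        TrieNode node = root;
        for (char c : s.toCharArray()) {
            node = node.children[c - 'a'];
            if (node == null) return null;
        }
        return node;
    }
}
```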
12. Heap / Priority Queue
📌 1️⃣ Definition
A Heap is a special tree-based data structure that satisfies the heap property:
● Max Heap: The value of each node is greater than or equal to the values of its
children.
● Min Heap: The value of each node is less than or equal to the values of its children.
A Priority Queue is an abstract data type that supports operations to insert elements and remove the element with the highest priority. It can be implemented using a heap, where the root always holds the highest-priority element.
Max Priority Queue (Using a Max Heap)
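A max priority queue can be sketched with Java's PriorityQueue, which is a min-heap by default; Collections.reverseOrder() flips it into a max-heap:

```java
import java.util.Collections;
import java.util.PriorityQueue;

public class MaxPQDemo {
    // Returns the elements in decreasing order using a max-heap.
    public static int[] sortedDesc(int[] values) {
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
        for (int v : values) maxHeap.offer(v);       // insert: O(log n)
        int[] out = new int[values.length];
        for (int i = 0; i < out.length; i++) {
            out[i] = maxHeap.poll();                 // extract-max: O(log n)
        }
        return out;
    }
}
```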
Operation | Time Complexity
Insert | O(log n)
Extract-Max/Min | O(log n)
Peek (Max/Min) | O(1)
Heapify | O(n)
Delete | O(log n)
📌 7️⃣ Use Cases of Heaps/Priority Queues
● Dijkstra’s Algorithm (for shortest path).
● Huffman Encoding (compression algorithms).
● Task Scheduling (priority-based task execution).
● K-th Largest/Smallest Element in an array.
● Real-time Data Streaming (median finding).
13. Intervals
📌 1️⃣ Definition
An interval is a range of values, typically used to represent a continuous sequence of numbers. In the context of algorithms and data structures, intervals are often represented as pairs of integers, such as [start, end].
Types of Intervals:
● Closed Interval: [start, end] includes both endpoints.
● Open Interval: (start, end) excludes both endpoints.
● Half-Open Interval: [start, end) or (start, end] includes one endpoint but not the other.
🔹 Merge Intervals
Given a list of intervals, merge all overlapping intervals in O(n log n), where n is the number of intervals.
Algorithm:
1. Sort the intervals by start time.
2. Iterate through them, merging the current interval into the previous one whenever they overlap.
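Merging overlapping intervals can be sketched as:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MergeIntervals {
    // Merges overlapping intervals; O(n log n) from the sort.
    public static int[][] merge(int[][] intervals) {
        Arrays.sort(intervals, (a, b) -> Integer.compare(a[0], b[0])); // sort by start
        List<int[]> merged = new ArrayList<>();
        for (int[] interval : intervals) {
            if (!merged.isEmpty() && interval[0] <= merged.get(merged.size() - 1)[1]) {
                // Overlap: extend the end of the last merged interval.
                int[] last = merged.get(merged.size() - 1);
                last[1] = Math.max(last[1], interval[1]);
            } else {
                merged.add(interval); // no overlap: start a new interval
            }
        }
        return merged.toArray(new int[0][]);
    }
}
```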
🔹 Interval Intersection
Given two lists of intervals, return the intersection of these two interval lists.
Algorithm:
1. Sort both interval lists by their start times.
2. Use two pointers to traverse through both lists.
3. If there's an overlap, add the intersection to the result.
📌 6️⃣ Time Complexity
Operation | Time Complexity
Merge Intervals | O(n log n)
Interval Intersection | O(n + m)
Interval Removal | O(n)
14. Greedy Algorithms
📌 1️⃣ Definition
Greedy algorithms are often used when a problem can be broken down into stages where decisions at each stage are made based on local optimal choices.
L
📌 2️⃣ Characteristics of Greedy Algorithms
A
H
1. Greedy Choice Property: A global optimum can be arrived at by selecting a local
optimum.
C
2. Optimal Substructure: A problem has optimal substructure if an optimal solution to
the problem contains optimal solutions to the subproblems.
Note: Greedy algorithms do not always guarantee an optimal solution, but for certain problems (like Huffman Coding, Dijkstra's Algorithm), they are proven to give the best result.
Problem | Description | Time Complexity
Activity Selection | Select the maximum number of non-overlapping activities. | O(n log n)
Fractional Knapsack | Maximize value with a given weight capacity using fractional items. | O(n log n)
Huffman Coding | Build an optimal prefix code for encoding. | O(n log n)
Dijkstra's Shortest Path Algorithm | Find the shortest path in a graph. | O(V log V + E)
Prim's Algorithm | Find the minimum spanning tree of a graph. | O(V log V + E)
Where n is the number of items, V is the number of vertices, and E is the number of edges.
📌 5️⃣ Greedy Algorithm Examples
1. Activity Selection Problem
Given a set of activities with start and finish times, select the maximum number of
non-overlapping activities.
Algorithm:
1. Sort activities by finish time.
2. Repeatedly select the next activity whose start time is not earlier than the finish time of the last selected activity.
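The activity-selection greedy can be sketched as:

```java
import java.util.Arrays;

public class ActivitySelection {
    // activities[i] = {start, finish}. Greedy: sort by finish time, then take
    // every activity that starts no earlier than the last chosen finish time.
    public static int maxActivities(int[][] activities) {
        Arrays.sort(activities, (a, b) -> Integer.compare(a[1], b[1]));
        int count = 0, lastFinish = Integer.MIN_VALUE;
        for (int[] act : activities) {
            if (act[0] >= lastFinish) { // compatible with the chosen set
                count++;
                lastFinish = act[1];
            }
        }
        return count;
    }
}
```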
2. Fractional Knapsack
Given a set of items, each with a weight and value, and a knapsack with a given capacity, determine the maximum value that can be obtained by filling the knapsack with fractions of items.
Algorithm:
1. Sort items by value-to-weight ratio in decreasing order.
2. Take as much of each item as fits, taking a fraction of the last item if needed.
3. Dijkstra's Shortest Path Algorithm
This algorithm finds the shortest path between a source node and all other nodes in a graph with non-negative weights.
Algorithm:
1. Initialize the distance to the source node as 0 and all other nodes as infinity.
2. Use a priority queue to always choose the next node with the smallest tentative
distance.
3. Update the distance of the neighbors based on the chosen node.
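The three steps above can be sketched with a priority queue over an adjacency list (each entry is a {neighbor, weight} pair; the representation is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class Dijkstra {
    // adj.get(u) holds {v, weight} pairs. Returns shortest distances from src.
    public static int[] shortestPaths(List<List<int[]>> adj, int src) {
        int n = adj.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE); // 1. all nodes start at infinity
        dist[src] = 0;
        // 2. priority queue of {node, distance}, smallest tentative distance first
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> Integer.compare(a[1], b[1]));
        pq.offer(new int[]{src, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int u = cur[0], d = cur[1];
            if (d > dist[u]) continue; // stale queue entry
            for (int[] edge : adj.get(u)) {
                int v = edge[0], w = edge[1];
                if (dist[u] + w < dist[v]) { // 3. relax the neighbor's distance
                    dist[v] = dist[u] + w;
                    pq.offer(new int[]{v, dist[v]});
                }
            }
        }
        return dist;
    }
}
```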
Algorithm | Time Complexity
Activity Selection | O(n log n)
Fractional Knapsack | O(n log n)
Huffman Coding | O(n log n)
Dijkstra's Algorithm | O(V log V + E)
Prim's Algorithm | O(V log V + E)
Where n is the number of activities/items and V, E are the vertices and edges of the graph, respectively.
15. Recursion
📌 1️⃣ Definition
Every recursive function should have:
● Base Case: The condition that stops the recursion. Without this, the function would call itself indefinitely.
● Recursive Case: The part where the function calls itself with a modified argument to move closer to the base case.
Factorial Formula: n! = n × (n − 1)! for n > 0.
For n = 0, 0! = 1 is the base case.
The Fibonacci sequence is another classic example, where each term is the sum of the two preceding ones.
Fibonacci Formula: F(n) = F(n − 1) + F(n − 2), with base cases F(0) = 0 and F(1) = 1.
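Both formulas can be sketched directly as recursive functions:

```java
public class RecursionExamples {
    // factorial: n! = n * (n - 1)!, with 0! = 1 as the base case.
    public static long factorial(int n) {
        if (n == 0) return 1;          // base case
        return n * factorial(n - 1);   // recursive case
    }

    // fibonacci: F(n) = F(n-1) + F(n-2), with F(0) = 0 and F(1) = 1.
    // O(2^n) without memoization — see the DP section for the efficient version.
    public static long fib(int n) {
        if (n <= 1) return n;          // base cases
        return fib(n - 1) + fib(n - 2);
    }
}
```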
● Base Case: Ensure that every recursive function has a well-defined base case to terminate the recursion.
● Recursive Case: Ensure that each recursive call moves closer to the base case.
● Memory Consumption: Recursive calls consume stack space, so avoid excessive recursion depth in some cases.
● Debugging: Recursive solutions can be difficult to debug; visualize the recursion tree if necessary.
The time complexity of a recursive function depends on the number of recursive calls and the work done in each call:
● Factorial: O(n)
● Fibonacci: O(2^n) (Exponential without memoization)
● Binary Search: O(log n)
● Tree Traversals: O(n) (where n is the number of nodes in the tree)
16. Backtracking
📌 1️⃣ Definition
Backtracking is an algorithmic technique for solving problems by incrementally building solutions and abandoning a solution as soon as it is determined that it cannot possibly lead to a valid solution.
It is a form of depth-first search (DFS) where, when a problem cannot be solved any further, it backtracks to the previous step and tries another possibility.
1. Choice: Make a choice from the set of available options.
2. Constraint: Check if the current solution satisfies the problem's constraints.
3. Goal: If the solution meets the goal, record it.
4. Backtrack: If no valid solution is found, backtrack to the previous choice and try another path.
Problem | Description | Time Complexity
Subset Generation | Generate all subsets of a set. | O(2^n)
Permutations | Generate all possible permutations of a set. | O(n!)
N-Queens Problem | Place N queens on an N x N chessboard so that no two queens threaten each other. | O(N!)
Sudoku Solver | Solve a given Sudoku puzzle. | O(9^81)
Hamiltonian Path | Find a path in a graph that visits each node exactly once. | O(N!)
Where n is the size of the input (e.g., number of elements in a set, or number of nodes).
📌 4️⃣ Backtracking Example Problems
1. N-Queens Problem
The N-Queens problem is a classic problem where the goal is to place N queens on an NxN
chessboard such that no two queens threaten each other.
Algorithm:
1. Place queens one row at a time.
2. For each row, try every column and keep only placements not attacked by earlier queens.
3. If placing a queen does not lead to a valid configuration, backtrack and try another column or row.
2. Subset Generation
Given a set of numbers, generate all possible subsets (the power set).
Algorithm:
1. Start from an empty subset.
2. For each element, either include it in the subset or exclude it.
3. Use recursion to generate all possible combinations.
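The include/exclude recursion above can be sketched as:

```java
import java.util.ArrayList;
import java.util.List;

public class Subsets {
    // Generates the power set via include/exclude recursion; O(2^n) subsets.
    public static List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, 0, new ArrayList<>(), result);
        return result;
    }

    private static void backtrack(int[] nums, int i, List<Integer> current,
                                  List<List<Integer>> result) {
        if (i == nums.length) {
            result.add(new ArrayList<>(current)); // record a complete subset
            return;
        }
        backtrack(nums, i + 1, current, result);  // exclude nums[i]
        current.add(nums[i]);                     // include nums[i]
        backtrack(nums, i + 1, current, result);
        current.remove(current.size() - 1);       // backtrack: undo the choice
    }
}
```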
Problem | Time Complexity
Subset Generation | O(2^n)
Permutations | O(n!)
N-Queens Problem | O(N!)
Sudoku Solver | O(9^81)
17. Graphs
📌 1️⃣ Definition
A graph is a non-linear data structure consisting of vertices (nodes) connected by edges.
● Directed Graph (Digraph): Each edge has a direction, going from one vertex to another.
● Undirected Graph: Edges have no direction, meaning the relationship between the vertices is mutual.
📌 2️⃣ Graph Terminology
● Vertex (V): A node in the graph.
● Edge (E): A connection between two vertices.
● Adjacent Vertices: Two vertices connected by an edge.
● Degree: The number of edges incident to a vertex.
○ In-degree (for directed graphs): The number of edges coming into a vertex.
○ Out-degree (for directed graphs): The number of edges going out of a vertex.
● Path: A sequence of edges connecting a sequence of vertices.
● Connected Graph: A graph in which there is a path between every pair of vertices.
● Disconnected Graph: A graph where at least one pair of vertices is not connected by a path.
📌 4️⃣ Graph Representation
Graphs can be represented in two common ways:
Adjacency Matrix:
● A 2D array where matrix[i][j] is non-zero if there is an edge from vertex i to vertex j.
● Space Complexity: O(V²).
Adjacency List:
● A list of lists, where each list at index i contains all the vertices adjacent to vertex i.
● Space Complexity: O(V + E), where E is the number of edges.
Example:
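A minimal sketch of building an adjacency list:

```java
import java.util.ArrayList;
import java.util.List;

public class AdjacencyList {
    // Builds an adjacency list for an undirected graph with n vertices.
    public static List<List<Integer>> build(int n, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {
            adj.get(e[0]).add(e[1]); // edge u -> v
            adj.get(e[1]).add(e[0]); // undirected: also v -> u
        }
        return adj;
    }
}
```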
🔹 Depth-First Search (DFS)
DFS is a traversal technique that explores as far as possible along each branch before backtracking.
🔹 Breadth-First Search (BFS)
BFS explores the graph level by level, starting from a source node and visiting all of its neighbors before moving on to their neighbors.
● Time Complexity: O(V + E).
● Space Complexity: O(V) for the queue.
Java Code (BFS using Queue):
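A sketch of BFS over an adjacency list:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BFS {
    // Level-order traversal from src over an adjacency list; O(V + E).
    public static List<Integer> bfs(List<List<Integer>> adj, int src) {
        boolean[] visited = new boolean[adj.size()];
        List<Integer> order = new ArrayList<>();
        Queue<Integer> queue = new ArrayDeque<>();
        visited[src] = true;
        queue.offer(src);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            order.add(u);
            for (int v : adj.get(u)) {
                if (!visited[v]) {      // visit each vertex exactly once
                    visited[v] = true;
                    queue.offer(v);
                }
            }
        }
        return order;
    }
}
```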
📌 6️⃣ Graph Algorithms
1. Dijkstra's Shortest Path Algorithm (for Weighted Graphs)
Dijkstra's algorithm finds the shortest path from a source vertex to all other vertices in a
weighted graph with non-negative weights.
2. Bellman-Ford Algorithm (for Negative Weights)
This algorithm computes the shortest paths from a source vertex to all other vertices, even with negative weights.
● Time Complexity: O(V * E).
● Space Complexity: O(V).
3. Floyd-Warshall Algorithm (for All-Pairs Shortest Paths)
Floyd-Warshall computes shortest paths between all pairs of vertices in a weighted graph.
● Time Complexity: O(V³).
● Space Complexity: O(V²).
4. Kruskal's Algorithm (Minimum Spanning Tree)
Kruskal's algorithm finds a minimum spanning tree for a connected, undirected graph.
5. Prim's Algorithm (Minimum Spanning Tree)
Prim's algorithm also finds a minimum spanning tree but grows the tree by adding vertices one by one.
● Time Complexity: O(V²) using an adjacency matrix, O(E + V log V) with a priority queue.
● Space Complexity: O(V).
📌 7️⃣ Time Complexity of Common Graph Operations
Operation | Time Complexity
DFS | O(V + E)
BFS | O(V + E)
Dijkstra's Algorithm | O((V + E) log V)
Bellman-Ford | O(V * E)
Floyd-Warshall | O(V³)
Kruskal's Algorithm | O(E log E)
Prim's Algorithm | O(V²) or O(E + V log V)
Where V is the number of vertices and E is the number of edges.
📌 2️⃣ Advanced Graph Algorithms
2.1 Maximum Flow Problem - Ford-Fulkerson Algorithm
The Maximum Flow problem involves finding the maximum flow from a source node to a sink node in a flow network.
● Ford-Fulkerson is the classic approach to the maximum flow problem.
● It uses augmenting paths to find the maximum flow in a graph.
Key Concepts:
● Residual Graph: A graph showing remaining capacities of edges after some flow has been pushed through.
● Augmenting Path: A path from source to sink that can carry additional flow.
Time Complexity:
● O(max flow × number of edges), though it depends on how augmenting paths are chosen.
2.2 Shortest Paths with Negative Weights
For negative weights, Dijkstra doesn't work, and other algorithms must be used.
Bellman-Ford Algorithm (for Negative Weights)
The Bellman-Ford algorithm computes shortest paths from a source to all other vertices, even in graphs with negative edge weights.
Johnson's Algorithm (for All-Pairs Shortest Paths)
This algorithm works by first running Bellman-Ford to reweight the edges, making all edge weights non-negative, then running Dijkstra for each vertex.
● Time Complexity: O(V² log V + VE) (for sparse graphs).
2.3 Topological Sorting
Topological Sorting is a linear ordering of vertices in a Directed Acyclic Graph (DAG) such that for every directed edge (u, v), vertex u comes before vertex v in the ordering.
● Applications: Task scheduling, compilation ordering.
● Approach: Can be done using DFS or Kahn's Algorithm (BFS-based).
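Kahn's BFS-based approach can be sketched as:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class TopoSort {
    // Kahn's algorithm (BFS-based) over an adjacency list; O(V + E).
    public static List<Integer> topoSort(List<List<Integer>> adj) {
        int n = adj.size();
        int[] inDegree = new int[n];
        for (List<Integer> neighbors : adj) {
            for (int v : neighbors) inDegree[v]++;
        }
        Queue<Integer> queue = new ArrayDeque<>();
        for (int u = 0; u < n; u++) {
            if (inDegree[u] == 0) queue.offer(u); // nodes with no prerequisites
        }
        List<Integer> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            int u = queue.poll();
            order.add(u);
            for (int v : adj.get(u)) {
                if (--inDegree[v] == 0) queue.offer(v); // prerequisite satisfied
            }
        }
        // If order.size() < n, the graph has a cycle and no topological order exists.
        return order;
    }
}
```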
2.4 Articulation Points and Bridges
● Articulation Points: A vertex whose removal increases the number of connected components in a graph.
● Bridges: An edge whose removal increases the number of connected components in a graph.
These concepts are important in network design (for identifying critical points).
DFS-based approach:
● Use DFS to detect articulation points and bridges by tracking discovery and low values of vertices.
📌 3️⃣ Advanced Graph Representation
3.1 Adjacency Matrix (for Dense Graphs)
● Space Complexity: O(V²), with O(1) edge lookups; best suited to dense graphs.
3.2 Adjacency List (for Sparse Graphs)
● Operations: Neighbors of a vertex can be accessed in O(k) time, where k is the number of neighbors.
3.3 Incidence Matrix
An Incidence Matrix represents a graph by indicating the edges incident to the vertices.
● Space Complexity: O(V * E)
● Useful for edge-centric algorithms like maximum flow.
📌 4️⃣ Important Graph Theorems
● Eulerian Circuit: A cycle that visits every edge of the graph exactly once.
● Conditions: A connected graph has an Eulerian Circuit if and only if every vertex has an even degree.
L
Ford-Fulkerson (Edmonds-Karp) O(VE²)
Hopcroft-Karp (Bipartite O(√V * E)
A
Matching)
Dijkstra's Algorithm O((V + E) log V)
H
Bellman-Ford Algorithm O(V * E)
Floyd-Warshall Algorithm O(V³)
C
Kosaraju’s Algorithm (SCC) O(V + E)
H
Topological Sort (DFS) O(V + E)
IS
N
18. Dynamic Programming
📌 1️⃣ Definition
Dynamic Programming (DP) is a method used for solving complex problems by breaking them down into simpler subproblems. It is an optimization technique used to avoid redundant calculations by storing the results of subproblems and reusing them when needed.
DP is used for optimization problems where we need to find the best solution under given constraints.
A problem has optimal substructure if the optimal solution of the problem can be
constructed efficiently from optimal solutions of its subproblems.
A problem has overlapping subproblems if the problem can be broken down into
subproblems that are solved multiple times.
📌 3️⃣ DP Approaches
3.1 Top-Down Approach (Memoization)
● Memoization involves solving the problem recursively, but storing the results of
subproblems to avoid repeated work.
● When a subproblem is encountered for the first time, it is solved and stored; if it’s
encountered again, the stored result is used.
Time Complexity: O(n) (each subproblem is solved once)
Space Complexity: O(n) (for storing results)
Example (Fibonacci):
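A sketch of memoized Fibonacci:

```java
import java.util.HashMap;
import java.util.Map;

public class FibMemo {
    private static final Map<Integer, Long> memo = new HashMap<>();

    // Top-down DP: each subproblem is computed once, so O(n) time.
    public static long fib(int n) {
        if (n <= 1) return n;                        // base cases
        if (memo.containsKey(n)) return memo.get(n); // reuse the stored result
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result);                         // store for later reuse
        return result;
    }
}
```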
3.2 Bottom-Up Approach (Tabulation)
● Tabulation involves solving the problem iteratively, starting from the base case and building up to the final solution.
● The solutions to all subproblems are stored in a table (array) in a bottom-up manner.
Example (Fibonacci):
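A sketch of the bottom-up version:

```java
public class FibTab {
    // Bottom-up DP: fill the table from the base cases upward; O(n) time.
    public static long fib(int n) {
        if (n <= 1) return n;
        long[] dp = new long[n + 1];
        dp[0] = 0; // base cases
        dp[1] = 1;
        for (int i = 2; i <= n; i++) {
            dp[i] = dp[i - 1] + dp[i - 2]; // build up from smaller subproblems
        }
        return dp[n];
    }
}
```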
📌 4️⃣ Common DP Problems
4.1 Fibonacci Sequence
4.2 0/1 Knapsack Problem
● Problem: Given weights and values of items, determine the maximum value that can be obtained by selecting items such that their total weight does not exceed the knapsack capacity.
● Recursive Relation: dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
Time Complexity: O(n * W), where n is the number of items, and W is the capacity of the knapsack.
4.3 Longest Common Subsequence (LCS)
● Problem: Given two sequences, find the length of the longest subsequence common to both sequences.
● Recursive Relation: dp[i][j] = dp[i-1][j-1] + 1 if the characters match, else max(dp[i-1][j], dp[i][j-1])
Time Complexity: O(m * n), where m and n are the lengths of the two sequences.
4.4 Longest Increasing Subsequence (LIS)
● Problem: Find the longest subsequence of a given sequence such that all elements are in increasing order.
📌 5️⃣ DP Patterns to Remember
5.1 0/1 Knapsack Pattern
This pattern is useful in problems where we are given a set of items and need to make a decision (select or reject) based on certain constraints (e.g., weight, value).
● Recursive Relation: dp[i][w] = max(dp[i-1][w], dp[i-1][w - weight[i]] + value[i])
5.2 Unbounded Knapsack Pattern
Similar to 0/1 Knapsack but with the difference that each item can be taken any number of times (unbounded).
● Recursive Relation: dp[w] = max(dp[w], dp[w - weight[i]] + value[i]) for every item i
5.3 Longest Common Subsequence Pattern
Used in problems where we are asked to find common subsequences (e.g., LCS, Edit Distance).
● Recursive Relation: dp[i][j] = dp[i-1][j-1] + 1 if the characters match, else max(dp[i-1][j], dp[i][j-1])
Problem | Time Complexity
Fibonacci Sequence | O(n)
Knapsack Problem | O(n * W)
Longest Common Subsequence (LCS) | O(m * n)
Longest Increasing Subsequence (LIS) | O(n²)
Coin Change Problem | O(n * amount)
📌 7️⃣ When to Use Dynamic Programming
● When the problem involves optimal substructure and overlapping subproblems.
● When brute force recursion would result in repetitive calculations.
● DP is ideal for optimization problems, such as maximizing profit, minimizing cost,
or counting distinct ways to achieve a result.
Example (1D DP Optimization):
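A sketch of the space-optimized idea, using Fibonacci: only the last two results are needed, so the O(n) table shrinks to two variables.

```java
public class FibOptimized {
    // Only the previous two results are needed, so O(1) space instead of O(n).
    public static long fib(int n) {
        if (n <= 1) return n;
        long prev = 0, curr = 1; // F(0) and F(1)
        for (int i = 2; i <= n; i++) {
            long next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }
}
```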
19. 2-D Dynamic Programming
📌 1️⃣ Definition
2-D Dynamic Programming (DP) involves using a 2-dimensional table (usually a 2D array) to store the results of overlapping subproblems. This approach is useful for problems that involve two sets of parameters, often representing two dimensions of state.
2-D DP is typically used when the problem involves two varying parameters or indices. Common problems include:
🔹 Longest Common Subsequence (LCS)
● Problem: Given two sequences, find the length of the longest subsequence common to both sequences.
Recursive Relation: dp[i][j] = dp[i-1][j-1] + 1 if the characters match, else max(dp[i-1][j], dp[i][j-1])
Time Complexity: O(m * n), where m and n are the lengths of the two sequences.
Example (LCS):
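A sketch of the 2-D table fill:

```java
public class LCS {
    // dp[i][j] = LCS length of s1[0..i) and s2[0..j); O(m * n).
    public static int lcs(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1]; // row/col 0 are the empty-string base cases
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1] + 1; // characters match
                } else {
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop one char
                }
            }
        }
        return dp[m][n];
    }
}
```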
🔹 Matrix Chain Multiplication
● Problem: Given a sequence of matrices, find the most efficient way to multiply them together. The problem is to find the minimum number of scalar multiplications needed to multiply the chain.
Recursive Relation: dp[i][j] = min over all split points k of (dp[i][k] + dp[k+1][j] + cost of multiplying the two resulting matrices).
4.1 Initialization
For most DP problems, the 2D table is initialized to zero or a base value (e.g., dp[0][0] = 0).
4.3 Backtracking
Once the table is filled, the solution is typically found by backtracking from the last cell (dp[m][n] for LCS) to trace back the path to the optimal solution.
📌 5️⃣ 2-D DP Patterns to Remember
5.1 Path Problems (e.g., Grid Problems)
Many problems involve finding the number of unique paths from the top-left to the bottom-right of a grid. These types of problems can be solved using 2-D DP.
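The grid-path pattern can be sketched as:

```java
public class UniquePaths {
    // dp[i][j] = number of paths to cell (i, j), moving only right or down; O(m * n).
    public static int uniquePaths(int m, int n) {
        int[][] dp = new int[m][n];
        for (int i = 0; i < m; i++) dp[i][0] = 1; // first column: one path (all down)
        for (int j = 0; j < n; j++) dp[0][j] = 1; // first row: one path (all right)
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1]; // arrive from above or the left
            }
        }
        return dp[m - 1][n - 1];
    }
}
```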
You can optimize space by using a 1-D DP array in some problems where only the current and previous rows are needed. This reduces space complexity from O(m * n) to O(n) or O(m), depending on the problem.
20. Bit Manipulation
📌 1️⃣ Definition
Bit manipulation involves operations that directly manipulate bits (0s and 1s) of binary
numbers. Bitwise operations are used to perform tasks like checking bits, setting bits,
toggling bits, and shifting bits. It is commonly used for optimization, low-level programming,
and problem-solving in competitive programming.
Operator | Description | Example
AND (&) | Bitwise AND: returns 1 only if both bits are 1. | a & b
OR (|) | Bitwise OR: returns 1 if at least one bit is 1. | a | b
XOR (^) | Bitwise XOR: returns 1 if bits are different. | a ^ b
NOT (~) | Bitwise NOT: inverts the bits. | ~a
Left Shift (<<) | Shifts bits to the left by a given number of positions (multiplies by 2^n). | a << n
Right Shift (>>) | Shifts bits to the right by a given number of positions (divides by 2^n). | a >> n
📌 3️⃣ Key Operations & Concepts
3.1 Checking if a Number is Odd or Even
To check if a number is odd or even, use the AND operator (&) with 1.
● Even: n & 1 == 0
● Odd: n & 1 == 1
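The odd/even check, plus two related tricks referenced in the complexity table below, can be sketched as:

```java
public class BitTricks {
    // n & 1 keeps only the lowest bit: 0 for even, 1 for odd.
    public static boolean isOdd(int n) {
        return (n & 1) == 1;
    }

    // A power of two has exactly one set bit, so n & (n - 1) clears it to 0.
    public static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }

    // Counts set bits by clearing the lowest set bit each iteration.
    public static int countSetBits(int n) {
        int count = 0;
        while (n != 0) {
            n &= (n - 1);
            count++;
        }
        return count;
    }
}
```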
Operation | Time Complexity
AND (&) | O(1)
OR (|) | O(1)
XOR (^) | O(1)
NOT (~) | O(1)
Left Shift (<<) | O(1)
Right Shift (>>) | O(1)
Counting Set Bits | O(number of bits)
Checking Power of Two | O(1)
NEVER LOOK FOR THE SOLUTION, ALWAYS THINK FOR THE SOLUTION! - NISHCHAL