RECAP OF CS577

Uploaded by Harshit Joshi

Graph Representation

Wednesday, April 24, 2024 8:01 PM

1. Adjacency Matrix - A |V| x |V| matrix with entry 1 if the two nodes are adjacent.
2. If it is a weighted graph, we can record the weights instead of 1s.
3. Adjacency List - For each node, a list of all its adjacent nodes.
4. Edge List - A list of all the node pairs representing the edges.
5. Incidence Matrix - A |V| x |E| matrix with entry 1 if the node is incident to the edge.
6. Complexities for each representation:
a. Adjacency matrix - O(|V|^2) space, O(1) edge lookup.
b. Adjacency list - O(|V| + |E|) space, O(deg(u)) edge lookup.
c. Edge list - O(|E|) space, O(|E|) edge lookup.
d. Incidence matrix - O(|V| * |E|) space.
7. For BFS and DFS, adjacency lists are the best representation, since both then run in O(|V| + |E|).
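As a small sketch of the adjacency-list representation, here is BFS over a tiny hypothetical graph (the edge list and node count are made up for illustration); with an adjacency list the traversal touches each node and edge once, i.e. O(|V| + |E|):

```python
from collections import deque

# Build an adjacency list from an edge list (hypothetical small graph).
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]
n = 4
adj = [[] for _ in range(n)]
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)  # undirected: store both directions

def bfs_order(adj, start):
    """Return nodes in BFS order; O(|V| + |E|) with an adjacency list."""
    seen = [False] * len(adj)
    seen[start] = True
    order, queue = [], deque([start])
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return order

print(bfs_order(adj, 0))  # → [0, 1, 2, 3]
```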

Strong Connectivity
Sunday, February 4, 2024 1:07 PM

1. A pair of nodes u, v in a directed graph are mutually reachable if there is a path from u to v and a path from v to u.
2. A digraph is strongly connected if all of its nodes are mutually reachable from one another.
3. We can check whether two nodes are mutually reachable by doing a DFS/BFS from u to v and from v to u.
4. A strongly connected component (SCC) is a maximal strongly connected subgraph.
5. Maximal means that we cannot add any more nodes to the subgraph without breaking strong connectivity.
6. To find the SCCs, we use Kosaraju's algorithm: run a DFS on G recording finish times, then run a DFS on GREV
processing nodes in decreasing finish time; each tree of the second DFS is one SCC.
7. Here, GREV refers to the reverse graph of G, which is obtained by taking G and flipping all the edges so they point in
the opposite direction.
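A minimal sketch of Kosaraju's algorithm under the description above (the small test graph is hypothetical):

```python
def kosaraju_scc(n, edges):
    """SCCs via Kosaraju: DFS on G recording finish order, then DFS on GREV
    in decreasing finish order; each second-pass tree is one SCC."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)  # flipped edge for the reverse graph

    order, seen = [], [False] * n
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)  # post-order: u finishes here

    for u in range(n):
        if not seen[u]:
            dfs1(u)

    comp = [-1] * n
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)

    c = 0
    for u in reversed(order):  # decreasing finish time
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    return comp  # comp[v] = id of v's SCC

# Two SCCs: the cycle {0, 1, 2} and the lone node {3}.
print(kosaraju_scc(4, [(0, 1), (1, 2), (2, 0), (2, 3)]))  # → [0, 0, 0, 1]
```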

Greedy
Sunday, February 4, 2024 2:25 PM

1. Greedy algorithms can be thought of as heuristics that make locally optimal choices.
2. A greedy algorithm processes the input in a specific order. It handles each request in the way that best serves
the objective, as if the current request were the last one.
3. This basically means that a greedy algorithm does the currently best thing without worrying about the future.
4. Note that greedy algorithms are not always optimal, and there can be multiple greedy algorithms for a problem.
5. To show that a greedy algorithm is optimal, we have the below two techniques:
a. Stays Ahead - At every step, show that our solution is at least as good as the optimal solution.
b. Exchange Argument - Start with an optimal solution and transform it into our solution through a series of steps,
none of which makes it worse.
6. An example of a greedy algorithm is Dijkstra's Algorithm.
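Interval scheduling is the classic example usually paired with the "stays ahead" proof (the interval data below is hypothetical): sorting by finish time and always taking the earliest-finishing compatible request is locally best and turns out to be globally optimal.

```python
def max_nonoverlapping(intervals):
    """Greedy interval scheduling: repeatedly take the request that finishes
    earliest among those compatible with what we already picked."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible: do the locally best thing
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(max_nonoverlapping([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10)]))
# → [(1, 4), (5, 7)]
```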

Divide and Conquer
Tuesday, April 30, 2024 8:44 PM

1. Split the problem into smaller sub-problems and recursively solve them.
2. Use the outputs of the smaller sub-problems to build the solution to the original problem.
3. To prove such recursive algorithms correct, we always use strong induction.
4. The algorithm for merge sort is: split the array in half, recursively sort each half, then merge the two sorted
halves in linear time.
5. Counting inversions - Same as merge sort, but whenever we add an item from the right half B during the merge,
every item still remaining in the left half A forms an inversion with it.
6. Finding the kth value - Choose a pivot, partition around it (the pivot lands in its sorted position), and recurse
only on the side that contains the kth value.
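A compact sketch of the inversion-counting variant of merge sort described above (the sample input is made up):

```python
def sort_and_count(a):
    """Merge sort that also counts inversions: when an element of the right
    half B is merged before elements remaining in the left half A, each of
    those remaining elements forms one inversion with it."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, count_left = sort_and_count(a[:mid])
    right, count_right = sort_and_count(a[mid:])
    merged, i, j, cross = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            cross += len(left) - i  # everything still in A is an inversion
    merged += left[i:] + right[j:]
    return merged, count_left + count_right + cross

print(sort_and_count([3, 1, 2]))  # → ([1, 2, 3], 2)
```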

Dynamic Programming
Thursday, May 2, 2024 2:07 PM

1. For DP problems, always come up with a recursive solution first. To find it, use the dichotomy approach: split
the decision at each step into two fixed cases (e.g. the current item is either in the solution or not).
2. After we come up with the recursive algorithm, we identify the number of exclusive parameters the recursion
depends on; this is also the number of dimensions of the solution matrix.
3. Once we know the structure of the solution matrix, we define the Bellman equation.
4. This simply contains the base case plus the recursive calls restated in terms of matrix cells.
5. Next, we explain the order in which we populate the solution matrix from the Bellman equation.
6. Finally, we state which cell contains the solution, usually the first or last cell.
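The recipe above can be sketched on 0/1 knapsack (chosen here only as an illustration; the weights and values are made up): the dichotomy is "take item i or skip it", the two exclusive parameters are (items considered, capacity left), and the solution sits in the last cell.

```python
def knapsack(weights, values, cap):
    """0/1 knapsack via the DP recipe: a 2-D solution matrix indexed by
    (items considered, remaining capacity)."""
    n = len(weights)
    # Bellman equation:
    #   M[0][c] = 0                                    (base case)
    #   M[i][c] = max(M[i-1][c],                       (skip item i)
    #                 values[i-1] + M[i-1][c - w])     (take it, if w <= c)
    M = [[0] * (cap + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):  # populate row by row
        w, v = weights[i - 1], values[i - 1]
        for c in range(cap + 1):
            M[i][c] = M[i - 1][c]
            if w <= c:
                M[i][c] = max(M[i][c], v + M[i - 1][c - w])
    return M[n][cap]  # the answer is in the last cell

print(knapsack([2, 3, 4], [3, 4, 5], 5))  # → 7 (take items of weight 2 and 3)
```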

Network Flow
Thursday, May 2, 2024 10:32 PM

1. Basically a graph theory problem on a connected directed graph.
2. The edges have capacities, i.e. the maximum flow that can pass through each edge, and nodes act as switches
directing the flow.
3. The flow starts at a source node s and ends at a sink node t.
4. All the other internal nodes obey conservation of flow, which means incoming flow = outgoing flow.
5. The value of the flow is the flow going out of s, which equals the flow coming in at t.
6. All the problems are going to be a max flow or a min cut problem (the two have the same value).
7. Max flow means choosing the flow on the edges to maximize the total flow from s to t while maintaining the
above constraints.
8. A cut is a partition of the nodes into two sets, where s and t must belong to different sets.
9. The cut capacity is the sum of the capacities of the edges going from the s-side to the t-side.
10. Any cut capacity gives an upper bound on the max flow.
11. The min cut is the cut that minimizes this cut capacity.
12. Usually when we want to extract a set of nodes, we use min cut.
13. Even though the goal might be to maximize something, if it is done by selecting particular nodes, it is going to be a
min cut problem.
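A small illustration of cut capacity (the four-edge network is hypothetical): only edges leaving the s-side count, never edges pointing back into it.

```python
def cut_capacity(cap, s_side):
    """Capacity of a cut: sum of capacities of edges leaving the s-side.
    `cap` maps directed edges (u, v) to capacities; `s_side` is the node
    set containing s. Edges back into the s-side do not count."""
    return sum(c for (u, v), c in cap.items()
               if u in s_side and v not in s_side)

# Tiny hypothetical network: s -> a -> t and s -> b -> t.
cap = {("s", "a"): 3, ("a", "t"): 2, ("s", "b"): 2, ("b", "t"): 4}
print(cut_capacity(cap, {"s"}))       # → 5  (edges s->a, s->b)
print(cut_capacity(cap, {"s", "a"}))  # → 4  (edges a->t, s->b)
```

Here the max flow is 4, so the second cut is a min cut: it matches the upper bound exactly.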

Ford Fulkerson Method
Friday, May 3, 2024 1:25 AM

1. First we pick an arbitrary path from s to t and push the maximum possible flow along it; this defines the current
flow function.
2. Next we build a residual graph with exactly the same nodes.
3. To determine the residual capacities, we loop through all edges (u, v) and do:
a. Add an edge (u, v) of residual capacity = original capacity - flow on (u, v) in the flow function.
b. If the edge was not on our arbitrary path, its flow in the function is 0.
c. Next, we add an edge (v, u) (opposite direction) of capacity equal to the flow on (u, v).
4. Next, we find an augmenting path, which is any path from s to t in the residual graph.
5. We also find the bottleneck, which is the minimum residual capacity along the augmenting path.
6. We push this bottleneck along the path, update the residual graph, and repeat.
7. We keep doing this until no augmenting paths are left.
8. The max flow is the sum of the bottlenecks pushed along all the augmenting paths we found.
9. This entire procedure is the Ford-Fulkerson method.
10. A scaled, more efficient version of Ford-Fulkerson is described in the next section.
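The steps above can be sketched as follows (the adjacency-matrix residual representation and DFS path search are implementation choices for the illustration, not from the notes):

```python
def max_flow(n, edges, s, t):
    """Ford-Fulkerson: keep residual capacities, repeatedly find an
    augmenting s-t path, push its bottleneck, stop when no path remains."""
    res = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        res[u][v] += c  # residual capacity; reverse edges start at 0

    def find_path():
        # any s-t path in the residual graph (DFS here)
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path[::-1]
            for v in range(n):
                if res[u][v] > 0 and v not in parent:
                    parent[v] = u
                    stack.append(v)
        return None

    flow = 0
    while (path := find_path()) is not None:
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck  # forward residual shrinks
            res[v][u] += bottleneck  # reverse residual grows (undo option)
        flow += bottleneck
    return flow

# s=0, t=3: two paths, bottlenecks 2 and 2.
print(max_flow(4, [(0, 1, 3), (1, 3, 2), (0, 2, 2), (2, 3, 4)], 0, 3))  # → 4
```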

Scaled Version
Friday, May 3, 2024 11:21 AM

1. In the scaled version of the Ford-Fulkerson method, we add a parameter delta, and only consider edges of
residual capacity >= delta.
2. First, we initialize all edges in the flow function to 0.
3. Next, we initialize delta to the highest power of 2 that is at most the maximum capacity of an edge out of s.
4. For example, if the largest capacity out of s is 100, delta will be 64.
5. Next we run the following while loop:
a. While delta >= 1:
i. Run Ford-Fulkerson as above, restricted to edges of residual capacity >= delta.
ii. Update delta to delta/2.
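The delta initialization in step 3 can be sketched as follows (the capacity list is hypothetical):

```python
def initial_delta(out_caps):
    """Largest power of two that is <= the maximum capacity leaving s,
    e.g. a largest capacity of 100 gives delta = 64."""
    c = max(out_caps)
    delta = 1
    while delta * 2 <= c:
        delta *= 2
    return delta

print(initial_delta([100, 30, 7]))  # → 64
```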

Bipartite Matching
Friday, May 3, 2024 2:56 PM

1. Let's say we have a bipartite graph, and we want to find the largest matching in terms of cardinality.
2. For this, we can convert it to a max flow problem using a reduction.
3. The reduction: add a source s with a capacity-1 edge to every left node, direct every original edge from left to
right with capacity 1, and add a capacity-1 edge from every right node to a sink t; the max flow value equals the size
of the maximum matching.
4. On the exam, drawing the graph is not enough; we have to give the actual reduction algorithm for
converting.
5. Note that if we have an undirected graph in our problem, we can convert it to a directed one by simply adding
edges going in both directions, and then apply the reduction.
6. Once we have found the maximum flow, the matched (disjoint) edges are exactly the original edges that carry
one unit of flow.
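A sketch of the whole pipeline, building the flow network, running a simple Ford-Fulkerson, and extracting the matched edges (the node numbering scheme and the small test graph are choices made for this illustration):

```python
def max_bipartite_matching(L, R, edges):
    """Bipartite matching by reduction to max flow: s -> every left node,
    every edge left -> right, every right node -> t, all capacities 1."""
    # node ids: 0 = s, 1..L = left, L+1..L+R = right, L+R+1 = t
    n = L + R + 2
    s, t = 0, n - 1
    res = [[0] * n for _ in range(n)]
    for u in range(L):
        res[s][1 + u] = 1
    for v in range(R):
        res[1 + L + v][t] = 1
    for u, v in edges:
        res[1 + u][1 + L + v] = 1

    def augment():
        # find one augmenting s-t path and push a unit of flow along it
        stack, parent = [s], {s: None}
        while stack:
            x = stack.pop()
            if x == t:
                while parent[x] is not None:
                    p = parent[x]
                    res[p][x] -= 1
                    res[x][p] += 1
                    x = p
                return True
            for y in range(n):
                if res[x][y] > 0 and y not in parent:
                    parent[y] = x
                    stack.append(y)
        return False

    while augment():
        pass
    # an original edge (u, v) is matched iff a unit of flow crossed it,
    # i.e. its reverse residual edge now has capacity 1
    return [(u, v) for u, v in edges if res[1 + L + v][1 + u] > 0]

print(sorted(max_bipartite_matching(2, 2, [(0, 0), (0, 1), (1, 0)])))
# → [(0, 1), (1, 0)]
```

On this input the size-2 matching is unique: if (0, 0) were matched, left node 1 would have no free partner, so the reverse residual edges must select (0, 1) and (1, 0).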

Network extensions
Sunday, May 5, 2024 5:57 PM

1. Flow network with demands:
a. Allows us to have multiple sources and sinks.
b. For this, each node has a demand dv:
i. If dv < 0 - a source node, where fin(v) - fout(v) = dv
ii. If dv = 0 - an internal node, where fin(v) - fout(v) = 0
iii. If dv > 0 - a sink node, where fin(v) - fout(v) = dv
c. Because of this setup, the aim is not to find max flow, but to determine whether there is a feasible flow, i.e. a
flow that satisfies all the demands.
d. If the sum of all demands != 0, then there is no feasible flow.
e. However, if the sum = 0, then there might be a feasible flow.
f. Note that this concept of demands is only there to help reduce the problem to normal network flow.
g. The reduction: add a super-source S* with an edge S* -> v of capacity -dv for every node with dv < 0, and an
edge v -> T* of capacity dv to a super-sink T* for every node with dv > 0; a feasible flow exists iff the max flow
saturates all these new edges, i.e. equals the sum of the positive demands.
2. Flow network with lower bounds:
a. Here, we take the previous graph with demands, and add lower bounds to the edges as well.
b. Now the flow through each edge must lie between its lower bound and its capacity.
c. The conservation condition fin(v) - fout(v) = dv still holds.
d. The goal is to reduce this to a flow network with just demands, since we already know how to reduce those.
The method: send the lower bound l(e) along every edge, replace each capacity c(e) with c(e) - l(e), and update each
demand to dv - (lin(v) - lout(v)), where lin(v) and lout(v) are the sums of the lower bounds entering and leaving v.
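The demand reduction in part 1 can be sketched end to end (the helper is a plain Ford-Fulkerson max flow; the two-node test network is hypothetical):

```python
def feasible_flow_exists(n, edges, demand):
    """Reduce a demand network to plain max flow: super-source S* feeds
    every node with dv < 0 (capacity -dv), every node with dv > 0 feeds
    super-sink T* (capacity dv). Feasible iff the max S*-T* flow equals
    D = sum of positive demands, i.e. all new edges are saturated."""
    if sum(demand) != 0:
        return False  # demands cannot balance
    S, T = n, n + 1
    all_edges = list(edges)
    D = 0
    for v, d in enumerate(demand):
        if d < 0:
            all_edges.append((S, v, -d))
        elif d > 0:
            all_edges.append((v, T, d))
            D += d

    # plain Ford-Fulkerson max flow on the extended network
    m = n + 2
    res = [[0] * m for _ in range(m)]
    for u, v, c in all_edges:
        res[u][v] += c

    def augment():
        stack, parent = [S], {S: None}
        while stack:
            x = stack.pop()
            if x == T:
                b, y = float("inf"), x
                while parent[y] is not None:
                    b = min(b, res[parent[y]][y]); y = parent[y]
                y = x
                while parent[y] is not None:
                    res[parent[y]][y] -= b; res[y][parent[y]] += b
                    y = parent[y]
                return b
            for y in range(m):
                if res[x][y] > 0 and y not in parent:
                    parent[y] = x
                    stack.append(y)
        return 0

    flow = 0
    while (b := augment()) > 0:
        flow += b
    return flow == D

# node 0 supplies 3 units, node 1 demands 3, one edge between them
print(feasible_flow_exists(2, [(0, 1, 3)], [-3, 3]))  # → True
print(feasible_flow_exists(2, [(0, 1, 2)], [-3, 3]))  # → False (edge too small)
```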

