Maximum Flows and Minimum Cuts
A process cannot be understood by stopping it. Understanding must move with the
flow of the process, must join it and flow with it.
— The First Law of Mentat, in Frank Herbert’s Dune (1965)
Contrary to expectation, flow usually happens not during relaxing moments of leisure
and entertainment, but rather when we are actively involved in a difficult enterprise,
in a task that stretches our mental and physical abilities. . . . Flow is hard to achieve
without effort. Flow is not “wasting time.”
— Mihaly Csíkszentmihályi, Flow: The Psychology of Optimal Experience (1990)
There’s a difference between knowing the path and walking the path.
— Morpheus [Laurence Fishburne], The Matrix (1999)
This is one of the first recorded applications of the maximum flow and minimum cut problems.
For both problems, the input is a directed graph G = (V, E), along with special vertices s and t
called the source and target. As in the previous lectures, I will use u→v to denote the directed
edge from vertex u to vertex v. Intuitively, the maximum flow problem asks for the largest
amount of material that can be transported from s to t; the minimum cut problem asks for the
minimum damage needed to separate s from t.
23.1 Flows
An (s , t )-flow (or just a flow if the source and target are clear from context) is a function
f : E → R≥0 that satisfies the following conservation constraint at every vertex v except possibly
s and t:
$$\sum_{u} f(u\to v) \;=\; \sum_{w} f(v\to w).$$
In English, the total flow into v is equal to the total flow out of v. To keep the notation simple,
we define f(u→v) = 0 if there is no edge u→v in the graph. The value of the flow f, denoted |f|,
is the total net flow out of the source vertex s:
$$|f| \;:=\; \sum_{w} f(s\to w) \;-\; \sum_{u} f(u\to s).$$
It’s not hard to prove that | f | is also equal to the total net flow into the target vertex t, as
follows. To simplify notation, let ∂ f (v) denote the total net flow out of any vertex v:
$$\partial f(v) \;:=\; \sum_{w} f(v\to w) \;-\; \sum_{u} f(u\to v).$$
The conservation constraint implies that ∂f(v) = 0 for every vertex v except s and t, so
$$\sum_{v} \partial f(v) \;=\; \partial f(s) + \partial f(t).$$
On the other hand, any flow that leaves one vertex must enter another vertex, so we must have
∑_v ∂f(v) = 0. It follows immediately that |f| = ∂f(s) = −∂f(t).
Now suppose we have another function c : E → R≥0 that assigns a non-negative capacity c(e)
to each edge e. We say that a flow f is feasible (with respect to c) if f (e) ≤ c(e) for every edge e.
Most of the time we will consider only flows that are feasible with respect to some fixed capacity
function c. We say that a flow f saturates edge e if f (e) = c(e), and avoids edge e if f (e) = 0.
The maximum flow problem is to compute a feasible (s, t)-flow in a given directed graph, with
a given capacity function, whose value is as large as possible.
[Figure: An (s, t)-flow with value 10. Each edge is labeled with its flow/capacity.]
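To make these definitions concrete, the following short Python sketch (not part of the original notes) represents a flow and a capacity function as dictionaries mapping directed edges (u, v) to numbers, and checks the conservation, capacity, and value conditions defined above. The representation and the tiny example network are illustrative assumptions only:

def net_flow_out(f, v):
    """Total net flow out of vertex v: flow leaving v minus flow entering v.
    Here f maps directed edges (u, w) to non-negative flow values; edges
    missing from the dictionary are treated as carrying zero flow."""
    leaving = sum(val for (u, w), val in f.items() if u == v)
    entering = sum(val for (u, w), val in f.items() if w == v)
    return leaving - entering

def is_feasible_flow(f, c, s, t):
    """Check the conservation constraint at every vertex other than s and t,
    and the capacity constraint 0 <= f(e) <= c(e) on every edge."""
    vertices = {x for e in list(c) + list(f) for x in e}
    conserved = all(abs(net_flow_out(f, v)) < 1e-9
                    for v in vertices if v not in (s, t))
    within_capacity = all(0 <= val <= c.get(e, 0) for e, val in f.items())
    return conserved and within_capacity

def flow_value(f, s):
    """The value |f| of a flow is the total net flow out of the source s."""
    return net_flow_out(f, s)

# A tiny hypothetical network: two parallel two-edge paths from s to t.
c = {('s', 'a'): 10, ('a', 't'): 5, ('s', 'b'): 4, ('b', 't'): 8}
f = {('s', 'a'): 5, ('a', 't'): 5, ('s', 'b'): 4, ('b', 't'): 4}
print(is_feasible_flow(f, c, 's', 't'), flow_value(f, 's'))   # True 9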
23.2 Cuts
An (s , t )-cut (or just cut if the source and target are clear from context) is a partition of the
vertices into disjoint subsets S and T —meaning S ∪ T = V and S ∩ T = ∅—where s ∈ S and
t ∈ T.
If we have a capacity function c : E → R≥0 , the capacity of a cut is the sum of the capacities
of the edges that start in S and end in T :
$$\|S, T\| \;:=\; \sum_{v\in S}\sum_{w\in T} c(v\to w).$$
(Again, if v→w is not an edge in the graph, we assume c(v→w) = 0.) Notice that the definition is
asymmetric; edges that start in T and end in S are unimportant. The minimum cut problem is
to compute an (s, t)-cut whose capacity is as small as possible.
[Figure: An (s, t)-cut with capacity 15. Each edge is labeled with its capacity.]
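Under the same illustrative edge-dictionary representation as the previous sketch, the capacity of a cut is a one-line computation; note that edges entering S are deliberately ignored, matching the asymmetric definition above:

def cut_capacity(c, S):
    """Capacity of the cut (S, V \\ S): the total capacity of the edges that
    start inside S and end outside S.  Edges from T back into S are ignored."""
    return sum(cap for (u, v), cap in c.items() if u in S and v not in S)

# The same hypothetical four-edge network used in the previous sketch.
c = {('s', 'a'): 10, ('a', 't'): 5, ('s', 'b'): 4, ('b', 't'): 8}
print(cut_capacity(c, {'s'}))         # 14
print(cut_capacity(c, {'s', 'a'}))    # 9  (a smaller cut)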
Intuitively, the minimum cut is the cheapest way to disrupt all flow from s to t. Indeed, it is
not hard to show the following relationship between flows and cuts:
Lemma 1. Let f be any feasible (s, t)-flow, and let (S, T ) be any (s, t)-cut. The value of f is at
most the capacity of (S, T). Moreover, |f| = ‖S, T‖ if and only if f saturates every edge from S to T
and avoids every edge from T to S.
Proof: Choose your favorite flow f and your favorite cut (S, T ), and then follow the bouncing
inequalities:
$$\begin{aligned}
|f| &= \sum_{w} f(s\to w) \;-\; \sum_{u} f(u\to s) && \text{by definition} \\
    &= \sum_{v\in S}\Bigl(\,\sum_{w} f(v\to w) \;-\; \sum_{u} f(u\to v)\Bigr) && \text{by the conservation constraint} \\
    &= \sum_{v\in S}\Bigl(\,\sum_{w\in T} f(v\to w) \;-\; \sum_{u\in T} f(u\to v)\Bigr) && \text{removing duplicate edges} \\
    &\le \sum_{v\in S}\sum_{w\in T} f(v\to w) && \text{because } f(u\to v)\ge 0 \\
    &\le \sum_{v\in S}\sum_{w\in T} c(v\to w) && \text{because } f(v\to w)\le c(v\to w) \\
    &= \|S,T\| && \text{by definition}
\end{aligned}$$
The two inequalities in this derivation are actually equalities if and only if f(u→v) = 0 and
f(v→w) = c(v→w) for all v ∈ S and u, w ∈ T.
Lemma 1 implies that if |f| = ‖S, T‖, then f must be a maximum flow, and (S, T) must be a
minimum cut.
The Maxflow Mincut Theorem. In any flow network with source s and target t, the value of the
maximum (s, t)-flow is equal to the capacity of the minimum (s, t)-cut.
Ford and Fulkerson proved this theorem as follows. Fix a graph G, vertices s and t, and a
capacity function c : E → R≥0. The proof will be easier if we assume that the capacity function
is reduced: for any vertices u and v, either c(u→v) = 0 or c(v→u) = 0, or equivalently, if an
edge appears in G, then its reversal does not. This assumption is easy to enforce. Whenever an
edge u→v and its reversal v→u are both in the graph, replace the edge u→v with a path u→x→v of
length two, where x is a new vertex and c(u→x) = c(x→v) = c(u→v). The modified graph has
the same maximum flow value and minimum cut capacity as the original graph.
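The splitting transformation described in the last paragraph is easy to carry out mechanically. The following sketch, again using the hypothetical edge-dictionary representation, splits one edge of every antiparallel pair through a freshly generated vertex; the generated vertex names are arbitrary:

def reduce_capacities(c):
    """Enforce the 'reduced' assumption described above: whenever both u->v and
    v->u appear, replace u->v by a length-two path u->x->v through a brand-new
    vertex x with the same capacity.  Returns a new capacity dictionary."""
    reduced = dict(c)
    fresh = 0
    for (u, v) in list(c):
        if (v, u) in reduced and (u, v) in reduced:
            x = ('new', fresh)             # a fresh vertex name
            fresh += 1
            cap = reduced.pop((u, v))
            reduced[(u, x)] = cap
            reduced[(x, v)] = cap
    return reduced

# Example: the antiparallel pair a<->b is split; the lone edge a->c is untouched.
print(reduce_capacities({('a', 'b'): 3, ('b', 'a'): 2, ('a', 'c'): 1}))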
For any flow f that is feasible with respect to c, and for any vertices u and v, the residual capacity c_f(u→v) is defined by
$$c_f(u\to v) \;:=\; \begin{cases} c(u\to v) - f(u\to v) & \text{if } u\to v \in E,\\ f(v\to u) & \text{if } v\to u \in E,\\ 0 & \text{otherwise.}\end{cases}$$
Since f ≥ 0 and f ≤ c, the residual capacities are always non-negative. It is possible to have
c_f(u→v) > 0 even if u→v is not an edge in the original graph G. Thus, we define the residual
graph G_f = (V, E_f), where E_f is the set of edges whose residual capacity is positive. Notice that
the residual capacities are not necessarily reduced; it is quite possible to have both c_f(u→v) > 0
and c_f(v→u) > 0.
[Figure: A flow f in a weighted graph G and the corresponding residual graph G_f.]
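Under the reduced-capacity assumption, the residual capacities defined above can be computed directly from the capacity and flow dictionaries. The sketch below is illustrative; it returns only the edges with positive residual capacity, which are exactly the edges of G_f:

def residual_capacities(c, f):
    """Residual capacity function c_f for a flow f that is feasible with respect
    to a *reduced* capacity function c (no edge appears with its reversal):
    leftover forward capacity on each edge, plus capacity to cancel existing flow."""
    cf = {}
    for (u, v), cap in c.items():
        flow = f.get((u, v), 0)
        if cap - flow > 0:
            cf[(u, v)] = cap - flow        # unused forward capacity
        if flow > 0:
            cf[(v, u)] = flow              # flow that could be pushed back
    return cf

c = {('s', 'a'): 10, ('a', 't'): 5}
f = {('s', 'a'): 5, ('a', 't'): 5}
print(residual_capacities(c, f))   # {('s', 'a'): 5, ('a', 's'): 5, ('t', 'a'): 5}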
Suppose there is no path from the source s to the target t in the residual graph G f . Let S
be the set of vertices that are reachable from s in G f , and let T = V \ S. The partition (S, T ) is
clearly an (s, t)-cut. For every vertex u ∈ S and v ∈ T, we have c_f(u→v) = 0, which implies that
c(u→v) − f(u→v) = 0 and f(v→u) = 0. In other words, our flow f saturates every edge from S to T
and avoids every edge from T to S. Lemma 1 now implies that |f| = ‖S, T‖, which means f is a
maximum flow and (S, T) is a minimum cut.
On the other hand, suppose there is a path of edges with positive residual capacity from s to t
in G_f; we call such a path an augmenting path. Let F denote the minimum residual capacity
c_f(u→v) over all edges u→v in some fixed augmenting path P, and define a new flow f′ : E → R
as follows:
$$f'(u\to v) \;:=\; \begin{cases} f(u\to v) + F & \text{if } u\to v \in P,\\ f(u\to v) - F & \text{if } v\to u \in P,\\ f(u\to v) & \text{otherwise.}\end{cases}$$
[Figure: An augmenting path in G_f with value F = 5, and the augmented flow f′.]
To prove that the flow f′ is feasible with respect to the original capacities c, we need to verify
that f′ ≥ 0 and f′ ≤ c. Consider an edge u→v in G. If u→v is in the augmenting path, then
f′(u→v) > f(u→v) ≥ 0 and
$$f'(u\to v) \;=\; f(u\to v) + F \;\le\; f(u\to v) + c_f(u\to v) \;=\; c(u\to v).$$
On the other hand, if the reversal v→u is in the augmenting path, then f′(u→v) < f(u→v) ≤
c(u→v), which implies that
$$f'(u\to v) \;=\; f(u\to v) - F \;\ge\; f(u\to v) - c_f(v\to u) \;=\; 0.$$
Finally, we observe that (without loss of generality) only the first edge in the augmenting path
leaves s, so |f′| = |f| + F > |f|. In other words, f is not a maximum flow.
This completes the proof!
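The proof is constructive, and it is worth seeing the resulting augmenting-path algorithm spelled out. The following sketch uses the same hypothetical edge-dictionary representation as the earlier snippets and finds augmenting paths by depth-first search; any other search strategy would do:

def max_flow(c, s, t):
    """Generic Ford-Fulkerson: repeatedly find an augmenting path in the residual
    graph (here by depth-first search) and push the bottleneck residual capacity
    along it.  Assumes c is reduced: no edge appears together with its reversal."""
    f = {e: 0 for e in c}
    vertices = {x for e in c for x in e}

    def residual(u, v):
        if (u, v) in c:
            return c[(u, v)] - f[(u, v)]
        if (v, u) in c:
            return f[(v, u)]
        return 0

    def augmenting_path():
        stack, parent = [s], {s: None}
        while stack:
            u = stack.pop()
            if u == t:
                path, v = [], t
                while parent[v] is not None:
                    path.append((parent[v], v))
                    v = parent[v]
                return path
            for v in vertices:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    stack.append(v)
        return None

    while True:
        path = augmenting_path()
        if path is None:
            return f
        F = min(residual(u, v) for (u, v) in path)
        for (u, v) in path:
            if (u, v) in c:
                f[(u, v)] += F        # push along a forward edge
            else:
                f[(v, u)] -= F        # cancel flow on a reversed edge

# Hypothetical example; the maximum flow value here is 13.
c = {('s', 'a'): 10, ('a', 't'): 5, ('s', 'b'): 4, ('b', 't'): 8, ('a', 'b'): 6}
f = max_flow(c, 's', 't')
print(sum(f[u, v] for (u, v) in f if u == 's'))   # 13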
Integrality Theorem. If all capacities in a flow network are integers, then there is a maximum
flow such that the flow through every edge is an integer.
Proof: We argue by induction that after each iteration of the augmenting path algorithm, all
flow values and residual capacities are integers. Before the first iteration, residual capacities are
the original capacities, which are integral by definition. In each later iteration, the induction
hypothesis implies that the capacity of the augmenting path is an integer, so augmenting changes
the flow on each edge, and therefore the residual capacity of each edge, by an integer.
In particular, the algorithm increases the overall value of the flow by a positive integer, which
implies that the augmenting path algorithm halts and returns a maximum flow.
If every edge capacity is an integer, the algorithm halts after at most |f∗| iterations, where f∗ is
the actual maximum flow. In each iteration, we can build the residual graph G f and perform a
whatever-first-search to find an augmenting path in O(E) time. Thus, for networks with integer
capacities, the Ford-Fulkerson algorithm runs in O(E| f ∗ |) time in the worst case.
The following example shows that this running time analysis is essentially tight. Consider
the 4-node network illustrated below, where X is some large integer. The maximum flow in this
network is clearly 2X . However, Ford-Fulkerson might alternate between pushing 1 unit of flow
along the augmenting path s→u→v→t and then pushing 1 unit of flow along the augmenting path
s→v→u→t, leading to a running time of Θ(X) = Ω(|f∗|).
[Figure: A bad example for the Ford-Fulkerson algorithm: the edges s→u, s→v, u→t, and v→t have capacity X, and the edge u→v has capacity 1.]
Ford and Fulkerson’s algorithm works quite well in many practical situations, or in settings
where the maximum flow value | f ∗ | is small, but without further constraints on the augmenting
paths, this is not an efficient algorithm in general. The example network above can be described
using only O(log X ) bits; thus, the running time of Ford-Fulkerson is actually exponential in the
input size.
But what if the capacities are not integers? If some capacities are irrational, the Ford-Fulkerson
algorithm may not halt at all. The following example, due to Uri Zwick, has three "horizontal"
edges with capacities 1, 1, and φ, where φ = (√5 − 1)/2 is the inverse golden ratio; every other
edge has some huge capacity X. The key identity is φ² = 1 − φ, or equivalently φ^{k+2} = φ^k − φ^{k+1}
for every non-negative integer k.
Suppose the Ford-Fulkerson algorithm starts by choosing the central augmenting path, shown
in the figure below. The three horizontal edges, in order from left to right, now
have residual capacities 1, 0, and φ. Suppose inductively that the horizontal residual capacities
are φ^{k−1}, 0, φ^k for some non-negative integer k.
1. Augment along B, adding φ^k to the flow; the residual capacities are now φ^{k+1}, φ^k, 0.
2. Augment along C, adding φ^k to the flow; the residual capacities are now φ^{k+1}, 0, φ^k.
3. Augment along B, adding φ^{k+1} to the flow; the residual capacities are now 0, φ^{k+1}, φ^{k+2}.
4. Augment along A, adding φ^{k+1} to the flow; the residual capacities are now φ^{k+1}, 0, φ^{k+2}.
It follows by induction that after 4n + 1 augmentation steps, the horizontal edges have residual
capacities φ^{2n}, 0, φ^{2n+1}. As the number of augmentations grows to infinity, the value of the
flow converges to
$$1 + 2\sum_{i=1}^{\infty} \varphi^{i} \;=\; 1 + \frac{2\varphi}{1-\varphi} \;=\; 2 + \sqrt{5} \;<\; 7,$$
which is far less than the value of the maximum flow. In particular, the algorithm never halts,
and the flow it maintains does not even converge to a maximum flow.
[Figure: Uri Zwick's non-terminating flow example, and three augmenting paths A, B, and C. The three horizontal edges have capacities 1, 1, and φ; every other edge has capacity X.]
Practically-minded readers might wonder why we should care about irrational capacities;
after all, computers can’t represent anything but (small) integers or (dyadic) rationals exactly.
Good question! One reason is that the integer restriction is literally artificial; it’s an artifact of
actual computational hardware (or perhaps the otherwise-irrelevant laws of physics), not an
inherent feature of the abstract computational problem. But a more practical reason is that the
behavior of the algorithm with irrational inputs tells us something about its worst-case behavior in
practice given floating-point capacities—terrible! Even with very reasonable capacities, a careless
implementation of Ford-Fulkerson could enter an infinite loop simply because of round-off error.
Consider an arbitrary graph G with source vertex s and target vertex t. Fix any two (s, t)-flows
f and g and any two real numbers α and β, and consider the function h: E → R defined by
setting
h(u→v) := α · f(u→v) + β · g(u→v)
for every edge u→v; we can write this definition more simply as h = α f + β g. Straightforward
definition-chasing implies that h is also an (s, t)-flow with value |h| = α| f | + β|g|. More generally,
any weighted sum of (s, t)-flows is also an (s, t)-flow.
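As a quick sanity check of this linearity claim, the following snippet builds a weighted sum of two small (hypothetical) flows and verifies that the conservation constraint still holds and that the value combines linearly:

def net_flow_out(f, v):
    """Net flow out of v under the edge-dictionary representation used earlier."""
    return (sum(val for (a, b), val in f.items() if a == v)
            - sum(val for (a, b), val in f.items() if b == v))

# Two (s,t)-flows on a small hypothetical network, and their weighted sum.
f = {('s', 'a'): 3, ('a', 't'): 3}
g = {('s', 'a'): 1, ('a', 'b'): 1, ('b', 't'): 1}
alpha, beta = 2.0, 0.5
h = {e: alpha * f.get(e, 0) + beta * g.get(e, 0) for e in set(f) | set(g)}

for v in ('a', 'b'):                        # every vertex other than s and t
    assert abs(net_flow_out(h, v)) < 1e-9   # conservation still holds for h
print(net_flow_out(h, 's'))                 # |h| = 2|f| + 0.5|g| = 6.5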
It turns out that any (s, t)-flow can be written as a weighted sum of flows with a very special
structure. For any directed path P from s to t, we define a corresponding path flow as follows:
$$P(u\to v) \;:=\; \begin{cases} 1 & \text{if } u\to v \in P,\\ -1 & \text{if } v\to u \in P,\\ 0 & \text{otherwise.}\end{cases}$$
Straightforward definition chasing shows that P : E → R is an (s, t)-flow with value 1. Similarly,
for any directed cycle C, we define a corresponding cycle flow by setting C(u→v) = 1 if u→v ∈ C,
C(u→v) = −1 if v→u ∈ C, and C(u→v) = 0 otherwise;
again, it is easy to verify that C : E → R is an (s, t)-flow with value zero. Our earlier argument
implies that any weighted sum of these path and cycle flows gives us another (s, t)-flow; this
weighted sum is called a flow decomposition of the resulting flow. Moreover, every flow has such
a decomposition.
Flow Decomposition Theorem. Every feasible (s, t)-flow f can be written as a weighted sum of
directed (s, t)-paths and directed cycles. Moreover, a directed edge u→v appears in at least one of
these paths or cycles if and only if f(u→v) > 0, and the total number of paths and cycles is at most
the number of edges in the network.
Proof: We prove the theorem by induction on the number of edges carrying non-zero flow. Fix
an arbitrary (s, t)-flow f in an arbitrary flow network G. There are three cases to consider:
• If f(u→v) = 0 for every edge u→v, then f is a weighted sum of the empty set of paths and
cycles.
• Suppose |f| = 0, but some edge carries positive flow. Because the value of f is zero, f is
conserved at every vertex, so if we start at any edge with positive flow and repeatedly follow
outgoing edges with positive flow, we must eventually revisit a vertex. Thus, there is a directed
cycle C in which every edge carries positive flow.
Define f_min(C) := min_{e∈C} f(e), and consider the function f′ := f − f_min(C) · C, or more
verbosely,
$$f'(u\to v) \;:=\; \begin{cases} f(u\to v) - f_{\min}(C) & \text{if } u\to v \in C,\\ f(u\to v) + f_{\min}(C) & \text{if } v\to u \in C,\\ f(u\to v) & \text{otherwise.}\end{cases}$$
Straightforward definition chasing shows that f′ is indeed a feasible flow in G with value 0.
There is at least one edge e ∈ C such that f(e) = f_min(C) and therefore f′(e) = 0. Thus,
fewer edges carry flow in f′ than in f. The induction hypothesis implies that f′ has a valid
decomposition into at most E − 1 paths and cycles. Adding C with the appropriate weight
gives us a flow decomposition for f; specifically, f = f′ + f_min(C) · C.
• The final case |f| > 0 is similar to the previous case. Conservation implies that there is a
directed walk s→v_1→v_2→ ··· →v_ℓ→t in which every edge carries positive flow. By removing
loops, we can assume without loss of generality that this walk is a simple path P.
Let f_min(P) := min_{e∈P} f(e), and define a new flow f′ := f − f_min(P) · P. We can easily
verify that f′ is a feasible flow in G with value |f| − f_min(P). The induction hypothesis
implies that f′ can be decomposed into at most E − 1 paths and cycles. Adding P with
weight f_min(P) gives us a flow decomposition for f.
The previous argument implies that we can strengthen the decomposition theorem in two
interesting special cases. First, we can decompose any flow with value zero into a weighted sum
of cycles; no paths are necessary. Flows with value zero are often called circulations. On the
other hand, we can decompose any acyclic (s, t)-flow into a weighted sum of (s, t)-paths; no
cycles are necessary.
The proof translates directly into an algorithm, similar to the Ford-Fulkerson
algorithm, to decompose any (s, t)-flow into paths and cycles in O(V E) time. The algorithm
repeatedly seeks either a directed (s, t)-path or a directed cycle in the remaining flow, and then
subtracts as much flow as possible along that path or cycle, until the flow is empty. Each iteration
can be executed in O(V ) time and removes at least one edge from the graph, so the entire
algorithm runs in O(V E) time.
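Here is one way the decomposition procedure just described might look in code, using the same hypothetical edge-dictionary representation as before. The guard that caps a path's weight at the remaining flow value is an implementation detail of this sketch, added so that it also handles flows containing cycles that pass through both s and t:

def decompose_flow(f, s, t):
    """Decompose a feasible (s,t)-flow (with non-negative value) into weighted
    (s,t)-paths and directed cycles, mirroring the proof above.
    Returns a list of (weight, list_of_edges) pairs."""
    f = {e: val for e, val in f.items() if val > 0}               # working copy
    value = (sum(val for (a, _), val in f.items() if a == s)
             - sum(val for (_, b), val in f.items() if b == s))
    pieces = []
    while f:
        want_path = value > 0
        # If the remaining value is positive, walk from s toward t to peel off
        # an (s,t)-path; otherwise start anywhere and peel off a cycle.
        start = s if want_path else next(iter(f))[0]
        walk, seen, u = [], {start}, start
        while True:
            if want_path and u == t:
                piece, is_path = walk, True
                break
            e = next((a, b) for (a, b) in f if a == u)   # exists by conservation
            walk.append(e)
            u = e[1]
            if u in seen:                                # closed a directed cycle
                first = next(i for i, (a, _) in enumerate(walk) if a == u)
                piece, is_path = walk[first:], False
                break
            seen.add(u)
        weight = min(f[e] for e in piece)
        if is_path:
            weight = min(weight, value)      # never drive the flow value negative
            value -= weight
        for e in piece:
            f[e] -= weight
            if f[e] == 0:
                del f[e]
        pieces.append((weight, piece))
    return pieces

# A hypothetical flow: one (s,t)-path carrying 3 units plus a 2-cycle at a.
f = {('s', 'a'): 3, ('a', 't'): 3, ('a', 'b'): 1, ('b', 'a'): 1}
for weight, edges in decompose_flow(f, 's', 't'):
    print(weight, edges)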
Flow decompositions provide a natural lower bound on the running time of any maximum-flow
algorithm that builds the flow one path or cycle at a time. Every flow can be decomposed into
at most E paths and cycles, each of which uses at most V edges, so the overall complexity of
the flow decomposition is O(V E). Moreover, it is easy to construct flows for which every flow
decomposition has complexity Θ(V E). Thus, any maximum-flow algorithm that (either explicitly
or implicitly) constructs a flow as a sum of paths or cycles—in particular, any implementation of
Ford and Fulkerson’s augmenting path algorithm—must take Ω(V E) time in the worst case.
It’s a fairly easy to show that the maximum-bottleneck (s, t)-path in a directed graph can be
computed in O(E log V ) time using a variant of Jarník’s minimum-spanning-tree algorithm, or
of Dijkstra’s shortest path algorithm. Simply grow a directed spanning tree T , rooted at s.
Repeatedly find the highest-capacity edge leaving T and add it to T , until T contains a path
from s to t. Alternately, one could emulate Kruskal’s algorithm—insert edges one at a time in
decreasing capacity order until there is a path from s to t—although this is less efficient, at least
when the graph is directed.
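A sketch of that tree-growing search, phrased as a Dijkstra-style widest-path computation with a binary heap, is shown below; as before, the edge-dictionary representation and the example network are illustrative assumptions:

import heapq
from collections import defaultdict

def max_bottleneck_path(c, s, t):
    """Maximum-bottleneck (widest) path from s to t: grow a tree from s,
    always extending it along the highest-capacity available edge.
    Returns (bottleneck, path_edges), or (0, None) if t is unreachable."""
    adj = defaultdict(list)
    for (u, v), cap in c.items():
        adj[u].append((v, cap))
    best = {s: float('inf')}           # widest bottleneck known from s to each vertex
    parent = {}
    heap = [(-best[s], s)]             # max-heap simulated with negated keys
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if width < best.get(u, 0):
            continue                   # stale heap entry
        if u == t:
            break
        for v, cap in adj[u]:
            b = min(width, cap)
            if b > best.get(v, 0):
                best[v] = b
                parent[v] = u
                heapq.heappush(heap, (-b, v))
    if t not in best:
        return 0, None
    path, v = [], t
    while v != s:
        path.append((parent[v], v))
        v = parent[v]
    return best[t], path[::-1]

c = {('s', 'a'): 10, ('a', 't'): 4, ('s', 'b'): 7, ('b', 't'): 6}
print(max_bottleneck_path(c, 's', 't'))   # (6, [('s', 'b'), ('b', 't')])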
We can now analyze the algorithm in terms of the value of the maximum flow f∗. Let f be
any flow in G, and let f′ be the maximum flow in the current residual graph G_f. (At the beginning
of the algorithm, G_f = G and f′ = f∗.) We have already proved that f′ can be decomposed into
at most E paths and cycles. A simple averaging argument implies that at least one of the paths in
this decomposition must carry at least |f′|/E units of flow. It follows immediately that the fattest
(s, t)-path in G_f carries at least |f′|/E units of flow.
Thus, augmenting f along the maximum-bottleneck path in G f multiplies the value of the
remaining maximum flow in G f by a factor of at most 1 − 1/E. In other words, the residual
maximum flow value decays exponentially with the number of iterations. After E · ln| f ∗ | iterations,
the maximum flow value in G f is at most
$$|f^*| \cdot (1 - 1/E)^{\,E\cdot\ln|f^*|} \;<\; |f^*| \cdot e^{-\ln|f^*|} \;=\; 1.$$
(That's Euler's constant e, not the edge e. Sorry.) In particular, if all the capacities are integers,
then after E · ln|f∗| iterations, the maximum flow value in the residual graph is zero, so f is a
maximum flow.
We conclude that for graphs with integer capacities, the Edmonds-Karp ‘fat pipe’ algorithm
runs in O(E² log E log|f∗|) time, which is actually a polynomial function of the input size.
The second Edmonds-Karp rule chooses the augmenting path with the smallest number of edges.
This rule was actually proposed by Ford and Fulkerson in their original max-flow paper; a variant
of it was independently considered by the Russian mathematician Yefim Dinits around the same
time as Edmonds and Karp.
The shortest augmenting path can be found in O(E) time by running breadth-first search in the
residual graph. Surprisingly, the resulting algorithm halts after a polynomial number of iterations,
independent of the actual edge capacities!
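For concreteness, here is a sketch of that breadth-first search. It takes a dictionary of positive residual capacities, such as the one produced by the earlier residual-graph sketch, and returns an augmenting path with the fewest edges, or None if the target is unreachable:

from collections import deque

def shortest_augmenting_path(cf, s, t):
    """Breadth-first search in the residual graph: return an s-to-t path with
    the fewest edges, or None.  cf maps residual edges (u, v) to positive
    residual capacities."""
    adj = {}
    for (u, v) in cf:
        adj.setdefault(u, []).append(v)
    parent = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path, v = [], t
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return None

cf = {('s', 'a'): 5, ('a', 'b'): 5, ('b', 't'): 5, ('s', 'b'): 1}
print(shortest_augmenting_path(cf, 's', 't'))   # [('s', 'b'), ('b', 't')]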
The proof of this polynomial upper bound relies on two observations about the evolution of the
residual graph. Let f_i be the current flow after i augmentation steps, and let G_i be the corresponding
residual graph. In particular, f_0 is zero everywhere and G_0 = G. For each vertex v, let level_i(v)
denote the unweighted shortest-path distance from s to v in G_i, or equivalently, the level of v in a
breadth-first search tree of G_i rooted at s.
Our first observation is that these levels can only increase over time.
Lemma 2. level_{i+1}(v) ≥ level_i(v) for all vertices v and integers i.
Proof: The claim is trivial for v = s, since level_i(s) = 0 for all i. Choose an arbitrary vertex
v ≠ s, and let s→ ··· →u→v be a shortest path from s to v in G_{i+1}. (If there is no such
path, then level_{i+1}(v) = ∞, and we're done.) Because this is a shortest path, we have
level_{i+1}(v) = level_{i+1}(u) + 1, and the inductive hypothesis implies that level_{i+1}(u) ≥ level_i(u).
We now have two cases to consider. If u→v is an edge in G_i, then level_i(v) ≤ level_i(u) + 1,
because the levels are defined by breadth-first traversal.
On the other hand, if u→v is not an edge in G_i, then v→u must be an edge in the ith
augmenting path. Thus, v→u must lie on the shortest path from s to t in G_i, which implies that
level_i(v) = level_i(u) − 1 ≤ level_i(u) + 1.
In both cases, we have level_{i+1}(v) = level_{i+1}(u) + 1 ≥ level_i(u) + 1 ≥ level_i(v).
Whenever we augment the flow, the bottleneck edge in the augmenting path disappears from
the residual graph, and some other edge in the reversal of the augmenting path may (re-)appear.
Our second observation is that an edge cannot appear or disappear too many times.
Lemma 3. During the execution of the Edmonds-Karp short-pipe algorithm, any edge u→v disappears
from the residual graph G_f at most V/2 times.
Proof: Suppose u→v is in two residual graphs G_i and G_{j+1}, but not in any of the intermediate
residual graphs G_{i+1}, . . . , G_j, for some i < j. Then u→v must be in the ith augmenting path, so
level_i(v) = level_i(u) + 1, and v→u must be on the jth augmenting path, so level_j(v) = level_j(u) − 1.
By the previous lemma, we have
$$\mathrm{level}_j(u) \;=\; \mathrm{level}_j(v) + 1 \;\ge\; \mathrm{level}_i(v) + 1 \;=\; \mathrm{level}_i(u) + 2.$$
In other words, the distance from s to u increased by at least 2 between the disappearance
and reappearance of u→v. Since every level is either less than V or infinite, the number of
disappearances is at most V/2.
Now we can derive an upper bound on the number of iterations. Since each edge can
disappear at most V /2 times, there are at most EV /2 edge disappearances overall. But at least
one edge disappears on each iteration, so the algorithm must halt after at most EV /2 iterations.
Finally, since each iteration requires O(E) time, this algorithm runs in O(V E 2 ) time overall.
The fastest maximum flow algorithm known, announced by James Orlin in 2012, runs in
O(V E) time, exactly matching the worst-case complexity of a flow decomposition. The details of
Orlin’s algorithm are far beyond the scope of this course; in addition to his own new techniques,
Orlin uses several older algorithms and data structures as black boxes, most of which are
themselves quite complicated. (In particular, Orlin's algorithm does not construct an explicit flow
decomposition; in fact, for graphs with only O(V) edges, an extension of his algorithm actually
runs in only O(V²/log V) time!) Nevertheless, for purposes of analyzing algorithms that use
maximum flows, this is the time bound you should cite. So write the following sentence on your
cheat sheets and cite it in your homeworks:
    Maximum flows can be computed in O(V E) time.
Exercises
1. Suppose you are given a directed graph G = (V, E), two vertices s and t, a capacity function
c : E → R+ , and a second function f : E → R. Describe an algorithm to determine whether
f is a maximum (s, t)-flow in G.
2. Let (S, T ) and (S 0 , T 0 ) be minimum (s, t)-cuts in some flow network G. Prove that (S ∩ S 0 ,
T ∪ T 0 ) and (S ∪ S 0 , T ∩ T 0 ) are also minimum (s, t)-cuts in G.
4. Fix any flow network G = (V, E). Our observation that any weighted sum of (s, t)-flows
is itself an (s, t)-flow implies that the set of all (s, t)-flows in any graph actually defines a
vector space over the reals.
(c) Let T be any spanning tree of G, and let F be the forest obtained by deleting any
single edge in T. Prove that the following collection of paths and cycles defines a basis
for this vector space:
• The unique path in F ∪ {e} from s to t, for every edge e ∉ F that has one endpoint
in each component of F;
• The unique cycle in F ∪ {e}, for every edge e ∉ F with both endpoints in the same
component of F.
5. Cuts are sometimes defined as subsets of the edges of the graph, instead of as partitions of
its vertices. In this problem, you will prove that these two definitions are almost equivalent.
We say that a subset X of (directed) edges separates s and t if every directed path
from s to t contains at least one (directed) edge in X . For any subset S of vertices, let δS
denote the set of directed edges leaving S; that is, δS := {u→v | u ∈ S, v ∉ S}.
6. Suppose instead of capacities, we consider networks where each edge u→v has a non-
negative demand d(u→v). Now an (s, t)-flow f is feasible if and only if f(u→v) ≥ d(u→v)
for every edge u→v. (Feasible flow values can now be arbitrarily large.) A natural problem
in this setting is to find a feasible (s, t)-flow of minimum value.
(a) Describe an efficient algorithm to compute a feasible (s, t)-flow, given the graph, the
demand function, and the vertices s and t as input. [Hint: Find a flow that is non-zero
everywhere, and then scale it up to make it feasible.]
(b) Suppose you have access to a subroutine MaxFlow that computes maximum flows in
networks with edge capacities. Describe an efficient algorithm to compute a minimum
flow in a given network with edge demands; your algorithm should call MaxFlow
exactly once.
(c) State and prove an analogue of the max-flow min-cut theorem for this setting. (Do
minimum flows correspond to maximum cuts?)
7. For any flow network G and any vertices u and v, let bottleneckG (u, v) denote the maximum,
over all paths π in G from u to v, of the minimum-capacity edge along π.
(a) Describe and analyze an algorithm to compute bottleneckG (s, t) in O(E log V ) time.
(b) Describe an algorithm to construct a spanning tree T of G such that bottleneck T (u, v) =
bottleneckG (u, v) for all vertices u and v. (Edges in T inherit their capacities from G.)
8. Suppose you have already computed a maximum flow f ∗ in a flow network G with integer
edge capacities.
(a) Describe and analyze an algorithm to update the maximum flow after the capacity of
a single edge is increased by 1.
(b) Describe and analyze an algorithm to update the maximum flow after the capacity of
a single edge is decreased by 1.
Both algorithms should be significantly faster than recomputing the maximum flow from
scratch.
9. An edge in a flow network G is called upper-binding if increasing its capacity also increases
the value of the maximum flow, and lower-binding if decreasing its capacity also decreases
the value of the maximum flow.
(a) Does every network G have at least one upper-binding edge? Prove your answer is
(b) Does every network G have at least one lower-binding edge? Prove your answer is
correct.
(c) Describe an algorithm to find all upper-binding edges in G, given both G and a
maximum flow in G as input, in O(E) time.
(d) Describe an algorithm to find all lower-binding edges in G, given both G and a
maximum flow in G as input, in O(EV ) time.
10. A new assistant professor, teaching maximum flows for the first time, suggests the following
greedy modification to the generic Ford-Fulkerson augmenting path algorithm. Instead
of maintaining a residual graph, just reduce the capacity of edges along the augmenting
path! In particular, whenever we saturate an edge, just remove it from the graph.
GreedyFlow(G, c, s, t):
for every edge e in G
f (e) ← 0
while there is a path from s to t
π ← an arbitrary path from s to t
F ← minimum capacity of any edge in π
for every edge e in π
f (e) ← f (e) + F
if c(e) = F
remove e from G
else
c(e) ← c(e) − F
return f
(a) Show that GreedyFlow does not always compute a maximum flow.
(b) Show that GreedyFlow is not even guaranteed to compute a good approximation
to the maximum flow. That is, for any constant α > 1, there is a flow network G
such that the value of the maximum flow is more than α times the value of the flow
computed by GreedyFlow. [Hint: Assume that GreedyFlow chooses the worst possible
path π at each iteration.]
(c) Prove that for any flow network, if the Greedy Path Fairy tells you precisely which
path π to use at each iteration, then GreedyFlow does compute a maximum flow.
(Sadly, the Greedy Path Fairy does not actually exist.)
11. A given flow network G may have more than one minimum (s, t)-cut. Let’s define the
best minimum (s, t)-cut to be any minimum cut (S, T ) with the smallest number of edges
crossing from S to T .
(a) Describe an efficient algorithm to determine whether a given flow network contains a
unique minimum (s, t)-cut.
(b) Describe an efficient algorithm to find the best minimum (s, t)-cut when the capacities
are integers.
(c) Describe an efficient algorithm to find the best minimum (s, t)-cut for arbitrary edge
capacities.
(d) Describe an efficient algorithm to determine whether a given flow network contains a
unique best minimum (s, t)-cut.
12. We can speed up the Edmonds-Karp “fat pipe” heuristic, at least for integer capacities, by
relaxing our requirements for the next augmenting path. Instead of finding the augmenting
path with maximum bottleneck capacity, we find a path whose bottleneck capacity is at
least half of maximum, using the following capacity scaling algorithm.
The algorithm maintains a bottleneck threshold ∆; initially, ∆ is the maximum capacity
among all edges in the graph. In each phase, the algorithm augments along paths from s to
t in which every edge has residual capacity at least ∆. When there is no such path, the
phase ends, we set ∆ ← ⌊∆/2⌋, and the next phase begins.
(a) How many phases will the algorithm execute in the worst case, if the edge capacities
are integers?
(b) Let f be the flow at the end of a phase for a particular value of ∆. Let S be the nodes
that are reachable from s in the residual graph G f using only edges with residual
capacity at least ∆, and let T = V \ S. Prove that the capacity (with respect to G’s
original edge capacities) of the cut (S, T ) is at most | f | + E · ∆.
(c) Prove that in each phase of the scaling algorithm, there are at most 2E augmentations.
(d) What is the overall running time of the scaling algorithm, assuming all the edge
capacities are integers?
13. An (s, t)-series-parallel graph is a directed acyclic graph with two designated vertices s
(the source) and t (the target or sink) and with one of the following structures:
14. In 1980 Maurice Queyranne published the following example of a flow network where
Edmonds and Karp's "fat pipe" heuristic does not halt. Here, as in Zwick's bad example
for the original Ford-Fulkerson algorithm, φ denotes the inverse golden ratio (√5 − 1)/2.
The three vertical edges play essentially the same role as the horizontal edges in Zwick’s
example.
[Figure: Queyranne's flow network, shown three times with the three augmenting paths used in part (a).]
(a) Show that the following infinite sequence of path augmentations is a valid execution
of the Edmonds-Karp “fat pipe” algorithm. (See the figure above.)
QueyranneFatPipes:
for i ← 1 to ∞
push φ^{3i−2} units of flow along s→a→f→g→b→h→c→d→t
push φ^{3i−1} units of flow along s→f→a→b→g→h→c→t
push φ^{3i} units of flow along s→e→f→a→g→b→c→h→t
forever
(b) Describe a sequence of O(1) path augmentations that yields a maximum flow in
Queyranne’s network.