
6.006 Introduction to Algorithms, Fall 2011

Lecture 19: Dynamic Programming I of IV:
Memoization, Fibonacci, Shortest Paths, Guessing
Lecture Overview
• Memoization and subproblems

• Examples

– Fibonacci
– Shortest Paths

• Guessing & DAG View

Dynamic Programming (DP)


Big idea, hard, yet simple

• Powerful algorithmic design technique

• A large class of seemingly exponential problems has a polynomial solution (“only”) via DP

• Particularly for optimization problems (min / max) (e.g., shortest paths)

* DP ≈ “controlled brute force”


* DP ≈ recursion + re-use

History
Richard E. Bellman (1920-1984)
Richard Bellman received the IEEE Medal of Honor, 1979. “Bellman . . . explained that he invented the name ‘dynamic programming’ to hide the fact that he was doing mathematical research at RAND under a Secretary of Defense who ‘had a pathological fear and hatred of the term, research’. He settled on the term ‘dynamic programming’ because it would be difficult to give a ‘pejorative meaning’ and because ‘it was something not even a Congressman could object to’ ” [John Rust 2006]

Fibonacci Numbers
F1 = F2 = 1; Fn = Fn−1 + Fn−2
Goal: compute Fn

Naive Algorithm
follow recursive definition

fib(n):
    if n ≤ 2: f = 1
    else: f = fib(n − 1) + fib(n − 2)
    return f

=⇒ T(n) = T(n − 1) + T(n − 2) + O(1) ≥ Fn ≈ ϕ^n
   also T(n) ≥ 2T(n − 2) + O(1) =⇒ T(n) ≥ 2^(n/2)
EXPONENTIAL — BAD!

[Figure 1: Naive Fibonacci Algorithm. The recursion tree: Fn calls Fn−1 and Fn−2, which in turn call Fn−2, Fn−3 and Fn−3, Fn−4, so the same subproblems are recomputed many times.]
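For concreteness, a minimal runnable Python sketch of the naive recursion (the name fib_naive is ours, not from the notes):

    # Naive recursive Fibonacci, following the recursive definition directly.
    # Exponential time: the same subproblems are recomputed over and over.
    def fib_naive(n):
        if n <= 2:          # base cases F1 = F2 = 1
            return 1
        return fib_naive(n - 1) + fib_naive(n - 2)

    print(fib_naive(30))    # 832040; already noticeably slow by n around 35-40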

Memoized DP Algorithm
Remember, remember

memo = { }
fib(n):
    if n in memo: return memo[n]
    else:
        if n ≤ 2: f = 1
        else: f = fib(n − 1) + fib(n − 2)
        memo[n] = f
        return f
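The memoized pseudocode is nearly Python already; here is a directly runnable sketch (names follow the notes):

    memo = {}

    def fib(n):
        if n in memo:                   # memoized call: Θ(1), no recursion
            return memo[n]
        if n <= 2:                      # base cases F1 = F2 = 1
            f = 1
        else:                           # nonmemoized call: happens once per k
            f = fib(n - 1) + fib(n - 2)
        memo[n] = f                     # remember & re-use
        return f

    print(fib(50))    # 12586269025, computed in Θ(n) nonmemoized calls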

• =⇒ fib(k) only recurses first time called, ∀k

• =⇒ only n nonmemoized calls: k = n, n − 1, . . . , 1

• memoized calls free (Θ(1) time)

• =⇒ Θ(1) time per call (ignoring recursion)

POLYNOMIAL — GOOD!

* DP ≈ recursion + memoization

• memoize (remember) & re-use solutions to subproblems that help solve problem

– in Fibonacci, subproblems are F1 , F2 , . . . , Fn

* =⇒ time = # of subproblems · time/subproblem

• Fibonacci: # of subproblems is n and time/subproblem is Θ(1), so time = n · Θ(1) = Θ(n) (ignore recursion!)

Bottom-up DP Algorithm

fib = {}
for k in [1, 2, . . . , n]:
    if k ≤ 2: f = 1
    else: f = fib[k − 1] + fib[k − 2]
    fib[k] = f
return fib[n]

(Θ(1) work per iteration, n iterations =⇒ Θ(n) total)
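The same loop as runnable Python (a sketch; fib_bottom_up is our name):

    def fib_bottom_up(n):
        fib = {}                         # table of solved subproblems
        for k in range(1, n + 1):        # subproblems in dependency order
            if k <= 2:
                f = 1
            else:
                f = fib[k - 1] + fib[k - 2]
            fib[k] = f
        return fib[n]

    print(fib_bottom_up(50))    # 12586269025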

• exactly the same computation as memoized DP (recursion “unrolled”)

• in general: topological sort of subproblem dependency DAG

(subproblem dependency order: . . . → Fn−2 → Fn−1 → Fn)

• practically faster: no recursion

• analysis more obvious

• can save space: just remember last 2 fibs =⇒ Θ(1)
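Following the space-saving bullet above, a sketch that keeps only the last two values, so Θ(1) space (fib_constant_space is our name):

    def fib_constant_space(n):
        if n <= 2:
            return 1
        prev, curr = 1, 1                    # F1, F2
        for _ in range(3, n + 1):
            prev, curr = curr, prev + curr   # slide the window of the last two fibs
        return curr

    print(fib_constant_space(50))    # 12586269025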

[Sidenote: There is also an O(lg n)-time algorithm for Fibonacci, via different techniques]

Shortest Paths
• Recursive formulation:
δ(s, v) = min{w(u, v) + δ(s, u) | (u, v) ∈ E}

• Memoized DP algorithm: takes infinite time if there are cycles!
  (in some sense this is necessary, in order to handle negative cycles)

[Figure 2: Shortest Paths (a graph with source s and target t).]

• works for directed acyclic graphs in O(V + E): effectively DFS/topological sort + one Bellman-Ford round, rolled into a single recursion

* Subproblem dependency should be acyclic


• more subproblems remove cyclic dependence:

δk (s, v) = shortest s → v path using ≤ k edges

• recurrence:

δk(s, v) = min{δk−1(s, u) + w(u, v) | (u, v) ∈ E}


δ0(s, v) = ∞ for s ≠ v (base case)
δk(s, s) = 0 for any k (base case, if no negative cycles)

• Goal: δ(s, v) = δ|V|−1(s, v) (if no negative cycles)

• memoize

• time: # subproblems · time/subproblem


  = |V| · |V| · O(V) = O(V³)
  (# subproblems = |V| · |V|, time/subproblem = O(V) naively)

• actually Θ(indegree(v)) for δk(s, v)

• =⇒ time = Θ(V · Σv∈V indegree(v)) = Θ(V E)


BELLMAN-FORD!
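As a runnable illustration (a sketch with our own names; w maps each edge (u, v) to its weight, and we assume no negative cycles reachable from s), the bottom-up DP over k is exactly a round-by-round Bellman-Ford:

    import math

    def dp_shortest_paths(vertices, w, s):
        d = {v: math.inf for v in vertices}      # δ0: ∞ everywhere except the source
        d[s] = 0                                 # δk(s, s) = 0
        for _ in range(len(vertices) - 1):       # k = 1, 2, . . . , |V| − 1
            d_next = dict(d)
            for (u, v), weight in w.items():     # guess the last edge (u, v)
                if d[u] + weight < d_next[v]:
                    d_next[v] = d[u] + weight    # δk(s, v) = min δk−1(s, u) + w(u, v)
            d = d_next
        return d                                 # d[v] = δ|V|−1(s, v) = δ(s, v)

    # tiny example (made-up graph)
    V = ["s", "a", "b", "t"]
    w = {("s", "a"): 1, ("a", "b"): 2, ("b", "t"): 3, ("s", "t"): 10}
    print(dp_shortest_paths(V, w, "s"))   # {'s': 0, 'a': 1, 'b': 3, 't': 6}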

Guessing
How to design recurrence

• want shortest s → v path

s → . . . → u → v

• what is the last edge in path? dunno

• guess it is (u, v)

• path is shortest s → u path + edge (u, v) (by optimal substructure)

• cost is δk−1(s, u) + w(u, v) (where δk−1(s, u) is another subproblem)
• to find best guess, try all (|V | choices) and use best

• * key: small (polynomial) # of possible guesses per subproblem; typically this dominates time/subproblem

* DP ≈ recursion + memoization + guessing

DAG view

[Figure: the graph replicated into layers 1, 2, . . . along a time axis.]

• like replicating graph to represent time

• converting shortest paths in graph → shortest paths in DAG

* DP ≈ shortest paths in some DAG
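To make the DAG view concrete, here is a sketch (our own names; edges maps u to a list of (v, weight) pairs) of shortest paths on a DAG itself: relax each vertex's outgoing edges once, in topological order, for O(V + E) total:

    import math
    from collections import defaultdict

    def dag_shortest_paths(order, edges, s):
        # `order` is a topological order of the DAG's vertices
        d = {v: math.inf for v in order}
        d[s] = 0
        for u in order:                      # subproblems in dependency order
            if d[u] == math.inf:             # u unreachable from s
                continue
            for v, weight in edges[u]:
                if d[u] + weight < d[v]:     # relax edge (u, v)
                    d[v] = d[u] + weight
        return d

    # tiny example (made-up DAG, already topologically sorted)
    order = ["s", "a", "b", "t"]
    edges = defaultdict(list, {"s": [("a", 1), ("t", 10)], "a": [("b", 2)], "b": [("t", 3)]})
    print(dag_shortest_paths(order, edges, "s"))   # {'s': 0, 'a': 1, 'b': 3, 't': 6}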

MIT OpenCourseWare
http://ocw.mit.edu

6.006 Introduction to Algorithms


Fall 2011

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
