COMPSCI 311: Introduction to Algorithms
Lecture 14: Dynamic Programming
Ghazaleh Parvini
University of Massachusetts Amherst
Algorithm Design Techniques
► Greedy
► Divide and Conquer
► Dynamic Programming
► Network Flows
Learning Goals
For each technique (Greedy, Divide and Conquer, Dynamic Programming):
► Formulate problem
► Design algorithm
► Prove correctness
► Analyze running time
► Specific algorithms: Dijkstra, MST, Bellman-Ford
Weighted Interval Scheduling
► TV scheduling problem: n shows, can only watch one at a time. New twist: show j has value vj. Want a set of shows S with no overlap and highest total value.
► Example on board
► Greedy? No longer optimal.
Problem Formulation
► Show (job) j has value vj, start time sj, finish time fj
► Assume shows sorted by finishing time f1 ≤ f2 ≤ . . . ≤ fn
► Shows i and j are compatible if they don’t overlap
► Goal: subset of non-overlapping jobs with maximum value
Dynamic Programming Recipe
► Step 1: Devise simple recursive algorithm for value of optimal solution
► Flavor: make “first choice”, then recursively solve remaining part of the problem. (Problem: solves redundant subproblems → exponential time)
► Step 2: Write recurrence for optimal value
► Step 3: Design bottom-up iterative algorithm
► Epilogue: Recover optimal solution
Step 1: Recursive Algorithm
► Observation: Let O be the optimal solution. Either n ∈ O or n ∉ O. In either case, we can reduce the problem to a smaller instance of the same problem.
► Recursive algorithm to compute value of optimal subset of first j shows
Compute-Value(j)
Base case: if j = 0 return 0
Case 1: j ∈ O
Let i < j be highest-numbered show compatible with j
val1 = vj + Compute-Value(i)
Case 2: j ∉ O
val2 = Compute-Value(j − 1)
return max(val1, val2)
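The recursion above can be sketched in Python. The show data below is made up for illustration, and shows are assumed sorted by finish time:

```python
# Hypothetical sample data: (start, finish, value), sorted by finish time
shows = [(1, 3, 2), (2, 5, 4), (4, 6, 4), (5, 8, 7)]

def latest_compatible(j):
    """Highest-numbered show i < j that finishes before show j starts (0 if none)."""
    start_j = shows[j - 1][0]
    for i in range(j - 1, 0, -1):
        if shows[i - 1][1] <= start_j:
            return i
    return 0

def compute_value(j):
    """Value of the optimal subset of the first j shows (naive recursion)."""
    if j == 0:                                                    # base case
        return 0
    val1 = shows[j - 1][2] + compute_value(latest_compatible(j))  # Case 1: j in O
    val2 = compute_value(j - 1)                                   # Case 2: j not in O
    return max(val1, val2)
```

On this sample data, compute_value(4) returns 11 (take shows 2 and 4), but the recursion revisits the same subproblems many times.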
Clicker
Compute-Value(j)
if j = 0 return 0
Let i < j be highest-numbered show compatible with j
val1 = vj + Compute-Value(i)
val2 = Compute-Value(j − 1)
return max(val1, val2)
The running time of this recursive solution is
A. O(n log n)
Running Time?
► Recursion tree
► ≈ 2^n subproblems ⇒ exponential time
► Only n unique subproblems. Save work by ordering computation to solve each problem once.
Step 2: Recurrence
A recurrence expresses the optimal value for a problem of size j in terms of the optimal value of subproblems of size i < j.
OPT(0) = 0
OPT(j) = max{ vj + OPT(pj) [Case 1], OPT(j − 1) [Case 2] }
► OPT(j): value of optimal solution on first j shows
► pj: highest-numbered show i < j that is compatible with j
Recursive Algorithm vs. Recurrence
► Compute-Value(j)
If j = 0 return 0
val1 = vj + Compute-Value(pj)
val2 = Compute-Value(j − 1)
return max(val1, val2)
► Recurrence
OPT(j) = max{vj + OPT(pj), OPT(j − 1)}
OPT(0) = 0
► Direct correspondence between the algorithm and recurrence
► Tip: start by writing the recursive algorithm and translating it to a recurrence (replace the method name by “OPT”). After some practice, skip straight to the recurrence.
Step 3: Iterative “Bottom-Up” Algorithm
Idea: compute the optimal value of every unique subproblem in order from smallest (base case) to largest (original problem). Use the recurrence for each subproblem.
WeightedIS
Initialize array M[0..n] to hold optimal values
M[0] = 0 ▷ Value of empty set
for j = 1 to n do
M[j] = max(vj + M[pj], M[j − 1])
► Example
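A minimal Python version of WeightedIS. The predecessor indices p[j] are precomputed here by a simple scan (in lecture they are assumed given):

```python
def weighted_is(shows):
    """Bottom-up weighted interval scheduling.
    shows: list of (start, finish, value) sorted by finish time.
    Returns the table M, where M[j] is the optimal value on the first j shows."""
    n = len(shows)
    # p[j]: highest-numbered show i < j compatible with show j (0 if none)
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):
            if shows[i - 1][1] <= shows[j - 1][0]:
                p[j] = i
                break
    M = [0] * (n + 1)                      # M[0] = 0: value of empty set
    for j in range(1, n + 1):
        M[j] = max(shows[j - 1][2] + M[p[j]], M[j - 1])
    return M
```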
Step 3: Observations
WeightedIS
Initialize array M[0..n] to hold optimal values
M[0] = 0 ▷ Value of empty set
for j = 1 to n do
M[j] = max(vj + M[pj], M[j − 1])
► Iterative algorithm is a direct “wrapping” of the recurrence in an appropriate for loop.
► Pay attention to dependence on previously-computed entries of M to know in what order to iterate through the array.
► Running time? O(n)
Memoization
Intermediate approach: keep the recursive function structure, but store the value in an array on first computation, and reuse it.
Initialize array M[0..n] to empty, M[0] = 0
function Mfun(j)
if M[j] = empty then
M[j] = max(vj + Mfun(pj), Mfun(j − 1))
return M[j]
► Can help if we have recursive structure but are unsure of the iteration order, or as an intermediate step in converting to iteration
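In Python, the memoized variant is a small change to the recursion; a sketch using functools.lru_cache in place of the array M, on the same made-up show data as before:

```python
import functools

# Hypothetical sample data: (start, finish, value), sorted by finish time
shows = [(1, 3, 2), (2, 5, 4), (4, 6, 4), (5, 8, 7)]

def p(j):
    """Highest-numbered show i < j compatible with show j (0 if none)."""
    for i in range(j - 1, 0, -1):
        if shows[i - 1][1] <= shows[j - 1][0]:
            return i
    return 0

@functools.lru_cache(maxsize=None)      # the cache plays the role of array M
def mfun(j):
    if j == 0:
        return 0
    return max(shows[j - 1][2] + mfun(p(j)), mfun(j - 1))
```

Each distinct j is now computed once, so the running time matches the iterative version.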
Clicker
The asymptotic running time of the memoized algorithm is
A. the same as the initial recursive solution.
B. between the initial recursive solution and the iterative
version.
C. the same as the iterative version.
Epilogue: Recovering the Solution (1)
Idea: modify the algorithm to save the best choice for each subproblem.
WeightedIS
Initialize array M[0..n] to hold optimal values
Initialize array choose[1..n] to hold choices
M[0] = 0
for j = 1 to n do
M[j] = max(vj + M[pj], M[j − 1])
Set choose[j] = 1 if the first value is bigger, and 0 otherwise
Epilogue: Recovering the Solution (2)
Then trace back from the end and “execute” the choices.
Use the algorithm above to fill in the M and choose arrays
O = {}
j = n
while j > 0 do
if choose[j] == 1 then
O = O ∪ {j}
j = pj
else
j = j − 1
► Tip: first write the algorithm to compute the optimal value, then modify it to compute the actual solution
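Putting the fill and the traceback together in Python (same made-up (start, finish, value) format as above; choose[j] = 1 records that show j was taken):

```python
def weighted_is_solution(shows):
    """Fill M and choose, then trace back to recover the optimal set O.
    shows: list of (start, finish, value) sorted by finish time."""
    n = len(shows)
    p = [0] * (n + 1)                       # predecessor indices
    for j in range(1, n + 1):
        for i in range(j - 1, 0, -1):
            if shows[i - 1][1] <= shows[j - 1][0]:
                p[j] = i
                break
    M = [0] * (n + 1)
    choose = [0] * (n + 1)
    for j in range(1, n + 1):
        take = shows[j - 1][2] + M[p[j]]    # Case 1: j in O
        skip = M[j - 1]                     # Case 2: j not in O
        M[j] = max(take, skip)
        choose[j] = 1 if take > skip else 0
    O, j = set(), n                         # trace back and "execute" choices
    while j > 0:
        if choose[j] == 1:
            O.add(j)
            j = p[j]
        else:
            j -= 1
    return M[n], O
```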
Review
► Recursive algorithm → recurrence → iterative algorithm
► Three ways of expressing the value of optimal solutions of subproblems
► Compute-Value(j). Recursive algorithm: arguments identify subproblems.
► OPT(j). Used in the recurrence; matches the recursive algorithm.
► M[j]. Array to hold optimal values, filled in during the iterative algorithm.
Key Step: Identify Subproblems
► Finding a solution means: make a “first choice”, then recursively solve a smaller instance of the same problem.
► First example: Weighted Interval Scheduling
► Binary first choice: j ∈ O or j ∉ O?
► Next example: rod cutting
► First choice has n options
Rod Cutting
► Input: steel rod of length n, can be cut into integer lengths, get price p(i) for a piece of length i
► Goal: subdivide to maximize total value
► Example / problem formulation on board
First decision?
Choose length i of the first piece, then recurse on the smaller rod:
max( p(1) + Rod_Cut(n − 1),
p(2) + Rod_Cut(n − 2),
. . . ,
p(n − 1) + Rod_Cut(1),
p(n) )
Step 1: Recursive Algorithm
CutRod(j)
if j = 0 then return 0
v = 0
for i = 1 to j do
v = max(v, p[i] + CutRod(j − i))
return v
► Running time for CutRod(n)? Θ(2^n)
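The same recursion in Python; the price list is illustrative (p[i] is the price of a piece of length i, with a dummy p[0] = 0):

```python
# Illustrative prices: p[i] = price of a piece of length i (p[0] unused)
p = [0, 1, 5, 8, 9, 10]

def cut_rod(j):
    """Best value obtainable from a rod of length j (naive recursion)."""
    if j == 0:
        return 0
    v = 0
    for i in range(1, j + 1):       # try every length i for the first piece
        v = max(v, p[i] + cut_rod(j - i))
    return v
```

With these prices, cut_rod(4) returns 10 (two pieces of length 2).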
Step 2: Recurrence
OPT(0) = 0
OPT(j) = max_{1 ≤ i ≤ j} { pi + OPT(j − i) }
From Recurrence to Algorithm
OPT(0) = 0
OPT(j) = max_{1 ≤ i ≤ j} { pi + OPT(j − i) }
What size memoization array M? What order to fill? The recurrence provides all of the information needed to design an iterative algorithm.
► CutRod(·), OPT(·), and M[·] have the same argument: the index j of unique subproblems
► Range of values of j determines the size of M: M[0..n]
► Fill M so RHS values are computed before LHS: fill from 0 to n
Step 3: Iterative Algorithm
CutRod-Iterative
Initialize array M[0..n]
Set M[0] = 0
for j = 1 to n do
v = 0
for i = 1 to j do
v = max(v, p[i] + M[j − i])
Set M[j] = v
► Note: the body of the for loop is identical to the recursive algorithm and directly implements the recurrence
► Running time? Θ(n^2)
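CutRod-Iterative in Python, with the prices passed in as a list where p[i] is the price of length i and p[0] = 0:

```python
def cut_rod_iterative(p, n):
    """Bottom-up CutRod. p[i] = price of a piece of length i (p[0] = 0).
    Returns the best value for a rod of length n."""
    M = [0] * (n + 1)                   # M[0] = 0
    for j in range(1, n + 1):
        v = 0
        for i in range(1, j + 1):       # inner loop = body of the recursive version
            v = max(v, p[i] + M[j - i])
        M[j] = v
    return M[n]
```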
Epilogue: Recover Optimal Solution
Idea: Modify the algorithm to record the choices that lead to the optimal value for each subproblem, then trace back from the end and “execute” the choices, starting with the largest problem.
Step 1: Run the previous algorithm to fill in the M array, but with the following modification: let first-cut[j] be the index i that leads to the largest value when computing M[j].
Step 2: Trace back from the end and execute the choices.
cuts = {}
j = n ▷ Remaining length
while j > 0 do
cuts = cuts ∪ {first-cut[j]}
j = j − first-cut[j]
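The full recovery in Python; cuts is kept as a list rather than a set, since piece lengths can repeat, and the prices are illustrative:

```python
def cut_rod_solution(p, n):
    """Fill M and first_cut, then trace back the optimal piece lengths.
    p[i] = price of a piece of length i (p[0] = 0)."""
    M = [0] * (n + 1)
    first_cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if p[i] + M[j - i] > M[j]:
                M[j] = p[i] + M[j - i]
                first_cut[j] = i        # remember the best first piece for length j
    cuts, j = [], n
    while j > 0:
        cuts.append(first_cut[j])       # record the cut before shrinking j
        j -= first_cut[j]
    return M[n], cuts
```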
More Related Content

PPTX
Algorithm Design Techiques, divide and conquer
PDF
PPT
Dynamic_methods_Greedy_algorithms_11.ppt
PPTX
Dynamic Programming in design and analysis .pptx
PPT
0/1 knapsack
PDF
Dynamic programming
DOCX
Problem descriptionThe Jim Thornton Coffee House chain is .docx
PPT
Dynamic Programming for 4th sem cse students
Algorithm Design Techiques, divide and conquer
Dynamic_methods_Greedy_algorithms_11.ppt
Dynamic Programming in design and analysis .pptx
0/1 knapsack
Dynamic programming
Problem descriptionThe Jim Thornton Coffee House chain is .docx
Dynamic Programming for 4th sem cse students

Similar to 14-dynamic-programming-work-methods.pptx (20)

PPT
PPTX
8_dynamic_algorithm powerpoint ptesentation.pptx
PPT
Learn about dynamic programming and how to design algorith
PPT
Dynamic1
PPTX
dynamic programming complete by Mumtaz Ali (03154103173)
DOC
Data structure notes
PPTX
Dynamic programming - fundamentals review
PPT
Knapsack problem and Memory Function
PDF
phan-tich-va-thiet-ke-thuat-toan_pham-quang-dung-and-do-phan-thuan_chapter01-...
PPT
Lecture 8 dynamic programming
PDF
Unit 2_final DESIGN AND ANALYSIS OF ALGORITHMS.pdf
PPTX
Design and Analysis of Algorithm-Lecture.pptx
PPT
Learn about dynamic programming and how to design algorith
PDF
DynamicProgramming.pdf
PPT
DynamicProgramming.ppt
PPTX
AAC ch 3 Advance strategies (Dynamic Programming).pptx
PPTX
Daa:Dynamic Programing
PPTX
Chapter 5.pptx
8_dynamic_algorithm powerpoint ptesentation.pptx
Learn about dynamic programming and how to design algorith
Dynamic1
dynamic programming complete by Mumtaz Ali (03154103173)
Data structure notes
Dynamic programming - fundamentals review
Knapsack problem and Memory Function
phan-tich-va-thiet-ke-thuat-toan_pham-quang-dung-and-do-phan-thuan_chapter01-...
Lecture 8 dynamic programming
Unit 2_final DESIGN AND ANALYSIS OF ALGORITHMS.pdf
Design and Analysis of Algorithm-Lecture.pptx
Learn about dynamic programming and how to design algorith
DynamicProgramming.pdf
DynamicProgramming.ppt
AAC ch 3 Advance strategies (Dynamic Programming).pptx
Daa:Dynamic Programing
Chapter 5.pptx
Ad

Recently uploaded (20)

DOCX
573137875-Attendance-Management-System-original
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
Welding lecture in detail for understanding
PPT
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
PDF
PPT on Performance Review to get promotions
PPTX
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
PPTX
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
PPTX
UNIT 4 Total Quality Management .pptx
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PPTX
additive manufacturing of ss316l using mig welding
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
bas. eng. economics group 4 presentation 1.pptx
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PPTX
Geodesy 1.pptx...............................................
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PPTX
CH1 Production IntroductoryConcepts.pptx
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
573137875-Attendance-Management-System-original
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
CYBER-CRIMES AND SECURITY A guide to understanding
Welding lecture in detail for understanding
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
PPT on Performance Review to get promotions
Engineering Ethics, Safety and Environment [Autosaved] (1).pptx
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
UNIT 4 Total Quality Management .pptx
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
additive manufacturing of ss316l using mig welding
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
bas. eng. economics group 4 presentation 1.pptx
Automation-in-Manufacturing-Chapter-Introduction.pdf
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
Geodesy 1.pptx...............................................
R24 SURVEYING LAB MANUAL for civil enggi
CH1 Production IntroductoryConcepts.pptx
Model Code of Practice - Construction Work - 21102022 .pdf
Embodied AI: Ushering in the Next Era of Intelligent Systems
Ad

14-dynamic-programming-work-methods.pptx

  • 1. COMPSCI 311: Introduction to Algorithms Lecture 14: Dynamic Programming Ghazaleh Parvini University of Massachusetts Amherst
  • 2. Algorithm Design Techniques ► Greedy ► Divide and Conquer ► Dynamic Programming ► Network Flows
  • 3. Learning Goals Greed y Divide and Conquer Dynamic Programmi ng Formulate problem Design algorithm Prove correctness Analyze running time Specific algorithms Dijkstra , MST Bellman- Ford
  • 4. Weighted Interval Scheduling ► TV scheduling problem: n shows, can only watch one at a time. New twist: show j has value vj . Want a set of shows S with no overlap and highest total value. ► Example on board ► Greedy?
  • 5. Weighted Interval Scheduling ► TV scheduling problem: n shows, can only watch one at a time. New twist: show j has value vj . Want a set of shows S with no overlap and highest total value. ► Example on board ► Greedy? No longer optimal.
  • 6. Problem Formulation ► Show (job) j has value vj , start time sj , finish time fj ► Assume shows sorted by finishing time f1 ≤ f2 ≤ . . . ≤ fn ► Shows i and j are compatible if they don’t overlap ► Goal: subset of non-overlapping jobs with maximum value
  • 7. Dynamic Programming Recipe ► Step 1: Devise simple recursive algorithm for value of optimal solution ► Flavor: make “first choice”, then recursively solve remaining part of the problem. (Problem: solve redundant subproblems → exponential time)
  • 8. Dynamic Programming Recipe ► Step 1: Devise simple recursive algorithm for value of optimal solution ► Flavor: make “first choice”, then recursively solve remaining part of the problem. (Problem: solve redundant subproblems → exponential time) ► Step 2: Write recurrence for optimal value
  • 9. Dynamic Programming Recipe ► Step 1: Devise simple recursive algorithm for value of optimal solution ► Flavor: make “first choice”, then recursively solve remaining part of the problem. (Problem: solve redundant subproblems → exponential time) ► Step 2: Write recurrence for optimal value ► Step 3: Design bottom-up iterative algorithm
  • 10. Dynamic Programming Recipe ► Step 1: Devise simple recursive algorithm for value of optimal solution ► Flavor: make “first choice”, then recursively solve remaining part of the problem. (Problem: solve redundant subproblems → exponential time) ► Step 2: Write recurrence for optimal value ► Step 3: Design bottom-up iterative algorithm ► Epilogue: Recover optimal solution
  • 11. Step 1: Recursive Algorithm ► Observation: Let O be the optimal solution. Either n ∈ O or n ∈/ O. In either case, we can reduce the problem to a smaller instance of the same problem.
  • 12. Step 1: Recursive Algorithm ► Observation: Let O be the optimal solution. Either n ∈ O or n ∈/ O. In either case, we can reduce the problem to a smaller instance of the same problem. ► Recursive algorithm to compute value of optimal subset of first j shows Compute-Value(j)
  • 13. Step 1: Recursive Algorithm ► Observation: Let O be the optimal solution. Either n ∈ O or n ∈/ O. In either case, we can reduce the problem to a smaller instance of the same problem. ► Recursive algorithm to compute value of optimal subset of first j shows Compute-Value(j) Base case: if j = 0 return 0
  • 14. Step 1: Recursive Algorithm ► Observation: Let O be the optimal solution. Either n ∈ O or n ∈/ O. In either case, we can reduce the problem to a smaller instance of the same problem. ► Recursive algorithm to compute value of optimal subset of first j shows Compute-Value(j) Base case: if j = 0 return 0 Case 1: j ∈ O Let i < j be highest-numbered show compatible with j val1 = vj + Compute-Value(i)
  • 15. Step 1: Recursive Algorithm ► Observation: Let O be the optimal solution. Either n ∈ O or n ∈/ O. In either case, we can reduce the problem to a smaller instance of the same problem. ► Recursive algorithm to compute value of optimal subset of first j shows Compute-Value(j) Base case: if j = 0 return 0 Case 1: j ∈ O Let i < j be highest-numbered show compatible with j val1 = vj + Compute-Value(i) Case 2: j ∈/ O val2 = Compute-Value(j − 1)
  • 16. Step 1: Recursive Algorithm
► Observation: Let O be the optimal solution. Either n ∈ O or n ∉ O. In either case, we can reduce the problem to a smaller instance of the same problem.
► Recursive algorithm to compute value of optimal subset of first j shows:
Compute-Value(j)
  Base case: if j = 0 return 0
  Case 1: j ∈ O
    Let i < j be highest-numbered show compatible with j
    val1 = vj + Compute-Value(i)
  Case 2: j ∉ O
    val2 = Compute-Value(j − 1)
  return max(val1, val2)
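As a sketch, the recursive algorithm translates directly to Python. The instance below (values `v`, compatibility array `p`) is made up for illustration; `p[j]` is the highest-numbered show compatible with show j (0 if none).

```python
def compute_value(j, v, p):
    """Value of an optimal subset of the first j shows (1-indexed)."""
    if j == 0:                                   # base case: no shows left
        return 0
    val1 = v[j] + compute_value(p[j], v, p)      # case 1: include show j
    val2 = compute_value(j - 1, v, p)            # case 2: exclude show j
    return max(val1, val2)

# Hypothetical instance: 3 shows with values 2, 4, 4;
# show 3 overlaps show 2 but is compatible with show 1.
v = [0, 2, 4, 4]   # v[0] unused
p = [0, 0, 0, 1]   # p[j] = highest-numbered show compatible with j
```

Here `compute_value(3, v, p)` returns 6 (take shows 1 and 3).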
  • 17. Clicker
Compute-Value(j)
  if j = 0 return 0
  Let i < j be highest-numbered show compatible with j
  val1 = vj + Compute-Value(i)
  val2 = Compute-Value(j − 1)
  return max(val1, val2)
The running time of this recursive solution is
A. O(n log n)
  • 20. Running Time?
► Recursion tree
► ≈ 2^n subproblems ⇒ exponential time
► Only n unique subproblems. Save work by ordering computation to solve each problem once.
  • 21. Step 2: Recurrence
A recurrence expresses the optimal value for a problem of size j in terms of the optimal value of subproblems of size i < j.
OPT(0) = 0
OPT(j) = max{ vj + OPT(pj) [Case 1], OPT(j − 1) [Case 2] }
► OPT(j): value of optimal solution on first j shows
► pj: highest-numbered show i < j that is compatible with j
  • 25. Recursive Algorithm vs. Recurrence
► Compute-Value(j)
  If j = 0 return 0
  val1 = vj + Compute-Value(pj)
  val2 = Compute-Value(j − 1)
  return max(val1, val2)
► Recurrence
  OPT(j) = max{vj + OPT(pj), OPT(j − 1)}
  OPT(0) = 0
► Direct correspondence between the algorithm and recurrence
► Tip: start by writing the recursive algorithm and translating it to a recurrence (replace the method name with “OPT”). After some practice, skip straight to the recurrence.
  • 31. Step 3: Iterative “Bottom-Up” Algorithm
Idea: compute the optimal value of every unique subproblem in order from smallest (base case) to largest (original problem). Use the recurrence for each subproblem.
WeightedIS
  Initialize array M[0..n] to hold optimal values
  M[0] = 0 ▷ Value of empty set
  for j = 1 to n do
    M[j] = max(vj + M[pj], M[j − 1])
► Example
  • 33. Step 3: Observations
WeightedIS
  Initialize array M[0..n] to hold optimal values
  M[0] = 0 ▷ Value of empty set
  for j = 1 to n do
    M[j] = max(vj + M[pj], M[j − 1])
► Iterative algorithm is a direct “wrapping” of the recurrence in an appropriate for loop.
► Pay attention to dependence on previously-computed entries of M to know in what order to iterate through the array.
► Running time? O(n)
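The bottom-up fill above can be sketched in Python (the instance is illustrative, not from the slides):

```python
def weighted_is(v, p):
    """Bottom-up fill of M; v and p are 1-indexed with a dummy slot 0."""
    n = len(v) - 1
    M = [0] * (n + 1)                    # M[0] = 0: value of the empty set
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]], M[j - 1])   # the recurrence, verbatim
    return M

# Hypothetical instance: values 2, 4, 4; show 3 is compatible with show 1.
M = weighted_is([0, 2, 4, 4], [0, 0, 0, 1])    # M == [0, 2, 4, 6]
```

Each M[j] is computed once, so the loop runs in O(n) after sorting and computing the pj values.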
  • 39. Memoization
Intermediate approach: keep the recursive function structure, but store each value in an array on first computation and reuse it.
Initialize array M[0..n] to empty, M[0] = 0
function Mfun(j)
  if M[j] = empty then
    M[j] = max(vj + Mfun(pj), Mfun(j − 1))
  return M[j]
► Can help if we have recursive structure but are unsure of the iteration order, or as an intermediate step in converting to iteration.
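A Python sketch of the memoized variant, matching the pseudocode above (the instance is again illustrative; “empty” is modeled as None):

```python
def mfun(j, v, p, M):
    """Memoized recursion: compute M[j] once on first call, then reuse it."""
    if M[j] is None:                   # 'empty' modeled as None
        M[j] = max(v[j] + mfun(p[j], v, p, M), mfun(j - 1, v, p, M))
    return M[j]

v = [0, 2, 4, 4]                       # hypothetical instance, as before
p = [0, 0, 0, 1]
M = [0] + [None] * 3                   # M[0] = 0, rest empty
best = mfun(3, v, p, M)                # best == 6; M now fully filled
```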
  • 40. Clicker
The asymptotic running time of the memoized algorithm is
A. the same as the initial recursive solution.
B. between the initial recursive solution and the iterative version.
C. the same as the iterative version.
  • 41. Epilogue: Recovering the Solution (1)
Idea: modify the algorithm to save the best choice for each subproblem.
WeightedIS
  Initialize array M[0..n] to hold optimal values
  Initialize array choose[1..n] to hold choices
  M[0] = 0
  for j = 1 to n do
    M[j] = max(vj + M[pj], M[j − 1])
    Set choose[j] = 1 if the first value is bigger, and 0 otherwise
  • 52. Epilogue: Recovering the Solution (2)
Then trace back from the end and “execute” the choices.
  Use the algorithm above to fill in the M and choose arrays
  O = {}
  j = n
  while j > 0 do
    if choose[j] == 1 then
      O = O ∪ {j}
      j = pj
    else
      j = j − 1
► Tip: first write the algorithm to compute the optimal value, then modify it to compute the actual solution.
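Putting the fill and trace-back together, a Python sketch (illustrative instance as before):

```python
def weighted_is_with_solution(v, p):
    """Fill M and choose, then trace back to recover the optimal set."""
    n = len(v) - 1
    M = [0] * (n + 1)
    choose = [0] * (n + 1)
    for j in range(1, n + 1):
        take = v[j] + M[p[j]]          # value if show j is included
        skip = M[j - 1]                # value if show j is excluded
        M[j] = max(take, skip)
        choose[j] = 1 if take > skip else 0
    O = set()
    j = n
    while j > 0:                       # trace back and "execute" the choices
        if choose[j] == 1:
            O.add(j)                   # show j is in the optimal set
            j = p[j]
        else:
            j -= 1
    return M[n], O

value, shows = weighted_is_with_solution([0, 2, 4, 4], [0, 0, 0, 1])
# value == 6, shows == {1, 3}
```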
  • 57. Review
► Recursive algorithm → recurrence → iterative algorithm
► Three ways of expressing the value of optimal solutions of subproblems:
  ► Compute-Value(j). Recursive algorithm: arguments identify subproblems.
  ► OPT(j). Used in the recurrence; matches the recursive algorithm.
  ► M[j]. Array to hold optimal values, filled in during the iterative algorithm.
  • 60. Key Step: Identify Subproblems
► Finding a solution means: make a “first choice”, then recursively solve a smaller instance of the same problem.
► First example: Weighted Interval Scheduling
  ► Binary first choice: j ∈ O or j ∉ O?
► Next example: rod cutting
  ► First choice has n options
  • 61. Rod Cutting
► Input: steel rod of length n, can be cut into integer lengths; get price p(i) for a piece of length i
► Goal: subdivide to maximize total value
► Example / problem formulation on board
  • 63. First decision?
Choose length i of the first piece, then recurse on the smaller rod:
max( p(1) + Rod-Cut(n − 1), p(2) + Rod-Cut(n − 2), . . . , p(n − 1) + Rod-Cut(1), p(n) )
  • 71. Step 1: Recursive Algorithm
CutRod(j)
  if j = 0 then return 0
  v = 0
  for i = 1 to j do
    v = max(v, p[i] + CutRod(j − i))
  return v
► Running time for CutRod(n)? Θ(2^n)
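The naive recursion in Python, for comparison with the iterative version later (the price table below is an illustrative example, not from the slides):

```python
def cut_rod(j, price):
    """Naive recursion; price[i] = value of a piece of length i (price[0] = 0)."""
    if j == 0:
        return 0
    v = 0
    for i in range(1, j + 1):          # try every length for the first piece
        v = max(v, price[i] + cut_rod(j - i, price))
    return v

# Illustrative price table (a common textbook example).
price = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]
# cut_rod(4, price) == 10 (two pieces of length 2)
```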
  • 74. Step 2: Recurrence
OPT(j) = max_{1 ≤ i ≤ j} { pi + OPT(j − i) }
OPT(0) = 0
  • 83. From Recurrence to Algorithm
OPT(j) = max_{1 ≤ i ≤ j} { pi + OPT(j − i) }
OPT(0) = 0
What size memoization array M? What order to fill? The recurrence provides all of the information needed to design an iterative algorithm.
► CutRod(·), OPT(·), and M[·] have the same argument: index j of unique subproblems
► Range of values of j determines size of M: M[0..n]
► Fill M so RHS values are computed before LHS: fill from 0 to n
  • 94. Step 3: Iterative Algorithm
CutRod-Iterative
  Initialize array M[0..n]
  Set M[0] = 0
  for j = 1 to n do
    v = 0
    for i = 1 to j do
      v = max(v, p[i] + M[j − i])
    Set M[j] = v
► Note: body of the for loop is identical to the recursive algorithm and directly implements the recurrence
► Running time? Θ(n^2)
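The iterative algorithm in Python (same illustrative price table as before):

```python
def cut_rod_iterative(n, price):
    """Θ(n^2) bottom-up version: M[j] = best value for a rod of length j."""
    M = [0] * (n + 1)                  # M[0] = 0: empty rod
    for j in range(1, n + 1):
        v = 0
        for i in range(1, j + 1):
            v = max(v, price[i] + M[j - i])   # reuse stored subproblem values
        M[j] = v
    return M[n]

price = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # illustrative prices
# cut_rod_iterative(4, price) == 10
```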
  • 102. Epilogue: Recover Optimal Solution
Idea: Modify the algorithm to record the choices that lead to the optimal value for each subproblem, then trace back from the end and “execute” the choices, starting with the largest problem.
Step 1: Run the previous algorithm to fill in the M array, but with the following modification: let first-cut[j] be the index i that leads to the largest value when computing M[j].
Step 2: Trace back from the end and execute the choices.
  cuts = {}
  j = n ▷ Remaining length
  while j > 0 do
    cuts = cuts ∪ {first-cut[j]} ▷ record the cut before shrinking j
    j = j − first-cut[j]
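Both steps in Python (illustrative price table as before; cuts are kept in a list rather than a set, since the same length can appear more than once):

```python
def cut_rod_with_cuts(n, price):
    """Record first_cut[j] while filling M, then trace back the cut lengths."""
    M = [0] * (n + 1)
    first_cut = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(1, j + 1):
            if price[i] + M[j - i] > M[j]:
                M[j] = price[i] + M[j - i]
                first_cut[j] = i       # best length for the first piece
    cuts = []                          # a list, since lengths may repeat
    j = n
    while j > 0:
        cuts.append(first_cut[j])      # record the cut before shrinking j
        j -= first_cut[j]
    return M[n], cuts

price = [0, 1, 5, 8, 9, 10, 17, 17, 20, 24, 30]   # illustrative prices
# cut_rod_with_cuts(4, price) == (10, [2, 2])
```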