Chapter 8
Dynamic Programming
Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
8-2Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd
ed., Ch. 8
Dynamic Programming

Dynamic Programming is a general algorithm design technique
for solving problems defined by or formulated as recurrences
with overlapping subinstances.

• Invented by American mathematician Richard Bellman in the
1950s to solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance
  to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Example: Fibonacci numbers

• Recall the definition of Fibonacci numbers:
  F(n) = F(n-1) + F(n-2)
  F(0) = 0
  F(1) = 1

• Computing the nth Fibonacci number recursively (top-down):

                 F(n)
        F(n-1)         +         F(n-2)
  F(n-2) + F(n-3)           F(n-3) + F(n-4)
        ...
Example: Fibonacci numbers (cont.)

Computing the nth Fibonacci number using bottom-up iteration and
recording results:

F(0) = 0
F(1) = 1
F(2) = 1+0 = 1
…
F(n-2) =
F(n-1) =
F(n)   = F(n-1) + F(n-2)

The table of results, filled left to right:
0 1 1 . . . F(n-2) F(n-1) F(n)

Efficiency:
- time: Θ(n)
- space: Θ(n) (or Θ(1), keeping only the last two values)

What if we solve it recursively?
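The bottom-up table-filling scheme can be sketched in Python (an illustration added here, not part of the original slides):

```python
def fib(n):
    """Compute F(n) by bottom-up DP, recording results in a table."""
    table = [0, 1]                     # F(0) = 0, F(1) = 1
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]
```

Both time and space are Θ(n); retaining only the last two entries instead of the whole table would cut space to Θ(1).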
Examples of DP algorithms

• Computing a binomial coefficient
• Longest common subsequence
• Warshall’s algorithm for transitive closure
• Floyd’s algorithm for all-pairs shortest paths
• Constructing an optimal binary search tree
• Some instances of difficult discrete optimization problems:
- traveling salesman
- knapsack
Computing a binomial coefficient by DP

Binomial coefficients are coefficients of the binomial formula:

(a + b)^n = C(n,0) a^n b^0 + . . . + C(n,k) a^(n-k) b^k + . . . + C(n,n) a^0 b^n

Recurrence: C(n,k) = C(n-1,k) + C(n-1,k-1) for n > k > 0
            C(n,0) = 1, C(n,n) = 1 for n ≥ 0

Value of C(n,k) can be computed by filling a table:

       0   1   2   . . .   k-1          k
  0    1
  1    1   1
  .
  .
  .
 n-1                       C(n-1,k-1)   C(n-1,k)
  n                                     C(n,k)
Computing C(n,k): pseudocode and analysis

Time efficiency: Θ(nk)
Space efficiency: Θ(nk)
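The slide's pseudocode was an image that did not survive extraction; the table-filling computation it describes can be sketched in Python as follows (function name `binomial` is ours):

```python
def binomial(n, k):
    """C(n,k) by filling the DP table row by row (Pascal's triangle)."""
    # C[i][j] is defined for 0 <= i <= n, 0 <= j <= min(i, k)
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                       # C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]
```

Each of the Θ(nk) cells is filled in constant time, matching the stated time and space bounds.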
Knapsack Problem by DP

Given n items of
  integer weights: w1 w2 … wn
  values:          v1 v2 … vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j - wi]}   if j - wi ≥ 0
V[i,j] = V[i-1,j]                             if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0
Knapsack Problem by DP (example)

Example: Knapsack of capacity W = 5

item  weight  value
  1     2     $12
  2     1     $10
  3     3     $20
  4     2     $15

                      capacity j
                   0   1   2   3   4   5
                   0   0   0   0   0   0
w1 = 2, v1 = 12    0   0  12  12  12  12
w2 = 1, v2 = 10    0  10  12  22  22  22
w3 = 3, v3 = 20    0  10  12  22  30  32
w4 = 2, v4 = 15    0  10  15  25  30  37

V[4,5] = 37. Backtracing finds the actual optimal subset, i.e. the solution.
Knapsack Problem by DP (pseudocode)

Algorithm DPKnapsack(w[1..n], v[1..n], W)
var V[0..n, 0..W], P[1..n, 1..W]: int
for j := 0 to W do
    V[0,j] := 0
for i := 0 to n do
    V[i,0] := 0
for i := 1 to n do
    for j := 1 to W do
        if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1,j] then
            V[i,j] := v[i] + V[i-1, j-w[i]]; P[i,j] := j - w[i]
        else
            V[i,j] := V[i-1,j]; P[i,j] := j
return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
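A Python rendering of the algorithm above (a sketch; here the chosen items are recovered by comparing V[i,j] with V[i-1,j] during backtracing, rather than through the auxiliary P table):

```python
def dp_knapsack(weights, values, W):
    """0/1 knapsack by DP; returns (best value, chosen 1-based item numbers)."""
    n = len(weights)
    # V[i][j] = best value using the first i items with capacity j
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for j in range(W + 1):
            if w <= j and v + V[i - 1][j - w] > V[i - 1][j]:
                V[i][j] = v + V[i - 1][j - w]
            else:
                V[i][j] = V[i - 1][j]
    # Backtrace: item i was taken iff V[i][j] differs from V[i-1][j]
    chosen, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            chosen.append(i)          # 1-based item number, as on the slide
            j -= weights[i - 1]
    return V[n][W], sorted(chosen)
```

On the slide's instance (weights 2, 1, 3, 2; values 12, 10, 20, 15; W = 5) this returns the optimal value 37 with items 1, 2, and 4.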
Longest Common Subsequence (LCS)

A subsequence of a sequence/string S is obtained by deleting zero
or more symbols from S. For example, the following are some
subsequences of “president”: pred, sdn, predent. In other words,
the letters of a subsequence of S appear in order in S, but they
are not required to be consecutive.

The longest common subsequence problem is to find a maximum
length common subsequence between two sequences.
LCS

For instance,
Sequence 1: president
Sequence 2: providence
Its LCS is priden.

president
providence
LCS

Another example:
Sequence 1: algorithm
Sequence 2: alignment
One of its LCSs is algm.

a l g o r i t h m
a l i g n m e n t
How to compute LCS?

Let A = a1 a2 … am and B = b1 b2 … bn.
len(i, j): the length of an LCS between a1 a2 … ai and b1 b2 … bj.

With proper initializations, len(i, j) can be computed as follows:

len(i, j) = 0                               if i = 0 or j = 0
len(i, j) = len(i-1, j-1) + 1               if i, j > 0 and ai = bj
len(i, j) = max(len(i, j-1), len(i-1, j))   if i, j > 0 and ai ≠ bj
procedure LCS-Length(A, B)
1. for i ← 0 to m do len(i,0) = 0
2. for j ← 1 to n do len(0,j) = 0
3. for i ← 1 to m do
4.   for j ← 1 to n do
5.     if ai = bj then len(i,j) = len(i-1,j-1) + 1; prev(i,j) = “↖”
6.     else if len(i-1,j) ≥ len(i,j-1)
7.       then len(i,j) = len(i-1,j); prev(i,j) = “↑”
8.       else len(i,j) = len(i,j-1); prev(i,j) = “←”
9. return len and prev
The len table for A = president, B = providence:

 i\j       0   1   2   3   4   5   6   7   8   9  10
               p   r   o   v   i   d   e   n   c   e
 0         0   0   0   0   0   0   0   0   0   0   0
 1   p     0   1   1   1   1   1   1   1   1   1   1
 2   r     0   1   2   2   2   2   2   2   2   2   2
 3   e     0   1   2   2   2   2   2   3   3   3   3
 4   s     0   1   2   2   2   2   2   3   3   3   3
 5   i     0   1   2   2   2   3   3   3   3   3   3
 6   d     0   1   2   2   2   3   4   4   4   4   4
 7   e     0   1   2   2   2   3   4   5   5   5   5
 8   n     0   1   2   2   2   3   4   5   6   6   6
 9   t     0   1   2   2   2   3   4   5   6   6   6

Running time and memory: O(mn) and O(mn).
The backtracing algorithm:

procedure Output-LCS(A, prev, i, j)
1 if i = 0 or j = 0 then return
2 if prev(i, j) = “↖” then Output-LCS(A, prev, i-1, j-1); print ai
3 else if prev(i, j) = “↑” then Output-LCS(A, prev, i-1, j)
4 else Output-LCS(A, prev, i, j-1)
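The two procedures, LCS-Length and Output-LCS, can be combined into one Python sketch (added here for illustration; the backtrace is written iteratively rather than recursively):

```python
def lcs(A, B):
    """Return (LCS length, one LCS string) of A and B, per LCS-Length/Output-LCS."""
    m, n = len(A), len(B)
    # length[i][j] = length of an LCS of A[:i] and B[:j]
    length = [[0] * (n + 1) for _ in range(m + 1)]
    prev = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i - 1] == B[j - 1]:
                length[i][j] = length[i - 1][j - 1] + 1
                prev[i][j] = "↖"
            elif length[i - 1][j] >= length[i][j - 1]:
                length[i][j] = length[i - 1][j]
                prev[i][j] = "↑"
            else:
                length[i][j] = length[i][j - 1]
                prev[i][j] = "←"
    # Backtrace from (m, n), collecting matched symbols
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if prev[i][j] == "↖":
            out.append(A[i - 1]); i -= 1; j -= 1
        elif prev[i][j] == "↑":
            i -= 1
        else:
            j -= 1
    return length[m][n], "".join(reversed(out))
```

For the running example, lcs("president", "providence") yields length 6 with the subsequence "priden".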
Backtracing over the len table above yields:

Output: priden
Warshall’s Algorithm: Transitive Closure

• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure (a digraph on vertices 1–4):

Adjacency matrix:    Transitive closure:
0 0 1 0              0 0 1 0
1 0 0 1              1 1 1 1
0 0 0 0              0 0 0 0
0 1 0 0              1 1 1 1
Warshall’s Algorithm

Constructs transitive closure T as the last matrix in the sequence
of n-by-n matrices R(0), … , R(k), … , R(n), where

R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the
first k vertices allowed as intermediate

Note that R(0) = A (adjacency matrix), R(n) = T (transitive closure)

R(0) =  0 0 1 0    R(1) =  0 0 1 0    R(2) =  0 0 1 0
        1 0 0 1            1 0 1 1            1 0 1 1
        0 0 0 0            0 0 0 0            0 0 0 0
        0 1 0 0            0 1 0 0            1 1 1 1

R(3) =  0 0 1 0    R(4) =  0 0 1 0
        1 0 1 1            1 1 1 1
        0 0 0 0            0 0 0 0
        1 1 1 1            1 1 1 1
Warshall’s Algorithm (recurrence)

On the k-th iteration, the algorithm determines for every pair of
vertices i, j if a path exists from i to j with just vertices 1,…,k
allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                      (path using just 1,…,k-1)
            or
            (R(k-1)[i,k] and R(k-1)[k,j])    (path from i to k and from k to j,
                                              each using just 1,…,k-1)

Initial condition?
Warshall’s Algorithm (matrix generation)

Recurrence relating elements of R(k) to elements of R(k-1):

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

It implies the following rules for generating R(k) from R(k-1):

Rule 1  If an element in row i and column j is 1 in R(k-1),
        it remains 1 in R(k)

Rule 2  If an element in row i and column j is 0 in R(k-1),
        it has to be changed to 1 in R(k) if and only if
        the element in its row i and column k and the element
        in its column j and row k are both 1’s in R(k-1)
Warshall’s Algorithm (example)

R(0) =  0 0 1 0    R(1) =  0 0 1 0    R(2) =  0 0 1 0
        1 0 0 1            1 0 1 1            1 0 1 1
        0 0 0 0            0 0 0 0            0 0 0 0
        0 1 0 0            0 1 0 0            1 1 1 1

R(3) =  0 0 1 0    R(4) =  0 0 1 0
        1 0 1 1            1 1 1 1
        0 0 0 0            0 0 0 0
        1 1 1 1            1 1 1 1
Warshall’s Algorithm (pseudocode and analysis)

Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors
(with some care), so it’s Θ(n²).
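A Python sketch of Warshall's algorithm (illustrative, not the slides' own pseudocode). It overwrites a single matrix in place, which is exactly the "write over the predecessor" trick, so only Θ(n²) space is used:

```python
def warshall(adj):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix."""
    n = len(adj)
    R = [row[:] for row in adj]          # R starts as R(0) = adjacency matrix
    for k in range(n):                   # allow vertex k+1 as an intermediate
        for i in range(n):
            for j in range(n):
                # Rule 1 (a 1 stays 1) and Rule 2 (0 becomes 1 iff R[i][k] and R[k][j])
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

Running it on the example digraph (edges 1→3, 2→1, 2→4, 4→2) reproduces the closure matrix R(4) shown on the slides.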
Floyd’s Algorithm: All pairs shortest paths

Problem: In a weighted (di)graph, find shortest paths between
every pair of vertices

Same idea: construct the solution through a series of matrices
D(0), …, D(n) using increasing subsets of the vertices allowed
as intermediate

Example (a weighted digraph on vertices 1–4):

0 ∞ 4 ∞
1 0 4 3
∞ ∞ 0 ∞
6 5 1 0
Floyd’s Algorithm (matrix generation)

On the k-th iteration, the algorithm determines shortest paths
between every pair of vertices i, j that use only vertices among
1,…,k as intermediate:

D(k)[i,j] = min {D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j]}

Initial condition?
Floyd’s Algorithm (example)

D(0) =  0 ∞ 3 ∞    D(1) =  0 ∞ 3 ∞    D(2) =  0 ∞ 3 ∞
        2 0 ∞ ∞            2 0 5 ∞            2 0 5 ∞
        ∞ 7 0 1            ∞ 7 0 1            9 7 0 1
        6 ∞ ∞ 0            6 ∞ 9 0            6 ∞ 9 0

D(3) =  0 10 3 4   D(4) =  0 10 3 4
        2  0 5 6           2  0 5 6
        9  7 0 1           7  7 0 1
        6 16 9 0           6 16 9 0
Floyd’s Algorithm (pseudocode and analysis)

Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors

Note: Works on graphs with negative edges but without negative cycles.
Shortest paths themselves can be found, too. How?
If D[i,k] + D[k,j] < D[i,j] then P[i,j] ← k,
since the superscripts k or k-1 make no difference to D[i,k] and D[k,j].
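A Python sketch of Floyd's algorithm (illustrative), again overwriting one matrix in place, which is safe because the superscripts k and k-1 make no difference to D[i,k] and D[k,j]:

```python
def floyd(W):
    """All-pairs shortest path lengths from a weight matrix W,
    with float('inf') marking absent edges."""
    n = len(W)
    D = [row[:] for row in W]            # D starts as D(0) = weight matrix
    for k in range(n):                   # allow vertex k+1 as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D
```

On the example's weight matrix D(0) this produces the final matrix D(4) from the slide.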
Optimal Binary Search Trees

Problem: Given n keys a1 < … < an and probabilities p1, …, pn of
searching for them, find a BST with a minimum average number of
comparisons in successful search.

Since the total number of BSTs with n nodes is given by
C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.

Example: What is an optimal BST for keys A, B, C, and D with
search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?

For the BST with C at the root, B and D as its children, and A as
B’s left child:

Average # of comparisons
= 1*0.4 + 2*(0.2+0.3) + 3*0.1
= 1.7
DP for Optimal BST Problem

Let C[i,j] be the minimum average number of comparisons made in
T[i,j], an optimal BST for keys ai < … < aj, where 1 ≤ i ≤ j ≤ n.

Consider an optimal BST among all BSTs with some ak (i ≤ k ≤ j)
as their root; T[i,j] is the best among them.

[figure: ak at the root, with an optimal BST for ai, …, ak-1 as its
left subtree and an optimal BST for ak+1, …, aj as its right subtree]

C[i,j] = min over i ≤ k ≤ j of { pk · 1
         + Σ (s = i to k-1) ps · (level of as in T[i,k-1] + 1)
         + Σ (s = k+1 to j) ps · (level of as in T[k+1,j] + 1) }
[figure: the C[i,j] table, with rows i = 1,…,n+1 and columns j = 0,…,n,
is filled diagonal by diagonal, starting from the entries p1, p2, …, pn
on the main diagonal and ending at the goal entry C[1,n]]
DP for Optimal BST Problem (cont.)

After simplifications, we obtain the recurrence for C[i,j]:

C[i,j] = min over i ≤ k ≤ j of {C[i,k-1] + C[k+1,j]} + Σ (s = i to j) ps   for 1 ≤ i ≤ j ≤ n
C[i,i] = pi   for 1 ≤ i ≤ n

Example: key         A    B    C    D
         probability 0.1  0.2  0.4  0.3

The tables below are filled diagonal by diagonal: the left one is
filled using the recurrence above; the right one, for the trees’
roots, records the values of k giving the minima.

Main table C[i,j]:              Root table:
 i\j   0    1    2    3    4     i\j   1   2   3   4
 1     0   .1   .4  1.1  1.7     1     1   2   3   3
 2          0   .2   .8  1.4     2         2   3   3
 3               0   .4  1.0     3             3   3
 4                    0   .3     4                 4
 5                        0      5

Optimal BST: C at the root, with B (and B’s left child A) as the
left subtree and D as the right subtree.
Optimal Binary Search Trees
Analysis of DP for Optimal BST Problem

Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking
advantage of monotonicity of entries in the root table, i.e.,
R[i,j] is always in the range between R[i,j-1] and R[i+1,j]

Space efficiency: Θ(n²)

Method can be expanded to include unsuccessful searches
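The diagonal-by-diagonal filling of the C and root tables can be sketched in Python (an illustration added here; 1-based indexing matches the slides, and the function name `optimal_bst` is ours):

```python
def optimal_bst(p):
    """Minimum expected comparisons and root table for keys a1 < ... < an
    with success probabilities p[0..n-1], per the C[i,j] recurrence."""
    n = len(p)
    # C[i][j] and R[i][j] for 1 <= i <= n+1, 0 <= j <= n (1-based, as on the slides)
    C = [[0.0] * (n + 1) for _ in range(n + 2)]
    R = [[0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]               # C[i,i] = pi
        R[i][i] = i
    for d in range(1, n):                # fill diagonal j - i = d
        for i in range(1, n - d + 1):
            j = i + d
            best, best_k = float('inf'), i
            for k in range(i, j + 1):    # try each key ak as the root
                val = C[i][k - 1] + C[k + 1][j]
                if val < best:
                    best, best_k = val, k
            C[i][j] = best + sum(p[i - 1:j])   # + sum of ps for s = i..j
            R[i][j] = best_k
    return C[1][n], R
```

For the example (probabilities 0.1, 0.2, 0.4, 0.3) this yields C[1,4] = 1.7 with root R[1,4] = 3, i.e. key C, matching the tables above.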

More Related Content

PPTX
Regular expressions
PDF
Approximation Algorithms
PPTX
Stressen's matrix multiplication
PPT
Asymptotic notations
PDF
backtracking algorithms of ada
PDF
Dynamic programming
PDF
Algorithms Lecture 3: Analysis of Algorithms II
PPTX
N queens using backtracking
Regular expressions
Approximation Algorithms
Stressen's matrix multiplication
Asymptotic notations
backtracking algorithms of ada
Dynamic programming
Algorithms Lecture 3: Analysis of Algorithms II
N queens using backtracking

What's hot (20)

PDF
Closure properties of context free grammar
PPTX
NP completeness
PPTX
Asymptotic Notation
PPTX
Algorithm Complexity and Main Concepts
PPTX
strassen matrix multiplication algorithm
PPTX
Context free grammars
PPTX
CMSC 56 | Lecture 8: Growth of Functions
PDF
Algorithms Lecture 2: Analysis of Algorithms I
PDF
Shortest path algorithms
PPT
Data Structure and Algorithms Hashing
PPTX
Divide and conquer 1
PPTX
Strassen's matrix multiplication
PPT
Regular Languages
PPT
Greedy algorithms
PDF
Lecture: Regular Expressions and Regular Languages
PPTX
Lecture optimal binary search tree
PPTX
Knapsack Problem
PPTX
Basic Traversal and Search Techniques
PPT
context free language
Closure properties of context free grammar
NP completeness
Asymptotic Notation
Algorithm Complexity and Main Concepts
strassen matrix multiplication algorithm
Context free grammars
CMSC 56 | Lecture 8: Growth of Functions
Algorithms Lecture 2: Analysis of Algorithms I
Shortest path algorithms
Data Structure and Algorithms Hashing
Divide and conquer 1
Strassen's matrix multiplication
Regular Languages
Greedy algorithms
Lecture: Regular Expressions and Regular Languages
Lecture optimal binary search tree
Knapsack Problem
Basic Traversal and Search Techniques
context free language
Ad

Similar to 5.3 dynamic programming (20)

PPTX
8_dynamic_algorithm powerpoint ptesentation.pptx
PPT
Learn about dynamic programming and how to design algorith
PPT
ch08-2019-03-27 (1).ppt
PDF
Sienna 10 dynamic
PPTX
Chapter 5.pptx
PPTX
Design and Analysis of Algorithm-Lecture.pptx
PPT
daa_notes_of_backtracking_branchandbound.ppt
PDF
DS & Algo 6 - Dynamic Programming
PPT
ERK_SRU_ch08-2019-03-27.ppt discussion in class room
PDF
Dynamic programing
PDF
Cs6402 scad-msm
PDF
Skiena algorithm 2007 lecture15 backtracing
PDF
PPTX
Dynamic programming - fundamentals review
PDF
Ch01 basic concepts_nosoluiton
PPTX
greedy algorithm Fractional Knapsack
PDF
Lecture_DynamicProgramming test12345.pdf
PPT
AOA ppt.ppt
PPT
d0a2de03-27d3-4ca2-9ac6-d83440657a6c.ppt
PDF
Sure interview algorithm-1103
8_dynamic_algorithm powerpoint ptesentation.pptx
Learn about dynamic programming and how to design algorith
ch08-2019-03-27 (1).ppt
Sienna 10 dynamic
Chapter 5.pptx
Design and Analysis of Algorithm-Lecture.pptx
daa_notes_of_backtracking_branchandbound.ppt
DS & Algo 6 - Dynamic Programming
ERK_SRU_ch08-2019-03-27.ppt discussion in class room
Dynamic programing
Cs6402 scad-msm
Skiena algorithm 2007 lecture15 backtracing
Dynamic programming - fundamentals review
Ch01 basic concepts_nosoluiton
greedy algorithm Fractional Knapsack
Lecture_DynamicProgramming test12345.pdf
AOA ppt.ppt
d0a2de03-27d3-4ca2-9ac6-d83440657a6c.ppt
Sure interview algorithm-1103
Ad

More from Krish_ver2 (20)

PPT
5.5 back tracking
PPT
5.5 back track
PPT
5.5 back tracking 02
PPT
5.4 randomized datastructures
PPT
5.4 randomized datastructures
PPT
5.4 randamized algorithm
PPT
5.3 dynamic programming 03
PPT
5.3 dyn algo-i
PPT
5.2 divede and conquer 03
PPT
5.2 divide and conquer
PPT
5.2 divede and conquer 03
PPT
5.1 greedyyy 02
PPT
5.1 greedy
PPT
5.1 greedy 03
PPT
4.4 hashing02
PPT
4.4 hashing
PPT
4.4 hashing ext
PPT
4.4 external hashing
PPT
4.2 bst
PPT
4.2 bst 03
5.5 back tracking
5.5 back track
5.5 back tracking 02
5.4 randomized datastructures
5.4 randomized datastructures
5.4 randamized algorithm
5.3 dynamic programming 03
5.3 dyn algo-i
5.2 divede and conquer 03
5.2 divide and conquer
5.2 divede and conquer 03
5.1 greedyyy 02
5.1 greedy
5.1 greedy 03
4.4 hashing02
4.4 hashing
4.4 hashing ext
4.4 external hashing
4.2 bst
4.2 bst 03

Recently uploaded (20)

PPTX
Introduction to Child Health Nursing – Unit I | Child Health Nursing I | B.Sc...
PDF
Supply Chain Operations Speaking Notes -ICLT Program
PDF
Mark Klimek Lecture Notes_240423 revision books _173037.pdf
PDF
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
PDF
VCE English Exam - Section C Student Revision Booklet
PPTX
The Healthy Child – Unit II | Child Health Nursing I | B.Sc Nursing 5th Semester
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PDF
TR - Agricultural Crops Production NC III.pdf
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PPTX
Renaissance Architecture: A Journey from Faith to Humanism
PPTX
Institutional Correction lecture only . . .
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
PPH.pptx obstetrics and gynecology in nursing
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PPTX
master seminar digital applications in india
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PPTX
Pharma ospi slides which help in ospi learning
PDF
Abdominal Access Techniques with Prof. Dr. R K Mishra
PDF
Insiders guide to clinical Medicine.pdf
Introduction to Child Health Nursing – Unit I | Child Health Nursing I | B.Sc...
Supply Chain Operations Speaking Notes -ICLT Program
Mark Klimek Lecture Notes_240423 revision books _173037.pdf
Origin of periodic table-Mendeleev’s Periodic-Modern Periodic table
VCE English Exam - Section C Student Revision Booklet
The Healthy Child – Unit II | Child Health Nursing I | B.Sc Nursing 5th Semester
102 student loan defaulters named and shamed – Is someone you know on the list?
TR - Agricultural Crops Production NC III.pdf
Microbial diseases, their pathogenesis and prophylaxis
O5-L3 Freight Transport Ops (International) V1.pdf
Renaissance Architecture: A Journey from Faith to Humanism
Institutional Correction lecture only . . .
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPH.pptx obstetrics and gynecology in nursing
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
master seminar digital applications in india
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Pharma ospi slides which help in ospi learning
Abdominal Access Techniques with Prof. Dr. R K Mishra
Insiders guide to clinical Medicine.pdf

5.3 dynamic programming

  • 1. Chapter 8Chapter 8 Dynamic ProgrammingDynamic Programming Copyright © 2007 Pearson Addison-Wesley. All rights reserved.
  • 2. 8-2Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Dynamic ProgrammingDynamic Programming DDynamic Programmingynamic Programming is a general algorithm design techniqueis a general algorithm design technique for solving problems defined by or formulated as recurrencesfor solving problems defined by or formulated as recurrences with overlapping subinstanceswith overlapping subinstances • Invented by American mathematician Richard Bellman in theInvented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS1950s to solve optimization problems and later assimilated by CS • ““Programming” here means “planning”Programming” here means “planning” • Main idea:Main idea: - set up a recurrence relating a solution to a larger instanceset up a recurrence relating a solution to a larger instance to solutions of some smaller instancesto solutions of some smaller instances - solve smaller instances once- solve smaller instances once - record solutions in a tablerecord solutions in a table - extract solution to the initial instance from that tableextract solution to the initial instance from that table
  • 3. 8-3Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Example: Fibonacci numbersExample: Fibonacci numbers • Recall definition of Fibonacci numbers:Recall definition of Fibonacci numbers: FF((nn)) = F= F((nn-1)-1) + F+ F((nn-2)-2) FF(0)(0) == 00 FF(1)(1) == 11 • Computing theComputing the nnthth Fibonacci number recursively (top-down):Fibonacci number recursively (top-down): FF((nn)) FF((n-n-1)1) + F+ F((n-n-2)2) FF((n-n-2)2) + F+ F((n-n-3)3) FF((n-n-3)3) + F+ F((n-n-4)4) ......
  • 4. 8-4Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Example: Fibonacci numbers (cont.)Example: Fibonacci numbers (cont.) Computing theComputing the nnthth Fibonacci number using bottom-up iteration andFibonacci number using bottom-up iteration and recording results:recording results: FF(0)(0) == 00 FF(1)(1) == 11 FF(2)(2) == 1+0 = 11+0 = 1 …… FF((nn-2) =-2) = FF((nn-1) =-1) = FF((nn) =) = FF((nn-1)-1) + F+ F((nn-2)-2) Efficiency:Efficiency: - time- time - space- space 0 1 1 . . . F(n-2) F(n-1) F(n) n n What if we solve it recursively?
  • 5. 8-5Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin “Introduction to the Design & Analysis of Algorithms,” 2nd ed., Ch. 8 Examples of DP algorithmsExamples of DP algorithms • Computing a binomial coefficientComputing a binomial coefficient • Longest common subsequenceLongest common subsequence • Warshall’s algorithm for transitive closureWarshall’s algorithm for transitive closure • Floyd’s algorithm for all-pairs shortest pathsFloyd’s algorithm for all-pairs shortest paths • Constructing an optimal binary search treeConstructing an optimal binary search tree • Some instances of difficult discrete optimization problems:Some instances of difficult discrete optimization problems: - traveling salesman- traveling salesman - knapsack- knapsack
  • 6. Computing a binomial coefficient by DP
    Binomial coefficients are the coefficients of the binomial formula:
      (a + b)^n = C(n,0) a^n b^0 + ... + C(n,k) a^(n-k) b^k + ... + C(n,n) a^0 b^n
    Recurrence:
      C(n,k) = C(n-1,k) + C(n-1,k-1)   for n > k > 0
      C(n,0) = 1,  C(n,n) = 1          for n ≥ 0
    The value of C(n,k) can be computed by filling a table row by row:
    row n-1 supplies C(n-1,k-1) and C(n-1,k); row n receives C(n,k).
  • 7. Computing C(n,k): pseudocode and analysis
    Time efficiency: Θ(nk)
    Space efficiency: Θ(nk)
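The pseudocode itself did not survive extraction; a Python sketch of the standard Θ(nk) table-filling algorithm (my reconstruction, not the slide's exact pseudocode):

```python
def binomial(n, k):
    """C(n,k) by DP: fill the Pascal's-triangle table row by row."""
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1                       # C(i,0) = C(i,i) = 1
            else:
                C[i][j] = C[i-1][j-1] + C[i-1][j] # the recurrence
    return C[n][k]
```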
  • 8. Knapsack Problem by DP
    Given n items with
      integer weights: w1, w2, ..., wn
      values:          v1, v2, ..., vn
    and a knapsack of integer capacity W, find the most valuable subset of the items that fits into the knapsack.
    Consider the instance defined by the first i items and capacity j (j ≤ W).
    Let V[i,j] be the optimal value of such an instance. Then
      V[i,j] = max { V[i-1,j],  vi + V[i-1, j-wi] }   if j - wi ≥ 0
      V[i,j] = V[i-1,j]                               if j - wi < 0
    Initial conditions: V[0,j] = 0 and V[i,0] = 0
  • 9. Knapsack Problem by DP (example)
    Example: knapsack of capacity W = 5
      item  weight  value
       1      2     $12
       2      1     $10
       3      3     $20
       4      2     $15
    Table V[i,j], capacity j = 0..5:
                          j: 0   1   2   3   4   5
      i=0:                   0   0   0   0   0   0
      i=1 (w1=2, v1=12):     0   0  12  12  12  12
      i=2 (w2=1, v2=10):     0  10  12  22  22  22
      i=3 (w3=3, v3=20):     0  10  12  22  30  32
      i=4 (w4=2, v4=15):     0  10  15  25  30  37
    Backtracing finds the actual optimal subset, i.e., the solution.
  • 10. Knapsack Problem by DP (pseudocode)
    Algorithm DPKnapsack(w[1..n], v[1..n], W)
      var V[0..n, 0..W], P[1..n, 1..W]: int
      for j := 0 to W do V[0,j] := 0
      for i := 0 to n do V[i,0] := 0
      for i := 1 to n do
        for j := 1 to W do
          if w[i] ≤ j and v[i] + V[i-1, j-w[i]] > V[i-1,j]
            then V[i,j] := v[i] + V[i-1, j-w[i]]; P[i,j] := j - w[i]
            else V[i,j] := V[i-1,j]; P[i,j] := j
      return V[n,W] and the optimal subset by backtracing
    Running time and space: O(nW).
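A runnable Python version of this algorithm (0-based indexing; as a minor deviation from the slide, the backtrace rederives each choice from the V table instead of storing a P table):

```python
def knapsack(w, v, W):
    """DP knapsack: V[i][j] = best value using the first i items, capacity j."""
    n = len(w)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            if w[i-1] <= j:
                V[i][j] = max(V[i-1][j], v[i-1] + V[i-1][j - w[i-1]])
            else:
                V[i][j] = V[i-1][j]
    # Backtracing: item i was taken iff its row changed the value.
    subset, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i-1][j]:
            subset.append(i)         # 1-based item number, as on the slides
            j -= w[i-1]
    return V[n][W], sorted(subset)
```

On the slide's instance (W = 5, weights 2,1,3,2, values 12,10,20,15) this yields value 37 with items {1, 2, 4}.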
  • 11. Longest Common Subsequence (LCS)
    A subsequence of a sequence/string S is obtained by deleting zero or more symbols from S. For example, the following are some subsequences of "president": pred, sdn, predent. In other words, the letters of a subsequence of S appear in order in S, but they are not required to be consecutive.
    The longest common subsequence problem is to find a maximum-length common subsequence between two sequences.
  • 12. LCS
    For instance,
      Sequence 1: president
      Sequence 2: providence
    Their LCS is priden.
  • 13. LCS
    Another example:
      Sequence 1: algorithm
      Sequence 2: alignment
    One of their LCSs is algm.
  • 14. How to compute an LCS?
    Let A = a1 a2 ... am and B = b1 b2 ... bn.
    len(i, j): the length of an LCS between a1 a2 ... ai and b1 b2 ... bj.
    With proper initializations, len(i, j) can be computed as follows:
      len(i, j) = 0                                 if i = 0 or j = 0
      len(i, j) = len(i-1, j-1) + 1                 if i, j > 0 and ai = bj
      len(i, j) = max( len(i, j-1), len(i-1, j) )   if i, j > 0 and ai ≠ bj
  • 15. procedure LCS-Length(A, B)
    1. for i ← 0 to m do len(i,0) = 0
    2. for j ← 1 to n do len(0,j) = 0
    3. for i ← 1 to m do
    4.   for j ← 1 to n do
    5.     if ai = bj then len(i,j) = len(i-1,j-1) + 1; prev(i,j) = "↖"
    6.     else if len(i-1,j) ≥ len(i,j-1)
    7.       then len(i,j) = len(i-1,j); prev(i,j) = "↑"
    8.       else len(i,j) = len(i,j-1); prev(i,j) = "←"
    9. return len and prev
    (The arrow symbols stored in prev were lost in extraction; ↖/↑/← is the standard convention.)
  • 16. The len table for A = president (rows) and B = providence (columns):
      i\j   0  1p 2r 3o 4v 5i 6d 7e 8n 9c 10e
      0     0  0  0  0  0  0  0  0  0  0  0
      1 p   0  1  1  1  1  1  1  1  1  1  1
      2 r   0  1  2  2  2  2  2  2  2  2  2
      3 e   0  1  2  2  2  2  2  3  3  3  3
      4 s   0  1  2  2  2  2  2  3  3  3  3
      5 i   0  1  2  2  2  3  3  3  3  3  3
      6 d   0  1  2  2  2  3  4  4  4  4  4
      7 e   0  1  2  2  2  3  4  5  5  5  5
      8 n   0  1  2  2  2  3  4  5  6  6  6
      9 t   0  1  2  2  2  3  4  5  6  6  6
    Running time and memory: O(mn) and O(mn).
  • 17. The backtracing algorithm
    procedure Output-LCS(A, prev, i, j)
    1 if i = 0 or j = 0 then return
    2 if prev(i,j) = "↖" then Output-LCS(A, prev, i-1, j-1); print ai
    3 else if prev(i,j) = "↑" then Output-LCS(A, prev, i-1, j)
    4 else Output-LCS(A, prev, i, j-1)
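LCS-Length and Output-LCS can be combined into one Python sketch (illustrative; it rederives the trace directly from the len table instead of storing prev arrows, with the same tie-breaking as lines 6-8 of the pseudocode):

```python
def lcs(A, B):
    """Fill the len table, then backtrace to recover one LCS of A and B."""
    m, n = len(A), len(B)
    ln = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if A[i-1] == B[j-1]:
                ln[i][j] = ln[i-1][j-1] + 1
            else:
                ln[i][j] = max(ln[i-1][j], ln[i][j-1])
    # Backtrace, mirroring Output-LCS: match -> diagonal, else prefer "up".
    out, i, j = [], m, n
    while i and j:
        if A[i-1] == B[j-1]:
            out.append(A[i-1]); i -= 1; j -= 1
        elif ln[i-1][j] >= ln[i][j-1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))
```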
  • 18. The same len table as on the previous slide, traced back along the prev arrows from len(9,10).
    Output: priden
  • 19. Warshall's Algorithm: Transitive Closure
    - Computes the transitive closure of a relation
    - Alternatively: existence of all nontrivial paths in a digraph
    - Example of transitive closure (adjacency matrix on the left, closure on the right):
        0 0 1 0        0 0 1 0
        1 0 0 1        1 1 1 1
        0 0 0 0        0 0 0 0
        0 1 0 0        1 1 1 1
  • 20. Warshall's Algorithm
    Constructs the transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), ..., R(k), ..., R(n), where
      R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
    Note that R(0) = A (adjacency matrix) and R(n) = T (transitive closure).
      R(0)       R(1)       R(2)       R(3)       R(4)
      0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0    0 0 1 0
      1 0 0 1    1 0 1 1    1 0 1 1    1 0 1 1    1 1 1 1
      0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0    0 0 0 0
      0 1 0 0    0 1 0 0    1 1 1 1    1 1 1 1    1 1 1 1
  • 21. Warshall's Algorithm (recurrence)
    On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1, ..., k allowed as intermediate:
      R(k)[i,j] = R(k-1)[i,j]                        (path using just 1, ..., k-1)
                  or
                  R(k-1)[i,k] and R(k-1)[k,j]        (path from i to k and from k to j using just 1, ..., k-1)
    Initial condition?
  • 22. Warshall's Algorithm (matrix generation)
    The recurrence relating elements of R(k) to elements of R(k-1) is:
      R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
    It implies the following rules for generating R(k) from R(k-1):
      Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
      Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1's in R(k-1).
  • 23. Warshall's Algorithm (example)
    R(0):  0 0 1 0 / 1 0 0 1 / 0 0 0 0 / 0 1 0 0
    R(1):  0 0 1 0 / 1 0 1 1 / 0 0 0 0 / 0 1 0 0
    R(2):  0 0 1 0 / 1 0 1 1 / 0 0 0 0 / 1 1 1 1
    R(3):  0 0 1 0 / 1 0 1 1 / 0 0 0 0 / 1 1 1 1
    R(4):  0 0 1 0 / 1 1 1 1 / 0 0 0 0 / 1 1 1 1
  • 24. Warshall's Algorithm (pseudocode and analysis)
    Time efficiency: Θ(n³)
    Space efficiency: the matrices can be written over their predecessors (with some care), so it is Θ(n²).
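The recurrence translates almost directly into code; a Python sketch with 0-based indices (illustrative, using the in-place overwrite mentioned above):

```python
def warshall(A):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix."""
    n = len(A)
    R = [row[:] for row in A]       # R(0) = A; overwritten in place
    for k in range(n):              # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                # Rule 2: a 0 becomes 1 iff R[i][k] and R[k][j] are both 1
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R
```

On the slides' 4-vertex digraph this reproduces R(4).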
  • 25. Floyd's Algorithm: All-Pairs Shortest Paths
    Problem: In a weighted (di)graph, find shortest paths between every pair of vertices.
    Same idea: construct the solution through a series of matrices D(0), ..., D(n) using increasing subsets of the vertices allowed as intermediate.
    Example weight matrix:
      0  ∞  4  ∞
      1  0  4  3
      ∞  ∞  0  ∞
      6  5  1  0
  • 26. Floyd's Algorithm (matrix generation)
    On the k-th iteration, the algorithm determines shortest paths between every pair of vertices i, j that use only vertices among 1, ..., k as intermediate:
      D(k)[i,j] = min { D(k-1)[i,j],  D(k-1)[i,k] + D(k-1)[k,j] }
    Initial condition?
  • 27. Floyd's Algorithm (example)
    D(0):  0  ∞  3  ∞ / 2  0  ∞  ∞ / ∞  7  0  1 / 6  ∞  ∞  0
    D(1):  0  ∞  3  ∞ / 2  0  5  ∞ / ∞  7  0  1 / 6  ∞  9  0
    D(2):  0  ∞  3  ∞ / 2  0  5  ∞ / 9  7  0  1 / 6  ∞  9  0
    D(3):  0 10  3  4 / 2  0  5  6 / 9  7  0  1 / 6 16  9  0
    D(4):  0 10  3  4 / 2  0  5  6 / 7  7  0  1 / 6 16  9  0
  • 28. Floyd's Algorithm (pseudocode and analysis)
    Time efficiency: Θ(n³)
    Space efficiency: matrices can be written over their predecessors, since the superscripts k and k-1 make no difference to D[i,k] and D[k,j].
    Note: works on graphs with negative edges but without negative cycles.
    The shortest paths themselves can be found, too. How? Record the intermediate vertex whenever the minimum changes:
      if D[i,k] + D[k,j] < D[i,j] then P[i,j] ← k
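A Python sketch of Floyd's algorithm with the P table for path recovery (illustrative, 0-based indices):

```python
INF = float('inf')

def floyd(W):
    """All-pairs shortest paths; P[i][j] records an intermediate vertex
    on a shortest i->j path (None if the direct edge is shortest)."""
    n = len(W)
    D = [row[:] for row in W]       # D(0) = W; overwritten in place
    P = [[None] * n for _ in range(n)]
    for k in range(n):              # allow vertex k as intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
                    P[i][j] = k     # the slide's rule: P[i,j] <- k
    return D, P
```

On the weight matrix of slide 27 this reproduces D(4).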
  • 29. Optimal Binary Search Trees
    Problem: Given n keys a1 < ... < an and probabilities p1, ..., pn of searching for them, find a BST with a minimum average number of comparisons in a successful search.
    Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.
    Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?
    For the tree with root C, left subtree rooted at B (with left child A), and right child D:
      average # of comparisons = 1·0.4 + 2·(0.2 + 0.3) + 3·0.1 = 1.7
  • 30. DP for Optimal BST Problem
    Let C[i,j] be the minimum average number of comparisons made in T[i,j], an optimal BST for keys ai < ... < aj, where 1 ≤ i ≤ j ≤ n.
    Consider the optimal BSTs among all BSTs with some ak (i ≤ k ≤ j) as their root; T[i,j] is the best among them. Its left subtree is an optimal BST for ai, ..., a(k-1) and its right subtree an optimal BST for a(k+1), ..., aj.
      C[i,j] = min over i ≤ k ≤ j of { pk · 1
               + Σ (s = i to k-1)  ps · (level of as in T[i,k-1] + 1)
               + Σ (s = k+1 to j)  ps · (level of as in T[k+1,j] + 1) }
  • 31. DP for Optimal BST Problem (cont.)
    After simplifications, we obtain the recurrence for C[i,j]:
      C[i,j] = min over i ≤ k ≤ j of { C[i,k-1] + C[k+1,j] } + Σ (s = i to j) ps   for 1 ≤ i ≤ j ≤ n
      C[i,i] = pi   for 1 ≤ i ≤ n
    The table C[i,j], with goal entry C[1,n], is filled along its diagonals.
  • 32. Example:
      key          A    B    C    D
      probability  0.1  0.2  0.4  0.3
    The tables below are filled diagonal by diagonal: the left one using the recurrence
      C[i,j] = min over i ≤ k ≤ j of { C[i,k-1] + C[k+1,j] } + Σ (s = i to j) ps,   C[i,i] = pi;
    the right one, for trees' roots, records the k values giving the minima.
      C[i,j]:  j= 0   1   2    3    4       R[i,j]:  j= 1  2  3  4
        i=1        0  .1  .4  1.1  1.7        i=1        1  2  3  3
        i=2            0  .2   .8  1.4        i=2           2  3  3
        i=3                0   .4  1.0        i=3              3  3
        i=4                     0   .3        i=4                 4
        i=5                          0
    Optimal BST: root C, with left subtree rooted at B (left child A) and right child D.
  • 33. Optimal Binary Search Trees
  • 34. Analysis of DP for Optimal BST Problem
    Time efficiency: Θ(n³), but it can be reduced to Θ(n²) by taking advantage of the monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j].
    Space efficiency: Θ(n²)
    The method can be expanded to include unsuccessful searches.
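The diagonal-by-diagonal scheme of slides 30-32 can be sketched in Python (my reconstruction, 1-based tables as on the slides; the Θ(n²) monotonicity speed-up is not applied):

```python
def optimal_bst(p):
    """C[i][j]: min average comparisons for keys a_i..a_j (1-based);
    R[i][j]: root index k achieving the minimum."""
    n = len(p)
    # Rows 0..n+1, columns 0..n, so C[i][i-1] and C[j+1][j] are 0 (empty trees).
    C = [[0.0] * (n + 1) for _ in range(n + 2)]
    R = [[0] * (n + 1) for _ in range(n + 2)]
    for i in range(1, n + 1):
        C[i][i] = p[i - 1]          # C[i,i] = p_i
        R[i][i] = i
    for d in range(1, n):           # fill diagonal by diagonal
        for i in range(1, n - d + 1):
            j = i + d
            psum = sum(p[i - 1:j])  # sum of p_s for s = i..j
            best, bestk = float('inf'), i
            for k in range(i, j + 1):
                cost = C[i][k - 1] + C[k + 1][j]
                if cost < best:
                    best, bestk = cost, k
            C[i][j] = best + psum
            R[i][j] = bestk
    return C[1][n], R
```

For the example probabilities (0.1, 0.2, 0.4, 0.3) it returns C[1,4] = 1.7 with root R[1,4] = 3, i.e., key C.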