CS 3343: Analysis of
Algorithms
Introduction to Greedy Algorithms
Outline
• Review of DP
• Greedy algorithms
– Like DP, not a single algorithm but a meta-algorithm (a general design technique)
Two steps to dynamic programming
• Formulate the solution as a recurrence
relation of solutions to subproblems.
• Specify an order of evaluation for the
recurrence so you always have what you
need.
Restaurant location problem
• You work in the fast food business
• Your company plans to open up new restaurants in
Texas along I-35
• Many towns along the highway, call them t1, t2, …, tn
• A restaurant at ti has estimated annual profit pi
• No two restaurants can be located within 10 miles of
each other due to regulation
• Your boss wants to maximize the total profit
• You want a big bonus
[Figure: restaurants along the highway, spaced at least 10 miles apart]
A DP algorithm
• Suppose you’ve already found the optimal solution
• It will either include tn or not include tn
• Case 1: tn not included in optimal solution
– Best solution same as best solution for t1 , …, tn-1
• Case 2: tn included in optimal solution
– Best solution is pn + best solution for t1 , …, tj , where j < n is the
largest index so that dist(tj, tn) ≥ 10
Recurrence formulation
• Let S(i) be the total profit of the optimal solution when the
first i towns are considered (not necessarily selected)
– S(n) is the optimal solution to the complete problem
S(n) = max { S(n-1)             (tn not selected)
           , S(j) + pn }        (tn selected; j < n & dist(tj, tn) ≥ 10)

Generalize:

S(i) = max { S(i-1)             (ti not selected)
           , S(j) + pi }        (ti selected; j < i & dist(tj, ti) ≥ 10)

Number of sub-problems: n. Boundary condition: S(0) = 0.
Dependency: S(i) depends on S(i-1) and S(j).
Example

Distance between consecutive towns (mi):  5  2  2  6  6  6  3 10  7
Profit (100k):                            6  7  9  8  3  3  2  4 12  5
S(i):                                     6  7  9  9 10 12 12 14 26 26

S(i) = max { S(i-1), S(j) + pi }   (j < i & dist(tj, ti) ≥ 10; a dummy town t0 with S(0) = 0 starts the recurrence)

Selected towns have profits 7, 3, 4, and 12. Optimal: 26
Complexity
• Time: O(nk), where k is the maximum
number of towns that are within 10 miles
to the left of any town
– In the worst case, O(n^2)
– Can be reduced to O(n) by pre-processing
• Memory: Θ(n)
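As a concrete sketch of this DP (the function name and input format are my own: distances are given as gaps between consecutive towns), the inner backward scan for j is what gives the O(nk) running time noted above:

```python
def max_profit(gaps, profit, min_dist=10):
    """DP for the restaurant location problem.
    gaps[i] = distance between town i+1 and town i+2 (1-indexed towns);
    profit[i] = profit of town i+1."""
    n = len(profit)
    # pos[i] = position of town i+1 along the highway
    pos = [0] * n
    for i in range(1, n):
        pos[i] = pos[i - 1] + gaps[i - 1]
    S = [0] * (n + 1)               # S[i]: best profit over the first i towns; S[0] = 0
    for i in range(1, n + 1):
        # largest j < i with dist(t_j, t_i) >= min_dist (j = 0 if none exists)
        j = i - 1
        while j >= 1 and pos[i - 1] - pos[j - 1] < min_dist:
            j -= 1
        S[i] = max(S[i - 1], S[j] + profit[i - 1])
    return S[n]

print(max_profit([5, 2, 2, 6, 6, 6, 3, 10, 7],
                 [6, 7, 9, 8, 3, 3, 2, 4, 12, 5]))  # 26
```

This reproduces the example above: the optimal total profit is 26.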
Knapsack problem
• Each item has a value and a weight
• Objective: maximize value
• Constraint: knapsack has a weight limitation
Three versions:
– 0-1 knapsack problem: take each item or leave it
– Fractional knapsack problem: items are divisible
– Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve?
We studied the 0-1 problem.
Formal definition (0-1 problem)
• Knapsack has weight limit W
• Items labeled 1, 2, …, n (arbitrarily)
• Items have weights w1, w2, …, wn
– Assume all weights are integers
– For practical reasons, only consider wi < W
• Items have values v1, v2, …, vn
• Objective: find a subset of items, S, such that Σi∈S wi ≤ W and Σi∈S vi is
maximal among all such (feasible) subsets
A DP algorithm
• Suppose you’ve found the optimal solution S
• Case 1: item n is included
– Find an optimal solution using items 1, 2, …, n-1 with weight limit W - wn
(wn of the total weight limit W is reserved for item n)
• Case 2: item n is not included
– Find an optimal solution using items 1, 2, …, n-1 with weight limit W
Recursive formulation
• Let V[i, w] be the optimal total value when items 1, 2, …, i
are considered for a knapsack with weight limit w
=> V[n, W] is the optimal solution
V[n, W] = max { V[n-1, W]
              , V[n-1, W-wn] + vn }

Generalize

V[i, w] = max { V[i-1, w]              (item i not taken)
              , V[i-1, w-wi] + vi }    (item i taken)
V[i, w] = V[i-1, w]   if wi > w        (item i cannot be taken)

Boundary condition: V[i, 0] = 0, V[0, w] = 0. Number of sub-problems = ?
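A minimal bottom-up sketch of this recurrence (the function name and the list-of-lists table are my own choices; the item data is from the example that follows):

```python
def knapsack_01(weights, values, W):
    """Bottom-up DP for the 0-1 knapsack problem.
    V[i][w] = best total value using items 1..i with weight limit w."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # boundary: V[0][w] = V[i][0] = 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(W + 1):
            if wi > w:                           # item i cannot be taken
                V[i][w] = V[i - 1][w]
            else:                                # take item i, or leave it
                V[i][w] = max(V[i - 1][w], V[i - 1][w - wi] + vi)
    return V

weights = [2, 4, 3, 5, 2, 6]
values  = [2, 3, 3, 6, 4, 9]
V = knapsack_01(weights, values, 10)
print(V[6][10])   # 15
```

The number of sub-problems is the number of table entries, (n+1) × (W+1).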
Example
• n = 6 (# of items)
• W = 10 (weight limit)
• Items (weight, value): (2, 2), (4, 3), (3, 3), (5, 6), (2, 4), (6, 9)
Fill in a table with one row per item i (with its wi and vi) and one column per weight limit w = 0, 1, …, 10, starting from the boundary row and column of zeros (V[0, w] = V[i, 0] = 0). Each entry is computed from two entries in the previous row:

V[i, w] = max { V[i-1, w]              (item i not taken)
              , V[i-1, w-wi] + vi }    (item i taken)
V[i, w] = V[i-1, w]   if wi > w        (item i cannot be taken)
The completed table (rows: items with wi and vi; columns: w = 0 to 10):

i  wi  vi |  w=0  1  2  3  4  5  6  7  8  9 10
0   -   - |    0  0  0  0  0  0  0  0  0  0  0
1   2   2 |    0  0  2  2  2  2  2  2  2  2  2
2   4   3 |    0  0  2  2  3  3  5  5  5  5  5
3   3   3 |    0  0  2  3  3  5  5  6  6  8  8
4   5   6 |    0  0  2  3  3  6  6  8  9  9 11
5   2   4 |    0  0  4  4  6  7  7 10 10 12 13
6   6   9 |    0  0  4  4  6  7  9 10 13 13 15
Trace back from V[6, 10] = 15 in the completed table: whenever V[i, w] ≠ V[i-1, w], item i was taken and we continue from V[i-1, w-wi]; otherwise we continue from V[i-1, w].

Item: 6, 5, 1
Weight: 6 + 2 + 2 = 10
Value: 9 + 4 + 2 = 15
Optimal value: 15
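The backtrace rule can be sketched together with the table fill (a self-contained illustration; the function name and 1-indexed item numbering are my own):

```python
def knapsack_with_items(weights, values, W):
    """Fill the 0-1 knapsack DP table, then backtrace: whenever
    V[i][w] != V[i-1][w], item i was taken and we move to V[i-1][w - wi]."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            V[i][w] = V[i - 1][w]                # default: item i not taken
            if weights[i - 1] <= w:
                V[i][w] = max(V[i][w], V[i - 1][w - weights[i - 1]] + values[i - 1])
    items, w = [], W
    for i in range(n, 0, -1):
        if V[i][w] != V[i - 1][w]:               # item i was taken
            items.append(i)                      # 1-indexed item numbers
            w -= weights[i - 1]
    return V[n][W], items

best, items = knapsack_with_items([2, 4, 3, 5, 2, 6], [2, 3, 3, 6, 4, 9], 10)
print(best, items)   # 15 [6, 5, 1]
```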
Time complexity
• Θ(nW)
• Polynomial?
– Pseudo-polynomial: polynomial in the value of W, not in the number of bits
needed to encode W
– Works well if W is small
• Consider the following items (weight, value):
(10, 5), (15, 6), (20, 5), (18, 6)
• Weight limit 35
– Optimal solution: items 2, 4 (value = 12). Brute force iterates over 2^4 = 16 subsets
– Dynamic programming: fills up a 4 x 35 = 140-entry table
• What’s the problem?
– Many entries are unused: no subset of items adds up to such a weight
– Top-down may be better
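The closing remark — that top-down may be better because many (i, w) entries are never needed — can be sketched with memoized recursion, which computes only the states actually reached. The function name is hypothetical:

```python
from functools import lru_cache

def knapsack_topdown(weights, values, W):
    """Top-down (memoized) 0-1 knapsack: only reachable (i, w) states
    are ever computed, unlike the full bottom-up table."""
    @lru_cache(maxsize=None)
    def V(i, w):
        if i == 0 or w == 0:
            return 0
        best = V(i - 1, w)                       # item i not taken
        if weights[i - 1] <= w:                  # item i taken, if it fits
            best = max(best, V(i - 1, w - weights[i - 1]) + values[i - 1])
        return best
    return V(len(weights), W)

# the pseudo-polynomial example from this slide:
print(knapsack_topdown((10, 15, 20, 18), (5, 6, 5, 6), 35))  # 12
```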
Events scheduling problem
• A list of events to schedule
– ei has start time si and finishing time fi
– Indexed such that fi < fj if i < j
• Each event has a value vi
• Schedule to make the largest value
– You can attend only one event at any time
[Figure: nine events e1, …, e9 on a timeline; each event ei runs from start time si to finishing time fi, and overlapping events conflict]
Events scheduling problem
[Figure: the same nine-event timeline as above]
• V(i) is the optimal value that can be achieved
when the first i events are considered
• V(n) = max { V(n-1)          (en not selected)
             , V(j) + vn }     (en selected; j is the largest index with j < n and fj < sn)
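This recurrence can be sketched in Python. Here events are assumed pre-sorted by finish time, a binary search finds the latest compatible event j, and names are illustrative; this variant treats fj ≤ si as compatible (the slide uses strict fj < sn):

```python
import bisect

def max_value_schedule(events):
    """DP for weighted event scheduling.
    events: list of (start, finish, value), sorted by finish time."""
    n = len(events)
    finishes = [f for _, f, _ in events]
    V = [0] * (n + 1)                   # V[i]: best value among the first i events
    for i in range(1, n + 1):
        s, f, v = events[i - 1]
        # largest j with f_j <= s_i (0 if no compatible event exists)
        j = bisect.bisect_right(finishes, s, 0, i - 1)
        V[i] = max(V[i - 1],            # event i not selected
                   V[j] + v)            # event i selected
    return V[n]

events = [(0, 3, 5), (1, 4, 2), (3, 6, 4), (5, 8, 6)]
print(max_value_schedule(events))  # 11
```

With the binary search, the whole DP runs in O(n log n) after sorting.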
Restaurant location problem 2
• Now the objective is to maximize the
number of new restaurants (subject to the
distance constraint)
– In other words, we assume that each
restaurant makes the same profit, no matter
where it is opened
[Figure: restaurants along the highway, spaced at least 10 miles apart]
A DP Algorithm
• Exactly as before, but pi = 1 for all i
S(i) = max { S(i-1)
           , S(j) + pi }   (j < i & dist(tj, ti) ≥ 10)

becomes, with pi = 1:

S(i) = max { S(i-1)
           , S(j) + 1 }    (j < i & dist(tj, ti) ≥ 10)
Example
• Natural greedy: take the next town at least 10 miles away, giving 1 + 1 + 1 + 1 = 4
• Maybe greedy is ok here? Does it work for all cases?

Distance between consecutive towns (mi):  5  2  2  6  6  6  3 10  7
Profit (100k):                            1  1  1  1  1  1  1  1  1  1
S(i):                                     1  1  1  1  2  2  3  3  4  4

S(i) = max { S(i-1), S(j) + 1 }   (j < i & dist(tj, ti) ≥ 10; a dummy town t0 with S(0) = 0 starts the recurrence)

Optimal: 4
Comparison

Uniform profit:
Dist (mi):      5  2  2  6  6  6  3 10  7
Profit (100k):  1  1  1  1  1  1  1  1  1  1
S(i):           1  1  1  1  2  2  3  3  4  4

Varying profit:
Dist (mi):      5  2  2  6  6  6  3 10  7
Profit (100k):  6  7  9  8  3  3  2  4 12  5
S(i):           6  7  9  9 10 12 12 14 26 26

Uniform profit — benefit of taking t1 rather than t2? None!
t1 gives you more choices for the future.
Varying profit — benefit of waiting to see t2?
t2 may have a bigger profit.
Moral of the story
• If a better opportunity may come along next,
you may want to hold off on your decision
• Otherwise, grasp the current opportunity
immediately because there is no reason to
wait …
Greedy algorithm
• For certain problems, DP is an overkill
– A greedy algorithm may be guaranteed to give
the optimal solution
– Much more efficient
Formal argument
• Claim 1: if A = [m1, m2, …, mk] is an optimal solution to the
restaurant location problem for a set of towns [t1, …, tn]
– m1 < m2 < … < mk are indices of the selected towns
– then B = [m2, m3, …, mk] is an optimal solution to the sub-problem
[tj, …, tn], where tj is the first town that is at least 10 miles to the
right of tm1
• Proof by contradiction: suppose B is not the optimal
solution to the sub-problem, which means there is a better
solution B’ to the sub-problem
• Then A’ = m1 || B’ gives a better solution than A = m1 || B
=> A is not optimal => contradiction => B is optimal
[Figure: A = m1 followed by B = [m2, …, mk]; replacing B with a better B′ gives A′ = m1 || B′, which would beat A]
Implication of Claim 1
• If we know the first town that needs to be
chosen, we can reduce the problem to a
smaller sub-problem
– This is similar to dynamic programming
– Optimal substructure
Formal argument (cont’d)
• Claim 2: for the uniform-profit restaurant location
problem, there is an optimal solution that chooses t1
• Proof by contradiction: suppose that no optimal solution
can be obtained by choosing t1
– Say the first town chosen by an optimal solution S is ti, i > 1
– Replacing ti with t1 does not violate the distance constraint (t1 lies
further left), and the total profit remains the same => the resulting
solution S’ is also optimal
– Contradiction
– Therefore Claim 2 is valid
[Figure: S with first town ti; S’ with ti swapped for t1]
Implication of Claim 2
• We can simply choose the first town as
part of the optimal solution
– This is different from DP
– Decisions are made immediately
• By Claim 1, we then only need to repeat
this strategy on the remaining sub-problem
Greedy algorithm for restaurant
location problem
select t1
d = 0
for i = 2 to n
    d = d + dist(ti-1, ti)
    if d >= min_dist
        select ti
        d = 0
    end
end

Trace on the example (gaps between consecutive towns: 5 2 2 6 6 6 3 10 7):
d: 5, 7, 9, 15 → select t5, reset; 6, 12 → select t7, reset; 3, 13 → select t9, reset; 7
Selected: t1, t5, t7, t9
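The pseudocode above translates directly to Python (the function name and gap-list input convention are my own):

```python
def greedy_select(gaps, min_dist=10):
    """Greedy selection for the uniform-profit restaurant location problem.
    gaps[i] = distance between town i+1 and town i+2 (towns are 1-indexed).
    Returns the 1-indexed numbers of the selected towns."""
    selected = [1]              # always select t1 (justified by Claim 2)
    d = 0
    for i in range(2, len(gaps) + 2):
        d += gaps[i - 2]        # distance accumulated since the last selection
        if d >= min_dist:
            selected.append(i)
            d = 0
    return selected

print(greedy_select([5, 2, 2, 6, 6, 6, 3, 10, 7]))  # [1, 5, 7, 9]
```

One pass over the towns: Θ(n) time, Θ(1) extra space beyond the input, matching the complexity slide that follows.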
Complexity
• Time: Θ(n)
• Memory:
– Θ(n) to store the input
– Θ(1) for greedy selection
Events scheduling problem
• Objective: to schedule the maximal number of events
• Could set vi = 1 for all i and solve by DP, but that would be overkill
• Greedy strategy: choose the first-finishing event that is compatible with
previous selection (1, 2, 4, 6, 8 for the above example)
• Why is this a valid strategy?
– Claim 1: optimal substructure
– Claim 2: there is an optimal solution that chooses e1
– Proof by contradiction: suppose that no optimal solution contains e1
– Say the first event chosen is ei => all other chosen events start after ei finishes
– Replacing ei by e1 results in another optimal solution (e1 finishes no later than ei, so no new conflicts arise)
– Contradiction
• Simple idea: attend the event that will leave you with the most time
when it finishes
[Figure: the nine-event timeline; greedy selects e1, e2, e4, e6, e8]
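The earliest-finish greedy strategy can be sketched as follows (a minimal illustration; the function name and sample events are my own):

```python
def max_events(events):
    """Greedy event scheduling: repeatedly take the compatible event
    that finishes earliest. events: list of (start, finish) pairs."""
    chosen, last_finish = [], float("-inf")
    for s, f in sorted(events, key=lambda e: e[1]):   # by finish time
        if s >= last_finish:        # compatible with everything chosen so far
            chosen.append((s, f))
            last_finish = f
    return chosen

events = [(0, 3), (3, 5), (1, 8), (5, 9)]
print(len(max_events(events)))  # 3
```

Sorting dominates, so the whole algorithm runs in O(n log n).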
Knapsack problem
• Each item has a value and a weight
• Objective: maximize value
• Constraint: knapsack has a weight limitation
Three versions:
– 0-1 knapsack problem: take each item or leave it
– Fractional knapsack problem: items are divisible
– Unbounded knapsack problem: unlimited supplies of each item
Which one is easiest to solve?
We can solve the fractional knapsack
problem using a greedy algorithm.
Greedy algorithm for fractional
knapsack problem
• Compute value/weight ratio for each item
• Sort items by their value/weight ratio into
decreasing order
– Call the remaining item with the highest ratio the most
valuable item (MVI)
• Iteratively:
– If taking all of the MVI does not exceed the weight limit, select all of it
– Otherwise, select the MVI partially, up to the weight limit
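These steps can be sketched in a few lines, assuming items are given as (weight, value) pairs (the function name is my own):

```python
def fractional_knapsack(items, W):
    """Greedy fractional knapsack: take items in decreasing value/weight
    order, splitting the last one if needed. items: list of (weight, value)."""
    total = 0.0
    remaining = W
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if w <= remaining:                  # take the whole item (the MVI)
            total += v
            remaining -= w
        else:                               # take a fraction of it and stop
            total += v * remaining / w
            break
    return total

items = [(2, 2), (4, 3), (3, 3), (5, 6), (2, 4), (6, 9)]
print(fractional_knapsack(items, 10))  # 15.4
```

This is the example worked on the next two slides: all of items 5 and 6, then 2 of the 5 pounds of item 4.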
Example
• Weight limit: 10

item  Weight (LB)  Value ($)  $ / LB
1     2            2          1
2     4            3          0.75
3     3            3          1
4     5            6          1.2
5     2            4          2
6     6            9          1.5
Example
• Weight limit: 10
• Take item 5
– 2 LB, $4
• Take item 6
– 8 LB, $13
• Take 2 LB of item 4
– 10 LB, $15.4

item  Weight (LB)  Value ($)  $ / LB
5     2            4          2
6     6            9          1.5
4     5            6          1.2
1     2            2          1
3     3            3          1
2     4            3          0.75
Why is greedy algorithm for
fractional knapsack problem valid?
• Claim: the optimal solution must contain the MVI as
much as possible (either up to the weight limit or until
MVI is exhausted)
• Proof by contradiction: suppose that the optimal solution
does not use all available MVI (i.e., there are still w units of MVI
left unused, w < W, while other items are chosen)
– We can replace w pounds of less valuable items with the MVI
– The total weight is the same, but the value is higher than the
“optimal”
– Contradiction
[Figure: swapping w pounds of less valuable items for w pounds of MVI]
Elements of greedy algorithm
1. Optimal substructure
2. Locally optimal decisions lead to a globally
optimal solution
• For most optimization problems, a greedy algorithm
will not guarantee an optimal solution
• But it may give you a good starting point for
other optimization techniques
• Starting from the next lecture, we’ll study several
problems in graph theory that can actually be
solved by greedy algorithms