David Luebke 1
CS 332: Algorithms
Greedy Algorithms
David Luebke 2
Review: Dynamic Programming
● Dynamic programming is another strategy for
designing algorithms
● Use when problem breaks down into recurring
small subproblems
David Luebke 3
Review: Optimal Substructure of
LCS
● Observation 1: Optimal substructure
■ A simple recursive algorithm will suffice
■ Draw sample recursion tree from c[3,4]
■ What will be the depth of the tree?
● Observation 2: Overlapping subproblems
■ Find some places where we solve the same subproblem more than once
The LCS recurrence:
$$
c[i,j] =
\begin{cases}
c[i-1,\,j-1] + 1 & \text{if } x[i] = y[j] \\
\max\bigl(c[i,\,j-1],\ c[i-1,\,j]\bigr) & \text{otherwise}
\end{cases}
$$
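As a concrete illustration (my addition, not from the original slides), the recurrence translates directly into the following recursive Python sketch; the name lcs_length is just a placeholder. Calling it naively re-solves the same (i, j) pairs many times, which is exactly the overlapping-subproblems observation above.

```python
def lcs_length(x, y, i=None, j=None):
    """Length of the LCS of x[:i] and y[:j], written straight from the
    recurrence above. Naive: subproblems repeat, so it takes exponential time."""
    if i is None:
        i, j = len(x), len(y)
    if i == 0 or j == 0:              # an empty prefix contributes nothing
        return 0
    if x[i - 1] == y[j - 1]:          # x[i] = y[j] in the slides' 1-based notation
        return lcs_length(x, y, i - 1, j - 1) + 1
    return max(lcs_length(x, y, i, j - 1),
               lcs_length(x, y, i - 1, j))

print(lcs_length("ABCBDAB", "BDCABA"))   # 4
```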
David Luebke 4
Review: Structure of Subproblems
● For the LCS problem:
■ There are few subproblems in total
■ And many recurring instances of each
(unlike divide & conquer, where subproblems unique)
● How many distinct problems exist for the LCS
of x[1..m] and y[1..n]?
● A: mn
David Luebke 5
Memoization
● Memoization is another way to deal with overlapping
subproblems
■ After computing the solution to a subproblem, store in a
table
■ Subsequent calls just do a table lookup
● Can modify recursive alg to use memoization:
■ There are mn subproblems
■ How many times is each subproblem wanted?
■ What will be the running time for this algorithm? The
running space?
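One way to add the table (a hedged sketch, not the course's code) is to cache results keyed on (i, j); each of the mn subproblems is then computed once and afterwards looked up, so both running time and space are O(mn).

```python
from functools import lru_cache

def lcs_length_memo(x, y):
    """Memoized recursive LCS: same recursion as before, but each (i, j)
    subproblem is solved once and stored in a lookup table."""
    @lru_cache(maxsize=None)           # the table: maps (i, j) -> c[i, j]
    def c(i, j):
        if i == 0 or j == 0:
            return 0
        if x[i - 1] == y[j - 1]:
            return c(i - 1, j - 1) + 1
        return max(c(i, j - 1), c(i - 1, j))
    return c(len(x), len(y))
```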
David Luebke 6
Review: Dynamic Programming
● Dynamic programming: build table bottom-up
■ Same table as memoization, but instead of starting
at (m,n) and recursing down, start at (1,1)
● Longest Common Subsequence: LCS easy to calculate from LCS of prefixes
○ As your homework shows, can actually reduce space to
O(min(m,n))
● Knapsack problem: we’ll review this in a bit
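A possible bottom-up version (again my own sketch): fill the table from the empty prefixes upward, keeping only the previous row, which is one way to reach the O(min(m, n)) space bound mentioned above.

```python
def lcs_length_dp(x, y):
    """Bottom-up LCS over prefixes. Only the previous row of the table is
    retained, so the extra space is O(min(m, n))."""
    if len(y) > len(x):
        x, y = y, x                    # keep y as the shorter string
    prev = [0] * (len(y) + 1)          # row for the prefix x[1..i-1]
    for i in range(1, len(x) + 1):
        curr = [0] * (len(y) + 1)      # row for the prefix x[1..i]
        for j in range(1, len(y) + 1):
            if x[i - 1] == y[j - 1]:
                curr[j] = prev[j - 1] + 1
            else:
                curr[j] = max(curr[j - 1], prev[j])
        prev = curr
    return prev[-1]
```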
David Luebke 7
Review: Dynamic Programming
● Summary of the basic idea:
■ Optimal substructure: optimal solution to problem
consists of optimal solutions to subproblems
■ Overlapping subproblems: few subproblems in total,
many recurring instances of each
■ Solve bottom-up, building a table of solved
subproblems that are used to solve larger ones
● Variations:
■ “Table” could be 3-dimensional, triangular, a tree, etc.
David Luebke 8
Greedy Algorithms
● A greedy algorithm always makes the choice that
looks best at the moment
■ My everyday examples:
○ Walking to the Corner
○ Playing a bridge hand
■ The hope: a locally optimal choice will lead to a globally
optimal solution
■ For some problems, it works
● Dynamic programming can be overkill; greedy
algorithms tend to be easier to code
David Luebke 9
Activity-Selection Problem
● Problem: get your money’s worth out of a
carnival
■ Buy a wristband that lets you onto any ride
■ Lots of rides, each starting and ending at different
times
■ Your goal: ride as many rides as possible
○ An alternative goal that we don’t solve here:
maximize time spent on rides
● Welcome to the activity selection problem
David Luebke 10
Activity-Selection
● Formally:
■ Given a set S of n activities
si = start time of activity i
fi = finish time of activity i
■ Find max-size subset A of compatible activities
■ Assume (wlog) that f1 ≤ f2 ≤ … ≤ fn
[Figure: six sample activities, numbered 1 through 6, drawn as overlapping intervals on a timeline]
David Luebke 11
Activity Selection:
Optimal Substructure
● Let k be the minimum activity in A (i.e., the one with the earliest finish time). Then A - {k} is an optimal solution to S’ = {i ∈ S: si ≥ fk}
■ In words: once activity #1 is selected, the problem reduces to finding an optimal solution for activity-selection over the activities in S compatible with #1
■ Proof: if we could find an optimal solution B’ to S’ with |B’| > |A - {k}|,
○ Then B’ ∪ {k} is compatible
○ And |B’ ∪ {k}| > |A|
David Luebke 12
Activity Selection:
Repeated Subproblems
● Consider a recursive algorithm that tries all
possible compatible subsets to find a maximal
set, and notice repeated subproblems:
[Figure: recursion tree for this brute-force algorithm. The root S branches on “1 ∈ A?”: yes leads to S’, no leads to S - {1}. Each of those branches on “2 ∈ A?”, giving S’’ and S’ - {2} under S’, and S’’ and S - {1,2} under S - {1}. The subproblem S’’ appears in two different branches.]
David Luebke 13
Greedy Choice Property
● Dynamic programming? Memoize? Yes, but…
● Activity selection problem also exhibits the greedy
choice property:
■ Locally optimal choice ⇒ globally optimal sol’n
■ Theorem 17.1: if S is an activity selection problem sorted by finish time, then ∃ an optimal solution A ⊆ S such that {1} ⊆ A
○ Sketch of proof: if ∃ an optimal solution B that does not contain {1}, we can always replace the first activity in B with {1} (Why?). Same number of activities, thus optimal.
David Luebke 14
Activity Selection:
A Greedy Algorithm
● So actual algorithm is simple:
■ Sort the activities by finish time
■ Schedule the first activity
■ Then schedule the next activity in the sorted list that starts after the previous activity finishes
■ Repeat until no more activities
● The intuition is even simpler:
■ Always pick the available ride that will end soonest (see the sketch below)
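A minimal Python sketch of that schedule (illustrative only, not the course’s code; activities are assumed to be given as (start, finish) pairs):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, then repeatedly take
    the first activity that starts at or after the last chosen finish time."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Example: picks three of the six activities.
print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```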
David Luebke 15
Minimum Spanning Tree Revisited
● Recall: MST problem has optimal substructure
■ Prove it
● Is Prim’s algorithm greedy? Why?
● Is Kruskal’s algorithm greedy? Why?
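For comparison, here is a rough sketch of Prim’s algorithm (my addition, not from the slides; the adjacency-list format is assumed): its greedy choice is to always add the lightest edge crossing the cut between the tree built so far and the remaining vertices.

```python
import heapq

def prim_mst_weight(adj, start=0):
    """Prim's algorithm on an adjacency list {u: [(v, weight), ...]}.
    Greedy choice: repeatedly take the minimum-weight edge that reaches
    a vertex not yet in the tree."""
    visited = set()
    heap = [(0, start)]                # (weight of edge into vertex, vertex)
    total = 0
    while heap and len(visited) < len(adj):
        w, u = heapq.heappop(heap)
        if u in visited:
            continue                   # stale entry; u was already reached more cheaply
        visited.add(u)
        total += w
        for v, weight in adj[u]:
            if v not in visited:
                heapq.heappush(heap, (weight, v))
    return total
```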
David Luebke 16
Review:
The Knapsack Problem
● The famous knapsack problem:
■ A thief breaks into a museum. Fabulous paintings,
sculptures, and jewels are everywhere. The thief has
a good eye for the value of these objects, and knows
that each will fetch hundreds or thousands of dollars
on the clandestine art collector’s market. But, the
thief has only brought a single knapsack to the scene
of the robbery, and can take away only what he can
carry. What items should the thief take to maximize
the haul?
David Luebke 17
Review: The Knapsack Problem
● More formally, the 0-1 knapsack problem:
■ The thief must choose among n items, where the ith item is worth vi dollars and weighs wi pounds
■ Carrying at most W pounds, maximize value
○ Note: assume vi, wi, and W are all integers
○ “0-1” b/c each item must be taken or left in entirety
● A variation, the fractional knapsack problem:
■ Thief can take fractions of items
■ Think of items in 0-1 problem as gold ingots, in fractional
problem as buckets of gold dust
David Luebke 18
Review: The Knapsack Problem
And Optimal Substructure
● Both variations exhibit optimal substructure
● To show this for the 0-1 problem, consider the
most valuable load weighing at most W pounds
■ If we remove item j from the load, what do we know
about the remaining load?
■ A: remainder must be the most valuable load
weighing at most W - wj that thief could take from
museum, excluding item j
David Luebke 19
Solving The Knapsack Problem
● The optimal solution to the fractional knapsack
problem can be found with a greedy algorithm
■ How?
● The optimal solution to the 0-1 problem cannot be
found with the same greedy strategy
■ Greedy strategy: take in order of dollars/pound
■ Example: 3 items weighing 10, 20, and 30 pounds,
knapsack can hold 50 pounds
○ Suppose item 2 is worth $100. Assign values to the other items
so that the greedy strategy will fail
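A hedged sketch of the fractional greedy (take items in order of dollars per pound, splitting the last one); the dollar values below are made up to go with the 10/20/30-pound, 50-pound-capacity example, not taken from the slides.

```python
def fractional_knapsack(items, capacity):
    """Fractional knapsack: greedily take the densest items (value/weight),
    taking a fraction of the last item if it does not fit whole."""
    total_value = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)   # whole item, or whatever still fits
        total_value += value * take / weight
        capacity -= take
    return total_value

# (value, weight) pairs with illustrative values; item 2 is the $100 item.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # 240.0
```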
David Luebke 20
The Knapsack Problem:
Greedy Vs. Dynamic
● The fractional problem can be solved greedily
● The 0-1 problem cannot be solved with a
greedy approach
■ As you have seen, however, it can be solved with
dynamic programming
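And a compact sketch of that dynamic program (assuming integer weights, as noted earlier); best[w] holds the maximum value achievable with capacity w after each item is considered.

```python
def knapsack_01(items, capacity):
    """0-1 knapsack by dynamic programming. items is a list of
    (value, weight) pairs with integer weights."""
    best = [0] * (capacity + 1)        # best[w] = max value within capacity w
    for value, weight in items:
        # Scan capacities downward so each item is used at most once.
        for w in range(capacity, weight - 1, -1):
            best[w] = max(best[w], best[w - weight] + value)
    return best[capacity]

# Same items as the fractional example: the 0-1 optimum is 220 (items 2 and 3),
# which a dollars-per-pound greedy would not find.
print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))   # 220
```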