Heterogeneous Resource Scheduling Using
Apache Mesos for Cloud Native Frameworks
Sharma Podila
Senior Software Engineer
Netflix
Aug 20th
MesosCon 2015
Agenda
● Context, motivation
● Fenzo scheduler library
● Usage at Netflix
● Future direction
Why use Apache Mesos in a cloud?
Resource granularity
Application start latency
A tale of two frameworks
Reactive stream processing
Container deployment and management
Reactive stream processing, Mantis
● Cloud native
● Lightweight, dynamic jobs
○ Stateful, multi-stage
○ Real time, anomaly detection, etc.
● Task placement constraints
○ Cloud constructs
○ Resource utilization
● Service and batch style
Mantis job topology
Worker
SourceApp
Stage1
Stage2
Stage3
Sink
A job is a set of one or more stages
A stage is a set of one or more workers
A worker is a Mesos task
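The job/stage/worker relationship above can be sketched as a small data model (illustrative only, not Mantis's actual classes):

```python
from dataclasses import dataclass
from typing import List

# Illustrative data model: a job is a set of one or more stages, a stage is
# a set of one or more workers, and each worker backs one Mesos task.
@dataclass
class Worker:
    task_id: str  # the Mesos task ID backing this worker

@dataclass
class Stage:
    workers: List[Worker]

@dataclass
class Job:
    stages: List[Stage]

    def total_tasks(self) -> int:
        # One Mesos task per worker, across all stages
        return sum(len(stage.workers) for stage in self.stages)

job = Job(stages=[
    Stage(workers=[Worker("src-0")]),                 # SourceApp
    Stage(workers=[Worker("s1-0"), Worker("s1-1")]),  # Stage1
    Stage(workers=[Worker("sink-0")]),                # Sink
])
print(job.total_tasks())  # → 4
```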
Job
Container management, Titan
● Cloud native
● Service and batch workloads
● Jobs with multiple sets of container tasks
● Container placement constraints
○ Cloud constructs
○ Resource affinity
○ Task locality
Job 2
Set 0
Set 1
Container scheduling model
Job 1
Co-locate tasks from multiple task sets
Why develop a new framework?
Easy to write a new framework?
What about scale?
Performance?
Fault tolerance?
Availability?
And scheduling is a hard problem to solve
Long term justification is needed to
create a new Mesos framework
Our motivations for new framework
● Cloud native
(autoscaling)
● Customizable task placement optimizations
(Mix of service, batch, and stream topologies)
Cluster autoscaling challenge
Host 1 Host 2 Host 3 Host 4
Host 1 Host 2 Host 3 Host 4
vs.
For long running stateful services
Components of a Mesos framework
API for users to interact
Be connected to Mesos via the driver
Compute resource assignments for tasks
Fenzo
A common scheduling library for Mesos
frameworks
Fenzo usage in frameworks
Mesos master
Mesos framework
Task
requests
Available
resource
offers
Fenzo task
scheduler
Task assignment result
• Host1
• Task1
• Task2
• Host2
• Task3
• Task4
Persistence
Fenzo scheduling library
Heterogeneous
resources
Autoscaling
of cluster
Visibility of
scheduler
actions
Plugins for
Constraints, Fitness
High speed
Heterogeneous
task requests
Announcing availability of Fenzo in
Netflix OSS suite
Fenzo details
Scheduling problem
Fitness
Pending
Assigned
Urgency
N tasks to assign from M possible slaves
Scheduling optimizations
Speed vs. accuracy:
First fit assignment: ~O(1)
Optimal assignment: ~O(N * M)¹
Real-world schedulers trade off between the two
¹ Assuming tasks are not reassigned
Scheduling strategy
For each task:
On each host:
Validate hard constraints
Evaluate fitness and soft constraints
Until fitness is good enough, and
a minimum number of hosts has been evaluated
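The strategy above can be sketched as a nested loop with early termination (a simplified sketch; names and thresholds are illustrative, not Fenzo's actual API):

```python
# For each task: scan hosts, skip those failing hard constraints, score the
# rest by fitness (plus soft constraints), and stop early once the best
# fitness is "good enough" and a minimum number of hosts has been evaluated.
def assign(task, hosts, hard_constraints, fitness_fn,
           good_enough=0.9, min_hosts_evaluated=2):
    best_host, best_fitness, evaluated = None, -1.0, 0
    for host in hosts:
        if not all(c(task, host) for c in hard_constraints):
            continue  # hard constraint failed: host is ineligible
        score = fitness_fn(task, host)  # fitness and soft constraints
        evaluated += 1
        if score > best_fitness:
            best_host, best_fitness = host, score
        if best_fitness >= good_enough and evaluated >= min_hosts_evaluated:
            break  # good enough: skip the remaining hosts
    return best_host

# Toy run: the hard constraint requires enough free CPUs; the fitness
# favors fuller hosts (CPU bin packing).
hosts = [{"free": 2, "total": 8}, {"free": 6, "total": 8}, {"free": 8, "total": 8}]
fits = lambda task, h: h["free"] >= task["cpus"]
fitness = lambda task, h: (h["total"] - h["free"]) / h["total"]
print(assign({"cpus": 4}, hosts, [fits], fitness))  # → {'free': 6, 'total': 8}
```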
Task constraints
Soft
Hard
Extensible
Built-in Constraints
Host attribute value constraint
Task
HostAttrConstraint:instanceType=r3
Host1
Attr:instanceType=m3
Host2
Attr:instanceType=r3
Host3
Attr:instanceType=c3
Fenzo
Unique host attribute constraint
Task
UniqueAttr:zone
Host1
Attr:zone=1a
Host2
Attr:zone=1a
Host3
Attr:zone=1b
Fenzo
Balance host attribute constraint
Host1
Attr:zone=1a
Host2
Attr:zone=1b
Host3
Attr:zone=1c
Job with 9 tasks, BalanceAttr:zone
Fenzo
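The unique host attribute constraint above can be illustrated with a minimal evaluator (a sketch, not Fenzo's ConstraintEvaluator API): a task may only land on a host whose attribute value, such as zone, is not already used by a co-scheduled task of the same job.

```python
# Returns True if the host's attribute value is not yet taken by the job.
def unique_attr_ok(host_attrs, attr_name, used_values):
    return host_attrs.get(attr_name) not in used_values

hosts = {
    "Host1": {"zone": "1a"},
    "Host2": {"zone": "1a"},
    "Host3": {"zone": "1b"},
}
used = set()
placed = []
for name, attrs in hosts.items():
    if unique_attr_ok(attrs, "zone", used):
        placed.append(name)
        used.add(attrs["zone"])  # remember the zone this job now occupies
print(placed)  # → ['Host1', 'Host3']  (Host2 shares zone 1a with Host1)
```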
Fitness evaluation
Degree of fitness
Composable
Bin packing fitness calculator
fitness = usedCPUs / totalCPUs
Fitness for Host1..Host5: 0.25, 0.5, 0.75, 1.0, 0.0
✔ The task goes to the eligible host with the highest fitness
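The CPU bin-packing calculator in code (a sketch): fitness favors the most utilized host that can still hold the task, which drives tasks onto fewer hosts and leaves others idle so the cluster can scale down.

```python
def cpu_bin_pack_fitness(used_cpus, total_cpus):
    return used_cpus / total_cpus

# name -> (usedCPUs, totalCPUs), matching the slide's 0.25, 0.5, 0.75, 1.0, 0.0
hosts = {
    "Host1": (2, 8), "Host2": (4, 8), "Host3": (6, 8),
    "Host4": (8, 8), "Host5": (0, 8),
}
task_cpus = 2
# Only hosts with enough free CPUs are eligible; Host4 is full and drops out.
eligible = {name: cpu_bin_pack_fitness(used, total)
            for name, (used, total) in hosts.items()
            if total - used >= task_cpus}
best = max(eligible, key=eligible.get)
print(best)  # → Host3 (fitness 0.75, the highest among eligible hosts)
```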
Composable fitness calculators
Fitness
= ( BinPackFitness * BinPackWeight +
RuntimePackFitness * RuntimeWeight
) / 2.0
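The weighted combination above, directly in code (the weights and the runtime-packing calculator are illustrative):

```python
# Combine two fitness scores with per-calculator weights, as in the formula
# Fitness = (BinPackFitness * BinPackWeight + RuntimePackFitness * RuntimeWeight) / 2.0
def combined_fitness(bin_pack_fitness, runtime_pack_fitness,
                     bin_pack_weight=1.0, runtime_weight=1.0):
    return (bin_pack_fitness * bin_pack_weight +
            runtime_pack_fitness * runtime_weight) / 2.0

print(combined_fitness(0.75, 0.5))  # → 0.625
```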
Cluster autoscaling in Fenzo
ASG/Cluster:
mantisagent
MinIdle: 8
MaxIdle: 20
CooldownSecs:
360
ASG/cluster:
computeCluster
MinIdle: 8
MaxIdle: 20
CooldownSecs:
360
Fenzo
ScaleUp
action:
Cluster, N
ScaleDown
action:
Cluster,
HostList
Rules based cluster autoscaling
● Set up rules per host attribute value
○ E.g., one autoscale rule per ASG/cluster: one cluster for network-intensive jobs, another for CPU/memory-intensive jobs
● Sample:
(Diagram: idle host count per cluster; falling below min triggers scale up, rising above max triggers scale down)

Cluster Name | Min Idle Count | Max Idle Count | Cooldown Secs
NetworkClstr | 5              | 15             | 360
ComputeClstr | 10             | 20             | 300
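The per-cluster rule evaluation implied by the table above can be sketched as follows (illustrative, not Fenzo's AutoScaleRule API): scale up when idle hosts fall below the minimum, scale down when they exceed the maximum, and do nothing while the cooldown is in effect.

```python
import time

def evaluate_rule(idle_hosts, min_idle, max_idle,
                  last_action_ts, cooldown_secs, now=None):
    now = time.time() if now is None else now
    if now - last_action_ts < cooldown_secs:
        return "noop"  # still cooling down from the previous action
    if idle_hosts < min_idle:
        return "scale_up"
    if idle_hosts > max_idle:
        return "scale_down"
    return "noop"

# NetworkClstr row: min idle 5, max idle 15, cooldown 360 s
print(evaluate_rule(idle_hosts=3, min_idle=5, max_idle=15,
                    last_action_ts=0, cooldown_secs=360, now=1000))    # → scale_up
print(evaluate_rule(idle_hosts=20, min_idle=5, max_idle=15,
                    last_action_ts=900, cooldown_secs=360, now=1000))  # → noop
```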
Shortfall analysis based autoscaling
● Rule-based scale up has a cool down period
○ What if there’s a surge of incoming requests?
● Pending requests trigger shortfall analysis
○ Scale up happens regardless of cool down period
○ Remembers which tasks have already been covered
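A minimal sketch of shortfall analysis (illustrative; the helper and its parameters are assumptions, not Fenzo's implementation): when pending tasks cannot be placed, estimate the extra hosts needed from the CPUs of tasks not already covered by a previous shortfall scale-up, bypassing the cooldown.

```python
import math

def shortfall_hosts(pending_tasks, covered_ids, cpus_per_host):
    # Only count tasks that a previous shortfall scale-up hasn't covered yet
    uncovered = [t for t in pending_tasks if t["id"] not in covered_ids]
    needed_cpus = sum(t["cpus"] for t in uncovered)
    covered_ids.update(t["id"] for t in uncovered)  # remember covered tasks
    return math.ceil(needed_cpus / cpus_per_host)

covered = set()
pending = [{"id": "a", "cpus": 6}, {"id": "b", "cpus": 6}, {"id": "c", "cpus": 4}]
print(shortfall_hosts(pending, covered, cpus_per_host=8))  # → 2
# A second pass over the same pending tasks requests nothing new:
print(shortfall_hosts(pending, covered, cpus_per_host=8))  # → 0
```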
Usage at Netflix
Cluster autoscaling
(Chart: number of Mesos slaves over time)
Scheduler run time in milliseconds (over a week)
Average: 2 ms
Maximum: 38 ms
Note: times can vary depending on the number of tasks, the number and types of constraints, and the number of hosts
Experimenting with Fenzo
Note: Experiments can be run without requiring a physical cluster
A bin packing experiment
Host 3
Host 2
Host 1
Mesos Slaves
Host 3000
Tasks with cpu=1
Tasks with cpu=3
Tasks with cpu=6
Fenzo
Iteratively assign
Bunch of tasks
Start with idle cluster
Bin packing sample results
Bin pack tasks using Fenzo’s built-in CPU bin packer
Task runtime bin packing sample
Bin pack tasks based on custom fitness calculator to pack
short vs. long run time jobs separately
Scheduler speed experiment
Hosts: 8 CPUs each
Task mix: 20% running 1-CPU jobs, 40% running 4-CPU, and 40% running 6-CPU jobs
Goal: starting from an empty cluster, assign tasks to fill all hosts
Scheduling strategy: CPU bin packing

# of hosts | # of tasks to assign each time | Avg time | Avg time per task | Min time | Max time | Total time
1,000      | 1                              | 3 ms     | 3 ms              | 1 ms     | 188 ms   | 9 s
1,000      | 200                            | 40 ms    | 0.2 ms            | 17 ms    | 100 ms   | 0.5 s
10,000     | 1                              | 29 ms    | 29 ms             | 10 ms    | 240 ms   | 870 s
10,000     | 200                            | 132 ms   | 0.66 ms           | 22 ms    | 434 ms   | 19 s
Accessing Fenzo
Code at
https://github.com/Netflix/Fenzo
Wiki at
https://github.com/Netflix/Fenzo/wiki
Future directions
● Task management SLAs
● Support for newer Mesos features
● Collaboration
To summarize...
Fenzo: scheduling library for
frameworks
Heterogeneous
resources
Autoscaling
of cluster
Visibility of
scheduler
actions
Plugins for
Constraints, Fitness
High speed
Heterogeneous
task requests
Fenzo is now available in the Netflix
OSS suite at
https://github.com/Netflix/Fenzo
Questions?
Heterogeneous Resource Scheduling Using
Apache Mesos for Cloud Native Frameworks
Sharma Podila
spodila @ netflix . com
@podila