Mesos Python framework
O. Sallou, DevExp 2016
CC-BY-SA 3.0
Interacting with Mesos: two choices
Python API:
- Not compatible with Python 3
- Easy to implement
- Bindings over the C API
HTTP API:
- HTTP calls with a persistent connection and streaming
- More recent
- Language independent
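The rest of this deck uses the Python API. For the HTTP route, here is a minimal sketch of the subscribe call (it assumes a master on localhost:5050 and the `requests` library; proper parsing of the RecordIO-framed event stream is omitted):

import json
import requests

# SUBSCRIBE is the HTTP equivalent of registering a framework.
subscribe = {
    "type": "SUBSCRIBE",
    "subscribe": {
        "framework_info": {"user": "", "name": "Example HTTP framework"}
    }
}

# The connection stays open; Mesos streams events (SUBSCRIBED, OFFERS,
# UPDATE, ...) back on it, framed with RecordIO (length prefix + JSON).
response = requests.post(
    "http://localhost:5050/api/v1/scheduler",
    data=json.dumps(subscribe),
    headers={"Content-Type": "application/json",
             "Accept": "application/json"},
    stream=True)

for line in response.iter_lines():
    if line:
        print(line)  # alternating length prefixes and JSON events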
Workflow
Register => Listen for offers => Accept/decline offers => Listen for task status
Messages use Protobuf [0]; the HTTP interface also supports JSON.
See the Mesos protobuf definitions [1] to read or create messages.
[0] https://developers.google.com/protocol-buffers/
[1] https://github.com/apache/mesos/blob/master/include/mesos/mesos.proto
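As a quick illustration of what "messages use Protobuf" means in practice, a tiny sketch of building and round-tripping one of those messages with the generated mesos_pb2 module (import path from the classic Python bindings):

from mesos.interface import mesos_pb2

# Build a message, serialize it to the wire format, and read it back.
framework = mesos_pb2.FrameworkInfo()
framework.user = ""
framework.name = "hello-mesos"

data = framework.SerializeToString()   # bytes sent to / received from Mesos

decoded = mesos_pb2.FrameworkInfo()
decoded.ParseFromString(data)
print(decoded.name)                    # "hello-mesos"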
Simple example
Python API
Register
from mesos.interface import mesos_pb2
import mesos.native

framework = mesos_pb2.FrameworkInfo()
# mesos_pb2.XXX() read/use/write protobuf Mesos objects
framework.user = ""  # Have Mesos fill in the current user.
framework.name = "Example Mesos framework"
framework.failover_timeout = 3600 * 24 * 7  # 1 week
# Optionally, restart from a previous run
mesos_framework_id = mesos_pb2.FrameworkID()
mesos_framework_id.value = XYZ
framework.id.MergeFrom(mesos_framework_id)
framework.principal = "godocker-mesos-framework"

# The executor that will run our tasks
executor = mesos_pb2.ExecutorInfo()
executor.executor_id.value = "sample"
executor.name = "Example executor"

# We will create our scheduler class MesosScheduler in the next slide
mesosScheduler = MesosScheduler(1, executor)

# Let's declare a framework, with a scheduler to manage offers,
# connected to the Mesos master through ZooKeeper
driver = mesos.native.MesosSchedulerDriver(
    mesosScheduler,
    framework,
    'zk://127.0.0.1:2881')
driver.start()
When scheduler ends...
When the scheduler stops, Mesos will kill any remaining tasks after the
"failover_timeout" value.
One can set FrameworkID to restart the framework and keep the same context:
Mesos will keep the tasks and send their status messages to the framework.
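A sketch of that restart pattern, reusing the framework and mesos_pb2 objects from the Register slide (the file path is only an illustration; any persistent store works):

import os

FRAMEWORK_ID_FILE = "/var/lib/myframework/framework.id"  # illustrative path

# On startup, reuse the previously assigned framework ID if we have one.
if os.path.exists(FRAMEWORK_ID_FILE):
    with open(FRAMEWORK_ID_FILE) as f:
        mesos_framework_id = mesos_pb2.FrameworkID()
        mesos_framework_id.value = f.read().strip()
        framework.id.MergeFrom(mesos_framework_id)

# In MesosScheduler.registered(), persist the ID Mesos assigned:
#     with open(FRAMEWORK_ID_FILE, 'w') as f:
#         f.write(frameworkId.value)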
Scheduler skeleton
import logging
import mesos.interface
from mesos.interface import mesos_pb2

class MesosScheduler(mesos.interface.Scheduler):

    def __init__(self, max_tasks, executor):
        # Matches MesosScheduler(1, executor) on the Register slide;
        # the argument names here are illustrative.
        self.max_tasks = max_tasks
        self.executor = executor

    def registered(self, driver, frameworkId, masterInfo):
        logging.info("Registered with framework ID %s" % frameworkId.value)
        self.frameworkId = frameworkId.value

    def resourceOffers(self, driver, offers):
        '''
        Receive offers; an offer describes a node
        with available resources (cpu, mem, etc.)
        '''
        for offer in offers:
            logging.debug('Mesos:Offer:Decline')
            driver.declineOffer(offer.id)

    def statusUpdate(self, driver, update):
        '''
        Receive status info from submitted tasks
        (switch to running, failure of node, etc.)
        '''
        logging.debug("Task %s is in state %s" %
                      (update.task_id.value,
                       mesos_pb2.TaskState.Name(update.state)))

    def frameworkMessage(self, driver, executorId, slaveId, message):
        logging.debug("Received framework message")
        # usually, nothing to do here
Messages are asynchronous
Status updates and offers are asynchronous callbacks; the scheduler runs in a
separate thread.
You are never the initiator of the requests (except for registration), but you
will receive callback messages whenever something changes on the Mesos side
(a job switches to running, a node fails, …).
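Concretely, the main program only has to start the driver (created on the Register slide) and wait for it; a sketch:

import sys

# Block until the driver stops; callbacks are delivered on the driver's
# own thread while we wait.
status = driver.run()

# Alternative: run it in the background and join later
# driver.start()
# driver.join()

sys.exit(0 if status == mesos_pb2.DRIVER_STOPPED else 1)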
Submit a task
for offer in offers:
    # Get available cpu and mem for this offer
    offerCpus = 0
    offerMem = 0
    for resource in offer.resources:
        if resource.name == "cpus":
            offerCpus += resource.scalar.value
        elif resource.name == "mem":
            offerMem += resource.scalar.value
        # We could check for other resources here
    logging.debug("Mesos:Received offer %s with cpus: %s and mem: %s"
                  % (offer.id.value, offerCpus, offerMem))
    # We should check that the offer has enough resources
    sample_task = create_a_sample_task(offer)
    array_of_task = [sample_task]
    driver.launchTasks(offer.id, array_of_task)
Mesos supports any custom resource definition on nodes (gpu, slots, disk, …),
using scalar or range values.
When a task is launched, the requested resources are removed from the
available resources of the selected node.
Subsequent offers won't propose those resources again until the task is over
(or killed); see the sketch below.
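A sketch of the "check that the offer has enough resources" step mentioned above; the per-task requirements are assumptions for the example:

TASK_CPUS = 2     # example requirement
TASK_MEM = 3000   # MB, example requirement

for offer in offers:
    offerCpus = 0
    offerMem = 0
    for resource in offer.resources:
        if resource.name == "cpus":
            offerCpus += resource.scalar.value
        elif resource.name == "mem":
            offerMem += resource.scalar.value

    # Launch only when the offer covers the task, otherwise decline so
    # Mesos can offer those resources to another framework.
    if offerCpus >= TASK_CPUS and offerMem >= TASK_MEM:
        driver.launchTasks(offer.id, [create_a_sample_task(offer)])
    else:
        driver.declineOffer(offer.id)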
Define a task
def create_a_sample_task(offer):
    task = mesos_pb2.TaskInfo()

    # The container part (native or docker)
    container = mesos_pb2.ContainerInfo()
    container.type = 1  # mesos_pb2.ContainerInfo.Type.DOCKER

    # Let's add a volume
    volume = container.volumes.add()
    volume.container_path = "/tmp/incontainer"
    volume.host_path = "/tmp/test"
    volume.mode = 1  # mesos_pb2.Volume.Mode.RW

    # The command to execute, if not using the image entrypoint
    command = mesos_pb2.CommandInfo()
    command.value = "echo hello world"
    task.command.MergeFrom(command)

    # Unique identifier (or let Mesos assign one)
    task.task_id.value = XYZ_UNIQUE_IDENTIFIER
    # The slave where the task is executed
    task.slave_id.value = offer.slave_id.value
    task.name = "my_sample_task"

    # The resources/requirements.
    # Resources have names: cpus, mem and ports are available by default,
    # one can define custom ones per slave node
    # and request them by their name here.
    cpus = task.resources.add()
    cpus.name = "cpus"
    cpus.type = mesos_pb2.Value.SCALAR
    cpus.scalar.value = 2

    mem = task.resources.add()
    mem.name = "mem"
    mem.type = mesos_pb2.Value.SCALAR
    mem.scalar.value = 3000  # ~3 GB (mem is expressed in MB)
Define a task (next)
    # Now the Docker part
    docker = mesos_pb2.ContainerInfo.DockerInfo()
    docker.image = "debian:latest"
    docker.network = 2  # mesos_pb2.ContainerInfo.DockerInfo.Network.BRIDGE
    docker.force_pull_image = True

    # Let's map some ports; ports are resources like cpus and mem.
    # We will map container port 80 to an available host port.
    # For simplicity we pick the first available port of this offer and
    # skip the checks, assuming there is at least one port.
    offer_port = None
    for resource in offer.resources:
        if resource.name == "ports":
            for mesos_range in resource.ranges.range:
                offer_port = mesos_range.begin
                break

    # We map container port 80 to offer_port on the host
    docker_port = docker.port_mappings.add()
    docker_port.host_port = offer_port
    docker_port.container_port = 80

    # We tell Mesos that we reserve this port;
    # Mesos will remove it from next offers until task completion
    mesos_ports = task.resources.add()
    mesos_ports.name = "ports"
    mesos_ports.type = mesos_pb2.Value.RANGES
    port_range = mesos_ports.ranges.range.add()
    port_range.begin = offer_port
    port_range.end = offer_port

    # Merge the Docker info after the port mappings are set
    container.docker.MergeFrom(docker)
    task.container.MergeFrom(container)
    return task
Task status
def statusUpdate(self, driver, update):
    '''
    Receive status info from submitted tasks
    (switch to running, failure of node, etc.)
    '''
    logging.debug("Task %s is in state %s" %
                  (update.task_id.value, mesos_pb2.TaskState.Name(update.state)))
    if int(update.state) == 1:
        # Switched to RUNNING; update.data holds executor-provided details
        container_info = json.loads(update.data)
    if int(update.state) in [2, 3, 4, 5, 7]:
        # Over or failure
        logging.error("Task is over or failed")
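The integer comparisons above rely on the TaskState enum values; a slightly more readable sketch of the same checks uses the names generated from mesos.proto:

# Same logic with named enum constants instead of raw integers.
TERMINAL_STATES = [mesos_pb2.TASK_FINISHED, mesos_pb2.TASK_FAILED,
                   mesos_pb2.TASK_KILLED, mesos_pb2.TASK_LOST,
                   mesos_pb2.TASK_ERROR]

if update.state == mesos_pb2.TASK_RUNNING:
    container_info = json.loads(update.data)
elif update.state in TERMINAL_STATES:
    logging.error("Task is over or failed")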
Want to kill a task?
def resourceOffers(self, driver, offers):
    ...
    task_id = mesos_pb2.TaskID()
    task_id.value = my_unique_task_id
    driver.killTask(task_id)
A framework
- Quite easy to set up
- Many logs on the Mesos side for debugging
- Shares the same resources with other frameworks
- Different executors (docker, native, …)
In a few lines of code