HPC on OpenStack
Review of our Cloud Platform Project
Petar Forai, Erich Birngruber, Uemit Seren
Post-FOSDEM tech talk event @ UGent, 04.02.2019
Agenda
Who We Are & General Remarks (Petar Forai)
Cloud Deployment and Continuous Verification (Uemit Seren)
Cloud Monitoring System Architecture (Erich Birngruber)
The “Cloudster” and How we’re Building it!
Shamelessly stolen from Damien François’ talk “The convergence of HPC and BigData: What does it mean for HPC sysadmins?”
Who Are We
● Part of the Cloud Platform Engineering team at the molecular biology research institutes (IMP, IMBA, GMI) located in Vienna, Austria, at the Vienna Bio Center.
● Tasked with delivery and operations of IT infrastructure for ~ 40 research groups (~ 500 scientists).
● The IT department delivers a full stack of services, from workstations and networking to application hosting and development (among many others).
● Part of that infrastructure is the delivery of HPC services for our campus.
● 14 people in total for everything.
Vienna Bio Center Computing Profile
● Computing infrastructure almost exclusively dedicated to bioinformatics (genomics, image processing, cryo-electron microscopy, etc.)
● Almost all applications are data exploration, analysis and data processing; no simulation workloads
● All machinery for data acquisition is on site (sequencers, microscopes, etc.)
● Operating and running several compute clusters for batch computing and several compute clusters for stateful applications (web apps, databases, etc.)
What We Currently Have
● Siloed islands of infrastructure
● Can’t talk to other islands, can’t access data from other islands (or only with difficult logistics for users)
● Nightmare to manage
● No central automation across all resources is easily possible
What We’re Aiming At
Meet the CLIP Project
● OpenStack was chosen for further evaluation as the platform for this
● Set up a project, “CLIP” (Cloud Infrastructure Project), and formed a project team (4.0 FTE) with a multi-phase approach to delivering the project.
● The goal is to implement not only a new HPC platform but a software-defined datacenter strategy based on OpenStack, and to deliver HPC services on top of this platform
● Delivered in multiple phases
Tasks Performed within “CLIP”
● Build a PoC environment to explore and develop an understanding of OpenStack (~ 2 months)
● Start a deeper analysis of OpenStack (~ 8 months)
○ Define and develop the architecture of the cloud (understand HPC-specific impact)
○ Develop a deployment strategy and pick tooling for installation, configuration management, monitoring, testing
○ Develop integration into existing data center resources and services
○ Develop an understanding of operational topics like development procedures, upgrades, etc.
○ Benchmark
● Deploy the production cloud (~ 2 months and ongoing)
○ Purchase and install hardware
○ Develop the architecture and pick tooling for payload (HPC environments and applications)
○ Payload deployment
CLIP Cloud Architecture Hardware
● Heterogeneous nodes (high core count, high clock, high memory, GPU-accelerated, NVMe)
● First phase: ~ 100 compute nodes and ~ 3500 Intel Skylake cores
● 100 GbE SDN RDMA-capable Ethernet, some nodes with 2x or 4x ports
● ~ 250 TB NVMe IO nodes, ~ 200 GB/s
HPC Specific Adaptations
● Tuning, Tuning, Tuning required for excellent performance
○ NUMA clean instances (KVM process layout)
○ Huge pages (KSM etc.) setup
○ Core isolation
○ PCI-E passthrough (GPUs, NVMe, …) and SR-IOV (esp. for networking) crucial for good performance (see the flavor sketch below)
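To make the NUMA/hugepage/passthrough tuning concrete, here is a minimal sketch of the kind of Nova flavor extra specs involved; the flavor name “hpc.numa1” and the PCI alias “gpu” are made-up examples, while the hw:* properties and pci_passthrough:alias are standard Nova flavor extra specs.

# Illustrative flavor for NUMA-clean, hugepage-backed, CPU-pinned guests
openstack flavor create --vcpus 16 --ram 65536 --disk 40 hpc.numa1

# Pin vCPUs, keep the guest within one NUMA node, back memory with 1G hugepages
openstack flavor set hpc.numa1 \
  --property hw:cpu_policy=dedicated \
  --property hw:numa_nodes=1 \
  --property hw:mem_page_size=1GB

# Pass a GPU through via a PCI alias defined in nova.conf (alias name assumed)
openstack flavor set hpc.numa1 --property "pci_passthrough:alias"="gpu:1"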
Lessons Learned
● OpenStack is incredibly complex
● OpenStack is not a product. It is a framework.
● You need 2-3 OpenStack environments (development, staging, prod in our
case) to practice and understand upgrades and updates.
● The out-of-the-box experience and scalability of certain OpenStack subcomponents are not optimal and should be considered more of a reference implementation
○ Consider plugging in real hardware here
● Cloud networking is really hard (especially in our case)
Deployment and Cloud Verification
Deployment and Cloud Verification
● Red Hat OpenStack (OSP) uses the
upstream “TripleO” (OpenStack on
OpenStack) project for the OpenStack
deployment.
● The Undercloud (Red Hat terminology: Director) is a single-node deployment of OpenStack installed using Puppet.
● The Undercloud uses various OpenStack projects to deploy the Overcloud, which is our actual cloud where the payload will run
● TripleO supports deploying an HA overcloud
● The Overcloud can be installed either using a web GUI or entirely from the CLI by customizing YAML files (see the deploy sketch below).
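For orientation, the CLI-driven path boils down to a call like the one below, run on the undercloud; the environment file names here are placeholders for our customized TripleO YAML, not the exact files we pass.

# Run as the stack user on the undercloud; environment file names are illustrative
source ~/stackrc
openstack overcloud deploy \
  --templates \
  -e network-isolation.yaml \
  -e storage-config.yaml \
  -e clip-overrides.yaml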
Deployment and Cloud Verification
Deployment and Cloud Verification
● The web GUI is handy to play around with but not so great for fast iterations and infra as code.
→ Disable the web UI and deploy from the CLI.
● TripleO internally uses Heat to drive Puppet that drives Ansible ¯_(ツ)_/¯
● We decided to use Ansible to drive the TripleO OpenStack deployment (see the playbook sketch below).
● The deployment is split into 4 phases corresponding to 4 git repos:
a. clip-undercloud-prepare: Ansible playbooks that run on a bastion VM to prepare and install the undercloud using PXE and kickstart.
b. clip-tripleo: the customized YAML files for the TripleO configuration (storage, network settings, etc.)
c. clip-bootstrap: Ansible playbooks to initially deploy or update the overcloud using the configuration in the clip-tripleo repo
d. clip-os-infra: post-deployment customizations that are not exposed through TripleO or are very cumbersome to customize there
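To illustrate “Ansible drives TripleO”, a clip-bootstrap-style play could look roughly like the following; the host group, file names and task layout are hypothetical, not the repo's actual contents.

# Hypothetical excerpt of a bootstrap playbook (names and paths are illustrative)
- hosts: undercloud
  become: true
  become_user: stack
  tasks:
    - name: Render the overcloud deploy command from the clip-tripleo configuration
      template:
        src: overcloud-deploy.sh.j2
        dest: /home/stack/overcloud-deploy.sh
        mode: "0755"

    - name: Run (or update) the overcloud deployment
      command: /home/stack/overcloud-deploy.sh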
Deployment and Cloud Verification
● TripleO is slow because Heat → Puppet → Ansible !!
○ Small changes require a “stack update” → 20 minutes (even for a simple config stanza change and service restart).
● Why not move all customizations to Ansible (clip-os-infra)? Unfortunately not robust :-(
○ A stack update (scale down/up) will overwrite our changes
○ → services can be down
● Let’s compromise:
○ Iterate on customizations using Ansible
○ Move finalized changes back to TripleO (see the environment-file sketch below)
● Ansible everywhere else!
○ clip-aci-infra: prepare the networking primitives for the 3 different OpenStack environments
○ Move nodes between environments in the network fabric
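As an example of moving a finalized change back to TripleO, a setting first iterated on via clip-os-infra can usually be expressed in a TripleO environment file and passed to the next overcloud deploy. NovaReservedHostMemory is a real TripleO parameter; the value and file name below are purely illustrative.

# clip-overrides.yaml (illustrative file name), passed with -e on overcloud deploy
parameter_defaults:
  # Reserve host memory on compute nodes so the hypervisor is never starved
  NovaReservedHostMemory: 4096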
Deployment and Cloud Verification
● We have 3 different environments (dev, staging and production) to try out updates and configuration changes. We can guarantee reproducibility of the deployment because we have everything as code/YAML, but what about software packages?
● To make sure that we can predictably upgrade and downgrade, we decided to use Red Hat Satellite (Foreman) and create Content Views and Lifecycle Environments for our 3 environments (see the sketch below)
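As a rough idea of how promoting packages between those environments can look with Satellite's hammer CLI; the organization, content view and lifecycle environment names are invented, and the exact flags may differ between Satellite versions, so treat this as an assumption rather than our exact workflow.

# Publish a new content view version, then promote it towards production
hammer content-view publish --organization "VBC" --name "clip-osp"
hammer content-view version promote --organization "VBC" \
  --content-view "clip-osp" --to-lifecycle-environment "staging"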
Deployment and Cloud Verification
● While working on the deployment we ran into various known bugs that are fixed in newer versions of OSP. To keep track of the workarounds and the status of those bugs, we use a dedicated JIRA project (CRE)
Deployment and Cloud Verification
● How can we make sure, and monitor, that the cloud works during operations?
● We leverage OpenStack’s own Tempest testing suite to run verification against our deployed cloud.
● First a smoke test (~ 128 tests) and, if this is successful, a full run (~ 3000 tests) against the cloud (see the sketch below).
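For reference, with the stock tempest CLI the two stages boil down to something like the following, run from an initialized tempest workspace configured against the cloud (worker and regex options omitted):

tempest run --smoke        # quick sanity check, ~ 128 tests
tempest run                # full suite, ~ 3000 tests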
Deployment and Cloud Verification
● OK, the cloud works, but what about performance? How can we make sure that OpenStack performs when upgrading software packages, etc.?
● We plan to use Browbeat to run Rally (control plane performance/stress testing), Shaker (network stress testing) and PerfKit Benchmarker (payload performance) tests on a regular basis, or before and after software upgrades or configuration changes (see the sketch below)
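A hedged sketch of how such a run could be kicked off, assuming Browbeat's pattern of passing workload names to browbeat.py; the flags, paths and workload selection are illustrative rather than our exact invocation.

# From a Browbeat checkout, with workloads defined in browbeat-config.yaml
./browbeat.py rally          # control plane performance/stress tests
./browbeat.py shaker         # network stress tests
./browbeat.py perfkit        # payload performance tests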
Deployment and Cloud Verification
● Grafana and Kibana dashboards can show more than individual Rally graphs.
● Browbeat can show differences between settings or software versions:
Scrolling through Browbeat 22 documents...
+-----------------------------------------------------------------------------------------+
Scenario | Action | conc.| times | 0b5ba58c | 2b177f3b | % Diff
+-----------------------------------------------------------------------------------------+
create-list-router | neutron.create_router | 500 | 32 | 19.940 | 15.656 | -21.483
create-list-router | neutron.list_routers | 500 | 32 | 2.588 | 2.086 | -19.410
create-list-router | neutron.create_network| 500 | 32 | 3.294 | 2.366 | -28.177
create-list-router | neutron.create_subnet | 500 | 32 | 4.282 | 2.866 | -33.075
create-list-port | neutron.list_ports | 500 | 32 | 52.627 | 43.448 | -17.442
create-list-port | neutron.create_network| 500 | 32 | 4.025 | 2.771 | -31.165
create-list-port | neutron.create_port | 500 | 32 | 19.458 | 5.412 | -72.189
create-list-subnet | neutron.create_subnet | 500 | 32 | 11.366 | 4.809 | -57.689
create-list-subnet | neutron.create_network| 500 | 32 | 6.432 | 4.286 | -33.368
create-list-subnet | neutron.list_subnets | 500 | 32 | 10.627 | 7.522 | -29.221
create-list-network| neutron.list_networks | 500 | 32 | 15.154 | 13.073 | -13.736
create-list-network| neutron.create_network| 500 | 32 | 10.200 | 6.595 | -35.347
+-----------------------------------------------------------------------------------------+
+-----------------------------------------------------------------------------------------+
UUID | Version | Build | Number of runs
+-----------------------------------------------------------------------------------------+
938dc451-d881-4f28-a6cb-ad502b177f3b | queens | 2018-03-20.2 | 1
6b50b6f7-acae-445a-ac53-78200b5ba58c | ocata | 2017-XX-XX.X | 3
+-----------------------------------------------------------------------------------------+
Deployment and Cloud Verification
Lessons learned and pitfalls of OpenStack/TripleO:
● OpenStack and TripleO are complex with many moving parts. → Have a dev/staging environment
to test the upgrade and pin the software versions with Satellite or Foreman.
● Upgrades (even minor ones) can break the cloud in unexpected ways. The biggest pain point was the upgrade from OSP11 (non-containerized) → OSP12 (containerized).
● Containers are no free lunch. You need a container build pipeline to customize upstream containers
to add fixes and workarounds.
● TripleO gives you a supported, out-of-the-box installer for HA OpenStack with common customizations. Uncommon customizations are hard because of the rigid architecture (Heat, Puppet, Ansible mixed together). TripleO is moving more towards Ansible (config-download).
● “Flying blind through clouds is dangerous”: Make sure you have a pipeline for verification and
performance regression testing.
● Infra as code (end to end) is great but requires discipline (proper PR reviews) and release
management for tracking workarounds and fixes.
Cloud Monitoring System Architecture
Monitoring is Difficult
Because it’s hard to get these right
● The information
○ At the right time
○ For the right people
● The numbers
○ Too few alarms
○ Too many alarms
○ Too many monitoring systems
● The time
○ doing it too late
Monitoring: What We Want to Know
● Logs: as structured as possible → Fluentd (see the config sketch after this list)
○ syslog (unstructured)
○ OpenStack logs (structured)
● Events → RabbitMQ
○ OpenStack RPCs
○ high-level OpenStack interactions
○ CRUD of resources
● Status → Sensu
○ polling: is the service UP?
○ Publish / subscribe
○ modelling service dependencies
● Metrics → Collectd
○ time series, multi dimensional
○ performance metrics
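A minimal sketch of the kind of Fluentd inputs this implies, assuming the stock syslog and tail input plugins; ports, paths and tags are illustrative, not our production configuration.

<source>
  # unstructured syslog from the nodes
  @type syslog
  port 5140
  tag clip.syslog
</source>

<source>
  # structured OpenStack service logs (path is an example for containerized nova)
  @type tail
  path /var/log/containers/nova/*.log
  pos_file /var/lib/fluentd/nova.pos
  tag clip.openstack.nova
  <parse>
    @type none
  </parse>
</source>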
Monitoring: How We Do It
● Architecture
○ Ingest endpoints for all protocols
○ Buffer for peak loads
○ Persistent store
■ Structured data
■ timeseries
● Dashboards
○ Kibana, Grafana
○ Alerta to unify alarms
● Integration with deployment
○ Automatic configuration
● Service catalog integration
○ Service owners
○ Pointers to documentation
Monitoring: Outlook
● What changes for cloud deployments
○ Lifecycle, services come and go
○ Services scale up and down
○ No more hosts
● Further improvements
○ infrastructure debugger (tracing)
○ Stream processing (improved log parsing)
○ Dynamically integrate call-duty / notifications / handover
○ Robustness (last resort deployment)
Thanks!
  • 28. Monitoring: Outlook ● What changes for cloud deployments ○ Lifecycle, services come and go ○ Services scale up and down ○ No more hosts ● Further improvements ○ infrastructure debugger (tracing) ○ Stream processing (improved log parsing) ○ Dynamically integrate call-duty / notifications / handover ○ Robustness (last resort deployment)