M2 Reference Architecture Performance
(Mirantis, Openstack, and Commodity Hardware)
Ubuntu 12.04
Fuel 6.0
Openstack Juno
Myricom 10G Ethernet
Ceph RBD Glance/Cinder
8 Disk Local Raid5 Ephemeral/Root
Ryan Aydelott - CELS
Introduction
The purpose of this exercise was to build a small 13-node testbed cluster using the Mirantis
“Fuel” deployment tool (version 6.0).
The deployment consisted of 3 controller nodes and 10 compute nodes, with Ceph running
across all 13 nodes.
Each node class (controller/compute) was configured identically: 2TB of local disk for
ephemeral storage and 1TB of local disk for Ceph per node, with the remainder used for the
local OS. The array holding this data was a single 8-disk Raid5 array built from 500GB
drives. Additionally, a small Raid1 array consisting of two 50GB SSD drives was used for
Ceph journal storage.
In testing, RBD, ephemeral volume, and inter-VM network performance were measured.
Testbed Hardware
‣ IBM System x3650 M3
‣ 12 cores @ 2.67GHz
‣ 12x DIMMs for 48GB @ 1.33GHz
‣ ServeRAID M5015 x2 (2x SSD for Ceph journal, 8x magnetic for instance/Ceph storage)
‣ 1x 1G Ethernet for provisioning, 1x 10G Ethernet for storage/VM tenant networking
(MYRICOM Inc. Myri-10G Dual-Protocol NIC (rev 01))
‣ Juniper EX4500-40F switch
Testbed Software
‣ Deployment Node: Fuel 6.0
‣ Controller Nodes: Ubuntu 12.04
‣ Compute Nodes: Ubuntu 12.04
‣ Openstack Release: Juno
‣ Ceph version 0.80.7 (Firefly) with replica/3
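For reference, a replica/3 setting corresponds to a pool size of 3 in Ceph. The following is a minimal sketch of checking and setting this on the Cinder and Glance pools; the pool names "volumes" and "images" are the usual Fuel defaults and are an assumption here, not values recorded from this deployment:

  # Show the replication factor currently configured for each pool
  ceph osd dump | grep 'replicated size'

  # Set 3x replication on the Cinder (RBD) and Glance pools
  ceph osd pool set volumes size 3
  ceph osd pool set images size 3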
Testbed Configuration
‣ A flavor was created in Openstack that consumed an entire HV (an example flavor
definition is sketched after this list)
‣ 10 total instances were created, each on its own dedicated hypervisor
‣ The guest OS was Ubuntu 14.04 for each instance
‣ iperf version 2.0.5 (08 Jul 2010) pthreads (networking)
‣ bonnie++ v1.97 (disk)
‣ Ephemeral storage ran on the local hardware Raid5
‣ Ceph OSDs ran on top of the local hardware Raid5 (not ideal, but the simplest
deployment with Fuel given the hardware constraints)
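As a hedged illustration of the "whole hypervisor" flavor, something along the following lines would reserve a full node with the Juno-era nova CLI. The flavor name, RAM/disk sizing, image name, and key name below are assumptions derived from the hardware specs above, not the exact values used in the test, and network options are omitted:

  # Flavor sized to consume one hypervisor: 12 vCPUs, ~46GB RAM, large ephemeral disk
  # (RAM headroom left for the host; all values illustrative)
  nova flavor-create m1.fullnode auto 47104 1800 12

  # Boot one instance per hypervisor using the full-node flavor
  nova boot --flavor m1.fullnode --image ubuntu-14.04-cloudimg --key-name mykey test-vm-01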
Local Network Performance
‣ iperf was utilized to test inter-VM network performance (example commands are sketched
after this list)
‣ By default the HV and all backend network devices had an MTU of 9000, while the default
Ubuntu 14.04 cloud image ships with an MTU of 1500; we ran tests using both 1500 byte
and 9000 byte MTUs on the guest.
‣ 1500 byte MTU: 5 Gbit/s per instance/HV (25% of theoretical max)
‣ 9000 byte MTU: 16 Gbit/s per instance/HV (80% of theoretical max)
‣ Latency: ~0.5ms average guest-to-guest
‣ Traffic divided proportionally with node count: where one node could use 80% of the
theoretical max, 10 nodes would each use 8%, 5 nodes would each use 16%, etc.
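The tests were of roughly the following shape; the guest interface name (eth0), peer address, thread count, and duration are assumptions for illustration, as the exact flags used in the original runs were not recorded here:

  # On the guest, raise the MTU for the jumbo-frame runs (the cloud image defaults to 1500)
  ip link set dev eth0 mtu 9000

  # On one instance, start an iperf server
  iperf -s

  # On a second instance, run a multi-threaded TCP test against the first
  iperf -c 10.0.0.11 -P 4 -t 60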
Local Disk (ephemeral) Performance
‣ bonnie++ 1.97 was used to benchmark local disk (an example invocation is sketched below
the results)
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ryantest-640 80000M 550 99 282290 80 134720 21 1272 98 152762 43 189.3 11
Latency 16469us 1550ms 2709ms 36363us 1195ms 1820ms
Version 1.97 ------Sequential Create------ --------Random Create--------
ryantest-64011122-4 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 13526 64 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 166ms 526us 461us 58307us 45us 132us
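Output of the shape above is produced by an invocation roughly like the one below. The dataset size (80000M) and file count (16, in multiples of 1024) match the result headers; the target directory and run-as user are assumptions:

  # 80GB dataset (larger than guest RAM to defeat caching), 16x1024 files for the create tests
  bonnie++ -d /mnt/ephemeral -s 80000 -n 16 -u ubuntu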
Ceph RBD (Cinder) Performance
‣ bonnie++ 1.97 was used to benchmark Ceph RBD (volume setup and invocation are sketched
below the results)
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ryantest-5b7 80000M 593 99 95170 27 45953 10 956 75 124535 19 4283 125
Latency 15259us 6451ms 1568ms 820ms 999ms 31255us
Version 1.97 ------Sequential Create------ --------Random Create--------
ryantest-5b7edbb6-2 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 10107 80 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 110ms 597us 468us 11835us 581us 105us
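The RBD target was a Cinder volume attached to the benchmark instance. A minimal sketch of preparing such a volume with the Juno-era CLIs follows; the volume size, names, device path, and mount point are illustrative, not the values from the test:

  # Create a 200GB Cinder volume (backed by the Ceph RBD pool) and attach it to the instance
  cinder create --display-name bench-vol 200
  nova volume-attach test-vm-01 <volume-id> /dev/vdb

  # Inside the guest: format, mount, and point bonnie++ at it
  mkfs.ext4 /dev/vdb && mkdir -p /mnt/rbd && mount /dev/vdb /mnt/rbd
  bonnie++ -d /mnt/rbd -s 80000 -n 16 -u ubuntu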
Conclusion
While not amazing, solid network performance was achieved using nova-network. Running
under Neutron/OVS with VLAN segmentation was less than ideal and only achieved 10% of
theoretical network performance under the best of circumstances.
In addition, Neutron/OVS showed varying inter-VM latency, from sub-millisecond to upwards
of 50ms.
Local disk performed as expected at ~130-280MB/s, with Ceph RBD fluctuating between
~50-125MB/s depending on the type of workload.
Disk performance could certainly be improved with better local disks and a more optimal
Ceph configuration; however, as a generic deployment this configuration would work for
most basic use cases.