Using Ceph in OStack.de - Ceph Day Frankfurt
Burkhard Noltensmeier
teuto.net Netzdienste GmbH
Erkan Yanar
Consultant
teuto.net Netzdienste GmbH
● 18 employees
● Linux systems house and web development
● Ubuntu Advantage Partner
● OpenStack Ceph Service
● Offices and data center in Bielefeld
Why OpenStack?
Infrastructure as a Service
● Cloud-init (automated instance provisioning; a boot sketch follows this list)
● Network Virtualization
● Multiple Storage Options
● Multiple APIs for Automation
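A minimal boot sketch for the cloud-init point above, assuming an era-appropriate nova CLI; image, flavor and network ID are placeholders:

# Write a small cloud-config and pass it as user data at boot
cat > userdata.yaml <<'EOF'
#cloud-config
packages:
  - nginx
runcmd:
  - service nginx start
EOF
nova boot --image ubuntu-12.04-server --flavor m1.small \
          --user-data userdata.yaml --nic net-id=<network-id> web01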
● closed beta since September 2013
● updated to Havana in October
● Ubuntu Cloud Archive
● 20 Compute Nodes
● 5 Ceph Nodes
● Additional Monitoring with Graphite
Provisioning and Orchestration
OpenStack Storage Types
● Block Storage
● Object Storage
● Image Repository
● Internal Cluster Storage
– Temporary Image Store
– Databases (MySQL Galera, MongoDB)
Storage Requirements
● Scalability
● Redundancy
● Performance
● Efficient Pooling
Key Facts for our Decision
● One Ceph Cluster fits all OpenStack needs
● No "single point of failure"
● POSIX compatibility via RADOS Block Device
● Seamless scalability
● Commercial support by Inktank
● Open source (LGPL)
RADOS Block Storage
● Live migration
● Efficient snapshots
● Different types of storage available (tiering)
● Cloning for fast restore or scaling (see the example after this list)
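A short sketch of the snapshot-and-clone workflow behind the last two points; pool and image names are placeholders:

# Create a protected snapshot, then make a copy-on-write clone for fast restore/scale-out
rbd snap create volumes/base-image@golden
rbd snap protect volumes/base-image@golden     # clones require a protected snapshot
rbd clone volumes/base-image@golden volumes/instance-disk-1
rbd ls -l volumes                              # list images, snapshots and clones in the pool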
How to start
● Determine the cluster size:
an odd number of nodes so the monitors can negotiate a quorum
● Start small with at least 5 nodes
● Either 8 or 12 disks per chassis
● One journal per disk
● 2 journal SSDs per chassis (a deployment sketch of this layout follows)
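One way to express this journal layout, assuming ceph-deploy with pre-partitioned journal SSDs; host and device names are placeholders:

# 8 data disks per chassis, 2 journal SSDs (sdj, sdk), 4 journal partitions each
ceph-deploy osd create node01:sdb:/dev/sdj1 node01:sdc:/dev/sdj2 \
                       node01:sdd:/dev/sdj3 node01:sde:/dev/sdj4
ceph-deploy osd create node01:sdf:/dev/sdk1 node01:sdg:/dev/sdk2 \
                       node01:sdh:/dev/sdk3 node01:sdi:/dev/sdk4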
Rough calculation
● 3 Nodes, 8 Disks per Node, 2 replicas
● Net = Gross / 2 replicas - 1 Node (33%) = 33%
Cluster Gross
● 24 2TB SATA Disks, 100 IOPS each
Cluster Net
● 15.8 TB, 790 IOPS
Rough calculation
● 5 Nodes, 8 Disks per Node, 3 replicas
● Net = Gross / 3 replicas - 1 Node (20%) = 27%
Cluster Gross
● 40 2TB SATA Disks, 100 IOPS each
Cluster Net
● 21.3 TB, 1066 IOPS (a small script reproducing this estimate follows)
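The rule of thumb from both slides as a small sketch (rounding differs slightly from the slide figures):

NODES=5; DISKS_PER_NODE=8; DISK_TB=2; DISK_IOPS=100; REPLICAS=3
awk -v n="$NODES" -v d="$DISKS_PER_NODE" -v tb="$DISK_TB" -v iops="$DISK_IOPS" -v r="$REPLICAS" 'BEGIN {
  gross_tb   = n * d * tb             # 80 TB raw
  gross_iops = n * d * iops           # 4000 IOPS raw
  usable     = (1 / r) * (1 - 1 / n)  # divide by replicas, keep one node worth of space free for recovery
  printf "net: %.1f TB, %.0f IOPS\n", gross_tb * usable, gross_iops * usable
}'
# prints: net: 21.3 TB, 1067 IOPS (the 3-node / 2-replica example gives roughly 16 TB, 800 IOPS)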
Ceph specifics
● Data is distributed throughout the Cluster
● Unfortunately this destroys data locality:
tradeoff between block size and IOPS
● The bigger the blocks, the better the sequential performance
● Double write; SSD journals strongly advised
● Long-term fragmentation caused by small writes
Operational Challenges
● Performance
● Availability
● QoS (Quality of Service)
Ceph Monitoring in ostack
● Ensure Quality with Monitoring
● Easy spotting of congestion Problems
● Event Monitoring (e.g. disk failure)
● Capacity management
What we did
● Disk monitoring with Icinga
● Collect data via the Ceph admin socket JSON interface (see the sketch after this list)
● Put it into Graphite
● Enrich it with metadata:
– OpenStack tenant
– Ceph Node
– OSD
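A sketch of the collection step: read counters from one OSD's admin socket and push two of them to Graphite's plaintext port. Hostname, metric prefix and counter names are assumptions and may differ between Ceph versions:

#!/bin/bash
GRAPHITE_HOST=graphite.example.com
GRAPHITE_PORT=2003
OSD_ID=0
NOW=$(date +%s)
# Perf counters as JSON from the local admin socket
JSON=$(ceph daemon osd.$OSD_ID perf dump)
# Pick two counters (read/write ops); adjust names to your Ceph release
READS=$(echo "$JSON"  | jq '.osd.op_r')
WRITES=$(echo "$JSON" | jq '.osd.op_w')
# Graphite plaintext protocol: "<metric.path> <value> <timestamp>"
{
  echo "ceph.osd.$OSD_ID.op_r $READS $NOW"
  echo "ceph.osd.$OSD_ID.op_w $WRITES $NOW"
} | nc -w 1 $GRAPHITE_HOST $GRAPHITE_PORT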
Cumulative OSD performance
Single OSD performance
Sum by OpenStack tenant
Verify Ceph Performance
● fio benchmark with fixed file size:
fio --fsync=<n> --runtime=60 --size=1g --bs=<n> ...
● Different sync options: nosync, 1, 100
● Different Cinder QoS Service Options
● Block sizes: 64k, 512k, 1024k, 4096k
● 1 to 4 VM clients
● Resulting in 500 benchmark runs (a sketch of such a loop follows)
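A sketch of how such a run matrix can be scripted; only the flags shown on the slide come from the talk, the rest (--name, --rw, --directory) are assumptions:

#!/bin/bash
for bs in 64k 512k 1024k 4096k; do
  for fsync in 0 1 100; do             # 0 = no syncing
    fio --name=ceph-bench --rw=write --directory=/mnt/testvol \
        --size=1g --runtime=60 --bs=$bs --fsync=$fsync \
        --output=fio_${bs}_fsync${fsync}.log
  done
done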
Cinder Quality of Service
$ cinder qos-create high-iops consumer="front-end" \
    read_iops_sec=100       write_iops_sec=100 \
    read_bytes_sec=41943040 write_bytes_sec=41943040
$ cinder qos-create low-iops consumer="front-end" \
    read_iops_sec=50        write_iops_sec=50 \
    read_bytes_sec=20971520 write_bytes_sec=20971520
$ cinder qos-create ultra-low-iops consumer="front-end" \
    read_iops_sec=10        write_iops_sec=10 \
    read_bytes_sec=10485760 write_bytes_sec=10485760
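The specs only take effect once they are tied to a volume type; a sketch (IDs are placeholders, look them up with cinder qos-list and cinder type-list; flag names vary slightly between client versions):

$ cinder type-create high-iops
$ cinder qos-associate <qos_spec_id> <volume_type_id>
$ cinder create --volume-type high-iops --display-name bench-vol 10   # 10 GB test volume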
Speed per Cinder QoS
Does it scale?
Effect of syncing files
Different block sizes with sync
Ceph is somewhat complex, but
● reliable
● No unpleasant surprises (so far!)
● Monitoring is important for resource management and availability!
