QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee
AVP, QCT
• Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
2
AGENDA
3
Industry Trend and Customer Needs
4
• Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
5
Industry Trend
• Scalable Size
• Variable Type
• Long Retention Time
• Distributed Location
• Versatile Workload
6
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Customer Needs
7
Ceph Architecture
Ceph Storage Cluster
8
A scale-out cluster of identical commodity nodes, each running Ceph on Linux with its own CPU, memory, SSDs, HDDs, and NIC, interconnected over a cluster network and serving object, block, and file access.
Key attributes: Unified Storage, Scale-out Cluster, Open Source Software, Open Commodity Hardware.
9
End-to-end Data Path
An app or service issues object I/O (RADOSGW), block I/O (RBD), or file I/O (CephFS) through the Ceph client over the public network; OSDs communicate over the RADOS/cluster network and translate each request into file system I/O and, finally, disk I/O.
10
Ceph Software Architecture
Clients and Ceph Monitor nodes sit on the public network (ex. 10GbE or 40GbE); Nx Ceph OSD nodes (RCT or RCC configurations) attach to both the public network and a dedicated cluster network (ex. 10GbE or 40GbE).
Ceph Hardware Architecture
12
Technology
13
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5” SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
D51PH-1ULH
14
• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x (mono node) or 2x 35x (dual node) SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHLH Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server
T21P-4U
15
• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server
SD1Q-1ULH
• Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ~ (0.3-0.5)x Core + 2G RAM
16
CPU/Memory
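A minimal sizing sketch based on the rule of thumb above (1 OSD needs roughly 0.3-0.5 cores and 2 GB RAM); the helper name and the 12-OSD example node are illustrative assumptions, not QCT-validated figures.

```python
# Rough OSD-node sizing from the rule of thumb: 1x OSD ~ (0.3-0.5)x core + 2 GB RAM.
def osd_node_requirements(num_osds, cores_per_osd=0.5, ram_gb_per_osd=2):
    """Return (cores, ram_gb) suggested for one Ceph OSD node."""
    return num_osds * cores_per_osd, num_osds * ram_gb_per_osd

cores, ram = osd_node_requirements(12)   # e.g. a 12-HDD 1U node
print(f"12 OSDs -> ~{cores:.0f} cores, {ram} GB RAM (plus OS overhead)")
```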
• SSD:
– Journal
– Tier
– File System Cache
– Client Cache
• Journal (HDD-to-journal-device ratio)
– HDD : SSD (SATA/SAS) ≈ 4~5 : 1
– HDD : NVMe ≈ 12~18 : 1
17
SSD/NVMe
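A small sketch of the journal ratios above (one SATA/SAS SSD per 4-5 HDDs, one NVMe per 12-18 HDDs); the helper and the example node layouts are assumptions for illustration.

```python
import math

# HDD-to-journal ratios from the slide above.
def journals_needed(num_hdds, hdds_per_journal):
    """How many journal devices to front num_hdds OSD data drives."""
    return math.ceil(num_hdds / hdds_per_journal)

print(journals_needed(12, 4))    # 12-HDD 1U node, SATA SSD journals -> 3
print(journals_needed(35, 18))   # 35-HDD node, NVMe journals        -> 2
```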
• 2x NVMe ~40Gb
• 4x NVMe ~100Gb
• 2x SATA SSD ~10Gb
• 1x SAS SSD ~10Gb
• (20~25)x HDD ~10Gb
• ~100x HDD ~40Gb
18
NIC
10G/40G -> 25G/100G
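A back-of-the-envelope check of the NIC guidance above. The per-HDD figure is an illustrative assumption (roughly 50 MB/s of effective Ceph throughput per HDD after journaling and replication overhead), chosen so the result lines up with the 20-25 HDD per 10Gb and ~100 HDD per 40Gb guidelines.

```python
# Approximate network bandwidth needed for a given number of OSD drives.
MB_PER_HDD = 50   # assumed effective Ceph throughput per HDD (illustrative)

def nic_gbits_for_hdds(num_hdds, mb_per_hdd=MB_PER_HDD):
    """Rough NIC bandwidth (Gb/s) needed for num_hdds OSD drives."""
    return num_hdds * mb_per_hdd * 8 / 1000

print(f"{nic_gbits_for_hdds(25):.0f} Gb/s for 25 HDDs")    # ~10 Gb/s
print(f"{nic_gbits_for_hdds(100):.0f} Gb/s for 100 HDDs")  # ~40 Gb/s
```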
• CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocate computation to dedicated silicon areas (CPU vs. NIC offload engines)
19
NIC
I/O Offloading
• Object Replication
– 1 Primary + 2 Replica (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better Data Efficiency
• k/(k+m) vs. 1/(1+replication)
20
Erasure Coding vs. Replication
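A quick comparison of the usable-capacity ratios quoted above, k/(k+m) for erasure coding versus 1/(1+replicas) for replication; a minimal sketch using the profiles from the slide's examples.

```python
# Usable-capacity efficiency: erasure coding k/(k+m) vs. replication 1/(1+replicas).
def ec_efficiency(k, m):
    return k / (k + m)

def replica_efficiency(replicas):
    # 1 primary + N replicas -> 1/(1+N) of raw capacity is usable
    return 1 / (1 + replicas)

print(f"EC 4+2 : {ec_efficiency(4, 2):.0%} usable")     # 67%
print(f"EC 8+3 : {ec_efficiency(8, 3):.0%} usable")     # 73%
print(f"3-copy : {replica_efficiency(2):.0%} usable")   # 33% (1 primary + 2 replicas)
```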
Size/Workload matrix (Small / Medium / Large):
• Throughput – transfer bandwidth, sequential R/W
• Capacity – cost/capacity, scalability
• IOPS – IOPS per 4K block, random R/W; hyper-converged (?), desktop virtualization
• Latency – random R/W; Hadoop (?)
21
Workload and Configuration
22
Red Hat Ceph
• Intel ISA-L
• Intel SPDK
• Intel CAS
• Mellanox Accelio Library
23
Vendor-specific Value-added Software
24
Ceph Reference Architecture and QCT Solution
• Trade-off among Technologies
• Scalable in Architecture
• Optimized for Workload
• Affordable as Expected
Design Principle
1. Needs for scale-out storage
2. Target workload
3. Access method
4. Storage capacity
5. Data protection methods
6. Fault domain risk tolerance
26
Design Considerations
27
Workloads span an IOPS-vs-MB/sec spectrum: transactional (OLTP), data warehouse (OLAP), big data, scientific (HPC), block transfer, database, and audio/video streaming.
Storage Workload
SMALL (500TB*) / MEDIUM (>1PB*) / LARGE (>2PB*)
Throughput optimized:
• Small – QxStor RCT-200: 16x D51PH-1ULH (16U), 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE, 3x replica
• Medium – QxStor RCT-400: 6x T21P-4U/Dual (24U), 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
• Large – QxStor RCT-400: 11x T21P-4U/Dual (44U), 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
Cost/Capacity optimized:
• QxStor RCC-400: Nx T21P-4U/Dual, 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE, Erasure Coding 4:2
IOPS optimized:
• Small/Medium – future direction; Large – NA
* Usable storage capacity
QCT QxStor Red Hat Ceph Storage Edition Portfolio
Workload-driven Integrated Software/Hardware Solution
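As a worked example of the capacities in the portfolio above: a T21P-4U dual-node chassis holds 2 x 35 x 8 TB = 560 TB raw. The sketch below converts raw to usable capacity for the 3x-replica (RCT-400) and EC 4:2 (RCC-400) protection schemes; it is illustrative only and ignores filesystem and Ceph overhead.

```python
# Raw -> usable capacity for one T21P-4U dual-node chassis.
RAW_TB = 2 * 35 * 8                       # 560 TB raw per chassis

usable_replica3 = RAW_TB / 3              # RCT-400: 3x replication
usable_ec42     = RAW_TB * 4 / (4 + 2)    # RCC-400: erasure coding 4:2

print(f"3x replica : {usable_replica3:.0f} TB usable")   # ~187 TB
print(f"EC 4:2     : {usable_ec42:.0f} TB usable")       # ~373 TB
```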
USE CASE
Throughput-Optimized (RCT-200, RCT-400)
• RCT-200: densest 1U Ceph building block; best reliability with a smaller failure domain
• RCT-400: scales to high capacity (2x 280TB per chassis); delivers best throughput and density at once
• Block or object storage; 3x replication
• Video, audio, and image repositories; streaming media
Cost/Capacity-Optimized (RCC-400)
• Highest density: 560TB raw capacity per chassis with greatest price/performance
• Typically object storage; erasure coding common for maximizing usable capacity
• Object archive
QCT QxStor Red Hat Ceph Storage Edition
Co-engineered with Red Hat Storage team to provide Optimized Ceph Solution
30
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
31
Ceph Solution Deployment
Using QCT QPT Bare Metal Provision Tool
32
QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and
pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation (upstream)
33
Test Result
Test topology: 10 client nodes (S2B) and 5 Ceph nodes (S2PH), each with 2x 10Gb links, connected over separate public and cluster networks.
General Configuration
• 5 Ceph nodes (S2PH), each with 2 x 10Gb links.
• 10 Client nodes (S2B), each with 2 x 10Gb links.
• Public network : Balanced bandwidth between Client nodes and Ceph nodes.
• Cluster network : Offload the traffic from public network to improve performance.
Option 1 (w/o SSD)
a. 12 OSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 128 GB
Option 2 : (w/ SSD)
a. 12 OSD / 3 SSD per Ceph storage node
b. S2PH (E5-2660) x2
c. RAM : 12 (OSD) x 2GB = 24 GB
Testing Configuration (Throughput-Optimized)
Test topology: 8 client nodes (S2S) on 10Gb links and 2 Ceph nodes (S2P) on 40Gb links, connected over the public network.
General Configuration
• 2 Ceph nodes (S2P), each with 2 x 10Gb links.
• 8 Client nodes (S2S), each with 2 x 10Gb links.
• Public network : Balanced bandwidth between Client nodes and Ceph nodes.
• Cluster network : Offload the traffic from public network to improve performance.
Option 1 (w/o SSD)
a. 35 OSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Option 2 : (w/ SSD)
a. 35 OSD / 2 PCI-SSD per Ceph storage node
b. S2P (E5-2660) x2
c. RAM : 128 GB
Testing Configuration (Capacity-Optimized)
Level / Component / Test Suite
• Raw I/O – Disk – FIO
• Network I/O – Network – iperf
• Object API I/O – librados – radosbench
• Object I/O – RGW – COSBench
• Block I/O – RBD – librbdfio
36
CBT (Ceph Benchmarking Tool)
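For the raw-I/O and network levels in the table above, a minimal sketch of how the underlying tools might be driven from Python; the device path, target host, and flag values are placeholder assumptions for illustration, not the CBT harness itself.

```python
import subprocess

# Raw disk I/O with fio and network throughput with iperf, roughly matching
# the first two rows of the table. Device path and peer host are placeholders.
def run_fio(device="/dev/sdb", runtime=60):
    subprocess.run([
        "fio", "--name=raw-write", f"--filename={device}",
        "--rw=write", "--bs=4M", "--ioengine=libaio", "--direct=1",
        "--time_based", f"--runtime={runtime}",
    ], check=True)

def run_iperf(server="192.168.1.10", runtime=60, streams=4):
    subprocess.run(["iperf", "-c", server, "-t", str(runtime), "-P", str(streams)],
                   check=True)
```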
37
Linear Scale Out
38
Linear Scale Up
39
Price, in terms of Performance
40
Price, in terms of Capacity
41
Protection Scheme
42
Cluster Network
43
QCT/Red Hat Ceph Whitepaper
44
http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture
QCT/Red Hat Ceph Solution Brief
https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf
http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230
QCT/Red Hat Ceph Reference Architecture
• The Red Hat Ceph Storage Test Drive lab in the QCT Solution Center provides a free hands-on experience. You'll be able to explore the features and simplicity of the product in real time.
• Concepts:
Ceph feature and functional test
• Lab Exercises:
Ceph Basics
Ceph Management - Calamari/CLI
Ceph Object/Block Access
46
QCT Will Offer TryCeph (Test Drive) Later
47
Remote access to QCT cloud solution centers
• Easy to test. Anytime and anywhere.
• No facilities or logistics needed
• Configurations
• RCT-200 and newest QCT solutions
QCT Will Offer TryCeph (Test Drive) Later
• Ceph is Open Architecture
• QCT, Red Hat and Intel collaborate to provide
– Workload-driven,
– Pre-integrated,
– Comprehensively tested, and
– Well-optimized solution
• Red Hat – Open Software/Support Pioneer
Intel – Open Silicon/Technology Innovator
QCT – Open System/Solution Provider
• Together We Provide the Best
48
CONCLUSION
www.QuantaQCT.com
Thank you!
www.QCT.io
QCT CONFIDENTIAL
Looking for
innovative cloud solution?
Come to QCT,
who else?
Editor's Notes
• #29: Here are three SKUs based on small/medium/large scale. For larger scale, we suggest customers adopt RCT-400 or RCC-400. QCT is planning a SKU optimized for IOPS-intensive workloads, targeted for launch in 2016 H2.