COSBench: A Benchmark Tool
for Cloud Object Storage
Services

         Jiangang.Duan@intel.com
         2012.10



                                   Updated June 2012
Agenda

• Self introduction
• COSBench introduction
• Case study: evaluating OpenStack* Swift performance
  with COSBench
• Next steps and summary
Self introduction

• Jiangang Duan
• Working in the Cloud Infrastructure Technology Team (CITT) of Intel
    APAC R&D Ltd. (Shanghai)
•   We are a software team, experienced in performance work
•   Our goal: understand how to build efficient, scalable cloud solutions
    with open-source software (OpenStack*, Xen*, KVM*)
•   All of this work will be contributed back to the community
•   Today we will talk about our efforts to measure
    OpenStack* Swift performance
COSBench Introduction
• COSBench is an Intel-developed benchmark that measures
  cloud object storage service performance
  – For object stores such as Amazon S3 and OpenStack Swift
  – Not for file systems (e.g. NFS) or block device systems (e.g. EBS)
• There is a real need for a benchmark that measures object store
  performance
• Cloud solution providers can use it to
  – Compare different hardware/software stacks
  – Identify bottlenecks and drive optimization
• We are in the process of open-sourcing COSBench

 COSBench is the IOMeter for the cloud object storage service
COSBench Key Components

Config.xml:
          • defines the workload, with flexibility

Controller:
          • controls all drivers
          • collects and aggregates stats

Driver:
          • generates load from config.xml parameters
          • can run tests without the controller

Web Console:
          • manages the controller
          • browses real-time stats
          • communication is based on HTTP (RESTful style)

(Diagram: the web console talks to the controller on the controller node;
the controller coordinates multiple drivers, which load the storage
nodes of the storage cloud.)
Web Console

(Screenshot: the console shows a driver list, a workload list, and a
history list.)

An intuitive UI to get an overview.
Workload Configuration

(Screenshots: flexible load control, object size distribution,
read/write operations, and a workflow for complex stages.)

Flexible configuration parameters are capable of expressing complex cases.
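For reference, a workload definition is a small XML file. The sketch below is hedged: the element names (`workload`, `workstage`, `work`, `operation`) and the `config` selector syntax follow the later open-source COSBench releases, so the 2012-era format on this slide may differ; the container/object counts, auth URL, and credentials here are made-up examples.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<workload name="swift-128kb" description="80/20 read/write mix on 128KB objects">
  <storage type="swift" config="timeout=30000"/>
  <auth type="swauth"
        config="username=test:tester;password=testing;url=http://127.0.0.1:8080/auth/v1.0"/>
  <workflow>
    <workstage name="prepare">
      <!-- create 32 containers, each pre-filled with 50 objects of 128KB -->
      <work type="prepare" workers="8"
            config="containers=r(1,32);objects=r(1,50);sizes=c(128)KB"/>
    </workstage>
    <workstage name="main">
      <!-- 128 workers, 300 seconds, reads and writes picked by ratio -->
      <work name="mix" workers="128" runtime="300">
        <operation type="read"  ratio="80"
                   config="containers=u(1,32);objects=u(1,50)"/>
        <operation type="write" ratio="20"
                   config="containers=u(1,32);objects=u(51,100);sizes=c(128)KB"/>
      </work>
    </workstage>
  </workflow>
</workload>
```

The `r(...)`/`u(...)`/`c(...)` selectors express ranges, uniform random picks, and constants, which is how one config file covers both the load-control and object-size-distribution knobs shown above.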
Performance Metrics




• Throughput (op/s): the number of operations completed per
  second
• Response Time (ms): the duration between operation
  initiation and completion
• Bandwidth (KB/s): the total data transferred per second
• Success Ratio (%): the proportion of operations that complete
  successfully
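A minimal sketch of how these four metrics can be derived from per-operation records. The record layout (start time, end time, bytes moved, success flag) is an assumption for illustration, not COSBench's internal representation, and this sketch counts only successful operations toward bandwidth.

```python
def summarize(ops, window_s):
    """Aggregate per-operation records into COSBench's four metrics.

    ops: list of (start_s, end_s, bytes_moved, ok) tuples
    window_s: length of the measurement window in seconds
    """
    completed = [op for op in ops if op[1] <= window_s]
    successes = [op for op in completed if op[3]]
    throughput = len(completed) / window_s                        # op/s
    avg_rt_ms = (sum(e - s for s, e, _, _ in completed)
                 / len(completed) * 1000 if completed else 0.0)   # ms
    bandwidth_kb = sum(b for _, _, b, _ in successes) / 1024 / window_s  # KB/s
    success_ratio = len(successes) / len(completed) if completed else 0.0
    return throughput, avg_rt_ms, bandwidth_kb, success_ratio
```

For example, two 128KB operations completing in a one-second window, one of them failing, give 2 op/s, a 50% success ratio, and 128 KB/s of successful-transfer bandwidth.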
Easy to Extend

• Easily extended for new
  storage systems
• Supported today:
   – OpenStack Swift
   – Amplistor*
   – more being added

(Diagram: a Context, an AuthAPI adapter (Auth), and a StorageAPI
adapter exposing PUT/GET/DELETE.)

     The easy execution engine is capable of complex cases.
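COSBench itself is written in Java, so its real AuthAPI/StorageAPI are Java interfaces; the Python sketch below only illustrates the adapter shape the slide describes (an auth hook plus PUT/GET/DELETE), and every name in it is an assumption for illustration.

```python
from abc import ABC, abstractmethod

class AuthAPI(ABC):
    """Authentication adapter: obtains a session/token for the storage API."""
    @abstractmethod
    def login(self): ...

class StorageAPI(ABC):
    """Storage adapter: the three operations a new backend must implement."""
    @abstractmethod
    def put(self, container, obj, data): ...
    @abstractmethod
    def get(self, container, obj): ...
    @abstractmethod
    def delete(self, container, obj): ...

class InMemoryStorage(StorageAPI):
    """Toy backend showing how small a new adapter can be."""
    def __init__(self):
        self.store = {}
    def put(self, container, obj, data):
        self.store[(container, obj)] = data
    def get(self, container, obj):
        return self.store[(container, obj)]
    def delete(self, container, obj):
        del self.store[(container, obj)]
```

Supporting a new object store then amounts to writing one such adapter pair; the execution engine drives it without changes.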
System Configuration
(Setup-SATA has higher CPU power; Setup-SAS has faster disks.)

(Diagram: 5 client nodes, each on 2GbE, connect through the client
Ethernet network to the proxy node over 10GbE; the proxy node connects
through the storage Ethernet network, again over 10GbE, to 5 storage
nodes, each on 2GbE.)

Both setups (client nodes):
- CPU: 2 * 2.93GHz (4C/8T)
- MEM: 12GB DDR3 1333MHz
- NIC: 2 * Intel 82579 1GbE, bonding (mode=rr)

Setup-SATA:
- CPU: 2 * 2.7GHz (8C/16T)
- MEM: 32GB DDR3 1333MHz
- NIC: Intel 82599 10GbE
- Disks: 14 * 1TB SATA (5,400 rpm)

Setup-SAS:
- CPU: 2 * 2.3GHz (8C/16T)
- MEM: 64GB DDR3 1333MHz
- NIC: Intel 82599 10GbE
- Disks: 12 * 70GB SAS (15,000 rpm)
128KB-Read
          • SLA: 200ms + 128KB/1MBps = 325ms
Setup-SAS 128KB-Read results:

  # Workers   95%-ResTime (ms)   Throughput (op/s)
          5              20.00              369.49
         10              20.00              711.24
         20              20.00             1383.30
         40              30.00             2517.94
         80              46.67             3662.71
        160              56.67             4693.97
        320             106.67             5019.85
        640             230.00             4998.13
       1280             470.00             4947.15
       2560             923.33             4840.19

(Chart: throughput and response time vs. total number of workers;
throughput climbs from 0.37k op/s at 5 workers and saturates around
5.0k op/s from 320 workers on.)

The bottleneck was identified to be the proxy's CPU:
 -- The CPU utilization at that node was ~100%!
 -- The peak throughput for setup-SATA was 5576 op/s (640 workers)
Better CPU results in higher throughput.

For more complete information about performance and benchmark results,
visit Performance Test Disclosure.
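The SLA lines on these result slides follow one formula: a fixed 200 ms base budget plus the object transfer time at an assumed per-worker bandwidth (1 MB/s for the 128KB cases; the 1200 ms target quoted for 10MB objects is only consistent with 10 MB/s). A minimal check, treating KB/MB as binary units as the slides implicitly do:

```python
def sla_ms(object_bytes, bw_bytes_per_s, base_ms=200):
    """SLA target = base latency + time to move the object at the assumed bandwidth."""
    return base_ms + object_bytes / bw_bytes_per_s * 1000

KB, MB = 1024, 1024 * 1024
print(sla_ms(128 * KB, 1 * MB))    # 128KB at 1 MB/s  -> 325.0
print(sla_ms(10 * MB, 10 * MB))    # 10MB  at 10 MB/s -> 1200.0
```

Reading the tables against these targets shows how many workers each setup sustains before the 95th-percentile response time blows past the SLA.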
128KB-Write
          • SLA: 200ms + 128KB/1MBps = 325ms
Setup-SAS 128KB-Write results:

  # Workers   95%-ResTime (ms)   Throughput (op/s)
          5              40.00              219.73
         10              40.00              391.14
         20              50.00              668.19
         40              70.00             1022.07
         80             100.00             1333.34
        160             143.33             1594.12
        320             370.00             1769.55
        640            1223.33             1773.12
       1280            1690.00             1871.58
       2560            3160.00             1886.81

(Chart: throughput and response time vs. total number of workers;
throughput saturates around 1.9k op/s.)

The disks at the storage nodes had a significant impact on overall throughput:
 -- The peak throughput for setup-SATA was only 155 op/s (20 workers)
 -- even after we had put the account/container DB files on SSDs!

For more complete information about performance and benchmark results,
visit Performance Test Disclosure.
128KB-Write
To fully understand the write performance:

-- From the disk side: were the storage disks a bottleneck?
   -- In setup-SATA, all-SSD → 1621 op/s, compared with 155 op/s
   -- More tests are needed to understand the disk impact

-- From the NIC side: was the storage network a bottleneck?
   -- In setup-SAS, two sets of object daemons → no performance change (why? TODO)
   -- In setup-SATA, all-SSD + 32/64KB objects → storage node CPU bottleneck (why? TODO)

-- From the SW side: was container updating a bottleneck?
   -- In setup-SATA, 1 container, moving account/container from HDD to SSD → 316% ↑
   -- In setup-SATA, 100 containers, moving account/container from HDD to SSD → 119% ↑
10MB-Read
         • SLA: 200ms + 10MB/10MBps = 1200ms
Setup-SAS 10MB-Read results:

  # Workers   95%-ResTime (ms)   Throughput (op/s)
          5             270.00               34.69
         10             320.00               51.87
         20             480.00               59.91
         40             900.00               65.48
         80            1636.67               67.37
        160            3093.33               68.69
        320            5950.00               69.58
        640           11906.67               70.18
       1280           24090.00               71.41
       2560           52090.00               72.90

(Chart: throughput and response time vs. total number of workers;
throughput saturates around 70 op/s.)

The bottleneck was identified to be the clients' NIC bandwidth:
 -- Doubling the clients' receive bandwidth can double the throughput.

For more complete information about performance and benchmark results,
visit Performance Test Disclosure.
10MB-Write
         • SLA: 200ms + 10MB/10MBps = 1200ms
Setup-SAS 10MB-Write results:

  # Workers   95%-ResTime (ms)   Throughput (op/s)
          5             536.67               13.12
         10             936.67               17.50
         20            1596.67               20.28
         40            2786.67               21.30
         80            5133.33               21.38
        160            9800.00               22.21
        320           18623.33               23.19
        640           41576.67               23.23
       1280          102090.00               21.55
       2560          200306.67               23.71

(Chart: throughput and response time vs. total number of workers;
throughput saturates around 23 op/s.)

The bottleneck might be the storage nodes' NICs:
 -- In setup-SATA, the peak throughput was 15.74 op/s (10 clients)
 -- In both setups, write performance was about 1/3 of read performance

For more complete information about performance and benchmark results,
visit Performance Test Disclosure.
Next Steps and Call for Action

• Open-source COSBench (work in progress)
• Keep developing COSBench to support more object storage
  software
• Use COSBench as a tool to analyze cloud object service
  performance (Swift and Ceph)
• Contact me (jiangang.duan@intel.com) if you want to
  evaluate COSBench – we would be glad to hear your feedback
  and make it better
Summary

• A new storage usage model is rising in the cloud computing age,
  and it needs a new benchmark
• COSBench is a new benchmark developed by Intel to
  measure cloud object storage service performance
• OpenStack Swift is a stable, high-performance open-source
  object storage implementation, but it still needs improvement
Disclaimers
•   INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY
    ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN
    INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL
    DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR
    WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT,
    COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
•   A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal
    injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL
    INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND
    EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES
    ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY
    WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE
    DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
•   Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the
    absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future
    definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The
    information here is subject to change without notice. Do not finalize a design with this information.
•   The products described in this document may contain design defects or errors known as errata which may cause the product to
    deviate from published specifications. Current characterized errata are available on request.
•   Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
•   Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by
    calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm
•   This document contains information on products in the design phase of development.
•   Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
•   *Other names and brands may be claimed as the property of others.
•   Copyright © 2012 Intel Corporation. All rights reserved.
cosbench-openstack.pdf

More Related Content

PDF
Cosbench apac
PDF
Methods of NoSQL database systems benchmarking
PDF
Cosbench apac
ODP
Benchmarking MongoDB and CouchBase
PDF
Couchbase Performance Benchmarking
PPTX
Hadoop on Virtual Machines
PDF
How to Increase Performance of Your Hadoop Cluster
PDF
Apache Hadoop on Virtual Machines
Cosbench apac
Methods of NoSQL database systems benchmarking
Cosbench apac
Benchmarking MongoDB and CouchBase
Couchbase Performance Benchmarking
Hadoop on Virtual Machines
How to Increase Performance of Your Hadoop Cluster
Apache Hadoop on Virtual Machines

What's hot (20)

PDF
Gluster Webinar: Introduction to GlusterFS
PDF
HBase Storage Internals
PDF
2012 11 Openstack China
PPT
Advanced Hadoop Tuning and Optimization - Hadoop Consulting
PPTX
Hadoop Summit 2012 | Optimizing MapReduce Job Performance
PDF
Webinar Sept 22: Gluster Partners with Redapt to Deliver Scale-Out NAS Storage
PDF
Virtualization Primer for Java Developers
ODP
Hug Hbase Presentation.
PDF
App cap2956v2-121001194956-phpapp01 (1)
PDF
HDFS Futures: NameNode Federation for Improved Efficiency and Scalability
PPT
Less01 architecture
PPTX
Optimizing your Infrastrucure and Operating System for Hadoop
PDF
Intro to GlusterFS Webinar - August 2011
PDF
PostgreSQL Query Cache - "pqc"
PDF
Big data on virtualized infrastucture
PDF
Oracle rac 10g best practices
PPTX
Oct 2012 HUG: Hadoop .Next (0.23) - Customer Impact and Deployment
PDF
Cloumon enterprise
PDF
Distributed Caching Essential Lessons (Ts 1402)
PDF
Future of cloud storage
Gluster Webinar: Introduction to GlusterFS
HBase Storage Internals
2012 11 Openstack China
Advanced Hadoop Tuning and Optimization - Hadoop Consulting
Hadoop Summit 2012 | Optimizing MapReduce Job Performance
Webinar Sept 22: Gluster Partners with Redapt to Deliver Scale-Out NAS Storage
Virtualization Primer for Java Developers
Hug Hbase Presentation.
App cap2956v2-121001194956-phpapp01 (1)
HDFS Futures: NameNode Federation for Improved Efficiency and Scalability
Less01 architecture
Optimizing your Infrastrucure and Operating System for Hadoop
Intro to GlusterFS Webinar - August 2011
PostgreSQL Query Cache - "pqc"
Big data on virtualized infrastucture
Oracle rac 10g best practices
Oct 2012 HUG: Hadoop .Next (0.23) - Customer Impact and Deployment
Cloumon enterprise
Distributed Caching Essential Lessons (Ts 1402)
Future of cloud storage
Ad

Similar to cosbench-openstack.pdf (20)

PDF
stackconf 2025 | How NVMe over TCP runs PostgreSQL in Quicksilver mode! by Sa...
PPTX
Varrow madness 2013 virtualizing sql presentation
PPTX
Ceph Community Talk on High-Performance Solid Sate Ceph
PPTX
Scale your Alfresco Solutions
PPTX
Ceph Day Taipei - Accelerate Ceph via SPDK
PDF
EVCache & Moneta (GoSF)
PPTX
Revisiting CephFS MDS and mClock QoS Scheduler
PDF
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
PDF
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
ODP
Nagios Conference 2012 - Dan Wittenberg - Case Study: Scaling Nagios Core at ...
PDF
Ceph on arm64 upload
ODP
Apache con 2013-hadoop
PPTX
Blue host using openstack in a traditional hosting environment
PPTX
Blue host openstacksummit_2013
PDF
Optimizing elastic search on google compute engine
PDF
Running ElasticSearch on Google Compute Engine in Production
PPTX
End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...
PPTX
CPN302 your-linux-ami-optimization-and-performance
PDF
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
PDF
Como creamos QuestDB Cloud, un SaaS basado en Kubernetes alrededor de QuestDB...
stackconf 2025 | How NVMe over TCP runs PostgreSQL in Quicksilver mode! by Sa...
Varrow madness 2013 virtualizing sql presentation
Ceph Community Talk on High-Performance Solid Sate Ceph
Scale your Alfresco Solutions
Ceph Day Taipei - Accelerate Ceph via SPDK
EVCache & Moneta (GoSF)
Revisiting CephFS MDS and mClock QoS Scheduler
Ceph Day Beijing - Ceph All-Flash Array Design Based on NUMA Architecture
Ceph Day Beijing - Ceph all-flash array design based on NUMA architecture
Nagios Conference 2012 - Dan Wittenberg - Case Study: Scaling Nagios Core at ...
Ceph on arm64 upload
Apache con 2013-hadoop
Blue host using openstack in a traditional hosting environment
Blue host openstacksummit_2013
Optimizing elastic search on google compute engine
Running ElasticSearch on Google Compute Engine in Production
End to End Processing of 3.7 Million Telemetry Events per Second using Lambda...
CPN302 your-linux-ami-optimization-and-performance
Accelerating Cassandra Workloads on Ceph with All-Flash PCIE SSDS
Como creamos QuestDB Cloud, un SaaS basado en Kubernetes alrededor de QuestDB...
Ad

More from OpenStack Foundation (20)

PDF
Sponsor Webinar - OpenStack Summit Vancouver 2018
PDF
OpenStack Summits 101: A Guide For Attendees
PPT
OpenStack Marketing Plan - Community Presentation
PPTX
OpenStack 5th Birthday - User Group Parties
PPTX
Liberty release: Preliminary marketing materials & messages
PPTX
OpenStack Foundation 2H 2015 Marketing Plan
PPTX
OpenStack Summit Tokyo Sponsor Webinar
PPTX
Cinder Updates - Liberty Edition
PPTX
Glance Updates - Liberty Edition
PPTX
Heat Updates - Liberty Edition
PPTX
Neutron Updates - Liberty Edition
PPTX
Nova Updates - Liberty Edition
PPTX
Sahara Updates - Liberty Edition
PDF
Searchlight Updates - Liberty Edition
PPTX
Trove Updates - Liberty Edition
PPTX
OpenStack: five years in
PDF
Swift Updates - Liberty Edition
PPTX
Congress Updates - Liberty Edition
PDF
Release Cycle Management Updates - Liberty Edition
PPT
OpenStack Day CEE 2015: Real-World Use Cases
Sponsor Webinar - OpenStack Summit Vancouver 2018
OpenStack Summits 101: A Guide For Attendees
OpenStack Marketing Plan - Community Presentation
OpenStack 5th Birthday - User Group Parties
Liberty release: Preliminary marketing materials & messages
OpenStack Foundation 2H 2015 Marketing Plan
OpenStack Summit Tokyo Sponsor Webinar
Cinder Updates - Liberty Edition
Glance Updates - Liberty Edition
Heat Updates - Liberty Edition
Neutron Updates - Liberty Edition
Nova Updates - Liberty Edition
Sahara Updates - Liberty Edition
Searchlight Updates - Liberty Edition
Trove Updates - Liberty Edition
OpenStack: five years in
Swift Updates - Liberty Edition
Congress Updates - Liberty Edition
Release Cycle Management Updates - Liberty Edition
OpenStack Day CEE 2015: Real-World Use Cases

cosbench-openstack.pdf

  • 1. COSBench: A benchmark Tool for Cloud Object Storage Services Jiangang.Duan@intel.com 2012.10 Updated June 2012
  • 2. Agenda • Self introduction • COSBench Introduction • Case Study to evaluate OpenStack* swift performance with COSBench • Next Step plan and Summary
  • 3. Self introduction • Jiangang Duan • Working in Cloud Infrastructure Technology Team (CITT) of Intel APAC R&D Ltd. (shanghai) • We are software team, Experienced at performance • To understand how to build an efficient/scale Cloud Solution with Open Source software (OpenStack*, Xen*, KVM*) • All of work will be contributed back to Community • Today we will talk about some efforts we try to measure OpenStack* Swift performance
  • 4. COSBench Introduction • COSBench is an Intel developed Benchmark to measure Cloud Object Storage Service performance – For S3, OpenStack Swift like Object Storage – Not for File system (NFS e.g) and Block Device system (EBS e.g.) • Requirement of a benchmark to measure Object Store performance • Cloud solution provider can use it to – Compare different Hardware/Software Stacks – Identify bottleneck and make optimization • We are in progress to Open Source COSBench COSBench is the IOMeter for Cloud Object Storage service
  • 5. COSBench Key Component Config.xml: • define workload with flexibility. Web Console Controller: Controller • Control all drivers Config.xml • Collect and aggregate stats. COSBench Driver: • generate load w/ config.xml parameters. Driver • can run tests w/o controller. Driver Web Console: Controller • Manage controller Node • Browse real-time stats • Communication is based on HTTP Storage Cloud (RESTful style) Storage Node
  • 6. Web Console Driver list Workload List History list Intuitive UI to get Overview.
  • 7. Workload Configuration Flexible load control object size distribution Read/Write Operations Workflow for complex stages Flexible configuration parameters is capable of complex Cases
  • 8. Performance Metrics • Throughput (Operations/s): the operations completed in one second • Response Time (in ms): the duration between operation initiation and completion. • Bandwidth (KB/s): the total data in KiB transferred in one second • Success Ratio (%): the ratio of successful operations
  • 9. Easy to be Extended • Easily extend for new storage system: • Support AuthAPI – OpenStack Swift Auth – Amplistor* – Adding More Context PUT GET DELETE StorageAPI Easy execution engine is capable of complex cases.
  • 10. Setup-SATA has higher CPU power System Configuration Setup-SAS has faster disks Client Node Client Node Client Node Client Node Client Node 2GbE 2GbE 2GbE 2GbE 2GbE Setup-SATA Both Setup - CPU: 2 * Client - CPU: 2 * 2.93GHz (4C/8T) Ethernet 2.7GHz (8C/16T) Network - MEM: 12GB DDR3 1333MHz 10GbE - NIC: 2 * Intel 82579 1GbE - MEM: 32GB DDR3 1333MHz - NIC: Intel 82599 10GbE bonding (mode=rr) Proxy Setup-SAS Node - CPU: 2 * Setup-SAS 2.3GHz (8C/16T) 10GbE - 12 * 70GB SAS (15000 rpm) - MEM: 64GB DDR3 1333MHz Storage - NIC: Intel 82599 10GbE Ethernet Network Setup-SATA - 14 * 1T SATA (5,400 rpm) 2GbE 2GbE 2GbE 2GbE 2GbE Storage Storage Storage Storage Storage Node Node Node Node Node
• 11. 128KB-Read (Setup-SAS)
  – SLA: 200 ms + 128 KB / 1 MBps = 325 ms

  # Workers   95%-ResTime (ms)   Throughput (op/s)
        5          20.00              369.49
       10          20.00              711.24
       20          20.00             1383.30
       40          30.00             2517.94
       80          46.67             3662.71
      160          56.67             4693.97
      320         106.67             5019.85
      640         230.00             4998.13
     1280         470.00             4947.15
     2560         923.33             4840.19

  – The bottleneck was identified to be the proxy's CPU: the CPU utilization at that node was ~100%
  – The peak throughput for Setup-SATA was 5576 op/s (640 workers)
  – Better CPU results in higher throughput
  [Chart: throughput and average/95% response time vs. total number of workers]
  For more complete information about performance and benchmark results, visit Performance Test Disclosure
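The SLA arithmetic above, and the question "what is the highest load point that still meets it?", can be sketched as follows (the `sla_ms` helper is an assumption; the data points are the highest three rows of the 128KB-read results):

```python
def sla_ms(object_kb, base_ms=200, link_mbps=1.0):
    """SLA = fixed 200 ms budget + ideal transfer time at 1 MB/s."""
    return base_ms + object_kb / 1024.0 / link_mbps * 1000.0

# workers: (95%-ResTime ms, throughput op/s), from the Setup-SAS table
results = {
    320: (106.67, 5019.85),
    640: (230.00, 4998.13),
    1280: (470.00, 4947.15),
}
limit = sla_ms(128)                              # 200 + 125 = 325.0 ms
within = {w: t for w, (rt, t) in results.items() if rt <= limit}
best = max(within)                               # 640 workers still meet the SLA
```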
• 12. 128KB-Write (Setup-SAS)
  – SLA: 200 ms + 128 KB / 1 MBps = 325 ms

  # Workers   95%-ResTime (ms)   Throughput (op/s)
        5          40.00              219.73
       10          40.00              391.14
       20          50.00              668.19
       40          70.00             1022.07
       80         100.00             1333.34
      160         143.33             1594.12
      320         370.00             1769.55
      640        1223.33             1773.12
     1280        1690.00             1871.58
     2560        3160.00             1886.81

  – The disks at the storage nodes had a significant impact on overall throughput
  – The peak throughput for Setup-SATA was only 155 op/s (20 workers), even after we had put the account/container DB files on SSD disks
  [Chart: throughput and average/95% response time vs. total number of workers]
  For more complete information about performance and benchmark results, visit Performance Test Disclosure
• 13. 128KB-Write (analysis)
  To fully understand the write performance:
  – Disk side: were the storage disks a bottleneck?
    • In Setup-SATA, all-SSD storage reached 1621 op/s, compared with 155 op/s on HDDs; more tests are needed to understand the disk impact
  – NIC side: was the storage network a bottleneck?
    • In Setup-SAS, running two sets of object daemons produced no performance change (why? TODO)
    • In Setup-SATA, all-SSD storage with 32/64 KB objects made the storage-node CPU the bottleneck (why? TODO)
  – Software side: was container updating a bottleneck?
    • In Setup-SATA with 1 container, moving account/container DBs from HDD to SSD improved throughput by 316%
    • In Setup-SATA with 100 containers, the same change improved throughput by 119%
• 14. 10MB-Read (Setup-SAS)
  – SLA: 200 ms + 10 MB / 10 MBps = 1200 ms

  # Workers   95%-ResTime (ms)   Throughput (op/s)
        5         270.00               34.69
       10         320.00               51.87
       20         480.00               59.91
       40         900.00               65.48
       80        1636.67               67.37
      160        3093.33               68.69
      320        5950.00               69.58
      640       11906.67               70.18
     1280       24090.00               71.41
     2560       52090.00               72.90

  – The bottleneck was identified to be the clients' NIC bandwidth
  – Doubling the clients' receive bandwidth doubled the throughput
  [Chart: throughput and average/95% response time vs. total number of workers]
  For more complete information about performance and benchmark results, visit Performance Test Disclosure
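A quick sanity check of the "client NIC" hypothesis is to convert the measured op/s into aggregate bandwidth and compare it with raw client line rate; this sketch assumes the 5-client, 2GbE-per-client topology from the system-configuration slide, and the real usable capacity would be lower than the raw figure after protocol overhead:

```python
def aggregate_mb_s(ops_per_s, object_mb):
    """Aggregate data rate implied by an op/s figure for fixed-size objects."""
    return ops_per_s * object_mb

achieved = aggregate_mb_s(72.90, 10)        # peak 10MB-read row: ~729 MB/s
raw_line_rate = 5 * 2 * 1000 / 8            # 5 clients x 2 Gb/s -> 1250 MB/s raw
utilization = achieved / raw_line_rate      # a large fraction of raw line rate
```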
• 15. 10MB-Write (Setup-SAS)
  – SLA: 200 ms + 10 MB / 10 MBps = 1200 ms

  # Workers   95%-ResTime (ms)   Throughput (op/s)
        5         536.67               13.12
       10         936.67               17.50
       20        1596.67               20.28
       40        2786.67               21.30
       80        5133.33               21.38
      160        9800.00               22.21
      320       18623.33               23.19
      640       41576.67               23.23
     1280      102090.00               21.55
     2560      200306.67               23.71

  – The bottleneck might be the storage nodes' NICs
  – In Setup-SATA, the peak throughput was 15.74 op/s (10 clients)
  – In both setups, the write performance was 1/3 of the read performance
  [Chart: throughput and average/95% response time vs. total number of workers]
  For more complete information about performance and benchmark results, visit Performance Test Disclosure
• 16. Next Steps and Call for Action
  – Open-source COSBench (work in progress)
  – Keep developing COSBench to support more object storage software
  – Use COSBench as a tool to analyze cloud object service performance (Swift and Ceph)
  – Contact me (jiangang.duan@intel.com) if you want to evaluate COSBench; we would be glad to hear your feedback and make it better
• 17. Summary
  – A new storage usage model is rising in the cloud computing age, and it needs a new benchmark
  – COSBench is a new benchmark developed by Intel to measure the performance of cloud object storage services
  – OpenStack Swift is a stable, high-performance open-source object storage implementation, though it still has room for improvement
• 18. Disclaimers
  – INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
  – A "Mission Critical Application" is any application in which failure of the Intel Product could result, directly or indirectly, in personal injury or death. SHOULD YOU PURCHASE OR USE INTEL'S PRODUCTS FOR ANY SUCH MISSION CRITICAL APPLICATION, YOU SHALL INDEMNIFY AND HOLD INTEL AND ITS SUBSIDIARIES, SUBCONTRACTORS AND AFFILIATES, AND THE DIRECTORS, OFFICERS, AND EMPLOYEES OF EACH, HARMLESS AGAINST ALL CLAIMS COSTS, DAMAGES, AND EXPENSES AND REASONABLE ATTORNEYS' FEES ARISING OUT OF, DIRECTLY OR INDIRECTLY, ANY CLAIM OF PRODUCT LIABILITY, PERSONAL INJURY, OR DEATH ARISING IN ANY WAY OUT OF SUCH MISSION CRITICAL APPLICATION, WHETHER OR NOT INTEL OR ITS SUBCONTRACTOR WAS NEGLIGENT IN THE DESIGN, MANUFACTURE, OR WARNING OF THE INTEL PRODUCT OR ANY OF ITS PARTS.
  – Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information.
  – The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
  – Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
  – Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling 1-800-548-4725, or go to: http://www.intel.com/design/literature.htm%20
  – This document contains information on products in the design phase of development.
  – Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
  – *Other names and brands may be claimed as the property of others.
  – Copyright © 2012 Intel Corporation. All rights reserved.