ceph – a unified distributed storage system


                    sage weil
           cloudopen – august 29, 2012
outline
●   why you should care
●   what is it, what it does
●   how it works
    ●   architecture
●   how you can use it
    ●   librados
    ●   radosgw
    ●   RBD
    ●   file system
●   who we are, why we do this
why should you care about another storage system?
requirements
●   diverse storage needs
    ●   object storage
    ●   block devices (for VMs) with snapshots, cloning
    ●   shared file system with POSIX, coherent caches
    ●   structured data... files, block devices, or objects?
●   scale
    ●   terabytes, petabytes, exabytes
    ●   heterogeneous hardware
    ●   reliability and fault tolerance
time
●   ease of administration
●   no manual data migration, load balancing
●   painless scaling
    ●   expansion and contraction
    ●   seamless migration
cost
●   linear function of size or performance
●   incremental expansion
    ●   no fork-lift upgrades
●   no vendor lock-in
    ●   choice of hardware
    ●   choice of software
●   open
what is ceph?
unified storage system
●   objects
    ●   native
    ●   RESTful
●   block
    ●   thin provisioning, snapshots, cloning
●   file
    ●   strong consistency, snapshots
   APP           APP          HOST/VM        CLIENT
    |             |              |              |
LIBRADOS       RADOSGW          RBD          CEPH FS
------------------------------------------------------
                        RADOS

LIBRADOS: a library allowing apps to directly access RADOS,
with support for C, C++, Java, Python, Ruby, and PHP
RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift
RBD: a reliable and fully-distributed block device, with a Linux
kernel client and a QEMU/KVM driver
CEPH FS: a POSIX-compliant distributed file system, with a Linux
kernel client and support for FUSE
RADOS: a reliable, autonomous, distributed object store comprised of
self-healing, self-managing, intelligent storage nodes
open source
●   LGPLv2
    ●   copyleft
    ●   ok to link to proprietary code
●   no copyright assignment
    ●   no dual licensing
    ●   no “enterprise-only” feature set
●   active community
●   commercial support
distributed storage system
●   data center scale
    ●   10s to 10,000s of machines
    ●   terabytes to exabytes
●   fault tolerant
    ●   no single point of failure
    ●   commodity hardware
●   self-managing, self-healing
ceph object model
●   pools
    ●   1s to 100s
    ●   independent namespaces or object collections
    ●   replication level, placement policy
●   objects
    ●   bazillions
    ●   blob of data (bytes to gigabytes)
    ●   attributes (e.g., “version=12”; bytes to kilobytes)
    ●   key/value bundle (bytes to gigabytes)
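The object model above can be sketched as a toy in-memory store. This is an illustration of the concepts only — the class and method names here are hypothetical, not the librados API:

```python
# Toy sketch of the Ceph object model: pools are independent namespaces
# with their own policy; each object carries a data blob, small
# attributes (xattrs), and a key/value bundle (omap).

class TObject:
    def __init__(self):
        self.data = b""     # blob of data (bytes to gigabytes)
        self.attrs = {}     # attributes, e.g. "version" -> "12"
        self.omap = {}      # key/value bundle

class Pool:
    """An independent namespace of objects with its own replication level."""
    def __init__(self, replication=3):
        self.replication = replication
        self.objects = {}   # object name -> TObject

    def write_full(self, name, data):
        self.objects.setdefault(name, TObject()).data = data

    def setxattr(self, name, key, value):
        self.objects.setdefault(name, TObject()).attrs[key] = value

    def read(self, name):
        return self.objects[name].data

# a cluster holds 1s to 100s of pools
cluster = {"rbd": Pool(replication=3), "data": Pool(replication=2)}
cluster["data"].write_full("greeting", b"hello ceph")
cluster["data"].setxattr("greeting", "version", "12")
print(cluster["data"].read("greeting"))   # b'hello ceph'
```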
why start with objects?
●   more useful than (disk) blocks
    ●   names in a single flat namespace
    ●   variable size
    ●   simple API with rich semantics
●   more scalable than files
    ●   no hard-to-distribute hierarchy
    ●   update semantics do not span objects
    ●   workload is trivially parallel
[diagram: one human, one computer, a stack of disks]

[diagram: several humans sharing one computer and its disks]

[diagram: many humans funneled through a single (computer) to many disks
— "(actually more like this…)"]

[diagram: many humans, each talking to many computer+disk pairs]
OSD    OSD    OSD    OSD    OSD

 FS     FS     FS     FS     FS     (btrfs, xfs, ext4)

DISK   DISK   DISK   DISK   DISK

  M             M             M
Monitors:
• Maintain cluster membership and state
• Provide consensus for distributed decision-making
• Small, odd number
• These do not serve stored objects to clients

Object Storage Daemons (OSDs):
• At least three in a cluster
• One per disk or RAID group
• Serve stored objects to clients
• Intelligently peer to perform replication tasks
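As a concrete sketch, a minimal ceph.conf describing such a cluster might look like the following; hostnames and addresses are placeholders, and exact option spellings varied across early releases:

```ini
[global]
# small, odd number of monitors forming the consensus quorum
mon initial members = a, b, c
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3

[osd]
# one ceph-osd per disk or RAID group; at least three per cluster
osd journal size = 1024
```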
[diagram: a client talking directly to the OSD cluster; monitors M
maintain the cluster map off to the side]
data distribution
●   all objects are replicated N times
●   objects are automatically placed, balanced, migrated
    in a dynamic cluster
●   must consider physical infrastructure
    ●   ceph-osds on hosts in racks in rows in data centers

●   three approaches
    ●   pick a spot; remember where you put it
    ●   pick a spot; write down where you put it
    ●   calculate where to put it, where to find it
CRUSH
•   Pseudo-random placement
    algorithm
•   Fast calculation, no lookup
•   Repeatable, deterministic
•   Ensures even distribution
•   Stable mapping
    •   Limited data migration
•   Rule-based configuration
    •   specifiable replication
    •   infrastructure topology aware
    •   allows weighting
10 10 01 01 10 10 01 11 01 10        (objects)

          hash(object name) % num pg

10 10 01 01 10 10 01 11 01 10        (placement groups)

          CRUSH(pg, cluster state, policy)

                                     (OSDs)
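The two-step mapping above can be sketched in a few lines of Python. Step one is the literal hash-modulo from the slide. For step two, CRUSH itself is a hierarchical, topology-aware algorithm; as a stand-in, this sketch uses rendezvous (highest-random-weight) hashing, which shares the properties listed earlier: deterministic, no lookup table, and a stable mapping where adding an OSD remaps only some placement groups.

```python
import hashlib

def h(*parts):
    """Deterministic 64-bit hash of the joined parts."""
    s = "/".join(str(p) for p in parts).encode()
    return int.from_bytes(hashlib.sha256(s).digest()[:8], "big")

NUM_PG = 64

def object_to_pg(name):
    # step 1: hash(object name) % num pg
    return h(name) % NUM_PG

def pg_to_osds(pg, osds, replicas=2):
    # step 2 (CRUSH stand-in): rank OSDs by a pseudo-random score;
    # repeatable from the pg id and OSD list alone, no lookup table
    ranked = sorted(osds, key=lambda osd: h(pg, osd), reverse=True)
    return ranked[:replicas]

osds = ["osd0", "osd1", "osd2", "osd3"]
pg = object_to_pg("myobject")
print(pg, pg_to_osds(pg, osds))

# stable mapping: adding an OSD moves only a fraction of the PGs
before = {p: pg_to_osds(p, osds) for p in range(NUM_PG)}
after = {p: pg_to_osds(p, osds + ["osd4"]) for p in range(NUM_PG)}
moved = sum(before[p] != after[p] for p in range(NUM_PG))
print(f"{moved}/{NUM_PG} PGs remapped")
```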
RADOS
●   monitors publish osd map that describes cluster state
    ●   ceph-osd node status (up/down, weight, IP)
    ●   CRUSH function specifying desired data distribution
●   object storage daemons (OSDs)
    ●   safely replicate and store objects
    ●   migrate data as the cluster changes over time
    ●   coordinate based on shared view of reality
●   decentralized, distributed approach allows
    ●   massive scales (10,000s of servers or more)
    ●   the illusion of a single copy with consistent behavior
CLIENT

         ??

CloudOpen - 08/29/2012
[architecture diagram repeated, highlighting LIBRADOS]
APP
LIBRADOS
    |
  native
    |
M        M        M
LIBRADOS
• Provides direct access to RADOS for applications
• C, C++, Python, PHP, Java
• No HTTP overhead
[architecture diagram repeated, highlighting RADOSGW]
  APP            APP
     \   REST   /
RADOSGW      RADOSGW
 LIBRADOS     LIBRADOS
      \ native /
   M        M        M
RADOS Gateway:
• REST-based interface to RADOS
• Supports buckets, accounting
• Compatible with S3 and Swift applications
[architecture diagram repeated, highlighting RBD]
[diagram: many computers, each with its own disks]

[diagram: VMs running against individual computers and their local disks]

VM
VIRTUALIZATION CONTAINER
  LIBRBD
  LIBRADOS
      |
  M        M        M

CONTAINER          VM          CONTAINER
  LIBRBD                         LIBRBD
  LIBRADOS                       LIBRADOS
      |
  M        M        M

HOST
  KRBD (KERNEL MODULE)
  LIBRADOS
      |
  M        M        M
RADOS Block Device:
• Storage of virtual disks in RADOS
• Decouples VMs and containers
 • Live migration!
• Images are striped across the cluster
• Snapshots!
• Support in
  • QEMU/KVM
  • OpenStack, CloudStack
  • Mainline Linux kernel
how do you spin up thousands of VMs instantly and efficiently?
instant copy

[diagram: a clone shares all of its parent's objects at creation:
144 + 0 + 0 + 0 + 0 = 144]

write

[diagram: client writes allocate new objects in the clone:
144 + 4 = 148]

read

[diagram: client reads are served from the clone's own objects where
present, and from the parent otherwise: 144 + 4 = 148]
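The copy-on-write behavior in the last three slides can be captured in a toy model: a clone starts with no objects of its own, writes allocate new objects in the clone, and reads fall through to the parent wherever the clone has not written. Illustrative only — the real librbd works on striped RADOS objects, not an in-memory dict:

```python
# Toy copy-on-write clone, mirroring the instant-copy slides.

class Image:
    def __init__(self, blocks=None, parent=None):
        self.blocks = dict(blocks or {})   # block index -> bytes
        self.parent = parent

    def clone(self):
        return Image(parent=self)          # instant: nothing is copied

    def write(self, idx, data):
        self.blocks[idx] = data            # copy-on-write: new object

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]
        if self.parent is not None:
            return self.parent.read(idx)   # fall through to parent
        return b"\0"                       # never-written block

    def object_count(self):
        own = len(self.blocks)
        return own + (self.parent.object_count() if self.parent else 0)

parent = Image({i: b"p" for i in range(144)})   # 144 objects, as on the slide
child = parent.clone()
print(child.object_count())        # 144: nothing copied at clone time
for i in range(4):
    child.write(i, b"c")           # 4 writes allocate 4 new objects
print(child.object_count())        # 148 = 144 + 4
print(child.read(0), child.read(10))   # own object, then the parent's
```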
[architecture diagram repeated, highlighting CEPH FS]
[diagram: the client sends metadata operations to the MDS cluster and
reads/writes file data (01, 10) directly to RADOS; monitors M alongside]
Metadata Server
• Manages metadata for a POSIX-compliant shared filesystem
  • Directory hierarchy
  • File metadata (owner, timestamps, mode, etc.)
• Stores metadata in RADOS
• Does not serve file data to clients
• Only required for shared filesystem
one tree




three metadata servers


                               ??
DYNAMIC SUBTREE PARTITIONING
recursive accounting
●   ceph-mds tracks recursive directory stats
    ●   file sizes
    ●   file and directory counts
    ●   modification time
●   virtual xattrs present full stats
●   efficient
        $ ls -alSh | head
        total 0
        drwxr-xr-x 1 root            root      9.7T 2011-02-04 15:51 .
        drwxr-xr-x 1 root            root      9.7T 2010-12-16 15:06 ..
        drwxr-xr-x 1 pomceph         pg4194980 9.6T 2011-02-24 08:25 pomceph
        drwxr-xr-x 1 mcg_test1       pg2419992  23G 2011-02-02 08:57 mcg_test1
        drwx--x--- 1 luko            adm        19G 2011-01-21 12:17 luko
        drwx--x--- 1 eest            adm        14G 2011-02-04 16:29 eest
        drwxr-xr-x 1 mcg_test2       pg2419992 3.0G 2011-02-02 09:34 mcg_test2
        drwx--x--- 1 fuzyceph        adm       1.5G 2011-01-18 10:46 fuzyceph
        drwxr-xr-x 1 dallasceph      pg275     596M 2011-01-14 10:06 dallasceph
snapshots
●   volume or subvolume snapshots unusable at petabyte scale
    ●   snapshot arbitrary subdirectories
●   simple interface
    ●   hidden '.snap' directory
    ●   no special tools


        $ mkdir foo/.snap/one      # create snapshot
        $ ls foo/.snap
        one
        $ ls foo/bar/.snap
        _one_1099511627776         # parent's snap name is mangled
        $ rm foo/myfile
        $ ls -F foo
        bar/
        $ ls -F foo/.snap/one
        myfile bar/
        $ rmdir foo/.snap/one      # remove snapshot
multiple protocols, implementations
●   Linux kernel client
    ●   mount -t ceph 1.2.3.4:/ /mnt
    ●   export (NFS), Samba (CIFS)
●   ceph-fuse
●   libcephfs.so
    ●   your app
    ●   Samba (CIFS)
    ●   Ganesha (NFS)
    ●   Hadoop (map/reduce)

[diagram: NFS clients go through Ganesha on libcephfs; SMB/CIFS clients
through Samba on libcephfs; your app and Hadoop link libcephfs directly;
ceph-fuse sits on the kernel's fuse interface]
[architecture diagram repeated, with maturity labels:
LIBRADOS: AWESOME; RADOSGW: AWESOME; RBD: AWESOME;
RADOS: AWESOME; CEPH FS: NEARLY AWESOME]
why we do this
●   limited options for scalable open source storage
●   proprietary solutions
    ●   expensive
    ●   don't scale (well or out)
    ●   marry hardware and software


●   industry needs to change
who we are
●   Ceph created at UC Santa Cruz (2007)
●   supported by DreamHost (2008-2011)
●   Inktank (2012)
    ●   Los Angeles, Sunnyvale, San Francisco, remote
●   growing user and developer community
    ●   Linux distros, users, cloud stacks, SIs, OEMs


                       http://ceph.com/
thanks
BoF tonight @ 5:15




sage weil
sage@inktank.com     http://github.com/ceph
@liewegas            http://ceph.com/
why we like btrfs
●   pervasive checksumming
●   snapshots, copy-on-write
●   efficient metadata (xattrs)
●   inline data for small files
●   transparent compression
●   integrated volume management
    ●   software RAID, mirroring, error recovery
    ●   SSD-aware
●   online fsck
●   active development community

CloudOpen - 08/29/2012

  • 1. ceph – a unified distributed storage system sage weil cloudopen – august 29, 2012
  • 2. outline ● why you should care ● what is it, what it does ● how it works ● architecture ● how you can use it ● librados ● radosgw ● RBD ● file system ● who we are, why we do this
  • 3. why should you care about another storage system?
  • 4. requirements ● diverse storage needs ● object storage ● block devices (for VMs) with snapshots, cloning ● shared file system with POSIX, coherent caches ● structured data... files, block devices, or objects? ● scale ● terabytes, petabytes, exabytes ● heterogeneous hardware ● reliability and fault tolerance
  • 5. time ● ease of administration ● no manual data migration, load balancing ● painless scaling ● expansion and contraction ● seamless migration
  • 6. cost ● linear function of size or performance ● incremental expansion ● no fork-lift upgrades ● no vendor lock-in ● choice of hardware ● choice of software ● open
  • 8. unified storage system ● objects ● native ● RESTful ● block ● thin provisioning, snapshots, cloning ● file ● strong consistency, snapshots
  • 9. [architecture diagram: apps, hosts/VMs, and clients atop RADOSGW, RBD, and CEPH FS, all over RADOS] LIBRADOS: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP. RADOSGW: a bucket-based REST gateway, compatible with S3 and Swift. RBD: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver. CEPH FS: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE. RADOS: a reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes
  • 10. open source ● LGPLv2 ● copyleft ● ok to link to proprietary code ● no copyright assignment ● no dual licensing ● no “enterprise-only” feature set ● active community ● commercial support
  • 11. distributed storage system ● data center scale ● 10s to 10,000s of machines ● terabytes to exabytes ● fault tolerant ● no single point of failure ● commodity hardware ● self-managing, self-healing
  • 12. ceph object model ● pools ● 1s to 100s ● independent namespaces or object collections ● replication level, placement policy ● objects ● bazillions ● blob of data (bytes to gigabytes) ● attributes (e.g., “version=12”; bytes to kilobytes) ● key/value bundle (bytes to gigabytes)
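The pool/object model in slide 12 can be sketched as a toy in-memory structure. The names here (`Pool`, `RadosObject`) are illustrative, not the librados API; they just mirror the three pieces each object carries: a data blob, attributes, and a key/value bundle.

```python
class RadosObject:
    """One object: a blob of data, attributes, and a key/value bundle."""
    def __init__(self):
        self.data = b""       # blob of data (bytes to gigabytes)
        self.xattrs = {}      # attributes, e.g. {"version": "12"}
        self.omap = {}        # key/value bundle (bytes to gigabytes)

class Pool:
    """An independent namespace with its own replication level and
    placement policy; objects live in a single flat namespace."""
    def __init__(self, name, replication=3):
        self.name = name
        self.replication = replication
        self.objects = {}     # flat namespace: object name -> RadosObject

    def write_full(self, oid, data):
        obj = self.objects.setdefault(oid, RadosObject())
        obj.data = data

    def setxattr(self, oid, key, value):
        self.objects[oid].xattrs[key] = value

pool = Pool("mypool", replication=3)
pool.write_full("foo", b"hello")
pool.setxattr("foo", "version", "12")
```

The flat namespace is the point of slide 13: with no directory hierarchy to keep consistent, placement and updates stay trivially parallel.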
  • 13. why start with objects? ● more useful than (disk) blocks ● names in a single flat namespace ● variable size ● simple API with rich semantics ● more scalable than files ● no hard-to-distribute hierarchy ● update semantics do not span objects ● workload is trivially parallel
  • 14. [diagram: a human talks to one computer backed by many disks]
  • 15. [diagram: several humans share one computer backed by many disks]
  • 16. [diagram: many humans and many disks crowded around one computer "(actually more like this…)"]
  • 17. [diagram: many computers, each paired with its own disk, serving a few humans]
  • 18. [diagram: OSDs, each running on a local filesystem (btrfs, xfs, or ext4) over a disk, alongside three monitors (M)]
  • 19. Monitors: • Maintain cluster membership and state • Provide consensus for distributed decision-making • Small, odd number • These do not serve stored objects to clients Object Storage Daemons (OSDs): • At least three in a cluster • One per disk or RAID group • Serve stored objects to clients • Intelligently peer to perform replication tasks
  • 20. [diagram: a human interacting with the three monitors (M)]
  • 21. data distribution ● all objects are replicated N times ● objects are automatically placed, balanced, migrated in a dynamic cluster ● must consider physical infrastructure ● ceph-osds on hosts in racks in rows in data centers ● three approaches ● pick a spot; remember where you put it ● pick a spot; write down where you put it ● calculate where to put it, where to find it
  • 22. CRUSH • Pseudo-random placement algorithm • Fast calculation, no lookup • Repeatable, deterministic • Ensures even distribution • Stable mapping • Limited data migration • Rule-based configuration • specifiable replication • infrastructure topology aware • allows weighting
  • 23. [diagram: object names hash into placement groups via hash(object name) % num pg; CRUSH(pg, cluster state, policy) then maps each placement group to OSDs]
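The two-step mapping in slide 23 can be sketched in Python. The rendezvous-style hash below is only a stand-in for CRUSH, which is far more sophisticated (topology-aware, weighted, rule-based), but it shows the same properties: fast calculation, no lookup table, repeatable results, and limited data migration when OSDs come and go.

```python
import hashlib

NUM_PG = 16  # illustrative placement-group count

def object_to_pg(name: str) -> int:
    # step 1: hash(object name) % num pg
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    return h % NUM_PG

def pg_to_osds(pg: int, osds: list[int], replicas: int = 2) -> list[int]:
    # step 2 stand-in for CRUSH(pg, cluster state, policy): rank OSDs by a
    # deterministic per-(pg, osd) hash and take the top N. Removing an OSD
    # only remaps the PGs that were actually stored on it.
    def score(osd: int) -> int:
        return int(hashlib.md5(f"{pg}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=score)[:replicas]

pg = object_to_pg("foo")
placement = pg_to_osds(pg, osds=[0, 1, 2, 3, 4])
```

Because placement is computed rather than looked up, any client with the current cluster map can find an object's OSDs independently, which is what lets RADOS avoid a central metadata bottleneck.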
  • 24. [diagram: the same placement groups mapped across OSDs]
  • 25. RADOS ● monitors publish osd map that describes cluster state ● ceph-osd node status (up/down, weight, IP) ● CRUSH function specifying desired data distribution ● object storage daemons (OSDs) ● safely replicate and store object ● migrate data as the cluster changes over time ● coordinate based on shared view of reality ● decentralized, distributed approach allows ● massive scales (10,000s of servers or more) ● the illusion of a single copy with consistent behavior
  • 29. CLIENT ??
  • 30. [the slide-9 architecture diagram, with LIBRADOS highlighted: a library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP]
  • 31. [diagram: an APP using LIBRADOS to talk natively to the monitors and OSDs]
  • 32. LIBRADOS • Provides direct access to RADOS for applications • C, C++, Python, PHP, Java • No HTTP overhead
  • 33. [the slide-9 architecture diagram, with RADOSGW highlighted: a bucket-based REST gateway, compatible with S3 and Swift]
  • 34. [diagram: apps speak REST to RADOSGW instances, which use LIBRADOS to talk natively to the cluster]
  • 35. RADOS Gateway: • REST-based interface to RADOS • Supports buckets, accounting • Compatible with S3 and Swift applications
  • 36. [the slide-9 architecture diagram, with RBD highlighted: a reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver]
  • 37. [diagram: many computers, each with its own disk]
  • 38. [diagram: VMs running alongside the computer/disk pairs]
  • 39. [diagram: a VM in a virtualization container using LIBRBD over LIBRADOS to reach the cluster]
  • 40. [diagram: a VM moving between containers, each using LIBRBD over LIBRADOS]
  • 41. [diagram: a host using KRBD (kernel module) with LIBRADOS to reach the cluster]
  • 42. RADOS Block Device: • Storage of virtual disks in RADOS • Decouples VMs and containers • Live migration! • Images are striped across the cluster • Snapshots! • Support in • Qemu/KVM • OpenStack, CloudStack • Mainline Linux kernel
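The "striped across the cluster" point in slide 42 can be sketched: a byte offset in a virtual disk image maps to a RADOS object name plus an offset within that object, so different parts of the disk land on different OSDs. The 4 MB object size is the traditional RBD default; the naming scheme below is simplified, not the exact on-wire format.

```python
OBJECT_SIZE = 4 * 1024 * 1024  # 4 MB, the traditional RBD default

def rbd_locate(image: str, offset: int) -> tuple[str, int]:
    """Map a byte offset in a virtual disk image to (object name, offset
    within that object). Simplified naming; real RBD uses a per-image
    block-name prefix and hex-encoded object numbers."""
    obj_no = offset // OBJECT_SIZE
    return (f"rbd_data.{image}.{obj_no:016x}", offset % OBJECT_SIZE)

# 10 MB into the image falls in object 2, at offset 2 MB
name, off = rbd_locate("myvm", 10 * 1024 * 1024)
```

Since each object is placed independently by CRUSH, a busy virtual disk spreads its I/O across many OSDs instead of hammering one server.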
  • 43. HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?
  • 44. instant copy: a clone shares its parent's 144 objects; 144 + 0 = 144 (no data copied)
  • 45. write: a client write to the clone allocates new objects in the clone only; 144 + 4 = 148
  • 46. read: reads of unmodified objects fall through to the parent; 144 + 4 = 148
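Slides 44-46 describe copy-on-write cloning, which is how thousands of VMs can be spun up instantly from one golden image. A minimal sketch of the layering (illustrative, not the librbd implementation):

```python
class Image:
    """Copy-on-write clone sketch: a child image stores only the objects
    it has written; reads of everything else fall through to the parent."""

    def __init__(self, parent=None):
        self.parent = parent
        self.objects = {}            # obj_no -> data written in this layer

    def clone(self):
        return Image(parent=self)    # instant: copies no data at all

    def write(self, obj_no, data):
        self.objects[obj_no] = data  # writes land in the child layer only

    def read(self, obj_no):
        if obj_no in self.objects:
            return self.objects[obj_no]
        return self.parent.read(obj_no) if self.parent else None

base = Image()
for i in range(144):                 # a parent image with 144 objects
    base.write(i, b"base")
child = base.clone()                 # 144 + 0 = 144: nothing copied
child.write(0, b"new")               # only modified objects are added
```

The numbers match the slides: the clone costs nothing up front, and storage grows only with the objects each clone actually rewrites.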
  • 47. [the slide-9 architecture diagram, with CEPH FS highlighted: a POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE]
  • 48. [diagram: a client sends metadata operations to the metadata servers and file data directly to the OSDs]
  • 49. [diagram: the cluster's monitors (M)]
  • 50. Metadata Server • Manages metadata for a POSIX-compliant shared filesystem • Directory hierarchy • File metadata (owner, timestamps, mode, etc.) • Stores metadata in RADOS • Does not serve file data to clients • Only required for shared filesystem
  • 57. recursive accounting ● ceph-mds tracks recursive directory stats ● file sizes ● file and directory counts ● modification time ● virtual xattrs present full stats ● efficient $ ls -alSh | head total 0 drwxr-xr-x 1 root root 9.7T 2011-02-04 15:51 . drwxr-xr-x 1 root root 9.7T 2010-12-16 15:06 .. drwxr-xr-x 1 pomceph pg4194980 9.6T 2011-02-24 08:25 pomceph drwxr-xr-x 1 mcg_test1 pg2419992 23G 2011-02-02 08:57 mcg_test1 drwx--x--- 1 luko adm 19G 2011-01-21 12:17 luko drwx--x--- 1 eest adm 14G 2011-02-04 16:29 eest drwxr-xr-x 1 mcg_test2 pg2419992 3.0G 2011-02-02 09:34 mcg_test2 drwx--x--- 1 fuzyceph adm 1.5G 2011-01-18 10:46 fuzyceph drwxr-xr-x 1 dallasceph pg275 596M 2011-01-14 10:06 dallasceph
  • 58. snapshots ● volume or subvolume snapshots unusable at petabyte scale ● snapshot arbitrary subdirectories ● simple interface ● hidden '.snap' directory ● no special tools $ mkdir foo/.snap/one # create snapshot $ ls foo/.snap one $ ls foo/bar/.snap _one_1099511627776 # parent's snap name is mangled $ rm foo/myfile $ ls -F foo bar/ $ ls -F foo/.snap/one myfile bar/ $ rmdir foo/.snap/one # remove snapshot
  • 59. multiple protocols, implementations ● Linux kernel client: mount -t ceph 1.2.3.4:/ /mnt ● ceph-fuse ● libcephfs.so: link your app directly ● re-export via Samba (SMB/CIFS), Ganesha (NFS), Hadoop (map/reduce)
  • 60. [the slide-9 architecture diagram with maturity labels: LIBRADOS, RADOSGW, and RBD are AWESOME; CEPH FS is NEARLY AWESOME; RADOS is AWESOME]
  • 61. why we do this ● limited options for scalable open source storage ● proprietary solutions ● expensive ● don't scale (well or out) ● marry hardware and software ● industry needs to change
  • 62. who we are ● Ceph created at UC Santa Cruz (2007) ● supported by DreamHost (2008-2011) ● Inktank (2012) ● Los Angeles, Sunnyvale, San Francisco, remote ● growing user and developer community ● Linux distros, users, cloud stacks, SIs, OEMs http://ceph.com/
  • 63. thanks BoF tonight @ 5:15 sage weil sage@inktank.com http://github.com/ceph @liewegas http://ceph.com/
  • 65. why we like btrfs ● pervasive checksumming ● snapshots, copy-on-write ● efficient metadata (xattrs) ● inline data for small files ● transparent compression ● integrated volume management ● software RAID, mirroring, error recovery ● SSD-aware ● online fsck ● active development community