RED HAT GLUSTER STORAGE:
DIRECTION, ROADMAP AND
USE-CASES
Sayandeb Saha
Product Management, Red Hat Storage
A Quick Look Back
The Present
What's Next: The Future
Gluster Upstream Roadmap
Red Hat Gluster Storage Integration Roadmap
AGENDA
THE RED HAT STORAGE PORTFOLIO
(Diagram: open-source software, Ceph and Gluster management plus Ceph and Gluster data services, running on standard hardware.)
● Share-nothing, scale-out architecture provides durability and adapts to changing demands
● Self-managing and self-healing features reduce operational overhead
● Standards-based interfaces and full APIs ease integration with applications and systems
● Supported by the experts at Red Hat
Nimble file storage for petabyte-scale workloads

TARGET USE CASES
● Enterprise File Sharing
● Analytics: machine analytics with Splunk, big data analytics with Hadoop
● Rich Media & Archival: media streaming, active archives
● Enterprise Virtualization

Purpose-built as a scale-out file store with a straightforward architecture suitable for public, private, and hybrid cloud. Simple to install and configure, with a minimal hardware footprint. Offers mature NFS, SMB and HDFS interfaces for enterprise use.

Customer Highlight: Intuit
Intuit uses Red Hat Gluster Storage to provide flexible, cost-effective storage for their industry-leading financial offerings.
RED HAT GLUSTER STORAGE
FOCUSED SET OF USE CASES
● ANALYTICS: Big Data analytics with Hadoop; machine data analytics with Splunk
● CLOUD INFRASTRUCTURE: virtual machine storage with OpenStack; object storage for tenant applications
● RICH MEDIA AND ARCHIVAL: cost-effective storage for rich media streaming; active archives
● SYNC AND SHARE: file sync and share with ownCloud
● ENTERPRISE VIRTUALIZATION: storage for conventional virtualization with RHEV
A QUICK LOOK BACK
Red Hat Storage Server 2.0 (GA June 2012)
● 6 updates released
● Features: VM image store, performance & stability
● Reached EOL in June 2014
Red Hat Storage Server 2.1
● 6 updates released. Planned EOL October 2015.
● Features: Quota, Geo-Rep, management console, SMB 2.0
LOOKING BACK
Launched September 2014
Key Features
● Volume snapshots for disk based backup
Management
● Monitoring using Nagios
● SNMP Support
● Rolling upgrade support, CDN delivery
Hadoop Plug-in for Hortonworks Data Platform 2.0.6
Scale
● 60 drives per server, 128 nodes per cluster
RHGS 3.0 (DENALI)
PREVIOUS MAJOR RELEASE
BETWEEN 3.0 AND TODAY
3.0 (Sept 2014): “Denali” release
3.0.1 (Oct 2014): Bug fixes
3.0.2 (Nov 2014): RHEL 6.6 support
3.0.3 (Jan 2015): HDP 2.1 (Tez, HBase)
3.0.4 (Mar 2015): RDMA, USS, IceHouse rebase for Swift, 3-way replication + JBOD, small-file performance
3.1 (Summer 2015): “Everglades” release
THE PRESENT
Key Features
● Erasure Coding, Tiering, Bit-Rot Detection
Protocols
● Active/Active NFSv4
● SMB 3 (protocol negotiation, in-flight encryption, server-side copy)
Red Hat Gluster Storage Console
● Device Management, Geo-Rep, Snapshot, Dashboard, Snapshot Scheduling
Security
● SSL based network encryption
● SELinux Enforcing Mode
Performance
● Rebalance performance enhancement (100% improvement)
RHGS 3.1 (EVERGLADES)
Data protection without using RAID &
replication
Break data into smaller fragments; store
and recover from a smaller number of
fragments
New volume types: Dispersed,
Distributed-Dispersed
Initial supported configurations: 8+3,
8+4 & 4+2
Algorithm used is Reed-Solomon
ERASURE CODING
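The capacity advantage over replication falls out of the arithmetic. The sketch below is illustrative only (the `dispersed` helper is invented, not Gluster code): a data+redundancy set can reconstruct a file from any `data` of its fragments, so it survives `redundancy` brick failures while storing far less than 3x.

```python
# Rough capacity math for the dispersed configurations named above.
# Illustrative sketch; the function name is invented, not a Gluster API.

def dispersed(data: int, redundancy: int) -> dict:
    """A data+redundancy erasure-coded set survives losing up to
    `redundancy` bricks, since any `data` fragments suffice to
    reconstruct the original file."""
    total = data + redundancy
    return {
        "bricks": total,
        "tolerated_failures": redundancy,
        "usable_fraction": data / total,
    }

# The slide's supported configurations, vs. 3-way replication
# (which tolerates 2 failures but leaves only 1/3 usable capacity):
for data, redundancy in [(8, 3), (8, 4), (4, 2)]:
    cfg = dispersed(data, redundancy)
    print(f"{data}+{redundancy}: {cfg['usable_fraction']:.0%} usable, "
          f"survives {cfg['tolerated_failures']} brick failures")
```

For example, 8+4 tolerates four failures (double what 3-way replication tolerates) while keeping two thirds of raw capacity usable instead of one third.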
SEQUENTIAL IO PERFORMANCE WITH
ERASURE CODING
TIERING
(Diagram: a Gluster volume within a trusted storage pool, divided into a hot tier and a cold tier.)
Automated data movement between hot
& cold tiers
Movement based on access frequency
● Hot tiers could be SSDs, cold tiers are
normal disks
Attach & detach a tier to and from an
existing Gluster volume
All I/Os forwarded to hot tier
Cache misses promote data to hot tier
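The promote-on-miss behavior described above can be modeled in a few lines. This is a toy illustration of the mechanism, not Gluster internals; all class and method names are invented.

```python
# Toy model of tiering: writes land on the hot tier, reads that miss
# the hot tier promote data from the cold tier. Invented names, not
# Gluster code; the real mover also tracks access frequency.

class TieredVolume:
    def __init__(self):
        self.hot = {}    # e.g. SSD-backed bricks
        self.cold = {}   # ordinary spinning-disk bricks

    def write(self, name, data):
        # All I/O is forwarded to the hot tier first.
        self.hot[name] = data

    def read(self, name):
        if name in self.hot:
            return self.hot[name]
        data = self.cold[name]
        # Cache miss: promote the file to the hot tier.
        self.hot[name] = data
        del self.cold[name]
        return data

    def demote_all(self):
        # Stand-in for the frequency-based mover that migrates
        # infrequently accessed files down to the cold tier.
        self.cold.update(self.hot)
        self.hot.clear()
```

A quick walk-through: after `write` the file sits in the hot tier, `demote_all` moves it to the cold tier, and the next `read` promotes it back.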
BIT ROT DETECTION
Protection against “silent data corruption”
Two fundamental procedures
● Signing using SHA256
● Scanning/scrubbing for rot
Lazy checksum maintenance
● (not inline to data path)
Checksum calculation when a file is “stable”
Alert/log on mismatched checksums
Scanning mode is admin selectable to control
impact
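The two procedures on this slide, lazy signing and later scrubbing, can be sketched with the standard library. Function names and the demo file are invented for illustration; Gluster stores the signature as metadata rather than passing it around.

```python
# Sketch of bit-rot detection: sign a "stable" file with SHA-256
# (lazily, outside the data path), then scrub later and flag mismatches.
# Invented helper names; not Gluster's actual signer/scrubber.
import hashlib
import os
import tempfile

def sign(path: str) -> str:
    """Checksum a file once it has gone quiet ("stable")."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def scrub(path: str, stored_checksum: str) -> bool:
    """Scrubber pass: recompute and compare; False means bit rot."""
    return sign(path) == stored_checksum

# Demo: sign a file, silently corrupt it, and let the scrubber catch it.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"important data")
checksum = sign(path)
clean = scrub(path, checksum)       # data intact: True
with open(path, "wb") as f:
    f.write(b"important d@ta")      # simulated silent corruption
rotted = not scrub(path, checksum)  # mismatch detected: True
os.remove(path)
print(clean, rotted)
```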
NFSv4 ACLs
PseudoFS support
Security
● Kerberos authentication using RPCSEC_GSS (krb5, krb5i, krb5p)
● Kerberos authentication using RPCSEC_GSS (spkm3)
Active/Active cluster-on-cluster
● With up to 16 active-active NFS heads
● Gluster storage pool scales out as usual
Ability to add and delete RHS volume exports to nfs-ganesha at run-time
Delegations to be supported in an update release
ACTIVE/ACTIVE NFSv4
(Diagram: four NFS heads forming a Ganesha cluster, clustered using Pacemaker & Corosync, exporting a Gluster volume from the trusted storage pool.)
ACTIVE/ACTIVE NFSv4
WHAT'S NEXT – THE FUTURE
TRADITIONAL STORAGE → NEXT-GENERATION STORAGE
● Manual provisioning of LUNs and volumes with some degree of automation → Self-service provisioning by lines of business and application developers
● Static selection of storage platforms based on application needs → Catalog-based storage service offerings with metering & charge-back
● Scale-up with some scale-out → Expand, shrink and scale on demand
● Little to no flexibility in selecting the optimum storage back-end for workloads → Policy-based storage back-end selection
STORAGE TRENDS:
MODERN IT INFRASTRUCTURES
Key elements for modern storage infrastructure (Manila, containers,
hyper-converged)
● Consumption Model (“File As A Service” or “NAS on Demand”)
● Dynamic provisioning, healing, tuning & balancing
● Security & multi-tenancy
● Cloud scale & stability at scale
● Performance: performant storage back-end for a wide variety of
workloads
● Advanced data services: tiering, compression, de-duplication
APPLIED TO GLUSTER
GLUSTER UPSTREAM ROADMAP
Gluster 4.0 will be our technology base for the next five years or so
● Based on 3.x experience
Design must be based on estimates of where we’ll be in 2021
Higher node counts and more complex networks
Heterogeneous storage
● e.g. NVMe for performance, SMR for capacity
New workloads and usage models
● Hyper-convergence
● Containers
● “XYZ as a service” and multi-tenancy
GLUSTERFS 4.0 CONTEXT
Declarative and constraint-based
● Not “this brick and this brick and this brick”
● More like “this big, replicated this many times, these features”
● We figure out which combinations match user requirements
Overlapping replica/stripe/erasure sets
● Ease requirement to add bricks in multiples
● Better load distribution during and after failures
Multiple replication levels (and types) within one volume
More sophisticated tiering, rack- or security-aware placement, etc.
FLEXIBLE STORAGE MANAGEMENT
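The "this big, replicated this many times" idea amounts to a constraint search over available bricks. The sketch below is an invented toy, not the planned 4.0 implementation: given declared size and replica count, it finds a brick combination on distinct hosts that satisfies both.

```python
# Toy constraint-based placement: the admin declares requirements and
# the system searches for matching brick combinations. Entirely
# invented names and data; not GlusterFS 4.0 code.
from itertools import combinations

def plan(bricks, size_gb, replicas):
    """Pick `replicas` bricks, each large enough, on distinct hosts."""
    for combo in combinations(bricks, replicas):
        hosts = {b["host"] for b in combo}
        if len(hosts) == replicas and all(b["free_gb"] >= size_gb
                                          for b in combo):
            return list(combo)
    return None  # no combination satisfies the constraints

bricks = [
    {"host": "n1", "free_gb": 500},
    {"host": "n1", "free_gb": 80},
    {"host": "n2", "free_gb": 300},
    {"host": "n3", "free_gb": 250},
]
print(plan(bricks, 200, 3))  # three bricks on three distinct hosts
```

Note how the distinct-host constraint automatically rejects placing two replicas on n1, which is exactly the kind of rule ("rack- or security-aware placement") the slide generalizes to.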
More scalable membership protocol
Stronger consistency for configuration data
Improved modularity
● Most-changed code in 3.x
● Increasing complexity and merge conflicts slow down the entire project
● Plugin approach allows independent development of new features
Prerequisite for other 4.0 features
GLUSTERD CHANGES
Client-side caching
● Fully consistent via “upcall” mechanism
Third-party copy
● Already part of NFS and SMB protocols
Multiple networks and Quality of Service
● Leverage faster private networks e.g. for replication
● Isolate internal traffic
● Protect tenants from each other
PERFORMANCE ENHANCEMENTS
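The upcall-based consistency mentioned above works by having the server push invalidations to clients instead of clients polling. A minimal sketch, with all names invented (this is not the Gluster upcall API):

```python
# Minimal model of consistent client-side caching via server "upcalls":
# when a file changes, the server notifies clients so they drop stale
# cached copies. Illustrative only; invented class and method names.

class Client:
    def __init__(self):
        self.cache = {}

    def on_upcall(self, name):
        # Server-initiated invalidation keeps the cache consistent.
        self.cache.pop(name, None)

class Server:
    def __init__(self):
        self.files = {}
        self.clients = []

    def write(self, name, data):
        self.files[name] = data
        for c in self.clients:
            c.on_upcall(name)          # notify every registered client

    def read(self, client, name):
        if name not in client.cache:   # miss: fetch and cache
            client.cache[name] = self.files[name]
        return client.cache[name]

server = Server()
alice, bob = Client(), Client()
server.clients = [alice, bob]

server.write("report.txt", "v1")
first = server.read(alice, "report.txt")   # miss: fetched and cached
server.write("report.txt", "v2")           # upcall invalidates caches
second = server.read(alice, "report.txt")  # miss again: sees fresh data
print(first, second)
```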
Theme: Storage/File as a Service
Use-Cases: Storage for containers, OpenStack Manila
Technology Enablers:
● Dynamic Provisioning
● At-rest Encryption
● Inode-quotas
● Cloud Scale & Stability at Scale
● Performant back-end for diverse workloads
● Autonomous operations
● Multi-tenancy
FUTURE FOCUS AREAS
RHGS 3.2 (Fundy)
H1-CY2016
Baseline
● GlusterFS 3.8, RHEL 6, RHEL 7
Management
● Dynamic provisioning of
volumes
Key Features
● Inode quotas
Protocols
● SMB 3.0 (advanced features)
● Multi-channel support
Performance
● Rebalance
● Self-heal
Security
● At-rest encryption
RHGS 4 (Gir)
(In Planning)
Baseline
● GlusterFS 4, RHEL 7
Key Features
● Compression, Deduplication
Core Infrastructure
● Next gen replication
● Highly scalable control plane
● DHTv2
Protocols
● pNFS
Performance
● QoS
● Client side caching
Management
● USM, Gluster ReST API
RED HAT GLUSTER STORAGE ROADMAP
RED HAT GLUSTER STORAGE
INTEGRATION ROADMAP
Tech preview level support for RHELOSP 7 (Kilo)
● Create/delete/rename/list share
● Create/delete snapshots
● Allow/deny access to shares
● OSP Director integration planned for September release (ver 1.1)
Full support expected in RHELOSP 8 (Liberty)
● Create/delete share dynamically
● Create share from snapshot
● Exploring integration with Barbican for managing certificates
OPENSTACK MANILA
Three key storage use-cases
● Persistent data store for stateful containers
● Container image registries
● Storage for live container images (local storage)
Focused on the “persistent data store for containers” use-case
● Containerized applications mount Gluster as their data store
● NFS or GlusterFS native client integration in Kubernetes
Key attributes that make Gluster interesting
● Not impacted by mount storms
● Built-in HA
CONTAINERS
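As an illustration of the Kubernetes integration mentioned above, a pod can mount a Gluster volume through Kubernetes' in-tree glusterfs volume plugin. The object names here ("glusterfs-cluster", "myvol", the nginx image) are placeholders, not values from this deck:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing Gluster server IPs
      path: myvol                    # name of the Gluster volume to mount
      readOnly: false
```

Because every node mounts the same distributed volume, a rescheduled container finds its data wherever it lands, which is what makes Gluster a fit for the "persistent data store" use-case.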