Linux File Systems
Enabling Cutting Edge Features in
RHEL 6 & 7
Ric Wheeler, Senior Manager
Steve Dickson, Consulting Engineer
File and Storage Team
Red Hat, Inc.
June 12, 2013
Red Hat Enterprise Linux 6
RHEL6 File and Storage Themes
● LVM Support for Scalable Snapshots and Thin
Provisioned Storage
● Expanded options for file systems
● General performance enhancements
● Industry leading support for new pNFS protocol
RHEL6 LVM Redesign
● Shared exception store
● Eliminates the need to have a dedicated partition for
each file system that uses snapshots
● Mechanism is much more scalable
● LVM thin provisioned target
● dm-thinp is part of the device mapper stack
● Implements a high end, enterprise storage array feature
● Makes re-sizing a file system almost obsolete!
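As a rough illustration of the dm-thinp workflow described above, the sketch below creates a thin pool and an over-provisioned thin volume with the stock LVM tools. The volume group name vg0 and the sizes are invented placeholders, not values from the slides.
  # Create a 100GB thin pool named pool0 in an existing volume group (vg0 is a placeholder)
  lvcreate -L 100G -T vg0/pool0
  # Create a 1TB thin volume backed by the pool; pool blocks are allocated only when written
  lvcreate -V 1T -T vg0/pool0 -n thinvol0
  # Put a file system on it as usual
  mkfs.xfs /dev/vg0/thinvol0
Because the volume's virtual size can exceed the pool, growing a file system later often just means growing the pool, which is why classic re-sizing becomes far less common.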
RHEL Storage Provisioning
Improved Resource Utilization & Ease of Use
[Diagram: Traditional Provisioning vs. RHEL Thin Provisioning. Traditional volumes each carry allocated but unused space, giving inefficient resource utilization and manually managed spare storage. Thin provisioned volumes draw from a shared free-space allocation pool, giving efficient, automatically managed utilization: less wasted space, less administration, lower costs.]
Thin Provisioned Targets Leverage Discard
● Red Hat Enterprise Linux 6 introduced support to notify
storage devices when blocks become free in a file
system
● Useful for wear leveling in SSD's or space reclamation
in thin provisioned storage
● Discard maps to the appropriate low level command
● TRIM for SSD devices
● UNMAP or WRITE_SAME for SCSI devices
● dm-thinp handles the discard directly
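Two common ways to issue discards from a mounted file system, shown as a hedged example (the device and mount point names are placeholders): online discard via the discard mount option, or periodic batched discard with fstrim.
  # Online discard: the file system issues a discard as blocks are freed
  mount -o discard /dev/vg0/thinvol0 /mnt/data
  # Batched discard: reclaim all unused blocks in one pass (e.g. from cron)
  fstrim -v /mnt/data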
RHEL Storage Provisioning
Improved Resource Utilization
[Diagram: Space Reclamation with Thin Provisioned Volumes. Step #1: a 100GB thin provisioned volume with 10GB of data written uses 10GB from the allocation pool, leaving 90GB of available storage. Step #2: another 10GB of data written brings the volume to 20GB in use, 80GB available. Step #3: 10GB of data is erased and the space is reclaimed back into the pool after the discard command.]
Even More LVM Features
● Red Hat Enterprise Linux LVM commands can control
software RAID (MD) devices
● 6.4 added LVM Support for RAID10
● Introduction of a new user space daemon
● Daemon stores the device information after a scan in its
caches
● Major performance gains when systems have a large
number of disks
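A minimal sketch of the two features above: an LVM-managed RAID10 volume and the metadata caching daemon (lvmetad). The volume group and logical volume names are invented, and the daemon's service name and lvm.conf knob may differ by minor release.
  # LVM-managed RAID10 (2 stripes, 1 mirror leg each) in volume group vg0
  lvcreate --type raid10 -i 2 -m 1 -L 10G -n lv_fast vg0
  # Enable the metadata caching daemon so devices are not rescanned on every command
  # /etc/lvm/lvm.conf:
  #     use_lvmetad = 1
  service lvm2-lvmetad start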
Expanding Choices
● Early in RHEL5 there were limited choices in the file
system space
● Ext3 was the only local file system
● GFS1 was your clustered file system
● RHEL5 updates brought in support for
● Ext4 added to the core Red Hat Linux platform
● Scalable File System (XFS) as a layered product
● GFS2 as the next generation of clustered file system
● Support for user space file systems via FUSE
RHEL6 File System Highlights
● Scalable File System (XFS) enhancements support the
most challenging workloads
● Performance enhancements for meta-data workloads
● Selected as the base file system for Red Hat Storage
● GFS2 reached new levels of performance
● Base file system for clustered Samba servers
● XFS performance enhancements for synchronous
workloads
● Support for Parallel NFS file layout client
RHEL6.4 File System Updates
● Ext4 enhanced for virtual guest storage in RHEL6.4
● “hole punch” deallocates data in the middle of files
● Refresh of the btrfs file system technology preview to
3.5 upstream version
● Scalable File System (aka XFS) is a layered product
for the largest and most demanding workloads
● Series of performance enhancements learned courtesy
of Red Hat Storage and partners
● Refresh of key updates from the upstream Linux kernel
How to Choose a Local File System?
● The best way is to test each file system with your
specific workload on a well tuned system
● Make sure to use the RHEL tuned profile for your
storage
● The default file system will just work for most
applications and servers
● Many applications are not file or storage constrained
● If you are not waiting on the file system, changing it
probably will not make your application faster!
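For the tuned profile mentioned above, a typical invocation looks like the following; enterprise-storage is the RHEL6 profile commonly used for server storage, but check tuned-adm list on your release before assuming it exists there.
  # Show available profiles and the currently active one
  tuned-adm list
  tuned-adm active
  # Switch to the storage-oriented server profile
  tuned-adm profile enterprise-storage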
SAS on Standalone Systems
Picking a RHEL File System
[Chart: SAS Mixed Analytics 9.3 running RHEL6.3, comparing total time and system CPU time in seconds (lower is better) for ext3, ext4 and xfs.]
xfs most recommended
● Max file system size 100TB
● Max file size 100TB
● Best performing
ext4 recommended
● Max file system size 16TB
● Max file size 16TB
ext3 not recommended
● Max file system size 16TB
● Max file size 2TB
Red Hat Enterprise Linux 6
GFS2 New Features
● Performance improvements
● Sorted ordered write list for improved log flush speed &
block reservation in the allocator for better on-disk
layout with complex workloads (6.4+)
● Orlov allocator & better scalability with very large
number of cached inodes (6.5+)
● Faster glock dump for debugging (6.4+)
● Support for 4k sector sized devices with TRIM (6.5+)
Performance Enhancement for User Space File
Systems
● User space file systems are increasingly popular
● Cloud file systems and “scale out” file systems like
HDFS or gluster
● FUSE is a common kernel mechanism for this class of
file system
● Work to enhance the performance of FUSE includes
● Support for FUSE readdirplus()
● Support for scatter-gather IO
● Reduces trips from user space to the kernel
RHEL6 NFS Updates
Traditional NFS
[Diagram: three NFS clients all connecting to a single Linux NFS server in front of the storage. One server for multiple clients = limited scalability.]
Parallel NFS (pNFS)
● Architecture
● Metadata Server (MDS) – Handles all non-Data Traffic
● Data Server (DS) – Direct I/O access to clients
● Shared Storage Between Servers
● Layouts define the server architecture
● File Layout (NAS Env) - NetApp
● Block Layout (SAN Env) - EMC
● Object Layout (High Perf Env) - Panasas & Tonian
Parallel NFS = Scalability
Parallel data paths to Storage
[Diagram: pNFS clients send metadata operations to the pNFS metadata server while taking direct, parallel data paths to the storage and pNFS data servers. Labels: File Layout ==> NAS; direct path from client to storage; Object Layout ==> SAN.]
Parallel NFS (pNFS) - RHEL 6.4
● First to market with Client support (file layout)
● Thank you very much Upstream and Partners!!!
● Enabling pNFS:
● mount -o v4.1 server:/export /mnt/export
● RHEL-Next
● Block and Object layout support
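A quick way to confirm the file-layout client is actually in use after the mount shown above (the server name and export path are placeholders):
  # Mount with NFSv4.1 so the client can negotiate pNFS
  mount -t nfs -o v4.1 server:/export /mnt/export
  # The file layout driver should load automatically when the server offers pNFS
  lsmod | grep nfs_layout_nfsv41_files
  # Inspect the negotiated mount options
  nfsstat -m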
[Chart: RHEL 6.4 pNFS vs NFSv4, Oracle 11gR2 OLTP workload. Transactions per minute (TPM) plotted against the number of users (10 to 100); pNFS sustains higher TPM than NFSv4 as the user count grows.]
Parallel NFS = High Performance and Scalability
Source: Tonian Systems
Red Hat Enterprise Linux 7
RHEL7 Areas of Focus
● Enhanced performance
● Focus on very high performance, low latency devices
● Support for new types of hardware
● Working with our storage partners to enable their latest
devices
● Support for higher capacities across the range of file
and storage options
● Ease of use and management
RHEL7 Storage Updates
Storage Devices are too Fast for the Kernel!
● We are too slow for modern SSD devices
● The Linux kernel did pretty well with just S-ATA SSD's
● PCI-e SSD cards can sustain 500,000 or more IOPs
and our stack is the bottleneck in performance
● A new generation of persistent memory is coming that
will be
● Roughly the same capacity, cost and performance as
DRAM
● The IO stack needs to go to millions of IOPs
● http://lwn.net/Articles/547903
Dueling Block Layer Caching Schemes
● With all classes of SSD's, the cost makes it difficult to
have a purely SSD system at large capacity
● Obvious extension is to have a block layer cache
● Two major upstream choices:
● Device mapper dm-cache target will be in RHEL7
● BCACHE queued up for 3.10 kernel, might make
RHEL7
● Performance testing underway
● BCACHE is finer grained cache
● Dm-cache has a pluggable policy (similar to dm MPIO)
● https://lwn.net/Articles/548348
Thinly Provisioned Storage & Alerts
● Thinly provisioned storage lies to users
● Similar to DRAM versus virtual address space
● Sys admins can give every user a virtual TB while
backing it with only 100GB of real storage per user
● Supported in arrays & by device mapper dm-thinp
● Trouble comes when physical storage nears its limit
● Watermarks are set to trigger an alert
● Debate is over where & how to log that
● How much is done in kernel versus user space?
● User space policy agent was slightly more popular
● http://lwn.net/Articles/548295
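On the device mapper side, the pool's fill level is already visible from user space; a hedged sketch of watching it and of the autoextend knobs (the volume group name and thresholds are example values, the option names come from lvm.conf):
  # Show how full the thin pool's data and metadata areas are
  lvs -o lv_name,data_percent,metadata_percent vg0
  # /etc/lvm/lvm.conf: grow the pool automatically once it passes 80% full
  #     thin_pool_autoextend_threshold = 80
  #     thin_pool_autoextend_percent   = 20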
Continued LVM Enhancements for Software RAID
● LVM will support more software RAID features
● Scrubbing proactively detects latent disk errors in
software RAID devices
● Reshape moves a device from one RAID level to
another
● Support for write-mostly, write-behind, resync throttling
all help performance
● BTRFS will provide very basic software RAID
capabilities
● Still very new code
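The scrubbing interface exposed by upstream LVM looks roughly like this (volume names are placeholders; availability depends on the LVM version that ultimately ships in RHEL7):
  # Kick off a scrub pass over an LVM RAID logical volume
  lvchange --syncaction check vg0/lv_raid
  # Check scrub progress and any mismatches found
  lvs -o +raid_sync_action,raid_mismatch_count vg0/lv_raid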
Storage Management APIs
● Unification of the code that does storage management
● Low level libraries with C & Python bindings
● libstoragemgt manages SAN and NAS
● liblvm is the API equivalent of LVM user commands
● APIs in the works include HBA, SCSI, iSCSI, FCoE,
multipath
● Storage system manager provides an easy to use
command line interface
● Blivet is a new high level storage and file system
library that will be used by anaconda and openlmi
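To give a feel for the Storage System Manager command line, here is a hedged example assuming a spare disk /dev/vdb and an XFS target; the flags are from the upstream ssm tool and may evolve before RHEL7 ships.
  # Inventory devices, pools, volumes and file systems in one view
  ssm list
  # Create a 10GB XFS volume from /dev/vdb and mount it in one step
  ssm create -s 10G -n lv_data --fstype xfs /dev/vdb /mnt/data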
Future Red Hat Stack Overview
[Diagram: future Red Hat storage stack overview. Components shown: the kernel and storage targets; low level tools (LVM, Device Mapper, FS utilities); vendor specific tools (hardware RAID, array specific); the LIBLVM, LIBSTORAGEMGT and BLIVET libraries; and management front ends such as Anaconda, Storage System Manager (SSM), oVirt and OpenLMI.]
RHEL7 Local File System
Updates
RHEL7 Will Bring in More Choices
● RHEL 7 is looking to support ext4, XFS and btrfs
● All can be used for boot, system & data partitions
● Btrfs going through intense testing and qualification
● Ext2/Ext3 will be fully supported
● Use the ext4 driver which is mostly invisible to users
● Full support for all pNFS client layout types
● Add in support for vendors' NAS boxes which support
the pNFS object and block layouts
RHEL7 Default File System
● In RHEL7, Red Hat is looking to make XFS the new
default
● XFS will be the default for boot, root and user data
partitions on all supported architectures
● Red Hat is working with partners and customers during
this selection process to test and validate XFS
● Final decision will be made pending successful testing
● Evaluating maximum file system sizes for RHEL7
XFS Strengths
● XFS is the reigning champion of larger servers and
high end storage devices
● Tends to extract the most from the hardware
● Well tuned to multi-socket and multi-core servers
● XFS has a proven track record with the largest
systems
● Carefully selected RHEL partners and customers run
XFS up to 300TB in RHEL6
● Popular base for enterprise NAS servers
● Including Red Hat Storage
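Much of XFS's ability to extract performance from big storage comes from matching the file system geometry to the underlying stripe; a hedged example (device name and stripe geometry are placeholders):
  # Align the file system to a RAID stripe: 256KB stripe unit across 4 data disks
  mkfs.xfs -d su=256k,sw=4 /dev/vg0/lv_data
  mount /dev/vg0/lv_data /mnt/data
  # Verify the geometry the file system actually picked up
  xfs_info /mnt/data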
EXT4 Strengths
● Ext4 is very well known to system administrators and
users
● Default file system in RHEL6
● Closely related to ext3, our RHEL5 default
● Can outperform XFS in some specific workloads
● Single threaded, single disk workload with synchronous
updates
● Avoid ext4 for larger storage
● Base file system for Android and Google File System
BTRFS – The New File System Choice
● Integrates many block layer functions into the file
system layer
● Logical volume management functions like snapshots
● Can do several versions of RAID
● Designed around ease of use and has sophisticated
metadata
● Back references help map IO errors to file system
objects
● Great choice for systems with lots of independent disks
and no hardware RAID
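A taste of the integrated volume management described above, using the btrfs tools; the device names are placeholders, and since btrfs is still a technology preview this is a sketch rather than a supported recipe.
  # Two-disk btrfs with RAID1 for both data and metadata, no hardware RAID needed
  mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
  mount /dev/sdb /mnt/pool
  # Subvolumes and writable snapshots come from the file system itself
  btrfs subvolume create /mnt/pool/data
  btrfs subvolume snapshot /mnt/pool/data /mnt/pool/data_snap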
RHEL7 NFS Updates
RHEL7 NFS Server Updates
● Red Hat Enterprise Linux 7.0 completes the server
side support for NFS 4.1
● Support for exactly-once semantics
● Callbacks use port 2049
● No server side support for parallel NFS ... yet!
Parallel NFS Updates
● Parallel NFS has three layout types
● Block layouts allow direct client access to SAN data
● Object layouts for direct access to the object backend
● File layout
● RHEL7.0 will add support for block and object layout
types
● Will provide support for all enterprise pNFS servers!
Support for SELinux over NFS
● Labeled NFS enables fine-grained SELinux contexts
● Part of the NFS4.2 specification
● Use cases include
● Secure virtual machines stored on NFS server
● Restricted home directory access
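Conceptually, labeled NFS lets the usual SELinux tooling work across the wire. Below is a speculative sketch assuming a server that exports with NFSv4.2 security labels enabled; option names and types may differ in the final RHEL7 bits.
  # Mount with NFSv4.2 so SELinux labels can flow between client and server
  mount -t nfs -o vers=4.2 server:/export /mnt/export
  # Set a per-file SELinux context on the share, e.g. for a guest image
  chcon -t virt_image_t /mnt/export/guest.img
  ls -Z /mnt/export/guest.img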
Red Hat Enterprise Linux 7
GFS2 New Features
● All of the previously mentioned RHEL6 features
● Streamlined journaling code
● Less memory overhead
● Less prone to pauses during low memory conditions
● New cluster stack interface (no gfs_controld)
● Performance co-pilot (PCP) support for glock statistics
● Faster fsck
● RAID stripe aware mkfs
Pulling it all Together....
● Ease of Use
● Tuning & automation of Local FS to LVM new features
● Thin provisioned storage
● Upgrade rollback
● Scalable snapshots
● Major focus on stability testing of btrfs
● Looking to see what use cases it fits best
● Harden XFS metadata
● Detect errors to confidently support 500TB single FS
Upstream Projects
Performance Work: SBC-4 Commands
● SBC-4 is a reduced & optimized SCSI disk command
set proposed in T10
● Current command set is too large and supports too
many odd devices (tapes, USB, etc)
● SBC-4 will support just disks and be optimized for low
latency
● Probably needs a new SCSI disk drive
● New atomic write command from T10
● Either all of the change happens or nothing does
● Supported by some new devices like FusionIO already
● http://lwn.net/Articles/548116/
Copy Offload System Calls
● Upstream kernel community has debated “copy
offload” for several years
● Popular use case is VM guest image copy
● Proposal is to have one new system call
● int copy_range(int fd_in, loff_t upos_in, int fd_out,
loff_t upos_out, int count)
● Offload copy to SCSI devices, NFS or copy enabled file
systems (like reflink in OCFS2 or btrfs)
● Patches for copy_range() posted by Zach Brown
● https://lkml.org/lkml/2013/5/14/622
● http://lwn.net/Articles/548347
IO Hints
● Storage device manufacturers want help from
applications and the kernel
● Tag data with hints about streaming vs random, boot
versus run time, critical data
● T10 standards body proposed SCSI versions which was
voted down
● Suggestion raised to allow hints to be passed down via
struct bio from file system to block layer
● Support for expanding fadvise() hints for applications
● No consensus on what hints to issue from the file or
storage stack internally
● http://lwn.net/Articles/548296/
Improving Error Return Codes?
● The interface from the IO subsystem up to the file
system is pretty basic
● Low level device errors almost always propagate as EIO
● Causes file system to go offline or read only
● Makes it hard to do intelligent error handling at FS level
● Suggestion was to re-use select POSIX error codes to
differentiate temporary from permanent errors
● File system might retry on temporary errors
● Will know to give up immediately on others
● Challenge is that IO layer itself cannot always tell!
● http://lwn.net/Articles/548353
Learning More
Learn more about File Systems & Storage
● Attend related Summit Sessions
● Linux File Systems: Enabling Cutting-edge Features in
Red Hat Enterprise Linux 6 & 7 (Wed 4:50)
● Kernel Storage & File System Demo Pod (Wed 5:30)
● Evolving & Improving RHEL NFS (Thurs 2:30)
● Parallel NFS: Storage Leaders & NFS Architects Panel
(Thurs 3:40)
● Engage the community
● http://lwn.net
● Mailing lists: linux-ext4, linux-btrfs, linux-nfs,
xfs@oss.sgi.com