Comparison of FOSS
distributed filesystems
Marian “HackMan” Marinov, SiteGround
Who am I?
• Chief System Architect of SiteGround
• Sysadmin since '96
• Teaching Network Security & Linux System
Administration at Sofia University
• Organizing the biggest FOSS conference in Bulgaria
- OpenFest
• We are a hosting provider
• We needed a shared filesystem between VMs and
containers
• Directories of different sizes and relatively small
files
• Sometimes millions of files in a single dir
Why were we considering shared FS?
• We started with DRBD + OCFS2
• I did a small test of cLVM + ATAoE
• We tried DRBD + OCFS2 for MySQL clusters
• We then switched to GlusterFS
• Later moved to CephFS
• and finally settled on good old NFS :)
but with the storage on Ceph RBD
The story of our storage endeavors
• CephFS
• GlusterFS
• MooseFS
• OrangeFS
• BeeGFS - no stats
I haven't played with Lustre or AFS
And because of our history with OCFS2 and GFS2, I skipped them
Which filesystems were tested
GlusterFS Tuning
● use distributed and replicated volumes
instead of only replicated ones
– gluster volume create VOL replica 2 stripe 2 ....
● set up performance parameters
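A minimal sketch of creating such a distributed-replicated volume (hostnames, brick paths and the volume name are illustrative; four bricks with replica 2 produce two replica pairs that Gluster then distributes files across):
gluster volume create myvol replica 2 \
  srv1:/bricks/b1 srv2:/bricks/b1 \
  srv3:/bricks/b2 srv4:/bricks/b2
gluster volume start myvol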
Test cluster
• CPU i7-3770 @ 3.40GHz / 16G DDR3 1333MHz
• SSD SAMSUNG PM863
• 10Gbps Intel x520
• 40Gbps Qlogic QDR Infiniband
• IBM Blade 10Gbps switch
• Mellanox Infiniscale-IV switch
What did I test?
• Setup
• Fault tolerance
• Resource requirements (client & server)
• Capabilities (Redundancy, RDMA, caching)
• FS performance (creation, deletion, random
read, random write)
Complexity of the setup
• CephFS - requires a lot of knowledge and time
• GlusterFS - relatively easy to install and set up
• OrangeFS - extremely easy to do basic setup
• MooseFS - extremely easy to do basic setup
• DRBD + NFS - very complex setup if you want HA
• BeeGFS -
CephFS
• Fault tolerant by design...
– until all MDSes start crashing
● a single directory may crash all MDSes
● MDS behind on trimming
● Client failing to respond to cache pressure
● Ceph has redundancy but lacks RDMA support
CephFS
• Uses a lot of memory on the MDS nodes
• Not suitable to run on the same machines as the
compute nodes
• A small number of nodes (3-5) is a no-go
CephFS tuning
• Placement Groups (PGs)
• mds_log_max_expiring &
mds_log_max_segments fix the problem with
trimming
• When you have a lot of inodes, increasing
mds_cache_size works but increases the
memory usage
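A minimal /etc/ceph/ceph.conf sketch with these options (the values are only an illustration, not recommendations; size mds_cache_size to your inode working set and the available RAM):
[mds]
mds_log_max_segments = 200
mds_log_max_expiring = 200
mds_cache_size = 500000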
CephFS fixes
• Client XXX failing to respond to cache pressure
● This issue is caused by the current working set of
inodes exceeding the MDS cache.
The fix is to set mds_cache_size in
/etc/ceph/ceph.conf accordingly.
ceph daemon mds.$(hostname) perf dump | grep inode
"inode_max": 100000, - max cache size value
"inodes": 109488, - currently used
CephFS fixes
• Client XXX failing to respond to cache pressure
● Another “FIX” for the same issue is to log in to the currently
active MDS and run:
/etc/init.d/ceph restart mds
● This way another server becomes the active MDS
and the inode usage drops
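To find out which MDS is currently active before restarting it, a quick check (the output format differs between Ceph releases):
ceph mds stat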
GlusterFS
• Fault tolerance
– in case of network hiccups the mount may sometimes
become inaccessible and require a manual remount
– in case one storage node dies, you have to manually
remount if you don't have a local copy of the data
• Capabilities - local caching, RDMA and data
redundancy are supported. Offers different
ways of doing data redundancy and sharding
GlusterFS
• High CPU usage for heavily used file systems
– it required a copy of the data on all nodes
– the FUSE driver has a limit on the number of
small operations it can perform
• Unfortunately this limit was very easy for
our customers to reach
GlusterFS Tuning
volume set VOLNAME performance.cache-refresh-timeout 3
volume set VOLNAME performance.io-thread-count 8
volume set VOLNAME performance.cache-size 256MB
volume set VOLNAME performance.cache-max-file-size 300KB
volume set VOLNAME performance.cache-min-file-size 0B
volume set VOLNAME performance.readdir-ahead on
volume set VOLNAME performance.write-behind-window-size 100KB
GlusterFS Tuning
volume set VOLNAME features.lock-heal on
volume set VOLNAME cluster.self-heal-daemon enable
volume set VOLNAME cluster.metadata-self-heal on
volume set VOLNAME cluster.consistent-metadata on
volume set VOLNAME cluster.stripe-block-size 100KB
volume set VOLNAME nfs.disable on
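All of the above are gluster CLI commands; written out in full (volume name illustrative):
gluster volume set myvol performance.io-thread-count 8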
FUSE tuning
FUSE               GlusterFS
entry_timeout      entry-timeout
negative_timeout   negative-timeout
attr_timeout       attribute-timeout
mount everything with “-o intr” :)
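A sketch of a GlusterFS FUSE mount setting these timeouts explicitly (server, volume and values are illustrative; whether the mount helper accepts them depends on the GlusterFS version):
mount -t glusterfs -o attribute-timeout=1,entry-timeout=1,negative-timeout=1 srv1:/myvol /mnt/myvol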
MooseFS
• Reliability with multiple masters
• Multiple metadata servers
• Multiple chunkservers
• Flexible replication (per directory)
• A lot of stats and a web interface
BeeGFS
• Metadata and Storage nodes are replicated by
design
• FUSE based
OrangeFS
• No native redundancy; uses corosync/pacemaker
for HA
• Adding new storage servers requires stopping
the whole cluster
• It was very easy to break
Ceph RBD + NFS
• the main tuning goes into NFS, by using
cachefilesd
• it is very important to have a cache for both
accessed and missing files
• enable FS-Cache with the "-o fsc" mount option
(see the sketch below)
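A minimal sketch, assuming a stock cachefilesd package and an NFSv4 export (server, export path and cache thresholds are illustrative):
# /etc/cachefilesd.conf
dir /var/cache/fscache
tag mycache
brun 10%
bcull 7%
bstop 3%
# start the cache daemon and mount with FS-Cache enabled
systemctl enable --now cachefilesd
mount -t nfs -o vers=4,fsc nfs-server:/export /mnt/share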
Ceph RBD + NFS
• verify that your mounts have caching enabled (the FSC column):
# cat /proc/fs/nfsfs/volumes
NV SERVER PORT DEV FSID FSC
v4 aaaaaaaa 801 0:35 17454aa0fddaa6a5:96d7706699eb981b yes
v4 aaaaaaaa 801 0:374 7d581aa468faac13:92e653953087a8a4 yes
Sysctl tuning
fs.nfs.idmap_cache_timeout = 600
fs.nfs.nfs_congestion_kb = 127360
fs.nfs.nfs_mountpoint_timeout = 500
fs.nfs.nlm_grace_period = 10
fs.nfs.nlm_timeout = 10
Sysctl tuning
# Tune the network card polling
net.core.netdev_budget=900
net.core.netdev_budget_usecs=1000
net.core.netdev_max_backlog=300000
Sysctl tuning
# Increase network stack memory
net.core.rmem_max=16777216
net.core.wmem_max=16777216
net.core.rmem_default=16777216
net.core.wmem_default=16777216
net.core.optmem_max=16777216
Sysctl tuning
# memory allocation min/pressure/max.
# read buffer, write buffer, and buffer space
net.ipv4.tcp_rmem = 4096 87380 134217728
net.ipv4.tcp_wmem = 4096 65536 134217728
Sysctl tuning
# turn off selective ACK and timestamps
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_low_latency = 1
# scalable or bbr
net.ipv4.tcp_congestion_control = scalable
Sysctl tuning
# Increase system IP port range to allow for
# more concurrent connections
net.ipv4.ip_local_port_range = 1024 65000
# OS tuning
vm.swappiness = 0
# Increase system file descriptor limit
fs.file-max = 65535
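To make all of the sysctl settings above persistent, one common approach (file name is illustrative):
# put the settings in a drop-in file and apply them without a reboot
vi /etc/sysctl.d/90-storage-tuning.conf
sysctl --system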
Stats
• For small files, network BW is not a problem
• However, network latency is a killer :(
Stats
Creation of 10000 empty files:
Local SSD 7.030s
MooseFS 12.451s
NFS + Ceph RBD 16.947s
OrangeFS 40.574s
Gluster distributed 1m48.904s
Stats
MTU Congestion result
MooseFS 10G
1500 Scalable 10.411s
9000 Scalable 10.475s
9000 BBR 10.574s
1500 BBR 10.710s
GlusterFS 10G
1500 BBR 48.143s
1500 Scalable 48.292s
9000 BBR 48.747s
9000 Scalable 48.865s
Stats
MTU Congestion result
MooseFS IPoIB
1500 BBR 9.484s
1500 Scalable 9.675s
GlusterFS IPoIB
9000 BBR 40.598s
1500 BBR 40.784s
1500 Scalable 41.448s
9000 Scalable 41.803s
GlusterFS RDMA
1500 Scalable 31.731s
1500 BBR 31.768s
Stats
Creation of 10000 random size files:
MTU Congestion result
MooseFS 10G
1500 Scalable 3m46.501s
1500 BBR 3m47.066s
9000 Scalable 3m48.129s
9000 BBR 3m58.068s
GlusterFS 10G
1500 BBR 7m56.144s
1500 Scalable 7m57.663s
9000 BBR 7m56.607s
9000 Scalable 7m53.828s
Stats
Creation of 10000 random size files:
MTU Congestion result
MooseFS IPoIB
1500 BBR 3m48.254s
1500 Scalable 3m49.467s
GlusterFS RDMA
1500 Scalable 8m52.168s
Thank you!
Questions?
Marian HackMan Marinov
mm@siteground.com