Voldemort on Solid State Drives

Vinoth Chandar, Lei Gao, Cuong Tran
LinkedIn Corporation, Mountain View, CA

Abstract
Voldemort is LinkedIn's open-source implementation of Amazon Dynamo, providing fast, scalable, fault-tolerant access to key-value data. Voldemort is widely used by applications at LinkedIn that demand a large number of IOPS. Solid State Drives (SSD) are becoming an attractive option to speed up data access. In this paper, we describe our experiences with GC issues on Voldemort server nodes after migrating to SSD. Based on these experiences, we provide an intuition for caching strategies with SSD storage.

1. Introduction
Voldemort [1] is a distributed key-value storage system based on Amazon Dynamo. It has a very simple get(k), put(k,v), delete(k) interface that allows for pluggable serialization, routing and storage engines. Voldemort serves a substantial amount of site traffic at LinkedIn for applications like ‘Skills’, ‘People You May Know’, ‘Company Follow’ and ‘LinkedIn Share’, serving thousands of operations/sec over several terabytes of data. It has also seen wide adoption in companies such as Gilt Group, EHarmony, Nokia, Jive Software, WealthFront and Mendeley.
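As a concrete illustration of this interface, a round trip through the standard Voldemort Java client looks roughly like the sketch below. The bootstrap URL and store name are placeholders, and the method names follow the publicly documented client API, which may differ slightly across versions.

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;

    public class VoldemortClientExample {
        public static void main(String[] args) {
            // Bootstrap against any node of the cluster (URL and store name are placeholders).
            StoreClientFactory factory = new SocketStoreClientFactory(
                    new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("test-store");

            client.put("member:42", "profile-bytes");      // put(k, v)
            String value = client.getValue("member:42");   // get(k), unwrapping the version
            client.delete("member:42");                    // delete(k)

            factory.close();
        }
    }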

Due to the simple key-value access pattern, single Voldemort server node performance is typically bound by IOPS, with plenty of CPU cycles to spare. Hence, Voldemort clusters at LinkedIn were migrated to SSD to increase single server node capacity. The migration has proven fruitful, although it unearthed a set of interesting GC issues, which led us to rethink our caching strategy with SSD. The rest of the paper is organized as follows. Section 2 describes the software stack for a single Voldemort server. Section 3 describes the impact of the SSD migration on single server performance, details ways to mitigate Java GC issues, and explores leveraging SSD to alleviate caching problems. Section 4 concludes.

2. Single Server Stack
The server uses an embedded, log-structured, Java-based storage engine: Oracle BerkeleyDB JE [2]. BDB employs an LRU cache on the JVM heap and relies on Java garbage collection for managing its memory. Loosely, the cache is a set of references to index and data objects; eviction happens simply by releasing those references so that the garbage collector can reclaim the objects. A single cluster serves a large number of applications, and hence objects of widely different sizes share the same BDB cache. The server also has a background thread that enforces the data retention policy by periodically deleting stale entries.
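As a rough sketch of how such a server wires up the storage layer against the public BDB JE API, the environment and its on-heap cache are configured along these lines. The directory, cache percentage and store name are illustrative assumptions, not LinkedIn's production settings.

    import java.io.File;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class BdbStoreSetup {
        public static Database open(File dataDir) {
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            // The BDB cache lives on the JVM heap; size it as a fraction of -Xmx.
            envConfig.setCachePercent(60);   // illustrative value

            Environment env = new Environment(dataDir, envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            // One database per store; index and data objects of all stores
            // share the single environment-wide cache.
            return env.openDatabase(null, "test-store", dbConfig);
        }
    }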

3. SSD Performance Implications
After migrating to SSD, average latency improved greatly, from 20ms to 2ms, and the speed of cluster expansion and data restoration improved 10x. However, with plenty of IOPS at hand, allocation rates went up, causing very frequent GC pauses and moving the bottleneck from IO to garbage collection. The 95th and 99th percentile latencies shot up from 30ms to 130ms and from 240ms to 380ms respectively, due to a host of garbage collection issues, detailed below.

3.1 Need for End-to-End Correlation
By developing tools to correlate Linux paging statistics from SAR with pauses from GC, we discovered
that Linux was stealing pages from the JVM heap, resulting in 4-second minor pauses. Subsequent
promotions into the old generation incur page scans, causing the big pauses with a high system time
component. Hence, it is imperative to mlock() the server heap to prevent it from being swapped out.
Also, we experienced higher system time in lab experiments, since not all of the virtual address space of
the JVM heap had been mapped to physical pages. Thus, using the AlwaysPreTouch JVM option is
imperative for any ‘Big Data’ benchmarking tool, to reproduce the same memory conditions as in the
real world. This exercise stressed the importance of developing performance tools that can identify
interesting patterns by correlating performance data across the entire stack.
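Pre-touching the heap is a standard HotSpot flag; an illustrative lab launch line (heap size, classpath and arguments are placeholders, not our production settings) would be:

    java -Xms32g -Xmx32g -XX:+AlwaysPreTouch \
         -cp voldemort-server.jar voldemort.server.VoldemortServer /path/to/voldemort_home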

3.2 SSD Aware Caching
Promotion failures, with huge 25-second pauses during the retention job, prompted us to rethink the caching strategy with SSD. The retention job walks the entire BDB database without any throttling. With very fast SSD, this translates into rapid 200MB allocations and promotions, which in parallel kick existing objects out of the LRU cache in the old generation. Since the server is multitenant, hosting objects of different sizes, this leads to heavy fragmentation. Real workloads almost always have ‘hotsets’ which live in the old generation, and any incoming traffic that drastically changes the hotset is likely to run into this issue. The issue was very difficult to reproduce since it depended heavily on the state of the old generation, highlighting the need for performance test infrastructures that can replay real-world traffic. We managed to reproduce the problem by roughly matching the cache miss rates seen in production. We solved the problem by forcing BDB to evict data objects brought in by the retention job right away, so that they are collected in the young generation and never promoted, as sketched below.
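A minimal sketch of this fix against the BDB JE API follows, assuming the retention job scans with a cursor; the timestamp extraction is a hypothetical helper, not Voldemort's actual record layout.

    import com.sleepycat.je.CacheMode;
    import com.sleepycat.je.Cursor;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.LockMode;
    import com.sleepycat.je.OperationStatus;

    public class RetentionScan {
        /** Deletes entries older than the retention horizon without polluting the cache. */
        public static void purge(Database db, long cutoffMillis) {
            DatabaseEntry key = new DatabaseEntry();
            DatabaseEntry value = new DatabaseEntry();
            Cursor cursor = db.openCursor(null, null);
            try {
                // EVICT_LN releases the data record (leaf node) as soon as each
                // operation completes, so scanned values die in the young generation
                // instead of displacing the hot set in the old generation.
                cursor.setCacheMode(CacheMode.EVICT_LN);
                while (cursor.getNext(key, value, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
                    if (timestampOf(value) < cutoffMillis) {   // hypothetical helper
                        cursor.delete();
                    }
                }
            } finally {
                cursor.close();
            }
        }

        private static long timestampOf(DatabaseEntry value) {
            // Placeholder: real entries carry a version/timestamp in the serialized value.
            return 0L;
        }
    }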

In fact, we plan to cache only the index nodes on the JVM heap even for regular traffic. This will help fight fragmentation and achieve predictable multitenant deployments. Results in the lab have shown that this approach can deliver comparable performance, due to the speed of SSD and the uniformly sized index objects. This approach also reduces the promotion rate, thus increasing the chances that the CMS initial mark is scheduled right after a minor collection, which improves initial mark time as described in the next section. The approach is applicable even to systems that manage their own memory, since fragmentation is a general issue.
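Caching only index (internal) nodes for regular traffic can be expressed the same way, by making leaf-node eviction the database-wide default. The sketch below assumes a JE version that supports a per-database CacheMode; the store name is a placeholder.

    import com.sleepycat.je.CacheMode;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;

    public class IndexOnlyCache {
        public static Database openIndexOnly(Environment env, String storeName) {
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            // Evict data records (leaf nodes) after every operation; the on-heap
            // cache then holds only the uniformly sized index nodes, and data
            // reads are served from the SSD.
            dbConfig.setCacheMode(CacheMode.EVICT_LN);
            return env.openDatabase(null, storeName, dbConfig);
        }
    }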

3.3 Reducing the Cost of CMS Initial Mark
Assuming we can control fragmentation, yielding control back to the JVM to schedule CMS adaptively
based on promotion rate can help cut down initial mark times. Even when evicting data objects right
away, the high SSD read rates could cause heavy promotion for index objects. Under such
circumstances, the CMS initial mark might be scheduled when the young generation is not empty,
resulting in a 1.2-second CMS initial mark pause on a 2GB young generation. We found that by increasing CMSInitiatingOccupancyFraction to a higher value (90), CMS was scheduled much closer to minor collections, when the young generation is empty or small, reducing the maximum initial mark time to 0.4 seconds.
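For illustration, the relevant HotSpot flags would look like the line below. The total heap size and classpath are placeholders (the 2GB young generation matches the figure above), and -XX:+UseCMSInitiatingOccupancyOnly is deliberately left out so the JVM can keep scheduling CMS adaptively based on promotion rate.

    java -server -Xms32g -Xmx32g -Xmn2g \
         -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
         -XX:CMSInitiatingOccupancyFraction=90 \
         -XX:+AlwaysPreTouch \
         -cp voldemort-server.jar voldemort.server.VoldemortServer /path/to/voldemort_home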

4. Conclusion
With SSD, we find that garbage collection becomes a very significant bottleneck, especially for systems that have little control over the storage layer and rely on Java memory management. Large heap sizes make garbage collection expensive, especially the single-threaded CMS initial mark. We believe that data systems must revisit their caching strategies with SSDs. In this regard, SSD has provided an efficient way to handle fragmentation and move towards predictable multitenancy.

References
[1] http://project-voldemort.com/
[2] http://www.oracle.com/technetwork/database/berkeleydb/overview/index-093405.html
  
