CASSANDRA &
BENCHMARKING
A holistic perspective
Agenda
1. This presentation covers performance benchmarking for Cassandra-based systems
2. Discuss benchmarking in general
3. Define an approach
4. Explore gotchas and things to look out for
5. Hear from you! (Prizes for best benchmarking stories)
Benchmarking
• Benchmark testing is the process of load testing a
component or an entire end-to-end IT system to determine
its performance characteristics.
Benchmarking Properties
• Should be repeatable
• Should capture performance measurements from
successive runs
• Ideally there should be low variance between successive
tests
• Should highlight improvements or degradations introduced
by system changes
Modern Systems
• More often than not distributed.
• Many different types of system components
• Complex performance constraints
• What is Easily Measured? Network, CPU, Memory, I/O
Utilisation
• More difficult: technology-specific factors, e.g. in Cassandra
the impact of compaction on read performance
Justification for Benchmarking
• Simple:
• Will the system keep performing as the number of users grows?
• Complex:
• Cost Reduction
• Optimisation
• Growth Projection
• TCO
APPROACH
Caveats
• The more information you have the better…
• Any investment in systemic testing is generally a good
investment
• Simplify the goals/outcomes for the business
• Automate as much as possible and formalise test
procedure to ensure adherence to quality measures.
• Be as interested in percentiles as in mean values
Requirements
• Discover resource constraints
• Discover modes of failure
• Guarantee operation outside of usual parameters
• Ensure SLAs are being met
• Ensure operation over longer periods is consistent.
Basic Approach
• Distinguish component benchmarks from system
benchmarks.
• A component benchmark is important: it defines a basic
SLA for inter-component operations.
• A system is the sum of all its parts, not just each component:
component performance does not imply system
performance.
• Take corrective action from the bottom up (network,
hardware, compute resources) as well as from the top
down (API design, data access patterns).
Holistic Approach
• The system exists to service business requirements; work
backwards from them.
• Define the benchmark from the user's perspective.
• Technical goals and business goals must align.
• The system must function in its entirety; it is not sufficient
to performance test each component in isolation.
1. Define a Basic Traffic Model
• Example - Simple Storefront
• GET /product/list (50%)
• GET /product/{id} (20%)
• POST /product/{id}/order (20%)
• GET /orders/list (10%)
2. Define a User Profile
• User Type 1
• Browse heavy
• GET /product/list (70%)
• GET /product/{id} (20%)
• POST /product/{id}/order (5%)
• GET /orders/list (5%)
• User Type 2
• Compulsive buyers
• GET /product/list (30%)
• GET /product/{id} (20%)
• POST /product/{id}/order (30%)
• GET /orders/list (20%)
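Taken together, the traffic model and user profiles above reduce to weighted request mixes that a load driver samples from. Below is a minimal Java sketch of that sampling; the endpoint names and weights come from the slides, while the class and method names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

public class TrafficModel {
    private static final Random RNG = new Random();

    // Weighted request mix for User Type 1 (browse heavy), per the profile above.
    static final Map<String, Integer> BROWSE_HEAVY = new LinkedHashMap<>();
    static {
        BROWSE_HEAVY.put("GET /product/list", 70);
        BROWSE_HEAVY.put("GET /product/{id}", 20);
        BROWSE_HEAVY.put("POST /product/{id}/order", 5);
        BROWSE_HEAVY.put("GET /orders/list", 5);
    }

    // Pick the next request for a simulated user by cumulative weight.
    static String nextRequest(Map<String, Integer> mix) {
        int roll = RNG.nextInt(100); // weights are percentages summing to 100
        int cumulative = 0;
        for (Map.Entry<String, Integer> e : mix.entrySet()) {
            cumulative += e.getValue();
            if (roll < cumulative) {
                return e.getKey();
            }
        }
        throw new IllegalStateException("weights must sum to 100");
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(nextRequest(BROWSE_HEAVY));
        }
    }
}
```

A second map with the User Type 2 weights gives the compulsive-buyer mix; the load driver then chooses a map per simulated user according to the user-type split.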
Peak Periods?
• Adding an hourly activity profile allows for a more useful
benchmark.
• It can be expressed as an active user count.
• It is simple to assign a probability to the mix of user types
on the system at a given time, e.g. 20% type 1, 80% type 2.
• Ideally, use real data for these models if any is available.
• Distributed load drivers coordinate to meet the hourly user
count.
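A sketch of how distributed load drivers might translate such a profile into per-driver targets. The hourly counts below are illustrative placeholders, not measured data; the 20/80 user-type split is taken from the slide above:

```java
public class HourlyLoadPlan {
    // Illustrative active-user targets per hour (0-23); replace with real data.
    static final int[] ACTIVE_USERS = {
        500, 400, 300, 300, 400, 800, 2000, 5000, 9000, 12000, 14000, 16000,
        15000, 14000, 13000, 12000, 11000, 10000, 9000, 7000, 5000, 3000, 1500, 800
    };

    public static void main(String[] args) {
        int drivers = 4;            // load-driver processes sharing the work
        double type1Share = 0.20;   // 20% browse-heavy users...
        double type2Share = 0.80;   // ...80% compulsive buyers

        for (int hour = 0; hour < ACTIVE_USERS.length; hour++) {
            int total = ACTIVE_USERS[hour];
            int type1 = (int) Math.round(total * type1Share);
            int type2 = (int) Math.round(total * type2Share);
            // Each driver simulates an equal share of the hourly target.
            System.out.printf("hour %02d: %d type-1 + %d type-2 users, %d per driver%n",
                    hour, type1, type2, total / drivers);
        }
    }
}
```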
Peak Periods?
[Chart: Active Users (0–16,000) by Hour of Day (0–22)]
Tooling
• JMeter
• The Grinder
• Jolokia (JMX)
• Logstash / Statsd
• Codahale Metrics
• Graphite (Visualisation)
• iostat / dstat, iftop, netstat, htop, etc.
• cassandra-stress (useful for a basic sanity check)
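Most of these tools are wired into the test harness in code. As one concrete example, and to serve the earlier caveat about percentiles, here is a minimal Codahale (Dropwizard) Metrics sketch that times requests and reports latency percentiles to the console; it assumes metrics-core is on the classpath, and simulateRequest is a hypothetical stand-in for a real call:

```java
import com.codahale.metrics.ConsoleReporter;
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.Timer;

import java.util.concurrent.TimeUnit;

public class RequestTimingExample {
    private static final MetricRegistry registry = new MetricRegistry();
    // A Timer captures throughput plus a latency histogram (mean, p75, p95, p99...).
    private static final Timer requests = registry.timer("storefront.requests");

    public static void main(String[] args) throws InterruptedException {
        // Report rates and durations to stdout every 10 seconds; a
        // GraphiteReporter could ship the same data to Graphite instead.
        ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build();
        reporter.start(10, TimeUnit.SECONDS);

        for (int i = 0; i < 1000; i++) {
            try (Timer.Context ctx = requests.time()) {
                simulateRequest(); // stand-in for a real HTTP or CQL call
            }
        }
        reporter.report(); // final flush
    }

    private static void simulateRequest() throws InterruptedException {
        Thread.sleep((long) (Math.random() * 20)); // hypothetical work
    }
}
```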
CASSANDRA
Specifics
Considerations
• Cassandra’s append-only writes mean writes are
consistently fast, given sufficient resources
• Compaction has a different impact depending on the
strategy you use (STCS is lighter than LCS).
• Pending compactions tend to back up more during
load-oriented testing
• Read performance is significantly affected by:
• The spread of column mutations across SSTables
• Compaction strategy (STCS is less efficient for the above than LCS)
• No. of reads for the same row key (whether we are exercising the key
cache or not)
• The consistency level (the same applies to writes)
Common Issues
• Poor query design (unbounded queries, abuse of ALLOW
FILTERING) and other anti-patterns (see the sketch after this list).
• Poor capacity planning (disk, memory, CPU, etc.)
• Many failed requests on coordinators may lead to
resources being over-used for hinted handoff.
• If a node is memory constrained you may see JVM pauses
due to garbage collection
• Poor network connectivity and incorrect consistency
levels may lead to more timeouts.
• Hotspots are possible in Cassandra if you have not
modelled partition keys correctly.
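To make the first point concrete, a minimal sketch with the DataStax Java driver contrasting an unbounded ALLOW FILTERING query with a bounded partition-key query. The storefront keyspace and products table are hypothetical, chosen to match the storefront example earlier:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class QueryPatterns {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("storefront")) {

            // Anti-pattern: no partition key, so this scans across nodes, and
            // ALLOW FILTERING forces Cassandra to read and discard rows server-side.
            ResultSet bad = session.execute(
                "SELECT * FROM products WHERE price < 10 ALLOW FILTERING");

            // Better: target a single partition by key, with an explicit bound.
            ResultSet good = session.execute(
                "SELECT * FROM products WHERE category = 'books' LIMIT 100");

            System.out.println(good.all().size() + " rows from bounded query");
        }
    }
}
```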
What to collect during test?
• Read / write latency per CF (nodetool cfstats)
• No. of reads / writes (nodetool cfstats)
• No. of pending compactions (nodetool compactionstats)
• Thread pool usage, especially pending tasks (nodetool tpstats)
• Correlate with
• Disk i/o
• CPU
• Memory usage
• Visualise as much as possible and use overlays for
correlation.
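As one hedged example of collecting these numbers programmatically: Jolokia (listed under Tooling) exposes Cassandra's JMX metrics over HTTP, so pending compactions can be polled with a plain GET and forwarded to Graphite for overlaying. The port below is the Jolokia JVM agent's default, and the MBean follows Cassandra's org.apache.cassandra.metrics naming:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class PendingCompactionsPoller {
    public static void main(String[] args) throws Exception {
        // Jolokia JVM agent default endpoint (port 8778); read the pending
        // compaction tasks metric exposed by Cassandra over JMX.
        URL url = new URL("http://localhost:8778/jolokia/read/"
                + "org.apache.cassandra.metrics:type=Compaction,name=PendingTasks");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            // The JSON response carries the metric under "value"; in a real
            // harness, parse it and ship it to Graphite/StatsD alongside
            // iostat, CPU, and memory samples for correlation.
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```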
Points to Remember
• Latency reported by Cassandra is internal, so it is only useful
for telling whether Cassandra I/O is performing adequately.
Graph it to get the most value, or use OpsCenter.
• Add metrics at every tier in your system, and make sure it is
possible to correlate the above numbers with latency in
other parts of the system.
• Soak testing is critical with Cassandra, as empty-system
performance may be very different once disk utilisation and
compaction requirements grow.
• Experiment with settings for easy gains; some CFs may
benefit from the row cache (a minimal sketch follows).
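A minimal sketch of enabling the row cache for one table, assuming the Cassandra 2.1+ caching syntax and the hypothetical storefront keyspace and products table used earlier:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class EnableRowCache {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("storefront")) {
            // Cassandra 2.1+ syntax: cache keys for all reads, plus the first
            // 100 rows of each partition. The row cache itself must also be
            // enabled via row_cache_size_in_mb in cassandra.yaml. Only
            // worthwhile for read-heavy, frequently re-read tables; verify
            // any gain under soak load rather than an empty system.
            session.execute("ALTER TABLE products WITH caching = "
                    + "{'keys': 'ALL', 'rows_per_partition': '100'}");
        }
    }
}
```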
YOUR STORIES
Best two stories get books from O’Reilly