Chill, Distill, No Overkill: Best Practices to Stress Test Kafka
Siva Kunapuli
About me
Teacher, Programmer, Engineer, Architect
• Started early, still at it
• 15+ years
• Services, Product, Consulting, Technical Account Management
Customers
• Financial services, strategic
• “Customer with a problem”
Kafkaesque
• Fail often, and learn
• Challenging to operationalize, but useful
• Around the world
Stress testing Kafka: The challenge
Paradigm driven
• Protocol interactions, a.k.a. real-time vs. batch
• Data storage vs. distribution
• Distributed system – components, scale, changing conditions
Resources
• Simple, commodity, cloud
• Provisioning demands vs. reality
Structure
• Parameters – test design
• Recordability, repeatability
• Costs and SLAs
01 Chill
02 Distill
03 No Overkill
04 = Stress-free stress testing
Stress testing primer
Robustness of setup
• Can your system handle stress gracefully?
• Does it fail where and when you’re not looking?
• What is usual, and what is unusual?
Spanning stress (i.e., not selective)
• Component/framework models – Connect, Streams, Core
• Resources rather than use case(s) – storage, IO throughput, memory, CPU
• Concurrency, data access – many clients, simulated network conditions
Mission critical
• No part is open to failure
• Use case is the driver
Kafka – an introduction
Pre-requisites
| Concern | Ready | Bonus points |
|---|---|---|
| Environment | Identified scaling procedures – adding brokers, storage | Automation for scaling |
| | Identified components – connect, streams, core | Good component diagram |
| | At least at production scale | Tear down after done |
| | Identified network setup | Ping, and packet roundtrip |
| Benchmarks | Active (normal) benchmarks published | Repeating at regular intervals |
| | Identified SLA | Negotiated, and signed off |
| Multi-tenancy | Quotas set | |
| Observability | Full cluster metrics captured, and visualized | Application metrics |
Pre-requisites continued
Benchmarking
• Other sessions, lightning talks
• OpenMessaging benchmark framework
• Simulate production load – multiple applications, clients, connectors, change data capture (CDC), etc.
Clean container environments
• No massively parallel, multi-function, all-encompassing clusters – keep them single-purpose
Observability
• APM tools, or DIY
• Must have – production, consumption, topic-level, and throughput metrics
Multi-tenancy
• Stop; do not move forward without quotas
• Separation can be challenging even with quotas – cluster downtime
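The benchmarking step above is usually driven by the stock `kafka-producer-perf-test.sh` tool or the OpenMessaging framework. A minimal sketch of a wrapper (the function name, topic, and broker address are invented for illustration) that assembles a perf-test invocation from recorded parameters, so the exact settings of every baseline run are captured:

```python
# Sketch: build a kafka-producer-perf-test.sh command line from recorded
# test parameters, so each benchmark run is reproducible and comparable.
def perf_test_command(topic, num_records, record_size, throughput, bootstrap):
    return (
        "kafka-producer-perf-test.sh"
        f" --topic {topic}"
        f" --num-records {num_records}"
        f" --record-size {record_size}"      # bytes per message
        f" --throughput {throughput}"        # msgs/sec, -1 = unthrottled
        f" --producer-props bootstrap.servers={bootstrap} acks=all"
    )

cmd = perf_test_command("stress-test", 1_000_000, 512, -1, "broker1:9092")
print(cmd)
```

Keeping the generated command next to the published baseline makes "repeating at regular intervals" a one-line job.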
A good stress test for Kafka
Stick to the paradigm
• Request/response, topic semantics
• Push data, and consume
• Application is less important
Include all parameters
• Component tests
• High-concurrency tests – race conditions
• Resource tests – network, IO, CPU
Specific use cases
• Break something, and recover
• Change conditions
• Memory leaks
Kafka internals
Kafka internals continued
Consumer Group protocol
• Partition assignment, and subscription
• Group coordinator
• Rebalance triggers
• Offset management
Control plane
• Controller with and without KRaft
• Topic metadata
• Replication
Topics
• Compaction
• Message keys, and partitions
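Message keys drive partition placement, which is why keying resurfaces in the concurrency tests later. Kafka's default partitioner hashes the key (murmur2 in the Java client) modulo the partition count; the stand-in hash below is not murmur2, just an illustration of the property stress tests rely on: the same key always lands on the same partition, so one hot key can saturate a single partition.

```python
import hashlib

# Illustrative key -> partition mapping. Kafka's Java client uses
# murmur2(keyBytes) % numPartitions; any stable hash shows the property.
def partition_for(key: bytes, num_partitions: int) -> int:
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

p1 = partition_for(b"customer-42", 12)
p2 = partition_for(b"customer-42", 12)
print(p1, p2)  # same key, same partition - every time
```

A skewed key distribution in test data therefore produces a skewed broker load, which is worth doing deliberately at least once.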
Component tests
Connect
• Change Data Capture (CDC) has row, and timestamp dependency
• Database/data store reads/writes
• Protocol shifts – Kafka to HTTP and back
Streams
• Stateless vs. stateful
• Test against real topology
• Focus on changelog topics, and state stores
• Streams application reset tool
Others
• Non-Java clients may have different concurrency models
• ZooKeeper/KRaft
• Avoid Admin API tests, especially for topic metadata, and partition changes
• Geo-replication
• Multi-tenancy
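For the Streams column, "focus on changelog topics, and state stores" can be pictured with a toy stateful count: every state-store update also emits a changelog record, and restoring the store means replaying that changelog. This is a language-agnostic sketch of the idea, not the Kafka Streams API:

```python
# Toy model of a Kafka Streams state store with a changelog topic:
# each update to local state is mirrored to the changelog, and the
# store can be rebuilt by replaying the changelog from the start.
def process(events, state, changelog):
    for key in events:
        state[key] = state.get(key, 0) + 1
        changelog.append((key, state[key]))  # one changelog record per update

def restore(changelog):
    state = {}
    for key, value in changelog:
        state[key] = value  # last write wins, like a compacted changelog
    return state

state, changelog = {}, []
process(["a", "b", "a"], state, changelog)
print(state, restore(changelog) == state)
```

Under stress, this is why state-store deletion and restoration (replaying large changelogs) belongs in the test plan: restore time grows with changelog size.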
High concurrency
Multiple producer and consumer application instances are better
• Can help surface keying/partitioning issues
• Containerization can help, but don't go overboard
• Different network fragments, i.e., different data centers or availability zones, are better
Small, numerous messages are better
• Large messages break concurrency tests, and are not typical for Kafka
• Increasing, and decreasing the number of messages can be part of the test
Transactions/EOS
• If using the Transactions API, several additional considerations are in play, including the Transaction Coordinator
• Exactly-Once Semantics (EOS) influences stress
Race conditions
• Not enough partitions on consumer group topic(s)
• Rebalances
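The "many small messages from many instances" advice can be sketched with plain threads and a queue standing in for a topic (no Kafka client involved): several producers push small messages concurrently while consumers drain them, which is the shape a high-concurrency test driver takes.

```python
import queue
import threading

# Stand-in for a topic: a thread-safe queue fed by several small-message
# producers and drained by consumers, as a concurrency test driver would.
topic = queue.Queue()
MSGS_PER_PRODUCER, PRODUCERS = 1000, 4

def producer(pid):
    for i in range(MSGS_PER_PRODUCER):
        topic.put(f"p{pid}-{i}")  # many small messages, not a few big ones

received = []
lock = threading.Lock()

def consumer():
    while True:
        try:
            msg = topic.get(timeout=0.2)
        except queue.Empty:
            return  # quiet for a while: assume the run is over
        with lock:
            received.append(msg)

threads = [threading.Thread(target=producer, args=(p,)) for p in range(PRODUCERS)]
threads += [threading.Thread(target=consumer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(received))  # 4000: nothing lost under concurrency
```

The real version swaps the queue for producer/consumer clients against the cluster; the checks stay the same: nothing lost, nothing duplicated.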
Resource tests

| Resource | Before test | Observe |
|---|---|---|
| IO throughput | Identify limits of storage class – device hardware, or virtualization | Throughput should hit or exceed limit |
| | Stage multiple devices, storage directories | Usage of all log dirs should increase |
| | Allow for ongoing snapshots/backups | Effects of snapshots/backups, and their failure |
| Network | Identify provisioned capacity, ping, and packet roundtrip | Message bytes for replication + produce/consume should match |
| | Network partitions known | Hit all possible network fragments, and observe differences |
| CPU | Benchmarks for various compression types known | CPU utilization should continue to be low |
| | If security protocol is mTLS | Test with real certificates and the right algorithms |
| Memory | Must include if using Streams, or Connect | JVM, and RocksDB metrics |
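The "message bytes for replication + produce/consume should match" row is plain arithmetic worth pre-computing before the test: with replication factor RF, every produced byte is fetched RF−1 more times by followers, and read once per consumer group. A simplified sketch (the rates are made-up example numbers, and replication is counted once, as leader egress):

```python
# Pre-compute expected cluster network load so observed broker metrics
# (BytesInPerSec / BytesOutPerSec) can be checked against it.
def expected_network_bytes(produce_mb_s, replication_factor, consumer_groups):
    replication = produce_mb_s * (replication_factor - 1)  # follower fetch traffic
    consume = produce_mb_s * consumer_groups               # each group reads everything
    return {
        "ingress_mb_s": produce_mb_s,
        "egress_mb_s": replication + consume,
        "total_mb_s": produce_mb_s + replication + consume,
    }

load = expected_network_bytes(produce_mb_s=100.0, replication_factor=3, consumer_groups=2)
print(load)  # {'ingress_mb_s': 100.0, 'egress_mb_s': 400.0, 'total_mb_s': 500.0}
```

If the observed bytes diverge from this estimate, something other than the test traffic (rebalancing, reassignment, a chatty client) is on the wire.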
Use case driven
Not every use case can cause stress
• The use case needs to push the structural boundaries of Kafka, i.e., paradigms, components, or resources
• Criticality <> Latency <> Throughput <> Cost
Run the full use case with end-to-end latency metrics
• Introduce application-specific metrics; simple JMX will do
• End-to-end latency for critical use cases must be designed upfront, and included in the SLA
• Data availability, and system boundaries must be accounted for
Production-critical use cases with low latency need good infrastructure
• The purpose of stress testing is to establish system limits, not necessarily to provide insights outside of resilience
• Repeated test cycles are not substitutes for good infrastructure
• Network is usually the bottleneck
Substantial parallelism requires specific tuning
• Thousands of parallel connections, while supported, may create unknown system states
• Port/socket-level limits, TCP and other buffers
Staying calm, and breaking Kafka
Chaos can be fun
• Stop brokers, network devices, and storage devices
• Pull the plug, cord, or anything that can be pulled
• Remove certs, change firewall rules, and necessary software components
Increase number of client instances, number of messages
• Continuous increase will start to hit message-level latency, and throughput
• Topic-level metrics like bytes in will start to dip
• Focus on the 95th percentile for stress tests
For critical use cases, identify and introduce breaking points
• Increase number of database rows, or remove Hadoop partitions
• Delete state stores, back up state stores, and observe
• Where load balancers are in play, test them for real scenarios
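"Focus on the 95th percentile" because averages hide exactly the tail that stress testing is meant to expose. A minimal percentile calculation over recorded end-to-end latencies (the sample numbers are invented):

```python
# p95 over recorded end-to-end latencies: sort, then take the value at
# the 95% rank. The mean buries outliers; the tail percentile shows them.
def percentile(samples, pct):
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[rank]

latencies_ms = [4, 5, 5, 6, 6, 7, 7, 8, 9, 10, 11, 12, 13, 15, 18, 22, 25, 30, 45, 120]
print("avg:", sum(latencies_ms) / len(latencies_ms))  # pulled up only slightly
print("p95:", percentile(latencies_ms, 95))           # the tail the test is after
```

Here the mean (18.9 ms) looks acceptable while the p95 (45 ms) tells the real story, which is why the SLA should be written against the percentile, not the average.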
Memory and leaks
Cannot say where
• Memory leaks can occur in all components
• JVM tuning is not generally required unless setting up for a specific environment or use case
• Tuning is likely to go overboard – think GC
Use a profiler, and have a runbook
• Get familiar with using a JVM profiler, and have the ability to attach to Kafka components
• Can help with application debugging too
More likely for REST, and other interactions
• The Kafka protocol itself doesn't rely too much on memory
• Therefore, understand and test from the angle of where data is moving, and why
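Leak hunting on the JVM side is profiler work (heap dumps, attach-based profilers), but the first signal is the same everywhere: memory that only ever grows across samples taken at steady load. A tool-agnostic sketch of that check over sampled heap/RSS values (the sample series are invented):

```python
# Crude leak signal: at steady load, heap/RSS samples should plateau or
# sawtooth as GC reclaims memory. Flag a window of monotonic growth.
def looks_like_leak(samples_mb, min_growth_mb=1.0):
    always_growing = all(b > a for a, b in zip(samples_mb, samples_mb[1:]))
    total_growth = samples_mb[-1] - samples_mb[0]
    return always_growing and total_growth >= min_growth_mb

healthy = [512, 540, 530, 545, 538, 541]  # sawtooth: GC reclaims memory
leaky = [512, 530, 548, 566, 590, 620]    # only ever up, under constant load
print(looks_like_leak(healthy), looks_like_leak(leaky))  # False True
```

A check like this on the observability side decides when it is worth attaching the profiler at all.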
Recording results, and recovering
Results should be metrics
• Drops or changes in metrics under specific conditions should be captured
• Functional findings, i.e., application changes, are interesting observations but not necessarily tied to stress testing (except for critical use cases)
Brokers should be up; retention is your friend
• Bring up any lost brokers; recovery should be straightforward
• Topic retention will help remove large volumes of messages – set it low when stress testing
Break glass
• Procedures should be in place for any stress testing; for Kafka, this may include the ability to drop and create a new cluster
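"Set retention low when stress testing" is easy to size in advance: disk needed per broker is roughly produce rate × retention window × replication factor, divided across the brokers. A quick sketch with invented example numbers (it ignores compression and index overhead, so it is a sizing estimate, not a guarantee):

```python
# Rough disk footprint of a stress test: produce rate x retention window
# x replication factor, spread across the brokers.
def disk_per_broker_gb(produce_mb_s, retention_hours, replication_factor, brokers):
    total_mb = produce_mb_s * retention_hours * 3600 * replication_factor
    return total_mb / 1024 / brokers

# 100 MB/s for 1 hour at RF=3 on 6 brokers: ~176 GB each, gone once retention kicks in
print(round(disk_per_broker_gb(100, 1, 3, 6), 1))
```

Running the same numbers with the default 7-day retention shows why leaving it in place during a stress test fills disks fast.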
46
Recording results, and
recovering
Results should be metrics
Drop or change in metrics under specific
conditions should be captured
Functional testing i.e., application changes are
interesting observations but not necessarily tied
to stress testing (except critical use cases)
Brokers should be up, retention is your
friend
Bring up any lost brokers, and recovery should be
straightforward
Topic retention will help remove any large volumes
of messages, set it to low when stress testing
Break glass
Procedures should be in place for any stress
testing. For Kafka, this may include ability to drop
and create new cluster.
47
Recording results, and
recovering
Results should be metrics
Drop or change in metrics under specific
conditions should be captured
Functional testing i.e., application changes are
interesting observations but not necessarily tied
to stress testing (except critical use cases)
Brokers should be up, retention is your
friend
Bring up any lost brokers, and recovery should be
straightforward
Topic retention will help remove any large volumes
of messages, set it to low when stress testing
Break glass
Procedures should be in place for any stress
testing. For Kafka, this may include ability to drop
and create new cluster.
48
Recording results, and
recovering
Results should be metrics
Drop or change in metrics under specific
conditions should be captured
Functional testing i.e., application changes are
interesting observations but not necessarily tied
to stress testing (except critical use cases)
Brokers should be up, retention is your
friend
Bring up any lost brokers, and recovery should be
straightforward
Topic retention will help remove any large volumes
of messages, set it to low when stress testing
Break glass
Procedures should be in place for any stress
testing. For Kafka, this may include ability to drop
and create new cluster.
Repeatability
Use scripts, and chaos testing
• All tests, including deploying multiple instances, can be scripted
• Tie into any popular testing framework
• Think about, and get ready with, automation and continuous deployment during cluster build
Inheritance framework for Kafka clusters
• Top-secret project which no one (including me) works on :)
• Start with metrics and benchmarking, and move on to stress testing
• Critical use cases cannot be built on poorly understood systems
Cloud vs. on-prem
• On-prem systems are excellent candidates for stress testing because expansion, and bug fixing, take longer
• Patching of cloud instances is also a good opportunity to repeat
• Automation will help in both cases
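Repeatability starts with recording every run's parameters next to its results, so a later run can be compared like-for-like. A minimal sketch of such a run record (the field names are invented, not a standard schema):

```python
import json

# Minimal, scriptable record of one stress-test run: the exact parameters
# plus the observed metrics, stored together so runs stay comparable.
def record_run(params, results):
    return json.dumps({"params": params, "results": results}, sort_keys=True)

run = record_run(
    params={"clients": 50, "msg_size_bytes": 512, "target_msgs_s": 200_000},
    results={"p95_latency_ms": 41, "achieved_msgs_s": 187_500, "errors": 0},
)
print(run)
restored = json.loads(run)
```

A directory of these records, one per run, is the simplest version of the "start with metrics and benchmarking" baseline the slide calls for.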
Scenario illustrations, and exercise
Some real(ish) scenarios – design a good stress test
Sensor data from multiple devices
• Thousands of devices sending data to the cluster
• Some are real-time, requiring immediate response; others send large batches
• Messages need to be analyzed almost instantaneously
CDC from Oracle, streams, high volume
• Continuously increasing transaction volume
• Streams processing with joins/other aggregates
• High volume (>5000 messages/sec)
Geo-distributed, ultra-low latency
• Cluster serves multiple geographies
• Requires ultra-low latency for messages (<10 milliseconds)
• Volume is low, but will increase as cluster adoption increases
Questions
Thank you!
Siva Kunapuli

7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Empathic Computing: Creating Shared Understanding
Teaching material agriculture food technology
Advanced methodologies resolving dimensionality complications for autism neur...
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
Unlocking AI with Model Context Protocol (MCP)
Review of recent advances in non-invasive hemoglobin estimation
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
Encapsulation theory and applications.pdf
Digital-Transformation-Roadmap-for-Companies.pptx
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Architecting across the Boundaries of two Complex Domains - Healthcare & Tech...
Reach Out and Touch Someone: Haptics and Empathic Computing
Modernizing your data center with Dell and AMD
20250228 LYD VKU AI Blended-Learning.pptx
Network Security Unit 5.pdf for BCA BBA.
Machine learning based COVID-19 study performance prediction
Blue Purple Modern Animated Computer Science Presentation.pdf.pdf
The Rise and Fall of 3GPP – Time for a Sabbatical?

Chill, Distill, No Overkill: Best Practices to Stress Test Kafka with Siva Kunapuli

  • 1. Chill, Distill, No Overkill: Best Practices to Stress Test Kafka Siva Kunapuli
  • 2. About me
    Teacher, programmer, engineer, architect
    • Started early, still at it
    • 15+ years
    • Services, product, consulting, technical account management
    Customers
    • Financial services, strategic
    • “Customer with a problem”
    Kafkaesque
    • Fail often, and learn
    • Challenging to operationalize, but useful
    • Around the world
  • 3. Stress testing Kafka: the challenge
    Paradigm driven
    • Protocol interactions, a.k.a. real-time vs. batch
    • Data storage vs. distribution
    • Distributed system – components, scale, changing conditions
    Resources
    • Simple, commodity, cloud
    • Provisioning demands vs. reality
    Structure
    • Parameters – test design
    • Recordability, repeatability
    • Costs and SLAs
  • 8. Stress testing primer
    Robustness of setup
    • Can your system handle stress gracefully?
    • Does it fail where and when you’re not looking?
    • What is usual, and what is unusual?
    Spanning stress (i.e., not selective)
    • Component/framework models – Connect, Streams, Core
    • Resources rather than use case(s) – storage, IO throughput, memory, CPU
    • Concurrency, data access – many clients, simulated network conditions
    Mission critical
    • No part is open to failure
    • Use case is the driver
  • 12. Kafka – an introduction
  • 13. Pre-requisites
    Environment
    • Ready: identified scaling procedures – adding brokers, storage (bonus: automation for scaling)
    • Ready: identified components – Connect, Streams, Core (bonus: good component diagram)
    • Ready: at least at production scale (bonus: tear down after done)
    • Ready: identified network setup (bonus: ping, and packet roundtrip)
    Benchmarks
    • Ready: active (normal) benchmarks published (bonus: repeating at regular intervals)
    • Ready: identified SLA (bonus: negotiated, and signed off)
    Multi-tenancy
    • Ready: quotas set
    Observability
    • Ready: full cluster metrics captured, and visualized (bonus: application metrics)
  • 17. Pre-requisites continued
    Benchmarking
    • Other sessions, lightning talks
    • OpenMessaging benchmark framework
    • Simulate production load – multiple applications, clients, connectors, change data capture (CDC), etc.
    Clean container environments
    • No massively parallel, multi-function, single-purpose, all-encompassing clusters
    Observability
    • APM tools, or DIY
    • Must have – production, consumption, topic-level, throughput metrics
    Multi-tenancy
    • Stop, and do not move forward without quotas
    • Separation can be challenging even with quotas – cluster downtime
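Publishing active benchmarks means reducing raw runs to a few numbers you can re-check at regular intervals. A minimal sketch, with assumed sample data and a simple nearest-rank percentile, of turning one run into a publishable baseline:

```python
# Minimal sketch (all numbers hypothetical): summarize a benchmark run
# into the baseline metrics you would publish and repeat at intervals.
import statistics

def baseline_summary(latencies_ms, bytes_sent, duration_s):
    """Reduce raw benchmark samples to publishable baseline numbers."""
    ordered = sorted(latencies_ms)
    # p95 via the simple nearest-rank method
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return {
        "throughput_mb_s": round(bytes_sent / duration_s / 1_000_000, 2),
        "latency_ms_mean": round(statistics.mean(latencies_ms), 2),
        "latency_ms_p95": p95,
    }

# Example: 1000 latency samples from a hypothetical 60-second run
samples = [5 + (i % 50) / 10 for i in range(1000)]  # 5.0–9.9 ms spread
print(baseline_summary(samples, bytes_sent=2_000_000_000, duration_s=60))
```

Publishing the same three numbers after every repeat makes drift visible long before a stress test is needed.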
  • 18. A good stress test for Kafka
    Stick to the paradigm
    • Request/response, topic semantics
    • Push data, and consume
    • Application is less important
    Include all parameters
    • Component tests
    • High-concurrency tests – race conditions
    • Resource tests – network, IO, CPU
    Specific use cases
    • Break something, and recover
    • Change conditions
    • Memory leaks
  • 24. Kafka internals continued
    Consumer group protocol
    • Partition assignment, and subscription
    • Group coordinator
    • Rebalance triggers
    • Offset management
    Control plane
    • Controller with and without KRaft
    • Topic metadata
    • Replication
    Topics
    • Compaction
    • Message keys, and partitions
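Message keys drive partition placement, which is why key distribution matters to a stress test. A simplified sketch of keyed partitioning – the real Java client hashes keys with murmur2; a CRC32 stand-in is used here purely for illustration:

```python
# Simplified sketch: messages with the same key land on the same
# partition, so hot keys concentrate load on one broker. The real Java
# client uses murmur2; zlib.crc32 is a stand-in hash for illustration.
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministic key -> partition mapping (stand-in for murmur2)."""
    return zlib.crc32(key) % num_partitions

# Skewed keys mean skewed partitions – worth checking before a stress test.
keys = [b"device-1"] * 80 + [b"device-2"] * 20
counts = {}
for k in keys:
    p = partition_for(k, 6)
    counts[p] = counts.get(p, 0) + 1
print(counts)  # e.g. {3: 80, 5: 20} – exact spread depends on the hash
```

If a handful of keys dominate, stress results reflect one hot partition rather than the cluster.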
  • 25. Component tests
    Connect
    • Change data capture (CDC) has row, timestamp dependency
    • Database/data store reads/writes
    • Protocol shifts – Kafka to HTTP and back
    Streams
    • Stateless vs. stateful
    • Test against real topology
    • Focus on changelog topics, and state stores
    • Streams application reset tool
    Others
    • Non-Java clients may have different concurrency
    • ZooKeeper/KRaft
    • Avoid Admin API tests, especially for topic metadata, and partition changes
    • Geo-replication
    • Multi-tenancy
  • 29. High concurrency
    Multiple producer, consumer application instances are better
    • Can help establish keying/partitioning issues
    • Containerization can help, but don’t go overboard
    • Different network fragments, i.e., different data centers or availability zones, are better
    Small, numerous messages are better
    • Large messages break concurrency tests and are not normal for Kafka
    • Increasing, and decreasing the number of messages could be part of the test
    Transactions/EOS
    • If using the Transactions API, several additional considerations, including the Transaction Coordinator, are in play
    • Exactly-once semantics (EOS) influences stress
    Race conditions
    • Not enough partitions on consumer group topic(s)
    • Rebalances
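The "not enough partitions" race condition is easy to reason about with a toy assignment. A minimal sketch (not the actual consumer group assignor, just a round-robin stand-in) showing that consumers beyond the partition count sit idle:

```python
# Minimal sketch: round-robin-style spread of partitions over consumers.
# With fewer partitions than consumers, the surplus consumers sit idle –
# one of the concurrency setups worth provoking deliberately in a test.
def assign(partitions: int, consumers: list) -> dict:
    """Spread partition ids over consumers; surplus consumers get nothing."""
    assignment = {c: [] for c in consumers}
    for p in range(partitions):
        assignment[consumers[p % len(consumers)]].append(p)
    return assignment

print(assign(4, ["c1", "c2", "c3", "c4", "c5", "c6"]))
# c5 and c6 receive no partitions: adding instances past the partition
# count adds rebalance churn, not throughput.
```

This is why scaling client instances during a concurrency test should be paired with watching partition counts and rebalance metrics.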
  • 34. Resource tests
    IO throughput
    • Before: identify limits of storage class – device hardware, or virtualization → Observe: throughput should hit or exceed the limit
    • Before: stage multiple devices, storage directories → Observe: usage of all log dirs should increase
    • Before: allow for ongoing snapshots/backups → Observe: effects of snapshots/backups, and their failure
    Network
    • Before: identify provisioned capacity, ping, and packet roundtrip → Observe: message bytes for replication + produce/consume should match
    • Before: network partitions known → Observe: hit all possible network fragments, and observe differences
    CPU
    • Before: benchmarks for various compression types known → Observe: CPU utilization should continue to be low
    • Before: if the security protocol is mTLS, test with real certificates and the right algorithms
    Memory
    • Must include if using Streams, or Connect → Observe: JVM, and RocksDB metrics
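The "bytes should match" check for the network row can be sanity-checked with a back-of-envelope formula. A sketch under the simplifying assumption that every byte is written once per replica and read once per consuming application (ignoring compression, batching overhead, and fetch-from-follower):

```python
# Back-of-envelope sketch (assumed, simplified model): cluster network
# traffic during a test should roughly match produce rate scaled by
# replication and consumer fan-out – a sanity check for observed bytes.
def expected_network_mb_s(produce_mb_s: float,
                          replication_factor: int,
                          consumer_fanout: int) -> float:
    """Ingress once per replica + egress once per consuming application."""
    replication_in = produce_mb_s * replication_factor  # writes to all replicas
    consume_out = produce_mb_s * consumer_fanout        # reads by each app
    return replication_in + consume_out

# 100 MB/s produced, replication factor 3, two independent consumer groups
print(expected_network_mb_s(100, 3, 2))  # 500.0
```

If observed bytes diverge far from this estimate, something else – rebalances, retries, or replication catch-up – is consuming the network.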
  • 39. Use case driven
    Not every use case can cause stress
    • The use case needs to be able to push the structural boundaries of Kafka, i.e., paradigms, components, or resources
    • Criticality <> latency <> throughput <> cost
    Run the full use case with end-to-end latency metrics
    • Introduce application-specific metrics; simple JMX will do
    • End-to-end latency for critical use cases must be designed upfront, and included in the SLA
    • Data availability, and system boundaries must be accounted for
    Production-critical use cases with low latency need good infrastructure
    • The purpose of stress testing is to establish system limits, not necessarily to provide insights outside of resilience
    • Repeated test cycles are not substitutes for good infrastructure
    • Network is usually the bottleneck
    Substantial parallelism requires specific tuning
    • Thousands of parallel connections, while supported, may create unknown system states
    • Port/socket-level limits, TCP and other buffers
  • 40. Staying calm, and breaking Kafka
    Chaos can be fun
    • Stop brokers, network devices, and storage devices
    • Pull the plug, cord, or anything that can be pulled
    • Remove certs, change firewall rules, and necessary software components
    Increase the number of client instances, and the number of messages
    • Continuous increase will start to hit message-level latency, and throughput
    • Topic-level metrics like bytes-in will start to dip
    • Focus on the 95th percentile for stress tests
    For critical use cases, identify and introduce breaking points
    • Increase the number of database rows, or remove Hadoop partitions
    • Delete state stores, back up state stores, and observe
    • Where load balancers are in play, test them for real scenarios
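Focusing on the 95th percentile means computing latency per message from produce and consume timestamps, not averaging. A sketch with hypothetical timestamp data and the same simple nearest-rank percentile:

```python
# Sketch (hypothetical data): per-message end-to-end latency from
# produce/consume timestamps, reported at the 95th percentile – the
# tail is what stress reveals first, long before the mean moves.
def e2e_p95_ms(produced_ms, consumed_ms):
    """Nearest-rank p95 of (consume - produce) deltas, in milliseconds."""
    deltas = sorted(c - p for p, c in zip(produced_ms, consumed_ms))
    return deltas[min(len(deltas) - 1, int(0.95 * len(deltas)))]

produced = list(range(1000))              # one message per ms: t = 0, 1, 2, ...
consumed = [t + 5 if t % 10 else t + 50   # every 10th message is slow
            for t in produced]
print(e2e_p95_ms(produced, consumed))     # 50 – the tail, not the common case
```

Note the mean here would be about 9.5 ms and hide the 50 ms tail entirely; that is the argument for p95 in a stress report.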
  • 44. Memory and leaks
    Cannot say where
    • Memory leaks can occur in all components
    • JVM tuning is not generally required unless setting up for a specific environment or use case
    • Tuning is likely to go overboard – think GC
    Use a profiler, and have a runbook
    • Get familiar with the usage of a JVM profiler, and have the ability to attach to Kafka components
    • Can help with application debugging also
    May be more likely for REST, and other interactions
    • The Kafka protocol itself doesn’t rely too much on memory
    • Therefore, understand and test with the angle of where data is moving and why
  • 45. Recording results, and recovering
    Results should be metrics
    • A drop or change in metrics under specific conditions should be captured
    • Functional testing, i.e., application changes, yields interesting observations but is not necessarily tied to stress testing (except for critical use cases)
    Brokers should be up, retention is your friend
    • Bring up any lost brokers; recovery should be straightforward
    • Topic retention will help remove any large volumes of messages – set it low when stress testing
    Break glass
    • Procedures should be in place for any stress testing. For Kafka, this may include the ability to drop and create a new cluster.
  • 49. Repeatability
    Use scripts, and chaos testing
    • All tests, including deploying multiple instances, can be scripted
    • Tie into any popular testing framework
    • Think about and get ready with automation, and continuous deployment during cluster build
    Inheritance framework for Kafka clusters
    • Top-secret project which no one (including me) works on :)
    • Start with metrics, benchmarking, and move to stress testing
    • Critical use cases cannot be built on poorly understood systems
    Cloud vs. on-prem
    • On-prem systems are excellent candidates for stress testing because expansion, and bug fixing take longer
    • Patching of cloud instances is also a good opportunity to repeat
    • Automation will help in both cases
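Recordability and repeatability boil down to every run writing out the same parameters and metrics so repeats can be compared. A sketch of such a harness – all names (`run_cycle`, `fake_load`) are hypothetical, and the load function is a stand-in for a real load generator:

```python
# Sketch of a repeatable test harness (all names hypothetical): each run
# records its exact inputs alongside its metrics, so any run can be
# replayed and compared against earlier cycles.
import json
import time

def run_cycle(run_id: str, params: dict, load_fn) -> dict:
    """Execute one load cycle and return a comparable result record."""
    start = time.time()
    metrics = load_fn(params)          # user-supplied load generator
    return {
        "run_id": run_id,
        "params": params,              # exact inputs, kept for replay
        "duration_s": round(time.time() - start, 3),
        "metrics": metrics,
    }

# Stand-in load function for illustration only
def fake_load(params):
    return {"messages_sent": params["clients"] * params["msgs_per_client"]}

record = run_cycle("run-001", {"clients": 50, "msgs_per_client": 1000}, fake_load)
print(json.dumps(record, indent=2))
```

Swapping `fake_load` for a scripted producer/consumer run, and appending each record to a results store, is all it takes to tie this into a testing framework or CI pipeline.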
  • 51. Some real(ish) scenarios – design a good stress test
    Sensor data from multiple devices
    • Thousands of devices sending data to the cluster
    • Some are real-time, requiring an immediate response; others send large batches
    • Messages need to be analyzed almost instantaneously
    CDC from Oracle, streams, high volume
    • Continuously increasing transaction volume
    • Streams processing with joins/other aggregates
    • High volume (>5,000 messages/sec)
    Geo-distributed, ultra-low latency
    • Cluster serves multiple geographies
    • Requires ultra-low latency for messages (<10 milliseconds)
    • Volume is low, but will increase as cluster adoption increases