OCTOBER 11-14, 2016 • BOSTON, MA
Tuning Solr and its Pipeline for Logs
Rafał Kuć and Radu Gheorghe
Software Engineers, Sematext Group, Inc.
Agenda
Designing a Solr(Cloud) cluster for time-series data
Solr and operating system knobs to tune
Pipeline patterns and shipping options
Time-based collections, the single best improvement
14.10 15.10 ... 21.10
indexing
Less merging ⇒ faster indexing
Quick deletes (of whole collections)
Search only some collections
Better use of caches
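The routing side of time-based collections is trivial; a minimal sketch (the logs-YYYY.MM.DD naming scheme is an assumption for illustration, any scheme that encodes the day works the same way):

```python
from datetime import datetime, timezone

def daily_collection(ts, prefix="logs"):
    # Route a log event to its daily collection, e.g. logs-2016.10.14.
    return "{}-{:%Y.%m.%d}".format(prefix, ts)

event_time = datetime(2016, 10, 14, 9, 30, tzinfo=timezone.utc)
print(daily_collection(event_time))  # logs-2016.10.14
```

Deletes then become a single collection drop, and a search over a few days only has to fan out to those collections.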
Load is uneven
Need to “rotate” collections fast enough to work with this (otherwise indexing and queries will be slow)
(chart: a Black Friday spike, then tiny Saturday and Sunday collections)
If load is uneven, daily/monthly/etc indices are suboptimal:
you either have poor performance or too many collections
Octi* is worried:
* this is Octi →
Solution: rotate by size
indexing
logs01
Size limit
Solution: rotate by size
indexing
logs01
logs02
Size limit
Solution: rotate by size
indexing
logs01
logs08...
logs02
Solution: rotate by size
indexing
Predictable indexing and search performance
Fewer shards
logs01
logs08...
logs02
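The rotation check itself fits in a few lines; a hedged sketch (the collection names and two-digit suffix are illustrative, not the talk's actual tooling):

```python
def should_rotate(collection_size_bytes, size_limit_bytes):
    # Roll over to a new collection once the active one hits the size limit
    return collection_size_bytes >= size_limit_bytes

def next_collection(current):
    # logs01 -> logs02, and so on (two-digit suffix assumed)
    prefix, num = current[:-2], int(current[-2:])
    return "{}{:02d}".format(prefix, num + 1)

active = "logs01"
if should_rotate(collection_size_bytes=6 * 2**30, size_limit_bytes=5 * 2**30):
    active = next_collection(active)
print(active)  # logs02
```

With a fixed size limit, each collection ends up with roughly the same shape regardless of how spiky the incoming load was.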
Dealing with size-based collections
(diagram: the app caches results and keeps stats on each collection’s date boundaries:
logs01, logs02, ... logs08 with boundaries 2016-10-11, 2016-10-12, 2016-10-13, 2016-10-14, ... 2016-10-18;
the end of the latest collection doesn’t matter, it’s the latest collection)
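Since only the app knows where each collection starts, it can keep a small stats table and prune collections at query time. A sketch, under the assumption that each collection covers the span from its first timestamp to the next collection's first timestamp (the newest one being open-ended):

```python
def collections_for_range(start_dates, query_start, query_end):
    """Pick the collections a query needs to touch.

    start_dates maps collection name -> first timestamp it holds (ISO date
    strings compare correctly), ordered oldest to newest. Each collection
    covers [its start, the next collection's start); the newest one is
    open-ended, so its end "doesn't matter".
    """
    names = list(start_dates)
    hits = []
    for i, name in enumerate(names):
        begin = start_dates[name]
        end = start_dates[names[i + 1]] if i + 1 < len(names) else None
        if query_end >= begin and (end is None or query_start < end):
            hits.append(name)
    return hits

stats = {"logs01": "2016-10-11", "logs02": "2016-10-13", "logs08": "2016-10-18"}
print(collections_for_range(stats, "2016-10-12", "2016-10-14"))  # ['logs01', 'logs02']
```

Caching this table is cheap: it only changes on rotation, and the latest collection always matches "now".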
Size-based collections handle spiky load better
Octi concludes:
Tiered cluster (a.k.a. hot-cold)
hot01 (14 Oct): indexing, most searches ⇒ good CPU and IO*
cold01, cold02 (10-13 Oct): longer-running (+cached) searches ⇒ heap, decent IO for replication & backup
* Ideally local SSD; avoid network storage unless it’s really good
Octi likes tiered clusters
Costs: you can use different hardware for different workloads
Performance (see costs): fewer shards, less overhead
Isolation: long-running searches don’t slow down indexing
AWS specifics
Hot tier:
c3 (compute optimized) + EBS and use local SSD as cache*
c4 (EBS only)
Cold tier:
d2 (big local HDDs + lots of RAM)
m4 (general purpose) + EBS
i2 (big local SSDs + lots of RAM)
General stuff:
EBS optimized
Enhanced Networking
VPC (to get access to c4&m4 instances)
* Use --cachemode writeback for async writing:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Logical_Volume_Manager_Administration/lvm_cache_volume_creation.html
EBS
PIOPS is best but expensive
HDD - too slow (unless cold=icy)
⇒ General purpose SSDs
Stay under 3TB. More smaller (<1TB) drives in RAID0 give better, but shorter IOPS bursts
Performance isn’t guaranteed ⇒ RAID0 will wait for the slowest disk
Check limits (e.g. 160MB/s per drive, instance-dependent IOPS and network)
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumeTypes.html
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
3 IOPS/GB up to 10K (~3TB), up to 256KB/IO, merges up to 4 consecutive IOs
Octi’s AWS top picks
c4s for the hot tier: cheaper than c3s with similar performance
m4s for the cold tier: well balanced, scale up to 10xl, flexible storage via EBS
EBS drives < 3TB and no RAID0: otherwise, higher chances of a performance drop
Scratching the surface of OS options
Say no to swap
Disk scheduler: CFQ for HDD, deadline for SSD
Mount options: noatime, nodiratime, data=writeback, nobarrier (because strict ordering is for the weak)
For bare metal: check CPU governor and THP*
* often it’s enabled, but /proc/sys/vm/nr_hugepages is 0
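For reference, here is what those knobs look like on a Linux box; this is a sketch only, device names (sdX) and exact values are placeholders for your hardware, not recommendations:

```shell
# no swap (or at least strongly discourage it)
swapoff -a                       # and remove swap entries from /etc/fstab
sysctl -w vm.swappiness=1

# disk scheduler: deadline for SSD, CFQ for HDD
echo deadline > /sys/block/sdX/queue/scheduler

# /etc/fstab entry for the data partition (ext4), relaxed ordering:
# /dev/sdX1  /var/solr  ext4  noatime,nodiratime,data=writeback,nobarrier  0 0

# bare metal: check CPU governor and THP
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
cat /sys/kernel/mm/transparent_hugepage/enabled
```

Note that data=writeback and nobarrier trade crash-safety of the filesystem for speed, which is usually an acceptable deal for replicated log data.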
Schema and solrconfig
Auto soft commit (5s?)
Auto commit (few minutes?)
RAM buffer size + Max buffered docs
Doc Values for faceting
+ retrieving those fields (stored=false)
Omit norms, frequencies and positions
Don’t store catch-all field(s)
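As a rough illustration, the bullets above translate into solrconfig.xml and schema settings like the following; the field names and all numeric values are examples to tune, not recommendations:

```xml
<!-- solrconfig.xml -->
<updateHandler class="solr.DirectUpdateHandler2">
  <autoSoftCommit>
    <maxTime>5000</maxTime>            <!-- new logs searchable within ~5s -->
  </autoSoftCommit>
  <autoCommit>
    <maxTime>180000</maxTime>          <!-- hard commit every few minutes -->
    <openSearcher>false</openSearcher>
  </autoCommit>
</updateHandler>
<indexConfig>
  <ramBufferSizeMB>256</ramBufferSizeMB>
  <maxBufferedDocs>100000</maxBufferedDocs>
</indexConfig>

<!-- managed-schema: docValues for faceting + retrieval, trim what you don't need -->
<field name="status" type="string" indexed="true" stored="false" docValues="true"/>
<field name="message" type="text_general" indexed="true" stored="true"
       omitNorms="true" omitTermFreqAndPositions="true"/>
```

Keep in mind that omitTermFreqAndPositions disables phrase queries on that field, so only use it where "grep"-style term matching is enough.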
Relaxing the merge policy*
Merges are the heaviest part of indexing
Facets are the heaviest part of searching
Facets (except method=enum) depend on data size more than # of segments
Knobs:
segmentsPerTier: more segments ⇒ less merging
maxMergeAtOnce < segmentsPerTier, to smooth out those IO spikes
maxMergedSegmentMB: lower it to merge more small segments and fewer big ones ⇒ fewer open files
* unless you only do “grep”. YMMV, of course. Keep an eye on open files, though
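In solrconfig.xml these knobs live on the TieredMergePolicyFactory (Solr 6+); the values below are examples in the spirit of the tests that follow, tune and measure for your own load:

```xml
<!-- solrconfig.xml, under <indexConfig> -->
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="segmentsPerTier">50</int>           <!-- more segments, less merging -->
  <int name="maxMergeAtOnce">30</int>            <!-- < segmentsPerTier, smooths IO spikes -->
  <double name="maxMergedSegmentMB">500</double> <!-- down from the 5GB default -->
</mergePolicyFactory>
```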
Some numbers: more segments, more throughput (10-15% here)
(chart: indexing throughput with 10 segmentsPerTier + 10 maxMergeAtOnce vs. 50 segmentsPerTier + 50 maxMergeAtOnce; need to rotate before the throughput drop)
Lower max merged segment size (500MB, down from the 5GB default) ⇒ less CPU, fewer segments
There’s more...
SPM screenshots from all tests + JMeter test plan here:
https://github.com/sematext/lucene-revolution-samples/tree/master/2016/solr_logging
We’d love to hear about your own results!
correct spelling: sematext.com/spm
Increasing segments per tier while decreasing max merged
segment (by an order of magnitude) makes indexing better and
search latency less spiky
Octi’s conclusions so far
Optimize I/O and CPU by not optimizing
Unless you have spare CPU & IO (why would you?)
And unless you run out of open files
Only do this on “old” indices!
Optimizing the pipeline*
logs
log shipper(s)
Ship using which protocol(s)?
Buffer
Route to other destinations?
Parse
* for availability and performance/costs
Or log to Solr directly from app
(i.e. implement a new, embedded log shipper)
A case for buffers
performance: allows batches and threads
availability: holds data when Solr is down or can’t keep up
Types of buffers
Disk*, memory or a combination
On the logging host or centralized
* doesn’t mean it fsync()s for every message
file or local log shipper:
  Easy scaling; fewer moving parts
  Often requires a lightweight shipper
Kafka/Redis/etc or central log shipper:
  Extra features (e.g. TTL)
  One place for changes
Multiple destinations
input → buffer → processing → outputs*
Outputs need to be in sync
Processing may cause backpressure
* or flat files, or S3 or...
Multiple destinations
input → processing → Solr (tracks its own offset in the input)
input → processing → HDFS (tracks its own offset in the input)
Just Solr and maybe flat files? Go simple with a local shipper
Custom, fast-changing processing & multiple destinations? Kafka as a central buffer
Octi’s pipeline preferences
Parsing unstructured data
Ideally, log in JSON*
Otherwise, parse
* or another serialization format
For performance and maintenance
(i.e. no need to update parsing rules)
Regex-based (e.g. grok):
  Easy to build rules
  Rules are flexible
  Typically slow & O(n) on # of rules, but:
    Move matching patterns to the top of the list
    Move broad patterns to the bottom
    Skip patterns including others that didn’t match
Grammar-based (e.g. liblognorm, PatternDB):
  Faster. O(1) on # of rules
Numbers in our 2015 session:
sematext.com/blog/2015/10/16/large-scale-log-analytics-with-solr/
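The ordering tricks above are easy to see in a toy grok-style matcher; the pattern names and rules below are made up for illustration, not from any real grok library:

```python
import re

# Ordered regex patterns, first match wins: cost is O(n) in the rule list,
# so keep the patterns that match most of your traffic at the top.
PATTERNS = [
    ("apache_access",
     re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)" '
                r'(?P<status>\d{3}) (?P<bytes>\d+|-)')),
    ("generic_kv",  # broad catch-all pattern, kept at the bottom
     re.compile(r'(?P<key>\w+)=(?P<value>\S+)')),
]

def parse(line):
    for name, rx in PATTERNS:
        m = rx.search(line)
        if m:
            return name, m.groupdict()
    return None, {}

name, fields = parse('1.2.3.4 - - [14/Oct/2016:09:30:00 +0000] "GET / HTTP/1.1" 200 512')
print(name, fields["status"])  # apache_access 200
```

A grammar-based normalizer avoids the linear scan entirely by matching all rules in one pass over the message, which is where the O(1) claim comes from.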
Decide what gives when buffers fill up
(diagram: input → shipper → buffer; can drop data at the shipper, but better to buffer; when the shipper buffer is full, the app can block or drop data)
Check:
Local files: what happens when files are rotated/archived/deleted?
UDP: network buffers should handle spiky load*
TCP: what happens when connection breaks/times out?
UNIX sockets: what happens when socket blocks writes**?
* you’d normally increase net.core.rmem_max and rmem_default
** both DGRAM and STREAM local sockets are reliable (vs Internet ones, UDP and TCP)
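What "gives" is ultimately a policy decision in the shipper; here is a toy model (not any particular shipper's API) of a bounded memory buffer that either drops data or pushes back on the app:

```python
from collections import deque

class ShipperBuffer:
    """Bounded in-memory buffer: decide what gives when it fills up.

    on_full="drop" loses the newest event but never blocks the app;
    on_full="block" refuses the event (returns False), so the caller
    must wait and retry, i.e. backpressure.
    """
    def __init__(self, max_events, on_full="drop"):
        self.events = deque()
        self.max_events = max_events
        self.on_full = on_full
        self.dropped = 0

    def offer(self, event):
        if len(self.events) < self.max_events:
            self.events.append(event)
            return True
        if self.on_full == "drop":
            self.dropped += 1
            return True        # app continues, data is lost
        return False           # "block": caller must retry later

buf = ShipperBuffer(max_events=2, on_full="drop")
for e in ["a", "b", "c"]:
    buf.offer(e)
print(list(buf.events), buf.dropped)  # ['a', 'b'] 1
```

Counting drops (as dropped does here) is worth keeping even in a sketch: it is the metric that tells you the buffer was sized too small.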
Octi’s flow chart of where to log
critical? no → UDP. Increase network buffers on destination, so it can handle spiky traffic
critical? yes → paying with RAM or IO?
  RAM → UNIX socket. Local shipper with memory buffers, that can drop data if needed
  IO → Local files. Make sure rotation is in place or you’ll run out of disk!
Protocols
UDP: cool for the app (no failure/backpressure handling needed), but not reliable
TCP: more reliable, but not completely: the app gets the ACK when the OS buffer gets the data ⇒ no retransmit if that buffer is lost*
Application-level ACKs may be needed (sender keeps each batch until the receiver ACKs it)
* more at blog.gerhards.net/2008/05/why-you-cant-build-reliable-tcp.html
Protocol | Example shippers
HTTP     | Logstash, rsyslog, Fluentd
RELP     | rsyslog, Logstash
Beats    | Filebeat, Logstash
Kafka    | Fluentd, Filebeat, rsyslog, Logstash
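The sender-keeps-until-ACK idea fits in a few lines; the flaky transport below is a stand-in for a real connection, not any particular shipper's API:

```python
class FlakyReceiver:
    """Simulates a connection that breaks before the first ACK comes back."""
    def __init__(self, fail_first=1):
        self.fail_first = fail_first
        self.received = []

    def deliver(self, batch):
        if self.fail_first > 0:
            self.fail_first -= 1
            return False          # connection broke, no application-level ACK
        self.received.extend(batch)
        return True               # ACK

def ship(batch, receiver, max_retries=3):
    # Keep the batch buffered until the receiver explicitly ACKs it,
    # so data sitting in a lost OS buffer is re-sent instead of vanishing.
    for _ in range(max_retries):
        if receiver.deliver(batch):
            return True           # safe to discard the batch now
    return False                  # still buffered; caller decides what gives

rx = FlakyReceiver(fail_first=1)
print(ship(["log1", "log2"], rx), rx.received)  # True ['log1', 'log2']
```

This is exactly what RELP, Beats and Kafka's producer acks give you over plain TCP; the trade-off is possible duplicates on retry, so receivers may need to de-duplicate.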
Octi’s top pipeline+shipper combos
app → UNIX socket → rsyslog (memory+disk buffer, can drop) → HTTP
app → file → Filebeat → Kafka → consumer → HTTP
simple & reliable
Conclusions, questions, we’re hiring, thank you
The whole talk was pretty much only conclusions :)
Feels like there’s much more to discover. Please test & share your own nuggets
http://www.relatably.com/m/img/oh-my-god-memes/oh-my-god.jpg
Scary word, ha? Poke us:
@kucrafal @radu0gheorghe @sematext
...or @ our booth here