Good and Wicked Fairies
The Tragedy of the Commons
Understanding the Performance of Java 8 Streams
Kirk Pepperdine @kcpeppe
Maurice Naftalin @mauricenaftalin
#java8fairies
About Kirk
• Specialises in performance tuning
• speaks frequently about performance
• author of performance tuning workshop
• Co-founder jClarity
• performance diagnostic tooling
• Java Champion (since 2006)
Kirk’s quiz: what is this?
About Maurice
Co-author Author
Developer, designer, architect, teacher, learner, writer
@mauricenaftalin
The Lambda FAQ
www.lambdafaq.org
Varian 620/i
First Computer I Used
Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck
• Fork/Join parallelism in the real world
#java8fairies
Case Study: grep -b
The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall bring it back to cancel half a Line
Nor all thy Tears wash out a Word of it.
rubai51.txt
grep -b:
“The offset in bytes of a matched pattern
is displayed in front of the matched line.”
$ grep -b 'W.*t' rubai51.txt
44:Moves on: nor all thy Piety nor Wit
122:Nor all thy Tears wash out a Word of it.
Case Study: grep -b
Obvious iterative implementation
- process file line by line
- maintain byte displacement in an accumulator variable
Objective here is to implement it in stream code
- no accumulators allowed!
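The iterative version might look like this (our sketch, not the speakers' code; byte offsets assume UTF-8 text with single-byte line terminators):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class IterativeGrepB {
    // The "obvious" iterative grep -b: one pass over the lines, with the
    // byte displacement held in an accumulator variable.
    static List<String> grepB(List<String> lines, String regex) {
        Pattern pattern = Pattern.compile(regex);
        List<String> matches = new ArrayList<>();
        long offset = 0;  // accumulator: byte displacement of the current line
        for (String line : lines) {
            if (pattern.matcher(line).find()) {
                matches.add(offset + ":" + line);
            }
            // advance past this line plus its terminating newline
            offset += line.getBytes(StandardCharsets.UTF_8).length + 1;
        }
        return matches;
    }
}
```

On the rubai51.txt lines above, this reproduces the 44: and 122: offsets shown in the grep output.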
Streams – Why?
• Intention: replace loops for aggregate operations
• more concise, more readable, composable operations, parallelizable

instead of writing this:

List<Person> people = …
Set<City> shortCities = new HashSet<>();
for (Person p : people) {
    City c = p.getCity();
    if (c.getName().length() < 4) {
        shortCities.add(c);
    }
}

we’re going to write this:

Set<City> shortCities = people.stream()
    .map(Person::getCity)
    .filter(c -> c.getName().length() < 4)
    .collect(toSet());
Streams – Why?
• Intention: replace loops for aggregate operations
• more concise, more readable, composable operations, parallelizable

…and the same pipeline can use all our cores:

Set<City> shortCities = people.parallelStream()
    .map(Person::getCity)
    .filter(c -> c.getName().length() < 4)
    .collect(toSet());
Visualising Sequential Streams
Source → Map → Filter → Reduction
(Map and Filter: Intermediate Operations; Reduction: Terminal Operation)
[Animation: values x0 … x3 move one at a time through the pipeline; ✔ passes the
filter, ❌ is rejected]
“Values in Motion”
Journey’s End, JavaOne, October 2015
Simple Collector – toSet()
people.stream().collect(Collectors.toSet())
Stream<Person> → Collector<Person,?,Set<Person>> → Set<Person>
[Animation: the elements bill, jon and amy are drawn from the stream and
accumulated one by one into the result { amy, bill, jon }]
Classical Reduction
[Diagram: the elements 0 … 3 and 4 … 7 are each summed pairwise in a tree of +
operations starting from the identity 0, and the two partial results are added]
intStream.reduce(0, (a,b) -> a+b)
Mutable Reduction
[Diagram: the elements e0 … e3 and e4 … e7 are folded into two containers, each
created by the Supplier (() -> []), by repeated applications of the accumulator
(a); the two partial containers are then merged by the combiner (c)]
a: accumulator
c: combiner
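In code, the diagram corresponds to the three-argument collect. A minimal sketch (ours, not from the deck) that gathers an IntStream into a List:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

public class MutableReductionDemo {
    // Mutable reduction spelled out with the three functions from the diagram:
    // a Supplier that creates a fresh container for each segment (() -> []),
    // an accumulator that folds one element into a container, and a combiner
    // that merges two partial containers.
    static List<Integer> collectRange(int n) {
        ArrayList<Integer> result = IntStream.range(0, n)
                .parallel()
                .collect(ArrayList::new,      // supplier
                         ArrayList::add,      // accumulator
                         ArrayList::addAll);  // combiner
        return result;
    }
}
```

Because the source is ordered, the parallel run still yields the elements in encounter order.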
Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck
• Fork/Join parallelism in the real world
#java8fairies
grep -b: Collector accumulator and combiner
Supplier: () -> [ ]
Accumulator: for each line, record the line with its offset relative to the start
of this segment (if it matches), and add the line’s byte length, including its
newline, to the segment’s running total.
[Animation: “The moving … writ,” enters with offset 0 and length 44;
“Moves on: … Wit” follows at offset 44 with length 36]
Combiner: concatenate two partial results, first adding the left-hand segment’s
total byte length to every offset in the right-hand segment.
[Animation: the partial results for lines 1–2 and lines 3–4 are combined; the
left segment’s total length 80 is added to the right-hand offsets, turning
(0, Shall …) and (42, Nor …) into (80, Shall …) and (122, Nor …)]
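One way to realize this collector (a sketch under the same assumptions; the Segment class and grepB method names are ours, not from the deck):

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collector;

public class GrepBCollector {
    // Each partial result carries its matches' offsets relative to the start
    // of its own segment, plus the segment's total byte length, so that the
    // combiner can shift the right-hand offsets before concatenating.
    static class Segment {
        final List<Long> offsets = new ArrayList<>();
        final List<String> lines = new ArrayList<>();
        long length;  // total bytes covered by this segment
    }

    static Collector<String, ?, List<String>> grepB(String regex) {
        Pattern pattern = Pattern.compile(regex);
        return Collector.of(
            Segment::new,                       // supplier: () -> [ ]
            (seg, line) -> {                    // accumulator
                if (pattern.matcher(line).find()) {
                    seg.offsets.add(seg.length);
                    seg.lines.add(line);
                }
                seg.length += line.getBytes(StandardCharsets.UTF_8).length + 1;
            },
            (left, right) -> {                  // combiner: shift, then concatenate
                for (long off : right.offsets) left.offsets.add(left.length + off);
                left.lines.addAll(right.lines);
                left.length += right.length;
                return left;
            },
            seg -> {                            // finisher: format like grep -b
                List<String> out = new ArrayList<>();
                for (int i = 0; i < seg.lines.size(); i++)
                    out.add(seg.offsets.get(i) + ":" + seg.lines.get(i));
                return out;
            });
    }
}
```

The same collector works for stream() and parallelStream(); no accumulator variable crosses segment boundaries.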
Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck
• Fork/Join parallelism in the real world
#java8fairies
Why Shouldn’t We Optimize Code?
Because we don’t have a problem
- No performance target!
Else there is a problem, but not in our process
- The OS is struggling!
Else there’s a problem in our process, but not in the code
- GC is using all the cycles!
Now we can consider the code
Else there’s a problem in the code… somewhere
- now we can go and profile it
Demo…
So, streaming IO is slow
If only there were a way of getting all that data into memory…
Demo…
Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck OR – have a bright idea!
• Fork/Join parallelism in the real world
#java8fairies
Parallelism – Why?
The Free Lunch Is Over
http://www.gotw.ca/publications/concurrency-ddj.htm
[Image: Intel Xeon E5 2600, 10-core]
The Transistor, Blessed at Birth
What’s Happened?
Physical limitations of the technology:
• signal leakage
• heat dissipation
• speed of light!
– 30cm = 1 light-nanosecond
We’re not going to get faster cores,
we’re going to get more cores!
Visualizing Parallel Streams
[Animation: the values x0 … x3 are split across parallel copies of the pipeline
and processed concurrently (✔ passes the filter, ❌ is rejected); the partial
results are then combined]
A Parallel Solution for grep -b
• Parallel streams need splittable sources
• Streaming I/O makes you subject to Amdahl’s Law: speedup ≤ 1 / ((1 − p) + p/N),
where p is the parallelizable fraction of the work and N the number of processors
Blessing – and Curse – on the Transistor
Stream Sources for Parallel Processing
Implemented by a Spliterator
LineSpliterator
[Animation over a MappedByteBuffer holding “The moving Finger … writ\n”,
“Moves … Wit\n”, “Shall … Line\n”, “Nor all thy … it\n”: the spliterator’s
coverage is cut at mid, which is then advanced to the next \n, so the new
spliterator and the remaining one each cover only whole lines]
Demo…
Parallelizing grep -b
• Splitting action of LineSpliterator is O(log n)
• Collector no longer needs to compute index
• Result (relatively independent of data size):
- sequential stream ~2x as fast as iterative solution
- parallel stream >2.5x as fast as sequential stream
- on 4 hardware threads
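The LineSpliterator can be approximated as follows (our reconstruction; it works over a byte[] rather than a MappedByteBuffer, and assumes single-byte '\n' terminators):

```java
import java.nio.charset.StandardCharsets;
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.stream.Stream;
import java.util.stream.StreamSupport;

// trySplit() picks the midpoint of the remaining coverage, then advances it to
// the next newline so that each half covers only whole lines.
public class LineSpliterator implements Spliterator<String> {
    private final byte[] data;
    private int lo;         // inclusive start of remaining coverage
    private final int hi;   // exclusive end

    public LineSpliterator(byte[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    public boolean tryAdvance(Consumer<? super String> action) {
        if (lo >= hi) return false;
        int end = lo;
        while (end < hi && data[end] != '\n') end++;
        action.accept(new String(data, lo, end - lo, StandardCharsets.UTF_8));
        lo = end + 1;  // step past the newline
        return true;
    }

    @Override
    public Spliterator<String> trySplit() {
        int mid = lo + (hi - lo) / 2;
        while (mid < hi && data[mid] != '\n') mid++;  // move to a line boundary
        if (mid >= hi - 1) return null;               // too small to split
        Spliterator<String> prefix = new LineSpliterator(data, lo, mid + 1);
        lo = mid + 1;
        return prefix;
    }

    @Override
    public long estimateSize() { return hi - lo; }

    @Override
    public int characteristics() { return ORDERED | NONNULL; }

    // Convenience: a (possibly parallel) stream of the lines in data
    public static Stream<String> lines(byte[] data, boolean parallel) {
        return StreamSupport.stream(
                new LineSpliterator(data, 0, data.length), parallel);
    }
}
```

Because each split lands on a newline, every worker sees whole lines, which is what makes the collector above combinable.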
Parallelizing Streams
Parallel-unfriendly intermediate operations:
stateful ones
– need to store some or all of the stream data in memory
– sorted()
those requiring ordering
– limit()
Collectors Cost Extra!
Depends on the performance of accumulator and combiner functions
• toList(), toSet(), toCollection() – performance normally dominated by the
accumulator
• but allow for the overhead of managing multithread access to non-threadsafe
containers for the combine operation
• toMap(), toConcurrentMap() – map merging is slow.
Resizing maps, especially concurrent maps, is very expensive.
Whenever possible, presize all data structures, maps in particular.
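For example, the four-argument toMap overload accepts a map Supplier, which is one way to presize the target (a sketch; the class name and sizing heuristic are illustrative):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PresizedToMap {
    // "Presize all data structures": supply the target map at (close to) its
    // final capacity instead of letting it be resized repeatedly during the
    // collect. HashMap resizes when size exceeds capacity * 0.75, hence the
    // expectedSize * 4 / 3 adjustment.
    static Map<String, Integer> wordLengths(List<String> words, int expectedSize) {
        return words.stream().collect(Collectors.toMap(
                w -> w,                 // key
                String::length,         // value
                (a, b) -> a,            // merge function: keep first on duplicates
                () -> new HashMap<>(expectedSize * 4 / 3 + 1)));  // presized map
    }
}
```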
Agenda
• Define the problem
• Implement a solution
• Analyse performance
– find the bottleneck
• Fork/Join parallelism in the real world
#java8fairies
Simulated Server Environment
threadPool.execute(() -> {
    try {
        double value = logEntries.parallelStream()
            .map(applicationStoppedTimePattern::matcher)
            .filter(Matcher::find)
            .map(matcher -> matcher.group(2))
            .mapToDouble(Double::parseDouble)
            .summaryStatistics().getSum();
    } catch (Exception ex) {}
});
How Does It Perform?
• Total run time: 261.7 seconds
• Max: 39.2 secs, Min: 9.2 secs, Median: 22.0 secs
Tragedy of the Commons
Garrett Hardin, ecologist (1968):
Imagine the grazing of animals on a common ground.
Each flock owner gains if they add to their own flock.
But every animal added to the total degrades the
commons a small amount.
Tragedy of the Commons
You have a finite amount of hardware
– it might be in your best interest to grab it all
– but if everyone behaves the same way…
With many parallelStream() operations running concurrently,
performance is limited by the size of the common thread
pool and the number of cores you have
Be a good neighbor
Configuring Common Pool
Size of common ForkJoinPool is
• Runtime.getRuntime().availableProcessors() - 1
-Djava.util.concurrent.ForkJoinPool.common.parallelism=N
-Djava.util.concurrent.ForkJoinPool.common.threadFactory
-Djava.util.concurrent.ForkJoinPool.common.exceptionHandler
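A quick way to check these defaults on a given machine (our snippet; note the parallelism property must be set before the common pool is first used):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolShow {
    public static void main(String[] args) {
        // By default the common pool's parallelism is availableProcessors() - 1,
        // with a minimum of 1; it can be overridden on the command line with
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism=N
        int cores = Runtime.getRuntime().availableProcessors();
        int parallelism = ForkJoinPool.commonPool().getParallelism();
        System.out.println(cores + " cores, common pool parallelism " + parallelism);
    }
}
```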
Fork-Join
Support for Fork-Join added in Java 7
• difficult coding idiom to master
Used internally by parallel streams
• uses a spliterator to segment the stream
• each stream is processed by a ForkJoinWorkerThread
How fork-join works and performs is important for latency
ForkJoinPool invoke
ForkJoinPool.invoke(ForkJoinTask) uses the submitting thread
as a worker
• If 100 threads all call invoke(), we would have 100+ ForkJoinThreads
exhausting the limiting resource, e.g. CPUs, IO, etc.
ForkJoinPool submit/get
ForkJoinPool.submit(Callable).get() suspends the submitting thread
• If 100 threads all call submit(), the work queue can become very long,
thus adding latency
Fork-Join Performance
Fork Join comes with significant overhead
• each chunk of work must be large enough to amortize the overhead
C/P/N/Q Performance Model
C - number of submitters
P - number of CPUs
N - number of elements
Q - cost of the operation
When to go Parallel
The workload of the intermediate operations must be great
enough to outweigh the overheads (~100µs):
– initializing the fork/join framework
– splitting
– concurrent collection
Often quoted as N × Q, where N is the size of the data set (typically > 10,000)
and Q is the processing cost per element
Kernel Times
CPU will not be the limiting factor when
• CPU is not saturated
• kernel times exceed 10% of user time
More threads will decrease performance
• predicted by Little’s Law
Common Thread Pool
Fork-Join by default uses a common thread pool
• default number of worker threads == number of logical cores - 1
• Always contains at least one thread
Performance is tied to whichever you run out of first
• availability of the constraining resource
• number of ForkJoinWorkerThreads/hardware threads
All Hands on Deck!!!
Everyone is working together to get it done!
Little’s Law
Fork-Join is a work queue
• work queue behavior is typically modeled using Little’s Law
Number of tasks in a system equals the arrival rate times the
amount of time it takes to clear an item
Task is submitted every 500ms, or 2 per second
Number of tasks = 2/sec * 2.8 seconds
= 5.6 tasks
Components of Latency
Latency is time from stimulus to result
• internally, latency consists of active and dead time
Reducing dead time assumes you
• can find it and are able to fill in with useful work
From Previous Example
if there is available hardware capacity then
make the pool bigger
else
add capacity
or tune to reduce strength of the dependency
ForkJoinPool Observability
• In an application where many parallel stream operations are all running
concurrently, performance will be affected by the size of the common thread pool
• too small can starve threads of needed resources
• too big can cause threads to thrash on contended resources
ForkJoinPool comes with no visibility
• no metrics to help us tune
• instrument ForkJoinTask.invoke()
• gather measures that can be fed into Little’s Law
• collect
• service times (time submitted to time returned)
• inter-arrival times
Instrumenting ForkJoinPool
public final V invoke() {
    ForkJoinPool.common.getMonitor().submitTask(this);
    int s;
    if ((s = doInvoke() & DONE_MASK) != NORMAL)
        reportException(s);
    ForkJoinPool.common.getMonitor().retireTask(this);
    return getRawResult();
}
Performance
Submit log parsing to our own ForkJoinPool
new ForkJoinPool(16).submit(() -> ……… ).get()   // 16 worker threads
new ForkJoinPool(8).submit(() -> ……… ).get()    // 8 worker threads
new ForkJoinPool(4).submit(() -> ……… ).get()    // 4 worker threads
[Bar chart, run time in ms (0–300,000): Stream, Parallel, Flood Stream and
Flood Parallel configurations compared at 16, 8 and 4 worker threads]
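A runnable version of the private-pool trick (ours; the pool size and summing workload are illustrative stand-ins for the elided log-parsing pipeline):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.LongStream;

public class PrivatePoolDemo {
    // A parallel stream executes in the ForkJoinPool that the submitting task
    // runs in, so submitting the pipeline to our own pool keeps it off the
    // common pool and lets us choose the parallelism.
    static long sumInPool(int poolSize, int n) {
        ForkJoinPool pool = new ForkJoinPool(poolSize);
        try {
            return pool.submit(
                () -> LongStream.rangeClosed(1, n).parallel().sum()
            ).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```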
Conclusions
Performance mostly doesn’t matter
But if you must…
• sequential streams normally beat iterative solutions
• parallel streams can utilize all cores, providing
- the data is efficiently splittable
- the intermediate operations are sufficiently expensive and are CPU-bound
- there isn’t contention for the processors
#java8fairies
Resources
http://gee.cs.oswego.edu/dl/html/StreamParallelGuidance.html
http://shipilev.net/talks/devoxx-Nov2013-benchmarking.pdf
http://openjdk.java.net/projects/code-tools/jmh/
#java8fairies
More Related Content

PDF
Shooting the Rapids
PDF
Shooting the Rapids: Getting the Best from Java 8 Streams
PDF
Let's Get to the Rapids
PDF
Parallel-Ready Java Code: Managing Mutation in an Imperative Language
PDF
Journey's End – Collection and Reduction in the Stream API
PPTX
Writing Hadoop Jobs in Scala using Scalding
PDF
Hadoop Summit Europe 2014: Apache Storm Architecture
KEY
Scalding: Twitter's Scala DSL for Hadoop/Cascading
Shooting the Rapids
Shooting the Rapids: Getting the Best from Java 8 Streams
Let's Get to the Rapids
Parallel-Ready Java Code: Managing Mutation in an Imperative Language
Journey's End – Collection and Reduction in the Stream API
Writing Hadoop Jobs in Scala using Scalding
Hadoop Summit Europe 2014: Apache Storm Architecture
Scalding: Twitter's Scala DSL for Hadoop/Cascading

What's hot (20)

PPTX
Hot Streaming Java
PDF
Kotlin @ Coupang Backed - JetBrains Day seoul 2018
PDF
Weaving Dataflows with Silk - ScalaMatsuri 2014, Tokyo
PDF
Effective testing for spark programs Strata NY 2015
PDF
HBase RowKey design for Akka Persistence
PDF
Graphite
PDF
Kotlin @ Coupang Backend 2017
PDF
Apache Spark Structured Streaming for Machine Learning - StrataConf 2016
PDF
Introduction of failsafe
PDF
2014 akka-streams-tokyo-japanese
PPTX
Beyond parallelize and collect - Spark Summit East 2016
PPTX
Algebird : Abstract Algebra for big data analytics. Devoxx 2014
PDF
Kotlin Receiver Types 介紹
PDF
Reactive Streams / Akka Streams - GeeCON Prague 2014
PDF
PDF
Ge aviation spark application experience porting analytics into py spark ml p...
PDF
ITSubbotik - как скрестить ежа с ужом или подводные камни внедрения функциона...
PDF
Storm - As deep into real-time data processing as you can get in 30 minutes.
PDF
Fresh from the Oven (04.2015): Experimental Akka Typed and Akka Streams
PDF
Improving PySpark Performance - Spark Beyond the JVM @ PyData DC 2016
Hot Streaming Java
Kotlin @ Coupang Backed - JetBrains Day seoul 2018
Weaving Dataflows with Silk - ScalaMatsuri 2014, Tokyo
Effective testing for spark programs Strata NY 2015
HBase RowKey design for Akka Persistence
Graphite
Kotlin @ Coupang Backend 2017
Apache Spark Structured Streaming for Machine Learning - StrataConf 2016
Introduction of failsafe
2014 akka-streams-tokyo-japanese
Beyond parallelize and collect - Spark Summit East 2016
Algebird : Abstract Algebra for big data analytics. Devoxx 2014
Kotlin Receiver Types 介紹
Reactive Streams / Akka Streams - GeeCON Prague 2014
Ge aviation spark application experience porting analytics into py spark ml p...
ITSubbotik - как скрестить ежа с ужом или подводные камни внедрения функциона...
Storm - As deep into real-time data processing as you can get in 30 minutes.
Fresh from the Oven (04.2015): Experimental Akka Typed and Akka Streams
Improving PySpark Performance - Spark Beyond the JVM @ PyData DC 2016
Ad

Viewers also liked (20)

PPTX
Model multiplication of decimals
PDF
Présentation Kivy (et projets associés) à Pycon-fr 2013
PDF
Metasepi team meeting #6: "Snatch-driven development"
PDF
Managing gang of chaotic developers is complex at Agile Tour Riga 2012
PDF
Agile Management 2013 - Nie tylko it
PDF
A Scalable I/O Manager for GHC
PDF
Monadologie
PDF
A Hitchhiker's Guide to the Inter-Cloud
PDF
Federated CDNs: What every service provider should know
PDF
Be careful when entering a casino (Agile by Example 2012)
PPT
Sneaking Scala through the Back Door
PDF
Vert.x - JDD 2013 (English)
PDF
Efficient Immutable Data Structures (Okasaki for Dummies)
PDF
Protecting Java EE Web Apps with Secure HTTP Headers
PDF
Software Dendrology by Brandon Bloom
ODP
erlang at hover.in , Devcamp Blr 09
PPT
O'Reilly ETech Conference: Laszlo RIA
PDF
Wakanda: a new end-to-end JavaScript platform - JSConf Berlin 2009
PDF
Laszlo PyCon 2005
PDF
High-Performance Haskell
Model multiplication of decimals
Présentation Kivy (et projets associés) à Pycon-fr 2013
Metasepi team meeting #6: "Snatch-driven development"
Managing gang of chaotic developers is complex at Agile Tour Riga 2012
Agile Management 2013 - Nie tylko it
A Scalable I/O Manager for GHC
Monadologie
A Hitchhiker's Guide to the Inter-Cloud
Federated CDNs: What every service provider should know
Be careful when entering a casino (Agile by Example 2012)
Sneaking Scala through the Back Door
Vert.x - JDD 2013 (English)
Efficient Immutable Data Structures (Okasaki for Dummies)
Protecting Java EE Web Apps with Secure HTTP Headers
Software Dendrology by Brandon Bloom
erlang at hover.in , Devcamp Blr 09
O'Reilly ETech Conference: Laszlo RIA
Wakanda: a new end-to-end JavaScript platform - JSConf Berlin 2009
Laszlo PyCon 2005
High-Performance Haskell
Ad

Similar to Good and Wicked Fairies, and the Tragedy of the Commons: Understanding the Performance of Java 8 Streams (20)

PDF
Reactive Web-Applications @ LambdaDays
PDF
Voxxed Days Vienna - The Why and How of Reactive Web-Applications on the JVM
PPT
Hands on Training – Graph Database with Neo4j
PDF
RxSwift to Combine
PDF
RxSwift to Combine
PDF
Apache Spark for Library Developers with William Benton and Erik Erlandson
PPTX
Intro to Akka Streams
PDF
A Few of My Favorite (Python) Things
PPTX
The openCypher Project - An Open Graph Query Language
PDF
Java 8 - Return of the Java
PPTX
PDF
Scala Collections : Java 8 on Steroids
PDF
Community-driven Language Design at TC39 on the JavaScript Pipeline Operator ...
PDF
Free The Enterprise With Ruby & Master Your Own Domain
PDF
Reactive Stream Processing with Akka Streams
PDF
Kotlin Introduction with Android applications
PDF
CM NCCU Class2
PPTX
Graal in GraalVM - A New JIT Compiler
PDF
Tomasz Nurkiewicz - Programowanie reaktywne: czego się nauczyłem
KEY
Spl Not A Bridge Too Far phpNW09
Reactive Web-Applications @ LambdaDays
Voxxed Days Vienna - The Why and How of Reactive Web-Applications on the JVM
Hands on Training – Graph Database with Neo4j
RxSwift to Combine
RxSwift to Combine
Apache Spark for Library Developers with William Benton and Erik Erlandson
Intro to Akka Streams
A Few of My Favorite (Python) Things
The openCypher Project - An Open Graph Query Language
Java 8 - Return of the Java
Scala Collections : Java 8 on Steroids
Community-driven Language Design at TC39 on the JavaScript Pipeline Operator ...
Free The Enterprise With Ruby & Master Your Own Domain
Reactive Stream Processing with Akka Streams
Kotlin Introduction with Android applications
CM NCCU Class2
Graal in GraalVM - A New JIT Compiler
Tomasz Nurkiewicz - Programowanie reaktywne: czego się nauczyłem
Spl Not A Bridge Too Far phpNW09

Recently uploaded (20)

PDF
How to Choose the Right IT Partner for Your Business in Malaysia
PPTX
Transform Your Business with a Software ERP System
PDF
Internet Downloader Manager (IDM) Crack 6.42 Build 41
PDF
Design an Analysis of Algorithms I-SECS-1021-03
PPTX
VVF-Customer-Presentation2025-Ver1.9.pptx
PPTX
Materi_Pemrograman_Komputer-Looping.pptx
PDF
top salesforce developer skills in 2025.pdf
PPTX
Online Work Permit System for Fast Permit Processing
PPTX
Introduction to Artificial Intelligence
PDF
Audit Checklist Design Aligning with ISO, IATF, and Industry Standards — Omne...
PPTX
L1 - Introduction to python Backend.pptx
PPTX
Essential Infomation Tech presentation.pptx
PPTX
Agentic AI : A Practical Guide. Undersating, Implementing and Scaling Autono...
PDF
Design an Analysis of Algorithms II-SECS-1021-03
PDF
How Creative Agencies Leverage Project Management Software.pdf
PDF
medical staffing services at VALiNTRY
PDF
Flood Susceptibility Mapping Using Image-Based 2D-CNN Deep Learnin. Overview ...
PDF
Understanding Forklifts - TECH EHS Solution
PPT
JAVA ppt tutorial basics to learn java programming
PPTX
Materi-Enum-and-Record-Data-Type (1).pptx
How to Choose the Right IT Partner for Your Business in Malaysia
Transform Your Business with a Software ERP System
Internet Downloader Manager (IDM) Crack 6.42 Build 41
Design an Analysis of Algorithms I-SECS-1021-03
VVF-Customer-Presentation2025-Ver1.9.pptx
Materi_Pemrograman_Komputer-Looping.pptx
top salesforce developer skills in 2025.pdf
Online Work Permit System for Fast Permit Processing
Introduction to Artificial Intelligence
Audit Checklist Design Aligning with ISO, IATF, and Industry Standards — Omne...
L1 - Introduction to python Backend.pptx
Essential Infomation Tech presentation.pptx
Agentic AI : A Practical Guide. Undersating, Implementing and Scaling Autono...
Design an Analysis of Algorithms II-SECS-1021-03
How Creative Agencies Leverage Project Management Software.pdf
medical staffing services at VALiNTRY
Flood Susceptibility Mapping Using Image-Based 2D-CNN Deep Learnin. Overview ...
Understanding Forklifts - TECH EHS Solution
JAVA ppt tutorial basics to learn java programming
Materi-Enum-and-Record-Data-Type (1).pptx

Good and Wicked Fairies, and the Tragedy of the Commons: Understanding the Performance of Java 8 Streams

  • 1. Good and Wicked Fairies The Tragedy of the Commons Understanding the Performance of Java 8 Streams Kirk Pepperdine @kcpeppe MauriceNaftalin @mauricenaftalin #java8fairies
  • 2. About Kirk • Specialises in performance tuning • speaks frequently about performance • author of performance tuning workshop • Co-founder jClarity • performance diagnositic tooling • Java Champion (since 2006)
  • 3. About Kirk • Specialises in performance tuning • speaks frequently about performance • author of performance tuning workshop • Co-founder jClarity • performance diagnositic tooling • Java Champion (since 2006)
  • 5. About Maurice Developer, designer, architect, teacher, learner, writer
  • 6. About Maurice Co-author Developer, designer, architect, teacher, learner, writer
  • 7. About Maurice Co-author Author Developer, designer, architect, teacher, learner, writer
  • 12–19. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck • Fork/Join parallelism in the real world #java8fairies
  • 20. Case Study: grep -b grep -b: “The offset in bytes of a matched pattern is displayed in front of the matched line.”
  • 21. Case Study: grep -b The Moving Finger writes; and, having writ, Moves on: nor all thy Piety nor Wit Shall bring it back to cancel half a Line Nor all thy Tears wash out a Word of it. rubai51.txt grep -b: “The offset in bytes of a matched pattern is displayed in front of the matched line.”
  • 22. Case Study: grep -b The Moving Finger writes; and, having writ, Moves on: nor all thy Piety nor Wit Shall bring it back to cancel half a Line Nor all thy Tears wash out a Word of it. rubai51.txt grep -b: “The offset in bytes of a matched pattern is displayed in front of the matched line.” $ grep -b 'W.*t' rubai51.txt 44:Moves on: nor all thy Piety nor Wit 122:Nor all thy Tears wash out a Word of it.
  • 24. Obvious iterative implementation - process file line by line - maintain byte displacement in an accumulator variable Case Study: grep -b
  • 25. Obvious iterative implementation - process file line by line - maintain byte displacement in an accumulator variable Objective here is to implement it in stream code - no accumulators allowed! Case Study: grep -b
  • 26. Streams – Why? • Intention: replace loops for aggregate operations
  • 27. Streams – Why? • Intention: replace loops for aggregate operations — instead of writing this: List<Person> people = …; Set<City> shortCities = new HashSet<>(); for (Person p : people) { City c = p.getCity(); if (c.getName().length() < 4) { shortCities.add(c); } }
  • 28. Streams – Why? • Intention: replace loops for aggregate operations • more concise, more readable, composable operations, parallelizable — instead of the loop above, we’re going to write this: Set<City> shortCities = people.stream().map(Person::getCity).filter(c -> c.getName().length() < 4).collect(toSet());
  • 30. Streams – Why? • Intention: replace loops for aggregate operations • more concise, more readable, composable operations, parallelizable — and with one change we’re going to write this: Set<City> shortCities = people.parallelStream().map(Person::getCity).filter(c -> c.getName().length() < 4).collect(toSet());
  • 31–34. Visualising Sequential Streams (diagrams) — Source → Intermediate Operations (Map, Filter) → Terminal Operation (Reduction): values x0…x3 move one at a time through the pipeline, each accepted (✔) or rejected (❌) by the filter. “Values in Motion”
  • 35–45. Journey’s End, JavaOne, October 2015 — Simple Collector – toSet(): people.stream().collect(Collectors.toSet()) — a Collector<Person,?,Set<Person>> turns a Stream<Person> into a Set<Person>; the elements (bill, jon, amy) are accumulated one at a time into the result { jon, bill, amy }.
  • 46–47. Classical Reduction (diagram) — a balanced tree of pairwise + operations over the elements 0…7, with the identity 0 at the leaves: intStream.reduce(0, (a,b) -> a+b)
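The tree-shaped reduction on the slide can be run directly; a minimal sketch (the class name `ReduceDemo` is ours, not from the deck). The associativity of `(a, b) -> a + b` is what allows the pairwise, parallel-friendly evaluation order:

```java
import java.util.stream.IntStream;

public class ReduceDemo {
    public static void main(String[] args) {
        // Classical reduction: combine elements pairwise starting from the
        // identity value 0. Because + is associative, the runtime is free to
        // evaluate this as the balanced tree shown on the slide.
        int sum = IntStream.rangeClosed(0, 7).reduce(0, (a, b) -> a + b);
        System.out.println(sum); // prints 28
    }
}
```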
  • 48–49. Mutable Reduction (diagram) — Supplier ()->[] creates a fresh container for each sub-task; a (accumulator) folds the elements e0…e7 into the containers; c (combiner) merges the partial containers.
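The supplier/accumulator/combiner triple on the slide maps directly onto the three-argument `Stream.collect` overload; a small sketch (class name ours):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

public class MutableReductionDemo {
    public static void main(String[] args) {
        // supplier:    creates a fresh container for each sub-task  ()->[]
        // accumulator: folds one element into a container           a
        // combiner:    merges two containers (parallel runs only)   c
        List<String> result = Stream.of("a", "b", "c", "d")
                .parallel()
                .collect(ArrayList::new,      // supplier
                         ArrayList::add,      // accumulator
                         ArrayList::addAll);  // combiner
        // collect preserves encounter order even on a parallel stream
        System.out.println(result); // prints [a, b, c, d]
    }
}
```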
  • 50. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck • Fork/Join parallelism in the real world #java8fairies
  • 51–58. grep -b: Collector combiner (diagrams) — each partial result holds its lines with offsets relative to its own chunk (left: [0:‘The …’, 44:‘Moves …’], 80 bytes in total; right: [0:‘Shall …’, 42:‘Nor …’]); the combiner shifts the right-hand offsets by the left chunk’s byte count (80), yielding the absolute offsets 80:‘Shall …’ and 122:‘Nor …’.
  • 59. grep -b: Collector accumulator (diagram) — the Supplier creates an empty container; the accumulator appends each incoming line at the current relative offset and advances the byte count: [0:‘The moving … writ,’] (44 bytes so far) + “Moves on: … Wit” → [0:‘The moving … writ,’, 44:‘Moves on: … Wit’].
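The accumulator/combiner scheme just described can be sketched as a `Collector`. This is our own hypothetical reconstruction of the idea (the talk’s actual source isn’t in these slides): each partial result carries its matches at chunk-relative offsets plus the chunk’s byte count, so the combiner can shift the right-hand offsets without any shared accumulator variable.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.BinaryOperator;
import java.util.regex.Pattern;
import java.util.stream.Collector;
import java.util.stream.Stream;

public class GrepB {
    // Partial result: matches at chunk-relative offsets + bytes seen so far.
    static class Chunk {
        final List<String> matches = new ArrayList<>(); // "offset:line"
        long bytes = 0;
    }

    static Collector<String, Chunk, List<String>> grepB(Pattern pattern) {
        BiConsumer<Chunk, String> accumulator = (c, line) -> {
            if (pattern.matcher(line).find()) {
                c.matches.add(c.bytes + ":" + line);
            }
            c.bytes += line.getBytes(StandardCharsets.UTF_8).length + 1; // +1 for '\n'
        };
        BinaryOperator<Chunk> combiner = (left, right) -> {
            // Shift right-hand offsets by the left chunk's total byte count.
            for (String m : right.matches) {
                int colon = m.indexOf(':');
                long shifted = left.bytes + Long.parseLong(m.substring(0, colon));
                left.matches.add(shifted + m.substring(colon));
            }
            left.bytes += right.bytes;
            return left;
        };
        return Collector.of(Chunk::new, accumulator, combiner, c -> c.matches);
    }

    public static void main(String[] args) {
        List<String> out = Stream.of(
                "The Moving Finger writes; and, having writ,",
                "Moves on: nor all thy Piety nor Wit",
                "Shall bring it back to cancel half a Line",
                "Nor all thy Tears wash out a Word of it.")
            .collect(grepB(Pattern.compile("W.*t")));
        out.forEach(System.out::println);
        // prints 44:Moves on: nor all thy Piety nor Wit
        //        122:Nor all thy Tears wash out a Word of it.
    }
}
```

Run on the rubai51.txt lines it reproduces the `grep -b 'W.*t'` output from the case-study slide.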
  • 60. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck • Fork/Join parallelism in the real world #java8fairies
  • 61. Why Shouldn’t We Optimize Code? Because we don’t have a problem?
  • 62. Why Shouldn’t We Optimize Code? Because we don’t have a problem - No performance target!
  • 64. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process Why Shouldn’t We Optimize Code?
  • 65. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process Why Shouldn’t We Optimize Code? Demo…
  • 66. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Why Shouldn’t We Optimize Code?
  • 67. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Else there’s a problem in our process, but not in the code Why Shouldn’t We Optimize Code?
  • 68. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Else there’s a problem in our process, but not in the code Why Shouldn’t We Optimize Code? Demo…
  • 69. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Else there’s a problem in our process, but not in the code - GC is using all the cycles! Why Shouldn’t We Optimize Code?
  • 70. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Else there’s a problem in our process, but not in the code - GC is using all the cycles! Now we can consider the code Else there’s a problem in the code… somewhere - now we can go and profile it
  • 71. Because we don’t have a problem - No performance target! Else there is a problem, but not in our process - The OS is struggling! Else there’s a problem in our process, but not in the code - GC is using all the cycles! Now we can consider the code Else there’s a problem in the code… somewhere - now we can go and profile it Demo…
  • 72. So, streaming IO is slow
  • 73. So, streaming IO is slow If only there was a way of getting all that data into memory…
  • 74. So, streaming IO is slow If only there was a way of getting all that data into memory… Demo…
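The “get all that data into memory” idea points at memory-mapped files. A minimal sketch, assuming the file fits within a single mapping (class name and temp-file setup are ours): once mapped, a `MappedByteBuffer` lets the code index bytes directly instead of streaming them.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MapDemo {
    public static void main(String[] args) throws IOException {
        // Set up a small file to map (stand-in for the log being grepped).
        Path path = Files.createTempFile("rubai51", ".txt");
        Files.write(path, "The Moving Finger writes; and, having writ,\n"
                .getBytes(StandardCharsets.UTF_8));
        try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
            // Map the whole file into memory; no per-line read() calls needed.
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            byte[] bytes = new byte[buffer.remaining()];
            buffer.get(bytes);
            System.out.print(new String(bytes, StandardCharsets.UTF_8));
        } finally {
            Files.deleteIfExists(path);
        }
    }
}
```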
  • 75. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck • Fork/Join parallelism in the real world #java8fairies
  • 76. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck OR – have a bright idea! • Fork/Join parallelism in the real world #java8fairies
  • 77. Intel Xeon E5 2600 10-core
  • 78. Parallelism – Why? The Free Lunch Is Over http://www.gotw.ca/publications/concurrency-ddj.htm
  • 82–87. What’s Happened? Physical limitations of the technology: • signal leakage • heat dissipation • speed of light! – 30cm = 1 light-nanosecond. We’re not going to get faster cores, we’re going to get more cores!
  • 92. A Parallel Solution for grep -b • Parallel streams need splittable sources • Streaming I/O makes you subject to Amdahl’s Law:
  • 94. Blessing – and Curse – on the Transistor
  • 95. Stream Sources for Parallel Processing Implemented by a Spliterator
  • 104–109. LineSpliterator (diagrams) — the file (“The moving Finger … writ,\n Moves … Wit\n Shall … Line\n Nor all thy … it\n”) is mapped into a MappedByteBuffer and covered by a spliterator; trySplit probes the midpoint, scans forward to the next \n, and hands off everything up to it as the new spliterator’s coverage, so both halves contain only whole lines. Demo…
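The split-at-the-next-newline idea can be sketched as a `Spliterator`. This is our own simplified reconstruction over a `byte[]` (the talk’s real LineSpliterator works over a MappedByteBuffer):

```java
import java.util.Spliterator;
import java.util.function.Consumer;
import java.util.stream.StreamSupport;

public class LineSpliterator implements Spliterator<String> {
    private final byte[] data;
    private int lo, hi; // covers data[lo..hi); hi sits just past a '\n' or at end

    public LineSpliterator(byte[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override public boolean tryAdvance(Consumer<? super String> action) {
        if (lo >= hi) return false;
        int end = lo;
        while (end < hi && data[end] != '\n') end++;   // find end of line
        action.accept(new String(data, lo, end - lo));
        lo = end + 1;                                   // skip the newline
        return true;
    }

    @Override public Spliterator<String> trySplit() {
        int mid = lo + (hi - lo) / 2;
        while (mid < hi && data[mid] != '\n') mid++;    // O(line length) scan
        if (mid >= hi - 1) return null;                 // nothing worth splitting
        Spliterator<String> prefix = new LineSpliterator(data, lo, mid + 1);
        lo = mid + 1;                                   // this keeps the suffix
        return prefix;                                  // both halves: whole lines
    }

    @Override public long estimateSize() { return hi - lo; }
    @Override public int characteristics() { return ORDERED | NONNULL | IMMUTABLE; }

    public static void main(String[] args) {
        byte[] text = "one\ntwo\nthree\nfour\n".getBytes();
        long lines = StreamSupport
                .stream(new LineSpliterator(text, 0, text.length), true) // parallel
                .count();
        System.out.println(lines); // prints 4
    }
}
```

Because the probe only scans to the next newline, splitting is cheap relative to re-reading the data, which is what makes the source efficiently splittable.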
  • 110. Parallelizing grep -b • Splitting action of LineSpliterator is O(log n) • Collector no longer needs to compute index • Result (relatively independent of data size): - sequential stream ~2x as fast as iterative solution - parallel stream >2.5x as fast as sequential stream - on 4 hardware threads
  • 111. Parallelizing Streams Parallel-unfriendly intermediate operations: stateful ones, which need to store some or all of the stream data in memory – e.g. sorted(); and those requiring ordering – e.g. limit()
  • 112. Collectors Cost Extra! Depends on the performance of accumulator and combiner functions • toList(), toSet(), toCollection() – performance normally dominated by accumulator • but allow for the overhead of managing multithread access to non- threadsafe containers for the combine operation • toMap(), toConcurrentMap() – map merging is slow. Resizing maps, especially concurrent maps, is very expensive. Whenever possible, presize all data structures, maps in particular.
  • 113. Agenda • Define the problem • Implement a solution • Analyse performance – find the bottleneck • Fork/Join parallelism in the real world #java8fairies
  • 114. Simulated Server Environment threadPool.execute(() -> { try { double value = logEntries.parallelStream() .map(applicationStoppedTimePattern::matcher) .filter(Matcher::find) .map( matcher -> matcher.group(2)) .mapToDouble(Double::parseDouble) .summaryStatistics().getSum(); } catch (Exception ex) {} });
  • 115. How Does It Perform? • Total run time: 261.7 seconds • Max: 39.2 secs, Min: 9.2 secs, Median: 22.0 secs
  • 116. Tragedy of the Commons Garrett Hardin, ecologist (1968): Imagine the grazing of animals on a common ground. Each flock owner gains if they add to their own flock. But every animal added to the total degrades the commons a small amount.
  • 117. Tragedy of the Commons
  • 118–120. Tragedy of the Commons You have a finite amount of hardware – it might be in your best interest to grab it all – but if everyone behaves the same way… With many parallelStream() operations running concurrently, performance is limited by the size of the common thread pool and the number of cores you have. Be a good neighbor
  • 121. Configuring Common Pool Size of common ForkJoinPool is • Runtime.getRuntime().availableProcessors() - 1 -Djava.util.concurrent.ForkJoinPool.common.parallelism=N -Djava.util.concurrent.ForkJoinPool.common.threadFactory -Djava.util.concurrent.ForkJoinPool.common.exceptionHandler
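The default sizing can be checked from code; a small sketch (class name ours):

```java
import java.util.concurrent.ForkJoinPool;

public class PoolSize {
    public static void main(String[] args) {
        // By default the common pool's parallelism is availableProcessors() - 1
        // (the submitting thread acts as the extra worker), unless overridden via
        // -Djava.util.concurrent.ForkJoinPool.common.parallelism=N
        System.out.println("common pool parallelism: "
                + ForkJoinPool.commonPool().getParallelism());
        System.out.println("available processors:    "
                + Runtime.getRuntime().availableProcessors());
    }
}
```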
  • 122. Fork-Join Support for Fork-Join added in Java 7 • difficult coding idiom to master Used internally by parallel streams • uses a spliterator to segment the stream • each segment is processed by a ForkJoinWorkerThread How fork-join works and performs matters for latency
  • 123. ForkJoinPool invoke ForkJoinPool.invoke(ForkJoinTask) uses the submitting thread as a worker • If 100 threads all call invoke(), we would have 100 + (pool size) ForkJoinWorkerThreads exhausting the limiting resource, e.g. CPUs, IO, etc.
  • 124. ForkJoinPool submit/get ForkJoinPool.submit(Callable).get() suspends the submitting thread • If 100 threads all call submit(), the work queue can become very long, thus adding latency
  • 125. Fork-Join Performance Fork Join comes with significant overhead • each chunk of work must be large enough to amortize the overhead
  • 126. C/P/N/Q Performance Model C - number of submitters P - number of CPUs N - number of elements Q - cost of the operation
  • 127. When to go Parallel The workload of the intermediate operations must be great enough to outweigh the overheads (~100µs): – initializing the fork/join framework – splitting – concurrent collection. Often quoted as N x Q, where N = size of data set (typically > 10,000) and Q = processing cost per element
  • 128. Kernel Times CPU will not be the limiting factor when • CPU is not saturated • kernel times exceed 10% of user time More threads will decrease performance • predicted by Little’s Law
  • 129. Common Thread Pool Fork-Join by default uses a common thread pool • default number of worker threads == number of logical cores - 1 • Always contains at least one thread Performance is tied to whichever you run out of first • availability of the constraining resource • number of ForkJoinWorkerThreads/hardware threads
  • 130. Everyone is working together to get it done! All Hands on Deck!!!
  • 131. Little’s Law Fork-Join is a work queue • work queue behavior is typically modeled using Little’s Law Number of tasks in a system equals the arrival rate times the amount of time it takes to clear an item Task is submitted every 500ms, or 2 per second Number of tasks = 2/sec * 2.8 seconds = 5.6 tasks
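The slide’s arithmetic, written out as code (variable names ours):

```java
public class LittlesLaw {
    public static void main(String[] args) {
        // Little's Law: L = lambda * W
        // tasks in system = arrival rate * time each task spends in the system
        double arrivalRatePerSec = 2.0;  // one task submitted every 500 ms
        double timeInSystemSec = 2.8;    // measured queue + service time
        double tasksInSystem = arrivalRatePerSec * timeInSystemSec;
        System.out.println(tasksInSystem); // prints 5.6, as on the slide
    }
}
```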
  • 132. Components of Latency Latency is time from stimulus to result • internally, latency consists of active and dead time Reducing dead time assumes you • can find it and are able to fill in with useful work
  • 133. From Previous Example if there is available hardware capacity then make the pool bigger else add capacity or tune to reduce strength of the dependency
  • 134. ForkJoinPool Observability • In an application where you have many parallel stream operations all running concurrently, performance will be affected by the size of the common thread pool • too small can starve threads of needed resources • too big can cause threads to thrash on contended resources ForkJoinPool comes with no visibility • no metrics to help us tune • instrument ForkJoinTask.invoke() • gather measures that can be fed into Little’s Law
  • 135. • Collect • service times • time submitted to time returned • inter-arrival times Instrumenting ForkJoinPool public final V invoke() { ForkJoinPool.common.getMonitor().submitTask(this); int s; if ((s = doInvoke() & DONE_MASK) != NORMAL) reportException(s); ForkJoinPool.common.getMonitor().retireTask(this); return getRawResult(); }
  • 136. Performance Submit log parsing to our own ForkJoinPool new ForkJoinPool(16).submit(() -> ……… ).get() new ForkJoinPool(8).submit(() -> ……… ).get() new ForkJoinPool(4).submit(() -> ……… ).get()
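Filled in with a concrete (toy) workload, the submit-to-your-own-pool pattern looks like this — a sketch, not the talk’s log-parsing code. A parallel stream started from inside a ForkJoinPool worker runs its tasks in that pool rather than the common pool, so one heavy workload can’t starve every other parallelStream() in the JVM:

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class OwnPool {
    public static void main(String[] args)
            throws ExecutionException, InterruptedException {
        ForkJoinPool pool = new ForkJoinPool(4); // private pool, 4 workers
        try {
            // The parallel stream executes on `pool`, not the common pool.
            long sum = pool.submit(() ->
                    IntStream.rangeClosed(1, 100).parallel().asLongStream().sum()
            ).get();
            System.out.println(sum); // prints 5050
        } finally {
            pool.shutdown();
        }
    }
}
```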
  • 139. Performance mostly doesn’t matter But if you must… • sequential streams normally beat iterative solutions • parallel streams can utilize all cores, providing - the data is efficiently splittable - the intermediate operations are sufficiently expensive and are CPU-bound - there isn’t contention for the processors Conclusions #java8fairies