Java Micro-Benchmarking
Constantine Nosovsky
1
Agenda
Benchmark definition, types, common problems
Tools needed to measure performance
Code warm-up, what happens before the steady-state
Using JMH
Side effects that can affect performance
JVM optimizations (Good or Evil?)
A word about concurrency
Full example, “human factor” included
A fleeting glimpse at the JMH details
2 of 50
Benchmark definition, types, common
problems
3 of 50
What is a “Benchmark”?
A benchmark is a program for performance measurement
Requirements:
• Measure the relevant dimensions: throughput and latency
• Avoid significant overhead
• Test what is to be tested
• Perform a set of executions and provide stable, reproducible results
• Be easy to run
4 of 50
Benchmark types
By scale
• Micro-benchmark (component level)
• Macro-benchmark (system level)
By nature
• Synthetic benchmark (emulate component load)
• Application benchmark (run real-world application)
5 of 50
We’ll talk about
Synthetic micro-benchmark
• Mimic component workload separately from the application
• Measure performance of a small isolated piece of code
The main concern
• The smaller the component under test, the stronger the impact of
• Benchmark infrastructure overhead
• JVM internal processes
• OS and Hardware internals
• … and the phases of the Moon
• Are we sure we are not really testing one of those instead?
6 of 50
When a micro-benchmark is needed
Most of the time it is not needed at all
Does algorithm A work faster than algorithm B?
(an analytical estimate often answers this without measuring)
Does this tiny modification make any difference?
(from the Java, JVM, native code or hardware point of view)
7 of 50
Tools needed to measure performance
8 of 50
You had one job…
9 of 50
final int COUNT = 100;
long start = System.currentTimeMillis();
for (int i = 0; i < COUNT; i++) {
    // doStuff();
}
long duration = System.currentTimeMillis() - start;
long avg = duration / COUNT; // integer division: sub-millisecond results truncate to 0
System.out.println("Average execution time is " + avg + " ms");
Pitfall #0
Using a profiler to measure performance of small methods
(adds significant overhead, measures execution “as is”)
The “you had one job” approach is enough in real life
(but not for micro-benchmarks, as we have just seen)
Annotations and reflective benchmark invocations
(otherwise you end up measuring java.lang.reflect as well)
10 of 50
Micro-benchmark frameworks
JMH – accounts for a lot of internal VM processes
and executes benchmarks with minimal infrastructure (Oracle)
Caliper – measures repetitive code, works on Android,
can post results online (Google)
Japex – reduces infrastructure code, generates nice
HTML reports with JFreeChart plots
JUnitPerf – measures performance of existing JUnit tests
11 of 50
Java time interval measurement
System.currentTimeMillis()
• Value in milliseconds, but granularity depends on the OS
• Represents a “wall-clock” time (since the start of Epoch)
System.nanoTime()
• Value in nanoseconds, since some time offset
• Accuracy is no worse than that of System.currentTimeMillis()
ThreadMXBean.getCurrentThreadCpuTime()
• The actual CPU time spent by the thread (nanoseconds)
• Might be unsupported by your VM
• Might be expensive
• Relevant for a single thread only
12 of 50
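A minimal sketch comparing the three clocks side by side; the MXBean calls are standard java.lang.management API, while the workload and the harness around them are illustrative only:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class Clocks {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        long wallStart = System.currentTimeMillis();   // wall-clock, OS-dependent granularity
        long monoStart = System.nanoTime();            // monotonic, arbitrary origin
        long cpuStart = threads.isCurrentThreadCpuTimeSupported()
                ? threads.getCurrentThreadCpuTime()    // CPU time of this thread only
                : -1;                                  // might be unsupported by the VM

        double sum = 0;                                // some workload to time
        for (int i = 1; i <= 1_000_000; i++) sum += Math.log(i);

        System.out.println("wall: " + (System.currentTimeMillis() - wallStart) + " ms");
        System.out.println("mono: " + (System.nanoTime() - monoStart) + " ns");
        if (cpuStart >= 0)
            System.out.println("cpu:  " + (threads.getCurrentThreadCpuTime() - cpuStart) + " ns");
        System.out.println(sum);                       // consume the result (see dead code elimination below)
    }
}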
Code warm-up, what happens before
the steady-state
13 of 50
Code warm-up, class loading
A single warm-up iteration is NOT enough for class loading
(not every branch is taken on the first iteration, so some classes load later)
Sometimes classes are unloaded (it would be a shame if
that messed your results up with a huge peak)
Get help between iterations from
• ClassLoadingMXBean.getTotalLoadedClassCount()
• ClassLoadingMXBean.getUnloadedClassCount()
• -verbose:class
14 of 50
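A sketch of such a check between iterations; runIteration() is a hypothetical stand-in for one benchmark iteration, the MXBean calls are the standard ones listed above:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

public class ClassLoadingCheck {
    public static void main(String[] args) {
        ClassLoadingMXBean cl = ManagementFactory.getClassLoadingMXBean();
        for (int iter = 0; iter < 10; iter++) {
            long loaded = cl.getTotalLoadedClassCount();
            long unloaded = cl.getUnloadedClassCount();
            runIteration();                            // hypothetical benchmark iteration
            if (cl.getTotalLoadedClassCount() != loaded
                    || cl.getUnloadedClassCount() != unloaded) {
                System.err.println("Iteration " + iter + ": class loading activity, discard it");
            }
        }
    }

    private static void runIteration() { /* code under test */ }
}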
Code warm-up, compilation
Classes are loaded, verified and then compiled
Oracle HotSpot and Azul Zing first run the application in the interpreter
A hot method is compiled after
~10k (server) or ~1.5k (client) invocations
Long methods with loops are likely to be compiled earlier (via OSR, see below)
Check CompilationMXBean.getTotalCompilationTime
Enable compilation logging with
• -XX:+UnlockDiagnosticVMOptions
• -XX:+PrintCompilation
• -XX:+LogCompilation -XX:LogFile=<filename>
15 of 50
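The compilation counter can be polled the same way between iterations; a sketch, with warmUpIteration() as a hypothetical stand-in for your own code:

import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class CompilationCheck {
    public static void main(String[] args) {
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (!jit.isCompilationTimeMonitoringSupported()) return;
        long before = jit.getTotalCompilationTime();
        warmUpIteration();                             // hypothetical warm-up iteration
        long delta = jit.getTotalCompilationTime() - before;
        // while this delta keeps growing between iterations, the steady-state is not reached
        System.out.println("JIT spent " + delta + " ms compiling during this iteration");
    }

    private static void warmUpIteration() { /* code under test */ }
}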
Code warm-up, OSR
Normal compilation and OSR result in similar code
…unless the compiler is unable to optimize a given frame
(e.g. an inner loop is compiled before the outer one)
In the real world normal compilation is more likely to happen, so
it’s better to avoid OSR in your benchmark
• Do a set of small warm-up iterations instead of a single big one
• Do not perform warm-up loops in the steady-state testing method
16 of 50
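A sketch of that warm-up shape, with illustrative names: the hot loop stays outside the measured method, so testedMethod() accumulates plain invocation counts and is compiled the normal way instead of being OSR-compiled in the middle of a loop:

public class WarmUp {
    public static void main(String[] args) {
        for (int iter = 0; iter < 20; iter++) {        // many small warm-up iterations
            for (int i = 0; i < 20_000; i++) {
                testedMethod();                        // invocation counter triggers normal compilation
            }
        }
        // ...measure testedMethod() here, in the steady state, discarding the warm-up results
    }

    private static void testedMethod() { /* code under test */ }
}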
Code warm-up, OSR example
Now forget about array range-check elimination (the OSR’d form below defeats it)
17 of 50
Before:

public static void main(String... args) {
    loop1: if (P1) goto done1
           i = 0;
    loop2: if (P2) goto done2
           A[i++];
           goto loop2;   // OSR goes here
    done2:
           goto loop1;
    done1:
}

After:

void OSR_main() {
    A = // from interpreter
    i = // from interpreter
    loop2: if (P2) {
               if (P1) goto done1
               i = 0;
           } else {
               A[i++];
           }
           goto loop2
    done1:
}
Reaching the steady-state, summary
Always do warm-up to reach steady-state
• Use the same data and the same code
• Discard warm-up results
• Avoid OSR
• Don’t run the benchmark in mixed mode (part interpreted, part compiled)
• Check class loading and compilation
18 of 50
Using JMH
Provides a Maven archetype for a quick project setup
Annotate your methods with @GenerateMicroBenchmark
mvn install will build a ready-to-use runnable jar with your
benchmarks and the needed infrastructure
java -jar target/mb.jar <benchmark regex> [options]
will perform warm-up followed by a set of measurement iterations
and print the results
19 of 50
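A minimal benchmark class in the JMH API of that era, as a sketch (later JMH releases renamed the annotation to @Benchmark, so adjust to your version):

import org.openjdk.jmh.annotations.GenerateMicroBenchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)                      // each benchmark thread gets its own instance
public class LogBenchmark {
    private double x = Math.PI;           // a field, so the value cannot be constant-folded

    @GenerateMicroBenchmark
    public double measureLog() {
        return Math.log(x);               // return the value so JMH consumes it
    }
}

Typical invocation, assuming the jar name from the slide: java -jar target/mb.jar ".*LogBenchmark.*" -wi 5 -i 10 (warm-up and measurement iteration counts; check the runner’s -h output for the options of your version).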
Side effects that can affect
performance
20 of 50
Synchronization puzzle
void testSynchInner() {
    synchronized (this) {
        i++;
    }
}

synchronized void testSynchOuter() {
    i++;
}
21 of 50
Results: 8,244,087 usec vs. 13,383,707 usec
Synchronization puzzle, side effect
Biased Locking: an optimization in the VM that leaves an object
logically locked by a given thread even after the thread has
released the lock (cheap reacquisition)
Does not work on VM start-up
(for the first 4 seconds in HotSpot)
Use -XX:BiasedLockingStartupDelay=0
22 of 50
JVM optimizations (Good or Evil?)
WARNING: some of the following optimizations
will not work (at least for the given examples)
in Java 6 (jdk1.6.0_26); consider using Java 7 (jdk1.7.0_21)
23 of 50
Dead code elimination
The VM eliminates dead branches of code
Even code that does get executed can be removed,
if its result is never used and it has no side effects
Always consume all the results of your benchmarked code
Or you’ll get the “over 9000” performance level
Do not accumulate results or store them in class fields that are
never read either
Use them in a non-obvious logical expression instead
24 of 50
Dead code elimination, example
Measurement: average nanoseconds / operation, less is better
25 of 50
private double n = 10;

public void stub() { }                    // 1.017

public void dead() {
    @SuppressWarnings("unused")
    double r = n * Math.log(n) / 2;       // result unused and side-effect free
}                                         // 1.008: eliminated, as fast as stub()

public void alive() {
    double r = n * Math.log(n) / 2;
    if (r == n && r == 0)                 // never true, but not provably so
        throw new IllegalStateException();
}                                         // 48.514: the real cost
Constant folding
If the compiler sees that the result of a calculation will always be
the same, it stores it in a constant and reuses it
Measurement: average nanoseconds / operation, less is better
26 of 50
private double x = Math.PI;

public void stub() { }                    // 1.014

public double wrong() {
    return Math.log(Math.PI);             // constant argument: folded at compile time
}                                         // 1.695

public double measureRight() {
    return Math.log(x);                   // field load defeats folding
}                                         // 43.435: the real cost of Math.log
Loop unrolling
Is there anything bad?
Measurement: average nanoseconds / operation, less is better
27 of 50
private double[] A = new double[2048];

public double plain() {
    double sum = 0;
    for (int i = 0; i < A.length; i++)
        sum += A[i];
    return sum;
}                                         // 2773.883

public double manualUnroll() {
    double sum = 0;
    for (int i = 0; i < A.length; i += 4)
        sum += A[i] + A[i + 1] + A[i + 2] + A[i + 3];
    return sum;
}                                         // 816.791
Loop unrolling and hoisting
Something bad happens when
the loops of the benchmark infrastructure code are unrolled
and the calculation we are trying to measure
is hoisted out of the loop
For example, a Caliper-style benchmark looks like

private int reps(int reps) {
    int s = 0;
    for (int i = 0; i < reps; i++)
        s += (x + y);
    return s;
}
28 of 50
Loop unrolling and hoisting, example
29 of 50
@GenerateMicroBenchmark
public int measureRight() {
    return (x + y);
}

@GenerateMicroBenchmark
@OperationsPerInvocation(1)
public int measureWrong_1() {
    return reps(1);
}

...

@GenerateMicroBenchmark
@OperationsPerInvocation(N)
public int measureWrong_N() {
    return reps(N);
}
Loop unrolling and hoisting, example
Method          Result
Right            2.104
Wrong_1          2.055
Wrong_10         0.267
Wrong_100        0.033
Wrong_1000       0.057
Wrong_10000      0.045
Wrong_100000     0.043
30 of 50
Measurement: average nanoseconds / operation, less is better
A word about concurrency
Processes and threads fight for resources
(a single-threaded benchmark is a utopia)
31 of 50
Concurrency problems of benchmarks
Benchmark states should be correctly
• Initialized
• Published
• Shared by the intended group of threads
A multi-threaded benchmark iteration should be synchronized
and all threads should start their work at the same time
No need to implement this infrastructure yourself,
just write a correct benchmark using your favorite framework
32 of 50
Full example, “human factor” included
33 of 50
List iteration
Which list implementation is faster for the foreach loop?
ArrayList and LinkedList sequential iteration is linear, O(n)
• ArrayList Iterator.next(): return array[cursor++];
• LinkedList Iterator.next(): return current = current.next;
Let’s check on a list of 1 million Integers
34 of 50
List iteration, foreach vs iterator
35 of 50
public List<Integer> arrayListForeach() {
    for (Integer i : arrayList) {
    }
    return arrayList;
}                                         // 23.659

public Iterator<Integer> arrayListIterator() {
    Iterator<Integer> iterator = arrayList.iterator();
    while (iterator.hasNext()) {
        iterator.next();
    }
    return iterator;
}                                         // 22.445

Measurement: average milliseconds / operation, less is better
List iteration, foreach slower than iterator, why?
The foreach variant assigns each element to a local variable
for (Integer i : arrayList)
The iterator variant does not
iterator.next();
We need to change the iterator variant to
Integer i = iterator.next();
Now it is correct to compare the results, at least according to
the bytecode
36 of 50
List iteration, benchmark
37 of 50
@GenerateMicroBenchmark(BenchmarkType.All)
public List<Integer> arrayListForeach() {
    for (Integer i : arrayList) {
    }
    return arrayList;
}

@GenerateMicroBenchmark(BenchmarkType.All)
public Iterator<Integer> arrayListIterator() {
    Iterator<Integer> iterator = arrayList.iterator();
    while (iterator.hasNext()) {
        Integer i = iterator.next();
    }
    return iterator;
}
List iteration, benchmark, result
List impl    Iteration   Java 6    Java 7
ArrayList    foreach     24.792     5.118
             iterator    24.769     0.140
LinkedList   foreach     15.236     9.485
             iterator    15.255     9.306
38 of 50
Measurement: average milliseconds / operation, less is better
Java 6 ArrayList uses AbstractList.Itr, while
LinkedList has its own iterator, so there are fewer abstractions
(in Java 7 ArrayList has its own optimized iterator)
List iteration, benchmark, result
List impl    Iteration   Java 6    Java 7
ArrayList    foreach     24.792     5.118
             iterator    24.769     0.140
LinkedList   foreach     15.236     9.485
             iterator    15.255     9.306
39 of 50
Measurement: average milliseconds / operation, less is better
WTF?!
List iteration, benchmark, loop-hoisting
40 of 50
ListBenchmark.arrayListIterator()

Iterator<Integer> iterator = arrayList.iterator();
while (iterator.hasNext()) {
    iterator.next();                      // the returned element is never used
}
return iterator;

ArrayList.Itr<E>.next()

if (modCount != expectedModCount) throw new CME();
int i = cursor;
if (i >= size) throw new NoSuchElementException();
Object[] elementData = ArrayList.this.elementData;
if (i >= elementData.length) throw new CME();
cursor = i + 1;
return (E) elementData[lastRet = i];      // only bookkeeping survives; the array load is eliminated
List iteration, benchmark, BlackHole
41 of 50
@GenerateMicroBenchmark(BenchmarkType.All)
public void arrayListForeach(BlackHole bh) {
    for (Integer i : arrayList) {
        bh.consume(i);
    }
}

@GenerateMicroBenchmark(BenchmarkType.All)
public void arrayListIterator(BlackHole bh) {
    Iterator<Integer> iterator = arrayList.iterator();
    while (iterator.hasNext()) {
        Integer i = iterator.next();
        bh.consume(i);
    }
}
List iteration, benchmark, correct result
List impl    Iteration   Java 6    Java 7    Java 7 BlackHole
ArrayList    foreach     24.792     5.118     8.550
             iterator    24.769     0.140     8.608
LinkedList   foreach     15.236     9.485    11.739
             iterator    15.255     9.306    11.763
42 of 50
Measurement: average milliseconds / operation, less is better
A fleeting glimpse at the JMH details
We already know that JMH
• Uses Maven
• Uses an annotation-driven approach to detect benchmarks
• Provides BlackHole to consume results (and CPU cycles)
43 of 50
JMH: Building infrastructure
Finds annotated micro-benchmarks using reflection
Generates plain Java infrastructure source code
around the calls to the micro-benchmarks
Compile, pack, run, profit
No reflection during benchmark execution
44 of 50
JMH: Various metrics
Single execution time
Operations per time unit
Average time per operation
Percentile estimation of time per operation
45 of 50
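These metrics map onto JMH benchmark modes; a sketch, assuming the Mode enum and annotations of the JMH version used in the talk:

import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.GenerateMicroBenchmark;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Thread)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MetricModes {
    private double x = Math.PI;

    @GenerateMicroBenchmark
    @BenchmarkMode(Mode.Throughput)       // operations per time unit
    public double throughput() { return Math.log(x); }

    @GenerateMicroBenchmark
    @BenchmarkMode(Mode.AverageTime)      // average time per operation
    public double averageTime() { return Math.log(x); }

    @GenerateMicroBenchmark
    @BenchmarkMode(Mode.SampleTime)       // percentile estimation of time per operation
    public double sampleTime() { return Math.log(x); }
}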
JMH: Concurrency infrastructure
@State declares whether benchmark data is shared across the whole
benchmark, a single thread, or a group of threads
Fixtures (setUp and tearDown) can be run in the scope of
the whole run, an iteration or a single invocation
@Threads is a simple way to run a concurrent test, provided you
have defined a correct @State
@Group assigns threads to a particular role in the
benchmark (a sketch putting these together follows)
46 of 50
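A sketch of the concurrency facilities above; the annotation names are assumed from the JMH version of the talk, and the thread count can also be set from the command line instead:

import java.util.concurrent.atomic.AtomicLong;
import org.openjdk.jmh.annotations.GenerateMicroBenchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;

@State(Scope.Benchmark)                   // one instance shared by all benchmark threads
public class SharedCounterBenchmark {
    private AtomicLong counter;

    @Setup(Level.Iteration)               // fixture: reset before every iteration
    public void setUp() {
        counter = new AtomicLong();
    }

    @GenerateMicroBenchmark
    public long increment() {
        return counter.incrementAndGet(); // contended update; JMH starts all threads together
    }
}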
JMH: VM forking
Allows comparing results obtained from separate instances of the VM
• Otherwise the first test works on a clean JVM and the others do not
• VM processes are non-deterministic and may vary from run to run
(compilation order, multi-threading, randomization)
47 of 50
JMH: @CompilerControl
Instructs the JIT whether to compile a method or not
Instructs the JIT whether to inline methods
Inserts breakpoints into generated code
Prints method assembly
48 of 50
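A sketch of how that looks on a method, assuming the Mode names of that JMH version (DONT_INLINE here; EXCLUDE, INLINE, BREAK and PRINT are the other common ones):

import org.openjdk.jmh.annotations.CompilerControl;
import org.openjdk.jmh.annotations.GenerateMicroBenchmark;

public class CompilerControlExample {

    @GenerateMicroBenchmark
    public int measure() {
        return payload();
    }

    @CompilerControl(CompilerControl.Mode.DONT_INLINE) // keep the call boundary visible
    private int payload() {
        return 42;
    }
}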
Conclusions
Do not reinvent the wheel if you are not sure how it should work
(consider using an existing one)
Consider the results wrong if you don’t have a clear
explanation for them. Do not swallow mystical behavior
49 of 50
Thanks for your attention
Questions?
50 of 50