SPIDAL Java: High Performance Data Analytics with Java on Large Multicore HPC Clusters
sekanaya@indiana.edu
https://github.com/DSC-SPIDAL | http://saliya.org
24th High Performance Computing Symposium (HPC 2016)
April 3-6, 2016, Pasadena, CA, USA
as part of the SCS Spring Simulation Multi-Conference (SpringSim'16)
Saliya Ekanayake | Supun Kamburugamuve | Geoffrey Fox
High Performance?
[Chart: speedup from 48 to 128 nodes on an Intel Haswell HPC cluster with 40Gbps Infiniband. SPIDAL Java reaches 40x speedup (what we'll discuss today), versus typical Java with all MPI and typical Java with threads and MPI; the ideal is 64x (if life was so fair!)]
Introduction
• Big Data and HPC
 Big data + cloud is the norm, but not always
 Some applications demand significant computation and communication
 HPC clusters are ideal
 However, it’s not easy
• Java
 Unprecedented big data ecosystem
 Apache has over 300 big data systems, mostly written in Java
 Performance and APIs have improved greatly with Java 1.7.x
 Not much used in HPC, but can give comparable performance (e.g. SPIDAL Java)
 Comparable performance to C (more on this later)
 The Google query https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=java%20faster%20than%20c points to interesting discussions on why.
 Interoperable and productive
SPIDAL Java
• Scalable Parallel Interoperable Data Analytics Library (SPIDAL)
 Available at https://github.com/DSC-SPIDAL
• Includes Multidimensional Scaling (MDS) and Clustering Applications
 DA-MDS
 Y. Ruan and G. Fox, "A Robust and Scalable Solution for Interpolative Multidimensional Scaling with Weighting," 2013 IEEE 9th International Conference on eScience, Beijing, 2013, pp. 61–69. doi: 10.1109/eScience.2013.30
 DA-PWC
 Fox, G. C. Deterministic annealing and robust scalable data mining for the data deluge. In Proceedings of the 2nd International Workshop on Petascale Data Analytics: Challenges and Opportunities, PDAC ’11, ACM (New York, NY, USA, 2011), 39–40.
 DA-VS
 Fox, G., Mani, D., and Pyne, S. Parallel deterministic annealing clustering and its application to LC-MS data analysis. In Big Data, 2013 IEEE International Conference on (Oct 2013), 665–673.
 MDSasChisq
 General MDS implementation using the Levenberg–Marquardt algorithm
 Levenberg, K. A method for the solution of certain non-linear problems in least squares. Quarterly Journal of Applied Mathematics II, 2 (1944), 164–168.
SPIDAL Java Applications
• Gene Sequence Clustering and Visualization
 Results at WebPlotViz
 https://spidal-gw.dsc.soic.indiana.edu/resultsets/991946447
 https://spidal-gw.dsc.soic.indiana.edu/resultsets/795366853
 A few snapshots
[Pipeline: Sequence File → Pairwise Alignment → DA-MDS / DA-PWC → 3D Plot]
[Snapshots: 100,000 fungi sequences, 3D phylogenetic tree, 3D plot of vector data]
SPIDAL Java Applications
• Stocks Data Analysis
 Time series view of stocks
 E.g. with 1 year moving window: https://spidal-gw.dsc.soic.indiana.edu/public/timeseriesview/825496517
Performance Challenges
• Intra-node Communication
• Exploiting Fat Nodes
• Overhead of Garbage Collection
• Cost of Heap Allocated Objects
• Cache and Memory Access
Performance Challenges
• Intra-node Communication [1/3]
 Large core counts per node – 24 to 36
 Data analytics applications use global collective communication – Allreduce, Allgather, Broadcast, etc.
 HPC simulations, in contrast, typically use local communication for tasks like halo exchanges.
[Chart: Allreduce timing for 3 million double values distributed uniformly over 48 nodes]
• Identical message size per node, yet 24 MPI ranks per node is ~10 times slower than 1 rank per node
• Suggests the number of ranks per node should be 1 for the best communication performance
• But then how to exploit all available cores?
Performance Challenges
• Intra-node Communication [2/3]
 Solution: Shared memory
 Use threads?  didn’t work well (explained shortly)
 Processes with shared memory communication
 Custom implementation in SPIDAL Java outside of MPI framework
• Only 1 rank per node participates in the MPI collective call
• Others exchange data using shared memory maps
[Charts: communication times for 100K and 200K DA-MDS runs]
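Below is a minimal sketch of the shared-memory-map idea described above, using only java.nio memory-mapped files. It is illustrative, not SPIDAL Java's actual implementation (which uses OpenHFT's Bytes API and its own collective routines); the file path, sizes, and rank layout are assumptions. Each rank on a node writes its partial vector into its own slot of a common map, and only the node leader (rank 0 on the node) combines the slots and joins the inter-node MPI collective.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMemoryReduce {
    // Illustrative sketch: all ranks on a node map the same file; rank 0 is the node leader.
    public static void main(String[] args) throws Exception {
        int rankOnNode = Integer.parseInt(args[0]);   // 0 .. ranksPerNode-1
        int ranksPerNode = Integer.parseInt(args[1]);
        int valuesPerRank = 1_000;                    // doubles contributed by each rank (assumed)

        long slotBytes = valuesPerRank * Double.BYTES;
        try (RandomAccessFile raf = new RandomAccessFile("/dev/shm/spidal-sketch.mmap", "rw");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, slotBytes * ranksPerNode);

            // Each rank writes its partial result into its own slot (no copies onto the Java heap).
            map.position((int) (rankOnNode * slotBytes));
            for (int i = 0; i < valuesPerRank; i++) {
                map.putDouble(i * 0.5 + rankOnNode);  // placeholder data
            }
            // ... barrier among the ranks on this node (e.g. via MPI or another shared map) ...

            if (rankOnNode == 0) {
                // Node leader sums all slots; only this rank would participate in the MPI collective.
                double[] nodeSum = new double[valuesPerRank];
                for (int r = 0; r < ranksPerNode; r++) {
                    for (int i = 0; i < valuesPerRank; i++) {
                        nodeSum[i] += map.getDouble((int) (r * slotBytes) + i * Double.BYTES);
                    }
                }
                // MPI Allreduce of nodeSum across node leaders, then write the result back into the map.
            }
        }
    }
}
```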
Performance Challenges
• Intra-node Communication [3/3]
 Heterogeneity support
 Nodes with 24 and 36 cores
 Automatically detects configuration and allocates memory maps
 Implementation
 Custom shared memory implementation using OpenHFT’s Bytes API
 Supports collective calls necessary within SPIDAL Java
Performance Challenges
• Exploiting Fat Nodes [1/2]
 Large #Cores per Node
 E.g. 1 Node in Juliet HPC cluster
 2 Sockets
 12 Cores each
 2 Hardware threads per core
 L1 and L2 per core
 L3 shared per socket
 Two approaches
 All processes  1 proc per core
 1 Process multiple threads
 Which is better?
[Diagram: node layout with Socket 0 and Socket 1; each core has 2 hardware threads (HTs)]
Performance Challenges
• Exploiting Fat Nodes [2/2]
 Suggested thread model in literature  fork-join regions within a process
[Diagram: fork-join thread regions within a process, repeated over iterations]
1. Thread creation and scheduling overhead accumulates over iterations and becomes significant (~5%)
• True for Java threads as well as OpenMP in C/C++ (see https://github.com/esaliya/JavaThreads and https://github.com/esaliya/CppStack/tree/master/omp2/letomppar)
2. Long-running threads do better than this model, but still have non-negligible overhead
3. The solution is to use processes with shared memory communication, as in SPIDAL Java (the previous optimization)
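The overhead described above can be seen with a plain-JDK sketch of the two threading models; the thread count, iteration count, and dummy work are illustrative assumptions, not SPIDAL Java code. The first method forks and joins a fresh set of threads every iteration, so creation and scheduling costs accumulate; the second keeps long-running workers synchronized by a barrier.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class ThreadModels {
    static final int THREADS = 8, ITERATIONS = 10_000;
    static final double[] partial = new double[THREADS];

    static void work(int t) { partial[t] += Math.sqrt(t + 1); }  // placeholder per-iteration work

    // Model 1: fork-join region per iteration -- thread creation cost is paid every iteration.
    static void forkJoinPerIteration() throws InterruptedException {
        for (int iter = 0; iter < ITERATIONS; iter++) {
            Thread[] ts = new Thread[THREADS];
            for (int t = 0; t < THREADS; t++) {
                final int id = t;
                ts[t] = new Thread(() -> work(id));
                ts[t].start();
            }
            for (Thread th : ts) th.join();
        }
    }

    // Model 2: long-running threads that synchronize each iteration with a barrier.
    static void longRunningThreads() throws InterruptedException {
        CyclicBarrier barrier = new CyclicBarrier(THREADS);
        Thread[] ts = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            final int id = t;
            ts[t] = new Thread(() -> {
                for (int iter = 0; iter < ITERATIONS; iter++) {
                    work(id);
                    try { barrier.await(); } catch (InterruptedException | BrokenBarrierException e) { return; }
                }
            });
            ts[t].start();
        }
        for (Thread th : ts) th.join();
    }

    public static void main(String[] args) throws InterruptedException {
        long t0 = System.nanoTime(); forkJoinPerIteration();
        long t1 = System.nanoTime(); longRunningThreads();
        long t2 = System.nanoTime();
        System.out.printf("fork-join per iteration: %.1f ms, long-running: %.1f ms%n",
                (t1 - t0) / 1e6, (t2 - t1) / 1e6);
    }
}
```

Even the long-running model pays a synchronization cost each iteration, which is why SPIDAL Java ultimately prefers processes with shared-memory communication.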
Performance Challenges
• Garbage Collection
 “Stop the world” events are expensive
 Especially, for parallel processes with collective communications
 Typical OOP  allocate – use – forget
 Original SPIDAL code produced frequent garbage of small arrays
 Solution: Zero-GC using
 Static allocation and reuse
 Off-heap static buffers (more on next slide)
 Advantage
 No GC – obvious
 Scale to larger problem sizes
 E.g. the original SPIDAL code required a 5GB heap per process (x 24 = 120GB per node) to handle 200K MDS. The optimized code uses < 1GB of heap and finishes within the same time.
 Note: physical memory is 128GB, so the optimized SPIDAL can now do a 1 million point MDS within hardware limits.
[Charts: before optimization, heap size per process reaches the –Xmx limit (2.5GB) early in the computation, with frequent GC; after optimization, heap size per process stays well below –Xmx at ~1.1GB, with virtually no GC activity]
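A minimal sketch of the Zero-GC pattern: working arrays and off-heap buffers are allocated once, statically, and overwritten in place each iteration instead of allocating temporaries. The sizes and names are illustrative assumptions; SPIDAL Java manages its real buffers inside its communication layer.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class ZeroGcLoop {
    static final int N = 1_000_000;

    // Allocated once and reused every iteration -- these never become garbage.
    static final double[] scratch = new double[N];
    static final ByteBuffer commBuffer =
            ByteBuffer.allocateDirect(N * Double.BYTES).order(ByteOrder.nativeOrder());

    public static void main(String[] args) {
        for (int iter = 0; iter < 10; iter++) {
            // Original pattern (avoided): double[] tmp = new double[N]; -- fresh garbage every iteration.

            // Zero-GC pattern: overwrite the preallocated array and off-heap buffer in place.
            for (int i = 0; i < N; i++) {
                scratch[i] = i * 0.001 + iter;        // placeholder computation
            }
            commBuffer.clear();
            for (int i = 0; i < N; i++) {
                commBuffer.putDouble(scratch[i]);     // stage data off-heap for native I/O or MPI
            }
        }
        System.out.println("done with no per-iteration allocations");
    }
}
```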
Performance Challenges
• I/O with Heap Allocated Objects
 Java-to-native I/O creates copies of objects in heap
 Otherwise can’t guarantee object’s memory location due to GC
 Too expensive
 Solution: Off-heap buffers (memory maps)
 Initial data loading  significantly faster than Java stream API calls
 Intra-node messaging  gives the best performance
 MPI inter-node communications
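A minimal sketch of off-heap data loading with a memory map rather than stream API calls; the file name, matrix dimensions, and byte order are illustrative assumptions. The file of doubles is mapped once and read through a DoubleBuffer view, avoiding per-element stream reads and extra heap copies.

```java
import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.DoubleBuffer;
import java.nio.channels.FileChannel;

public class MappedLoader {
    public static void main(String[] args) throws Exception {
        int rows = 1000, cols = 1000;                 // illustrative matrix dimensions
        long bytes = (long) rows * cols * Double.BYTES;

        try (RandomAccessFile raf = new RandomAccessFile("distances.bin", "r");
             FileChannel ch = raf.getChannel()) {
            // Map the whole file off-heap and view it as doubles (assumes little-endian data).
            DoubleBuffer data = ch.map(FileChannel.MapMode.READ_ONLY, 0, bytes)
                                  .order(ByteOrder.LITTLE_ENDIAN)
                                  .asDoubleBuffer();

            double sum = 0.0;
            for (int i = 0; i < rows * cols; i++) {
                sum += data.get(i);                   // no intermediate heap copies
            }
            System.out.println("checksum = " + sum);
        }
    }
}
```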
• Cache and Memory Access
 Nested data structures are neat, but expensive
 Solution: Contiguous memory with 1D arrays over 2D structures
 Indirect memory references are costly
 Also adopted from HPC:
 Blocked loops and loop ordering
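A minimal sketch of both ideas: the matrix lives in a single 1D row-major array with explicit index arithmetic instead of a nested double[][], and the loop is blocked so each tile stays in cache. The matrix size and block size are illustrative assumptions.

```java
public class FlatBlockedMatrix {
    public static void main(String[] args) {
        int n = 2048, block = 64;                 // illustrative size and cache-friendly block
        double[] a = new double[n * n];           // 1D row-major storage instead of double[n][n]
        for (int i = 0; i < n * n; i++) a[i] = i % 7;

        double[] rowSums = new double[n];
        // Blocked loops: process the matrix tile by tile to improve cache reuse.
        for (int ii = 0; ii < n; ii += block) {
            for (int jj = 0; jj < n; jj += block) {
                int iMax = Math.min(ii + block, n);
                int jMax = Math.min(jj + block, n);
                for (int i = ii; i < iMax; i++) {
                    int rowBase = i * n;          // a[i][j] is a[rowBase + j]
                    double s = 0.0;
                    for (int j = jj; j < jMax; j++) {
                        s += a[rowBase + j];
                    }
                    rowSums[i] += s;
                }
            }
        }
        System.out.println("rowSums[0] = " + rowSums[0]);
    }
}
```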
Evaluation
• HPC Cluster
 128 Intel Haswell nodes with 2.3GHz nominal frequency
 96 nodes with 24 cores on 2 sockets (12 cores each)
 32 nodes with 36 cores on 2 sockets (18 cores each)
 128GB memory per node
 40Gbps Infiniband
• Software
 Java 1.8
 OpenHFT JavaLang 6.7.2
 Habanero Java 0.1.4
 OpenMPI 1.10.1
Application: DA-MDS
• Computation grows as O(N²)
• Communication is global and grows as O(N)
[Charts: DA-MDS timing for 100K, 200K, and 400K points]
• 1152-way total parallelism across 48 nodes
• All combinations of 24-way parallelism per node
• LHS is all processes
• RHS is all internal threads and MPI across nodes
1. With SM communications in SPIDAL, processes outperform threads (blue line)
2. Other optimizations further improve performance (green line)
• Speedup for varying data sizes
• All processes
• LHS is 1 proc per node across 48 nodes
• RHS is 24 procs per node across 128 nodes (ideal 64x speedup)
Larger data sizes show better speedup (400K – 45x, 200K – 40x, 100K – 38x)
• Speedup on 36-core nodes
• All processes
• LHS is 1 proc per node across 32 nodes
• RHS is 36 procs per node across 32 nodes (ideal 36x speedup)
Speedup plateaus around 23x after 24-way parallelism per node
[Chart: the effect of different optimizations on speedup]
Java, is it worth it? – YES!
Also, with JIT, some cases in MDS are better than C