spcl.inf.ethz.ch
@spcl_eth
Data-Centric Parallel Programming
Torsten Hoefler, Keynote at AsHES @ IPDPS’19, Rio, Brazil
Alexandros Ziogas, Tal Ben-Nun, Guillermo Indalecio, Timo Schneider, Mathieu Luisier, and Johannes de Fine Licht
and the whole DAPP team @ SPCL
https://eurompi19.inf.ethz.ch
Changing hardware constraints and the physics of computing
[1]: Mark Horowitz, Computing’s Energy Problem (and what we can do about it), ISSCC 2014, plenary
[2]: Moore: Landauer Limit Demonstrated, IEEE Spectrum 2012
Process scaling: 130nm → 90nm → 65nm → 45nm → 32nm → 22nm → 14nm → 10nm. Energy per operation at 0.9 V [1]:
32-bit FP ADD: 0.9 pJ
32-bit FP MUL: 3.2 pJ
2x32 bit from L1 (8 kiB): 10 pJ
2x32 bit from L2 (1 MiB): 100 pJ
2x32 bit from DRAM: 1.3 nJ
…
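The imbalance in these numbers can be made concrete with a few lines of Python (energy values copied from the list above):

```python
# Energy per operation at 0.9 V (Horowitz, ISSCC 2014), in picojoules
fp_add = 0.9     # 32-bit FP ADD
l1     = 10.0    # 2x32-bit operands from L1 (8 kiB)
l2     = 100.0   # 2x32-bit operands from L2 (1 MiB)
dram   = 1300.0  # 2x32-bit operands from DRAM (1.3 nJ)

for name, energy in [("L1", l1), ("L2", l2), ("DRAM", dram)]:
    print(f"{name} fetch costs {energy / fp_add:.0f}x a 32-bit FP ADD")
```

Fetching operands from DRAM costs roughly 1400x a floating-point add, which is the core of the locality argument.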
Three Ls of modern computing:
How to address locality challenges on standard architectures and programming?
D. Unat et al.: “Trends in Data Locality Abstractions for HPC Systems”
IEEE Transactions on Parallel and Distributed Systems (TPDS). Vol 28, Nr. 10, IEEE, Oct. 2017
Control in Load-store vs. Dataflow
Figure: load-store executes x = a + b as an instruction sequence through the memory–cache–register hierarchy under central control:

ld a, r1
ld b, r2
add r1, r2
st r1, x

Static dataflow instead computes y = (a + b) * (c + d) as a graph: a, b, c, d flow into two adders, the intermediate sums a+b and c+d flow into a multiplier, and y flows out to memory.
Turing Award 1977 (Backus): "Surely there must be a less primitive
way of making big changes in the store than pushing vast numbers
of words back and forth through the von Neumann bottleneck."
Load-store (“von Neumann”): energy per instruction ≈70 pJ (source: Mark Horowitz, ISSCC’14). Static dataflow (“non von Neumann”): energy per operation 1–3 pJ. Control overhead: high for load-store, very low for static dataflow.
Still lower control overhead:
Single Instruction Multiple Data/Threads (SIMD – vector CPU, SIMT – GPU)
Figure: a single control unit drives many ALUs (SIMD lanes) through the same memory–cache–register hierarchy; the dataflow graph for y = (a + b) * (c + d) maps onto the lanes. At 45 nm, 0.9 V [1]:
Random-access SRAM read: 8 kiB: 10 pJ; 32 kiB: 20 pJ; 1 MiB: 100 pJ
Single R/W register, 32 bit: 0.1 pJ
High Performance Computing really became a data management challenge
Data movement will dominate everything!
Source: Fatollahi-Fard et al.
- “In future microprocessors, the energy expended for data movement will have a critical effect on achievable performance.”
- “… movement consumes almost 58 watts with hardly any energy budget left for computation.”
- “…the cost of data movement starts to dominate.”
- “…data movement over these networks must be limited to conserve energy…”
- The phrase “data movement” appears 18 times on 11 pages (usually in concerning contexts)!
- “Efficient data orchestration will increasingly be critical, evolving to more efficient memory hierarchies and new types of interconnect tailored for locality and that depend on sophisticated software to place computation and data so as to minimize data movement.”
Source: NVIDIA
Source: Kogge, Shalf
- Well, to a good approximation, how we programmed yesterday… or last year? Or four decades ago?
- Control-centric programming: worry about operation counts (flop/s is the metric, isn’t it?); data movement is at best implicit (or invisible/ignored)
- Legion [1] is taking a good direction towards data-centric: tasking relies on data placement but not really on dependencies (not visible to the tool-chain); but it is still control-centric in the tasks – not (performance) portable between devices!
- Let’s go a step further, towards an explicitly data-centric viewpoint – for performance engineers at least!
“Sophisticated software”: How do we program today?
Backus ’77: “The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer’s bottleneck does.”
[1]: Bauer et al.: “Legion: expressing locality and independence with logical regions”, SC12, 2012
Performance Portability with Data-Centric (DaCe) Parallel Programming
Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
Figure: a Domain Scientist formulates the problem (e.g. the diffusion equation ∂u/∂t − α∇²u = 0) as a high-level program in Python/NumPy, TensorFlow, MATLAB, a DSL, or via the SDFG Builder API; this is lowered to the Data-Centric Intermediate Representation (SDFG, §3). A Performance Engineer applies graph transformations (API, interactive, §4), guided by hardware information and performance results, to obtain transformed dataflow. The SDFG compiler then emits CPU binaries, GPU binaries, and FPGA modules on top of a thin runtime infrastructure.
DAPP – Data-Centric Programming Concepts
- Data Containers: store volatile (buffers, queues, RAM) and nonvolatile (files, I/O) information; can be sources or sinks of data
- Computation: stateless functions (tasklets) that perform computations at any granularity; data access only through ports
- Data Movement / Dependencies: data flowing between containers and tasklets/ports; implemented as access, copies, streaming, …
- Parallelism and States: map scopes provide parallelism; states constrain parallelism outside of dataflow
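As a rough illustration of these concepts in plain Python (NumPy arrays stand in for data containers, the function body for a tasklet, and the loop for a map scope; the names are illustrative, not the DaCe API):

```python
import numpy as np

def scale_tasklet(a_in, factor):
    # Tasklet: a stateless computation that only reads its input ports
    return factor * a_in

# Data containers: NumPy arrays act as sources/sinks of data
x = np.arange(4, dtype=np.float64)   # source container
y = np.empty_like(x)                 # sink container

# Map scope: every iteration is independent, so it may run in parallel
for i in range(x.shape[0]):
    y[i] = scale_tasklet(x[i], 2.0)

print(y)  # [0. 2. 4. 6.]
```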
A first example in DaCe Python
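The slide’s example is shown as an image; a minimal program of this flavor is sketched below. The DaCe form in the comment assumes the `dace` package with its `@dace.program` decorator and symbolic sizes; the plain-NumPy equivalent is what actually runs here:

```python
import numpy as np

# With DaCe installed, the same kernel would be written roughly as:
#
#   import dace
#   N = dace.symbol('N')
#
#   @dace.program
#   def saxpy(a: dace.float64, x: dace.float64[N], y: dace.float64[N]):
#       y[:] = a * x + y
#
# The decorator parses the NumPy-like body into an SDFG, which can then be
# transformed and compiled. Plain-NumPy equivalent:
def saxpy(a, x, y):
    y[:] = a * x + y

x = np.ones(8)
y = np.full(8, 2.0)
saxpy(3.0, x, y)
print(y[0])  # 5.0
```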
DIODE User Interface
Figure: the DIODE IDE shows, side by side, the source code, the applicable transformations, the (malleable) SDFG, the generated code, and performance results.
Performance for matrix multiplication on x86
Figures: starting from the naïve SDFG, data-centric transformations are applied cumulatively, each chart adding one step: Naïve → MapReduceFusion → LoopReorder → BlockTiling → RegisterTiling → LocalStorage → PromoteTransient.
Performance for matrix multiplication on x86
Final chart: Intel MKL and OpenBLAS differ by 25%; the DAPP-generated code, with more tuning, reaches 98.6% of MKL. But do we really care about MMM on x86 CPUs?
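The BlockTiling and RegisterTiling steps amount to splitting the map ranges into blocks; a plain-Python sketch of block tiling (illustrative tile size, not the DaCe transformation itself):

```python
import numpy as np

def matmul_tiled(A, B, tile=32):
    """Block-tiled matrix multiply: the i/j/k loops are split into
    tile-sized blocks so each block's working set fits in cache."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N), dtype=A.dtype)
    for i0 in range(0, M, tile):
        for j0 in range(0, N, tile):
            for k0 in range(0, K, tile):
                # Multiply one pair of blocks; NumPy stands in for the
                # innermost register-tiled loops.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile])
    return C

A = np.random.rand(64, 48)
B = np.random.rand(48, 80)
assert np.allclose(matmul_tiled(A, B, tile=16), A @ B)
```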
Hardware Mapping: Load/Store Architectures
- Recursive code generation (C++, CUDA); control flow via construct detection and gotos
- Parallelism: multi-core CPU via OpenMP, atomics, and threads; GPU via CUDA kernels and streams; connected components run concurrently
- Memory and interaction with accelerators: array–array edges create intra-/inter-device copies; memory access validation at compilation; automatic CPU-SDFG-to-GPU transformation; tasklet code immutable
Hardware Mapping: Pipelined Architectures
- Module generation with HDL and HLS; integration with Xilinx SDAccel; nested SDFGs become FPGA state machines
- Parallelism: pipelines exploit temporal locality; vectorization and replication exploit spatial locality; replication enables parametric systolic array generation
- Memory access: burst memory access, vectorization; streams for inter-PE communication
Performance (Portability) Evaluation
- Three platforms: Intel Xeon E5-2650 v4 CPU (2.20 GHz, no HT); Tesla P100 GPU; Xilinx VCU1525 hosting an XCVU9P FPGA
- Compilers: GCC 8.2.0, Clang 6.0, icc 18.0.3
- Polyhedral optimizing compilers: Polly 6.0, Pluto 0.11.4, PPCG 0.8
- GPU and FPGA compilers: CUDA nvcc 9.2, Xilinx SDAccel 2018.2
- Frameworks and optimized libraries: HPX, Halide, Intel MKL, NVIDIA cuBLAS, cuSPARSE, CUTLASS, CUB
Performance Evaluation: Fundamental Kernels (CPU)
- Database Query: selects roughly 50% of a 67,108,864-element column
- Matrix Multiplication (MM): 2048x2048x2048
- Histogram: 8192x8192
- Jacobi stencil: 2048x2048 for T=1024
- Sparse Matrix-Vector Multiplication (SpMV): 8192x8192 CSR matrix (nnz=33,554,432)
Chart annotations (one per kernel): 99.9% of MKL; 8.12x faster; 98.6% of MKL; 2.5x faster; 82.7% of Halide.
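For reference, the SpMV kernel above operates on a CSR matrix; a minimal CSR sparse matrix–vector multiply in plain Python/NumPy:

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x for a CSR matrix given by (values, col_idx, row_ptr)."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows, dtype=values.dtype)
    for i in range(n_rows):
        start, end = row_ptr[i], row_ptr[i + 1]
        # Dot product of row i's stored entries with the gathered x entries
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: [[1, 0, 2], [0, 3, 0], [4, 0, 5]]
values  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
col_idx = np.array([0, 2, 1, 0, 2])
row_ptr = np.array([0, 2, 3, 5])
x = np.array([1.0, 1.0, 1.0])
print(spmv_csr(values, col_idx, row_ptr, x))  # [3. 3. 9.]
```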
Performance Evaluation: Fundamental Kernels (GPU, FPGA)
Chart annotations – GPU: 90% of CUTLASS; FPGA: 309,000x, 19.5x of Spatial.
Performance Evaluation: Polybench (CPU)
- Polyhedral benchmark suite with 30 applications
- Without any transformations, achieves a 1.43x (geometric mean) speedup over general-purpose compilers
Performance Evaluation: Polybench (GPU, FPGA)
- Automatically transformed from CPU code
- GPU: 1.12x geomean speedup
- FPGA: the first full set of placed-and-routed Polybench; 11.8x
Case Study: Parallel Breadth-First Search
- Compared with Galois and Gluon, state-of-the-art graph processing frameworks on CPU
- Graphs: road maps (USA, OSM-Europe); social networks (Twitter, LiveJournal); synthetic (Kronecker graphs)
Performance portability – fine, but who cares about microbenchmarks?
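For reference, the level-synchronous frontier formulation that frameworks like Galois and Gluon parallelize, as a sequential plain-Python sketch (the parallel versions partition the frontier across threads):

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: distance from source for each vertex.

    adj: list of adjacency lists; unreached vertices get -1.
    Each frontier expansion is independent per vertex, which is
    the parallelism graph frameworks exploit.
    """
    dist = [-1] * len(adj)
    dist[source] = 0
    frontier = [source]
    level = 0
    while frontier:
        level += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                if dist[v] == -1:
                    dist[v] = level
                    next_frontier.append(v)
        frontier = next_frontier
    return dist

adj = [[1, 2], [0, 3], [0, 3], [1, 2, 4], [3]]
print(bfs_levels(adj, 0))  # [0, 1, 1, 2, 3]
```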
Remember the promise of DAPP – on to a real application!
(The DaCe pipeline figure from the earlier overview slide is shown again.)
Next-Generation Transistors need to be cooler – addressing self-heating
- OMEN code (Luisier et al., Gordon Bell award finalist 2011 and 2015)
- 90k SLOC; C, C++, CUDA, MPI, OpenMP, …
Quantum Transport Simulations with OMEN
Electrons G(E, k_z) and phonons D(ω, q_z) are coupled through GF and SSE stages (NEGF formalism):

SSE (scattering self-energies):
Σ≷(E, k_z) ∝ Σ G≷(E + ℏω, k_z − q_z) · D≷(ω, q_z)
Π≷(ω, q_z) ∝ Σ G≷(E, k_z) · G≷(E + ℏω, k_z + q_z)

GF (Green’s functions):
(E·S − H − Σᴿ) · Gᴿ = I,  G< = Gᴿ · Σ< · Gᴬ
(ω² − Φ − Πᴿ) · Dᴿ = I,  D< = Dᴿ · Π< · Dᴬ
All of OMEN (90k SLOC) in a single SDFG – (collapsed) tasklets contain more SDFGs
Figure: the top-level SDFG loops (i = 0; i++) until convergence (flag b). In the GF state, a map over (k_z, E) feeds H[k_z] and Σ≷[k_z, E] into RGF tasklets producing G≷[k_z, E] and the electron current I_e (sum reduction), while a map over (q_z, ω) feeds Φ[q_z] and Π≷[q_z, ω] into RGF producing D≷[q_z, ω] and I_Φ (sum reduction). In the SSE state, a map over (k_z, E, q_z, ω, a, b) consumes ∇H, G≷, and D≷ and accumulates Σ≷ and Π≷ (sum reductions), which feed the next iteration.
Zooming into SSE (large share of the runtime)
DaCe transformations yield between 100–250x less communication at scale (from PB to TB)!
Additional interesting performance insights
- Python is slow! OK, we knew that – but compiled can be fast! (Piz Daint single node, P100)
- cuBLAS can be very inefficient (well, unless you floptimize) – the basic operation in SSE is many very small MMMs (5k atoms; Piz Daint and Summit)
10,240 atoms on 27,360 V100 GPUs (full-scale Summit): 56 Pflop/s with I/O (28% of peak). Already ~100x speedup on 25% of Summit – the original OMEN does not scale further! Communication time reduced by 417x on Piz Daint; communication volume on full-scale Summit from 12 PB/iter to 87 TB/iter.
An example of fine-grained data-centric optimization (i.e., how to vectorize)
Overview and wrap-up
This project has received funding from the European Research Council (ERC) under grant agreement "DAPP (PI: T. Hoefler)".

More Related Content

PDF
AI is Impacting HPC Everywhere
PDF
ECP Application Development
PDF
HTCC poster for CERN Openlab opendays 2015
PDF
Real-time applications on IntelXeon/Phi
PDF
(Im2col)accelerating deep neural networks on low power heterogeneous architec...
PPT
Presentation
PPT
20072311272506
PPT
OpenCL caffe IWOCL 2016 presentation final
AI is Impacting HPC Everywhere
ECP Application Development
HTCC poster for CERN Openlab opendays 2015
Real-time applications on IntelXeon/Phi
(Im2col)accelerating deep neural networks on low power heterogeneous architec...
Presentation
20072311272506
OpenCL caffe IWOCL 2016 presentation final

What's hot (20)

PDF
Big_Data_Heterogeneous_Programming IEEE_Big_Data 2015
PPTX
APSys Presentation Final copy2
PDF
Solving Endgames in Large Imperfect-Information Games such as Poker
PPTX
Beyond Hadoop 1.0: A Holistic View of Hadoop YARN, Spark and GraphLab
PDF
Performance Optimization of Deep Learning Frameworks Caffe* and Tensorflow* f...
PDF
Massively Parallel K-Nearest Neighbor Computation on Distributed Architectures
PDF
cug2011-praveen
PDF
Towards a Systematic Study of Big Data Performance and Benchmarking
PDF
Xian He Sun Data-Centric Into
PPTX
Java Thread and Process Performance for Parallel Machine Learning on Multicor...
PPTX
Advanced spark deep learning
PPTX
Hardware Acceleration of SVM Training for Real-time Embedded Systems: An Over...
PDF
Accelerate Machine Learning Software on Intel Architecture
PPTX
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
PDF
PADAL19: Runtime-Assisted Locality Abstraction Using Elastic Places and Virtu...
PDF
Spine net learning scale permuted backbone for recognition and localization
PPTX
High Performance Data Analytics with Java on Large Multicore HPC Clusters
PDF
Scalable Distributed Real-Time Clustering for Big Data Streams
PDF
Intro to Machine Learning for GPUs
PDF
FPGA-accelerated High-Performance Computing – Close to Breakthrough or Pipedr...
Big_Data_Heterogeneous_Programming IEEE_Big_Data 2015
APSys Presentation Final copy2
Solving Endgames in Large Imperfect-Information Games such as Poker
Beyond Hadoop 1.0: A Holistic View of Hadoop YARN, Spark and GraphLab
Performance Optimization of Deep Learning Frameworks Caffe* and Tensorflow* f...
Massively Parallel K-Nearest Neighbor Computation on Distributed Architectures
cug2011-praveen
Towards a Systematic Study of Big Data Performance and Benchmarking
Xian He Sun Data-Centric Into
Java Thread and Process Performance for Parallel Machine Learning on Multicor...
Advanced spark deep learning
Hardware Acceleration of SVM Training for Real-time Embedded Systems: An Over...
Accelerate Machine Learning Software on Intel Architecture
How to Design Scalable HPC, Deep Learning, and Cloud Middleware for Exascale ...
PADAL19: Runtime-Assisted Locality Abstraction Using Elastic Places and Virtu...
Spine net learning scale permuted backbone for recognition and localization
High Performance Data Analytics with Java on Large Multicore HPC Clusters
Scalable Distributed Real-Time Clustering for Big Data Streams
Intro to Machine Learning for GPUs
FPGA-accelerated High-Performance Computing – Close to Breakthrough or Pipedr...
Ad

Similar to Data-Centric Parallel Programming (20)

PDF
(Costless) Software Abstractions for Parallel Architectures
PDF
Assisting User’s Transition to Titan’s Accelerated Architecture
PDF
Application Optimisation using OpenPOWER and Power 9 systems
PPTX
Exascale Capabl
PDF
Software Abstractions for Parallel Hardware
PPTX
Programmable Exascale Supercomputer
PDF
HDR Defence - Software Abstractions for Parallel Architectures
PDF
E3MV - Embedded Vision - Sundance
PDF
Массовый параллелизм для гетерогенных вычислений на C++ для беспилотных автом...
PPTX
OpenACC and Open Hackathons Monthly Highlights: September 2022.pptx
PDF
Exploring emerging technologies in the HPC co-design space
PDF
Mauricio breteernitiz hpc-exascale-iscte
PDF
Keynote (Mike Muller) - Is There Anything New in Heterogeneous Computing - by...
PDF
Hardware for Deep Learning AI ML CNN.pdf
PDF
Compute API –Past & Future
PDF
Arvindsujeeth scaladays12
PDF
Utilizing AMD GPUs: Tuning, programming models, and roadmap
PDF
Software Design Practices for Large-Scale Automation
PPTX
Matching Data Intensive Applications and Hardware/Software Architectures
PPTX
Matching Data Intensive Applications and Hardware/Software Architectures
(Costless) Software Abstractions for Parallel Architectures
Assisting User’s Transition to Titan’s Accelerated Architecture
Application Optimisation using OpenPOWER and Power 9 systems
Exascale Capabl
Software Abstractions for Parallel Hardware
Programmable Exascale Supercomputer
HDR Defence - Software Abstractions for Parallel Architectures
E3MV - Embedded Vision - Sundance
Массовый параллелизм для гетерогенных вычислений на C++ для беспилотных автом...
OpenACC and Open Hackathons Monthly Highlights: September 2022.pptx
Exploring emerging technologies in the HPC co-design space
Mauricio breteernitiz hpc-exascale-iscte
Keynote (Mike Muller) - Is There Anything New in Heterogeneous Computing - by...
Hardware for Deep Learning AI ML CNN.pdf
Compute API –Past & Future
Arvindsujeeth scaladays12
Utilizing AMD GPUs: Tuning, programming models, and roadmap
Software Design Practices for Large-Scale Automation
Matching Data Intensive Applications and Hardware/Software Architectures
Matching Data Intensive Applications and Hardware/Software Architectures
Ad

More from inside-BigData.com (20)

PDF
Major Market Shifts in IT
PDF
Preparing to program Aurora at Exascale - Early experiences and future direct...
PPTX
Transforming Private 5G Networks
PDF
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
PDF
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
PDF
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
PDF
HPC Impact: EDA Telemetry Neural Networks
PDF
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
PDF
Machine Learning for Weather Forecasts
PPTX
HPC AI Advisory Council Update
PDF
Fugaku Supercomputer joins fight against COVID-19
PDF
Energy Efficient Computing using Dynamic Tuning
PDF
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
PDF
State of ARM-based HPC
PDF
Versal Premium ACAP for Network and Cloud Acceleration
PDF
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
PDF
Scaling TCO in a Post Moore's Era
PDF
CUDA-Python and RAPIDS for blazing fast scientific computing
PDF
Introducing HPC with a Raspberry Pi Cluster
PDF
Overview of HPC Interconnects
Major Market Shifts in IT
Preparing to program Aurora at Exascale - Early experiences and future direct...
Transforming Private 5G Networks
The Incorporation of Machine Learning into Scientific Simulations at Lawrence...
How to Achieve High-Performance, Scalable and Distributed DNN Training on Mod...
Evolving Cyberinfrastructure, Democratizing Data, and Scaling AI to Catalyze ...
HPC Impact: EDA Telemetry Neural Networks
Biohybrid Robotic Jellyfish for Future Applications in Ocean Monitoring
Machine Learning for Weather Forecasts
HPC AI Advisory Council Update
Fugaku Supercomputer joins fight against COVID-19
Energy Efficient Computing using Dynamic Tuning
HPC at Scale Enabled by DDN A3i and NVIDIA SuperPOD
State of ARM-based HPC
Versal Premium ACAP for Network and Cloud Acceleration
Zettar: Moving Massive Amounts of Data across Any Distance Efficiently
Scaling TCO in a Post Moore's Era
CUDA-Python and RAPIDS for blazing fast scientific computing
Introducing HPC with a Raspberry Pi Cluster
Overview of HPC Interconnects

Recently uploaded (20)

PDF
Shreyas Phanse Resume: Experienced Backend Engineer | Java • Spring Boot • Ka...
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Review of recent advances in non-invasive hemoglobin estimation
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
cuic standard and advanced reporting.pdf
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PPTX
MYSQL Presentation for SQL database connectivity
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PPTX
Big Data Technologies - Introduction.pptx
PDF
Chapter 3 Spatial Domain Image Processing.pdf
Shreyas Phanse Resume: Experienced Backend Engineer | Java • Spring Boot • Ka...
Unlocking AI with Model Context Protocol (MCP)
Review of recent advances in non-invasive hemoglobin estimation
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Encapsulation_ Review paper, used for researhc scholars
NewMind AI Weekly Chronicles - August'25 Week I
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Per capita expenditure prediction using model stacking based on satellite ima...
Digital-Transformation-Roadmap-for-Companies.pptx
cuic standard and advanced reporting.pdf
Understanding_Digital_Forensics_Presentation.pptx
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
MYSQL Presentation for SQL database connectivity
“AI and Expert System Decision Support & Business Intelligence Systems”
Agricultural_Statistics_at_a_Glance_2022_0.pdf
The Rise and Fall of 3GPP – Time for a Sabbatical?
Big Data Technologies - Introduction.pptx
Chapter 3 Spatial Domain Image Processing.pdf

Data-Centric Parallel Programming

  • 1. spcl.inf.ethz.ch @spcl_eth Data-Centric Parallel Programming Torsten Hoefler, Keynote at AsHES @ IPDPS’19, Rio, Brazil Alexandros Ziogas, Tal Ben-Nun, Guillermo Indalecio, Timo Schneider, Mathieu Luisier, and Johannes de Fine Licht and the whole DAPP team @ SPCL https://eurompi19.inf.ethz.ch
  • 2. spcl.inf.ethz.ch @spcl_eth 2 Changing hardware constraints and the physics of computing [1]: Marc Horowitz, Computing’s Energy Problem (and what we can do about it), ISSC 2014, plenary [2]: Moore: Landauer Limit Demonstrated, IEEE Spectrum 2012 130nm 90nm 65nm 45nm 32nm 22nm 14nm 10nm 0.9 V [1] 32-bit FP ADD: 0.9 pJ 32-bit FP MUL: 3.2 pJ 2x32 bit from L1 (8 kiB): 10 pJ 2x32 bit from L2 (1 MiB): 100 pJ 2x32 bit from DRAM: 1.3 nJ … Three Ls of modern computing: How to address locality challenges on standard architectures and programming? D. Unat et al.: “Trends in Data Locality Abstractions for HPC Systems” IEEE Transactions on Parallel and Distributed Systems (TPDS). Vol 28, Nr. 10, IEEE, Oct. 2017
  • 3. spcl.inf.ethz.ch @spcl_eth 3 Control in Load-store vs. Dataflow Memory Cache RegistersControl x=a+b ld a, r1 ALU ald b, r2 badd r1, r2 ba x bast r1, x Memory + c d y y=(a+b)*(c+d) a b + x a b c d a+b c+d y Turing Award 1977 (Backus): "Surely there must be a less primitive way of making big changes in the store than pushing vast numbers of words back and forth through the von Neumann bottleneck." Load-store (“von Neumann”) Energy per instruction: 70pJ Source: Mark Horowitz, ISSC’14 Energy per operation: 1-3pJ Static Dataflow (“non von Neumann”) Very Low High
  • 4. spcl.inf.ethz.ch @spcl_eth Still Lower Control Locality 4 Single Instruction Multiple Data/Threads (SIMD - Vector CPU, SIMT - GPU) Memory Cache RegistersControl ALUALU ALUALU ALUALU ALUALU ALUALU 45nm, 0.9 V [1] Random Access SRAM: 8 kiB: 10 pJ 32 kiB: 20 pJ 1 MiB: 100 pJ Memory + c d ya b + x a b c d 45nm, 0.9 V [1] Single R/W registers: 32 bit: 0.1 pJ [1]: Marc Horowitz, Computing’s Energy Problem (and what we can do about it), ISSC 2014, plenary High Performance Computing really became a data management challenge
  • 5. spcl.inf.ethz.ch @spcl_eth 5 Data movement will dominate everything! Source: Fatollahi-Fard et al.  “In future microprocessors, the energy expended for data movement will have a critical effect on achievable performance.”  “… movement consumes almost 58 watts with hardly any energy budget left for computation.”  “…the cost of data movement starts to dominate.”  “…data movement over these networks must be limited to conserve energy…”  the phrase “data movement” appears 18 times on 11 pages (usually in concerning contexts)!  “Efficient data orchestration will increasingly be critical, evolving to more efficient memory hierarchies and new types of interconnect tailored for locality and that depend on sophisticated software to place computation and data so as to minimize data movement.” Source: NVIDIA Source: Kogge, Shalf
  • 6. spcl.inf.ethz.ch @spcl_eth  Well, to a good approximation how we programmed yesterday  Or last year?  Or four decades ago?  Control-centric programming  Worry about operation counts (flop/s is the metric, isn’t it?)  Data movement is at best implicit (or invisible/ignored)  Legion [1] is taking a good direction towards data-centric  Tasking relies on data placement but not really dependencies (not visible to tool-chain)  But it is still control-centric in the tasks – not (performance) portable between devices!  Let’s go a step further towards an explicitly data-centric viewpoint  For performance engineers at least! 6 “Sophisticated software”: How do we program today? Backus ‘77: “The assignment statement is the von Neumann bottleneck of programming languages and keeps us thinking in word-at-a-time terms in much the same way the computer’s bottleneck does.” [1]: Bauer et al.: “Legion: expressing locality and independence with logical regions”, SC12, 2012
  • 7. spcl.inf.ethz.ch @spcl_eth 7 Performance Portability with DataCentric (DaCe) Parallel Programming Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs SystemDomain Scientist Performance Engineer High-Level Program Data-Centric Intermediate Representation (SDFG, §3) 𝜕𝑢 𝜕𝑡 − 𝛼𝛻2 𝑢 = 0 Problem Formulation FPGA Modules CPU Binary Runtime Hardware Information Graph Transformations (API, Interactive, §4) SDFG Compiler Transformed Dataflow Performance Results Thin Runtime Infrastructure GPU Binary Python / NumPy 𝑳 𝑹 * * * * * * TensorFlow DSLs MATLAB SDFG Builder API
  • 8. spcl.inf.ethz.ch @spcl_eth 8 DAPP – Data Centric Programming Concepts • Store volatile (buffers, queues, RAM) and nonvolatile (files, I/O) information • Can be sources or sinks of data • Stateless functions that perform computations at any granularity • Data access only through ports • Data flowing between containers and tasklets/ports • Implemented as access, copies, streaming, … • Map scopes provide parallelism • States constrain parallelism outside of datatflow Data Containers Computation Data Movement / Dependencies Parallelism and States
  • 10. spcl.inf.ethz.ch @spcl_eth DIODE User Interface 10 Source Code Transformations SDFG (malleable) SDFGGenerated Code Performance Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 11. spcl.inf.ethz.ch @spcl_eth 11 Performance for matrix multiplication on x86 SDFG Naïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 12. spcl.inf.ethz.ch @spcl_eth 12 Performance for matrix multiplication on x86 SDFG MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 13. spcl.inf.ethz.ch @spcl_eth 13 Performance for matrix multiplication on x86 SDFG LoopReorder MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 14. spcl.inf.ethz.ch @spcl_eth 14 Performance for matrix multiplication on x86 SDFG BlockTiling LoopReorder MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 15. spcl.inf.ethz.ch @spcl_eth 15 Performance for matrix multiplication on x86 RegisterTiling BlockTiling LoopReorder MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 16. spcl.inf.ethz.ch @spcl_eth 16 Performance for matrix multiplication on x86 LocalStorage RegisterTiling BlockTiling LoopReorder MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 17. spcl.inf.ethz.ch @spcl_eth 17 Performance for matrix multiplication on x86 PromoteTransient LocalStorage RegisterTiling BlockTiling LoopReorder MapReduceFusionNaïve Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs
  • 18. spcl.inf.ethz.ch @spcl_eth Preprint (arXiv): Ben-Nun, de Fine Licht, Ziogas, TH: Stateful Dataflow Multigraphs: A Data-Centric Model for High-Performance Parallel Programs 18 Performance for matrix multiplication on x86 Intel MKL OpenBLAS 25% difference DAPP With more tuning: 98.6% of MKLBut do we really care about MMM on x86 CPUs?
  • 19. spcl.inf.ethz.ch @spcl_eth Hardware Mapping: Load/Store Architectures  Recursive code generation (C++, CUDA)  Control flow: Construct detection and gotos  Parallelism  Multi-core CPU: OpenMP, atomics, and threads  GPU: CUDA kernels and streams  Connected components run concurrently  Memory and interaction with accelerators  Array-array edges create intra-/inter-device copies  Memory access validation on compilation  Automatic CPU SDFG to GPU transformation  Tasklet code immutable 19
  • 20. spcl.inf.ethz.ch @spcl_eth Hardware Mapping: Pipelined Architectures  Module generation with HDL and HLS  Integration with Xilinx SDAccel  Nested SDFGs become FPGA state machines  Parallelism  Exploiting temporal locality: Pipelines  Exploiting spatial locality: Vectorization, replication  Replication  Enables parametric systolic array generation  Memory access  Burst memory access, vectorization  Streams for inter-PE communication 20
  • 21. Performance (Portability) Evaluation  Three platforms: Intel Xeon E5-2650 v4 CPU (2.20 GHz, no HT); NVIDIA Tesla P100 GPU; Xilinx VCU1525 hosting an XCVU9P FPGA  Compilers: GCC 8.2.0, Clang 6.0, icc 18.0.3  Polyhedral optimizing compilers: Polly 6.0, Pluto 0.11.4, PPCG 0.8  GPU and FPGA compilers: CUDA nvcc 9.2, Xilinx SDAccel 2018.2  Frameworks and optimized libraries: HPX, Halide, Intel MKL, NVIDIA cuBLAS, cuSPARSE, CUTLASS, NVIDIA CUB
  • 22. Performance Evaluation: Fundamental Kernels (CPU)  Database Query: selects roughly 50% of a 67,108,864-entry column  Matrix Multiplication (MM): 2048x2048x2048  Histogram: 8192x8192  Jacobi stencil: 2048x2048 for T=1024  Sparse Matrix-Vector Multiplication (SpMV): 8192x8192 CSR matrix (nnz=33,554,432)  Result highlights: 99.9% of MKL; 8.12x faster; 98.6% of MKL; 2.5x faster; 82.7% of Halide
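The SpMV kernel from the list uses the standard CSR layout (`indptr`, `indices`, `data`). A minimal reference sketch, not the benchmarked implementation:

```python
def spmv_csr(indptr, indices, data, x):
    # y = A @ x for a CSR matrix: row r's nonzeros live at
    # data[indptr[r]:indptr[r+1]], with column ids in indices.
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(y)):
        s = 0.0
        for idx in range(indptr[row], indptr[row + 1]):
            s += data[idx] * x[indices[idx]]
        y[row] = s
    return y
```

The irregular, data-dependent inner loop bound is what makes SpMV a useful stress test for dataflow representations.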
  • 23. Performance Evaluation: Fundamental Kernels (GPU, FPGA)  Result highlights: 309,000x; 19.5x of Spatial; 90% of CUTLASS
  • 24. Performance Evaluation: Polybench (CPU)  Polyhedral benchmark suite with 30 applications  Without any transformations, achieves a 1.43x (geometric mean) speedup over general-purpose compilers
  • 25. Performance Evaluation: Polybench (GPU, FPGA)  Automatically transformed from CPU code  GPU: 1.12x geomean speedup  FPGA: the first full set of placed-and-routed Polybench; 11.8x
  • 26. Case Study: Parallel Breadth-First Search  Compared with Galois and Gluon, state-of-the-art graph processing frameworks on CPU  Graphs — road maps: USA, OSM-Europe; social networks: Twitter, LiveJournal; synthetic: Kronecker graphs  Performance portability – fine, but who cares about microbenchmarks?
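The algorithm under test is level-synchronous BFS — each iteration expands a whole frontier, which is the structure frameworks like Galois parallelize. A sequential Python sketch of that structure (illustrative, not the benchmarked code):

```python
def bfs_levels(adj, source):
    # adj: dict node -> iterable of neighbors; returns BFS depth per
    # reachable node. The inner loop over the frontier is the part a
    # parallel framework would distribute across threads.
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:
            for v in adj.get(u, ()):
                if v not in level:        # racy in parallel: needs atomics
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level
```

The frontier sizes vary wildly between road networks (huge diameter, thin frontiers) and social networks (tiny diameter, massive frontiers), which is why the slide's graph mix matters.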
  • 27. Remember the promise of DAPP – on to a real application!  Workflow: the domain scientist formulates the problem (e.g., ∂u/∂t − α∇²u = 0) as a high-level program (Python/NumPy, TensorFlow, MATLAB, DSLs) via the SDFG builder API; this yields the data-centric intermediate representation (SDFG, §3); the performance engineer applies graph transformations (API, interactive, §4) guided by hardware information; the SDFG compiler emits CPU/GPU binaries and FPGA modules over a thin runtime infrastructure, feeding performance results back to the performance engineer
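The problem formulation on the slide, ∂u/∂t − α∇²u = 0, corresponds to the kind of short loop program a domain scientist would hand to the pipeline. A minimal 1-D explicit-Euler sketch, with hypothetical sizes and parameters (the real input would be NumPy-style array code):

```python
def heat_step_1d(u, alpha, dt, dx):
    # One explicit Euler step of du/dt = alpha * d2u/dx2 with fixed
    # boundary values. The stencil access pattern u[i-1], u[i], u[i+1]
    # is exactly the dataflow an SDFG makes explicit.
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * dt / dx**2 * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new
```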
  • 28. spcl.inf.ethz.ch @spcl_eth 28 Next-Generation Transistors need to be cooler – addressing self-heating
  • 29. Quantum Transport Simulations with OMEN  OMEN code (Luisier et al., Gordon Bell award finalist 2011 and 2015): 90k SLOC of C, C++, CUDA, MPI, OpenMP, …  NEGF couples electrons G(E, k_z) and phonons D(ω, q_z) through the scattering self-energies (SSE): Σ(E, k_z) computed from G(E ± ℏω, k_z − q_z) and D(ω, q_z); Π(ω, q_z) computed from G(E, k_z) and G(E + ℏω, k_z + q_z)  NEGF equations — electrons: (E·S − H − Σ^R)·G^R = I, G^< = G^R·Σ^<·G^A; phonons: (ω² − Φ − Π^R)·D^R = I, D^< = D^R·Π^<·D^A
  • 30. All of OMEN (90k SLOC) in a single SDFG – (collapsed) tasklets contain more SDFGs  The top-level graph iterates GF and SSE states to convergence: RGF computes G^≷[k_z, E] from H[k_z] and Σ^≷[k_z, E], and D^≷[q_z, ω] from Φ[q_z] and Π^≷[q_z, ω]; SSE recomputes Σ^≷ and Π^≷ from G^≷, D^≷, and ∇H over (k_z, E, q_z, ω, a, b); the currents I_e and I_φ are accumulated with sum conflict resolution (CR: Sum)
  • 31. Zooming into SSE (a large share of the runtime)  DaCe transformations yield between 100–250x less communication at scale! (from PB to TB)
  • 32. Additional interesting performance insights  Python is slow! Ok, we knew that – but compiled can be fast! (Piz Daint single node, P100)  cuBLAS can be very inefficient on the basic operation in SSE (many very small MMMs) – well, unless you floptimize  Measured with 5k atoms on Piz Daint and Summit
  • 33. 10,240 atoms on 27,360 V100 GPUs (full-scale Summit): 56 Pflop/s with I/O (28% of peak)  Already ~100x speedup on 25% of Summit – the original OMEN does not scale further!  Communication time reduced by 417x on Piz Daint!  Communication volume on full-scale Summit reduced from 12 PB/iter to 87 TB/iter
  • 34. An example of fine-grained data-centric optimization (i.e., how to vectorize)
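One common fine-grained step toward vectorization is strip-mining a map into vector-width chunks plus a scalar remainder. A pure-Python sketch of the loop structure such a transformation produces — `W` is a hypothetical vector width, and the real transformation rewrites the SDFG rather than Python loops:

```python
def saxpy_strip_mined(a, x, y, W=4):
    # out[i] = a * x[i] + y[i], split into W-wide chunks. In emitted C
    # code, each inner chunk would map to one SIMD instruction.
    n = len(x)
    out = [0.0] * n
    main = n - n % W
    for i0 in range(0, main, W):
        for i in range(i0, i0 + W):       # one "vector lane" group
            out[i] = a * x[i] + y[i]
    for i in range(main, n):              # scalar remainder loop
        out[i] = a * x[i] + y[i]
    return out
```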
  • 35. spcl.inf.ethz.ch @spcl_eth 35 Overview and wrap-up This project has received funding from the European Research Council (ERC) under grant agreement "DAPP (PI: T. Hoefler)".

Editor's Notes

  • #3: JOHANNES: you said something about scaling stopping at 0nm, which didn’t make a lot of sense. As you also said, scaling stops when you hit the size of individual molecules  Mark Horowitz talk: https://www.youtube.com/watch?v=7gGnRGko3go  45nm, 0.9V → 32bit FADD: 0.9pJ, FMUL: 3.7pJ, 2-operand (64bit) load from L1 (8kiB): 10pJ, from L2 (1MiB): 100pJ, from DRAM: 1.3-2.6nJ [1]  FP computation approaches the Landauer limit (3e-21 J) [2]: assuming all bits switch in a 32-bit FP op, ~0.2aJ (attojoules; 1 aJ = 10^-6 pJ)  Electrical data movement suffers from resistance; heat loss ~ resistivity * length / cross-section area → process shrinkage benefits computation energy but increases communication energy
  • #4: Add explanation for control locality. GPUs and CPUs reduce control overhead with SIMD/SIMT – if it’s 10x wider, then we’re again op-dominated. BUT the memory problem remains, so we need large caches that are inefficient (due to addressing) – not so in dataflow
  • #5: GPUs and CPUs reduce control overhead with SIMD/SIMT – if it’s 10x wider, then we’re again op-dominated. BUT the memory problem remains, so we need large caches that are inefficient (due to addressing) – not so in dataflow. SIMD/SIMT is also less flexible – but of course more efficient due to better silicon optimization
  • #11: IDE Plugin?
  • #20: Vector types for all architectures Weakly connected components
  • #21: Weakly connected components
  • #22: Thorough
  • #26: Keep in mind that PPCG is a specialized tool that does this transformation only for polyhedral applications.
  • #27: Highly irregular. Around 3 transformations IIRC. We have more applications in deep learning and more
  • #34: - Needs better communication models!!!
  • #35: - Needs better communication models!!!