Introduction to Snap Machine Learning
Andreea Anghel
Post-doctoral Researcher
IBM Research – Zurich
IBM Research - Zurich / Introduction to Snap Machine Learning / June 2018 / © 2018 IBM Corporation
Contents

Introduction
  What are GLMs?
  Why are GLMs useful?
  What is Snap Machine Learning?

Snap ML Architecture
  Multi-level parallelism
  Data-parallel framework
  Inter-node communication
  Intra-node communication

Implementation
  GPU Acceleration
  Out-of-core learning (DuHL)
  Streaming pipeline
  Software architecture

Code Examples
  snap-ml-local
  snap-ml-mpi
  snap-ml-spark

Experimental Results
  Application and datasets
  Single-node experiments
  Out-of-core performance (PCIe)
  Out-of-core performance (NVLINK)
  Dealing with slow networks
  Terabyte-scale benchmark

Conclusions
Introduction
What are GLMs?
Why are GLMs useful?
What is Snap Machine Learning?
What are GLMs?

Generalized Linear Models (GLMs) include Ridge Regression, Lasso Regression, Support Vector Machines and Logistic Regression, covering both regression and classification tasks.
Why are GLMs useful?

Fast training: GLMs can scale to datasets with billions of examples and/or features.

Less tuning: state-of-the-art algorithms for training linear models do not involve a step-size parameter.

Widely used in industry: the Kaggle "State of Data Science" survey asked 16,000 data scientists and ML practitioners what tools and algorithms they use on a daily basis. 37.6% of respondents use neural networks; 63.5% of respondents use logistic regression.

Interpretability: new data protection regulations in Europe (GDPR) give E.U. citizens the right to "obtain an explanation of a decision reached" by an algorithm.
What is Snap Machine Learning?

Snap ML: a new framework for fast training of GLMs.

Framework             Models    GPU Acceleration   Distributed Training   Sparse Data Support
scikit-learn          ML/{DL}   No                 No                     Yes
Apache Spark* MLlib   ML/{DL}   No                 Yes                    Yes
TensorFlow**          ML        Yes                Yes                    Limited
Snap ML               GLMs      Yes                Yes                    Yes

Models legend: Machine Learning (ML), Deep Learning (DL), Generalized Linear Models (GLMs).

* The Apache Software Foundation (ASF) owns all Apache-related trademarks, service marks, and graphic logos on behalf of our Apache project communities, and the names of all Apache projects are trademarks of the ASF.
** TensorFlow, the TensorFlow logo and any related marks are trademarks of Google Inc.
Snap ML Architecture
Multi-level parallelism
Data-parallel framework
Inter-node communication
Intra-node communication

Multi-level Parallelism

Level 1: parallelism across nodes connected via a network interface.

Level 2: parallelism across GPUs within the same node, connected via an interconnect (e.g. NVLINK).

Level 3: parallelism across the streaming multiprocessors of the GPU hardware.
Data-Parallel Framework

The large dataset is first partitioned across nodes: Partition 0 is assigned to Node 0 and Partition 1 to Node 1. Each node-level partition is then sub-partitioned across the GPUs of that node, e.g. Partition (0,0) through Partition (0,3) are assigned to GPU 0 through GPU 3 of Node 0, and Partition (1,0) through Partition (1,3) to GPU 0 through GPU 3 of Node 1.
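To make the two-level partitioning scheme concrete, here is a minimal NumPy sketch (not Snap ML's actual loader; the function and variable names are illustrative) that splits a dataset first across nodes and then across the GPUs within each node:

```python
# Illustrative sketch of the two-level data partitioning described above.
import numpy as np

def partition(indices, num_parts):
    """Split an index range into roughly equal contiguous parts."""
    return np.array_split(indices, num_parts)

num_examples, num_nodes, gpus_per_node = 1_000_000, 2, 4
all_rows = np.arange(num_examples)

# Level 1: one partition per node
node_partitions = partition(all_rows, num_nodes)

# Level 2: each node partition is split again across that node's GPUs
gpu_partitions = {
    (node, gpu): rows
    for node, node_rows in enumerate(node_partitions)
    for gpu, rows in enumerate(partition(node_rows, gpus_per_node))
}

print(len(gpu_partitions[(0, 3)]))   # rows assigned to GPU 3 of Node 0
```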
Inter-node communication

Figure: Node 0 through Node 3 exchange updates over a 10 Gbit/s network.

Intra-node communication

Figure: within a node, the GPUs are attached to the CPU of a POWER-based server via the local interconnect (e.g. NVLINK).

POWER is a registered trademark of International Business Machines Corporation in the United States, other countries, or both.
Snap ML Implementation
GPU Acceleration
Out-of-core learning (DuHL)
Streaming pipeline
Software architecture
GPU Acceleration

Each local sub-task can be solved effectively using stochastic coordinate descent (SCD) [Shalev-Shwartz 2013].

Recently, asynchronous variants of SCD have been proposed that run on multi-core CPUs [Liu 2013] [Tran 2015] [Hsieh 2015].

Twice-parallel asynchronous SCD (TPA-SCD) is another recent variant, designed to run on GPUs. It assigns each coordinate update to a different block of threads that execute asynchronously. Within each thread block, the coordinate update is computed using many tightly coupled threads.

T. Parnell, C. Dünner, K. Atasu, M. Sifalakis and H. Pozidis, "Large-scale stochastic learning using GPUs", ParLearning 2017
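To illustrate the SCD building block, below is a minimal CPU-only sketch of randomized coordinate descent for ridge regression. It shows the per-coordinate update that TPA-SCD parallelizes on the GPU; it is not Snap ML's kernel, and the function name and defaults are illustrative.

```python
# Randomized stochastic coordinate descent for ridge regression:
# minimize 0.5*||A x - b||^2 + 0.5*lam*||x||^2
import numpy as np

def scd_ridge(A, b, lam=1.0, epochs=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    r = b - A @ x                      # residual, maintained incrementally
    col_sq = (A ** 2).sum(axis=0)      # precomputed ||A_j||^2 per coordinate
    for _ in range(epochs):
        for j in rng.permutation(d):   # random coordinate order each epoch
            # exact minimization over coordinate j, all others fixed
            x_new = (A[:, j] @ r + col_sq[j] * x[j]) / (col_sq[j] + lam)
            r -= A[:, j] * (x_new - x[j])
            x[j] = x_new
    return x

# tiny usage example
A = np.random.randn(100, 20)
b = np.random.randn(100)
x = scd_ridge(A, b, lam=0.1)
```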
Out-of-Core Learning (DuHL)

CPU (main memory): the local data resides in main memory. The CPU randomly samples points and computes their duality gaps, and periodically identifies the set of points with the largest gaps, ensuring these points are in GPU memory.

GPU: the GPU holds the model and a subset of the data. It updates the model using the subset of the data that is currently in its memory, and periodically copies the model back to the CPU.

C. Dünner, T. Parnell and M. Jaggi, "Efficient Use of Limited-Memory Resources to Accelerate Linear Learning", NIPS 2017
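The DuHL scheme above can be summarized by the following schematic loop. The `solver` object and its methods (`gap`, `train`, `sync_model`) are hypothetical placeholders standing in for the real CPU/GPU solver, not the Snap ML API.

```python
# Schematic DuHL loop: train on the GPU-resident subset, then use per-point
# duality gaps computed on the CPU to decide which points stay on the GPU.
import numpy as np

def duhl_loop(solver, n_points, gpu_capacity, outer_iters=20, sample_size=10_000):
    rng = np.random.default_rng(0)
    # start with an arbitrary subset that fits in GPU memory
    on_gpu = set(range(min(gpu_capacity, n_points)))
    for _ in range(outer_iters):
        # 1) GPU: update the model using the data currently in GPU memory
        solver.train(sorted(on_gpu))
        solver.sync_model()                      # copy model back to the CPU
        # 2) CPU: randomly sample points and compute their duality gaps
        sample = rng.choice(n_points, size=min(sample_size, n_points), replace=False)
        gaps = {int(i): solver.gap(int(i)) for i in sample}
        # 3) keep the points with the largest gaps resident on the GPU
        ranked = sorted(gaps, key=gaps.get, reverse=True)
        on_gpu = set(ranked[:gpu_capacity])
    return solver
```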
Streaming Pipeline

While DuHL minimizes the amount of data transferred between CPU and GPU, this transfer can still become a bottleneck, particularly at the beginning of training. Using CUDA streams, we have implemented a training pipeline whereby the next set of data is copied onto the GPU while the current set is being trained.

TPA-SCD also requires a random permutation of the coordinates that are in GPU memory at any given time. To achieve this we introduce a third pipeline stage, whereby the CPU generates a set of random numbers for the points that are currently being copied. In the next stage, these random numbers are copied onto the GPU and sorted to produce the desired permutation.

Pipeline schedule (CPU / interconnect / GPU): in each stage, the CPU generates random numbers for chunk (i+1), the interconnect copies data chunk (i+1) and its random numbers onto the GPU, and the GPU sorts the random numbers for chunk (i) and trains on chunk (i). The same pattern repeats for chunks (i+1), (i+2), and so on.
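The copy/train overlap can be illustrated with a plain-Python analogue of the double-buffered schedule, in which a background thread stands in for the copy engine. `load_chunk` and `train_chunk` are hypothetical placeholders; the real implementation uses CUDA streams rather than threads.

```python
# Minimal sketch of overlapping the copy of chunk i+1 with training on chunk i.
from concurrent.futures import ThreadPoolExecutor

def load_chunk(i):
    # placeholder: read/convert chunk i and stage it for the device
    return f"chunk-{i}"

def train_chunk(data):
    # placeholder: run TPA-SCD-style updates on the staged chunk
    pass

def pipelined_training(num_chunks):
    with ThreadPoolExecutor(max_workers=1) as copier:
        next_chunk = copier.submit(load_chunk, 0)              # prefetch chunk 0
        for i in range(num_chunks):
            data = next_chunk.result()                         # wait for chunk i
            if i + 1 < num_chunks:
                next_chunk = copier.submit(load_chunk, i + 1)  # overlap copy of i+1
            train_chunk(data)                                  # train on chunk i

pipelined_training(8)
```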
Software Architecture

libglm: the underlying C++/CUDA template library.

snap-ml-local: small to medium-scale data; single-node deployment; multi-GPU support; scikit-learn-compatible Python API.

snap-ml-mpi: large-scale data; multi-node deployment in HPC environments; many-GPU support; Python API.

snap-ml-spark: large-scale data; multi-node deployment in Apache Spark environments; many-GPU support; Python/Scala/Java* API.

*Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Code Examples
snap-ml-local
snap-ml-mpi
snap-ml-spark
Example: snap-ml-local
Accelerate existing scikit-learn applications by changing only two lines of code.
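The slide shows the code as an image; as a rough sketch of the idea, an existing scikit-learn script might be adapted as below. The `snap_ml` import path, the data file name, and the `use_gpu` parameter are assumptions for illustration, not a confirmed API.

```python
# Hypothetical two-line change to GPU-accelerate an existing scikit-learn script.
from sklearn.datasets import load_svmlight_file
from sklearn.metrics import log_loss

# (1) swap the scikit-learn estimator import for its snap-ml-local counterpart
# from sklearn.linear_model import LogisticRegression
from snap_ml import LogisticRegression            # assumed import path

X, y = load_svmlight_file("criteo-kaggle.svm")    # placeholder file name

# (2) enable GPU acceleration when constructing the estimator
clf = LogisticRegression(use_gpu=True)            # assumed parameter name
clf.fit(X, y)
print(log_loss(y, clf.predict_proba(X)))
```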
Example: snap-ml-mpi
Describe application using high-level Python code.
Launch application on 4 nodes using mpirun (4 GPUs per node):
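The slide's code is again an image; the hypothetical sketch below conveys the flavour of a high-level Python driver and its mpirun launch. All module, class, and parameter names, the data loader, the hostfile, and the script name are assumptions for illustration only.

```python
# Hypothetical driver script (train_criteo.py); all names below are assumptions.
from snap_ml_mpi import LogisticRegression        # assumed import path
from snap_ml_mpi import load_svmlight_data        # assumed distributed loader

X, y = load_svmlight_data("criteo-tb.svm")        # placeholder path/format
clf = LogisticRegression(use_gpu=True, num_threads=32)   # assumed parameters
clf.fit(X, y)

# One possible launch on 4 nodes with 4 GPUs per node (hostfile is a placeholder):
#   mpirun -np 4 --hostfile hosts.txt python train_criteo.py
```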
Example: snap-ml-spark
Describe application using high-level Python code:
Launch application on Spark cluster (1 GPU per Spark executor):
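As with the previous examples, the original code is an image; below is a hypothetical PySpark-style sketch. The `snap_ml_spark` import, estimator parameters, jar name, resource flags, and paths are assumptions for illustration only.

```python
# Hypothetical snap-ml-spark driver; module and class names are assumptions.
from pyspark.sql import SparkSession
from snap_ml_spark import LogisticRegression      # assumed import path

spark = SparkSession.builder.appName("snap-ml-criteo").getOrCreate()
train = spark.read.format("libsvm").load("hdfs:///criteo/train")  # placeholder path

clf = LogisticRegression(use_gpu=True, max_iter=50)               # assumed parameters
model = clf.fit(train)

# One possible launch command (1 GPU per Spark executor); jar name and flags
# are placeholders:
#   spark-submit --master yarn --num-executors 4 \
#       --jars snap-ml-spark.jar train_criteo.py
```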
Experimental Results
Application and datasets
Single-node experiments
Out-of-core performance (PCIe)
Out-of-core performance (NVLINK)
Dealing with slow networks
Terabyte-scale benchmark
Click-through Rate (CTR) Prediction

Can we train ML models to predict whether or not a user will click on an advert? CTR prediction is the core business of many internet giants, and labelled training examples are being generated in real-time.

Dataset: criteo-tb
  Number of examples: 4.2 billion
  Number of features: 1 million
  Size: 3 Terabytes (SVM Light)

Dataset: criteo-kaggle
  Number of examples: 45 million
  Number of features: 1 million
  Size: 40 Gigabytes (SVM Light)
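Both datasets are distributed in SVM Light (libsvm) text format; for experimentation with the smaller criteo-kaggle set, such a file can be loaded into a sparse matrix with scikit-learn. The file name below is a placeholder.

```python
# Load an SVM Light file into a sparse CSR matrix and a label vector.
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("criteo-kaggle.svm", n_features=1_000_000)
print(X.shape, y.shape)   # 45 million rows, 1 million (sparse) feature columns
```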
Single-node Performance

Dataset        Examples    Nodes                  Total GPUs  Network  CPU-GPU interface
criteo-kaggle  45 million  1x Power AC922 server  1x V100     n/a      NVLINK 2.0
Out-of-core performance (PCIe Gen3)

Pipeline timing: in each stage, the GPU trains on chunk (i) for 90 ms while chunk (i+1) is copied onto the GPU (318 ms copy plus 12 ms init), so each stage takes about 330 ms and is dominated by the data transfer over PCIe.

Dataset    Examples     Nodes                     Total GPUs  Network  CPU-GPU interface
criteo-tb  200 million  1x Intel Xeon* Gold 6150  1x V100     n/a      PCIe Gen3

*Intel Xeon is a trademark or registered trademark of Intel Corporation or its subsidiaries in the United States and other countries.
Out-of-core performance (NVLINK 2.0)

Pipeline timing: in each stage, the GPU trains on chunk (i) for 90 ms while chunk (i+1) is copied onto the GPU (55 ms copy plus 3 ms init), so each stage takes about 93 ms; the copies complete well within the training window, and the pipeline is bound by GPU compute rather than by data transfer.

Dataset    Examples     Nodes                  Total GPUs  Network  CPU-GPU interface
criteo-tb  200 million  1x Power AC922 server  1x V100     n/a      NVLINK 2.0
Dealing with slow networks

Dataset    Examples   Nodes                  Total GPUs  Network          CPU-GPU interface
criteo-tb  1 billion  4x Power AC922 server  16x V100    1 Gbit Ethernet  NVLINK 2.0
criteo-tb  1 billion  4x Power AC922 server  16x V100    InfiniBand       NVLINK 2.0
Terabyte-scale Benchmark

Dataset    Examples     Nodes                  Total GPUs  Network     CPU-GPU interface
criteo-tb  4.2 billion  4x Power AC922 server  16x V100    InfiniBand  NVLINK 2.0
Conclusions

Snap ML is a new framework for fast training of GLMs.
Snap ML benefits from GPU acceleration.
It can be deployed in single-node and multi-node environments.
The hierarchical structure of the framework makes it suitable for cloud-based deployments.
It can leverage fast interconnects like NVLINK to achieve streaming, out-of-core performance.
Snap ML significantly outperforms other software frameworks for training GLMs in both single-node and multi-node benchmarks.
Snap ML can train a logistic regression classifier on the Criteo Terabyte Click Logs data in 1.5 minutes.
Snap ML will be available to try in June as part of IBM PowerAI (Tech Preview).

The Snap ML team: Celestine Dünner, Dimitrios Sarigiannis, Andreea Anghel, Nikolas Ioannou, Haris Pozidis, Thomas Parnell

https://www.zurich.ibm.com/snapml/