Advanced Spark and TensorFlow
Meetup
May 26, 2016
Meetup Agenda
Meetup Updates (Chris F)
Technology Updates (Chris F)
Spark + TensorFlow Model Serving/Deployment (Chris F)
Neural Net and TensorFlow Landscape (Chris F)
TensorFlow Core, Best Practices, Hidden Gems (Sam A)
TensorFlow Distributed (Fabrizio M)
Meetup Updates
New Sponsor
Meetup Metrics
Co-presenters
Sam Abrahams
“LA Kid” (from DC)
Fabrizio Milo
“Italian Stallion”
Chris Fregly
“Chicago Fire”
advancedspark.com
Workshop - June 4th - Spark ML + TensorFlow
Chris Fregly
Technology Updates
github.com/fluxcapacitor/pipeline
Neural Network Tool Landscape
TensorFlow Tool Landscape
TensorFlow Serving
Spark ML Serving
pipeline.io (Shh… Stealth)
Technology Updates...
Spark 2.0
Kafka 0.10.0 + Confluent 3.0
CUDA 8.0 + cuDNN v5
Whole-Stage Code Gen
● SPARK-12795
● Physically Fuse Together Operators (within a Stage) into 1 Operation
● Avoids Excessive Virtual Function Calls
● Utilize CPU Registers vs. L1, L2, Main Memory
● Loop Unrolling and SIMD Code Generation
● Speeds up CPU-bound workloads (not I/O-bound)
Vectorization (SPARK-12992)
● Operate on Batches of Data
● Reduces Virtual Function Calls
● Fallback if Whole-Stage Not an Option
Spark 2.0: Core
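A minimal PySpark sketch (assuming a Spark 2.0 SparkSession named spark): in Spark 2.0's explain() output, operators fused by whole-stage code generation are prefixed with '*'.

from pyspark.sql import functions as F

df = spark.range(0, 1000000) \
          .withColumn("squared", F.col("id") * F.col("id")) \
          .filter(F.col("squared") % 2 == 0)

# Print the physical plan; fused (whole-stage codegen'd) operators are marked with '*'
df.explain()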
Save/Load Support for All Models and Pipelines!
● Python and Scala
● Saved as Parquet ONLY
Local Linear Algebra Library
● SPARK-13944, SPARK-14615
● Drop-in Replacement for Distributed Linear Algebra library
● Opens up door to my new pipeline.io prediction layer!
Spark 2.0: ML
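A hedged PySpark 2.0 sketch of the new save/load support; the pipeline stages, paths, and the training_df DataFrame are illustrative placeholders.

from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import Tokenizer, HashingTF
from pyspark.ml.classification import LogisticRegression

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="features"),
    LogisticRegression(maxIter=10)])

model = pipeline.fit(training_df)        # training_df: DataFrame with 'text' and 'label' columns
model.save("/tmp/spark-ml-model")        # persisted to disk (Parquet-backed)
reloaded = PipelineModel.load("/tmp/spark-ml-model")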
Kafka v0.10 + Confluent v3.0 Platform
Kafka Streams
New Event-Creation Timestamp
Rack Awareness (THX NFLX!)
Kerberos + SASL Improvements
Ability to Pause a Connector (i.e., Maintenance)
New max.poll.records to limit messages retrieved
CUDA Deep Neural Network (cuDNN) v5 Updates
LSTM (Long Short-Term Memory) RNN Support for NLP use cases (6x speedup)
Optimized for NVIDIA Pascal GPU Architecture including FP16 (low precision)
Highly-optimized networks with 3x3 convolutions (GoogLeNet)
github.com/fluxcapacitor/pipeline Updates
Tools and Examples
● JupyterHub and Python 3 Support
● Spark-Redis Connector
● Theano
● Keras (TensorFlow + Theano Support)
Code
● Spark ML DecisionTree Code Generator (Janino JVM ByteCode Generator)
● Hot-swappable ML Model Watcher (similar to TensorFlow Serving)
● Eigenface-based Image Recommendations
● Streaming Matrix Factorization w/ Kafka
● Netflix Hystrix-based Circuit Breaker - Prediction Service @ Scale
Neural Network Landscape
Interesting Neural Net Use Case
Bonus: CPU vs. GPU
Bonus! GPUs and Branching
TensorFlow Landscape
TensorFlow Core
TensorFlow Distributed
TensorFlow Serving (similar to Prediction.IO, Pipeline.IO)
TensorBoard (Visualize Neural Network Training)
Playground (Toy)
SkFlow (Scikit-Learn + TensorFlow)
Keras (High-level API for both TensorFlow and Theano)
Models (Parsey McParseface/SyntaxNet)
TensorFlow Serving (Model Deployment)
Dynamic Model Loader
Model Versioning & Rollback
Written in C/C++
Extend to Serve any Model!
Demos!!
Spark ML Serving (Model Deployment)
Same thing except for Spark...
Keep an eye on pipeline.io!
Sam Abrahams
TensorFlow Core
Best Practices
Hidden Gems
TensorFlow Core Terminology
gRPC (?) - Should this go here or in distributed?
TensorBoard Overview (?)
Explaining this will go a long way ->
Who dis?
•My name is Sam Abrahams
•Los Angeles based machine learning engineer
•TensorFlow White Paper Notes
•TensorFlow for Raspberry Pi
•Contributor to the TensorFlow project
samjabrahams @sabraha
This Talk: Sprint Through:
• Core TensorFlow API and terminology
• TensorFlow workflow
• Example TensorFlow Code
• TensorBoard
TensorFlow Programming Model
• Very similar to Theano
• The primary user-facing API is Python, though there is a
partial C++ API for executing models
• Computational code is written in C++, implemented for
CPUs, GPUs, or both
• Integrates tightly with NumPy
TensorFlow Programming
Generally boils down to two steps:
1. Build the computational graph(s) that you’d
like to run
2. Use a TensorFlow Session to run your graph
one or more times
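A tiny end-to-end sketch of both steps (TF 0.x-era API):

import tensorflow as tf

# Step 1: build the computational graph
a = tf.constant(3)
b = tf.constant(5)
c = tf.add(a, b)

# Step 2: run the graph with a Session
with tf.Session() as sess:
    print(sess.run(c))  # 8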
Graph
• The primary structure of a TensorFlow model
• Generally there is one graph per program, but
TensorFlow can support multiple Graphs
• Nodes represent computations or data transformations
• Edges represent data transfer or computational control
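A short sketch of working with more than one Graph; most programs just use the implicit default graph.

import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    x = tf.constant(1)   # this node lives in g1

g2 = tf.Graph()
with g2.as_default():
    y = tf.constant(2)   # this node lives in g2

# Each Session is bound to exactly one graph
with tf.Session(graph=g1) as sess:
    print(sess.run(x))   # 1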
What is a data flow graph?
• Also known as a “computational graph”, or just “graph”
• Nice way to visualize series of mathematical computations
• Here’s a simple graph showing the addition of two variables:
[Diagram: nodes a and b feed into the node a + b]
Components of a Graph
Graphs are composed of two types of elements:
• Nodes
• These are the elliptical shapes in the graph, and represent some form of
computation
• Edges
• These are the arrows in the graph, and represent data flowing from one
node into another node.
[Diagram: NODE → EDGE → NODE]
Why Use Graphs?
• They are highly compositional
• Useful for calculating derivatives
• It’s easier to implement distributed computation
• Computation is already segmented nicely
• Neural networks are already implemented as computational graphs!
Tensors
• N-Dimensional Matrices
• 0-Dimensional → Scalar
• 1-Dimensional → Vector
• 2-Dimensional → Matrix
• All data that moves through a TensorFlow graph is a Tensor
• TensorFlow can convert Python native types or NumPy arrays
into Tensor objects
Tensors
•Python
# Scalar
> 3
# Vector
> [1, 2, 3]
# Matrix
> [[1, 2, 3],
   [4, 5, 6],
   [7, 8, 9]]
# 3-D Tensor
> [[ [0,0], [0,1], [0,2] ],
   [ [1,0], [1,1], [1,2] ],
   [ [2,0], [2,1], [2,2] ]]
•NumPy
# Matrix as NumPy Array
> np.array([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]],
           dtype=np.int64)
Tensors
• Best practice is to use NumPy arrays when directly
defining Tensors
• Can explicitly set data type
• This presentation does not create Tensors with NumPy
• Space is limited
• I am lazy
• Tensors returned by TensorFlow graphs are NumPy
arrays
Operations
• “Op” for short
• Represent any sort of computation
• Take in zero or more Tensors as input, and output zero
or more Tensors
• Numerous uses: perform matrix algebra, initialize
variables, print info to the console, etc.
• Do not run when defined: they must be called from a
TensorFlow Session (coming up later)
Operations
•Quick Example
> import tensorflow as tf
> a = tf.mul(3,5) # Returns handle to new tf.mul node
> sess = tf.Session() # Creates TF Session
> sess.run(a) # Actually runs the Operation
out: 15
Placeholders
• Define “input” nodes
• Specifies information that will be provided when the graph is run
• Typically used for training data
• Define the tensor shape and data type when created:
import tensorflow as tf
# Create a placeholder of size 100x400
# With 32-bit floating point data type
my_placeholder = tf.placeholder(tf.float32, shape=(100,400))
TensorFlow Session
• In charge of coordinating graph execution
• Most important method is run()
• This is what actually runs the Graph
• It takes in two parameters, ‘fetches’ and ‘feed_dict’
• ‘fetches’ is a list of objects you’d like to get the results for, such
as the final layer in a neural network
• ‘feed_dict’ is a dictionary that maps tensors (often
Placeholders) to values those tensors should use
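A quick sketch: 'fetches' can be a list, so several results come back from a single run() call; the placeholder x is fed through 'feed_dict'.

import tensorflow as tf

x = tf.placeholder(tf.float32)
doubled = x * 2
squared = x * x

sess = tf.Session()
print(sess.run([doubled, squared], feed_dict={x: 3.0}))  # [6.0, 9.0]
sess.close()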
TensorFlow Variables
• Contain tensor data that persists each time you run your
Graph
• Typically used to hold weights and biases of a machine
learning model
• The final values of the weights and biases, along with
the Graph shape, define a trained model
• Before being run in a Graph, must be initialized (will
discuss this at the Session slide)
TensorFlow Variables
•Old school: Define with tensorflow.Variable()
•Best practices: tf.get_variable()
•Update its information with the assign() method
import tensorflow as tf
# Create a scalar variable initialized to 0 and named 'my_variable'
my_var = tf.get_variable('my_variable', shape=[],
                         initializer=tf.constant_initializer(0))
# Increment the variable by one (assign() returns an Op to run in a Session)
my_var.assign(my_var + 1)
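A minimal sketch of why tf.get_variable() is the best practice: paired with tf.variable_scope(), the same weights can be created once and fetched again by name (names and shapes here are illustrative).

import tensorflow as tf

with tf.variable_scope("layer1"):
    w = tf.get_variable("weights", shape=[100, 10],
                        initializer=tf.random_normal_initializer())

with tf.variable_scope("layer1", reuse=True):
    w_again = tf.get_variable("weights")  # returns the same Variable object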
Building the graph!
import tensorflow as tf
# create a placeholder for inputting floating point data
x = tf.placeholder(tf.float32)
# Make a Variable with the starting value of 0
start = tf.Variable(0.0)
# Create a node that is the value of (start + x)
y = start.assign(start + x)
[Diagram: placeholder x and Variable start feed into the node y = x + start]
Running a Session (using previous graph)
• Start a Session with tensorflow.Session()
• Close it when you’re done!
# Open up a TensorFlow Session
# and assign it to the handle 'sess'
sess = tf.Session()
# Important: initialize the Variable
init = tf.initialize_all_variables()
sess.run(init)
# Run the graph to get the value of y
# Feed in different values of x each time
print(sess.run(y, feed_dict={x:1})) # Prints 1.0
print(sess.run(y, feed_dict={x:0.5})) # Prints 1.5
print(sess.run(y, feed_dict={x:2.2})) # Prints 3.7
# Close the Session
sess.close()
Devices
• A single device is a CPU, GPU, or other computational
unit (TPU?!)
• One machine can have multiple devices
• “Distributed” → Multiple machines
• Multi-device != Distributed
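A small sketch of pinning ops to devices on one machine; "/gpu:0" assumes a GPU is present, and allow_soft_placement lets TensorFlow fall back if it is not.

import tensorflow as tf

with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.device("/gpu:0"):
    b = tf.matmul(a, a)

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                        log_device_placement=True))
print(sess.run(b))
sess.close()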
TensorBoard
• One of the most useful (and underutilized) aspects of
TensorFlow - takes in serialized summary data from the graph and
visualizes it.
• Complete control over what data is stored
Let’s do a brief live demo. Hopefully nothing blows up!
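A minimal logging sketch using the TF 0.8-era summary API (tf.scalar_summary / tf.train.SummaryWriter; later releases renamed these). The loss here is a stand-in.

import tensorflow as tf

x = tf.placeholder(tf.float32)
loss = tf.square(x)                          # stand-in for a real training loss
loss_summary = tf.scalar_summary("loss", loss)

sess = tf.Session()
writer = tf.train.SummaryWriter("/tmp/tf_logs", sess.graph_def)

for step in range(100):
    summary_str = sess.run(loss_summary, feed_dict={x: 100.0 - step})
    writer.add_summary(summary_str, step)
writer.close()
# Then launch: tensorboard --logdir=/tmp/tf_logs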
TensorFlow Codebase
Standard and third-party C++ libraries (e.g., Eigen)
TensorFlow core framework
C++ kernel implementation
SWIG
Python client API
TensorFlow Codebase Structure: Core
tensorflow/core : Primary C++ implementations and runtimes
•core/ops – Registration of Operation signatures
•core/kernels – Operation implementations
•core/framework – Fundamental TensorFlow classes and functions
•core/platform – Abstractions for operating system platforms
Based on information from Eugene Brevdo, Googler and TensorFlow member
TensorFlow Codebase Structure: Python
tensorflow/python : Python implementations, wrappers, API
•python/ops – Code that is accessed through the Python API
•python/kernel_tests – Unit tests (which means examples)
•python/framework – TensorFlow fundamental units
•python/platform – Abstracts away system-level Python calls
contrib/ : contributed or non-fully adopted code
Based on information from Eugene Brevdo, Googler and TensorFlow member
Learning TensorFlow: Beyond the Tutorials
● There is a ton of excellent documentation in the code itself,
especially in the C++ implementations
● If you ever want to write a custom Operation or class, you need
to immerse yourself
● Easiest way to dive in: look at your #include statements!
● Use existing code as reference material
● If you see something new, learn something new
Tensor - tensorflow/core/framework/tensor.h
• dtype() - returns data type
• shape() - returns TensorShape representing tensor’s shape
• dims() – returns the number of dimensions in the tensor
• dim_size(int d) - returns size of specified dimension
• NumElements() - returns number of elements
• IsSameSize(const Tensor& b) – do these Tensors' dimensions match?
• TotalBytes() - estimated memory usage
• CopyFrom() – Copy another tensor and share its memory
• ...plus many more
Some files worth checking out:
tensorflow/core/framework
● tensor_util.h: deep copying, concatenation, splitting
● resource_mgr.h: Resource manager for mutable Operation state
● register_types.h: Useful helper macros for registering Operations
● kernel_def_builder.h: Full documentation for kernel definitions
tensorflow/core/util
● cuda_kernel_helper.h: Helper functions for CUDA kernel implementations
● sparse/sparse_tensor.h: Documentation for C++ SparseTensor class
General advice for GPU implementation:
1. If possible, always register for GPU, even if it’s not a full implementation
○ Want to be able to run code on GPU if it won’t bottleneck the system
2. Before writing a custom GPU implementation, sanity check! Will it help?
○ Not everything benefits from parallelization
3. Utilize Eigen!
○ TensorFlow’s core Tensor class is based on Eigen
4. Be careful with memory: you are in charge of mutable data structures
Fabrizio Milo
TensorFlow Distributed
github.com/Mistobaan twitter.com/fabmilo
Distributed Tensorflow
gRPC (http://www.grpc.io/)
- A high performance, open source, general RPC framework that puts mobile and HTTP/2
first.
- Protocol Buffers
- HTTP2
[Diagram: two TensorFlow processes exchange tensor.proto messages over gRPC / HTTP2]
Distributed Tensorflow - Terminology
- Cluster
- Jobs
- Tasks
Distributed Tensorflow - Terminology
- Cluster: ClusterSpec
- Jobs: Parameter Server, Worker
- Tasks: Usually 0, 1, 2, …
Example of a Cluster Spec
{
"ps":[
"8.9.15.20:2222",
],
"workers":[
"8.34.25.90:2222",
"30.21.18.24:2222",
"4.17.19.14:2222"
]
}
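The same cluster expressed in code, as a sketch: tf.train.ClusterSpec plus one task's in-process server (IPs taken from the slide; the job name follows the conventional "ps"/"worker" naming used in the later slides).

import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps":     ["8.9.15.20:2222"],
    "worker": ["8.34.25.90:2222", "30.21.18.24:2222", "4.17.19.14:2222"]})

# Each process starts the server for its own job/task
server = tf.train.Server(cluster, job_name="worker", task_index=0)
# A parameter server process would typically just block: server.join()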
One Session, One Server, One Job, One Task
[Diagram: one Session connected to one Server]
Use Cases
- Training
- Hyperparameter Optimization
- Ensembling
Graph Components
tf.python.training.session_manager.SessionManager
1. Checkpointing trained variables as the training
progresses.
2. Initializing variables on startup, restoring them from the
most recent checkpoint after a crash, or waiting for
checkpoints to become available.
tf.python.training.supervisor.Supervisor
# Single Program
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that will checkpoint the model in '/tmp/mydir'.
sv = Supervisor(logdir='/tmp/mydir')
# Get a Tensorflow session managed by the supervisor.
with sv.managed_session(FLAGS.master) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
tf.python.training.supervisor.Supervisor
# Multiple Replica Program
is_chief = (server_def.task_index == 0)
server = tf.train.Server(server_def)
with tf.Graph().as_default():
...add operations to the graph...
# Create a Supervisor that uses log directory on a shared file system.
# Indicate if you are the 'chief'
sv = Supervisor(logdir='/shared_directory/...', is_chief=is_chief)
# Get a Session in a TensorFlow server on the cluster.
with sv.managed_session(server.target) as sess:
# Use the session to train the graph.
while not sv.should_stop():
sess.run(<my_train_op>)
Use Cases: Asynchronous Training
Use Cases: Synchronous Training
tf.train.SyncReplicasOptimizer
This optimizer avoids stale gradients by collecting gradients from all replicas,
summing them, then applying them to the variables in one shot, after
which replicas can fetch the new variables and continue.
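A rough sketch of wrapping a base optimizer; exact constructor arguments and the required chief/queue-runner setup vary by TensorFlow version, so treat this as illustrative only (loss is assumed to be defined elsewhere in the training graph).

import tensorflow as tf

global_step = tf.Variable(0, trainable=False, name="global_step")
base_opt = tf.train.GradientDescentOptimizer(0.01)

# Aggregate gradients from 3 worker replicas before applying one update
sync_opt = tf.train.SyncReplicasOptimizer(base_opt, replicas_to_aggregate=3)

train_op = sync_opt.minimize(loss, global_step=global_step)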
tf.train.replica_device_setter
The device setter will automatically place Variables ops on separate
parameter servers (ps). The non-Variable ops will be placed on the workers.
device_setter = tf.train.replica_device_setter(cluster=cluster_def)
with tf.device(device_setter):
  pass
Use Cases: Training
Replication can be In-Graph or Between-Graphs, and updates can be
Asynchronous or Synchronous (four possible combinations).
In-Graph Replication
[Diagram: a single Client builds one graph spanning /job:ps (/task:0, /task:1) and /job:worker (/task:0, /task:1)]
with tf.device("/job:ps/task:0"):
weights_1 = tf.Variable(...)
biases_1 = tf.Variable(...)
with tf.device("/job:ps/task:1"):
weights_2 = tf.Variable(...)
biases_2 = tf.Variable(...)
with tf.device("/job:worker/task:0"):
input, labels = ...
layer_1 = tf.nn.relu(...)
with tf.device("/job:worker/task:0"):
train_op = ...
logits = tf.nn.relu(...)
with tf.Session() as sess:
for _ in range(10000):
sess.run(train_op)
# To build a cluster with two ps jobs on hosts ps0 and ps1,
# and 3 worker jobs on hosts worker0, worker1 and worker2.
cluster_spec = {
    "ps": ["ps0:2222", "ps1:2222"],
    "worker": ["worker0:2222", "worker1:2222", "worker2:2222"]}
with tf.device(tf.train.replica_device_setter(cluster=cluster_spec)):
  # Build your graph
  v1 = tf.Variable(...)  # assigned to /job:ps/task:0
  v2 = tf.Variable(...)  # assigned to /job:ps/task:1
  v3 = tf.Variable(...)  # assigned to /job:ps/task:0
  # Run compute
  ...
Between-Graph Replication
[Diagram: /job:ps (/task:0, /task:1) and /job:worker (/task:0, /task:1), with one client per worker task]
Use Cases: Training - Hyperparameter Optimization
- Grid Search
- Random Search
- Gradient Optimization
- Bayesian
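A plain-Python sketch of generating configurations for the first two strategies; each configuration could be handed to a different worker task to train in parallel (hyperparameter names and values are illustrative).

import itertools
import random

learning_rates = [0.001, 0.01, 0.1]
batch_sizes = [32, 64, 128]

# Grid search: every combination
grid_configs = list(itertools.product(learning_rates, batch_sizes))

# Random search: a fixed budget of random combinations
random_configs = [(random.choice(learning_rates), random.choice(batch_sizes))
                  for _ in range(5)]

for lr, bs in grid_configs:
    print("dispatch one training replica with lr=%g, batch_size=%d" % (lr, bs))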
Ensembling
[Diagram: Model 0, Model 1, and Model 2 feed into an Ensemble]
Comparison with other frameworks
MXNet
Surprise Demo: Raspberry Pi Cluster
Distributed TensorFlow Cluster
Distributed TensorFlow
TensorBoard Visualizations
Workshop Demo - LARGE GCE Cluster
Super-Large Cluster
Attendee GCE (Google Compute Engine) Spec
50 GB RAM, 100 GB SSD, 8 CPUs (No GPUs)
TODO:
Build Script for Each Attendee to Run as Either a Worker or Parameter Server
Figure out split between Worker and Parameter Server
ImageNet
TODO: Train at full 64-bit precision, quantize down to 8-bit (Sam A)
Workshop Demo - Initial Cluster
8.34.215.90 (Sam)
130.211.128.240 (Fabrizio)
104.197.159.134 (Fregly)
Cluster Spec: “8.34.215.90, 130.211.128.240, 104.197.159.134”
SSH PEM file: http://advancedspark.com/keys/pipeline-training-gce.pem
chmod 600 pipeline-training-gce.pem
Username: pipeline-training
Password: password9
sudo docker images (VERIFY fluxcapacitor/pipeline)
START: sudo docker run -it --privileged --name pipeline --net=host -m 48g fluxcapacitor/pipeline bash
TensorFlow Operations: Hidden Gems
- Go over macros and functions provided in the library but not mentioned in the
documentation
- Brief overview of writing a custom op
- Testing GPU code
- Leveraging Eigen and existing code
- Python wrapper
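For the "Python wrapper" bullet, a hedged sketch of loading a compiled custom Operation from Python with tf.load_op_library; the .so path and op name are hypothetical.

import tensorflow as tf

custom_module = tf.load_op_library("/path/to/my_custom_op.so")
result = custom_module.my_custom_op(tf.constant([1.0, 2.0]))  # hypothetical op name

with tf.Session() as sess:
    print(sess.run(result))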
