PyTorch for TensorFlow Developers
Overview
PyTorch constructs
Dynamic Graphs
--- Abdul Muneer
https://www.minds.ai/
Why do we use any Framework?
Model Prediction
Gradient computation ---- (automatic differentiation)
Why should we explore non TF frameworks?
Engineering is a key component in Deep Learning practice
What engineering problems do existing tools fail to solve?
Improves our understanding of TF
We do not end up being a one-trick pony
Helps us understand network implementations in those frameworks.
An alternative paradigm for implementing neural networks
Simple and intuitive to program and debug
What is PyTorch?
It’s a Python-based scientific computing package targeted at two audiences:
A replacement for numpy to use the power of GPUs
a deep learning research platform that provides maximum flexibility and speed
In [ ]: # MNIST example
        import torch
        import torch.nn as nn
        from torch.autograd import Variable

        class CNN(nn.Module):
            def __init__(self):
                super(CNN, self).__init__()
                self.layer1 = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=5, padding=2),
                    nn.BatchNorm2d(16),
                    nn.ReLU(),
                    nn.MaxPool2d(2))
                self.layer2 = nn.Sequential(
                    nn.Conv2d(16, 32, kernel_size=5, padding=2),
                    nn.BatchNorm2d(32),
                    nn.ReLU(),
                    nn.MaxPool2d(2))
                self.fc = nn.Linear(7*7*32, 10)

            def forward(self, x):
                out = self.layer1(x)
                out = self.layer2(out)
                out = out.view(out.size(0), -1)
                out = self.fc(out)
                return out
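A quick sanity check of the 7*7*32 flatten size (not on the original slide): push a dummy MNIST-shaped batch through the network and inspect the output shape.

In [ ]: cnn = CNN()
        dummy = Variable(torch.randn(1, 1, 28, 28))   # one 1-channel 28x28 image
        print(cnn(dummy).size())                      # expected: torch.Size([1, 10])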
In [ ]: cnn = CNN()

        # Loss and Optimizer
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(cnn.parameters(), lr=learning_rate)

        # Train the Model
        for epoch in range(num_epochs):
            for i, (images, labels) in enumerate(train_loader):
                images = Variable(images)
                labels = Variable(labels)

                # Forward + Backward + Optimize
                optimizer.zero_grad()
                outputs = cnn(images)
                loss = criterion(outputs, labels)
                loss.backward()
                optimizer.step()
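The training cell above assumes learning_rate, num_epochs and train_loader were defined earlier; a minimal setup sketch using torchvision's MNIST loader (the specific values and the use of torchvision are assumptions, not part of the original slides):

In [ ]: import torchvision.datasets as dsets
        import torchvision.transforms as transforms

        learning_rate = 0.001   # illustrative hyperparameters
        num_epochs = 5
        batch_size = 100

        train_dataset = dsets.MNIST(root='./data', train=True,
                                    transform=transforms.ToTensor(), download=True)
        train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                                   batch_size=batch_size, shuffle=True)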
PyTorch is imperative
In [1]: import torch

In [2]: x = torch.Tensor(5, 3)
        x
Out[2]: 0.0000e+00 -8.5899e+09 0.0000e+00
-8.5899e+09 6.6449e-33 1.9432e-19
4.8613e+30 5.0832e+31 7.5338e+28
4.5925e+24 1.7448e+22 1.1429e+33
4.6114e+24 2.8031e+20 1.2410e+28
[torch.FloatTensor of size 5x3]
PyTorch is imperative
No need for placeholders, everything is a tensor.
Debug it with a regular python debugger.
You can go almost as high level as Keras and as low level as pure TensorFlow.
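Since execution is ordinary Python, "debug it with a regular python debugger" really does mean pdb; a minimal sketch (a hypothetical toy module, not from the slides) of dropping a breakpoint inside forward():

In [ ]: import pdb

        class Debuggable(nn.Module):          # hypothetical toy module
            def __init__(self):
                super(Debuggable, self).__init__()
                self.fc = nn.Linear(3, 2)

            def forward(self, x):
                out = self.fc(x)
                pdb.set_trace()               # pause mid-forward; inspect out.size(), out.data, ...
                return out

        # Debuggable()(Variable(torch.randn(1, 3)))   # uncomment to hit the breakpoint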
Let's talk about Tensors and Variables
Tensors
similar to numpy’s ndarrays
can also be used on a GPU to accelerate computing.
In [2]: import torch
x = torch.Tensor(5, 3)
print(x)
0.0000 0.0000 0.0000
-2.0005 0.0000 0.0000
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
0.0000 0.0000 0.0000
[torch.FloatTensor of size 5x3]
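The slide notes that tensors can also live on a GPU; a minimal sketch (assuming a CUDA device is available, not shown on the original slide):

In [ ]: x = torch.rand(5, 3)
        if torch.cuda.is_available():
            x = x.cuda()                  # x is now a torch.cuda.FloatTensor
            y = torch.rand(5, 3).cuda()
            print(x + y)                  # the addition runs on the GPU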
Construct a randomly initialized matrix
In [3]: x = torch.rand(5, 3)
        print(x)

0.6543 0.1334 0.1410
0.6995 0.5005 0.6566
0.2181 0.1329 0.7526
0.6533 0.6995 0.6978
0.7876 0.7880 0.9808
[torch.FloatTensor of size 5x3]

In [4]: x.size()

Out[4]: torch.Size([5, 3])
Operations
Addition
In [5]: y = torch.rand(5, 3)
        print(x + y)

0.9243 0.3856 0.7254
1.6529 0.9123 1.4620
0.3295 1.0813 1.4391
1.5626 1.5122 0.8225
1.2842 1.1281 1.1330
[torch.FloatTensor of size 5x3]

In [6]: print(torch.add(x, y))

0.9243 0.3856 0.7254
1.6529 0.9123 1.4620
0.3295 1.0813 1.4391
1.5626 1.5122 0.8225
1.2842 1.1281 1.1330
[torch.FloatTensor of size 5x3]
Operations
Any operation that mutates a tensor in-place is post-fixed with an _
For example: x.copy_(y), x.t_() etc. will change x.
Addition: in-place
In [8]: print(y)

0.9243 0.3856 0.7254
1.6529 0.9123 1.4620
0.3295 1.0813 1.4391
1.5626 1.5122 0.8225
1.2842 1.1281 1.1330
[torch.FloatTensor of size 5x3]

In [9]: # adds x to y
        y.add_(x)
        print(y)

1.5786 0.5190 0.8664
2.3523 1.4128 2.1186
0.5476 1.2142 2.1917
2.2159 2.2116 1.5204
2.0718 1.9161 2.1138
[torch.FloatTensor of size 5x3]
NumPy-like indexing applies.
In [13]: y[:,1]
Out[13]: 0.5190
1.4128
1.2142
2.2116
1.9161
[torch.FloatTensor of size 5]
NumPy Bridge
The torch Tensor and NumPy array share their underlying memory locations;
changing one will change the other.
In [6]: a = torch.ones(3)
        print(a)

1
1
1
[torch.FloatTensor of size 3]

In [7]: b = a.numpy()
        print(b)

[ 1. 1. 1.]
In [8]: a.add_(1)
print(a)
print(b)
2
2
2
[torch.FloatTensor of size 3]
[ 2. 2. 2.]
Converting numpy Array to torch Tensor
In [13]: import numpy as np
         a = np.ones(5)
         b = torch.from_numpy(a)

In [16]: np.add(a, 1, out=a)

Out[16]: array([ 4., 4., 4., 4., 4.])

In [17]: print(a)
         print(b)

[ 4. 4. 4. 4. 4.]

4
4
4
4
4
[torch.DoubleTensor of size 5]
Autograd: automatic differentiation
The autograd package is central to all neural networks in PyTorch.
Variable
The central class of the autograd package
data
the raw tensor
grad
the gradient w.r.t. this variable
creator
creator of this Variable in the graph
Function
Function is another class that is central to the autograd implementation (think of operations in TF)
Variable and Function are interconnected and build up an acyclic graph
The graph encodes a complete history of computation.
Variables and Functions examples:
In [18]: import torch
from torch.autograd import Variable
In [21]: # Create a variable:
x = Variable(torch.ones(2, 2), requires_grad=True)
print(x)
Variable containing:
1 1
1 1
[torch.FloatTensor of size 2x2]
In [22]: print(x.data)
1 1
1 1
[torch.FloatTensor of size 2x2]
In [24]: print(x.grad)
None
In [25]: print(x.creator)
None
In [26]: # Do an operation on the variable:
y = x + 2
print(y)
Variable containing:
3 3
3 3
[torch.FloatTensor of size 2x2]
In [27]: print(y.data)
3 3
3 3
[torch.FloatTensor of size 2x2]
In [28]: print(y.grad)
None
In [29]: print(y.creator)
<torch.autograd._functions.basic_ops.AddConstant object at 0x106b449e8>
In [32]: # Do more operations on y
z = y * y * 3
out = z.mean()
print(z, out)
Variable containing:
27 27
27 27
[torch.FloatTensor of size 2x2]
Variable containing:
27
[torch.FloatTensor of size 1]
Gradients
Gradients are computed automatically when the .backward() method is invoked.
In [33]: out.backward()
print(x.grad)
Variable containing:
4.5000 4.5000
4.5000 4.5000
[torch.FloatTensor of size 2x2]
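Where the 4.5 comes from (a one-line check, not on the original slide): out = (1/4) * sum_i 3*(x_i + 2)^2, so d(out)/dx_i = (3/2)*(x_i + 2), which is 4.5 when x_i = 1, matching x.grad above.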
Updating Weights
weight = weight - learning_rate * gradient
In [ ]: learning_rate = 0.01
        # The learnable parameters of a model are returned by net.parameters()
        for f in net.parameters():
            f.data.sub_(f.grad.data * learning_rate)  # weight = weight - learning_rate * gradient
Use Optimizers instead of updating weights by hand.
In [ ]: import torch.optim as optim

        # create your optimizer
        optimizer = optim.SGD(net.parameters(), lr=0.01)

        for i in range(num_epochs):
            # in your training loop:
            optimizer.zero_grad()   # zero the gradient buffers
            output = net(input)
            loss = criterion(output, target)
            loss.backward()
            optimizer.step()
Dynamic Computational Graph
Why should we have a graph in the first place?
TF begins everything by talking about graphs and sessions.
What is a Dynamic Graph?
Backprop is defined by how the code is run.
Every single iteration can be different.
Dynamic Computational Graph
DL frameworks usually contain two "interpreters":
1. The host language (i.e. Python)
2. The computational graph: a language that sets up the computational graph, plus an execution mechanism that is different from the host language.
Static computational graphs can optimize computation.
Dynamic computational graphs are valuable when you cannot determine the computation in advance,
e.g. recursive computations that are based on variable data.
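A minimal sketch of such data-dependent control flow (a toy example, not from the slides): the number of loop iterations depends on the values in the data, yet backward() still works because the graph is rebuilt on every forward pass.

In [ ]: x = Variable(torch.randn(3), requires_grad=True)
        y = x * 2
        while y.data.norm() < 100:     # how many doublings happen depends on the data
            y = y * 2
        y.sum().backward()
        print(x.grad)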
Case against dynamic graphs
You don’t always need a dynamic graph.
Case against dynamic graphs
Dynamic capabilities can be added to a static computation graph.
... probably not a natural fit that your head will appreciate.
Exhibit A: tf.while_loop
Exhibit B: a whole new library called TensorFlow Fold
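For contrast, a minimal sketch (illustrative only, assuming a TF 1.x installation; not from the slides) of the same counting loop in both worlds: TF 1.x graph mode expresses it through tf.while_loop, while PyTorch just uses Python.

In [ ]: # TensorFlow 1.x graph-mode style
        import tensorflow as tf
        i = tf.constant(0)
        r = tf.while_loop(lambda i: tf.less(i, 10), lambda i: tf.add(i, 1), [i])

        # PyTorch style: a plain Python loop over a tensor
        j = torch.zeros(1)
        while j[0] < 10:
            j = j + 1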
Problems of achieving the same result with static graphs
Difficulty in expressing complex flow-control logic
Flow control looks very different in the graph than in the imperative coding style of the host language
Requires sophistication on the developer's part
Complexity of the computation graph implementation
Forced to address all possible cases.
Reduces opportunity for optimization
Case FOR dynamic graphs
Suits dynamic data well
Any kind of additional convenience will help speed up your explorations
It works just like Python
No split-brain experience of a separate execution engine running the computation
Easier to debug
Easier to create unique extensions
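One concrete example of such an extension: a custom autograd Function with its own backward pass. A minimal sketch in the old, pre-0.4 style that matches the Variable examples above (not from the original slides):

In [ ]: class MyReLU(torch.autograd.Function):
            def forward(self, input):
                self.save_for_backward(input)
                return input.clamp(min=0)

            def backward(self, grad_output):
                input, = self.saved_tensors
                grad_input = grad_output.clone()
                grad_input[input < 0] = 0
                return grad_input

        x = Variable(torch.randn(5), requires_grad=True)
        y = MyReLU()(x)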
Use cases of Dynamic Graphs
Variably sized inputs (see the sketch after this list)
Variably structured inputs
Nontrivial inference algorithms
Variably structured outputs
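A minimal sketch of the "variably sized inputs" case referenced above (illustrative only): a recurrent cell applied step by step in a plain Python loop, so every example can have a different sequence length.

In [ ]: rnn = nn.RNNCell(input_size=10, hidden_size=20)
        h = Variable(torch.zeros(1, 20))
        seq = [Variable(torch.randn(1, 10)) for _ in range(7)]   # length can differ per example
        for x_t in seq:                                          # ordinary Python loop = dynamic graph
            h = rnn(x_t, h)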
Why Dynamic Computation Graphs are awesome
Deep Learning architectures will traverse the same evolutionary path as traditional computation:
from monolithic stand-alone programs to more modular programs.
In the old days we had monolithic DL systems with a single analytic objective function.
With dynamic graphs, systems can have multiple networks competing or cooperating.
Richer modularity, similar to information encapsulation in OOP.
Future Prospects
I predict it will coexist with TF:
sort of like Angular vs React in the JS world, with PyTorch similar to React
sort of like Java vs Python, with PyTorch similar to Python
Increased developer adoption
Better support for visualization and input management tools
Java

public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World");
    }
}

Python

print("Hello World")
Thank You