Introduction to Deep Learning
Outline
• Introduction
• Supervised Learning
– Convolutional Neural Network
– Sequence Modelling: RNN and its extensions
• Unsupervised Learning
– Autoencoder
– Stacked Denoising Autoencoder
• Reinforcement Learning
– Deep Reinforcement Learning
– Two applications: Playing Atari & AlphaGo
Introduction
• Traditional pattern recognition models use hand-crafted features and a relatively simple trainable classifier.
• This approach has the following limitations:
– It is very tedious and costly to develop hand-crafted features.
– Hand-crafted features are usually highly dependent on one application and cannot be transferred easily to other applications.
[Diagram: hand-crafted feature extractor → “Simple” trainable classifier → output]
Deep Learning
• Deep learning (a.k.a. representation learning) seeks to learn rich hierarchical representations (i.e. features) automatically through a multi-stage feature learning process.
[Diagram: Low-level features → Mid-level features → High-level features → Trainable classifier → output]
Feature visualization of convolutional net trained on ImageNet
(Zeiler and Fergus, 2013)
Learning Hierarchical
Representations
• Hierarchy of representations with an increasing level of abstraction; each stage is a kind of trainable nonlinear feature transform.
• Image recognition
– Pixel → edge → texton → motif → part → object
• Text
– Character → word → word group → clause → sentence →
story
[Diagram: Low-level features → Mid-level features → High-level features → Trainable classifier → output, with an increasing level of abstraction]
The Mammalian Visual
Cortex is Hierarchical
• It is good to be inspired
by nature, but not too
much.
• We need to understand
which details are
important, and which
details are merely the
result of evolution.
• Each module in deep learning transforms its input representation into a higher-level one, in a way similar to the human cortex. (van Essen and Gallant, 1994)
Supervised Learning
• Convolutional Neural Network
• Sequence Modelling
– Why do we need RNN?
– What are RNNs?
– RNN Extensions
– What can RNNs do?
Convolutional Neural Network
• Inputs can have very high dimension, so a fully-connected neural network would need a very large number of parameters.
• Inspired by the neurophysiological experiments of [Hubel & Wiesel 1962], CNNs are a special type of neural network whose hidden units are connected only to a local receptive field. The number of parameters needed by a CNN is much smaller.
Example: 200x200 image
a) fully connected: 40,000 hidden units => 1.6 billion parameters
b) CNN: 5x5 kernel, 100 feature maps => 2,500 parameters
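As a quick sanity check, the two counts in this example can be reproduced with a few lines of Python (a rough sketch that counts weights only and ignores biases, following the slide's convention):

    # Parameter-count comparison for a 200x200 input image (weights only, no biases).
    input_size = 200 * 200                    # 40,000 input pixels

    # a) fully connected layer with 40,000 hidden units
    fc_params = input_size * 40_000           # 1,600,000,000 (1.6 billion)

    # b) convolutional layer: 5x5 kernel shared across the image, 100 feature maps
    cnn_params = 5 * 5 * 100                  # 2,500

    print(f"fully connected: {fc_params:,}")
    print(f"convolutional:   {cnn_params:,}")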
Three Stages of a
Convolutional Layer
1. Convolution stage
2. Nonlinearity: a nonlinear transform such as rectified linear or tanh
3. Pooling: outputs a summary statistic of a local input region, such as max pooling or average pooling
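A minimal sketch of these three stages, assuming PyTorch (the slides do not name a framework) and the channel counts from the earlier example:

    import torch
    import torch.nn as nn

    layer = nn.Sequential(
        nn.Conv2d(in_channels=1, out_channels=100, kernel_size=5),  # 1. convolution stage
        nn.ReLU(),                                                  # 2. nonlinearity
        nn.MaxPool2d(kernel_size=2),                                # 3. pooling (max)
    )

    x = torch.randn(1, 1, 200, 200)   # one 200x200 single-channel image
    print(layer(x).shape)             # torch.Size([1, 100, 98, 98])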
Convolution Operation in CNN
• Input: an image (2-D array) x
• Convolution kernel/operator (2-D array of learnable parameters): w
• Feature map (2-D array of processed data): s
• Convolution operation in 2-D domains:
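The standard discrete 2-D convolution, consistent with the definitions above, is
s(i, j) = (x ∗ w)(i, j) = Σ_m Σ_n x(m, n) w(i − m, j − n)
In practice many libraries implement the closely related cross-correlation, s(i, j) = Σ_m Σ_n x(i + m, j + n) w(m, n), and still call it “convolution”; since w is learned, the flip makes no practical difference.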
Multiple Convolutions
Usually there are multiple feature maps,
one for each convolution operator.
Non-linearity
tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x})
ReLU: f(x) = max(0, x)
Pooling
• Common pooling operations:
– Max pooling: reports the maximum output within a rectangular
neighborhood.
– Average pooling: reports the average output of a rectangular
neighborhood (possibly weighted by the distance from the
central pixel).
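A small NumPy sketch of non-overlapping 2x2 max and average pooling (an illustration, not tied to any particular library):

    import numpy as np

    def pool2x2(feature_map, mode="max"):
        """Non-overlapping 2x2 pooling over a 2-D feature map."""
        h, w = feature_map.shape
        blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
        if mode == "max":
            return blocks.max(axis=(1, 3))     # max pooling
        return blocks.mean(axis=(1, 3))        # average pooling (unweighted)

    x = np.array([[1, 3, 2, 0],
                  [4, 2, 1, 1],
                  [0, 1, 5, 6],
                  [2, 2, 7, 8]], dtype=float)
    print(pool2x2(x, "max"))   # [[4. 2.] [2. 8.]]
    print(pool2x2(x, "avg"))   # [[2.5  1.  ] [1.25 6.5 ]]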
Deep CNN: winner
of ImageNet 2012
• Multiple feature maps per convolutional layer.
• Multiple convolutional layers for extracting
features at different levels.
• Higher-level layers take the feature maps of lower-level layers as input.
(Krizhevsky et al., 2012)
Deep CNN for Image
Classification
Try out a live demo at
http://demo.caffe.berkeleyvision.org/
Deep CNN in AlphaGO
Policy network:
• Input: 19x19, 48
input channels
• Layer 1: 5x5 kernel,
192 filters
• Layers 2 to 12: 3x3 kernel, 192 filters
• Layer 13: 1x1 kernel, 1 filter
The value network has a similar architecture to the policy network
(Silver et al, 2016)
Sequence Modelling
• Why do we need RNN?
• What are RNNs?
• RNN Extensions
• What can RNNs do?
Why do we need RNNs?
Limitations of feed-forward neural networks (including CNNs):
• They rely on the assumption of independence among the (training and test) examples.
– After each data point is processed, the entire state of the network is lost.
• They rely on examples being vectors of fixed length.
We need to model data with temporal or sequential structure and varying lengths of inputs and outputs, e.g.:
– Frames from video
– Snippets of audio
– Words pulled from sentences
Recurrent neural networks (RNNs) are connectionist models with the
ability to selectively pass information across sequence steps, while
processing sequential data one element at a time.
The simplest form of fully recurrent
neural network is an MLP with the
previous set of hidden unit activations
feeding back into the network along
with the inputs
h(t) = f_H(W_IH x(t) + W_HH h(t−1))
y(t) = f_O(W_HO h(t))
f_H and f_O are the activation functions for the hidden and output units; W_IH, W_HH, and W_HO are connection weight matrices learned by training.
Allow a ‘memory’ of previous inputs to persist in
the network’s internal state, and thereby
influence the network output
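A minimal NumPy sketch of this recurrence (the dimensions and the identity output activation f_O are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    W_IH = rng.standard_normal((20, 10)) * 0.1   # input  -> hidden
    W_HH = rng.standard_normal((20, 20)) * 0.1   # hidden -> hidden (recurrent)
    W_HO = rng.standard_normal((5, 20)) * 0.1    # hidden -> output

    def rnn_step(x_t, h_prev):
        h_t = np.tanh(W_IH @ x_t + W_HH @ h_prev)   # h(t) = f_H(W_IH x(t) + W_HH h(t-1))
        y_t = W_HO @ h_t                            # y(t) = f_O(W_HO h(t)), f_O = identity here
        return h_t, y_t

    h = np.zeros(20)
    for x in rng.standard_normal((7, 10)):          # a length-7 input sequence
        h, y = rnn_step(x, h)                       # 'memory' of earlier inputs persists in h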
What are RNNs?
An unfolded recurrent network. Each node represents a layer of network units at a single time step. The weighted connections from the input layer to the hidden layer are labelled ‘w1’, those from the hidden layer to itself (i.e. the recurrent weights) are labelled ‘w2’, and the hidden-to-output weights are labelled ‘w3’. Note that the same weights are reused at every time step. Bias weights are omitted for clarity.
What are RNNs?
• The recurrent network can be converted into a
feed-forward network by unfolding over time
What are RNNs?
• Training RNNs (determining the parameters)
Back Propagation Through Time (BPTT) is often used to learn the RNN. BPTT is an extension of back-propagation (BP).
s_t = tanh(U x_t + W s_{t−1})
ŷ_t = softmax(V s_t)
– The output of this RNN is ŷ_t.
– The loss/error function of this network is the error at each time step,
E_t(y_t, ŷ_t) = −y_t log ŷ_t
and the total loss is the sum of the errors at each time step:
E(y, ŷ) = Σ_t E_t(y_t, ŷ_t)
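A sketch of this forward pass and loss in NumPy (one-hot targets y_t; the sizes are made up for illustration):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    rng = np.random.default_rng(1)
    U = rng.standard_normal((16, 8)) * 0.1    # input  -> hidden
    W = rng.standard_normal((16, 16)) * 0.1   # hidden -> hidden
    V = rng.standard_normal((4, 16)) * 0.1    # hidden -> output

    xs = rng.standard_normal((5, 8))          # a length-5 input sequence
    ys = np.eye(4)[[0, 2, 1, 3, 2]]           # one-hot targets y_t

    s, E = np.zeros(16), 0.0
    for x_t, y_t in zip(xs, ys):
        s = np.tanh(U @ x_t + W @ s)          # s_t = tanh(U x_t + W s_{t-1})
        y_hat = softmax(V @ s)                # y^_t = softmax(V s_t)
        E += -np.sum(y_t * np.log(y_hat))     # E_t(y_t, y^_t) = -y_t log y^_t, summed over t
    print(E)                                  # total loss E(y, y^)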
What are RNNs?
• Training RNNs (determining the parameters)
– The gradients of the error with respect to our parameters:
Just as we sum up the errors, we also sum up the gradients at each time step for one training example. For parameter W, the gradient is
∂E/∂W = Σ_t ∂E_t/∂W
– The gradient at each time step (we use time step 3 as an example):
∂E_3/∂W = (∂E_3/∂ŷ_3)(∂ŷ_3/∂s_3)(∂s_3/∂W)   (chain rule)
Since s_3 = tanh(U x_3 + W s_2), and s_2 in turn depends on W and s_1, we cannot simply treat s_2 as a constant. Applying the chain rule again on s_k gives
∂E_3/∂W = Σ_{k=0}^{3} (∂E_3/∂ŷ_3)(∂ŷ_3/∂s_3)(∂s_3/∂s_k)(∂s_k/∂W)
What are RNNs?
– Training RNNs (determining the parameters)
Because W is used in every step up to the output we care about, we need to back-propagate gradients from t = 3 through the network all the way to t = 0:
∂E_3/∂W = Σ_{k=0}^{3} (∂E_3/∂ŷ_3)(∂ŷ_3/∂s_3)(∂s_3/∂s_k)(∂s_k/∂W)
What are RNNs?
– The vanishing gradient problem
To understand why, let’s take a closer look at the gradient we calculated above:
∂E_3/∂W = Σ_{k=0}^{3} (∂E_3/∂ŷ_3)(∂ŷ_3/∂s_3)(∂s_3/∂s_k)(∂s_k/∂W)
∂E_3/∂W = Σ_{k=0}^{3} (∂E_3/∂ŷ_3)(∂ŷ_3/∂s_3)(∏_{j=k+1}^{3} ∂s_j/∂s_{j−1})(∂s_k/∂W)
Because the layers and time steps of deep neural
networks relate to each other through multiplication,
derivatives are susceptible to vanishing
Gradient contributions from “far away” steps become zero, and the state at those steps doesn’t contribute to what you are learning: you end up not learning long-range dependencies.
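A two-line numeric illustration of the effect (the 0.25 factor is an arbitrary stand-in for the Jacobian terms ∂s_j/∂s_{j−1}):

    import numpy as np

    # When each |∂s_j/∂s_{j-1}| term is below 1, their product shrinks
    # exponentially with the distance between time steps.
    factors = np.full(20, 0.25)
    print(np.cumprod(factors)[:4])   # [0.25  0.0625  0.015625  0.00390625]
    print(np.cumprod(factors)[-1])   # ~9.1e-13: the "far away" contribution is essentially zero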
What are RNNs?
– How to solve the vanishing gradient problem?
– Proper initialization of the W matrix can reduce the effect of vanishing gradients.
– Use ReLU instead of the tanh or sigmoid activation function: the ReLU derivative is either 0 or 1, so it is less likely to suffer from vanishing gradients.
– Use Long Short-Term Memory (LSTM) or Gated Recurrent Unit (GRU) architectures (LSTM will be introduced later).
RNN Extensions: Bidirectional Recurrent Neural Networks
Traditional RNNs only model the dependence of the current state on the previous state; BRNNs (Schuster and Paliwal, 1997) extend this to model dependence on both past states and future states.
For example, to predict a missing word in a sequence you want to look at both the left and the right context.
A BRNN feeds the training sequence forwards and backwards into two separate recurrent hidden layers, so that past and future context together determine the output:
h_t^f = f(W_xh^f x_t + W_hh^f h_{t−1}^f + b_h^f)   (forward hidden layer)
h_t^b = f(W_xh^b x_t + W_hh^b h_{t+1}^b + b_h^b)   (backward hidden layer)
y_t = W_hy^f h_t^f + W_hy^b h_t^b + b_y
[Diagram: an unfolded BRNN]
RNN Extensions: Long Short-term Memory
The vanishing gradient problem prevents standard RNNs from learning long-term dependencies. LSTMs (Hochreiter and Schmidhuber, 1997) were designed to combat vanishing gradients through a gating mechanism.
The gating mechanism of the LSTM generates the current hidden state from the past hidden state and the current input. It contains five modules: input gate, new memory cell, forget gate, final memory generation, and output gate.
New memory cell
Use the input word and the past hidden state to generate a new memory which includes aspects of the new input:
c̃_t = tanh(W_c x_t + U_c h_{t−1})
RNN Extensions: Long
Short-term Memory
A gating mechanism of the LSTM
Forget gate
The forget gate looks at the input
word and the past hidden state and
makes an assessment on whether
the past memory cell is useful for
the computation of the current
memory cell
f_t = σ(W_f x_t + U_f h_{t−1})
RNN Extensions: Long
Short-term Memory
A gating mechanism of the LSTM
Final memory cell
c_t = f_t ∘ c_{t−1} + i_t ∘ c̃_t
This stage first takes the advice of the forget gate f_t and accordingly forgets the past memory c_{t−1}. Similarly, it takes the advice of the input gate i_t and accordingly gates the new memory c̃_t. It then sums these two results to produce the final memory.
RNN Extensions: Long
Short-term Memory
A gating mechanism of the LSTM
Output gate
o_t = σ(W_o x_t + U_o h_{t−1})
This gate assesses which parts of the memory c_t need to be exposed/present in the hidden state h_t.
RNN Extensions: Long
Short-term Memory
A gating mechanism of the LSTM
The hidden state
h_t = o_t ∘ tanh(c_t)
RNN Extensions: Long
Short-term Memory
A gating mechanism of the LSTM
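Putting the gates together, a single LSTM step in NumPy might look like the sketch below. The input gate i_t = σ(W_i x_t + U_i h_{t−1}) is assumed to take the standard form, since its formula is not spelled out above, and the sizes are illustrative:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    n_in, n_hid = 8, 16
    rng = np.random.default_rng(2)
    # One (W, U) weight pair per module.
    W_i, U_i = rng.standard_normal((n_hid, n_in)) * 0.1, rng.standard_normal((n_hid, n_hid)) * 0.1
    W_f, U_f = rng.standard_normal((n_hid, n_in)) * 0.1, rng.standard_normal((n_hid, n_hid)) * 0.1
    W_o, U_o = rng.standard_normal((n_hid, n_in)) * 0.1, rng.standard_normal((n_hid, n_hid)) * 0.1
    W_c, U_c = rng.standard_normal((n_hid, n_in)) * 0.1, rng.standard_normal((n_hid, n_hid)) * 0.1

    def lstm_step(x_t, h_prev, c_prev):
        i_t = sigmoid(W_i @ x_t + U_i @ h_prev)       # input gate (assumed standard form)
        f_t = sigmoid(W_f @ x_t + U_f @ h_prev)       # forget gate
        o_t = sigmoid(W_o @ x_t + U_o @ h_prev)       # output gate
        c_tilde = np.tanh(W_c @ x_t + U_c @ h_prev)   # new memory cell
        c_t = f_t * c_prev + i_t * c_tilde            # final memory cell
        h_t = o_t * np.tanh(c_t)                      # hidden state
        return h_t, c_t

    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for x in rng.standard_normal((5, n_in)):          # a length-5 input sequence
        h, c = lstm_step(x, h, c)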
RNN Extensions: Long
Short-term Memory
LSTMs contain information outside the normal flow of the
recurrent network in a gated cell. Information can be stored in,
written to, or read from a cell, much like data in a computer’s
memory. The cells learn when to allow data to enter, leave or be
deleted through the iterative process of making guesses, back-
propagating error, and adjusting weights via gradient descent.
Conclusions on LSTM
RNN Extensions: Long
Short-term Memory
Why can LSTMs combat the vanishing gradient problem?
LSTMs help preserve the error that can be back-propagated
through time and layers. By maintaining a more constant error,
they allow recurrent nets to continue to learn over many time steps
(over 1000), thereby opening a channel to link causes and effects
remotely
What can RNNs do?
Machine Translation Visual Question Answering
In machine translation, the input is a sequence of words in the source language, and the output is a sequence of words in the target language.
Encoder-decoder architecture for machine translation
Encoder: An RNN to encode
the input sentence into a
hidden state (feature)
Decoder: An RNN that takes the hidden state of the source-language sentence as input and outputs the translated sentence
Machine Translation
VQA: Given an image and a natural language
question about the image, the task is to provide an
accurate natural language answer
Visual Question Answering
(VQA)
Picture from (Antol et al., 2015)
The output is conditioned on both image and textual inputs. A CNN is used to encode the image and an RNN is used to encode the sentence.
Visual Question Answering
Unsupervised Learning
• Autoencoders
• Deep Autoencoders
• Denoising Autoencoders
• Stacked Denoising Autoencoders
Autoencoders
An Autoencoder is a
feedforward neural network
that learns to predict the
input itself in the output.
y^(i) = x^(i)
• The input-to-hidden part
corresponds to an
encoder
• The hidden-to-output part
corresponds to a decoder.
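A minimal PyTorch sketch of such an autoencoder; the layer sizes (e.g. 784 inputs, as for flattened MNIST digits) are illustrative assumptions:

    import torch
    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self, n_in=784, n_hidden=64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())  # input  -> hidden
            self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())  # hidden -> output

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = Autoencoder()
    x = torch.rand(32, 784)                        # a batch of inputs
    loss = nn.functional.mse_loss(model(x), x)     # the target is the input itself: y = x
    loss.backward()                                # gradients for ordinary backprop training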
Deep Autoencoders
• A deep Autoencoder is
constructed by
extending the encoder
and decoder of
autoencoder with
multiple hidden layers.
• Gradient vanishing
problem: the gradient
becomes too small as
it passes back through
many layers
Diagram from (Hinton and Salakhutdinov, 2006)
Training Deep Autoencoders
Diagram from (Hinton and Salakhutdinov, 2006)
Denoising Autoencoders
• By adding stochastic noise to the input, we can force the autoencoder to learn more robust features.
Training Denoising Autoencoder
The loss function of the denoising autoencoder:
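A standard form of this objective (Vincent et al., 2008), which matches the description above: each training input x is stochastically corrupted into x̃ ~ q_D(x̃ | x); the corrupted input is encoded as y = f_θ(x̃) and decoded as z = g_θ′(y); the parameters are trained to minimize a reconstruction loss L(x, z) between z and the clean input x (e.g. squared error ‖x − z‖² or cross-entropy), not between z and x̃.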
Like the deep autoencoder, we can stack multiple denoising autoencoders layer by layer to form a Stacked Denoising Autoencoder.
Training Denoising Autoencoder on
MNIST
• The following pictures show the difference between the
resulting filters of Denoising Autoencoder trained on
MNIST with different noise ratios.
No noise (noise ratio=0%) noise ratio=30%
Diagram from (Hinton and Salakhutdinov, 2006)
Deep Reinforcement Learning
• Reinforcement Learning
• Deep Reinforcement Learning
• Applications
– Playing Atari Games
– AlphaGO
Reinforcement Learning
• What’s Reinforcement Learning?
[Diagram: the Agent interacts with the Environment, receiving {observation, reward} and sending {actions}]
• The agent interacts with an environment and learns by maximizing a scalar reward signal.
• No labels or any other supervision signal.
• Previously, RL suffered from hand-crafted states or representations.
Policies and Value Functions
• Policy 𝜋 is a behavior function selecting actions
given states
𝑎 = 𝜋(s)
• The value function Q^π(s, a) is the expected total reward r from state s and action a under policy π:
“How good is action a in state s?”
Approaches To
Reinforcement Learning
• Policy-based RL
– Search directly for the optimal policy 𝜋∗
– Policy achieving maximum future reward
• Value-based RL
– Estimate the optimal value function 𝑄∗(s,a)
– Maximum value achievable under any policy
• Model-based RL
– Build a transition model of the environment
– Plan (e.g. by look-ahead) using model
Bellman Equation
• Value function can be unrolled recursively
• Optimal value function Q*(s, a) can be unrolled recursively
• Value iteration algorithms solve the Bellman
equation
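In standard form (with reward r, discount factor γ, and successor state s′ and action a′):
Q^π(s, a) = E[ r + γ Q^π(s′, a′) | s, a ]
Q*(s, a) = E[ r + γ max_{a′} Q*(s′, a′) | s, a ]
Value iteration repeatedly applies the second equation as an update, Q_{i+1}(s, a) ← E[ r + γ max_{a′} Q_i(s′, a′) | s, a ], and Q_i converges to Q*.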
Deep Reinforcement Learning
• So what’s DEEP RL?
[Diagram: the agent receives {raw observation, reward} from the Environment and sends {actions}]
Deep Reinforcement Learning
• Represent value function by deep Q-network with
weights w
• Define objective function by mean-squared error in Q-
values
• Leading to the following Q-learning gradient
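The standard forms of these three items (as in Mnih et al., 2015) are:
Q(s, a; w) ≈ Q*(s, a)
L(w) = E[ ( r + γ max_{a′} Q(s′, a′; w) − Q(s, a; w) )² ]
Following the gradient of L(w), with the target r + γ max_{a′} Q(s′, a′; w) held fixed, gives the Q-learning update Δw ∝ ( r + γ max_{a′} Q(s′, a′; w) − Q(s, a; w) ) ∂Q(s, a; w)/∂w.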
DQN in Atari
• End-to-end learning of values Q(s, a) from pixels
• Input state s is stack of raw pixels from last 4 frames
• Output is Q(s, a) for 18 joystick/button positions
• Reward is the change in the score for that step
Mnih, Volodymyr, et al. 2015.
DQN in Atari :
Human Level Control
Mnih, Volodymyr, et al. 2015.
AlphaGO:
Monte Carlo Tree Search
• MCTS: model look-ahead to reduce the search space by predicting the opponent’s moves
Silver, David, et al. 2016.
AlphaGO: Learning Pipeline
• Combine SL and RL to learn the search direction
in MCTS
• SL policy network
– Provides prior search probabilities (potential of each move)
• Rollout policy:
– Combined with MCTS for quick simulation from leaf nodes
• Value network:
– Provides a global evaluation of the leaf-node position
Silver, David, et al. 2016.
Learning to Prune:
SL Policy Network
• 13-layer CNN
• Input board position 𝑠
• Output: p_σ(a|s), where a is the next move
Learning to Prune:
RL Policy Network
• 1 million samples are used for training.
• RL policy network vs. SL policy network:
– The RL policy network alone wins 80% of games against the SL policy network.
– Combined with MCTS, however, the SL policy network is better.
• The RL policy network is used, via self-play, to generate ground-truth data for training the value network
– making enough data for training.
Learning to Prune:
Value Network
• Regression: similar architecture to the policy network.
• SL policy network: sampling to generate a unique game.
• RL policy network: simulate to get the game’s final result.
• Training: 50 million mini-batches of 32 positions (30 million unique games).
AlphaGO: Evaluation
The version solely using the policy network does not perform any search. (Silver, David, et al. 2016)
Editor's Notes
  • #6: The word ‘deep’ in deep learning refers to the layered model architectures which are usually deeper than conventional learning models.
  • #14: By spacing pooling regions k > 1 (rather than 1) pixels apart, the next higher layer has roughly k times fewer inputs to process, leading to downsampling.
  • #43: Pretraining consists of learning a stack of restricted Boltzmann machines (RBMs), each having only one layer of feature detectors. The learned feature activations of one RBM are used as the ‘‘data’’ for training the next RBM in the stack. After the pretraining, the RBMs are ‘‘unrolled’’ to create a deep autoencoder, which is then fine-tuned using backpropagation of error derivatives.
  • #44: 1. A higher-level representation should be rather stable and robust under corruptions of the input. 2. Performing the denoising task well requires extracting features that capture useful structure in the input distribution. 3. Denoising is not the primary goal. It is advocated and investigated as a training criterion for learning to extract useful features that will constitute better higher-level representation.